task e2e-predictor has failed: "step-fail-if-needed" exited with code 1: Error
[get-kubeconfig] Found kubeconfig secret: cluster-ggh76-admin-kubeconfig
[get-kubeconfig] Wrote kubeconfig to /credentials/cluster-ggh76-kubeconfig
[get-kubeconfig] Found admin password secret: cluster-ggh76-admin-password
[get-kubeconfig] Retrieved username
[get-kubeconfig] Wrote password to /credentials/cluster-ggh76-password
[get-kubeconfig] API Server URL: https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443
[get-kubeconfig] Console URL: https://console-openshift-console.apps.2a448631-6bc9-4a08-a8c5-ea2dd5e9bcdd.prod.konfluxeaas.com
[clone-repo] rhoaieng-56729
[clone-repo] https://github.com/VedantMahabaleshwarkar/kserve
[clone-repo] Cloning into '/workspace/source'...
[e2e-predictor] + bash
[e2e-predictor] + STATUS_FILE=/test-status/deploy-and-e2e-status
[e2e-predictor] + echo failed
[e2e-predictor] + COMPONENT_NAME=kserve-agent-ci
[e2e-predictor] ++ jq -r --arg component_name kserve-agent-ci '.[$component_name].image'
[e2e-predictor] + export KSERVE_AGENT_IMAGE=quay.io/opendatahub/kserve-agent@sha256:486208697b32d7101f87cae10042a1b225619d82d48d9409f170b77fcd20bf86
[e2e-predictor] + KSERVE_AGENT_IMAGE=quay.io/opendatahub/kserve-agent@sha256:486208697b32d7101f87cae10042a1b225619d82d48d9409f170b77fcd20bf86
[e2e-predictor] + COMPONENT_NAME=kserve-controller-ci
[e2e-predictor] ++ jq -r --arg component_name kserve-controller-ci '.[$component_name].image'
[e2e-predictor] + export KSERVE_CONTROLLER_IMAGE=quay.io/opendatahub/kserve-controller@sha256:57ec72ac62112ffd6ad47448edaa0c9470dfb7801e1d5983934b87b4f50ae318
[e2e-predictor] + KSERVE_CONTROLLER_IMAGE=quay.io/opendatahub/kserve-controller@sha256:57ec72ac62112ffd6ad47448edaa0c9470dfb7801e1d5983934b87b4f50ae318
[e2e-predictor] + COMPONENT_NAME=kserve-router-ci
[e2e-predictor] ++ jq -r --arg component_name kserve-router-ci '.[$component_name].image'
[e2e-predictor] + export KSERVE_ROUTER_IMAGE=quay.io/opendatahub/kserve-router@sha256:e1809646698a3842a0e5eaa8946e942a9578ea8a29ca53c92795d28bf989366a
[e2e-predictor] + KSERVE_ROUTER_IMAGE=quay.io/opendatahub/kserve-router@sha256:e1809646698a3842a0e5eaa8946e942a9578ea8a29ca53c92795d28bf989366a
[e2e-predictor] + COMPONENT_NAME=kserve-storage-initializer-ci
[e2e-predictor] ++ jq -r --arg component_name kserve-storage-initializer-ci '.[$component_name].image'
[e2e-predictor] + export STORAGE_INITIALIZER_IMAGE=quay.io/opendatahub/kserve-storage-initializer@sha256:ab345ae893fb206c60a40527aa847c5f874acd904c2a430c85bcc08c3bedb947
[e2e-predictor] + STORAGE_INITIALIZER_IMAGE=quay.io/opendatahub/kserve-storage-initializer@sha256:ab345ae893fb206c60a40527aa847c5f874acd904c2a430c85bcc08c3bedb947
[e2e-predictor] + ./test/scripts/openshift-ci/run-e2e-tests.sh 'predictor or kserve_on_openshift'
[e2e-predictor] Installing on cluster
[e2e-predictor] Using namespace: kserve for KServe components
[e2e-predictor] SKLEARN_IMAGE=quay.io/opendatahub/sklearn-serving-runtime:odh-pr-1404
[e2e-predictor] KSERVE_CONTROLLER_IMAGE=quay.io/opendatahub/kserve-controller@sha256:57ec72ac62112ffd6ad47448edaa0c9470dfb7801e1d5983934b87b4f50ae318
[e2e-predictor] LLMISVC_CONTROLLER_IMAGE=ghcr.io/opendatahub-io/kserve/odh-kserve-llmisvc-controller:release-v0.17
[e2e-predictor] KSERVE_AGENT_IMAGE=quay.io/opendatahub/kserve-agent@sha256:486208697b32d7101f87cae10042a1b225619d82d48d9409f170b77fcd20bf86
[e2e-predictor] KSERVE_ROUTER_IMAGE=quay.io/opendatahub/kserve-router@sha256:e1809646698a3842a0e5eaa8946e942a9578ea8a29ca53c92795d28bf989366a
[e2e-predictor] STORAGE_INITIALIZER_IMAGE=quay.io/opendatahub/kserve-storage-initializer@sha256:ab345ae893fb206c60a40527aa847c5f874acd904c2a430c85bcc08c3bedb947
[e2e-predictor] ERROR_404_ISVC_IMAGE=quay.io/opendatahub/error-404-isvc:odh-pr-1404
[e2e-predictor] SUCCESS_200_ISVC_IMAGE=quay.io/opendatahub/success-200-isvc:odh-pr-1404
[e2e-predictor] [INFO] Installing Kustomize v5.8.1 for linux/amd64...
[e2e-predictor] [SUCCESS] Successfully installed Kustomize v5.8.1 to /workspace/source/bin/kustomize
[e2e-predictor] v5.8.1
[e2e-predictor] make: Entering directory '/workspace/source'
[e2e-predictor] [INFO] Installing yq v4.52.1 for linux/amd64...
[e2e-predictor] [SUCCESS] Successfully installed yq v4.52.1 to /workspace/source/bin/yq
[e2e-predictor] yq (https://github.com/mikefarah/yq/) version v4.52.1
[e2e-predictor] make: Leaving directory '/workspace/source'
[e2e-predictor] Installing KServe Python SDK ...
[e2e-predictor] [INFO] Installing uv 0.7.8 for linux/amd64...
[e2e-predictor] [SUCCESS] Successfully installed uv 0.7.8 to /workspace/source/bin/uv
[e2e-predictor] warning: Failed to read project metadata (No `pyproject.toml` found in current directory or any parent directory). Running `uv self version` for compatibility. This fallback will be removed in the future; pass `--preview` to force an error.
[e2e-predictor] uv 0.7.8
[e2e-predictor] Creating virtual environment...
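The `++ jq` trace lines near the top show how the step resolves each component's pinned image digest: it indexes a JSON map of components by name and exports the result. A minimal, runnable sketch of that lookup, using a hypothetical `components.json` and a placeholder digest (the real pipeline feeds jq its own snapshot JSON, which is not shown in the log):

```shell
#!/bin/sh
# Hypothetical components map; the real pipeline supplies this JSON itself.
dir=$(mktemp -d)
cat > "$dir/components.json" <<'EOF'
{
  "kserve-agent-ci": { "image": "quay.io/opendatahub/kserve-agent@sha256:deadbeef" }
}
EOF

COMPONENT_NAME=kserve-agent-ci
# Same jq expression as in the trace: index the map by component name.
KSERVE_AGENT_IMAGE=$(jq -r --arg component_name "$COMPONENT_NAME" \
  '.[$component_name].image' "$dir/components.json")
export KSERVE_AGENT_IMAGE
echo "$KSERVE_AGENT_IMAGE"
```

Binding the name with `--arg` keeps the component name out of the filter string, which is why the trace shows the same quoted filter for every component.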
[e2e-predictor] warning: virtualenv's `--clear` has no effect (uv always clears the virtual environment)
[e2e-predictor] Using CPython 3.9.25 interpreter at: /usr/bin/python3
[e2e-predictor] Creating virtual environment at: .venv
[e2e-predictor] /workspace/source
[e2e-predictor] Using CPython 3.11.13 interpreter at: /usr/bin/python3.11
[e2e-predictor] Creating virtual environment at: .venv
[e2e-predictor] Resolved 263 packages in 1ms
[e2e-predictor] Building kserve @ file:///workspace/source/python/kserve
[e2e-predictor] Downloading pandas (12.5MiB)
[e2e-predictor] Downloading pydantic-core (2.0MiB)
[e2e-predictor] Downloading kubernetes (1.9MiB)
[e2e-predictor] Downloading uvloop (3.8MiB)
[e2e-predictor] Downloading grpcio-tools (2.5MiB)
[e2e-predictor] Downloading setuptools (1.2MiB)
[e2e-predictor] Downloading cryptography (4.3MiB)
[e2e-predictor] Downloading botocore (12.9MiB)
[e2e-predictor] Downloading mypy (17.2MiB)
[e2e-predictor] Downloading aiohttp (1.7MiB)
[e2e-predictor] Downloading pyarrow (40.1MiB)
[e2e-predictor] Downloading grpcio (6.4MiB)
[e2e-predictor] Downloading black (1.6MiB)
[e2e-predictor] Downloading portforward (3.9MiB)
[e2e-predictor] Downloading numpy (15.7MiB)
[e2e-predictor] Building timeout-sampler==1.0.3
[e2e-predictor] Building python-simple-logger==2.0.19
[e2e-predictor] Downloading aiohttp
[e2e-predictor] Downloading black
[e2e-predictor] Downloading pydantic-core
[e2e-predictor] Downloading grpcio-tools
[e2e-predictor] Downloading setuptools
[e2e-predictor] Downloading portforward
[e2e-predictor] Downloading uvloop
[e2e-predictor] Downloading cryptography
[e2e-predictor] Built python-simple-logger==2.0.19
[e2e-predictor] Downloading grpcio
[e2e-predictor] Downloading kubernetes
[e2e-predictor] Built timeout-sampler==1.0.3
[e2e-predictor] Downloading numpy
[e2e-predictor] Built kserve @ file:///workspace/source/python/kserve
[e2e-predictor] Downloading pandas
[e2e-predictor] Downloading botocore
[e2e-predictor] Downloading mypy
[e2e-predictor] Downloading pyarrow
[e2e-predictor] Prepared 99 packages in 1.87s
[e2e-predictor] warning: Failed to hardlink files; falling back to full copy. This may lead to degraded performance.
[e2e-predictor] If the cache and target directories are on different filesystems, hardlinking may not be supported.
[e2e-predictor] If this is intentional, set `export UV_LINK_MODE=copy` or use `--link-mode=copy` to suppress this warning.
[e2e-predictor] Installed 99 packages in 276ms
[e2e-predictor] + aiohappyeyeballs==2.6.1
[e2e-predictor] + aiohttp==3.13.3
[e2e-predictor] + aiosignal==1.4.0
[e2e-predictor] + annotated-doc==0.0.4
[e2e-predictor] + annotated-types==0.7.0
[e2e-predictor] + anyio==4.9.0
[e2e-predictor] + attrs==25.3.0
[e2e-predictor] + avro==1.12.0
[e2e-predictor] + black==24.3.0
[e2e-predictor] + boto3==1.37.35
[e2e-predictor] + botocore==1.37.35
[e2e-predictor] + cachetools==5.5.2
[e2e-predictor] + certifi==2025.1.31
[e2e-predictor] + cffi==2.0.0
[e2e-predictor] + charset-normalizer==3.4.1
[e2e-predictor] + click==8.1.8
[e2e-predictor] + cloudevents==1.11.0
[e2e-predictor] + colorama==0.4.6
[e2e-predictor] + colorlog==6.10.1
[e2e-predictor] + coverage==7.8.0
[e2e-predictor] + cryptography==46.0.5
[e2e-predictor] + deprecation==2.1.0
[e2e-predictor] + durationpy==0.9
[e2e-predictor] + execnet==2.1.1
[e2e-predictor] + fastapi==0.121.3
[e2e-predictor] + frozenlist==1.5.0
[e2e-predictor] + google-auth==2.39.0
[e2e-predictor] + grpc-interceptor==0.15.4
[e2e-predictor] + grpcio==1.78.1
[e2e-predictor] + grpcio-testing==1.78.1
[e2e-predictor] + grpcio-tools==1.78.1
[e2e-predictor] + h11==0.16.0
[e2e-predictor] + httpcore==1.0.9
[e2e-predictor] + httptools==0.6.4
[e2e-predictor] + httpx==0.27.2
[e2e-predictor] + httpx-retries==0.4.5
[e2e-predictor] + idna==3.10
[e2e-predictor] + iniconfig==2.1.0
[e2e-predictor] + jinja2==3.1.6
[e2e-predictor] + jmespath==1.0.1
[e2e-predictor] + kserve==0.17.0 (from file:///workspace/source/python/kserve)
[e2e-predictor] + kubernetes==32.0.1
[e2e-predictor] + markupsafe==3.0.2
[e2e-predictor] + multidict==6.4.3
[e2e-predictor] + mypy==0.991
[e2e-predictor] + mypy-extensions==1.0.0
[e2e-predictor] + numpy==2.2.4
[e2e-predictor] + oauthlib==3.2.2
[e2e-predictor] + orjson==3.10.16
[e2e-predictor] + packaging==24.2
[e2e-predictor] + pandas==2.2.3
[e2e-predictor] + pathspec==0.12.1
[e2e-predictor] + platformdirs==4.3.7
[e2e-predictor] + pluggy==1.5.0
[e2e-predictor] + portforward==0.7.1
[e2e-predictor] + prometheus-client==0.21.1
[e2e-predictor] + propcache==0.3.1
[e2e-predictor] + protobuf==6.33.5
[e2e-predictor] + psutil==5.9.8
[e2e-predictor] + pyarrow==19.0.1
[e2e-predictor] + pyasn1==0.6.1
[e2e-predictor] + pyasn1-modules==0.4.2
[e2e-predictor] + pycparser==2.22
[e2e-predictor] + pydantic==2.12.4
[e2e-predictor] + pydantic-core==2.41.5
[e2e-predictor] + pyjwt==2.12.1
[e2e-predictor] + pytest==7.4.4
[e2e-predictor] + pytest-asyncio==0.23.8
[e2e-predictor] + pytest-cov==5.0.0
[e2e-predictor] + pytest-httpx==0.30.0
[e2e-predictor] + pytest-xdist==3.6.1
[e2e-predictor] + python-dateutil==2.9.0.post0
[e2e-predictor] + python-dotenv==1.1.0
[e2e-predictor] + python-multipart==0.0.22
[e2e-predictor] + python-simple-logger==2.0.19
[e2e-predictor] + pytz==2025.2
[e2e-predictor] + pyyaml==6.0.2
[e2e-predictor] + requests==2.32.3
[e2e-predictor] + requests-oauthlib==2.0.0
[e2e-predictor] + rsa==4.9.1
[e2e-predictor] + s3transfer==0.11.4
[e2e-predictor] + setuptools==78.1.0
[e2e-predictor] + six==1.17.0
[e2e-predictor] + sniffio==1.3.1
[e2e-predictor] + starlette==0.49.1
[e2e-predictor] + tabulate==0.9.0
[e2e-predictor] + timeout-sampler==1.0.3
[e2e-predictor] + timing-asgi==0.3.1
[e2e-predictor] + tomlkit==0.13.2
[e2e-predictor] + typing-extensions==4.15.0
[e2e-predictor] + typing-inspection==0.4.2
[e2e-predictor] + tzdata==2025.2
[e2e-predictor] + urllib3==2.6.2
[e2e-predictor] + uvicorn==0.34.1
[e2e-predictor] + uvloop==0.21.0
[e2e-predictor] + watchfiles==1.0.5
[e2e-predictor] + websocket-client==1.8.0
[e2e-predictor] + websockets==15.0.1
[e2e-predictor] + yarl==1.20.0
[e2e-predictor] Audited 1 package in 49ms
[e2e-predictor] /workspace/source
[e2e-predictor] Creating namespace openshift-keda...
[e2e-predictor] namespace/openshift-keda created
[e2e-predictor] Namespace openshift-keda created/ensured.
[e2e-predictor] ---
[e2e-predictor] Creating OperatorGroup openshift-keda...
[e2e-predictor] operatorgroup.operators.coreos.com/openshift-keda created
[e2e-predictor] OperatorGroup openshift-keda created/ensured.
[e2e-predictor] ---
[e2e-predictor] Creating Subscription for openshift-custom-metrics-autoscaler-operator...
[e2e-predictor] subscription.operators.coreos.com/openshift-custom-metrics-autoscaler-operator created
[e2e-predictor] Subscription openshift-custom-metrics-autoscaler-operator created/ensured.
[e2e-predictor] ---
[e2e-predictor] Waiting for openshift-custom-metrics-autoscaler-operator CSV to become ready...
[e2e-predictor] Waiting for CSV to be installed for subscription openshift-custom-metrics-autoscaler-operator... (0/600)
[e2e-predictor] Waiting for CSV to be installed for subscription openshift-custom-metrics-autoscaler-operator... (5/600)
[e2e-predictor] Waiting for CSV to be installed for subscription openshift-custom-metrics-autoscaler-operator... (10/600)
[e2e-predictor] Waiting for CSV to be installed for subscription openshift-custom-metrics-autoscaler-operator... (15/600)
[e2e-predictor] Waiting for CSV to be installed for subscription openshift-custom-metrics-autoscaler-operator... (20/600)
[e2e-predictor] CSV custom-metrics-autoscaler.v2.18.1-2 found, but not yet Succeeded (Phase: Installing). Waiting... (25/600)
[e2e-predictor] CSV custom-metrics-autoscaler.v2.18.1-2 found, but not yet Succeeded (Phase: Installing). Waiting... (30/600)
[e2e-predictor] CSV custom-metrics-autoscaler.v2.18.1-2 found, but not yet Succeeded (Phase: Installing). Waiting... (35/600)
[e2e-predictor] CSV custom-metrics-autoscaler.v2.18.1-2 found, but not yet Succeeded (Phase: Installing). Waiting... (40/600)
[e2e-predictor] CSV custom-metrics-autoscaler.v2.18.1-2 found, but not yet Succeeded (Phase: Installing). Waiting... (45/600)
[e2e-predictor] CSV custom-metrics-autoscaler.v2.18.1-2 is ready (Phase: Succeeded).
[e2e-predictor] ---
[e2e-predictor] Applying KedaController custom resource...
[e2e-predictor] Warning: resource kedacontrollers/keda is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by oc apply. oc apply should only be used on resources created declaratively by either oc create --save-config or oc apply. The missing annotation will be patched automatically.
[e2e-predictor] kedacontroller.keda.sh/keda configured
[e2e-predictor] KedaController custom resource applied.
[e2e-predictor] ---
[e2e-predictor] Allowing time for KEDA components to be provisioned by the operator ...
[e2e-predictor] Waiting for KEDA Operator pod (selector: "app=keda-operator") to be ready in namespace openshift-keda...
[e2e-predictor] Waiting for pod -l "app=keda-operator" in namespace "openshift-keda" to be created...
[e2e-predictor] Pod -l "app=keda-operator" in namespace "openshift-keda" found.
[e2e-predictor] Current pods for -l "app=keda-operator" in namespace "openshift-keda":
[e2e-predictor] NAME                            READY   STATUS    RESTARTS   AGE
[e2e-predictor] keda-operator-ffbb595cb-4dj9m   1/1     Running   0          42s
[e2e-predictor] Waiting up to 120s for pod(s) -l "app=keda-operator" in namespace "openshift-keda" to become ready...
[e2e-predictor] pod/keda-operator-ffbb595cb-4dj9m condition met
[e2e-predictor] Pod(s) -l "app=keda-operator" in namespace "openshift-keda" are ready.
[e2e-predictor] KEDA Operator pod is ready.
[e2e-predictor] Waiting for KEDA Metrics API Server pod (selector: "app=keda-metrics-apiserver") to be ready in namespace openshift-keda...
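The Subscription-then-CSV sequence above is the standard OLM install wait: poll the ClusterServiceVersion's phase until it reports Succeeded, echoing an elapsed/timeout counter each round. A sketch of that loop, where `csv_phase` is a stand-in for the real `oc get csv "$name" -n openshift-keda -o jsonpath='{.status.phase}'` call:

```shell
#!/bin/sh
# Poll a CSV until its phase is Succeeded, echoing (elapsed/timeout) as the log does.
# csv_phase is a hypothetical helper wrapping the real `oc get csv ... -o jsonpath` call.
wait_for_csv() {
  name=$1; timeout=$2; interval=${3:-5}; waited=0
  while [ "$waited" -lt "$timeout" ]; do
    phase=$(csv_phase "$name")
    if [ "$phase" = "Succeeded" ]; then
      echo "CSV $name is ready (Phase: Succeeded)."
      return 0
    fi
    echo "CSV $name found, but not yet Succeeded (Phase: $phase). Waiting... ($waited/$timeout)"
    sleep "$interval"
    waited=$((waited + interval))
  done
  echo "Timed out waiting for CSV $name after ${timeout}s"
  return 1
}
```

The counters in the log, `(0/600)` up to `(45/600)` in steps of five, match a five-second poll interval against a 600-second budget.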
[e2e-predictor] Waiting for pod -l "app=keda-metrics-apiserver" in namespace "openshift-keda" to be created...
[e2e-predictor] Pod -l "app=keda-metrics-apiserver" in namespace "openshift-keda" found.
[e2e-predictor] Current pods for -l "app=keda-metrics-apiserver" in namespace "openshift-keda":
[e2e-predictor] NAME                                      READY   STATUS    RESTARTS   AGE
[e2e-predictor] keda-metrics-apiserver-7c9f485588-6skvl   1/1     Running   0          47s
[e2e-predictor] Waiting up to 120s for pod(s) -l "app=keda-metrics-apiserver" in namespace "openshift-keda" to become ready...
[e2e-predictor] pod/keda-metrics-apiserver-7c9f485588-6skvl condition met
[e2e-predictor] Pod(s) -l "app=keda-metrics-apiserver" in namespace "openshift-keda" are ready.
[e2e-predictor] KEDA Metrics API Server pod is ready.
[e2e-predictor] Waiting for KEDA Webhook pod (selector: "app=keda-admission-webhooks") to be ready in namespace openshift-keda...
[e2e-predictor] Waiting for pod -l "app=keda-admission-webhooks" in namespace "openshift-keda" to be created...
[e2e-predictor] Pod -l "app=keda-admission-webhooks" in namespace "openshift-keda" found.
[e2e-predictor] Current pods for -l "app=keda-admission-webhooks" in namespace "openshift-keda":
[e2e-predictor] NAME                             READY   STATUS    RESTARTS   AGE
[e2e-predictor] keda-admission-cf49989db-fjrqd   1/1     Running   0          53s
[e2e-predictor] Waiting up to 120s for pod(s) -l "app=keda-admission-webhooks" in namespace "openshift-keda" to become ready...
[e2e-predictor] pod/keda-admission-cf49989db-fjrqd condition met
[e2e-predictor] Pod(s) -l "app=keda-admission-webhooks" in namespace "openshift-keda" are ready.
[e2e-predictor] KEDA Webhook pod is ready.
[e2e-predictor] ---
[e2e-predictor] ✅ KEDA deployment script finished successfully.
[e2e-predictor] Now using project "kserve" on server "https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443".
[e2e-predictor]
[e2e-predictor] You can add applications to this project with the 'new-app' command. For example, try:
[e2e-predictor]
[e2e-predictor] oc new-app rails-postgresql-example
[e2e-predictor]
[e2e-predictor] to build a new example application in Ruby. Or use kubectl to deploy a simple Kubernetes application:
[e2e-predictor]
[e2e-predictor] kubectl create deployment hello-node --image=registry.k8s.io/e2e-test-images/agnhost:2.43 -- /agnhost serve-hostname
[e2e-predictor]
[e2e-predictor] ⏳ Installing KServe with SeaweedFS
[e2e-predictor] # Warning: 'commonLabels' is deprecated. Please use 'labels' instead. Run 'kustomize edit fix' to update your Kustomization automatically.
[e2e-predictor] customresourcedefinition.apiextensions.k8s.io/clusterstoragecontainers.serving.kserve.io serverside-applied
[e2e-predictor] customresourcedefinition.apiextensions.k8s.io/datascienceclusters.datasciencecluster.opendatahub.io serverside-applied
[e2e-predictor] customresourcedefinition.apiextensions.k8s.io/dscinitializations.dscinitialization.opendatahub.io serverside-applied
[e2e-predictor] customresourcedefinition.apiextensions.k8s.io/inferencegraphs.serving.kserve.io serverside-applied
[e2e-predictor] customresourcedefinition.apiextensions.k8s.io/inferencemodelrewrites.inference.networking.x-k8s.io serverside-applied
[e2e-predictor] customresourcedefinition.apiextensions.k8s.io/inferenceobjectives.inference.networking.x-k8s.io serverside-applied
[e2e-predictor] customresourcedefinition.apiextensions.k8s.io/inferencepoolimports.inference.networking.x-k8s.io serverside-applied
[e2e-predictor] customresourcedefinition.apiextensions.k8s.io/inferencepools.inference.networking.k8s.io serverside-applied
[e2e-predictor] customresourcedefinition.apiextensions.k8s.io/inferencepools.inference.networking.x-k8s.io serverside-applied
[e2e-predictor] customresourcedefinition.apiextensions.k8s.io/inferenceservices.serving.kserve.io serverside-applied
[e2e-predictor] customresourcedefinition.apiextensions.k8s.io/llminferenceserviceconfigs.serving.kserve.io serverside-applied
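The KEDA pod waits earlier in the log follow a two-phase pattern: first poll until a pod matching the label selector exists at all (so that `kubectl wait` has something to watch), then delegate to `kubectl wait --for=condition=Ready` with a timeout. A sketch of that pattern, with the selector and namespace taken from the log; in tests, `kubectl` can be shadowed by a shell function:

```shell
#!/bin/sh
# Two-phase pod wait: existence first, then the Ready condition with a timeout.
# `kubectl get -o name` prints nothing when no pod matches, hence the -n test.
wait_pod_ready() {
  selector=$1; namespace=$2; timeout=${3:-120}
  until [ -n "$(kubectl get pod -l "$selector" -n "$namespace" -o name 2>/dev/null)" ]; do
    echo "Waiting for pod -l \"$selector\" in namespace \"$namespace\" to be created..."
    sleep 2
  done
  kubectl wait --for=condition=Ready pod -l "$selector" -n "$namespace" "--timeout=${timeout}s" \
    && echo "Pod(s) -l \"$selector\" in namespace \"$namespace\" are ready."
}
```

The existence pre-check matters because `kubectl wait` on a selector that matches nothing exits immediately with an error rather than waiting for the pod to be scheduled.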
[e2e-predictor] customresourcedefinition.apiextensions.k8s.io/llminferenceservices.serving.kserve.io serverside-applied
[e2e-predictor] customresourcedefinition.apiextensions.k8s.io/servingruntimes.serving.kserve.io serverside-applied
[e2e-predictor] customresourcedefinition.apiextensions.k8s.io/trainedmodels.serving.kserve.io serverside-applied
[e2e-predictor] ⏳ Waiting for CRDs to be established
[e2e-predictor] Waiting for CRD inferenceservices.serving.kserve.io to appear (timeout: 90s)…
[e2e-predictor] CRD inferenceservices.serving.kserve.io detected — waiting for it to become Established (timeout: 90s)…
[e2e-predictor] customresourcedefinition.apiextensions.k8s.io/inferenceservices.serving.kserve.io condition met
[e2e-predictor] Waiting for CRD llminferenceserviceconfigs.serving.kserve.io to appear (timeout: 90s)…
[e2e-predictor] CRD llminferenceserviceconfigs.serving.kserve.io detected — waiting for it to become Established (timeout: 90s)…
[e2e-predictor] customresourcedefinition.apiextensions.k8s.io/llminferenceserviceconfigs.serving.kserve.io condition met
[e2e-predictor] Waiting for CRD clusterstoragecontainers.serving.kserve.io to appear (timeout: 90s)…
[e2e-predictor] CRD clusterstoragecontainers.serving.kserve.io detected — waiting for it to become Established (timeout: 90s)…
[e2e-predictor] customresourcedefinition.apiextensions.k8s.io/clusterstoragecontainers.serving.kserve.io condition met
[e2e-predictor] Waiting for CRD datascienceclusters.datasciencecluster.opendatahub.io to appear (timeout: 90s)…
[e2e-predictor] CRD datascienceclusters.datasciencecluster.opendatahub.io detected — waiting for it to become Established (timeout: 90s)…
[e2e-predictor] customresourcedefinition.apiextensions.k8s.io/datascienceclusters.datasciencecluster.opendatahub.io condition met
[e2e-predictor] ⏳ Applying all resources...
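The CRD waits above are likewise two-step: wait for the CRD object to appear, then wait for its Established condition, since custom resources of a type cannot be created until the API server has established the CRD. A sketch of one such wait (the function name is illustrative; `kubectl` can be shadowed by a stub in tests):

```shell
#!/bin/sh
# Wait for a CRD to exist, then for its Established condition, mirroring the
# "to appear ... detected ... condition met" sequence in the log.
wait_crd_established() {
  crd=$1; timeout=${2:-90}
  echo "Waiting for CRD $crd to appear (timeout: ${timeout}s)..."
  until kubectl get crd "$crd" >/dev/null 2>&1; do
    sleep 2
  done
  echo "CRD $crd detected - waiting for it to become Established (timeout: ${timeout}s)..."
  kubectl wait --for=condition=Established "crd/$crd" "--timeout=${timeout}s"
}
```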
[e2e-predictor] customresourcedefinition.apiextensions.k8s.io/clusterstoragecontainers.serving.kserve.io serverside-applied
[e2e-predictor] customresourcedefinition.apiextensions.k8s.io/datascienceclusters.datasciencecluster.opendatahub.io serverside-applied
[e2e-predictor] customresourcedefinition.apiextensions.k8s.io/dscinitializations.dscinitialization.opendatahub.io serverside-applied
[e2e-predictor] customresourcedefinition.apiextensions.k8s.io/inferencegraphs.serving.kserve.io serverside-applied
[e2e-predictor] customresourcedefinition.apiextensions.k8s.io/inferencemodelrewrites.inference.networking.x-k8s.io serverside-applied
[e2e-predictor] customresourcedefinition.apiextensions.k8s.io/inferenceobjectives.inference.networking.x-k8s.io serverside-applied
[e2e-predictor] customresourcedefinition.apiextensions.k8s.io/inferencepoolimports.inference.networking.x-k8s.io serverside-applied
[e2e-predictor] customresourcedefinition.apiextensions.k8s.io/inferencepools.inference.networking.k8s.io serverside-applied
[e2e-predictor] customresourcedefinition.apiextensions.k8s.io/inferencepools.inference.networking.x-k8s.io serverside-applied
[e2e-predictor] customresourcedefinition.apiextensions.k8s.io/inferenceservices.serving.kserve.io serverside-applied
[e2e-predictor] customresourcedefinition.apiextensions.k8s.io/llminferenceserviceconfigs.serving.kserve.io serverside-applied
[e2e-predictor] customresourcedefinition.apiextensions.k8s.io/llminferenceservices.serving.kserve.io serverside-applied
[e2e-predictor] customresourcedefinition.apiextensions.k8s.io/servingruntimes.serving.kserve.io serverside-applied
[e2e-predictor] customresourcedefinition.apiextensions.k8s.io/trainedmodels.serving.kserve.io serverside-applied
[e2e-predictor] serviceaccount/kserve-controller-manager serverside-applied
[e2e-predictor] serviceaccount/llmisvc-controller-manager serverside-applied
[e2e-predictor] role.rbac.authorization.k8s.io/kserve-leader-election-role serverside-applied
[e2e-predictor] role.rbac.authorization.k8s.io/llmisvc-leader-election-role serverside-applied
[e2e-predictor] clusterrole.rbac.authorization.k8s.io/kserve-admin serverside-applied
[e2e-predictor] clusterrole.rbac.authorization.k8s.io/kserve-edit serverside-applied
[e2e-predictor] clusterrole.rbac.authorization.k8s.io/kserve-llmisvc-distro-role serverside-applied
[e2e-predictor] clusterrole.rbac.authorization.k8s.io/kserve-llmisvc-manager-role serverside-applied
[e2e-predictor] clusterrole.rbac.authorization.k8s.io/kserve-manager-role serverside-applied
[e2e-predictor] clusterrole.rbac.authorization.k8s.io/kserve-metrics-reader-cluster-role serverside-applied
[e2e-predictor] clusterrole.rbac.authorization.k8s.io/kserve-proxy-role serverside-applied
[e2e-predictor] clusterrole.rbac.authorization.k8s.io/kserve-view serverside-applied
[e2e-predictor] clusterrole.rbac.authorization.k8s.io/openshift-ai-llminferenceservice-scc serverside-applied
[e2e-predictor] rolebinding.rbac.authorization.k8s.io/kserve-leader-election-rolebinding serverside-applied
[e2e-predictor] rolebinding.rbac.authorization.k8s.io/llmisvc-leader-election-rolebinding serverside-applied
[e2e-predictor] clusterrolebinding.rbac.authorization.k8s.io/kserve-llmisvc-distro-rolebinding serverside-applied
[e2e-predictor] clusterrolebinding.rbac.authorization.k8s.io/kserve-manager-rolebinding serverside-applied
[e2e-predictor] clusterrolebinding.rbac.authorization.k8s.io/kserve-proxy-rolebinding serverside-applied
[e2e-predictor] clusterrolebinding.rbac.authorization.k8s.io/llmisvc-manager-rolebinding serverside-applied
[e2e-predictor] configmap/inferenceservice-config serverside-applied
[e2e-predictor] configmap/kserve-parameters serverside-applied
[e2e-predictor] secret/kserve-webhook-server-secret serverside-applied
[e2e-predictor] secret/mlpipeline-s3-artifact serverside-applied
[e2e-predictor] service/kserve-controller-manager-metrics-service serverside-applied
[e2e-predictor] service/kserve-controller-manager-service serverside-applied
[e2e-predictor] service/kserve-webhook-server-service serverside-applied
[e2e-predictor] service/llmisvc-controller-manager-service serverside-applied
[e2e-predictor] service/llmisvc-webhook-server-service serverside-applied
[e2e-predictor] service/s3-service serverside-applied
[e2e-predictor] deployment.apps/kserve-controller-manager serverside-applied
[e2e-predictor] deployment.apps/llmisvc-controller-manager serverside-applied
[e2e-predictor] deployment.apps/seaweedfs serverside-applied
[e2e-predictor] networkpolicy.networking.k8s.io/kserve-controller-manager serverside-applied
[e2e-predictor] securitycontextconstraints.security.openshift.io/openshift-ai-llminferenceservice-scc serverside-applied
[e2e-predictor] clusterstoragecontainer.serving.kserve.io/default serverside-applied
[e2e-predictor] llminferenceserviceconfig.serving.kserve.io/kserve-config-llm-decode-template serverside-applied
[e2e-predictor] llminferenceserviceconfig.serving.kserve.io/kserve-config-llm-decode-worker-data-parallel serverside-applied
[e2e-predictor] llminferenceserviceconfig.serving.kserve.io/kserve-config-llm-prefill-template serverside-applied
[e2e-predictor] llminferenceserviceconfig.serving.kserve.io/kserve-config-llm-prefill-worker-data-parallel serverside-applied
[e2e-predictor] llminferenceserviceconfig.serving.kserve.io/kserve-config-llm-router-route serverside-applied
[e2e-predictor] llminferenceserviceconfig.serving.kserve.io/kserve-config-llm-scheduler serverside-applied
[e2e-predictor] llminferenceserviceconfig.serving.kserve.io/kserve-config-llm-template serverside-applied
[e2e-predictor] llminferenceserviceconfig.serving.kserve.io/kserve-config-llm-template-amd-rocm serverside-applied
[e2e-predictor] llminferenceserviceconfig.serving.kserve.io/kserve-config-llm-template-ibm-spyre-ppc64le serverside-applied
[e2e-predictor] llminferenceserviceconfig.serving.kserve.io/kserve-config-llm-template-ibm-spyre-s390x serverside-applied
[e2e-predictor] llminferenceserviceconfig.serving.kserve.io/kserve-config-llm-template-ibm-spyre-x86 serverside-applied
[e2e-predictor] llminferenceserviceconfig.serving.kserve.io/kserve-config-llm-template-intel-gaudi serverside-applied
[e2e-predictor] llminferenceserviceconfig.serving.kserve.io/kserve-config-llm-template-nvidia-cuda serverside-applied
[e2e-predictor] llminferenceserviceconfig.serving.kserve.io/kserve-config-llm-worker-data-parallel serverside-applied
[e2e-predictor] mutatingwebhookconfiguration.admissionregistration.k8s.io/inferenceservice.serving.kserve.io serverside-applied
[e2e-predictor] validatingwebhookconfiguration.admissionregistration.k8s.io/inferencegraph.serving.kserve.io serverside-applied
[e2e-predictor] validatingwebhookconfiguration.admissionregistration.k8s.io/inferenceservice.serving.kserve.io serverside-applied
[e2e-predictor] validatingwebhookconfiguration.admissionregistration.k8s.io/llminferenceservice.serving.kserve.io serverside-applied
[e2e-predictor] validatingwebhookconfiguration.admissionregistration.k8s.io/llminferenceserviceconfig.serving.kserve.io serverside-applied
[e2e-predictor] validatingwebhookconfiguration.admissionregistration.k8s.io/servingruntime.serving.kserve.io serverside-applied
[e2e-predictor] validatingwebhookconfiguration.admissionregistration.k8s.io/trainedmodel.serving.kserve.io serverside-applied
[e2e-predictor] ⏳ Waiting for llmisvc-controller-manager to be ready...
[e2e-predictor] Waiting for pod -l "control-plane=llmisvc-controller-manager" in namespace "kserve" to be created...
[e2e-predictor] Pod -l "control-plane=llmisvc-controller-manager" in namespace "kserve" found.
[e2e-predictor] Current pods for -l "control-plane=llmisvc-controller-manager" in namespace "kserve":
[e2e-predictor] NAME                                          READY   STATUS    RESTARTS   AGE
[e2e-predictor] llmisvc-controller-manager-68cc5db7c4-wz4h9   0/1     Running   0          6s
[e2e-predictor] Waiting up to 600s for pod(s) -l "control-plane=llmisvc-controller-manager" in namespace "kserve" to become ready...
[e2e-predictor] pod/llmisvc-controller-manager-68cc5db7c4-wz4h9 condition met
[e2e-predictor] Pod(s) -l "control-plane=llmisvc-controller-manager" in namespace "kserve" are ready.
[e2e-predictor] ⏳ Re-applying LLMInferenceServiceConfig resources with webhook validation...
[e2e-predictor] Warning: modifying well-known config kserve/kserve-config-llm-decode-template is not recommended. Consider creating a custom config instead
[e2e-predictor] llminferenceserviceconfig.serving.kserve.io/kserve-config-llm-decode-template serverside-applied
[e2e-predictor] Warning: modifying well-known config kserve/kserve-config-llm-decode-worker-data-parallel is not recommended. Consider creating a custom config instead
[e2e-predictor] llminferenceserviceconfig.serving.kserve.io/kserve-config-llm-decode-worker-data-parallel serverside-applied
[e2e-predictor] Warning: modifying well-known config kserve/kserve-config-llm-prefill-template is not recommended. Consider creating a custom config instead
[e2e-predictor] llminferenceserviceconfig.serving.kserve.io/kserve-config-llm-prefill-template serverside-applied
[e2e-predictor] Warning: modifying well-known config kserve/kserve-config-llm-prefill-worker-data-parallel is not recommended. Consider creating a custom config instead
[e2e-predictor] llminferenceserviceconfig.serving.kserve.io/kserve-config-llm-prefill-worker-data-parallel serverside-applied
[e2e-predictor] llminferenceserviceconfig.serving.kserve.io/kserve-config-llm-router-route serverside-applied
[e2e-predictor] Warning: modifying well-known config kserve/kserve-config-llm-scheduler is not recommended. Consider creating a custom config instead
[e2e-predictor] llminferenceserviceconfig.serving.kserve.io/kserve-config-llm-scheduler serverside-applied
[e2e-predictor] Warning: modifying well-known config kserve/kserve-config-llm-template is not recommended. Consider creating a custom config instead
[e2e-predictor] llminferenceserviceconfig.serving.kserve.io/kserve-config-llm-template serverside-applied
[e2e-predictor] Warning: modifying well-known config kserve/kserve-config-llm-worker-data-parallel is not recommended. Consider creating a custom config instead
[e2e-predictor] llminferenceserviceconfig.serving.kserve.io/kserve-config-llm-worker-data-parallel serverside-applied
[e2e-predictor] Installing DSC/DSCI resources...
[e2e-predictor] dscinitialization.dscinitialization.opendatahub.io/test-dsci created
[e2e-predictor] datasciencecluster.datasciencecluster.opendatahub.io/test-dsc created
[e2e-predictor] Patching ingress domain, markers: predictor or kserve_on_openshift
[e2e-predictor] configmap/inferenceservice-config patched
[e2e-predictor] pod "kserve-controller-manager-d9c56dd68-vvp99" deleted
[e2e-predictor] datasciencecluster.datasciencecluster.opendatahub.io/test-dsc patched
[e2e-predictor] waiting kserve-controller get ready...
[e2e-predictor] pod/kserve-controller-manager-d9c56dd68-8l754 condition met
[e2e-predictor] Installing ODH Model Controller manually with PR images
[e2e-predictor] customresourcedefinition.apiextensions.k8s.io/accounts.nim.opendatahub.io created
[e2e-predictor] serviceaccount/model-serving-api created
[e2e-predictor] serviceaccount/odh-model-controller created
[e2e-predictor] role.rbac.authorization.k8s.io/leader-election-role created
[e2e-predictor] clusterrole.rbac.authorization.k8s.io/kserve-prometheus-k8s created
[e2e-predictor] clusterrole.rbac.authorization.k8s.io/metrics-reader created
[e2e-predictor] clusterrole.rbac.authorization.k8s.io/model-serving-api created
[e2e-predictor] clusterrole.rbac.authorization.k8s.io/odh-model-controller-role created
[e2e-predictor] clusterrole.rbac.authorization.k8s.io/proxy-role created
[e2e-predictor] rolebinding.rbac.authorization.k8s.io/leader-election-rolebinding created
[e2e-predictor] clusterrolebinding.rbac.authorization.k8s.io/model-serving-api created
[e2e-predictor] clusterrolebinding.rbac.authorization.k8s.io/odh-model-controller-rolebinding-opendatahub created
[e2e-predictor] clusterrolebinding.rbac.authorization.k8s.io/proxy-rolebinding created
[e2e-predictor] configmap/odh-model-controller-parameters created
[e2e-predictor] service/model-serving-api created
[e2e-predictor] service/odh-model-controller-metrics-service created
[e2e-predictor] service/odh-model-controller-webhook-service created
[e2e-predictor] deployment.apps/model-serving-api created
[e2e-predictor] deployment.apps/odh-model-controller created
[e2e-predictor] servicemonitor.monitoring.coreos.com/model-serving-api-metrics created
[e2e-predictor] servicemonitor.monitoring.coreos.com/odh-model-controller-metrics-monitor created
[e2e-predictor] template.template.openshift.io/guardrails-detector-huggingface-serving-template created
[e2e-predictor] template.template.openshift.io/kserve-ovms created
[e2e-predictor] template.template.openshift.io/mlserver-runtime-template created
[e2e-predictor] template.template.openshift.io/vllm-cpu-runtime-template created
[e2e-predictor] template.template.openshift.io/vllm-cpu-x86-runtime-template created
[e2e-predictor] template.template.openshift.io/vllm-cuda-runtime-template created
[e2e-predictor] template.template.openshift.io/vllm-gaudi-runtime-template created
[e2e-predictor] template.template.openshift.io/vllm-multinode-runtime-template created
[e2e-predictor] template.template.openshift.io/vllm-rocm-runtime-template created
[e2e-predictor] template.template.openshift.io/vllm-spyre-ppc64le-runtime-template created
[e2e-predictor] template.template.openshift.io/vllm-spyre-s390x-runtime-template created
[e2e-predictor] template.template.openshift.io/vllm-spyre-x86-runtime-template created
[e2e-predictor] mutatingwebhookconfiguration.admissionregistration.k8s.io/mutating.odh-model-controller.opendatahub.io created
[e2e-predictor] validatingwebhookconfiguration.admissionregistration.k8s.io/validating.odh-model-controller.opendatahub.io created
[e2e-predictor] Waiting for deployment "odh-model-controller" rollout to finish: 0 of 1 updated replicas are available...
[e2e-predictor] deployment "odh-model-controller" successfully rolled out
[e2e-predictor] Add testing models to SeaweedFS S3 storage ...
[e2e-predictor] Waiting for SeaweedFS deployment to be ready...
[e2e-predictor] deployment "seaweedfs" successfully rolled out
[e2e-predictor] S3 init job not completed, re-creating...
[e2e-predictor] job.batch/s3-init created
[e2e-predictor] Waiting for S3 init job to complete...
[e2e-predictor] job.batch/s3-init condition met
[e2e-predictor] Configuring SeaweedFS S3 TLS
[e2e-predictor] Namespace kserve exists.
[e2e-predictor] Cleaning up existing custom TLS SeaweedFS resources for idempotency...
[e2e-predictor] secret/seaweedfs-tls-custom-artifact serverside-applied [e2e-predictor] service/seaweedfs-tls-custom-service serverside-applied [e2e-predictor] deployment.apps/seaweedfs-tls-custom serverside-applied [e2e-predictor] Waiting for seaweedfs-tls-custom deployment to be ready... [e2e-predictor] Waiting for deployment "seaweedfs-tls-custom" rollout to finish: 0 of 1 updated replicas are available... [e2e-predictor] deployment "seaweedfs-tls-custom" successfully rolled out [e2e-predictor] Configuring SeaweedFS S3 for TLS with custom certificate and adding models to storage [e2e-predictor] Generating Custom CA cert and secret [e2e-predictor] (OpenSSL key-generation progress output elided) [e2e-predictor] Certificate request self-signature ok [e2e-predictor] subject=CN=custom, O=Red Hat [e2e-predictor] secret/seaweedfs-tls-custom created [e2e-predictor] deployment.apps/seaweedfs-tls-custom patched [e2e-predictor] Waiting for patched seaweedfs-tls-custom deployment to be ready... 
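The "Generating Custom CA cert and secret" step (whose OpenSSL key-generation progress output appears above) amounts to creating a self-signed custom CA. A minimal sketch, assuming only the subject the log prints (`CN=custom, O=Red Hat`); file names, key size, and validity period are illustrative guesses, and the real script additionally signs a server CSR with this CA:

```shell
# Hypothetical sketch of generating a self-signed custom CA with OpenSSL.
# Only the subject (CN=custom, O=Red Hat) is taken from the log output.
workdir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -sha256 \
  -keyout "$workdir/ca.key" -out "$workdir/ca.crt" \
  -days 365 -subj "/CN=custom/O=Red Hat"
# Print the subject back, as the log does after signing.
subject=$(openssl x509 -in "$workdir/ca.crt" -noout -subject)
echo "$subject"
```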
[e2e-predictor] Waiting for deployment "seaweedfs-tls-custom" rollout to finish: 0 out of 1 new replicas have been updated... [e2e-predictor] Waiting for deployment "seaweedfs-tls-custom" rollout to finish: 0 out of 1 new replicas have been updated... [e2e-predictor] Waiting for deployment "seaweedfs-tls-custom" rollout to finish: 0 out of 1 new replicas have been updated... [e2e-predictor] Waiting for deployment "seaweedfs-tls-custom" rollout to finish: 0 of 1 updated replicas are available... [e2e-predictor] deployment "seaweedfs-tls-custom" successfully rolled out [e2e-predictor] job.batch/s3-tls-init-custom created [e2e-predictor] Waiting for S3 TLS init job to complete... [e2e-predictor] job.batch/s3-tls-init-custom condition met [e2e-predictor] Namespace kserve exists. [e2e-predictor] Cleaning up existing serving TLS SeaweedFS resources for idempotency... [e2e-predictor] secret/seaweedfs-tls-serving-artifact serverside-applied [e2e-predictor] service/seaweedfs-tls-serving-service serverside-applied [e2e-predictor] deployment.apps/seaweedfs-tls-serving serverside-applied [e2e-predictor] Waiting for seaweedfs-tls-serving deployment to be ready... [e2e-predictor] Waiting for deployment "seaweedfs-tls-serving" rollout to finish: 0 of 1 updated replicas are available... [e2e-predictor] deployment "seaweedfs-tls-serving" successfully rolled out [e2e-predictor] Configuring SeaweedFS S3 for TLS with Openshift serving certificate and adding models to storage [e2e-predictor] job.batch/s3-tls-init-serving created [e2e-predictor] Waiting for S3 TLS init job to complete... 
[e2e-predictor] job.batch/s3-tls-init-serving condition met [e2e-predictor] networkpolicy.networking.k8s.io/allow-all created [e2e-predictor] Prepare CI namespace and install ServingRuntimes [e2e-predictor] Setting up CI namespace: kserve-ci-e2e-test [e2e-predictor] Tearing down CI namespace: kserve-ci-e2e-test [e2e-predictor] Namespace kserve-ci-e2e-test does not exist, skipping deletion [e2e-predictor] CI namespace teardown complete [e2e-predictor] Creating namespace kserve-ci-e2e-test [e2e-predictor] namespace/kserve-ci-e2e-test created [e2e-predictor] Applying S3 artifact secret [e2e-predictor] secret/mlpipeline-s3-artifact created [e2e-predictor] Applying storage-config secret [e2e-predictor] secret/storage-config created [e2e-predictor] Creating odh-trusted-ca-bundle configmap [e2e-predictor] configmap/odh-trusted-ca-bundle created [e2e-predictor] Installing ServingRuntimes [e2e-predictor] servingruntime.serving.kserve.io/kserve-huggingfaceserver created [e2e-predictor] servingruntime.serving.kserve.io/kserve-huggingfaceserver-multinode created [e2e-predictor] servingruntime.serving.kserve.io/kserve-lgbserver created [e2e-predictor] servingruntime.serving.kserve.io/kserve-mlserver created [e2e-predictor] servingruntime.serving.kserve.io/kserve-paddleserver created [e2e-predictor] servingruntime.serving.kserve.io/kserve-pmmlserver created [e2e-predictor] servingruntime.serving.kserve.io/kserve-predictiveserver created [e2e-predictor] servingruntime.serving.kserve.io/kserve-sklearnserver created [e2e-predictor] servingruntime.serving.kserve.io/kserve-tensorflow-serving created [e2e-predictor] servingruntime.serving.kserve.io/kserve-torchserve created [e2e-predictor] servingruntime.serving.kserve.io/kserve-tritonserver created [e2e-predictor] servingruntime.serving.kserve.io/kserve-xgbserver created [e2e-predictor] CI namespace setup complete [e2e-predictor] Setup complete [e2e-predictor] === E2E cluster / operator summary === [e2e-predictor] Client Version: 
4.20.11 [e2e-predictor] Kustomize Version: v5.6.0 [e2e-predictor] Server Version: 4.20.19 [e2e-predictor] Kubernetes Version: v1.33.9 [e2e-predictor] ClusterVersion desired: 4.20.19 [e2e-predictor] ClusterVersion history (latest): 4.20.19 (Completed) [e2e-predictor] CSVs in openshift-keda: [e2e-predictor] custom-metrics-autoscaler.v2.18.1-2 Succeeded [e2e-predictor] CSVs in openshift-operators (ODH / shared operators, filtered): [e2e-predictor] === End E2E cluster / operator summary === [e2e-predictor] /workspace/source [e2e-predictor] REQUESTS_CA_BUNDLE=(PEM certificate chain elided: openshift root-ca, openshift-ingress, and openshift-service-serving-signer certificates) [e2e-predictor] Run E2E tests: predictor or kserve_on_openshift [e2e-predictor] Starting E2E functional tests ... 
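The `REQUESTS_CA_BUNDLE` dump above shows the trust chain the tests use (cluster root CA, ingress certificate, and service-serving signer concatenated). For Python's `requests`, that environment variable conventionally points at a PEM *file* containing the chain, so the setup step plausibly concatenates the certificates into one bundle file. A minimal sketch with stand-in certificates — the file names and directory are illustrative, not from the actual scripts:

```shell
# Hypothetical sketch: concatenate CA certificates into one PEM bundle
# and point REQUESTS_CA_BUNDLE at the resulting file.
workdir=$(mktemp -d)
# Stand-ins for the real cluster certificates seen in the log.
printf -- '-----BEGIN CERTIFICATE-----\nAAAA\n-----END CERTIFICATE-----\n' > "$workdir/root-ca.pem"
printf -- '-----BEGIN CERTIFICATE-----\nBBBB\n-----END CERTIFICATE-----\n' > "$workdir/ingress.pem"
cat "$workdir/root-ca.pem" "$workdir/ingress.pem" > "$workdir/ca-bundle.pem"
export REQUESTS_CA_BUNDLE="$workdir/ca-bundle.pem"
# The bundle now holds both certificate blocks.
grep -c 'BEGIN CERTIFICATE' "$REQUESTS_CA_BUNDLE"   # prints 2
```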
[e2e-predictor] Parallelism requested for pytest is 1 [e2e-predictor] ============================= test session starts ============================== [e2e-predictor] platform linux -- Python 3.11.13, pytest-7.4.4, pluggy-1.5.0 -- /workspace/source/python/kserve/.venv/bin/python [e2e-predictor] cachedir: .pytest_cache [e2e-predictor] rootdir: /workspace/source/test/e2e [e2e-predictor] configfile: pytest.ini [e2e-predictor] plugins: httpx-0.30.0, xdist-3.6.1, anyio-4.9.0, cov-5.0.0, asyncio-0.23.8 [e2e-predictor] asyncio: mode=Mode.STRICT [e2e-predictor] created: 1/1 worker [e2e-predictor] 1 worker [55 items] [e2e-predictor] [e2e-predictor] scheduling tests via WorkStealingScheduling [e2e-predictor] [e2e-predictor] batcher/test_batcher.py::test_batcher 2026-04-22 18:47:10.346 3559 kserve INFO [conftest.py:configure_logger():40] Logger configured [e2e-predictor] [e2e-predictor] [gw0] FAILED batcher/test_batcher.py::test_batcher [e2e-predictor] batcher/test_batcher_custom_port.py::test_batcher_custom_port [e2e-predictor] [gw0] FAILED batcher/test_batcher_custom_port.py::test_batcher_custom_port [e2e-predictor] custom/test_custom_model_grpc.py::test_custom_model_grpc [e2e-predictor] [gw0] SKIPPED custom/test_custom_model_grpc.py::test_custom_model_grpc [e2e-predictor] custom/test_ray.py::test_custom_model_http_ray [e2e-predictor] [gw0] SKIPPED custom/test_ray.py::test_custom_model_http_ray [e2e-predictor] logger/test_logger.py::test_kserve_logger [e2e-predictor] [gw0] FAILED logger/test_logger.py::test_kserve_logger [e2e-predictor] predictor/test_autoscaling.py::test_sklearn_kserve_concurrency [e2e-predictor] [gw0] SKIPPED predictor/test_autoscaling.py::test_sklearn_kserve_concurrency [e2e-predictor] predictor/test_autoscaling.py::test_sklearn_kserve_rps [e2e-predictor] [gw0] SKIPPED predictor/test_autoscaling.py::test_sklearn_kserve_rps [e2e-predictor] predictor/test_canary.py::test_canary_rollout [e2e-predictor] [gw0] SKIPPED 
predictor/test_canary.py::test_canary_rollout [e2e-predictor] predictor/test_canary.py::test_canary_rollout_runtime [e2e-predictor] [gw0] SKIPPED predictor/test_canary.py::test_canary_rollout_runtime [e2e-predictor] predictor/test_lightgbm.py::test_lightgbm_kserve [e2e-predictor] [gw0] FAILED predictor/test_lightgbm.py::test_lightgbm_kserve [e2e-predictor] predictor/test_lightgbm.py::test_lightgbm_runtime_kserve [e2e-predictor] [gw0] FAILED predictor/test_lightgbm.py::test_lightgbm_runtime_kserve [e2e-predictor] predictor/test_lightgbm.py::test_lightgbm_v2_runtime_mlserver [e2e-predictor] [gw0] FAILED predictor/test_lightgbm.py::test_lightgbm_v2_runtime_mlserver [e2e-predictor] predictor/test_lightgbm.py::test_lightgbm_v2_kserve [e2e-predictor] [gw0] FAILED predictor/test_lightgbm.py::test_lightgbm_v2_kserve [e2e-predictor] predictor/test_lightgbm.py::test_lightgbm_v2_grpc [e2e-predictor] [gw0] SKIPPED predictor/test_lightgbm.py::test_lightgbm_v2_grpc [e2e-predictor] predictor/test_mlflow.py::test_mlflow_v2_runtime_kserve [e2e-predictor] [gw0] FAILED predictor/test_mlflow.py::test_mlflow_v2_runtime_kserve [e2e-predictor] predictor/test_multi_container_probing.py::test_multi_container_probing [e2e-predictor] [gw0] FAILED predictor/test_multi_container_probing.py::test_multi_container_probing [e2e-predictor] predictor/test_paddle.py::test_paddle [e2e-predictor] [gw0] FAILED predictor/test_paddle.py::test_paddle [e2e-predictor] predictor/test_paddle.py::test_paddle_runtime [e2e-predictor] [gw0] FAILED predictor/test_paddle.py::test_paddle_runtime [e2e-predictor] predictor/test_paddle.py::test_paddle_v2_kserve [e2e-predictor] [gw0] FAILED predictor/test_paddle.py::test_paddle_v2_kserve [e2e-predictor] predictor/test_paddle.py::test_paddle_v2_grpc [e2e-predictor] [gw0] SKIPPED predictor/test_paddle.py::test_paddle_v2_grpc [e2e-predictor] predictor/test_pmml.py::test_pmml_kserve [e2e-predictor] [gw0] FAILED predictor/test_pmml.py::test_pmml_kserve [e2e-predictor] 
predictor/test_pmml.py::test_pmml_runtime_kserve [e2e-predictor] [gw0] FAILED predictor/test_pmml.py::test_pmml_runtime_kserve [e2e-predictor] predictor/test_pmml.py::test_pmml_v2_kserve [e2e-predictor] [gw0] FAILED predictor/test_pmml.py::test_pmml_v2_kserve [e2e-predictor] predictor/test_pmml.py::test_pmml_v2_grpc [e2e-predictor] [gw0] SKIPPED predictor/test_pmml.py::test_pmml_v2_grpc [e2e-predictor] predictor/test_pod_watch.py::test_event_storm_prevention_init_container_isolation 2026-04-22 20:40:17.331 3559 kserve.trace DEBUG DUMP kserve-ci-e2e-test/isvc-primary-b95658: [e2e-predictor] {"isvc":{"error":"Failed to get ISVC isvc-primary-b95658: HTTPSConnectionPool(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Max retries exceeded with url: /apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices/isvc-primary-b95658 (Caused by NameResolutionError(\"HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)\"))"},"deployments":[{"error":"Failed to list deployments for ISVC isvc-primary-b95658: HTTPSConnectionPool(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Max retries exceeded with url: /apis/apps/v1/namespaces/kserve-ci-e2e-test/deployments?labelSelector=serving.kserve.io%2Finferenceservice%3Disvc-primary-b95658 (Caused by NameResolutionError(\"HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)\"))"}],"pods":[{"error":"Failed to list pods for ISVC isvc-primary-b95658: 
HTTPSConnectionPool(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Max retries exceeded with url: /api/v1/namespaces/kserve-ci-e2e-test/pods?labelSelector=serving.kserve.io%2Finferenceservice%3Disvc-primary-b95658 (Caused by NameResolutionError(\"HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)\"))"}],"controller_logs":[{"error":"Failed to get controller logs: HTTPSConnectionPool(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Max retries exceeded with url: /api/v1/namespaces/kserve/pods?labelSelector=control-plane%3Dkserve-controller-manager (Caused by NameResolutionError(\"HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)\"))"}]} [e2e-predictor] 2026-04-22 20:40:17.331 3559 kserve.trace INFO [test_pod_watch.py:dump_debug_info():104] DEBUG DUMP kserve-ci-e2e-test/isvc-primary-b95658: (same payload as the DEBUG dump above, elided) [e2e-predictor] [e2e-predictor] [gw0] FAILED predictor/test_pod_watch.py::test_event_storm_prevention_init_container_isolation [e2e-predictor] 
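Every error in the dump above reduces to the same root cause: the API server's ELB hostname stopped resolving (`NameResolutionError`, `[Errno -2] Name or service not known`), which typically means the ephemeral test cluster was torn down or its DNS record expired mid-run, so subsequent test failures are collateral. A quick diagnostic sketch — the helper name `resolves` is illustrative:

```shell
# Hypothetical diagnostic: check whether a hostname still resolves,
# as the NameResolutionError above indicates the ELB name no longer does.
resolves() {
  getent hosts "$1" > /dev/null 2>&1
}

resolves localhost && echo "localhost resolves"
# RFC 2606 reserves .invalid, so this never resolves -- same symptom
# the API server hostname showed here.
resolves no-such-host.invalid || echo "unresolvable"
```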
[e2e-predictor] 2026-04-22 20:40:17.517 3559 kserve.trace Creating invalid S3 secret and service account
2026-04-22 20:40:17.517 3559 kserve.trace INFO [test_pod_watch.py:test_quick_reconciliation_on_init_container_failure():446] Creating invalid S3 secret and service account

[gw0] FAILED  predictor/test_pod_watch.py::test_quick_reconciliation_on_init_container_failure
[gw0] FAILED  predictor/test_predictive.py::test_predictive_sklearn_v1
[gw0] FAILED  predictor/test_predictive.py::test_predictive_xgboost_v1
[gw0] FAILED  predictor/test_predictive.py::test_predictive_lightgbm_v1
[gw0] FAILED  predictor/test_predictive.py::test_predictive_sklearn_v2
[gw0] FAILED  predictor/test_predictive.py::test_predictive_xgboost_v2
[gw0] FAILED  predictor/test_predictive.py::test_predictive_lightgbm_v2
[gw0] FAILED  predictor/test_scheduler_name.py::test_scheduler_name
[gw0] FAILED  predictor/test_sklearn.py::test_sklearn_kserve
[gw0] FAILED  predictor/test_sklearn.py::test_sklearn_v2_mlserver
[gw0] FAILED  predictor/test_sklearn.py::test_sklearn_runtime_kserve
[gw0] FAILED  predictor/test_sklearn.py::test_sklearn_v2_runtime_mlserver
[gw0] FAILED  predictor/test_sklearn.py::test_sklearn_v2
[gw0] SKIPPED predictor/test_sklearn.py::test_sklearn_v2_grpc
[gw0] FAILED  predictor/test_sklearn.py::test_sklearn_v2_mixed
[gw0] SKIPPED predictor/test_sklearn.py::test_sklearn_v2_mixed_grpc
[gw0] FAILED  predictor/test_tensorflow.py::test_tensorflow_kserve
[gw0] FAILED  predictor/test_tensorflow.py::test_tensorflow_runtime_kserve
[gw0] FAILED  predictor/test_triton.py::test_triton
[gw0] FAILED  predictor/test_xgboost.py::test_xgboost_kserve
[gw0] FAILED  predictor/test_xgboost.py::test_xgboost_v2_mlserver
[gw0] FAILED  predictor/test_xgboost.py::test_xgboost_single_model_file
[gw0] FAILED  predictor/test_xgboost.py::test_xgboost_runtime_kserve
[gw0] FAILED  predictor/test_xgboost.py::test_xgboost_v2_runtime_mlserver
[gw0] FAILED  predictor/test_xgboost.py::test_xgboost_v2
[gw0] SKIPPED predictor/test_xgboost.py::test_xgboost_v2_grpc
[gw0] FAILED  storagespec/test_s3_storagespec.py::test_sklearn_s3_storagespec_kserve
[gw0] ERROR   storagespec/test_s3_tls_storagespec.py::test_s3_tls_global_custom_cert_storagespec_kserve
[gw0] ERROR   storagespec/test_s3_tls_storagespec.py::test_s3_tls_custom_cert_storagespec_kserve
[gw0] FAILED  storagespec/test_s3_tls_storagespec.py::test_s3_tls_serving_cert_storagespec_kserve

==================================== ERRORS ====================================
_____ ERROR at setup of test_s3_tls_global_custom_cert_storagespec_kserve ______
[gw0] linux -- Python 3.11.13 /workspace/source/python/kserve/.venv/bin/python

self =

    def _new_conn(self) -> socket.socket:
        """Establish a socket connection and set nodelay settings on it.

        :return: New socket connection.
        """
        try:
>           sock = connection.create_connection(
                (self._dns_host, self.port),
                self.timeout,
                source_address=self.source_address,
                socket_options=self.socket_options,
            )

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connection.py:204:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

address = ('a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', 6443)
timeout = None, source_address = None, socket_options = [(6, 1, 1)]

    def create_connection(
        address: tuple[str, int],
        timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
        source_address: tuple[str, int] | None = None,
        socket_options: _TYPE_SOCKET_OPTIONS | None = None,
    ) -> socket.socket:
        """Connect to *address* and return the socket object.

        Convenience function. Connect to *address* (a 2-tuple ``(host,
        port)``) and return the socket object. Passing the optional
        *timeout* parameter will set the timeout on the socket instance
        before attempting to connect. If no *timeout* is supplied, the
        global default timeout setting returned by :func:`socket.getdefaulttimeout`
        is used. If *source_address* is set it must be a tuple of (host, port)
        for the socket to bind as a source address before making the connection.
        An host of '' or port 0 tells the OS to use the default.
        """

        host, port = address
        if host.startswith("["):
            host = host.strip("[]")
        err = None

        # Using the value from allowed_gai_family() in the context of getaddrinfo lets
        # us select whether to work with IPv4 DNS records, IPv6 records, or both.
        # The original create_connection function always returns all records.
        family = allowed_gai_family()

        try:
            host.encode("idna")
        except UnicodeError:
            raise LocationParseError(f"'{host}', label empty or too long") from None

>       for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/util/connection.py:60:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

host = 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com'
port = 6443, family =
type = , proto = 0, flags = 0

    def getaddrinfo(host, port, family=0, type=0, proto=0, flags=0):
        """Resolve host and port into list of address info entries.

        Translate the host/port argument into a sequence of 5-tuples that contain
        all the necessary arguments for creating a socket connected to that service.
        host is a domain name, a string representation of an IPv4/v6 address or
        None. port is a string service name such as 'http', a numeric port number or
        None. By passing None as the value of host and port, you can pass NULL to
        the underlying C API.

        The family, type and proto arguments can be optionally specified in order to
        narrow the list of addresses returned. Passing zero as a value for each of
        these arguments selects the full range of results.
        """
        # We override this function since we want to translate the numeric family
        # and socket type values to enum constants.
        addrlist = []
>       for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
E       socket.gaierror: [Errno -2] Name or service not known

/usr/lib64/python3.11/socket.py:974: gaierror

The above exception was the direct cause of the following exception:

self =
method = 'GET'
url = '/api/v1/namespaces/kserve-ci-e2e-test/configmaps/odh-trusted-ca-bundle'
body = None
headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
retries = Retry(total=0, connect=None, read=None, redirect=None, status=None)
redirect = False, assert_same_host = False, timeout = None, pool_timeout = None
release_conn = True, chunked = False, body_pos = None, preload_content = True
decode_content = True, response_kw = {}
parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/api/v1/namespaces/kserve-ci-e2e-test/configmaps/odh-trusted-ca-bundle', query=None, fragment=None)
destination_scheme = None, conn = None, release_this_conn = True
http_tunnel_required = False, err = None, clean_exit = False

    def urlopen(  # type: ignore[override]
        self,
        method: str,
        url: str,
        body:
            _TYPE_BODY | None = None,
        headers: typing.Mapping[str, str] | None = None,
        retries: Retry | bool | int | None = None,
        redirect: bool = True,
        assert_same_host: bool = True,
        timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
        pool_timeout: int | None = None,
        release_conn: bool | None = None,
        chunked: bool = False,
        body_pos: _TYPE_BODY_POSITION | None = None,
        preload_content: bool = True,
        decode_content: bool = True,
        **response_kw: typing.Any,
    ) -> BaseHTTPResponse:
        """
        Get a connection from the pool and perform an HTTP request. This is the
        lowest level call for making a request, so you'll need to specify all
        the raw details.

        .. note::

           More commonly, it's appropriate to use a convenience method
           such as :meth:`request`.

        .. note::

           `release_conn` will only behave as expected if
           `preload_content=False` because we want to make
           `preload_content=False` the default behaviour someday soon without
           breaking backwards compatibility.

        :param method:
            HTTP request method (such as GET, POST, PUT, etc.)

        :param url:
            The URL to perform the request on.

        :param body:
            Data to send in the request body, either :class:`str`, :class:`bytes`,
            an iterable of :class:`str`/:class:`bytes`, or a file-like object.

        :param headers:
            Dictionary of custom headers to send, such as User-Agent,
            If-None-Match, etc. If None, pool headers are used. If provided,
            these headers completely replace any pool-specific headers.

        :param retries:
            Configure the number of retries to allow before raising a
            :class:`~urllib3.exceptions.MaxRetryError` exception.

            If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a
            :class:`~urllib3.util.retry.Retry` object for fine-grained control
            over different types of retries.
            Pass an integer number to retry connection errors that many times,
            but no other types of errors. Pass zero to never retry.

            If ``False``, then retries are disabled and any exception is raised
            immediately. Also, instead of raising a MaxRetryError on redirects,
            the redirect response will be returned.

        :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.

        :param redirect:
            If True, automatically handle redirects (status codes 301, 302,
            303, 307, 308). Each redirect counts as a retry. Disabling retries
            will disable redirect, too.

        :param assert_same_host:
            If ``True``, will make sure that the host of the pool requests is
            consistent else will raise HostChangedError. When ``False``, you can
            use the pool on an HTTP proxy and request foreign hosts.

        :param timeout:
            If specified, overrides the default timeout for this one
            request. It may be a float (in seconds) or an instance of
            :class:`urllib3.util.Timeout`.

        :param pool_timeout:
            If set and the pool is set to block=True, then this method will
            block for ``pool_timeout`` seconds and raise EmptyPoolError if no
            connection is available within the time period.

        :param bool preload_content:
            If True, the response's body will be preloaded into memory.

        :param bool decode_content:
            If True, will attempt to decode the body based on the
            'content-encoding' header.

        :param release_conn:
            If False, then the urlopen call will not release the connection
            back into the pool once a response is received (but will release if
            you read the entire contents of the response such as when
            `preload_content=True`). This is useful if you're not preloading
            the response's content immediately. You will need to call
            ``r.release_conn()`` on the response ``r`` to return the connection
            back into the pool. If None, it takes the value of ``preload_content``
            which defaults to ``True``.

        :param bool chunked:
            If True, urllib3 will send the body using chunked transfer
            encoding. Otherwise, urllib3 will send the body using the standard
            content-length form. Defaults to False.

        :param int body_pos:
            Position to seek to in file-like body in the event of a retry or
            redirect. Typically this won't need to be set because urllib3 will
            auto-populate the value when needed.
        """
        parsed_url = parse_url(url)
        destination_scheme = parsed_url.scheme

        if headers is None:
            headers = self.headers

        if not isinstance(retries, Retry):
            retries = Retry.from_int(retries, redirect=redirect, default=self.retries)

        if release_conn is None:
            release_conn = preload_content

        # Check host
        if assert_same_host and not self.is_same_host(url):
            raise HostChangedError(self, url, retries)

        # Ensure that the URL we're connecting to is properly encoded
        if url.startswith("/"):
            url = to_str(_encode_target(url))
        else:
            url = to_str(parsed_url.url)

        conn = None

        # Track whether `conn` needs to be released before
        # returning/raising/recursing. Update this variable if necessary, and
        # leave `release_conn` constant throughout the function. That way, if
        # the function recurses, the original value of `release_conn` will be
        # passed down into the recursive call, and its value will be respected.
        #
        # See issue #651 [1] for details.
        #
        # [1]
        release_this_conn = release_conn

        http_tunnel_required = connection_requires_http_tunnel(
            self.proxy, self.proxy_config, destination_scheme
        )

        # Merge the proxy headers. Only done when not using HTTP CONNECT. We
        # have to copy the headers dict so we can safely change it without those
        # changes being reflected in anyone else's copy.
        if not http_tunnel_required:
            headers = headers.copy()  # type: ignore[attr-defined]
            headers.update(self.proxy_headers)  # type: ignore[union-attr]

        # Must keep the exception bound to a separate variable or else Python 3
        # complains about UnboundLocalError.
        err = None

        # Keep track of whether we cleanly exited the except block. This
        # ensures we do proper cleanup in finally.
        clean_exit = False

        # Rewind body position, if needed. Record current position
        # for future rewinds in the event of a redirect/retry.
        body_pos = set_file_position(body, body_pos)

        try:
            # Request a connection from the queue.
            timeout_obj = self._get_timeout(timeout)
            conn = self._get_conn(timeout=pool_timeout)

            conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]

            # Is this a closed/new connection that requires CONNECT tunnelling?
            if self.proxy is not None and http_tunnel_required and conn.is_closed:
                try:
                    self._prepare_proxy(conn)
                except (BaseSSLError, OSError, SocketTimeout) as e:
                    self._raise_timeout(
                        err=e, url=self.proxy.url, timeout_value=conn.timeout
                    )
                    raise

            # If we're going to release the connection in ``finally:``, then
            # the response doesn't need to know about the connection. Otherwise
            # it will also try to release it and we'll have a double-release
            # mess.
            response_conn = conn if not release_conn else None

            # Make the request on the HTTPConnection object
>           response = self._make_request(
                conn,
                method,
                url,
                timeout=timeout_obj,
                body=body,
                headers=headers,
                chunked=chunked,
                retries=retries,
                response_conn=response_conn,
                preload_content=preload_content,
                decode_content=decode_content,
                **response_kw,
            )

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:787:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
conn =
method = 'GET'
url = '/api/v1/namespaces/kserve-ci-e2e-test/configmaps/odh-trusted-ca-bundle'
body = None
headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
retries = Retry(total=0, connect=None, read=None, redirect=None, status=None)
timeout = Timeout(connect=None, read=None, total=None), chunked = False
response_conn = None, preload_content = True, decode_content = True
enforce_content_length = True

    def _make_request(
        self,
        conn: BaseHTTPConnection,
        method: str,
        url: str,
        body: _TYPE_BODY | None = None,
        headers: typing.Mapping[str, str] | None = None,
        retries: Retry | None = None,
        timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
        chunked: bool = False,
        response_conn: BaseHTTPConnection | None = None,
        preload_content: bool = True,
        decode_content: bool = True,
        enforce_content_length: bool = True,
    ) -> BaseHTTPResponse:
        """
        Perform a request on a given urllib connection object taken from our
        pool.

        :param conn:
            a connection from one of our connection pools

        :param method:
            HTTP request method (such as GET, POST, PUT, etc.)

        :param url:
            The URL to perform the request on.

        :param body:
            Data to send in the request body, either :class:`str`, :class:`bytes`,
            an iterable of :class:`str`/:class:`bytes`, or a file-like object.

        :param headers:
            Dictionary of custom headers to send, such as User-Agent,
            If-None-Match, etc. If None, pool headers are used. If provided,
            these headers completely replace any pool-specific headers.

        :param retries:
            Configure the number of retries to allow before raising a
            :class:`~urllib3.exceptions.MaxRetryError` exception.

            Pass ``None`` to retry until you receive a response. Pass a
            :class:`~urllib3.util.retry.Retry` object for fine-grained control
            over different types of retries.
            Pass an integer number to retry connection errors that many times,
            but no other types of errors. Pass zero to never retry.

            If ``False``, then retries are disabled and any exception is raised
            immediately. Also, instead of raising a MaxRetryError on redirects,
            the redirect response will be returned.

        :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.

        :param timeout:
            If specified, overrides the default timeout for this one
            request. It may be a float (in seconds) or an instance of
            :class:`urllib3.util.Timeout`.

        :param chunked:
            If True, urllib3 will send the body using chunked transfer
            encoding. Otherwise, urllib3 will send the body using the standard
            content-length form. Defaults to False.

        :param response_conn:
            Set this to ``None`` if you will handle releasing the connection or
            set the connection to have the response release it.

        :param preload_content:
            If True, the response's body will be preloaded during construction.

        :param decode_content:
            If True, will attempt to decode the body based on the
            'content-encoding' header.

        :param enforce_content_length:
            Enforce content length checking. Body returned by server must match
            value of Content-Length header, if present. Otherwise, raise error.
        """
        self.num_requests += 1

        timeout_obj = self._get_timeout(timeout)
        timeout_obj.start_connect()
        conn.timeout = Timeout.resolve_default_timeout(timeout_obj.connect_timeout)

        try:
            # Trigger any extra validation we need to do.
            try:
                self._validate_conn(conn)
            except (SocketTimeout, BaseSSLError) as e:
                self._raise_timeout(err=e, url=url, timeout_value=conn.timeout)
                raise

        # _validate_conn() starts the connection to an HTTPS proxy
        # so we need to wrap errors with 'ProxyError' here too.
        except (
            OSError,
            NewConnectionError,
            TimeoutError,
            BaseSSLError,
            CertificateError,
            SSLError,
        ) as e:
            new_e: Exception = e
            if isinstance(e, (BaseSSLError, CertificateError)):
                new_e = SSLError(e)
            # If the connection didn't successfully connect to it's proxy
            # then there
            if isinstance(
                new_e, (OSError, NewConnectionError, TimeoutError, SSLError)
            ) and (conn and conn.proxy and not conn.has_connected_to_proxy):
                new_e = _wrap_proxy_error(new_e, conn.proxy.scheme)
>           raise new_e

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:488:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
conn =
method = 'GET'
url = '/api/v1/namespaces/kserve-ci-e2e-test/configmaps/odh-trusted-ca-bundle'
body = None
headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
retries = Retry(total=0, connect=None, read=None, redirect=None, status=None)
timeout = Timeout(connect=None, read=None, total=None), chunked = False
response_conn = None, preload_content = True, decode_content = True
enforce_content_length = True

    def _make_request(
        self,
        conn: BaseHTTPConnection,
        method: str,
        url: str,
        body: _TYPE_BODY | None = None,
        headers: typing.Mapping[str, str] | None = None,
        retries: Retry | None = None,
        timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
        chunked: bool = False,
        response_conn: BaseHTTPConnection | None = None,
        preload_content: bool = True,
        decode_content: bool = True,
        enforce_content_length: bool = True,
    ) -> BaseHTTPResponse:
        """
        Perform a request on a given urllib connection object taken from our
        pool.

        :param conn:
            a connection from one of our connection pools

        :param method:
            HTTP request method (such as GET, POST, PUT, etc.)

        :param url:
            The URL to perform the request on.

        :param body:
            Data to send in the request body, either :class:`str`, :class:`bytes`,
            an iterable of :class:`str`/:class:`bytes`, or a file-like object.

        :param headers:
            Dictionary of custom headers to send, such as User-Agent,
            If-None-Match, etc. If None, pool headers are used. If provided,
            these headers completely replace any pool-specific headers.

        :param retries:
            Configure the number of retries to allow before raising a
            :class:`~urllib3.exceptions.MaxRetryError` exception.

            Pass ``None`` to retry until you receive a response. Pass a
            :class:`~urllib3.util.retry.Retry` object for fine-grained control
            over different types of retries.
            Pass an integer number to retry connection errors that many times,
            but no other types of errors. Pass zero to never retry.

            If ``False``, then retries are disabled and any exception is raised
            immediately. Also, instead of raising a MaxRetryError on redirects,
            the redirect response will be returned.

        :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.

        :param timeout:
            If specified, overrides the default timeout for this one
            request. It may be a float (in seconds) or an instance of
            :class:`urllib3.util.Timeout`.

        :param chunked:
            If True, urllib3 will send the body using chunked transfer
            encoding. Otherwise, urllib3 will send the body using the standard
            content-length form. Defaults to False.

        :param response_conn:
            Set this to ``None`` if you will handle releasing the connection or
            set the connection to have the response release it.

        :param preload_content:
            If True, the response's body will be preloaded during construction.

        :param decode_content:
            If True, will attempt to decode the body based on the
            'content-encoding' header.

        :param enforce_content_length:
            Enforce content length checking. Body returned by server must match
            value of Content-Length header, if present. Otherwise, raise error.
        """
        self.num_requests += 1

        timeout_obj = self._get_timeout(timeout)
        timeout_obj.start_connect()
        conn.timeout = Timeout.resolve_default_timeout(timeout_obj.connect_timeout)

        try:
            # Trigger any extra validation we need to do.
            try:
>               self._validate_conn(conn)

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:464:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
conn =

    def _validate_conn(self, conn: BaseHTTPConnection) -> None:
        """
        Called right before a request is made, after the socket is created.
        """
        super()._validate_conn(conn)

        # Force connect early to allow us to validate the connection.
        if conn.is_closed:
>           conn.connect()

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:1093:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =

    def connect(self) -> None:
        # Today we don't need to be doing this step before the /actual/ socket
        # connection, however in the future we'll need to decide whether to
        # create a new socket or re-use an existing "shared" socket as a part
        # of the HTTP/2 handshake dance.
        if self._tunnel_host is not None and self._tunnel_port is not None:
            probe_http2_host = self._tunnel_host
            probe_http2_port = self._tunnel_port
        else:
            probe_http2_host = self.host
            probe_http2_port = self.port

        # Check if the target origin supports HTTP/2.
        # If the value comes back as 'None' it means that the current thread
        # is probing for HTTP/2 support. Otherwise, we're waiting for another
        # probe to complete, or we get a value right away.
        target_supports_http2: bool | None
        if "h2" in ssl_.ALPN_PROTOCOLS:
            target_supports_http2 = http2_probe.acquire_and_get(
                host=probe_http2_host, port=probe_http2_port
            )
        else:
            # If HTTP/2 isn't going to be offered it doesn't matter if
            # the target supports HTTP/2. Don't want to make a probe.
            target_supports_http2 = False

        if self._connect_callback is not None:
            self._connect_callback(
                "before connect",
                thread_id=threading.get_ident(),
                target_supports_http2=target_supports_http2,
            )

        try:
            sock: socket.socket | ssl.SSLSocket
>           self.sock = sock = self._new_conn()

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connection.py:759:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =

    def _new_conn(self) -> socket.socket:
        """Establish a socket connection and set nodelay settings on it.

        :return: New socket connection.
        """
        try:
            sock = connection.create_connection(
                (self._dns_host, self.port),
                self.timeout,
                source_address=self.source_address,
                socket_options=self.socket_options,
            )
        except socket.gaierror as e:
>           raise NameResolutionError(self.host, self, e) from e
E           urllib3.exceptions.NameResolutionError: HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connection.py:211: NameResolutionError

The above exception was the direct cause of the following exception:

kserve_client =

    @pytest.fixture(scope="module")
    def odh_trusted_ca_bundle_configmap(kserve_client):
        """Ensure the odh-trusted-ca-bundle configmap exists.

        The configmap is pre-created by setup-ci-namespace.sh to avoid race
        conditions when pytest-xdist distributes tests across multiple workers.
        Namespace teardown handles cleanup.
        """
        try:
>           kserve_client.core_api.read_namespaced_config_map(
                name=ODH_TRUSTED_CA_BUNDLE_CONFIGMAP_NAME, namespace=KSERVE_TEST_NAMESPACE
            )

storagespec/test_s3_tls_storagespec.py:156:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
name = 'odh-trusted-ca-bundle', namespace = 'kserve-ci-e2e-test'
kwargs = {'_return_http_data_only': True}

    def read_namespaced_config_map(self, name, namespace, **kwargs):  # noqa: E501
        """read_namespaced_config_map  # noqa: E501

        read the specified ConfigMap  # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.read_namespaced_config_map(name, namespace, async_req=True)
        >>> result = thread.get()

        :param async_req bool: execute request asynchronously
        :param str name: name of the ConfigMap (required)
        :param str namespace: object name and auth scope, such as for teams and projects (required)
        :param str pretty: If 'true', then the output is pretty printed. Defaults to 'false' unless the user-agent indicates a browser or command-line HTTP tool (curl and wget).
        :param _preload_content: if False, the urllib3.HTTPResponse object will
                                 be returned without reading/decoding response
                                 data. Default is True.
        :param _request_timeout: timeout setting for this request. If one
                                 number provided, it will be total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
        :return: V1ConfigMap
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
>       return self.read_namespaced_config_map_with_http_info(name, namespace, **kwargs)  # noqa: E501

../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api/core_v1_api.py:23231:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
name = 'odh-trusted-ca-bundle', namespace = 'kserve-ci-e2e-test'
kwargs = {'_return_http_data_only': True}
local_var_params = {'_return_http_data_only': True, 'all_params': ['name', 'namespace', 'pretty', 'async_req', '_return_http_data_only', '_preload_content', ...], 'auth_settings': ['BearerToken'], 'body_params': None, ...}
all_params = ['name', 'namespace', 'pretty', 'async_req', '_return_http_data_only', '_preload_content', ...]
key = '_return_http_data_only', val = True, collection_formats = {}
path_params = {'name': 'odh-trusted-ca-bundle', 'namespace': 'kserve-ci-e2e-test'}
query_params = []

    def read_namespaced_config_map_with_http_info(self, name, namespace, **kwargs):  # noqa: E501
        """read_namespaced_config_map  # noqa: E501

        read the specified ConfigMap  # noqa: E501
        This method makes a synchronous HTTP request by default.
To make an [e2e-predictor] asynchronous HTTP request, please pass async_req=True [e2e-predictor] >>> thread = api.read_namespaced_config_map_with_http_info(name, namespace, async_req=True) [e2e-predictor] >>> result = thread.get() [e2e-predictor] [e2e-predictor] :param async_req bool: execute request asynchronously [e2e-predictor] :param str name: name of the ConfigMap (required) [e2e-predictor] :param str namespace: object name and auth scope, such as for teams and projects (required) [e2e-predictor] :param str pretty: If 'true', then the output is pretty printed. Defaults to 'false' unless the user-agent indicates a browser or command-line HTTP tool (curl and wget). [e2e-predictor] :param _return_http_data_only: response data without head status code [e2e-predictor] and headers [e2e-predictor] :param _preload_content: if False, the urllib3.HTTPResponse object will [e2e-predictor] be returned without reading/decoding response [e2e-predictor] data. Default is True. [e2e-predictor] :param _request_timeout: timeout setting for this request. If one [e2e-predictor] number provided, it will be total request [e2e-predictor] timeout. It can also be a pair (tuple) of [e2e-predictor] (connection, read) timeouts. [e2e-predictor] :return: tuple(V1ConfigMap, status_code(int), headers(HTTPHeaderDict)) [e2e-predictor] If the method is called asynchronously, [e2e-predictor] returns the request thread. 
[e2e-predictor] """ [e2e-predictor] [e2e-predictor] local_var_params = locals() [e2e-predictor] [e2e-predictor] all_params = [ [e2e-predictor] 'name', [e2e-predictor] 'namespace', [e2e-predictor] 'pretty' [e2e-predictor] ] [e2e-predictor] all_params.extend( [e2e-predictor] [ [e2e-predictor] 'async_req', [e2e-predictor] '_return_http_data_only', [e2e-predictor] '_preload_content', [e2e-predictor] '_request_timeout' [e2e-predictor] ] [e2e-predictor] ) [e2e-predictor] [e2e-predictor] for key, val in six.iteritems(local_var_params['kwargs']): [e2e-predictor] if key not in all_params: [e2e-predictor] raise ApiTypeError( [e2e-predictor] "Got an unexpected keyword argument '%s'" [e2e-predictor] " to method read_namespaced_config_map" % key [e2e-predictor] ) [e2e-predictor] local_var_params[key] = val [e2e-predictor] del local_var_params['kwargs'] [e2e-predictor] # verify the required parameter 'name' is set [e2e-predictor] if self.api_client.client_side_validation and ('name' not in local_var_params or # noqa: E501 [e2e-predictor] local_var_params['name'] is None): # noqa: E501 [e2e-predictor] raise ApiValueError("Missing the required parameter `name` when calling `read_namespaced_config_map`") # noqa: E501 [e2e-predictor] # verify the required parameter 'namespace' is set [e2e-predictor] if self.api_client.client_side_validation and ('namespace' not in local_var_params or # noqa: E501 [e2e-predictor] local_var_params['namespace'] is None): # noqa: E501 [e2e-predictor] raise ApiValueError("Missing the required parameter `namespace` when calling `read_namespaced_config_map`") # noqa: E501 [e2e-predictor] [e2e-predictor] collection_formats = {} [e2e-predictor] [e2e-predictor] path_params = {} [e2e-predictor] if 'name' in local_var_params: [e2e-predictor] path_params['name'] = local_var_params['name'] # noqa: E501 [e2e-predictor] if 'namespace' in local_var_params: [e2e-predictor] path_params['namespace'] = local_var_params['namespace'] # noqa: E501 [e2e-predictor] 
[e2e-predictor] query_params = [] [e2e-predictor] if 'pretty' in local_var_params and local_var_params['pretty'] is not None: # noqa: E501 [e2e-predictor] query_params.append(('pretty', local_var_params['pretty'])) # noqa: E501 [e2e-predictor] [e2e-predictor] header_params = {} [e2e-predictor] [e2e-predictor] form_params = [] [e2e-predictor] local_var_files = {} [e2e-predictor] [e2e-predictor] body_params = None [e2e-predictor] # HTTP header `Accept` [e2e-predictor] header_params['Accept'] = self.api_client.select_header_accept( [e2e-predictor] ['application/json', 'application/yaml', 'application/vnd.kubernetes.protobuf', 'application/cbor']) # noqa: E501 [e2e-predictor] [e2e-predictor] # Authentication setting [e2e-predictor] auth_settings = ['BearerToken'] # noqa: E501 [e2e-predictor] [e2e-predictor] > return self.api_client.call_api( [e2e-predictor] '/api/v1/namespaces/{namespace}/configmaps/{name}', 'GET', [e2e-predictor] path_params, [e2e-predictor] query_params, [e2e-predictor] header_params, [e2e-predictor] body=body_params, [e2e-predictor] post_params=form_params, [e2e-predictor] files=local_var_files, [e2e-predictor] response_type='V1ConfigMap', # noqa: E501 [e2e-predictor] auth_settings=auth_settings, [e2e-predictor] async_req=local_var_params.get('async_req'), [e2e-predictor] _return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501 [e2e-predictor] _preload_content=local_var_params.get('_preload_content', True), [e2e-predictor] _request_timeout=local_var_params.get('_request_timeout'), [e2e-predictor] collection_formats=collection_formats) [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api/core_v1_api.py:23318: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] resource_path = '/api/v1/namespaces/{namespace}/configmaps/{name}' [e2e-predictor] method = 'GET' 
path_params = {'name': 'odh-trusted-ca-bundle', 'namespace': 'kserve-ci-e2e-test'}
query_params = []
header_params = {'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
body = None, post_params = [], files = {}, response_type = 'V1ConfigMap'
auth_settings = ['BearerToken'], async_req = None, _return_http_data_only = True
collection_formats = {}, _preload_content = True, _request_timeout = None
_host = None

    def call_api(self, resource_path, method,
                 path_params=None, query_params=None, header_params=None,
                 body=None, post_params=None, files=None,
                 response_type=None, auth_settings=None, async_req=None,
                 _return_http_data_only=None, collection_formats=None,
                 _preload_content=True, _request_timeout=None, _host=None):
        """Makes the HTTP request (synchronous) and returns deserialized data.

        To make an async_req request, set the async_req parameter.

        :param resource_path: Path to method endpoint.
        :param method: Method to call.
        :param path_params: Path parameters in the url.
        :param query_params: Query parameters in the url.
        :param header_params: Header parameters to be
            placed in the request header.
        :param body: Request body.
        :param post_params dict: Request post form parameters,
            for `application/x-www-form-urlencoded`, `multipart/form-data`.
        :param auth_settings list: Auth Settings names for the request.
        :param response: Response data type.
        :param files dict: key -> filename, value -> filepath,
            for `multipart/form-data`.
        :param async_req bool: execute request asynchronously
        :param _return_http_data_only: response data without head status code
                                       and headers
        :param collection_formats: dict of collection formats for path, query,
            header, and post parameters.
        :param _preload_content: if False, the urllib3.HTTPResponse object will
                                 be returned without reading/decoding response
                                 data. Default is True.
        :param _request_timeout: timeout setting for this request. If one
                                 number provided, it will be total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
        :return:
            If async_req parameter is True,
            the request will be called asynchronously.
            The method will return the request thread.
            If parameter async_req is False or missing,
            then the method will return the response directly.
[e2e-predictor] """ [e2e-predictor] if not async_req: [e2e-predictor] > return self.__call_api(resource_path, method, [e2e-predictor] path_params, query_params, header_params, [e2e-predictor] body, post_params, files, [e2e-predictor] response_type, auth_settings, [e2e-predictor] _return_http_data_only, collection_formats, [e2e-predictor] _preload_content, _request_timeout, _host) [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api_client.py:348: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] resource_path = '/api/v1/namespaces/kserve-ci-e2e-test/configmaps/odh-trusted-ca-bundle' [e2e-predictor] method = 'GET' [e2e-predictor] path_params = [('name', 'odh-trusted-ca-bundle'), ('namespace', 'kserve-ci-e2e-test')] [e2e-predictor] query_params = [] [e2e-predictor] header_params = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'} [e2e-predictor] body = None, post_params = [], files = {}, response_type = 'V1ConfigMap' [e2e-predictor] auth_settings = ['BearerToken'], _return_http_data_only = True [e2e-predictor] collection_formats = {}, _preload_content = True, _request_timeout = None [e2e-predictor] _host = None [e2e-predictor] [e2e-predictor] def __call_api( [e2e-predictor] self, resource_path, method, path_params=None, [e2e-predictor] query_params=None, header_params=None, body=None, post_params=None, [e2e-predictor] files=None, response_type=None, auth_settings=None, [e2e-predictor] _return_http_data_only=None, collection_formats=None, [e2e-predictor] _preload_content=True, _request_timeout=None, _host=None): [e2e-predictor] [e2e-predictor] config = self.configuration [e2e-predictor] [e2e-predictor] # header parameters [e2e-predictor] header_params = header_params or {} [e2e-predictor] header_params.update(self.default_headers) [e2e-predictor] 
if self.cookie: [e2e-predictor] header_params['Cookie'] = self.cookie [e2e-predictor] if header_params: [e2e-predictor] header_params = self.sanitize_for_serialization(header_params) [e2e-predictor] header_params = dict(self.parameters_to_tuples(header_params, [e2e-predictor] collection_formats)) [e2e-predictor] [e2e-predictor] # path parameters [e2e-predictor] if path_params: [e2e-predictor] path_params = self.sanitize_for_serialization(path_params) [e2e-predictor] path_params = self.parameters_to_tuples(path_params, [e2e-predictor] collection_formats) [e2e-predictor] for k, v in path_params: [e2e-predictor] # specified safe chars, encode everything [e2e-predictor] resource_path = resource_path.replace( [e2e-predictor] '{%s}' % k, [e2e-predictor] quote(str(v), safe=config.safe_chars_for_path_param) [e2e-predictor] ) [e2e-predictor] [e2e-predictor] # query parameters [e2e-predictor] if query_params: [e2e-predictor] query_params = self.sanitize_for_serialization(query_params) [e2e-predictor] query_params = self.parameters_to_tuples(query_params, [e2e-predictor] collection_formats) [e2e-predictor] [e2e-predictor] # post parameters [e2e-predictor] if post_params or files: [e2e-predictor] post_params = post_params if post_params else [] [e2e-predictor] post_params = self.sanitize_for_serialization(post_params) [e2e-predictor] post_params = self.parameters_to_tuples(post_params, [e2e-predictor] collection_formats) [e2e-predictor] post_params.extend(self.files_parameters(files)) [e2e-predictor] [e2e-predictor] # auth setting [e2e-predictor] self.update_params_for_auth(header_params, query_params, auth_settings) [e2e-predictor] [e2e-predictor] # body [e2e-predictor] if body: [e2e-predictor] body = self.sanitize_for_serialization(body) [e2e-predictor] [e2e-predictor] # request url [e2e-predictor] if _host is None: [e2e-predictor] url = self.configuration.host + resource_path [e2e-predictor] else: [e2e-predictor] # use server/host defined in path or operation instead 
            url = _host + resource_path

        # perform request and return response
>       response_data = self.request(
            method, url, query_params=query_params, headers=header_params,
            post_params=post_params, body=body,
            _preload_content=_preload_content,
            _request_timeout=_request_timeout)

../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api_client.py:180:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
method = 'GET'
url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/api/v1/namespaces/kserve-ci-e2e-test/configmaps/odh-trusted-ca-bundle'
query_params = []
headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
post_params = [], body = None, _preload_content = True, _request_timeout = None

    def request(self, method, url, query_params=None, headers=None,
                post_params=None, body=None, _preload_content=True,
                _request_timeout=None):
        """Makes the HTTP request using RESTClient."""
        if method == "GET":
>           return self.rest_client.GET(url,
                                        query_params=query_params,
                                        _preload_content=_preload_content,
                                        _request_timeout=_request_timeout,

../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api_client.py:373:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/api/v1/namespaces/kserve-ci-e2e-test/configmaps/odh-trusted-ca-bundle'
headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
query_params = [], _preload_content = True, _request_timeout = None

    def GET(self, url, headers=None, query_params=None, _preload_content=True,
            _request_timeout=None):
>       return self.request("GET", url,
                            headers=headers,
                            _preload_content=_preload_content,
                            _request_timeout=_request_timeout,
                            query_params=query_params)

../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/rest.py:244:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
method = 'GET'
url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/api/v1/namespaces/kserve-ci-e2e-test/configmaps/odh-trusted-ca-bundle'
query_params = []
headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
body = None, post_params = {}, _preload_content = True, _request_timeout = None

    def request(self, method, url, query_params=None, headers=None,
                body=None, post_params=None, _preload_content=True,
                _request_timeout=None):
        """Perform requests.
        :param method: http request method
        :param url: http request url
        :param query_params: query parameters in the url
        :param headers: http request headers
        :param body: request json body, for `application/json`
        :param post_params: request post parameters,
                            `application/x-www-form-urlencoded`
                            and `multipart/form-data`
        :param _preload_content: if False, the urllib3.HTTPResponse object will
                                 be returned without reading/decoding response
                                 data. Default is True.
        :param _request_timeout: timeout setting for this request. If one
                                 number provided, it will be total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
        """
        method = method.upper()
        assert method in ['GET', 'HEAD', 'DELETE', 'POST', 'PUT',
                          'PATCH', 'OPTIONS']

        if post_params and body:
            raise ApiValueError(
                "body parameter cannot be used with post_params parameter."
            )

        post_params = post_params or {}
        headers = headers or {}

        timeout = None
        if _request_timeout:
            if isinstance(_request_timeout, (int, ) if six.PY3 else (int, long)):  # noqa: E501,F821
                timeout = urllib3.Timeout(total=_request_timeout)
            elif (isinstance(_request_timeout, tuple) and
                  len(_request_timeout) == 2):
                timeout = urllib3.Timeout(
                    connect=_request_timeout[0], read=_request_timeout[1])

        if 'Content-Type' not in headers:
            headers['Content-Type'] = 'application/json'

        try:
            # For `POST`, `PUT`, `PATCH`, `OPTIONS`, `DELETE`
            if method in ['POST', 'PUT', 'PATCH', 'OPTIONS', 'DELETE']:
                if query_params:
                    url += '?' + urlencode(query_params)
                if (re.search('json', headers['Content-Type'], re.IGNORECASE) or
                        headers['Content-Type'] == 'application/apply-patch+yaml'):
                    if headers['Content-Type'] == 'application/json-patch+json':
                        if not isinstance(body, list):
                            headers['Content-Type'] = \
                                'application/strategic-merge-patch+json'
                    request_body = None
                    if body is not None:
                        request_body = json.dumps(body)
                    r = self.pool_manager.request(
                        method, url,
                        body=request_body,
                        preload_content=_preload_content,
                        timeout=timeout,
                        headers=headers)
                elif headers['Content-Type'] == 'application/x-www-form-urlencoded':  # noqa: E501
                    r = self.pool_manager.request(
                        method, url,
                        fields=post_params,
                        encode_multipart=False,
                        preload_content=_preload_content,
                        timeout=timeout,
                        headers=headers)
                elif headers['Content-Type'] == 'multipart/form-data':
                    # must del headers['Content-Type'], or the correct
                    # Content-Type which generated by urllib3 will be
                    # overwritten.
                    del headers['Content-Type']
                    r = self.pool_manager.request(
                        method, url,
                        fields=post_params,
                        encode_multipart=True,
                        preload_content=_preload_content,
                        timeout=timeout,
                        headers=headers)
                # Pass a `string` parameter directly in the body to support
                # other content types than Json when `body` argument is
                # provided in serialized form
                elif isinstance(body, str) or isinstance(body, bytes):
                    request_body = body
                    r = self.pool_manager.request(
                        method, url,
                        body=request_body,
                        preload_content=_preload_content,
                        timeout=timeout,
                        headers=headers)
                else:
                    # Cannot generate the request from given parameters
                    msg = """Cannot prepare a request message for provided
                             arguments. Please check that your arguments match
                             declared content type."""
                    raise ApiException(status=0, reason=msg)
            # For `GET`, `HEAD`
            else:
>               r = self.pool_manager.request(method, url,
                                              fields=query_params,
                                              preload_content=_preload_content,
                                              timeout=timeout,
                                              headers=headers)

../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/rest.py:217:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
method = 'GET'
url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/api/v1/namespaces/kserve-ci-e2e-test/configmaps/odh-trusted-ca-bundle'
body = None, fields = []
headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
json = None, urlopen_kw = {'preload_content': True, 'timeout': None}

    def request(
        self,
        method: str,
        url: str,
        body: _TYPE_BODY | None = None,
        fields: _TYPE_FIELDS | None = None,
        headers: typing.Mapping[str, str] | None = None,
        json: typing.Any | None = None,
        **urlopen_kw: typing.Any,
    ) -> BaseHTTPResponse:
        """
        Make a request using :meth:`urlopen` with the appropriate encoding of
        ``fields`` based on the ``method`` used.

        This is a convenience method that requires the least amount of manual
        effort. It can be used in most situations, while still having the
        option to drop down to more specific methods when necessary, such as
        :meth:`request_encode_url`, :meth:`request_encode_body`,
        or even the lowest level :meth:`urlopen`.

        :param method:
            HTTP request method (such as GET, POST, PUT, etc.)

        :param url:
            The URL to perform the request on.

        :param body:
            Data to send in the request body, either :class:`str`, :class:`bytes`,
            an iterable of :class:`str`/:class:`bytes`, or a file-like object.

        :param fields:
            Data to encode and send in the URL or request body, depending on ``method``.

        :param headers:
            Dictionary of custom headers to send, such as User-Agent,
            If-None-Match, etc. If None, pool headers are used. If provided,
            these headers completely replace any pool-specific headers.

        :param json:
            Data to encode and send as JSON with UTF-encoded in the request body.
            The ``"Content-Type"`` header will be set to ``"application/json"``
            unless specified otherwise.
[e2e-predictor] """ [e2e-predictor] method = method.upper() [e2e-predictor] [e2e-predictor] if json is not None and body is not None: [e2e-predictor] raise TypeError( [e2e-predictor] "request got values for both 'body' and 'json' parameters which are mutually exclusive" [e2e-predictor] ) [e2e-predictor] [e2e-predictor] if json is not None: [e2e-predictor] if headers is None: [e2e-predictor] headers = self.headers [e2e-predictor] [e2e-predictor] if not ("content-type" in map(str.lower, headers.keys())): [e2e-predictor] headers = HTTPHeaderDict(headers) [e2e-predictor] headers["Content-Type"] = "application/json" [e2e-predictor] [e2e-predictor] body = _json.dumps(json, separators=(",", ":"), ensure_ascii=False).encode( [e2e-predictor] "utf-8" [e2e-predictor] ) [e2e-predictor] [e2e-predictor] if body is not None: [e2e-predictor] urlopen_kw["body"] = body [e2e-predictor] [e2e-predictor] if method in self._encode_url_methods: [e2e-predictor] > return self.request_encode_url( [e2e-predictor] method, [e2e-predictor] url, [e2e-predictor] fields=fields, # type: ignore[arg-type] [e2e-predictor] headers=headers, [e2e-predictor] **urlopen_kw, [e2e-predictor] ) [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/_request_methods.py:135: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] method = 'GET' [e2e-predictor] url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/api/v1/namespaces/kserve-ci-e2e-test/configmaps/odh-trusted-ca-bundle' [e2e-predictor] fields = [] [e2e-predictor] headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'} [e2e-predictor] urlopen_kw = {'preload_content': True, 'timeout': None} [e2e-predictor] extra_kw = {'headers': {'Accept': 'application/json', 'Content-Type': 'application/json', 
[e2e-predictor] 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}, 'preload_content': True, 'timeout': None}
[e2e-predictor]
[e2e-predictor]     def request_encode_url(
[e2e-predictor]         self,
[e2e-predictor]         method: str,
[e2e-predictor]         url: str,
[e2e-predictor]         fields: _TYPE_ENCODE_URL_FIELDS | None = None,
[e2e-predictor]         headers: typing.Mapping[str, str] | None = None,
[e2e-predictor]         **urlopen_kw: str,
[e2e-predictor]     ) -> BaseHTTPResponse:
[e2e-predictor]         """
[e2e-predictor]         Make a request using :meth:`urlopen` with the ``fields`` encoded in
[e2e-predictor]         the url. This is useful for request methods like GET, HEAD, DELETE, etc.
[e2e-predictor]
[e2e-predictor]         :param method:
[e2e-predictor]             HTTP request method (such as GET, POST, PUT, etc.)
[e2e-predictor]
[e2e-predictor]         :param url:
[e2e-predictor]             The URL to perform the request on.
[e2e-predictor]
[e2e-predictor]         :param fields:
[e2e-predictor]             Data to encode and send in the URL.
[e2e-predictor]
[e2e-predictor]         :param headers:
[e2e-predictor]             Dictionary of custom headers to send, such as User-Agent,
[e2e-predictor]             If-None-Match, etc. If None, pool headers are used. If provided,
[e2e-predictor]             these headers completely replace any pool-specific headers.
[e2e-predictor]         """
[e2e-predictor]         if headers is None:
[e2e-predictor]             headers = self.headers
[e2e-predictor]
[e2e-predictor]         extra_kw: dict[str, typing.Any] = {"headers": headers}
[e2e-predictor]         extra_kw.update(urlopen_kw)
[e2e-predictor]
[e2e-predictor]         if fields:
[e2e-predictor]             url += "?" + urlencode(fields)
[e2e-predictor]
[e2e-predictor] >       return self.urlopen(method, url, **extra_kw)
[e2e-predictor]
[e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/_request_methods.py:182:
[e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
[e2e-predictor]
[e2e-predictor] self =
[e2e-predictor] method = 'GET'
[e2e-predictor] url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/api/v1/namespaces/kserve-ci-e2e-test/configmaps/odh-trusted-ca-bundle'
[e2e-predictor] redirect = True
[e2e-predictor] kw = {'assert_same_host': False, 'headers': {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}, 'preload_content': True, 'redirect': False, ...}
[e2e-predictor] u = Url(scheme='https', auth=None, host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443, path='/api/v1/namespaces/kserve-ci-e2e-test/configmaps/odh-trusted-ca-bundle', query=None, fragment=None)
[e2e-predictor] conn =
[e2e-predictor]
[e2e-predictor]     def urlopen(  # type: ignore[override]
[e2e-predictor]         self, method: str, url: str, redirect: bool = True, **kw: typing.Any
[e2e-predictor]     ) -> BaseHTTPResponse:
[e2e-predictor]         """
[e2e-predictor]         Same as :meth:`urllib3.HTTPConnectionPool.urlopen`
[e2e-predictor]         with custom cross-host redirect logic and only sends the request-uri
[e2e-predictor]         portion of the ``url``.
[e2e-predictor]
[e2e-predictor]         The given ``url`` parameter must be absolute, such that an appropriate
[e2e-predictor]         :class:`urllib3.connectionpool.ConnectionPool` can be chosen for it.
[e2e-predictor]         """
[e2e-predictor]         u = parse_url(url)
[e2e-predictor]
[e2e-predictor]         if u.scheme is None:
[e2e-predictor]             warnings.warn(
[e2e-predictor]                 "URLs without a scheme (ie 'https://') are deprecated and will raise an error "
[e2e-predictor]                 "in a future version of urllib3. To avoid this DeprecationWarning ensure all URLs "
[e2e-predictor]                 "start with 'https://' or 'http://'. Read more in this issue: "
[e2e-predictor]                 "https://github.com/urllib3/urllib3/issues/2920",
[e2e-predictor]                 category=DeprecationWarning,
[e2e-predictor]                 stacklevel=2,
[e2e-predictor]             )
[e2e-predictor]
[e2e-predictor]         conn = self.connection_from_host(u.host, port=u.port, scheme=u.scheme)
[e2e-predictor]
[e2e-predictor]         kw["assert_same_host"] = False
[e2e-predictor]         kw["redirect"] = False
[e2e-predictor]
[e2e-predictor]         if "headers" not in kw:
[e2e-predictor]             kw["headers"] = self.headers
[e2e-predictor]
[e2e-predictor]         if self._proxy_requires_url_absolute_form(u):
[e2e-predictor]             response = conn.urlopen(method, url, **kw)
[e2e-predictor]         else:
[e2e-predictor] >           response = conn.urlopen(method, u.request_uri, **kw)
[e2e-predictor]
[e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/poolmanager.py:457:
[e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
[e2e-predictor]
[e2e-predictor] self =
[e2e-predictor] method = 'GET'
[e2e-predictor] url = '/api/v1/namespaces/kserve-ci-e2e-test/configmaps/odh-trusted-ca-bundle'
[e2e-predictor] body = None
[e2e-predictor] headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
[e2e-predictor] retries = Retry(total=2, connect=None, read=None, redirect=None, status=None)
[e2e-predictor] redirect = False, assert_same_host = False, timeout = None, pool_timeout = None
[e2e-predictor] release_conn = True, chunked = False, body_pos = None, preload_content = True
[e2e-predictor] decode_content = True, response_kw = {}
[e2e-predictor] parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/api/v1/namespaces/kserve-ci-e2e-test/configmaps/odh-trusted-ca-bundle', query=None, fragment=None)
[e2e-predictor] destination_scheme = None, conn = None, release_this_conn = True
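The first frame above (`request_encode_url`) shows how the failing GET is built: query fields are percent-encoded onto the path with stdlib `urlencode` before the call is handed to `urlopen`. A minimal standalone sketch of that one step; the `labelSelector` field is illustrative, not taken from this log:

```python
from urllib.parse import urlencode

# Mirror of the `url += "?" + urlencode(fields)` step in urllib3's
# request_encode_url: fields are percent-encoded into the query string,
# the path itself is left untouched.
url = "/api/v1/namespaces/kserve-ci-e2e-test/configmaps"
fields = {"labelSelector": "app=kserve"}  # illustrative field, not from the log
url += "?" + urlencode(fields)
print(url)  # /api/v1/namespaces/kserve-ci-e2e-test/configmaps?labelSelector=app%3Dkserve
```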
[e2e-predictor] http_tunnel_required = False
[e2e-predictor] err = NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.c...a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")
[e2e-predictor] clean_exit = False
[e2e-predictor]
[e2e-predictor]     def urlopen(  # type: ignore[override]
[e2e-predictor]         self,
[e2e-predictor]         method: str,
[e2e-predictor]         url: str,
[e2e-predictor]         body: _TYPE_BODY | None = None,
[e2e-predictor]         headers: typing.Mapping[str, str] | None = None,
[e2e-predictor]         retries: Retry | bool | int | None = None,
[e2e-predictor]         redirect: bool = True,
[e2e-predictor]         assert_same_host: bool = True,
[e2e-predictor]         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
[e2e-predictor]         pool_timeout: int | None = None,
[e2e-predictor]         release_conn: bool | None = None,
[e2e-predictor]         chunked: bool = False,
[e2e-predictor]         body_pos: _TYPE_BODY_POSITION | None = None,
[e2e-predictor]         preload_content: bool = True,
[e2e-predictor]         decode_content: bool = True,
[e2e-predictor]         **response_kw: typing.Any,
[e2e-predictor]     ) -> BaseHTTPResponse:
[e2e-predictor]         """
[e2e-predictor]         Get a connection from the pool and perform an HTTP request. This is the
[e2e-predictor]         lowest level call for making a request, so you'll need to specify all
[e2e-predictor]         the raw details.
[e2e-predictor]
[e2e-predictor]         .. note::
[e2e-predictor]
[e2e-predictor]            More commonly, it's appropriate to use a convenience method
[e2e-predictor]            such as :meth:`request`.
[e2e-predictor]
[e2e-predictor]         .. note::
[e2e-predictor]
[e2e-predictor]            `release_conn` will only behave as expected if
[e2e-predictor]            `preload_content=False` because we want to make
[e2e-predictor]            `preload_content=False` the default behaviour someday soon without
[e2e-predictor]            breaking backwards compatibility.
[e2e-predictor]
[e2e-predictor]         :param method:
[e2e-predictor]             HTTP request method (such as GET, POST, PUT, etc.)
[e2e-predictor]
[e2e-predictor]         :param url:
[e2e-predictor]             The URL to perform the request on.
[e2e-predictor]
[e2e-predictor]         :param body:
[e2e-predictor]             Data to send in the request body, either :class:`str`, :class:`bytes`,
[e2e-predictor]             an iterable of :class:`str`/:class:`bytes`, or a file-like object.
[e2e-predictor]
[e2e-predictor]         :param headers:
[e2e-predictor]             Dictionary of custom headers to send, such as User-Agent,
[e2e-predictor]             If-None-Match, etc. If None, pool headers are used. If provided,
[e2e-predictor]             these headers completely replace any pool-specific headers.
[e2e-predictor]
[e2e-predictor]         :param retries:
[e2e-predictor]             Configure the number of retries to allow before raising a
[e2e-predictor]             :class:`~urllib3.exceptions.MaxRetryError` exception.
[e2e-predictor]
[e2e-predictor]             If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a
[e2e-predictor]             :class:`~urllib3.util.retry.Retry` object for fine-grained control
[e2e-predictor]             over different types of retries.
[e2e-predictor]             Pass an integer number to retry connection errors that many times,
[e2e-predictor]             but no other types of errors. Pass zero to never retry.
[e2e-predictor]
[e2e-predictor]             If ``False``, then retries are disabled and any exception is raised
[e2e-predictor]             immediately. Also, instead of raising a MaxRetryError on redirects,
[e2e-predictor]             the redirect response will be returned.
[e2e-predictor]
[e2e-predictor]         :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
[e2e-predictor]
[e2e-predictor]         :param redirect:
[e2e-predictor]             If True, automatically handle redirects (status codes 301, 302,
[e2e-predictor]             303, 307, 308). Each redirect counts as a retry. Disabling retries
[e2e-predictor]             will disable redirect, too.
[e2e-predictor]
[e2e-predictor]         :param assert_same_host:
[e2e-predictor]             If ``True``, will make sure that the host of the pool requests is
[e2e-predictor]             consistent else will raise HostChangedError. When ``False``, you can
[e2e-predictor]             use the pool on an HTTP proxy and request foreign hosts.
[e2e-predictor]
[e2e-predictor]         :param timeout:
[e2e-predictor]             If specified, overrides the default timeout for this one
[e2e-predictor]             request. It may be a float (in seconds) or an instance of
[e2e-predictor]             :class:`urllib3.util.Timeout`.
[e2e-predictor]
[e2e-predictor]         :param pool_timeout:
[e2e-predictor]             If set and the pool is set to block=True, then this method will
[e2e-predictor]             block for ``pool_timeout`` seconds and raise EmptyPoolError if no
[e2e-predictor]             connection is available within the time period.
[e2e-predictor]
[e2e-predictor]         :param bool preload_content:
[e2e-predictor]             If True, the response's body will be preloaded into memory.
[e2e-predictor]
[e2e-predictor]         :param bool decode_content:
[e2e-predictor]             If True, will attempt to decode the body based on the
[e2e-predictor]             'content-encoding' header.
[e2e-predictor]
[e2e-predictor]         :param release_conn:
[e2e-predictor]             If False, then the urlopen call will not release the connection
[e2e-predictor]             back into the pool once a response is received (but will release if
[e2e-predictor]             you read the entire contents of the response such as when
[e2e-predictor]             `preload_content=True`). This is useful if you're not preloading
[e2e-predictor]             the response's content immediately. You will need to call
[e2e-predictor]             ``r.release_conn()`` on the response ``r`` to return the connection
[e2e-predictor]             back into the pool. If None, it takes the value of ``preload_content``
[e2e-predictor]             which defaults to ``True``.
[e2e-predictor]
[e2e-predictor]         :param bool chunked:
[e2e-predictor]             If True, urllib3 will send the body using chunked transfer
[e2e-predictor]             encoding. Otherwise, urllib3 will send the body using the standard
[e2e-predictor]             content-length form. Defaults to False.
[e2e-predictor]
[e2e-predictor]         :param int body_pos:
[e2e-predictor]             Position to seek to in file-like body in the event of a retry or
[e2e-predictor]             redirect. Typically this won't need to be set because urllib3 will
[e2e-predictor]             auto-populate the value when needed.
[e2e-predictor]         """
[e2e-predictor]         parsed_url = parse_url(url)
[e2e-predictor]         destination_scheme = parsed_url.scheme
[e2e-predictor]
[e2e-predictor]         if headers is None:
[e2e-predictor]             headers = self.headers
[e2e-predictor]
[e2e-predictor]         if not isinstance(retries, Retry):
[e2e-predictor]             retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
[e2e-predictor]
[e2e-predictor]         if release_conn is None:
[e2e-predictor]             release_conn = preload_content
[e2e-predictor]
[e2e-predictor]         # Check host
[e2e-predictor]         if assert_same_host and not self.is_same_host(url):
[e2e-predictor]             raise HostChangedError(self, url, retries)
[e2e-predictor]
[e2e-predictor]         # Ensure that the URL we're connecting to is properly encoded
[e2e-predictor]         if url.startswith("/"):
[e2e-predictor]             url = to_str(_encode_target(url))
[e2e-predictor]         else:
[e2e-predictor]             url = to_str(parsed_url.url)
[e2e-predictor]
[e2e-predictor]         conn = None
[e2e-predictor]
[e2e-predictor]         # Track whether `conn` needs to be released before
[e2e-predictor]         # returning/raising/recursing. Update this variable if necessary, and
[e2e-predictor]         # leave `release_conn` constant throughout the function. That way, if
[e2e-predictor]         # the function recurses, the original value of `release_conn` will be
[e2e-predictor]         # passed down into the recursive call, and its value will be respected.
[e2e-predictor]         #
[e2e-predictor]         # See issue #651 [1] for details.
[e2e-predictor]         #
[e2e-predictor]         # [1]
[e2e-predictor]         release_this_conn = release_conn
[e2e-predictor]
[e2e-predictor]         http_tunnel_required = connection_requires_http_tunnel(
[e2e-predictor]             self.proxy, self.proxy_config, destination_scheme
[e2e-predictor]         )
[e2e-predictor]
[e2e-predictor]         # Merge the proxy headers. Only done when not using HTTP CONNECT. We
[e2e-predictor]         # have to copy the headers dict so we can safely change it without those
[e2e-predictor]         # changes being reflected in anyone else's copy.
[e2e-predictor]         if not http_tunnel_required:
[e2e-predictor]             headers = headers.copy()  # type: ignore[attr-defined]
[e2e-predictor]             headers.update(self.proxy_headers)  # type: ignore[union-attr]
[e2e-predictor]
[e2e-predictor]         # Must keep the exception bound to a separate variable or else Python 3
[e2e-predictor]         # complains about UnboundLocalError.
[e2e-predictor]         err = None
[e2e-predictor]
[e2e-predictor]         # Keep track of whether we cleanly exited the except block. This
[e2e-predictor]         # ensures we do proper cleanup in finally.
[e2e-predictor]         clean_exit = False
[e2e-predictor]
[e2e-predictor]         # Rewind body position, if needed. Record current position
[e2e-predictor]         # for future rewinds in the event of a redirect/retry.
[e2e-predictor]         body_pos = set_file_position(body, body_pos)
[e2e-predictor]
[e2e-predictor]         try:
[e2e-predictor]             # Request a connection from the queue.
[e2e-predictor]             timeout_obj = self._get_timeout(timeout)
[e2e-predictor]             conn = self._get_conn(timeout=pool_timeout)
[e2e-predictor]
[e2e-predictor]             conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]
[e2e-predictor]
[e2e-predictor]             # Is this a closed/new connection that requires CONNECT tunnelling?
[e2e-predictor]             if self.proxy is not None and http_tunnel_required and conn.is_closed:
[e2e-predictor]                 try:
[e2e-predictor]                     self._prepare_proxy(conn)
[e2e-predictor]                 except (BaseSSLError, OSError, SocketTimeout) as e:
[e2e-predictor]                     self._raise_timeout(
[e2e-predictor]                         err=e, url=self.proxy.url, timeout_value=conn.timeout
[e2e-predictor]                     )
[e2e-predictor]                     raise
[e2e-predictor]
[e2e-predictor]             # If we're going to release the connection in ``finally:``, then
[e2e-predictor]             # the response doesn't need to know about the connection. Otherwise
[e2e-predictor]             # it will also try to release it and we'll have a double-release
[e2e-predictor]             # mess.
[e2e-predictor]             response_conn = conn if not release_conn else None
[e2e-predictor]
[e2e-predictor]             # Make the request on the HTTPConnection object
[e2e-predictor]             response = self._make_request(
[e2e-predictor]                 conn,
[e2e-predictor]                 method,
[e2e-predictor]                 url,
[e2e-predictor]                 timeout=timeout_obj,
[e2e-predictor]                 body=body,
[e2e-predictor]                 headers=headers,
[e2e-predictor]                 chunked=chunked,
[e2e-predictor]                 retries=retries,
[e2e-predictor]                 response_conn=response_conn,
[e2e-predictor]                 preload_content=preload_content,
[e2e-predictor]                 decode_content=decode_content,
[e2e-predictor]                 **response_kw,
[e2e-predictor]             )
[e2e-predictor]
[e2e-predictor]             # Everything went great!
[e2e-predictor]             clean_exit = True
[e2e-predictor]
[e2e-predictor]         except EmptyPoolError:
[e2e-predictor]             # Didn't get a connection from the pool, no need to clean up
[e2e-predictor]             clean_exit = True
[e2e-predictor]             release_this_conn = False
[e2e-predictor]             raise
[e2e-predictor]
[e2e-predictor]         except (
[e2e-predictor]             TimeoutError,
[e2e-predictor]             HTTPException,
[e2e-predictor]             OSError,
[e2e-predictor]             ProtocolError,
[e2e-predictor]             BaseSSLError,
[e2e-predictor]             SSLError,
[e2e-predictor]             CertificateError,
[e2e-predictor]             ProxyError,
[e2e-predictor]         ) as e:
[e2e-predictor]             # Discard the connection for these exceptions. It will be
[e2e-predictor]             # replaced during the next _get_conn() call.
[e2e-predictor]             clean_exit = False
[e2e-predictor]             new_e: Exception = e
[e2e-predictor]             if isinstance(e, (BaseSSLError, CertificateError)):
[e2e-predictor]                 new_e = SSLError(e)
[e2e-predictor]             if isinstance(
[e2e-predictor]                 new_e,
[e2e-predictor]                 (
[e2e-predictor]                     OSError,
[e2e-predictor]                     NewConnectionError,
[e2e-predictor]                     TimeoutError,
[e2e-predictor]                     SSLError,
[e2e-predictor]                     HTTPException,
[e2e-predictor]                 ),
[e2e-predictor]             ) and (conn and conn.proxy and not conn.has_connected_to_proxy):
[e2e-predictor]                 new_e = _wrap_proxy_error(new_e, conn.proxy.scheme)
[e2e-predictor]             elif isinstance(new_e, (OSError, HTTPException)):
[e2e-predictor]                 new_e = ProtocolError("Connection aborted.", new_e)
[e2e-predictor]
[e2e-predictor]             retries = retries.increment(
[e2e-predictor]                 method, url, error=new_e, _pool=self, _stacktrace=sys.exc_info()[2]
[e2e-predictor]             )
[e2e-predictor]             retries.sleep()
[e2e-predictor]
[e2e-predictor]             # Keep track of the error for the retry warning.
[e2e-predictor]             err = e
[e2e-predictor]
[e2e-predictor]         finally:
[e2e-predictor]             if not clean_exit:
[e2e-predictor]                 # We hit some kind of exception, handled or otherwise. We need
[e2e-predictor]                 # to throw the connection away unless explicitly told not to.
[e2e-predictor]                 # Close the connection, set the variable to None, and make sure
[e2e-predictor]                 # we put the None back in the pool to avoid leaking it.
[e2e-predictor]                 if conn:
[e2e-predictor]                     conn.close()
[e2e-predictor]                     conn = None
[e2e-predictor]                 release_this_conn = True
[e2e-predictor]
[e2e-predictor]             if release_this_conn:
[e2e-predictor]                 # Put the connection back to be reused. If the connection is
[e2e-predictor]                 # expired then it will be None, which will get replaced with a
[e2e-predictor]                 # fresh connection during _get_conn.
[e2e-predictor]                 self._put_conn(conn)
[e2e-predictor]
[e2e-predictor]         if not conn:
[e2e-predictor]             # Try again
[e2e-predictor]             log.warning(
[e2e-predictor]                 "Retrying (%r) after connection broken by '%r': %s", retries, err, url
[e2e-predictor]             )
[e2e-predictor] >           return self.urlopen(
[e2e-predictor]                 method,
[e2e-predictor]                 url,
[e2e-predictor]                 body,
[e2e-predictor]                 headers,
[e2e-predictor]                 retries,
[e2e-predictor]                 redirect,
[e2e-predictor]                 assert_same_host,
[e2e-predictor]                 timeout=timeout,
[e2e-predictor]                 pool_timeout=pool_timeout,
[e2e-predictor]                 release_conn=release_conn,
[e2e-predictor]                 chunked=chunked,
[e2e-predictor]                 body_pos=body_pos,
[e2e-predictor]                 preload_content=preload_content,
[e2e-predictor]                 decode_content=decode_content,
[e2e-predictor]                 **response_kw,
[e2e-predictor]             )
[e2e-predictor]
[e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:871:
[e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
[e2e-predictor]
[e2e-predictor] self =
[e2e-predictor] method = 'GET'
[e2e-predictor] url = '/api/v1/namespaces/kserve-ci-e2e-test/configmaps/odh-trusted-ca-bundle'
[e2e-predictor] body = None
[e2e-predictor] headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
[e2e-predictor] retries = Retry(total=1, connect=None, read=None, redirect=None, status=None)
[e2e-predictor] redirect = False, assert_same_host = False, timeout = None, pool_timeout = None
[e2e-predictor] release_conn = True, chunked = False, body_pos = None, preload_content = True
[e2e-predictor] decode_content = True, response_kw = {}
[e2e-predictor] parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/api/v1/namespaces/kserve-ci-e2e-test/configmaps/odh-trusted-ca-bundle', query=None, fragment=None)
[e2e-predictor] destination_scheme = None, conn = None, release_this_conn = True
[e2e-predictor] http_tunnel_required = False
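Note the `retries` local counting down across the recursive frames: `Retry(total=2)` in the first connectionpool frame, then `total=1`, then `total=0`, because `urlopen` calls itself after each broken connection and `retries.increment` shrinks the budget until `MaxRetryError` is raised. A pure-Python sketch of that countdown (a simplification, not urllib3's actual implementation):

```python
class MaxRetryError(Exception):
    """Stand-in for urllib3.exceptions.MaxRetryError."""

def urlopen_sketch(total, attempt):
    """Mimic connectionpool.urlopen's recursive retry: each failed
    attempt decrements the budget and recurses; exhaustion raises."""
    try:
        return attempt()
    except OSError as err:
        if total <= 0:
            raise MaxRetryError(f"Max retries exceeded (Caused by {err!r})")
        # Corresponds to the "Retrying (%r) after connection broken by '%r'"
        # warning plus the `return self.urlopen(...)` seen in the traceback.
        return urlopen_sketch(total - 1, attempt)

attempts = []
def failing_dns_lookup():
    attempts.append(1)
    raise OSError("[Errno -2] Name or service not known")

try:
    urlopen_sketch(2, failing_dns_lookup)
except MaxRetryError:
    pass
print(len(attempts))  # 3: the initial attempt plus two retries (total 2 -> 1 -> 0)
```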
[e2e-predictor] err = NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.c...a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")
[e2e-predictor] clean_exit = False
[e2e-predictor]
[e2e-predictor] [... connectionpool.urlopen source identical to the frame above, retrying once more ...]
[e2e-predictor]
[e2e-predictor] >           return self.urlopen(
[e2e-predictor]                 method,
[e2e-predictor]                 url,
[e2e-predictor]                 body,
[e2e-predictor]                 headers,
[e2e-predictor]                 retries,
[e2e-predictor]                 redirect,
[e2e-predictor]                 assert_same_host,
[e2e-predictor]                 timeout=timeout,
[e2e-predictor]                 pool_timeout=pool_timeout,
[e2e-predictor]                 release_conn=release_conn,
[e2e-predictor]                 chunked=chunked,
[e2e-predictor]                 body_pos=body_pos,
[e2e-predictor]                 preload_content=preload_content,
[e2e-predictor]                 decode_content=decode_content,
[e2e-predictor]                 **response_kw,
[e2e-predictor]             )
[e2e-predictor]
[e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:871:
[e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
[e2e-predictor]
[e2e-predictor] self =
[e2e-predictor] method = 'GET'
[e2e-predictor] url = '/api/v1/namespaces/kserve-ci-e2e-test/configmaps/odh-trusted-ca-bundle'
[e2e-predictor] body = None
[e2e-predictor] headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
[e2e-predictor] retries = Retry(total=0, connect=None, read=None, redirect=None, status=None)
[e2e-predictor] redirect = False, assert_same_host = False, timeout = None, pool_timeout = None
[e2e-predictor] release_conn = True, chunked = False, body_pos = None, preload_content = True
[e2e-predictor] decode_content = True, response_kw = {}
[e2e-predictor] parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/api/v1/namespaces/kserve-ci-e2e-test/configmaps/odh-trusted-ca-bundle', query=None, fragment=None)
[e2e-predictor] destination_scheme = None, conn = None, release_this_conn = True
[e2e-predictor] http_tunnel_required = False
err = NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.c...a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")
clean_exit = False

    def urlopen(  # type: ignore[override]
        self,
        method: str,
        url: str,
        body: _TYPE_BODY | None = None,
        headers: typing.Mapping[str, str] | None = None,
        retries: Retry | bool | int | None = None,
        redirect: bool = True,
        assert_same_host: bool = True,
        timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
        pool_timeout: int | None = None,
        release_conn: bool | None = None,
        chunked: bool = False,
        body_pos: _TYPE_BODY_POSITION | None = None,
        preload_content: bool = True,
        decode_content: bool = True,
        **response_kw: typing.Any,
    ) -> BaseHTTPResponse:
        """
        Get a connection from the pool and perform an HTTP request. This is the
        lowest level call for making a request, so you'll need to specify all
        the raw details.

        .. note::

           More commonly, it's appropriate to use a convenience method
           such as :meth:`request`.

        .. note::

           `release_conn` will only behave as expected if
           `preload_content=False` because we want to make
           `preload_content=False` the default behaviour someday soon without
           breaking backwards compatibility.

        :param method:
            HTTP request method (such as GET, POST, PUT, etc.)

        :param url:
            The URL to perform the request on.

        :param body:
            Data to send in the request body, either :class:`str`, :class:`bytes`,
            an iterable of :class:`str`/:class:`bytes`, or a file-like object.

        :param headers:
            Dictionary of custom headers to send, such as User-Agent,
            If-None-Match, etc. If None, pool headers are used. If provided,
            these headers completely replace any pool-specific headers.

        :param retries:
            Configure the number of retries to allow before raising a
            :class:`~urllib3.exceptions.MaxRetryError` exception.

            If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a
            :class:`~urllib3.util.retry.Retry` object for fine-grained control
            over different types of retries.
            Pass an integer number to retry connection errors that many times,
            but no other types of errors. Pass zero to never retry.

            If ``False``, then retries are disabled and any exception is raised
            immediately. Also, instead of raising a MaxRetryError on redirects,
            the redirect response will be returned.

        :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.

        :param redirect:
            If True, automatically handle redirects (status codes 301, 302,
            303, 307, 308). Each redirect counts as a retry. Disabling retries
            will disable redirect, too.

        :param assert_same_host:
            If ``True``, will make sure that the host of the pool requests is
            consistent else will raise HostChangedError. When ``False``, you can
            use the pool on an HTTP proxy and request foreign hosts.

        :param timeout:
            If specified, overrides the default timeout for this one
            request. It may be a float (in seconds) or an instance of
            :class:`urllib3.util.Timeout`.

        :param pool_timeout:
            If set and the pool is set to block=True, then this method will
            block for ``pool_timeout`` seconds and raise EmptyPoolError if no
            connection is available within the time period.

        :param bool preload_content:
            If True, the response's body will be preloaded into memory.

        :param bool decode_content:
            If True, will attempt to decode the body based on the
            'content-encoding' header.

        :param release_conn:
            If False, then the urlopen call will not release the connection
            back into the pool once a response is received (but will release if
            you read the entire contents of the response such as when
            `preload_content=True`). This is useful if you're not preloading
            the response's content immediately. You will need to call
            ``r.release_conn()`` on the response ``r`` to return the connection
            back into the pool. If None, it takes the value of ``preload_content``
            which defaults to ``True``.

        :param bool chunked:
            If True, urllib3 will send the body using chunked transfer
            encoding. Otherwise, urllib3 will send the body using the standard
            content-length form. Defaults to False.

        :param int body_pos:
            Position to seek to in file-like body in the event of a retry or
            redirect. Typically this won't need to be set because urllib3 will
            auto-populate the value when needed.
        """
        parsed_url = parse_url(url)
        destination_scheme = parsed_url.scheme

        if headers is None:
            headers = self.headers

        if not isinstance(retries, Retry):
            retries = Retry.from_int(retries, redirect=redirect, default=self.retries)

        if release_conn is None:
            release_conn = preload_content

        # Check host
        if assert_same_host and not self.is_same_host(url):
            raise HostChangedError(self, url, retries)

        # Ensure that the URL we're connecting to is properly encoded
        if url.startswith("/"):
            url = to_str(_encode_target(url))
        else:
            url = to_str(parsed_url.url)

        conn = None

        # Track whether `conn` needs to be released before
        # returning/raising/recursing. Update this variable if necessary, and
        # leave `release_conn` constant throughout the function. That way, if
        # the function recurses, the original value of `release_conn` will be
        # passed down into the recursive call, and its value will be respected.
        #
        # See issue #651 [1] for details.
        #
        # [1]
        release_this_conn = release_conn

        http_tunnel_required = connection_requires_http_tunnel(
            self.proxy, self.proxy_config, destination_scheme
        )

        # Merge the proxy headers. Only done when not using HTTP CONNECT. We
        # have to copy the headers dict so we can safely change it without those
        # changes being reflected in anyone else's copy.
        if not http_tunnel_required:
            headers = headers.copy()  # type: ignore[attr-defined]
            headers.update(self.proxy_headers)  # type: ignore[union-attr]

        # Must keep the exception bound to a separate variable or else Python 3
        # complains about UnboundLocalError.
        err = None

        # Keep track of whether we cleanly exited the except block. This
        # ensures we do proper cleanup in finally.
        clean_exit = False

        # Rewind body position, if needed. Record current position
        # for future rewinds in the event of a redirect/retry.
        body_pos = set_file_position(body, body_pos)

        try:
            # Request a connection from the queue.
            timeout_obj = self._get_timeout(timeout)
            conn = self._get_conn(timeout=pool_timeout)

            conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]

            # Is this a closed/new connection that requires CONNECT tunnelling?
            if self.proxy is not None and http_tunnel_required and conn.is_closed:
                try:
                    self._prepare_proxy(conn)
                except (BaseSSLError, OSError, SocketTimeout) as e:
                    self._raise_timeout(
                        err=e, url=self.proxy.url, timeout_value=conn.timeout
                    )
                    raise

            # If we're going to release the connection in ``finally:``, then
            # the response doesn't need to know about the connection. Otherwise
            # it will also try to release it and we'll have a double-release
            # mess.
            response_conn = conn if not release_conn else None

            # Make the request on the HTTPConnection object
            response = self._make_request(
                conn,
                method,
                url,
                timeout=timeout_obj,
                body=body,
                headers=headers,
                chunked=chunked,
                retries=retries,
                response_conn=response_conn,
                preload_content=preload_content,
                decode_content=decode_content,
                **response_kw,
            )

            # Everything went great!
            clean_exit = True

        except EmptyPoolError:
            # Didn't get a connection from the pool, no need to clean up
            clean_exit = True
            release_this_conn = False
            raise

        except (
            TimeoutError,
            HTTPException,
            OSError,
            ProtocolError,
            BaseSSLError,
            SSLError,
            CertificateError,
            ProxyError,
        ) as e:
            # Discard the connection for these exceptions. It will be
            # replaced during the next _get_conn() call.
            clean_exit = False
            new_e: Exception = e
            if isinstance(e, (BaseSSLError, CertificateError)):
                new_e = SSLError(e)
            if isinstance(
                new_e,
                (
                    OSError,
                    NewConnectionError,
                    TimeoutError,
                    SSLError,
                    HTTPException,
                ),
            ) and (conn and conn.proxy and not conn.has_connected_to_proxy):
                new_e = _wrap_proxy_error(new_e, conn.proxy.scheme)
            elif isinstance(new_e, (OSError, HTTPException)):
                new_e = ProtocolError("Connection aborted.", new_e)

            retries = retries.increment(
                method, url, error=new_e, _pool=self, _stacktrace=sys.exc_info()[2]
            )
            retries.sleep()

            # Keep track of the error for the retry warning.
            err = e

        finally:
            if not clean_exit:
                # We hit some kind of exception, handled or otherwise. We need
                # to throw the connection away unless explicitly told not to.
                # Close the connection, set the variable to None, and make sure
                # we put the None back in the pool to avoid leaking it.
                if conn:
                    conn.close()
                    conn = None
                release_this_conn = True

            if release_this_conn:
                # Put the connection back to be reused. If the connection is
                # expired then it will be None, which will get replaced with a
                # fresh connection during _get_conn.
                self._put_conn(conn)

        if not conn:
            # Try again
            log.warning(
                "Retrying (%r) after connection broken by '%r': %s", retries, err, url
            )
>           return self.urlopen(
                method,
                url,
                body,
                headers,
                retries,
                redirect,
                assert_same_host,
                timeout=timeout,
                pool_timeout=pool_timeout,
                release_conn=release_conn,
                chunked=chunked,
                body_pos=body_pos,
                preload_content=preload_content,
                decode_content=decode_content,
                **response_kw,
            )

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:871:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
method = 'GET'
url = '/api/v1/namespaces/kserve-ci-e2e-test/configmaps/odh-trusted-ca-bundle'
body = None
headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
retries = Retry(total=0, connect=None, read=None, redirect=None, status=None)
redirect = False, assert_same_host = False, timeout = None, pool_timeout = None
release_conn = True, chunked = False, body_pos = None, preload_content = True
decode_content = True, response_kw = {}
parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/api/v1/namespaces/kserve-ci-e2e-test/configmaps/odh-trusted-ca-bundle', query=None, fragment=None)
destination_scheme = None, conn = None, release_this_conn = True
http_tunnel_required = False, err = None, clean_exit = False

    def urlopen(  # type: ignore[override]
        self,
        method: str,
        url: str,
        body: _TYPE_BODY | None = None,
        headers: typing.Mapping[str, str] | None = None,
        retries: Retry | bool | int | None = None,
        redirect: bool = True,
        assert_same_host: bool = True,
        timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
        pool_timeout: int | None = None,
        release_conn: bool | None = None,
        chunked: bool = False,
        body_pos: _TYPE_BODY_POSITION | None = None,
        preload_content: bool = True,
        decode_content: bool = True,
        **response_kw: typing.Any,
    ) -> BaseHTTPResponse:
        """
        Get a connection from the pool and perform an HTTP request. This is the
        lowest level call for making a request, so you'll need to specify all
        the raw details.

        .. note::

           More commonly, it's appropriate to use a convenience method
           such as :meth:`request`.

        .. note::

           `release_conn` will only behave as expected if
           `preload_content=False` because we want to make
           `preload_content=False` the default behaviour someday soon without
           breaking backwards compatibility.

        :param method:
            HTTP request method (such as GET, POST, PUT, etc.)

        :param url:
            The URL to perform the request on.

        :param body:
            Data to send in the request body, either :class:`str`, :class:`bytes`,
            an iterable of :class:`str`/:class:`bytes`, or a file-like object.

        :param headers:
            Dictionary of custom headers to send, such as User-Agent,
            If-None-Match, etc. If None, pool headers are used. If provided,
            these headers completely replace any pool-specific headers.

        :param retries:
            Configure the number of retries to allow before raising a
            :class:`~urllib3.exceptions.MaxRetryError` exception.

            If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a
            :class:`~urllib3.util.retry.Retry` object for fine-grained control
            over different types of retries.
            Pass an integer number to retry connection errors that many times,
            but no other types of errors. Pass zero to never retry.

            If ``False``, then retries are disabled and any exception is raised
            immediately. Also, instead of raising a MaxRetryError on redirects,
            the redirect response will be returned.

        :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.

        :param redirect:
            If True, automatically handle redirects (status codes 301, 302,
            303, 307, 308). Each redirect counts as a retry. Disabling retries
            will disable redirect, too.

        :param assert_same_host:
            If ``True``, will make sure that the host of the pool requests is
            consistent else will raise HostChangedError. When ``False``, you can
            use the pool on an HTTP proxy and request foreign hosts.

        :param timeout:
            If specified, overrides the default timeout for this one
            request. It may be a float (in seconds) or an instance of
            :class:`urllib3.util.Timeout`.

        :param pool_timeout:
            If set and the pool is set to block=True, then this method will
            block for ``pool_timeout`` seconds and raise EmptyPoolError if no
            connection is available within the time period.

        :param bool preload_content:
            If True, the response's body will be preloaded into memory.

        :param bool decode_content:
            If True, will attempt to decode the body based on the
            'content-encoding' header.

        :param release_conn:
            If False, then the urlopen call will not release the connection
            back into the pool once a response is received (but will release if
            you read the entire contents of the response such as when
            `preload_content=True`). This is useful if you're not preloading
            the response's content immediately. You will need to call
            ``r.release_conn()`` on the response ``r`` to return the connection
            back into the pool. If None, it takes the value of ``preload_content``
            which defaults to ``True``.

        :param bool chunked:
            If True, urllib3 will send the body using chunked transfer
            encoding. Otherwise, urllib3 will send the body using the standard
            content-length form. Defaults to False.

        :param int body_pos:
            Position to seek to in file-like body in the event of a retry or
            redirect. Typically this won't need to be set because urllib3 will
            auto-populate the value when needed.
        """
        parsed_url = parse_url(url)
        destination_scheme = parsed_url.scheme

        if headers is None:
            headers = self.headers

        if not isinstance(retries, Retry):
            retries = Retry.from_int(retries, redirect=redirect, default=self.retries)

        if release_conn is None:
            release_conn = preload_content

        # Check host
        if assert_same_host and not self.is_same_host(url):
            raise HostChangedError(self, url, retries)

        # Ensure that the URL we're connecting to is properly encoded
        if url.startswith("/"):
            url = to_str(_encode_target(url))
        else:
            url = to_str(parsed_url.url)

        conn = None

        # Track whether `conn` needs to be released before
        # returning/raising/recursing. Update this variable if necessary, and
        # leave `release_conn` constant throughout the function. That way, if
        # the function recurses, the original value of `release_conn` will be
        # passed down into the recursive call, and its value will be respected.
        #
        # See issue #651 [1] for details.
        #
        # [1]
        release_this_conn = release_conn

        http_tunnel_required = connection_requires_http_tunnel(
            self.proxy, self.proxy_config, destination_scheme
        )

        # Merge the proxy headers. Only done when not using HTTP CONNECT. We
        # have to copy the headers dict so we can safely change it without those
        # changes being reflected in anyone else's copy.
        if not http_tunnel_required:
            headers = headers.copy()  # type: ignore[attr-defined]
            headers.update(self.proxy_headers)  # type: ignore[union-attr]

        # Must keep the exception bound to a separate variable or else Python 3
        # complains about UnboundLocalError.
        err = None

        # Keep track of whether we cleanly exited the except block. This
        # ensures we do proper cleanup in finally.
        clean_exit = False

        # Rewind body position, if needed. Record current position
        # for future rewinds in the event of a redirect/retry.
        body_pos = set_file_position(body, body_pos)

        try:
            # Request a connection from the queue.
            timeout_obj = self._get_timeout(timeout)
            conn = self._get_conn(timeout=pool_timeout)

            conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]

            # Is this a closed/new connection that requires CONNECT tunnelling?
            if self.proxy is not None and http_tunnel_required and conn.is_closed:
                try:
                    self._prepare_proxy(conn)
                except (BaseSSLError, OSError, SocketTimeout) as e:
                    self._raise_timeout(
                        err=e, url=self.proxy.url, timeout_value=conn.timeout
                    )
                    raise

            # If we're going to release the connection in ``finally:``, then
            # the response doesn't need to know about the connection. Otherwise
            # it will also try to release it and we'll have a double-release
            # mess.
            response_conn = conn if not release_conn else None

            # Make the request on the HTTPConnection object
            response = self._make_request(
                conn,
                method,
                url,
                timeout=timeout_obj,
                body=body,
                headers=headers,
                chunked=chunked,
                retries=retries,
                response_conn=response_conn,
                preload_content=preload_content,
                decode_content=decode_content,
                **response_kw,
            )

            # Everything went great!
            clean_exit = True

        except EmptyPoolError:
            # Didn't get a connection from the pool, no need to clean up
            clean_exit = True
            release_this_conn = False
            raise

        except (
            TimeoutError,
            HTTPException,
            OSError,
            ProtocolError,
            BaseSSLError,
            SSLError,
            CertificateError,
            ProxyError,
        ) as e:
            # Discard the connection for these exceptions. It will be
            # replaced during the next _get_conn() call.
            clean_exit = False
            new_e: Exception = e
            if isinstance(e, (BaseSSLError, CertificateError)):
                new_e = SSLError(e)
            if isinstance(
                new_e,
                (
                    OSError,
                    NewConnectionError,
                    TimeoutError,
                    SSLError,
                    HTTPException,
                ),
            ) and (conn and conn.proxy and not conn.has_connected_to_proxy):
                new_e = _wrap_proxy_error(new_e, conn.proxy.scheme)
            elif isinstance(new_e, (OSError, HTTPException)):
                new_e = ProtocolError("Connection aborted.", new_e)

>           retries = retries.increment(
                method, url, error=new_e, _pool=self, _stacktrace=sys.exc_info()[2]
            )

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:841:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = Retry(total=0, connect=None, read=None, redirect=None, status=None)
method = 'GET'
url = '/api/v1/namespaces/kserve-ci-e2e-test/configmaps/odh-trusted-ca-bundle'
response = None
error = NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.c...a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")
_pool =
_stacktrace =

    def increment(
        self,
        method: str | None = None,
        url: str | None = None,
        response: BaseHTTPResponse | None = None,
        error: Exception | None = None,
        _pool: ConnectionPool | None = None,
        _stacktrace: TracebackType | None = None,
    ) -> Self:
        """Return a new Retry object with incremented retry counters.

        :param response: A response object, or None, if the server did not
            return a response.
        :type response: :class:`~urllib3.response.BaseHTTPResponse`
        :param Exception error: An error encountered during the request, or
            None if the response was received successfully.

        :return: A new ``Retry`` object.
        """
        if self.total is False and error:
            # Disabled, indicate to re-raise the error.
            raise reraise(type(error), error, _stacktrace)

        total = self.total
        if total is not None:
            total -= 1

        connect = self.connect
        read = self.read
        redirect = self.redirect
        status_count = self.status
        other = self.other
        cause = "unknown"
        status = None
        redirect_location = None

        if error and self._is_connection_error(error):
            # Connect retry?
            if connect is False:
                raise reraise(type(error), error, _stacktrace)
            elif connect is not None:
                connect -= 1

        elif error and self._is_read_error(error):
            # Read retry?
            if read is False or method is None or not self._is_method_retryable(method):
                raise reraise(type(error), error, _stacktrace)
            elif read is not None:
                read -= 1

        elif error:
            # Other retry?
            if other is not None:
                other -= 1

        elif response and response.get_redirect_location():
            # Redirect retry?
            if redirect is not None:
                redirect -= 1
            cause = "too many redirects"
            response_redirect_location = response.get_redirect_location()
            if response_redirect_location:
                redirect_location = response_redirect_location
            status = response.status

        else:
            # Incrementing because of a server error like a 500 in
            # status_forcelist and the given method is in the allowed_methods
            cause = ResponseError.GENERIC_ERROR
            if response and response.status:
                if status_count is not None:
                    status_count -= 1
                cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
                status = response.status

        history = self.history + (
            RequestHistory(method, url, error, status, redirect_location),
        )

        new_retry = self.new(
            total=total,
            connect=connect,
            read=read,
            redirect=redirect,
            status=status_count,
            other=other,
            history=history,
        )

        if new_retry.is_exhausted():
            reason = error or ResponseError(cause)
>           raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
E           urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Max retries exceeded with url: /api/v1/namespaces/kserve-ci-e2e-test/configmaps/odh-trusted-ca-bundle (Caused by NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)"))

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError
------------------------------ Captured log setup ------------------------------
WARNING  urllib3.connectionpool:connectionpool.py:868 Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")': /api/v1/namespaces/kserve-ci-e2e-test/configmaps/odh-trusted-ca-bundle
WARNING  urllib3.connectionpool:connectionpool.py:868 Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")': /api/v1/namespaces/kserve-ci-e2e-test/configmaps/odh-trusted-ca-bundle
WARNING  urllib3.connectionpool:connectionpool.py:868 Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")': /api/v1/namespaces/kserve-ci-e2e-test/configmaps/odh-trusted-ca-bundle
_________ ERROR at setup of test_s3_tls_custom_cert_storagespec_kserve _________
[gw0] linux -- Python 3.11.13 /workspace/source/python/kserve/.venv/bin/python

self =

    def _new_conn(self) -> socket.socket:
        """Establish a socket connection and set nodelay settings on it.

        :return: New socket connection.
        """
        try:
>           sock = connection.create_connection(
                (self._dns_host, self.port),
                self.timeout,
                source_address=self.source_address,
                socket_options=self.socket_options,
            )

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connection.py:204:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

address = ('a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', 6443)
timeout = None, source_address = None, socket_options = [(6, 1, 1)]

    def create_connection(
        address: tuple[str, int],
        timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
        source_address: tuple[str, int] | None = None,
        socket_options: _TYPE_SOCKET_OPTIONS | None = None,
    ) -> socket.socket:
        """Connect to *address* and return the socket object.

        Convenience function. Connect to *address* (a 2-tuple ``(host,
        port)``) and return the socket object. Passing the optional
        *timeout* parameter will set the timeout on the socket instance
        before attempting to connect. If no *timeout* is supplied, the
        global default timeout setting returned by :func:`socket.getdefaulttimeout`
        is used. If *source_address* is set it must be a tuple of (host, port)
        for the socket to bind as a source address before making the connection.
        An host of '' or port 0 tells the OS to use the default.
        """

        host, port = address
        if host.startswith("["):
            host = host.strip("[]")
        err = None

        # Using the value from allowed_gai_family() in the context of getaddrinfo lets
        # us select whether to work with IPv4 DNS records, IPv6 records, or both.
        # The original create_connection function always returns all records.
        family = allowed_gai_family()

        try:
            host.encode("idna")
        except UnicodeError:
            raise LocationParseError(f"'{host}', label empty or too long") from None

>       for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/util/connection.py:60:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

host = 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com'
port = 6443, family =
type = , proto = 0, flags = 0

    def getaddrinfo(host, port, family=0, type=0, proto=0, flags=0):
        """Resolve host and port into list of address info entries.

        Translate the host/port argument into a sequence of 5-tuples that contain
        all the necessary arguments for creating a socket connected to that service.
[e2e-predictor] host is a domain name, a string representation of an IPv4/v6 address or [e2e-predictor] None. port is a string service name such as 'http', a numeric port number or [e2e-predictor] None. By passing None as the value of host and port, you can pass NULL to [e2e-predictor] the underlying C API. [e2e-predictor] [e2e-predictor] The family, type and proto arguments can be optionally specified in order to [e2e-predictor] narrow the list of addresses returned. Passing zero as a value for each of [e2e-predictor] these arguments selects the full range of results. [e2e-predictor] """ [e2e-predictor] # We override this function since we want to translate the numeric family [e2e-predictor] # and socket type values to enum constants. [e2e-predictor] addrlist = [] [e2e-predictor] > for res in _socket.getaddrinfo(host, port, family, type, proto, flags): [e2e-predictor] E socket.gaierror: [Errno -2] Name or service not known [e2e-predictor] [e2e-predictor] /usr/lib64/python3.11/socket.py:974: gaierror [e2e-predictor] [e2e-predictor] The above exception was the direct cause of the following exception: [e2e-predictor] [e2e-predictor] self = [e2e-predictor] method = 'GET' [e2e-predictor] url = '/api/v1/namespaces/kserve-ci-e2e-test/configmaps/odh-trusted-ca-bundle' [e2e-predictor] body = None [e2e-predictor] headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'} [e2e-predictor] retries = Retry(total=0, connect=None, read=None, redirect=None, status=None) [e2e-predictor] redirect = False, assert_same_host = False, timeout = None, pool_timeout = None [e2e-predictor] release_conn = True, chunked = False, body_pos = None, preload_content = True [e2e-predictor] decode_content = True, response_kw = {} [e2e-predictor] parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/api/v1/namespaces/kserve-ci-e2e-test/configmaps/odh-trusted-ca-bundle', query=None, fragment=None) [e2e-predictor] 
destination_scheme = None, conn = None, release_this_conn = True [e2e-predictor] http_tunnel_required = False, err = None, clean_exit = False [e2e-predictor] [e2e-predictor] def urlopen( # type: ignore[override] [e2e-predictor] self, [e2e-predictor] method: str, [e2e-predictor] url: str, [e2e-predictor] body: _TYPE_BODY | None = None, [e2e-predictor] headers: typing.Mapping[str, str] | None = None, [e2e-predictor] retries: Retry | bool | int | None = None, [e2e-predictor] redirect: bool = True, [e2e-predictor] assert_same_host: bool = True, [e2e-predictor] timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, [e2e-predictor] pool_timeout: int | None = None, [e2e-predictor] release_conn: bool | None = None, [e2e-predictor] chunked: bool = False, [e2e-predictor] body_pos: _TYPE_BODY_POSITION | None = None, [e2e-predictor] preload_content: bool = True, [e2e-predictor] decode_content: bool = True, [e2e-predictor] **response_kw: typing.Any, [e2e-predictor] ) -> BaseHTTPResponse: [e2e-predictor] """ [e2e-predictor] Get a connection from the pool and perform an HTTP request. This is the [e2e-predictor] lowest level call for making a request, so you'll need to specify all [e2e-predictor] the raw details. [e2e-predictor] [e2e-predictor] .. note:: [e2e-predictor] [e2e-predictor] More commonly, it's appropriate to use a convenience method [e2e-predictor] such as :meth:`request`. [e2e-predictor] [e2e-predictor] .. note:: [e2e-predictor] [e2e-predictor] `release_conn` will only behave as expected if [e2e-predictor] `preload_content=False` because we want to make [e2e-predictor] `preload_content=False` the default behaviour someday soon without [e2e-predictor] breaking backwards compatibility. [e2e-predictor] [e2e-predictor] :param method: [e2e-predictor] HTTP request method (such as GET, POST, PUT, etc.) [e2e-predictor] [e2e-predictor] :param url: [e2e-predictor] The URL to perform the request on. 
[e2e-predictor] [e2e-predictor] :param body: [e2e-predictor] Data to send in the request body, either :class:`str`, :class:`bytes`, [e2e-predictor] an iterable of :class:`str`/:class:`bytes`, or a file-like object. [e2e-predictor] [e2e-predictor] :param headers: [e2e-predictor] Dictionary of custom headers to send, such as User-Agent, [e2e-predictor] If-None-Match, etc. If None, pool headers are used. If provided, [e2e-predictor] these headers completely replace any pool-specific headers. [e2e-predictor] [e2e-predictor] :param retries: [e2e-predictor] Configure the number of retries to allow before raising a [e2e-predictor] :class:`~urllib3.exceptions.MaxRetryError` exception. [e2e-predictor] [e2e-predictor] If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a [e2e-predictor] :class:`~urllib3.util.retry.Retry` object for fine-grained control [e2e-predictor] over different types of retries. [e2e-predictor] Pass an integer number to retry connection errors that many times, [e2e-predictor] but no other types of errors. Pass zero to never retry. [e2e-predictor] [e2e-predictor] If ``False``, then retries are disabled and any exception is raised [e2e-predictor] immediately. Also, instead of raising a MaxRetryError on redirects, [e2e-predictor] the redirect response will be returned. [e2e-predictor] [e2e-predictor] :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. [e2e-predictor] [e2e-predictor] :param redirect: [e2e-predictor] If True, automatically handle redirects (status codes 301, 302, [e2e-predictor] 303, 307, 308). Each redirect counts as a retry. Disabling retries [e2e-predictor] will disable redirect, too. [e2e-predictor] [e2e-predictor] :param assert_same_host: [e2e-predictor] If ``True``, will make sure that the host of the pool requests is [e2e-predictor] consistent else will raise HostChangedError. When ``False``, you can [e2e-predictor] use the pool on an HTTP proxy and request foreign hosts. 
[e2e-predictor] [e2e-predictor] :param timeout: [e2e-predictor] If specified, overrides the default timeout for this one [e2e-predictor] request. It may be a float (in seconds) or an instance of [e2e-predictor] :class:`urllib3.util.Timeout`. [e2e-predictor] [e2e-predictor] :param pool_timeout: [e2e-predictor] If set and the pool is set to block=True, then this method will [e2e-predictor] block for ``pool_timeout`` seconds and raise EmptyPoolError if no [e2e-predictor] connection is available within the time period. [e2e-predictor] [e2e-predictor] :param bool preload_content: [e2e-predictor] If True, the response's body will be preloaded into memory. [e2e-predictor] [e2e-predictor] :param bool decode_content: [e2e-predictor] If True, will attempt to decode the body based on the [e2e-predictor] 'content-encoding' header. [e2e-predictor] [e2e-predictor] :param release_conn: [e2e-predictor] If False, then the urlopen call will not release the connection [e2e-predictor] back into the pool once a response is received (but will release if [e2e-predictor] you read the entire contents of the response such as when [e2e-predictor] `preload_content=True`). This is useful if you're not preloading [e2e-predictor] the response's content immediately. You will need to call [e2e-predictor] ``r.release_conn()`` on the response ``r`` to return the connection [e2e-predictor] back into the pool. If None, it takes the value of ``preload_content`` [e2e-predictor] which defaults to ``True``. [e2e-predictor] [e2e-predictor] :param bool chunked: [e2e-predictor] If True, urllib3 will send the body using chunked transfer [e2e-predictor] encoding. Otherwise, urllib3 will send the body using the standard [e2e-predictor] content-length form. Defaults to False. [e2e-predictor] [e2e-predictor] :param int body_pos: [e2e-predictor] Position to seek to in file-like body in the event of a retry or [e2e-predictor] redirect. 
Typically this won't need to be set because urllib3 will [e2e-predictor] auto-populate the value when needed. [e2e-predictor] """ [e2e-predictor] parsed_url = parse_url(url) [e2e-predictor] destination_scheme = parsed_url.scheme [e2e-predictor] [e2e-predictor] if headers is None: [e2e-predictor] headers = self.headers [e2e-predictor] [e2e-predictor] if not isinstance(retries, Retry): [e2e-predictor] retries = Retry.from_int(retries, redirect=redirect, default=self.retries) [e2e-predictor] [e2e-predictor] if release_conn is None: [e2e-predictor] release_conn = preload_content [e2e-predictor] [e2e-predictor] # Check host [e2e-predictor] if assert_same_host and not self.is_same_host(url): [e2e-predictor] raise HostChangedError(self, url, retries) [e2e-predictor] [e2e-predictor] # Ensure that the URL we're connecting to is properly encoded [e2e-predictor] if url.startswith("/"): [e2e-predictor] url = to_str(_encode_target(url)) [e2e-predictor] else: [e2e-predictor] url = to_str(parsed_url.url) [e2e-predictor] [e2e-predictor] conn = None [e2e-predictor] [e2e-predictor] # Track whether `conn` needs to be released before [e2e-predictor] # returning/raising/recursing. Update this variable if necessary, and [e2e-predictor] # leave `release_conn` constant throughout the function. That way, if [e2e-predictor] # the function recurses, the original value of `release_conn` will be [e2e-predictor] # passed down into the recursive call, and its value will be respected. [e2e-predictor] # [e2e-predictor] # See issue #651 [1] for details. [e2e-predictor] # [e2e-predictor] # [1] [e2e-predictor] release_this_conn = release_conn [e2e-predictor] [e2e-predictor] http_tunnel_required = connection_requires_http_tunnel( [e2e-predictor] self.proxy, self.proxy_config, destination_scheme [e2e-predictor] ) [e2e-predictor] [e2e-predictor] # Merge the proxy headers. Only done when not using HTTP CONNECT. 
We [e2e-predictor] # have to copy the headers dict so we can safely change it without those [e2e-predictor] # changes being reflected in anyone else's copy. [e2e-predictor] if not http_tunnel_required: [e2e-predictor] headers = headers.copy() # type: ignore[attr-defined] [e2e-predictor] headers.update(self.proxy_headers) # type: ignore[union-attr] [e2e-predictor] [e2e-predictor] # Must keep the exception bound to a separate variable or else Python 3 [e2e-predictor] # complains about UnboundLocalError. [e2e-predictor] err = None [e2e-predictor] [e2e-predictor] # Keep track of whether we cleanly exited the except block. This [e2e-predictor] # ensures we do proper cleanup in finally. [e2e-predictor] clean_exit = False [e2e-predictor] [e2e-predictor] # Rewind body position, if needed. Record current position [e2e-predictor] # for future rewinds in the event of a redirect/retry. [e2e-predictor] body_pos = set_file_position(body, body_pos) [e2e-predictor] [e2e-predictor] try: [e2e-predictor] # Request a connection from the queue. [e2e-predictor] timeout_obj = self._get_timeout(timeout) [e2e-predictor] conn = self._get_conn(timeout=pool_timeout) [e2e-predictor] [e2e-predictor] conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] [e2e-predictor] [e2e-predictor] # Is this a closed/new connection that requires CONNECT tunnelling? [e2e-predictor] if self.proxy is not None and http_tunnel_required and conn.is_closed: [e2e-predictor] try: [e2e-predictor] self._prepare_proxy(conn) [e2e-predictor] except (BaseSSLError, OSError, SocketTimeout) as e: [e2e-predictor] self._raise_timeout( [e2e-predictor] err=e, url=self.proxy.url, timeout_value=conn.timeout [e2e-predictor] ) [e2e-predictor] raise [e2e-predictor] [e2e-predictor] # If we're going to release the connection in ``finally:``, then [e2e-predictor] # the response doesn't need to know about the connection. 
Otherwise [e2e-predictor] # it will also try to release it and we'll have a double-release [e2e-predictor] # mess. [e2e-predictor] response_conn = conn if not release_conn else None [e2e-predictor] [e2e-predictor] # Make the request on the HTTPConnection object [e2e-predictor] > response = self._make_request( [e2e-predictor] conn, [e2e-predictor] method, [e2e-predictor] url, [e2e-predictor] timeout=timeout_obj, [e2e-predictor] body=body, [e2e-predictor] headers=headers, [e2e-predictor] chunked=chunked, [e2e-predictor] retries=retries, [e2e-predictor] response_conn=response_conn, [e2e-predictor] preload_content=preload_content, [e2e-predictor] decode_content=decode_content, [e2e-predictor] **response_kw, [e2e-predictor] ) [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:787: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] conn = [e2e-predictor] method = 'GET' [e2e-predictor] url = '/api/v1/namespaces/kserve-ci-e2e-test/configmaps/odh-trusted-ca-bundle' [e2e-predictor] body = None [e2e-predictor] headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'} [e2e-predictor] retries = Retry(total=0, connect=None, read=None, redirect=None, status=None) [e2e-predictor] timeout = Timeout(connect=None, read=None, total=None), chunked = False [e2e-predictor] response_conn = None, preload_content = True, decode_content = True [e2e-predictor] enforce_content_length = True [e2e-predictor] [e2e-predictor] def _make_request( [e2e-predictor] self, [e2e-predictor] conn: BaseHTTPConnection, [e2e-predictor] method: str, [e2e-predictor] url: str, [e2e-predictor] body: _TYPE_BODY | None = None, [e2e-predictor] headers: typing.Mapping[str, str] | None = None, [e2e-predictor] retries: Retry | None = None, [e2e-predictor] timeout: _TYPE_TIMEOUT = 
_DEFAULT_TIMEOUT, [e2e-predictor] chunked: bool = False, [e2e-predictor] response_conn: BaseHTTPConnection | None = None, [e2e-predictor] preload_content: bool = True, [e2e-predictor] decode_content: bool = True, [e2e-predictor] enforce_content_length: bool = True, [e2e-predictor] ) -> BaseHTTPResponse: [e2e-predictor] """ [e2e-predictor] Perform a request on a given urllib connection object taken from our [e2e-predictor] pool. [e2e-predictor] [e2e-predictor] :param conn: [e2e-predictor] a connection from one of our connection pools [e2e-predictor] [e2e-predictor] :param method: [e2e-predictor] HTTP request method (such as GET, POST, PUT, etc.) [e2e-predictor] [e2e-predictor] :param url: [e2e-predictor] The URL to perform the request on. [e2e-predictor] [e2e-predictor] :param body: [e2e-predictor] Data to send in the request body, either :class:`str`, :class:`bytes`, [e2e-predictor] an iterable of :class:`str`/:class:`bytes`, or a file-like object. [e2e-predictor] [e2e-predictor] :param headers: [e2e-predictor] Dictionary of custom headers to send, such as User-Agent, [e2e-predictor] If-None-Match, etc. If None, pool headers are used. If provided, [e2e-predictor] these headers completely replace any pool-specific headers. [e2e-predictor] [e2e-predictor] :param retries: [e2e-predictor] Configure the number of retries to allow before raising a [e2e-predictor] :class:`~urllib3.exceptions.MaxRetryError` exception. [e2e-predictor] [e2e-predictor] Pass ``None`` to retry until you receive a response. Pass a [e2e-predictor] :class:`~urllib3.util.retry.Retry` object for fine-grained control [e2e-predictor] over different types of retries. [e2e-predictor] Pass an integer number to retry connection errors that many times, [e2e-predictor] but no other types of errors. Pass zero to never retry. [e2e-predictor] [e2e-predictor] If ``False``, then retries are disabled and any exception is raised [e2e-predictor] immediately. 
Also, instead of raising a MaxRetryError on redirects, [e2e-predictor] the redirect response will be returned. [e2e-predictor] [e2e-predictor] :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. [e2e-predictor] [e2e-predictor] :param timeout: [e2e-predictor] If specified, overrides the default timeout for this one [e2e-predictor] request. It may be a float (in seconds) or an instance of [e2e-predictor] :class:`urllib3.util.Timeout`. [e2e-predictor] [e2e-predictor] :param chunked: [e2e-predictor] If True, urllib3 will send the body using chunked transfer [e2e-predictor] encoding. Otherwise, urllib3 will send the body using the standard [e2e-predictor] content-length form. Defaults to False. [e2e-predictor] [e2e-predictor] :param response_conn: [e2e-predictor] Set this to ``None`` if you will handle releasing the connection or [e2e-predictor] set the connection to have the response release it. [e2e-predictor] [e2e-predictor] :param preload_content: [e2e-predictor] If True, the response's body will be preloaded during construction. [e2e-predictor] [e2e-predictor] :param decode_content: [e2e-predictor] If True, will attempt to decode the body based on the [e2e-predictor] 'content-encoding' header. [e2e-predictor] [e2e-predictor] :param enforce_content_length: [e2e-predictor] Enforce content length checking. Body returned by server must match [e2e-predictor] value of Content-Length header, if present. Otherwise, raise error. [e2e-predictor] """ [e2e-predictor] self.num_requests += 1 [e2e-predictor] [e2e-predictor] timeout_obj = self._get_timeout(timeout) [e2e-predictor] timeout_obj.start_connect() [e2e-predictor] conn.timeout = Timeout.resolve_default_timeout(timeout_obj.connect_timeout) [e2e-predictor] [e2e-predictor] try: [e2e-predictor] # Trigger any extra validation we need to do. 
[e2e-predictor] try: [e2e-predictor] self._validate_conn(conn) [e2e-predictor] except (SocketTimeout, BaseSSLError) as e: [e2e-predictor] self._raise_timeout(err=e, url=url, timeout_value=conn.timeout) [e2e-predictor] raise [e2e-predictor] [e2e-predictor] # _validate_conn() starts the connection to an HTTPS proxy [e2e-predictor] # so we need to wrap errors with 'ProxyError' here too. [e2e-predictor] except ( [e2e-predictor] OSError, [e2e-predictor] NewConnectionError, [e2e-predictor] TimeoutError, [e2e-predictor] BaseSSLError, [e2e-predictor] CertificateError, [e2e-predictor] SSLError, [e2e-predictor] ) as e: [e2e-predictor] new_e: Exception = e [e2e-predictor] if isinstance(e, (BaseSSLError, CertificateError)): [e2e-predictor] new_e = SSLError(e) [e2e-predictor] # If the connection didn't successfully connect to it's proxy [e2e-predictor] # then there [e2e-predictor] if isinstance( [e2e-predictor] new_e, (OSError, NewConnectionError, TimeoutError, SSLError) [e2e-predictor] ) and (conn and conn.proxy and not conn.has_connected_to_proxy): [e2e-predictor] new_e = _wrap_proxy_error(new_e, conn.proxy.scheme) [e2e-predictor] > raise new_e [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:488: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] conn = [e2e-predictor] method = 'GET' [e2e-predictor] url = '/api/v1/namespaces/kserve-ci-e2e-test/configmaps/odh-trusted-ca-bundle' [e2e-predictor] body = None [e2e-predictor] headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'} [e2e-predictor] retries = Retry(total=0, connect=None, read=None, redirect=None, status=None) [e2e-predictor] timeout = Timeout(connect=None, read=None, total=None), chunked = False [e2e-predictor] response_conn = None, preload_content = True, decode_content = True 
[e2e-predictor] enforce_content_length = True [e2e-predictor] [e2e-predictor] def _make_request( [e2e-predictor] self, [e2e-predictor] conn: BaseHTTPConnection, [e2e-predictor] method: str, [e2e-predictor] url: str, [e2e-predictor] body: _TYPE_BODY | None = None, [e2e-predictor] headers: typing.Mapping[str, str] | None = None, [e2e-predictor] retries: Retry | None = None, [e2e-predictor] timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, [e2e-predictor] chunked: bool = False, [e2e-predictor] response_conn: BaseHTTPConnection | None = None, [e2e-predictor] preload_content: bool = True, [e2e-predictor] decode_content: bool = True, [e2e-predictor] enforce_content_length: bool = True, [e2e-predictor] ) -> BaseHTTPResponse: [e2e-predictor] """ [e2e-predictor] Perform a request on a given urllib connection object taken from our [e2e-predictor] pool. [e2e-predictor] [e2e-predictor] :param conn: [e2e-predictor] a connection from one of our connection pools [e2e-predictor] [e2e-predictor] :param method: [e2e-predictor] HTTP request method (such as GET, POST, PUT, etc.) [e2e-predictor] [e2e-predictor] :param url: [e2e-predictor] The URL to perform the request on. [e2e-predictor] [e2e-predictor] :param body: [e2e-predictor] Data to send in the request body, either :class:`str`, :class:`bytes`, [e2e-predictor] an iterable of :class:`str`/:class:`bytes`, or a file-like object. [e2e-predictor] [e2e-predictor] :param headers: [e2e-predictor] Dictionary of custom headers to send, such as User-Agent, [e2e-predictor] If-None-Match, etc. If None, pool headers are used. If provided, [e2e-predictor] these headers completely replace any pool-specific headers. [e2e-predictor] [e2e-predictor] :param retries: [e2e-predictor] Configure the number of retries to allow before raising a [e2e-predictor] :class:`~urllib3.exceptions.MaxRetryError` exception. [e2e-predictor] [e2e-predictor] Pass ``None`` to retry until you receive a response. 
Pass a [e2e-predictor] :class:`~urllib3.util.retry.Retry` object for fine-grained control [e2e-predictor] over different types of retries. [e2e-predictor] Pass an integer number to retry connection errors that many times, [e2e-predictor] but no other types of errors. Pass zero to never retry. [e2e-predictor] [e2e-predictor] If ``False``, then retries are disabled and any exception is raised [e2e-predictor] immediately. Also, instead of raising a MaxRetryError on redirects, [e2e-predictor] the redirect response will be returned. [e2e-predictor] [e2e-predictor] :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. [e2e-predictor] [e2e-predictor] :param timeout: [e2e-predictor] If specified, overrides the default timeout for this one [e2e-predictor] request. It may be a float (in seconds) or an instance of [e2e-predictor] :class:`urllib3.util.Timeout`. [e2e-predictor] [e2e-predictor] :param chunked: [e2e-predictor] If True, urllib3 will send the body using chunked transfer [e2e-predictor] encoding. Otherwise, urllib3 will send the body using the standard [e2e-predictor] content-length form. Defaults to False. [e2e-predictor] [e2e-predictor] :param response_conn: [e2e-predictor] Set this to ``None`` if you will handle releasing the connection or [e2e-predictor] set the connection to have the response release it. [e2e-predictor] [e2e-predictor] :param preload_content: [e2e-predictor] If True, the response's body will be preloaded during construction. [e2e-predictor] [e2e-predictor] :param decode_content: [e2e-predictor] If True, will attempt to decode the body based on the [e2e-predictor] 'content-encoding' header. [e2e-predictor] [e2e-predictor] :param enforce_content_length: [e2e-predictor] Enforce content length checking. Body returned by server must match [e2e-predictor] value of Content-Length header, if present. Otherwise, raise error. 
[e2e-predictor] """ [e2e-predictor] self.num_requests += 1 [e2e-predictor] [e2e-predictor] timeout_obj = self._get_timeout(timeout) [e2e-predictor] timeout_obj.start_connect() [e2e-predictor] conn.timeout = Timeout.resolve_default_timeout(timeout_obj.connect_timeout) [e2e-predictor] [e2e-predictor] try: [e2e-predictor] # Trigger any extra validation we need to do. [e2e-predictor] try: [e2e-predictor] > self._validate_conn(conn) [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:464: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] conn = [e2e-predictor] [e2e-predictor] def _validate_conn(self, conn: BaseHTTPConnection) -> None: [e2e-predictor] """ [e2e-predictor] Called right before a request is made, after the socket is created. [e2e-predictor] """ [e2e-predictor] super()._validate_conn(conn) [e2e-predictor] [e2e-predictor] # Force connect early to allow us to validate the connection. [e2e-predictor] if conn.is_closed: [e2e-predictor] > conn.connect() [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:1093: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] [e2e-predictor] def connect(self) -> None: [e2e-predictor] # Today we don't need to be doing this step before the /actual/ socket [e2e-predictor] # connection, however in the future we'll need to decide whether to [e2e-predictor] # create a new socket or re-use an existing "shared" socket as a part [e2e-predictor] # of the HTTP/2 handshake dance. 
[e2e-predictor] if self._tunnel_host is not None and self._tunnel_port is not None: [e2e-predictor] probe_http2_host = self._tunnel_host [e2e-predictor] probe_http2_port = self._tunnel_port [e2e-predictor] else: [e2e-predictor] probe_http2_host = self.host [e2e-predictor] probe_http2_port = self.port [e2e-predictor] [e2e-predictor] # Check if the target origin supports HTTP/2. [e2e-predictor] # If the value comes back as 'None' it means that the current thread [e2e-predictor] # is probing for HTTP/2 support. Otherwise, we're waiting for another [e2e-predictor] # probe to complete, or we get a value right away. [e2e-predictor] target_supports_http2: bool | None [e2e-predictor] if "h2" in ssl_.ALPN_PROTOCOLS: [e2e-predictor] target_supports_http2 = http2_probe.acquire_and_get( [e2e-predictor] host=probe_http2_host, port=probe_http2_port [e2e-predictor] ) [e2e-predictor] else: [e2e-predictor] # If HTTP/2 isn't going to be offered it doesn't matter if [e2e-predictor] # the target supports HTTP/2. Don't want to make a probe. [e2e-predictor] target_supports_http2 = False [e2e-predictor] [e2e-predictor] if self._connect_callback is not None: [e2e-predictor] self._connect_callback( [e2e-predictor] "before connect", [e2e-predictor] thread_id=threading.get_ident(), [e2e-predictor] target_supports_http2=target_supports_http2, [e2e-predictor] ) [e2e-predictor] [e2e-predictor] try: [e2e-predictor] sock: socket.socket | ssl.SSLSocket [e2e-predictor] > self.sock = sock = self._new_conn() [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connection.py:759: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] [e2e-predictor] def _new_conn(self) -> socket.socket: [e2e-predictor] """Establish a socket connection and set nodelay settings on it. [e2e-predictor] [e2e-predictor] :return: New socket connection. 
[e2e-predictor] """ [e2e-predictor] try: [e2e-predictor] sock = connection.create_connection( [e2e-predictor] (self._dns_host, self.port), [e2e-predictor] self.timeout, [e2e-predictor] source_address=self.source_address, [e2e-predictor] socket_options=self.socket_options, [e2e-predictor] ) [e2e-predictor] except socket.gaierror as e: [e2e-predictor] > raise NameResolutionError(self.host, self, e) from e [e2e-predictor] E urllib3.exceptions.NameResolutionError: HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known) [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connection.py:211: NameResolutionError [e2e-predictor] [e2e-predictor] The above exception was the direct cause of the following exception: [e2e-predictor] [e2e-predictor] kserve_client = [e2e-predictor] [e2e-predictor] @pytest.fixture(scope="module") [e2e-predictor] def odh_trusted_ca_bundle_configmap(kserve_client): [e2e-predictor] """Ensure the odh-trusted-ca-bundle configmap exists. [e2e-predictor] [e2e-predictor] The configmap is pre-created by setup-ci-namespace.sh to avoid race [e2e-predictor] conditions when pytest-xdist distributes tests across multiple workers. [e2e-predictor] Namespace teardown handles cleanup. 
[e2e-predictor] """ [e2e-predictor] try: [e2e-predictor] > kserve_client.core_api.read_namespaced_config_map( [e2e-predictor] name=ODH_TRUSTED_CA_BUNDLE_CONFIGMAP_NAME, namespace=KSERVE_TEST_NAMESPACE [e2e-predictor] ) [e2e-predictor] [e2e-predictor] storagespec/test_s3_tls_storagespec.py:156: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] name = 'odh-trusted-ca-bundle', namespace = 'kserve-ci-e2e-test' [e2e-predictor] kwargs = {'_return_http_data_only': True} [e2e-predictor] [e2e-predictor] def read_namespaced_config_map(self, name, namespace, **kwargs): # noqa: E501 [e2e-predictor] """read_namespaced_config_map # noqa: E501 [e2e-predictor] [e2e-predictor] read the specified ConfigMap # noqa: E501 [e2e-predictor] This method makes a synchronous HTTP request by default. To make an [e2e-predictor] asynchronous HTTP request, please pass async_req=True [e2e-predictor] >>> thread = api.read_namespaced_config_map(name, namespace, async_req=True) [e2e-predictor] >>> result = thread.get() [e2e-predictor] [e2e-predictor] :param async_req bool: execute request asynchronously [e2e-predictor] :param str name: name of the ConfigMap (required) [e2e-predictor] :param str namespace: object name and auth scope, such as for teams and projects (required) [e2e-predictor] :param str pretty: If 'true', then the output is pretty printed. Defaults to 'false' unless the user-agent indicates a browser or command-line HTTP tool (curl and wget). [e2e-predictor] :param _preload_content: if False, the urllib3.HTTPResponse object will [e2e-predictor] be returned without reading/decoding response [e2e-predictor] data. Default is True. [e2e-predictor] :param _request_timeout: timeout setting for this request. If one [e2e-predictor] number provided, it will be total request [e2e-predictor] timeout. It can also be a pair (tuple) of [e2e-predictor] (connection, read) timeouts. 
[e2e-predictor] :return: V1ConfigMap [e2e-predictor] If the method is called asynchronously, [e2e-predictor] returns the request thread. [e2e-predictor] """ [e2e-predictor] kwargs['_return_http_data_only'] = True [e2e-predictor] > return self.read_namespaced_config_map_with_http_info(name, namespace, **kwargs) # noqa: E501 [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api/core_v1_api.py:23231: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] name = 'odh-trusted-ca-bundle', namespace = 'kserve-ci-e2e-test' [e2e-predictor] kwargs = {'_return_http_data_only': True} [e2e-predictor] local_var_params = {'_return_http_data_only': True, 'all_params': ['name', 'namespace', 'pretty', 'async_req', '_return_http_data_only', '_preload_content', ...], 'auth_settings': ['BearerToken'], 'body_params': None, ...} [e2e-predictor] all_params = ['name', 'namespace', 'pretty', 'async_req', '_return_http_data_only', '_preload_content', ...] [e2e-predictor] key = '_return_http_data_only', val = True, collection_formats = {} [e2e-predictor] path_params = {'name': 'odh-trusted-ca-bundle', 'namespace': 'kserve-ci-e2e-test'} [e2e-predictor] query_params = [] [e2e-predictor] [e2e-predictor] def read_namespaced_config_map_with_http_info(self, name, namespace, **kwargs): # noqa: E501 [e2e-predictor] """read_namespaced_config_map # noqa: E501 [e2e-predictor] [e2e-predictor] read the specified ConfigMap # noqa: E501 [e2e-predictor] This method makes a synchronous HTTP request by default. 
To make an [e2e-predictor] asynchronous HTTP request, please pass async_req=True [e2e-predictor] >>> thread = api.read_namespaced_config_map_with_http_info(name, namespace, async_req=True) [e2e-predictor] >>> result = thread.get() [e2e-predictor] [e2e-predictor] :param async_req bool: execute request asynchronously [e2e-predictor] :param str name: name of the ConfigMap (required) [e2e-predictor] :param str namespace: object name and auth scope, such as for teams and projects (required) [e2e-predictor] :param str pretty: If 'true', then the output is pretty printed. Defaults to 'false' unless the user-agent indicates a browser or command-line HTTP tool (curl and wget). [e2e-predictor] :param _return_http_data_only: response data without head status code [e2e-predictor] and headers [e2e-predictor] :param _preload_content: if False, the urllib3.HTTPResponse object will [e2e-predictor] be returned without reading/decoding response [e2e-predictor] data. Default is True. [e2e-predictor] :param _request_timeout: timeout setting for this request. If one [e2e-predictor] number provided, it will be total request [e2e-predictor] timeout. It can also be a pair (tuple) of [e2e-predictor] (connection, read) timeouts. [e2e-predictor] :return: tuple(V1ConfigMap, status_code(int), headers(HTTPHeaderDict)) [e2e-predictor] If the method is called asynchronously, [e2e-predictor] returns the request thread. 
[e2e-predictor] """ [e2e-predictor] [e2e-predictor] local_var_params = locals() [e2e-predictor] [e2e-predictor] all_params = [ [e2e-predictor] 'name', [e2e-predictor] 'namespace', [e2e-predictor] 'pretty' [e2e-predictor] ] [e2e-predictor] all_params.extend( [e2e-predictor] [ [e2e-predictor] 'async_req', [e2e-predictor] '_return_http_data_only', [e2e-predictor] '_preload_content', [e2e-predictor] '_request_timeout' [e2e-predictor] ] [e2e-predictor] ) [e2e-predictor] [e2e-predictor] for key, val in six.iteritems(local_var_params['kwargs']): [e2e-predictor] if key not in all_params: [e2e-predictor] raise ApiTypeError( [e2e-predictor] "Got an unexpected keyword argument '%s'" [e2e-predictor] " to method read_namespaced_config_map" % key [e2e-predictor] ) [e2e-predictor] local_var_params[key] = val [e2e-predictor] del local_var_params['kwargs'] [e2e-predictor] # verify the required parameter 'name' is set [e2e-predictor] if self.api_client.client_side_validation and ('name' not in local_var_params or # noqa: E501 [e2e-predictor] local_var_params['name'] is None): # noqa: E501 [e2e-predictor] raise ApiValueError("Missing the required parameter `name` when calling `read_namespaced_config_map`") # noqa: E501 [e2e-predictor] # verify the required parameter 'namespace' is set [e2e-predictor] if self.api_client.client_side_validation and ('namespace' not in local_var_params or # noqa: E501 [e2e-predictor] local_var_params['namespace'] is None): # noqa: E501 [e2e-predictor] raise ApiValueError("Missing the required parameter `namespace` when calling `read_namespaced_config_map`") # noqa: E501 [e2e-predictor] [e2e-predictor] collection_formats = {} [e2e-predictor] [e2e-predictor] path_params = {} [e2e-predictor] if 'name' in local_var_params: [e2e-predictor] path_params['name'] = local_var_params['name'] # noqa: E501 [e2e-predictor] if 'namespace' in local_var_params: [e2e-predictor] path_params['namespace'] = local_var_params['namespace'] # noqa: E501 [e2e-predictor] 
[e2e-predictor] query_params = [] [e2e-predictor] if 'pretty' in local_var_params and local_var_params['pretty'] is not None: # noqa: E501 [e2e-predictor] query_params.append(('pretty', local_var_params['pretty'])) # noqa: E501 [e2e-predictor] [e2e-predictor] header_params = {} [e2e-predictor] [e2e-predictor] form_params = [] [e2e-predictor] local_var_files = {} [e2e-predictor] [e2e-predictor] body_params = None [e2e-predictor] # HTTP header `Accept` [e2e-predictor] header_params['Accept'] = self.api_client.select_header_accept( [e2e-predictor] ['application/json', 'application/yaml', 'application/vnd.kubernetes.protobuf', 'application/cbor']) # noqa: E501 [e2e-predictor] [e2e-predictor] # Authentication setting [e2e-predictor] auth_settings = ['BearerToken'] # noqa: E501 [e2e-predictor] [e2e-predictor] > return self.api_client.call_api( [e2e-predictor] '/api/v1/namespaces/{namespace}/configmaps/{name}', 'GET', [e2e-predictor] path_params, [e2e-predictor] query_params, [e2e-predictor] header_params, [e2e-predictor] body=body_params, [e2e-predictor] post_params=form_params, [e2e-predictor] files=local_var_files, [e2e-predictor] response_type='V1ConfigMap', # noqa: E501 [e2e-predictor] auth_settings=auth_settings, [e2e-predictor] async_req=local_var_params.get('async_req'), [e2e-predictor] _return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501 [e2e-predictor] _preload_content=local_var_params.get('_preload_content', True), [e2e-predictor] _request_timeout=local_var_params.get('_request_timeout'), [e2e-predictor] collection_formats=collection_formats) [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api/core_v1_api.py:23318: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] resource_path = '/api/v1/namespaces/{namespace}/configmaps/{name}' [e2e-predictor] method = 'GET' 
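The `call_api` invocation above targets the templated path `/api/v1/namespaces/{namespace}/configmaps/{name}` with `path_params` supplying the two values. A small stdlib sketch of that substitution (the `fill_path` helper is hypothetical; the real client uses its `Configuration.safe_chars_for_path_param` setting when quoting) shows how the concrete request path in the later frames is produced:

```python
from urllib.parse import quote

def fill_path(resource_path: str, path_params: dict) -> str:
    # Each {placeholder} in the resource path is replaced with the
    # percent-encoded parameter value, as the generated client does.
    for k, v in path_params.items():
        resource_path = resource_path.replace("{%s}" % k, quote(str(v), safe=""))
    return resource_path

path = fill_path(
    "/api/v1/namespaces/{namespace}/configmaps/{name}",
    {"name": "odh-trusted-ca-bundle", "namespace": "kserve-ci-e2e-test"},
)
# → '/api/v1/namespaces/kserve-ci-e2e-test/configmaps/odh-trusted-ca-bundle'
```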
[e2e-predictor] path_params = {'name': 'odh-trusted-ca-bundle', 'namespace': 'kserve-ci-e2e-test'} [e2e-predictor] query_params = [] [e2e-predictor] header_params = {'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'} [e2e-predictor] body = None, post_params = [], files = {}, response_type = 'V1ConfigMap' [e2e-predictor] auth_settings = ['BearerToken'], async_req = None, _return_http_data_only = True [e2e-predictor] collection_formats = {}, _preload_content = True, _request_timeout = None [e2e-predictor] _host = None [e2e-predictor] [e2e-predictor] def call_api(self, resource_path, method, [e2e-predictor] path_params=None, query_params=None, header_params=None, [e2e-predictor] body=None, post_params=None, files=None, [e2e-predictor] response_type=None, auth_settings=None, async_req=None, [e2e-predictor] _return_http_data_only=None, collection_formats=None, [e2e-predictor] _preload_content=True, _request_timeout=None, _host=None): [e2e-predictor] """Makes the HTTP request (synchronous) and returns deserialized data. [e2e-predictor] [e2e-predictor] To make an async_req request, set the async_req parameter. [e2e-predictor] [e2e-predictor] :param resource_path: Path to method endpoint. [e2e-predictor] :param method: Method to call. [e2e-predictor] :param path_params: Path parameters in the url. [e2e-predictor] :param query_params: Query parameters in the url. [e2e-predictor] :param header_params: Header parameters to be [e2e-predictor] placed in the request header. [e2e-predictor] :param body: Request body. [e2e-predictor] :param post_params dict: Request post form parameters, [e2e-predictor] for `application/x-www-form-urlencoded`, `multipart/form-data`. [e2e-predictor] :param auth_settings list: Auth Settings names for the request. [e2e-predictor] :param response: Response data type. [e2e-predictor] :param files dict: key -> filename, value -> filepath, [e2e-predictor] for `multipart/form-data`. 
[e2e-predictor] :param async_req bool: execute request asynchronously [e2e-predictor] :param _return_http_data_only: response data without head status code [e2e-predictor] and headers [e2e-predictor] :param collection_formats: dict of collection formats for path, query, [e2e-predictor] header, and post parameters. [e2e-predictor] :param _preload_content: if False, the urllib3.HTTPResponse object will [e2e-predictor] be returned without reading/decoding response [e2e-predictor] data. Default is True. [e2e-predictor] :param _request_timeout: timeout setting for this request. If one [e2e-predictor] number provided, it will be total request [e2e-predictor] timeout. It can also be a pair (tuple) of [e2e-predictor] (connection, read) timeouts. [e2e-predictor] :return: [e2e-predictor] If async_req parameter is True, [e2e-predictor] the request will be called asynchronously. [e2e-predictor] The method will return the request thread. [e2e-predictor] If parameter async_req is False or missing, [e2e-predictor] then the method will return the response directly. 
[e2e-predictor] """ [e2e-predictor] if not async_req: [e2e-predictor] > return self.__call_api(resource_path, method, [e2e-predictor] path_params, query_params, header_params, [e2e-predictor] body, post_params, files, [e2e-predictor] response_type, auth_settings, [e2e-predictor] _return_http_data_only, collection_formats, [e2e-predictor] _preload_content, _request_timeout, _host) [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api_client.py:348: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] resource_path = '/api/v1/namespaces/kserve-ci-e2e-test/configmaps/odh-trusted-ca-bundle' [e2e-predictor] method = 'GET' [e2e-predictor] path_params = [('name', 'odh-trusted-ca-bundle'), ('namespace', 'kserve-ci-e2e-test')] [e2e-predictor] query_params = [] [e2e-predictor] header_params = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'} [e2e-predictor] body = None, post_params = [], files = {}, response_type = 'V1ConfigMap' [e2e-predictor] auth_settings = ['BearerToken'], _return_http_data_only = True [e2e-predictor] collection_formats = {}, _preload_content = True, _request_timeout = None [e2e-predictor] _host = None [e2e-predictor] [e2e-predictor] def __call_api( [e2e-predictor] self, resource_path, method, path_params=None, [e2e-predictor] query_params=None, header_params=None, body=None, post_params=None, [e2e-predictor] files=None, response_type=None, auth_settings=None, [e2e-predictor] _return_http_data_only=None, collection_formats=None, [e2e-predictor] _preload_content=True, _request_timeout=None, _host=None): [e2e-predictor] [e2e-predictor] config = self.configuration [e2e-predictor] [e2e-predictor] # header parameters [e2e-predictor] header_params = header_params or {} [e2e-predictor] header_params.update(self.default_headers) [e2e-predictor] 
if self.cookie: [e2e-predictor] header_params['Cookie'] = self.cookie [e2e-predictor] if header_params: [e2e-predictor] header_params = self.sanitize_for_serialization(header_params) [e2e-predictor] header_params = dict(self.parameters_to_tuples(header_params, [e2e-predictor] collection_formats)) [e2e-predictor] [e2e-predictor] # path parameters [e2e-predictor] if path_params: [e2e-predictor] path_params = self.sanitize_for_serialization(path_params) [e2e-predictor] path_params = self.parameters_to_tuples(path_params, [e2e-predictor] collection_formats) [e2e-predictor] for k, v in path_params: [e2e-predictor] # specified safe chars, encode everything [e2e-predictor] resource_path = resource_path.replace( [e2e-predictor] '{%s}' % k, [e2e-predictor] quote(str(v), safe=config.safe_chars_for_path_param) [e2e-predictor] ) [e2e-predictor] [e2e-predictor] # query parameters [e2e-predictor] if query_params: [e2e-predictor] query_params = self.sanitize_for_serialization(query_params) [e2e-predictor] query_params = self.parameters_to_tuples(query_params, [e2e-predictor] collection_formats) [e2e-predictor] [e2e-predictor] # post parameters [e2e-predictor] if post_params or files: [e2e-predictor] post_params = post_params if post_params else [] [e2e-predictor] post_params = self.sanitize_for_serialization(post_params) [e2e-predictor] post_params = self.parameters_to_tuples(post_params, [e2e-predictor] collection_formats) [e2e-predictor] post_params.extend(self.files_parameters(files)) [e2e-predictor] [e2e-predictor] # auth setting [e2e-predictor] self.update_params_for_auth(header_params, query_params, auth_settings) [e2e-predictor] [e2e-predictor] # body [e2e-predictor] if body: [e2e-predictor] body = self.sanitize_for_serialization(body) [e2e-predictor] [e2e-predictor] # request url [e2e-predictor] if _host is None: [e2e-predictor] url = self.configuration.host + resource_path [e2e-predictor] else: [e2e-predictor] # use server/host defined in path or operation instead 
[e2e-predictor]             url = _host + resource_path
[e2e-predictor]
[e2e-predictor]         # perform request and return response
[e2e-predictor] >       response_data = self.request(
[e2e-predictor]             method, url, query_params=query_params, headers=header_params,
[e2e-predictor]             post_params=post_params, body=body,
[e2e-predictor]             _preload_content=_preload_content,
[e2e-predictor]             _request_timeout=_request_timeout)
[e2e-predictor]
[e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api_client.py:180:
[e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
[e2e-predictor]
[e2e-predictor] self =
[e2e-predictor] method = 'GET'
[e2e-predictor] url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/api/v1/namespaces/kserve-ci-e2e-test/configmaps/odh-trusted-ca-bundle'
[e2e-predictor] query_params = []
[e2e-predictor] headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
[e2e-predictor] post_params = [], body = None, _preload_content = True, _request_timeout = None
[e2e-predictor]
[e2e-predictor]     def request(self, method, url, query_params=None, headers=None,
[e2e-predictor]                 post_params=None, body=None, _preload_content=True,
[e2e-predictor]                 _request_timeout=None):
[e2e-predictor]         """Makes the HTTP request using RESTClient."""
[e2e-predictor]         if method == "GET":
[e2e-predictor] >           return self.rest_client.GET(url,
[e2e-predictor]                                         query_params=query_params,
[e2e-predictor]                                         _preload_content=_preload_content,
[e2e-predictor]                                         _request_timeout=_request_timeout,
[e2e-predictor]
[e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api_client.py:373:
[e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
[e2e-predictor]
[e2e-predictor] self =
[e2e-predictor] url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/api/v1/namespaces/kserve-ci-e2e-test/configmaps/odh-trusted-ca-bundle'
[e2e-predictor] headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
[e2e-predictor] query_params = [], _preload_content = True, _request_timeout = None
[e2e-predictor]
[e2e-predictor]     def GET(self, url, headers=None, query_params=None, _preload_content=True,
[e2e-predictor]             _request_timeout=None):
[e2e-predictor] >       return self.request("GET", url,
[e2e-predictor]                             headers=headers,
[e2e-predictor]                             _preload_content=_preload_content,
[e2e-predictor]                             _request_timeout=_request_timeout,
[e2e-predictor]                             query_params=query_params)
[e2e-predictor]
[e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/rest.py:244:
[e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
[e2e-predictor]
[e2e-predictor] self =
[e2e-predictor] method = 'GET'
[e2e-predictor] url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/api/v1/namespaces/kserve-ci-e2e-test/configmaps/odh-trusted-ca-bundle'
[e2e-predictor] query_params = []
[e2e-predictor] headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
[e2e-predictor] body = None, post_params = {}, _preload_content = True, _request_timeout = None
[e2e-predictor]
[e2e-predictor]     def request(self, method, url, query_params=None, headers=None,
[e2e-predictor]                 body=None, post_params=None, _preload_content=True,
[e2e-predictor]                 _request_timeout=None):
[e2e-predictor]         """Perform requests.
[e2e-predictor] [e2e-predictor] :param method: http request method [e2e-predictor] :param url: http request url [e2e-predictor] :param query_params: query parameters in the url [e2e-predictor] :param headers: http request headers [e2e-predictor] :param body: request json body, for `application/json` [e2e-predictor] :param post_params: request post parameters, [e2e-predictor] `application/x-www-form-urlencoded` [e2e-predictor] and `multipart/form-data` [e2e-predictor] :param _preload_content: if False, the urllib3.HTTPResponse object will [e2e-predictor] be returned without reading/decoding response [e2e-predictor] data. Default is True. [e2e-predictor] :param _request_timeout: timeout setting for this request. If one [e2e-predictor] number provided, it will be total request [e2e-predictor] timeout. It can also be a pair (tuple) of [e2e-predictor] (connection, read) timeouts. [e2e-predictor] """ [e2e-predictor] method = method.upper() [e2e-predictor] assert method in ['GET', 'HEAD', 'DELETE', 'POST', 'PUT', [e2e-predictor] 'PATCH', 'OPTIONS'] [e2e-predictor] [e2e-predictor] if post_params and body: [e2e-predictor] raise ApiValueError( [e2e-predictor] "body parameter cannot be used with post_params parameter." 
[e2e-predictor] ) [e2e-predictor] [e2e-predictor] post_params = post_params or {} [e2e-predictor] headers = headers or {} [e2e-predictor] [e2e-predictor] timeout = None [e2e-predictor] if _request_timeout: [e2e-predictor] if isinstance(_request_timeout, (int, ) if six.PY3 else (int, long)): # noqa: E501,F821 [e2e-predictor] timeout = urllib3.Timeout(total=_request_timeout) [e2e-predictor] elif (isinstance(_request_timeout, tuple) and [e2e-predictor] len(_request_timeout) == 2): [e2e-predictor] timeout = urllib3.Timeout( [e2e-predictor] connect=_request_timeout[0], read=_request_timeout[1]) [e2e-predictor] [e2e-predictor] if 'Content-Type' not in headers: [e2e-predictor] headers['Content-Type'] = 'application/json' [e2e-predictor] [e2e-predictor] try: [e2e-predictor] # For `POST`, `PUT`, `PATCH`, `OPTIONS`, `DELETE` [e2e-predictor] if method in ['POST', 'PUT', 'PATCH', 'OPTIONS', 'DELETE']: [e2e-predictor] if query_params: [e2e-predictor] url += '?' + urlencode(query_params) [e2e-predictor] if (re.search('json', headers['Content-Type'], re.IGNORECASE) or [e2e-predictor] headers['Content-Type'] == 'application/apply-patch+yaml'): [e2e-predictor] if headers['Content-Type'] == 'application/json-patch+json': [e2e-predictor] if not isinstance(body, list): [e2e-predictor] headers['Content-Type'] = \ [e2e-predictor] 'application/strategic-merge-patch+json' [e2e-predictor] request_body = None [e2e-predictor] if body is not None: [e2e-predictor] request_body = json.dumps(body) [e2e-predictor] r = self.pool_manager.request( [e2e-predictor] method, url, [e2e-predictor] body=request_body, [e2e-predictor] preload_content=_preload_content, [e2e-predictor] timeout=timeout, [e2e-predictor] headers=headers) [e2e-predictor] elif headers['Content-Type'] == 'application/x-www-form-urlencoded': # noqa: E501 [e2e-predictor] r = self.pool_manager.request( [e2e-predictor] method, url, [e2e-predictor] fields=post_params, [e2e-predictor] encode_multipart=False, [e2e-predictor] 
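The rest-client frame above shows how `_request_timeout` is normalized: a single number becomes a total timeout, a 2-tuple becomes separate connect/read timeouts, and `None` (the value in this failing call, per the frame locals) means no client-side timeout at all. A sketch of that normalization (the `normalize_timeout` helper is ours, returning a plain tuple instead of a `urllib3.Timeout`):

```python
def normalize_timeout(request_timeout):
    """Mirror the _request_timeout handling shown in rest.py above,
    as a (total, connect, read) tuple for illustration."""
    if request_timeout is None:
        return (None, None, None)  # no timeout; the failing call used this
    if isinstance(request_timeout, int):
        return (request_timeout, None, None)  # single number = total timeout
    if isinstance(request_timeout, tuple) and len(request_timeout) == 2:
        connect, read = request_timeout       # pair = (connect, read) timeouts
        return (None, connect, read)
    raise TypeError("expected None, an int, or a (connect, read) tuple")
```

Passing an explicit `_request_timeout` to the API call would bound how long each attempt can hang, though it would not by itself fix a name-resolution failure.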
preload_content=_preload_content, [e2e-predictor] timeout=timeout, [e2e-predictor] headers=headers) [e2e-predictor] elif headers['Content-Type'] == 'multipart/form-data': [e2e-predictor] # must del headers['Content-Type'], or the correct [e2e-predictor] # Content-Type which generated by urllib3 will be [e2e-predictor] # overwritten. [e2e-predictor] del headers['Content-Type'] [e2e-predictor] r = self.pool_manager.request( [e2e-predictor] method, url, [e2e-predictor] fields=post_params, [e2e-predictor] encode_multipart=True, [e2e-predictor] preload_content=_preload_content, [e2e-predictor] timeout=timeout, [e2e-predictor] headers=headers) [e2e-predictor] # Pass a `string` parameter directly in the body to support [e2e-predictor] # other content types than Json when `body` argument is [e2e-predictor] # provided in serialized form [e2e-predictor] elif isinstance(body, str) or isinstance(body, bytes): [e2e-predictor] request_body = body [e2e-predictor] r = self.pool_manager.request( [e2e-predictor] method, url, [e2e-predictor] body=request_body, [e2e-predictor] preload_content=_preload_content, [e2e-predictor] timeout=timeout, [e2e-predictor] headers=headers) [e2e-predictor] else: [e2e-predictor] # Cannot generate the request from given parameters [e2e-predictor] msg = """Cannot prepare a request message for provided [e2e-predictor] arguments. 
Please check that your arguments match [e2e-predictor] declared content type.""" [e2e-predictor] raise ApiException(status=0, reason=msg) [e2e-predictor] # For `GET`, `HEAD` [e2e-predictor] else: [e2e-predictor] > r = self.pool_manager.request(method, url, [e2e-predictor] fields=query_params, [e2e-predictor] preload_content=_preload_content, [e2e-predictor] timeout=timeout, [e2e-predictor] headers=headers) [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/rest.py:217: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] method = 'GET' [e2e-predictor] url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/api/v1/namespaces/kserve-ci-e2e-test/configmaps/odh-trusted-ca-bundle' [e2e-predictor] body = None, fields = [] [e2e-predictor] headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'} [e2e-predictor] json = None, urlopen_kw = {'preload_content': True, 'timeout': None} [e2e-predictor] [e2e-predictor] def request( [e2e-predictor] self, [e2e-predictor] method: str, [e2e-predictor] url: str, [e2e-predictor] body: _TYPE_BODY | None = None, [e2e-predictor] fields: _TYPE_FIELDS | None = None, [e2e-predictor] headers: typing.Mapping[str, str] | None = None, [e2e-predictor] json: typing.Any | None = None, [e2e-predictor] **urlopen_kw: typing.Any, [e2e-predictor] ) -> BaseHTTPResponse: [e2e-predictor] """ [e2e-predictor] Make a request using :meth:`urlopen` with the appropriate encoding of [e2e-predictor] ``fields`` based on the ``method`` used. [e2e-predictor] [e2e-predictor] This is a convenience method that requires the least amount of manual [e2e-predictor] effort. 
It can be used in most situations, while still having the [e2e-predictor] option to drop down to more specific methods when necessary, such as [e2e-predictor] :meth:`request_encode_url`, :meth:`request_encode_body`, [e2e-predictor] or even the lowest level :meth:`urlopen`. [e2e-predictor] [e2e-predictor] :param method: [e2e-predictor] HTTP request method (such as GET, POST, PUT, etc.) [e2e-predictor] [e2e-predictor] :param url: [e2e-predictor] The URL to perform the request on. [e2e-predictor] [e2e-predictor] :param body: [e2e-predictor] Data to send in the request body, either :class:`str`, :class:`bytes`, [e2e-predictor] an iterable of :class:`str`/:class:`bytes`, or a file-like object. [e2e-predictor] [e2e-predictor] :param fields: [e2e-predictor] Data to encode and send in the URL or request body, depending on ``method``. [e2e-predictor] [e2e-predictor] :param headers: [e2e-predictor] Dictionary of custom headers to send, such as User-Agent, [e2e-predictor] If-None-Match, etc. If None, pool headers are used. If provided, [e2e-predictor] these headers completely replace any pool-specific headers. [e2e-predictor] [e2e-predictor] :param json: [e2e-predictor] Data to encode and send as JSON with UTF-encoded in the request body. [e2e-predictor] The ``"Content-Type"`` header will be set to ``"application/json"`` [e2e-predictor] unless specified otherwise. 
[e2e-predictor] """ [e2e-predictor] method = method.upper() [e2e-predictor] [e2e-predictor] if json is not None and body is not None: [e2e-predictor] raise TypeError( [e2e-predictor] "request got values for both 'body' and 'json' parameters which are mutually exclusive" [e2e-predictor] ) [e2e-predictor] [e2e-predictor] if json is not None: [e2e-predictor] if headers is None: [e2e-predictor] headers = self.headers [e2e-predictor] [e2e-predictor] if not ("content-type" in map(str.lower, headers.keys())): [e2e-predictor] headers = HTTPHeaderDict(headers) [e2e-predictor] headers["Content-Type"] = "application/json" [e2e-predictor] [e2e-predictor] body = _json.dumps(json, separators=(",", ":"), ensure_ascii=False).encode( [e2e-predictor] "utf-8" [e2e-predictor] ) [e2e-predictor] [e2e-predictor] if body is not None: [e2e-predictor] urlopen_kw["body"] = body [e2e-predictor] [e2e-predictor] if method in self._encode_url_methods: [e2e-predictor] > return self.request_encode_url( [e2e-predictor] method, [e2e-predictor] url, [e2e-predictor] fields=fields, # type: ignore[arg-type] [e2e-predictor] headers=headers, [e2e-predictor] **urlopen_kw, [e2e-predictor] ) [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/_request_methods.py:135: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] method = 'GET' [e2e-predictor] url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/api/v1/namespaces/kserve-ci-e2e-test/configmaps/odh-trusted-ca-bundle' [e2e-predictor] fields = [] [e2e-predictor] headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'} [e2e-predictor] urlopen_kw = {'preload_content': True, 'timeout': None} [e2e-predictor] extra_kw = {'headers': {'Accept': 'application/json', 'Content-Type': 'application/json', 
'User-Agent': 'OpenAPI-Generator/32.0.1/python'}, 'preload_content': True, 'timeout': None} [e2e-predictor] [e2e-predictor] def request_encode_url( [e2e-predictor] self, [e2e-predictor] method: str, [e2e-predictor] url: str, [e2e-predictor] fields: _TYPE_ENCODE_URL_FIELDS | None = None, [e2e-predictor] headers: typing.Mapping[str, str] | None = None, [e2e-predictor] **urlopen_kw: str, [e2e-predictor] ) -> BaseHTTPResponse: [e2e-predictor] """ [e2e-predictor] Make a request using :meth:`urlopen` with the ``fields`` encoded in [e2e-predictor] the url. This is useful for request methods like GET, HEAD, DELETE, etc. [e2e-predictor] [e2e-predictor] :param method: [e2e-predictor] HTTP request method (such as GET, POST, PUT, etc.) [e2e-predictor] [e2e-predictor] :param url: [e2e-predictor] The URL to perform the request on. [e2e-predictor] [e2e-predictor] :param fields: [e2e-predictor] Data to encode and send in the URL. [e2e-predictor] [e2e-predictor] :param headers: [e2e-predictor] Dictionary of custom headers to send, such as User-Agent, [e2e-predictor] If-None-Match, etc. If None, pool headers are used. If provided, [e2e-predictor] these headers completely replace any pool-specific headers. [e2e-predictor] """ [e2e-predictor] if headers is None: [e2e-predictor] headers = self.headers [e2e-predictor] [e2e-predictor] extra_kw: dict[str, typing.Any] = {"headers": headers} [e2e-predictor] extra_kw.update(urlopen_kw) [e2e-predictor] [e2e-predictor] if fields: [e2e-predictor] url += "?" 
+ urlencode(fields) [e2e-predictor] [e2e-predictor] > return self.urlopen(method, url, **extra_kw) [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/_request_methods.py:182: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] method = 'GET' [e2e-predictor] url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/api/v1/namespaces/kserve-ci-e2e-test/configmaps/odh-trusted-ca-bundle' [e2e-predictor] redirect = True [e2e-predictor] kw = {'assert_same_host': False, 'headers': {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}, 'preload_content': True, 'redirect': False, ...} [e2e-predictor] u = Url(scheme='https', auth=None, host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443, path='/api/v1/namespaces/kserve-ci-e2e-test/configmaps/odh-trusted-ca-bundle', query=None, fragment=None) [e2e-predictor] conn = [e2e-predictor] [e2e-predictor] def urlopen( # type: ignore[override] [e2e-predictor] self, method: str, url: str, redirect: bool = True, **kw: typing.Any [e2e-predictor] ) -> BaseHTTPResponse: [e2e-predictor] """ [e2e-predictor] Same as :meth:`urllib3.HTTPConnectionPool.urlopen` [e2e-predictor] with custom cross-host redirect logic and only sends the request-uri [e2e-predictor] portion of the ``url``. [e2e-predictor] [e2e-predictor] The given ``url`` parameter must be absolute, such that an appropriate [e2e-predictor] :class:`urllib3.connectionpool.ConnectionPool` can be chosen for it. [e2e-predictor] """ [e2e-predictor] u = parse_url(url) [e2e-predictor] [e2e-predictor] if u.scheme is None: [e2e-predictor] warnings.warn( [e2e-predictor] "URLs without a scheme (ie 'https://') are deprecated and will raise an error " [e2e-predictor] "in a future version of urllib3. 
To avoid this DeprecationWarning ensure all URLs " [e2e-predictor] "start with 'https://' or 'http://'. Read more in this issue: " [e2e-predictor] "https://github.com/urllib3/urllib3/issues/2920", [e2e-predictor] category=DeprecationWarning, [e2e-predictor] stacklevel=2, [e2e-predictor] ) [e2e-predictor] [e2e-predictor] conn = self.connection_from_host(u.host, port=u.port, scheme=u.scheme) [e2e-predictor] [e2e-predictor] kw["assert_same_host"] = False [e2e-predictor] kw["redirect"] = False [e2e-predictor] [e2e-predictor] if "headers" not in kw: [e2e-predictor] kw["headers"] = self.headers [e2e-predictor] [e2e-predictor] if self._proxy_requires_url_absolute_form(u): [e2e-predictor] response = conn.urlopen(method, url, **kw) [e2e-predictor] else: [e2e-predictor] > response = conn.urlopen(method, u.request_uri, **kw) [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/poolmanager.py:457: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] method = 'GET' [e2e-predictor] url = '/api/v1/namespaces/kserve-ci-e2e-test/configmaps/odh-trusted-ca-bundle' [e2e-predictor] body = None [e2e-predictor] headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'} [e2e-predictor] retries = Retry(total=2, connect=None, read=None, redirect=None, status=None) [e2e-predictor] redirect = False, assert_same_host = False, timeout = None, pool_timeout = None [e2e-predictor] release_conn = True, chunked = False, body_pos = None, preload_content = True [e2e-predictor] decode_content = True, response_kw = {} [e2e-predictor] parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/api/v1/namespaces/kserve-ci-e2e-test/configmaps/odh-trusted-ca-bundle', query=None, fragment=None) [e2e-predictor] destination_scheme = None, conn = None, release_this_conn = True [e2e-predictor] 
http_tunnel_required = False [e2e-predictor] err = NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.c...a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)") [e2e-predictor] clean_exit = False [e2e-predictor] [e2e-predictor] def urlopen( # type: ignore[override] [e2e-predictor] self, [e2e-predictor] method: str, [e2e-predictor] url: str, [e2e-predictor] body: _TYPE_BODY | None = None, [e2e-predictor] headers: typing.Mapping[str, str] | None = None, [e2e-predictor] retries: Retry | bool | int | None = None, [e2e-predictor] redirect: bool = True, [e2e-predictor] assert_same_host: bool = True, [e2e-predictor] timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, [e2e-predictor] pool_timeout: int | None = None, [e2e-predictor] release_conn: bool | None = None, [e2e-predictor] chunked: bool = False, [e2e-predictor] body_pos: _TYPE_BODY_POSITION | None = None, [e2e-predictor] preload_content: bool = True, [e2e-predictor] decode_content: bool = True, [e2e-predictor] **response_kw: typing.Any, [e2e-predictor] ) -> BaseHTTPResponse: [e2e-predictor] """ [e2e-predictor] Get a connection from the pool and perform an HTTP request. This is the [e2e-predictor] lowest level call for making a request, so you'll need to specify all [e2e-predictor] the raw details. [e2e-predictor] [e2e-predictor] .. note:: [e2e-predictor] [e2e-predictor] More commonly, it's appropriate to use a convenience method [e2e-predictor] such as :meth:`request`. [e2e-predictor] [e2e-predictor] .. note:: [e2e-predictor] [e2e-predictor] `release_conn` will only behave as expected if [e2e-predictor] `preload_content=False` because we want to make [e2e-predictor] `preload_content=False` the default behaviour someday soon without [e2e-predictor] breaking backwards compatibility. [e2e-predictor] [e2e-predictor] :param method: [e2e-predictor] HTTP request method (such as GET, POST, PUT, etc.) 
[e2e-predictor] [e2e-predictor] :param url: [e2e-predictor] The URL to perform the request on. [e2e-predictor] [e2e-predictor] :param body: [e2e-predictor] Data to send in the request body, either :class:`str`, :class:`bytes`, [e2e-predictor] an iterable of :class:`str`/:class:`bytes`, or a file-like object. [e2e-predictor] [e2e-predictor] :param headers: [e2e-predictor] Dictionary of custom headers to send, such as User-Agent, [e2e-predictor] If-None-Match, etc. If None, pool headers are used. If provided, [e2e-predictor] these headers completely replace any pool-specific headers. [e2e-predictor] [e2e-predictor] :param retries: [e2e-predictor] Configure the number of retries to allow before raising a [e2e-predictor] :class:`~urllib3.exceptions.MaxRetryError` exception. [e2e-predictor] [e2e-predictor] If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a [e2e-predictor] :class:`~urllib3.util.retry.Retry` object for fine-grained control [e2e-predictor] over different types of retries. [e2e-predictor] Pass an integer number to retry connection errors that many times, [e2e-predictor] but no other types of errors. Pass zero to never retry. [e2e-predictor] [e2e-predictor] If ``False``, then retries are disabled and any exception is raised [e2e-predictor] immediately. Also, instead of raising a MaxRetryError on redirects, [e2e-predictor] the redirect response will be returned. [e2e-predictor] [e2e-predictor] :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. [e2e-predictor] [e2e-predictor] :param redirect: [e2e-predictor] If True, automatically handle redirects (status codes 301, 302, [e2e-predictor] 303, 307, 308). Each redirect counts as a retry. Disabling retries [e2e-predictor] will disable redirect, too. [e2e-predictor] [e2e-predictor] :param assert_same_host: [e2e-predictor] If ``True``, will make sure that the host of the pool requests is [e2e-predictor] consistent else will raise HostChangedError. 
When ``False``, you can [e2e-predictor] use the pool on an HTTP proxy and request foreign hosts. [e2e-predictor] [e2e-predictor] :param timeout: [e2e-predictor] If specified, overrides the default timeout for this one [e2e-predictor] request. It may be a float (in seconds) or an instance of [e2e-predictor] :class:`urllib3.util.Timeout`. [e2e-predictor] [e2e-predictor] :param pool_timeout: [e2e-predictor] If set and the pool is set to block=True, then this method will [e2e-predictor] block for ``pool_timeout`` seconds and raise EmptyPoolError if no [e2e-predictor] connection is available within the time period. [e2e-predictor] [e2e-predictor] :param bool preload_content: [e2e-predictor] If True, the response's body will be preloaded into memory. [e2e-predictor] [e2e-predictor] :param bool decode_content: [e2e-predictor] If True, will attempt to decode the body based on the [e2e-predictor] 'content-encoding' header. [e2e-predictor] [e2e-predictor] :param release_conn: [e2e-predictor] If False, then the urlopen call will not release the connection [e2e-predictor] back into the pool once a response is received (but will release if [e2e-predictor] you read the entire contents of the response such as when [e2e-predictor] `preload_content=True`). This is useful if you're not preloading [e2e-predictor] the response's content immediately. You will need to call [e2e-predictor] ``r.release_conn()`` on the response ``r`` to return the connection [e2e-predictor] back into the pool. If None, it takes the value of ``preload_content`` [e2e-predictor] which defaults to ``True``. [e2e-predictor] [e2e-predictor] :param bool chunked: [e2e-predictor] If True, urllib3 will send the body using chunked transfer [e2e-predictor] encoding. Otherwise, urllib3 will send the body using the standard [e2e-predictor] content-length form. Defaults to False. 
[e2e-predictor] [e2e-predictor] :param int body_pos: [e2e-predictor] Position to seek to in file-like body in the event of a retry or [e2e-predictor] redirect. Typically this won't need to be set because urllib3 will [e2e-predictor] auto-populate the value when needed. [e2e-predictor] """ [e2e-predictor] parsed_url = parse_url(url) [e2e-predictor] destination_scheme = parsed_url.scheme [e2e-predictor] [e2e-predictor] if headers is None: [e2e-predictor] headers = self.headers [e2e-predictor] [e2e-predictor] if not isinstance(retries, Retry): [e2e-predictor] retries = Retry.from_int(retries, redirect=redirect, default=self.retries) [e2e-predictor] [e2e-predictor] if release_conn is None: [e2e-predictor] release_conn = preload_content [e2e-predictor] [e2e-predictor] # Check host [e2e-predictor] if assert_same_host and not self.is_same_host(url): [e2e-predictor] raise HostChangedError(self, url, retries) [e2e-predictor] [e2e-predictor] # Ensure that the URL we're connecting to is properly encoded [e2e-predictor] if url.startswith("/"): [e2e-predictor] url = to_str(_encode_target(url)) [e2e-predictor] else: [e2e-predictor] url = to_str(parsed_url.url) [e2e-predictor] [e2e-predictor] conn = None [e2e-predictor] [e2e-predictor] # Track whether `conn` needs to be released before [e2e-predictor] # returning/raising/recursing. Update this variable if necessary, and [e2e-predictor] # leave `release_conn` constant throughout the function. That way, if [e2e-predictor] # the function recurses, the original value of `release_conn` will be [e2e-predictor] # passed down into the recursive call, and its value will be respected. [e2e-predictor] # [e2e-predictor] # See issue #651 [1] for details. 
[e2e-predictor] # [e2e-predictor] # [1] [e2e-predictor] release_this_conn = release_conn [e2e-predictor] [e2e-predictor] http_tunnel_required = connection_requires_http_tunnel( [e2e-predictor] self.proxy, self.proxy_config, destination_scheme [e2e-predictor] ) [e2e-predictor] [e2e-predictor] # Merge the proxy headers. Only done when not using HTTP CONNECT. We [e2e-predictor] # have to copy the headers dict so we can safely change it without those [e2e-predictor] # changes being reflected in anyone else's copy. [e2e-predictor] if not http_tunnel_required: [e2e-predictor] headers = headers.copy() # type: ignore[attr-defined] [e2e-predictor] headers.update(self.proxy_headers) # type: ignore[union-attr] [e2e-predictor] [e2e-predictor] # Must keep the exception bound to a separate variable or else Python 3 [e2e-predictor] # complains about UnboundLocalError. [e2e-predictor] err = None [e2e-predictor] [e2e-predictor] # Keep track of whether we cleanly exited the except block. This [e2e-predictor] # ensures we do proper cleanup in finally. [e2e-predictor] clean_exit = False [e2e-predictor] [e2e-predictor] # Rewind body position, if needed. Record current position [e2e-predictor] # for future rewinds in the event of a redirect/retry. [e2e-predictor] body_pos = set_file_position(body, body_pos) [e2e-predictor] [e2e-predictor] try: [e2e-predictor] # Request a connection from the queue. [e2e-predictor] timeout_obj = self._get_timeout(timeout) [e2e-predictor] conn = self._get_conn(timeout=pool_timeout) [e2e-predictor] [e2e-predictor] conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] [e2e-predictor] [e2e-predictor] # Is this a closed/new connection that requires CONNECT tunnelling? 
[e2e-predictor] if self.proxy is not None and http_tunnel_required and conn.is_closed: [e2e-predictor] try: [e2e-predictor] self._prepare_proxy(conn) [e2e-predictor] except (BaseSSLError, OSError, SocketTimeout) as e: [e2e-predictor] self._raise_timeout( [e2e-predictor] err=e, url=self.proxy.url, timeout_value=conn.timeout [e2e-predictor] ) [e2e-predictor] raise [e2e-predictor] [e2e-predictor] # If we're going to release the connection in ``finally:``, then [e2e-predictor] # the response doesn't need to know about the connection. Otherwise [e2e-predictor] # it will also try to release it and we'll have a double-release [e2e-predictor] # mess. [e2e-predictor] response_conn = conn if not release_conn else None [e2e-predictor] [e2e-predictor] # Make the request on the HTTPConnection object [e2e-predictor] response = self._make_request( [e2e-predictor] conn, [e2e-predictor] method, [e2e-predictor] url, [e2e-predictor] timeout=timeout_obj, [e2e-predictor] body=body, [e2e-predictor] headers=headers, [e2e-predictor] chunked=chunked, [e2e-predictor] retries=retries, [e2e-predictor] response_conn=response_conn, [e2e-predictor] preload_content=preload_content, [e2e-predictor] decode_content=decode_content, [e2e-predictor] **response_kw, [e2e-predictor] ) [e2e-predictor] [e2e-predictor] # Everything went great! [e2e-predictor] clean_exit = True [e2e-predictor] [e2e-predictor] except EmptyPoolError: [e2e-predictor] # Didn't get a connection from the pool, no need to clean up [e2e-predictor] clean_exit = True [e2e-predictor] release_this_conn = False [e2e-predictor] raise [e2e-predictor] [e2e-predictor] except ( [e2e-predictor] TimeoutError, [e2e-predictor] HTTPException, [e2e-predictor] OSError, [e2e-predictor] ProtocolError, [e2e-predictor] BaseSSLError, [e2e-predictor] SSLError, [e2e-predictor] CertificateError, [e2e-predictor] ProxyError, [e2e-predictor] ) as e: [e2e-predictor] # Discard the connection for these exceptions. 
It will be [e2e-predictor] # replaced during the next _get_conn() call. [e2e-predictor] clean_exit = False [e2e-predictor] new_e: Exception = e [e2e-predictor] if isinstance(e, (BaseSSLError, CertificateError)): [e2e-predictor] new_e = SSLError(e) [e2e-predictor] if isinstance( [e2e-predictor] new_e, [e2e-predictor] ( [e2e-predictor] OSError, [e2e-predictor] NewConnectionError, [e2e-predictor] TimeoutError, [e2e-predictor] SSLError, [e2e-predictor] HTTPException, [e2e-predictor] ), [e2e-predictor] ) and (conn and conn.proxy and not conn.has_connected_to_proxy): [e2e-predictor] new_e = _wrap_proxy_error(new_e, conn.proxy.scheme) [e2e-predictor] elif isinstance(new_e, (OSError, HTTPException)): [e2e-predictor] new_e = ProtocolError("Connection aborted.", new_e) [e2e-predictor] [e2e-predictor] retries = retries.increment( [e2e-predictor] method, url, error=new_e, _pool=self, _stacktrace=sys.exc_info()[2] [e2e-predictor] ) [e2e-predictor] retries.sleep() [e2e-predictor] [e2e-predictor] # Keep track of the error for the retry warning. [e2e-predictor] err = e [e2e-predictor] [e2e-predictor] finally: [e2e-predictor] if not clean_exit: [e2e-predictor] # We hit some kind of exception, handled or otherwise. We need [e2e-predictor] # to throw the connection away unless explicitly told not to. [e2e-predictor] # Close the connection, set the variable to None, and make sure [e2e-predictor] # we put the None back in the pool to avoid leaking it. [e2e-predictor] if conn: [e2e-predictor] conn.close() [e2e-predictor] conn = None [e2e-predictor] release_this_conn = True [e2e-predictor] [e2e-predictor] if release_this_conn: [e2e-predictor] # Put the connection back to be reused. If the connection is [e2e-predictor] # expired then it will be None, which will get replaced with a [e2e-predictor] # fresh connection during _get_conn. 
[e2e-predictor] self._put_conn(conn) [e2e-predictor] [e2e-predictor] if not conn: [e2e-predictor] # Try again [e2e-predictor] log.warning( [e2e-predictor] "Retrying (%r) after connection broken by '%r': %s", retries, err, url [e2e-predictor] ) [e2e-predictor] > return self.urlopen( [e2e-predictor] method, [e2e-predictor] url, [e2e-predictor] body, [e2e-predictor] headers, [e2e-predictor] retries, [e2e-predictor] redirect, [e2e-predictor] assert_same_host, [e2e-predictor] timeout=timeout, [e2e-predictor] pool_timeout=pool_timeout, [e2e-predictor] release_conn=release_conn, [e2e-predictor] chunked=chunked, [e2e-predictor] body_pos=body_pos, [e2e-predictor] preload_content=preload_content, [e2e-predictor] decode_content=decode_content, [e2e-predictor] **response_kw, [e2e-predictor] ) [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:871: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] method = 'GET' [e2e-predictor] url = '/api/v1/namespaces/kserve-ci-e2e-test/configmaps/odh-trusted-ca-bundle' [e2e-predictor] body = None [e2e-predictor] headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'} [e2e-predictor] retries = Retry(total=1, connect=None, read=None, redirect=None, status=None) [e2e-predictor] redirect = False, assert_same_host = False, timeout = None, pool_timeout = None [e2e-predictor] release_conn = True, chunked = False, body_pos = None, preload_content = True [e2e-predictor] decode_content = True, response_kw = {} [e2e-predictor] parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/api/v1/namespaces/kserve-ci-e2e-test/configmaps/odh-trusted-ca-bundle', query=None, fragment=None) [e2e-predictor] destination_scheme = None, conn = None, release_this_conn = True [e2e-predictor] http_tunnel_required = False 
[e2e-predictor] err = NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.c...a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)") [e2e-predictor] clean_exit = False
[e2e-predictor] [second retry: the urlopen frame above repeats verbatim and recurses again at ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:871]
[e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
[e2e-predictor] self = [e2e-predictor] method = 'GET' [e2e-predictor] url = '/api/v1/namespaces/kserve-ci-e2e-test/configmaps/odh-trusted-ca-bundle' [e2e-predictor] body = None [e2e-predictor] headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'} [e2e-predictor] retries = Retry(total=0, connect=None, read=None, redirect=None, status=None) [e2e-predictor] redirect = False, assert_same_host = False, timeout = None, pool_timeout = None [e2e-predictor] release_conn = True, chunked = False, body_pos = None, preload_content = True [e2e-predictor] decode_content = True, response_kw = {} [e2e-predictor] parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/api/v1/namespaces/kserve-ci-e2e-test/configmaps/odh-trusted-ca-bundle', query=None, fragment=None) [e2e-predictor] destination_scheme = None, conn = None, release_this_conn = True [e2e-predictor] http_tunnel_required = False
err = NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.c...a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")
clean_exit = False

def urlopen(  # type: ignore[override]
    self,
    method: str,
    url: str,
    body: _TYPE_BODY | None = None,
    headers: typing.Mapping[str, str] | None = None,
    retries: Retry | bool | int | None = None,
    redirect: bool = True,
    assert_same_host: bool = True,
    timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
    pool_timeout: int | None = None,
    release_conn: bool | None = None,
    chunked: bool = False,
    body_pos: _TYPE_BODY_POSITION | None = None,
    preload_content: bool = True,
    decode_content: bool = True,
    **response_kw: typing.Any,
) -> BaseHTTPResponse:
    """
    Get a connection from the pool and perform an HTTP request. This is the
    lowest level call for making a request, so you'll need to specify all
    the raw details.

    .. note::

       More commonly, it's appropriate to use a convenience method
       such as :meth:`request`.

    .. note::

       `release_conn` will only behave as expected if
       `preload_content=False` because we want to make
       `preload_content=False` the default behaviour someday soon without
       breaking backwards compatibility.

    :param method:
        HTTP request method (such as GET, POST, PUT, etc.)

    :param url:
        The URL to perform the request on.

    :param body:
        Data to send in the request body, either :class:`str`, :class:`bytes`,
        an iterable of :class:`str`/:class:`bytes`, or a file-like object.

    :param headers:
        Dictionary of custom headers to send, such as User-Agent,
        If-None-Match, etc. If None, pool headers are used. If provided,
        these headers completely replace any pool-specific headers.

    :param retries:
        Configure the number of retries to allow before raising a
        :class:`~urllib3.exceptions.MaxRetryError` exception.

        If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a
        :class:`~urllib3.util.retry.Retry` object for fine-grained control
        over different types of retries.
        Pass an integer number to retry connection errors that many times,
        but no other types of errors. Pass zero to never retry.

        If ``False``, then retries are disabled and any exception is raised
        immediately. Also, instead of raising a MaxRetryError on redirects,
        the redirect response will be returned.

    :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.

    :param redirect:
        If True, automatically handle redirects (status codes 301, 302,
        303, 307, 308). Each redirect counts as a retry. Disabling retries
        will disable redirect, too.

    :param assert_same_host:
        If ``True``, will make sure that the host of the pool requests is
        consistent else will raise HostChangedError. When ``False``, you can
        use the pool on an HTTP proxy and request foreign hosts.

    :param timeout:
        If specified, overrides the default timeout for this one
        request. It may be a float (in seconds) or an instance of
        :class:`urllib3.util.Timeout`.

    :param pool_timeout:
        If set and the pool is set to block=True, then this method will
        block for ``pool_timeout`` seconds and raise EmptyPoolError if no
        connection is available within the time period.

    :param bool preload_content:
        If True, the response's body will be preloaded into memory.

    :param bool decode_content:
        If True, will attempt to decode the body based on the
        'content-encoding' header.

    :param release_conn:
        If False, then the urlopen call will not release the connection
        back into the pool once a response is received (but will release if
        you read the entire contents of the response such as when
        `preload_content=True`). This is useful if you're not preloading
        the response's content immediately. You will need to call
        ``r.release_conn()`` on the response ``r`` to return the connection
        back into the pool. If None, it takes the value of ``preload_content``
        which defaults to ``True``.

    :param bool chunked:
        If True, urllib3 will send the body using chunked transfer
        encoding. Otherwise, urllib3 will send the body using the standard
        content-length form. Defaults to False.

    :param int body_pos:
        Position to seek to in file-like body in the event of a retry or
        redirect. Typically this won't need to be set because urllib3 will
        auto-populate the value when needed.
    """
    parsed_url = parse_url(url)
    destination_scheme = parsed_url.scheme

    if headers is None:
        headers = self.headers

    if not isinstance(retries, Retry):
        retries = Retry.from_int(retries, redirect=redirect, default=self.retries)

    if release_conn is None:
        release_conn = preload_content

    # Check host
    if assert_same_host and not self.is_same_host(url):
        raise HostChangedError(self, url, retries)

    # Ensure that the URL we're connecting to is properly encoded
    if url.startswith("/"):
        url = to_str(_encode_target(url))
    else:
        url = to_str(parsed_url.url)

    conn = None

    # Track whether `conn` needs to be released before
    # returning/raising/recursing. Update this variable if necessary, and
    # leave `release_conn` constant throughout the function. That way, if
    # the function recurses, the original value of `release_conn` will be
    # passed down into the recursive call, and its value will be respected.
    #
    # See issue #651 [1] for details.
    #
    # [1]
    release_this_conn = release_conn

    http_tunnel_required = connection_requires_http_tunnel(
        self.proxy, self.proxy_config, destination_scheme
    )

    # Merge the proxy headers. Only done when not using HTTP CONNECT. We
    # have to copy the headers dict so we can safely change it without those
    # changes being reflected in anyone else's copy.
    if not http_tunnel_required:
        headers = headers.copy()  # type: ignore[attr-defined]
        headers.update(self.proxy_headers)  # type: ignore[union-attr]

    # Must keep the exception bound to a separate variable or else Python 3
    # complains about UnboundLocalError.
    err = None

    # Keep track of whether we cleanly exited the except block. This
    # ensures we do proper cleanup in finally.
    clean_exit = False

    # Rewind body position, if needed. Record current position
    # for future rewinds in the event of a redirect/retry.
    body_pos = set_file_position(body, body_pos)

    try:
        # Request a connection from the queue.
        timeout_obj = self._get_timeout(timeout)
        conn = self._get_conn(timeout=pool_timeout)

        conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]

        # Is this a closed/new connection that requires CONNECT tunnelling?
        if self.proxy is not None and http_tunnel_required and conn.is_closed:
            try:
                self._prepare_proxy(conn)
            except (BaseSSLError, OSError, SocketTimeout) as e:
                self._raise_timeout(
                    err=e, url=self.proxy.url, timeout_value=conn.timeout
                )
                raise

        # If we're going to release the connection in ``finally:``, then
        # the response doesn't need to know about the connection. Otherwise
        # it will also try to release it and we'll have a double-release
        # mess.
        response_conn = conn if not release_conn else None

        # Make the request on the HTTPConnection object
        response = self._make_request(
            conn, method, url, timeout=timeout_obj, body=body, headers=headers,
            chunked=chunked, retries=retries, response_conn=response_conn,
            preload_content=preload_content, decode_content=decode_content,
            **response_kw,
        )

        # Everything went great!
        clean_exit = True

    except EmptyPoolError:
        # Didn't get a connection from the pool, no need to clean up
        clean_exit = True
        release_this_conn = False
        raise

    except (
        TimeoutError, HTTPException, OSError, ProtocolError,
        BaseSSLError, SSLError, CertificateError, ProxyError,
    ) as e:
        # Discard the connection for these exceptions.
It will be
        # replaced during the next _get_conn() call.
        clean_exit = False
        new_e: Exception = e
        if isinstance(e, (BaseSSLError, CertificateError)):
            new_e = SSLError(e)
        if isinstance(
            new_e,
            (OSError, NewConnectionError, TimeoutError, SSLError, HTTPException),
        ) and (conn and conn.proxy and not conn.has_connected_to_proxy):
            new_e = _wrap_proxy_error(new_e, conn.proxy.scheme)
        elif isinstance(new_e, (OSError, HTTPException)):
            new_e = ProtocolError("Connection aborted.", new_e)

        retries = retries.increment(
            method, url, error=new_e, _pool=self, _stacktrace=sys.exc_info()[2]
        )
        retries.sleep()

        # Keep track of the error for the retry warning.
        err = e

    finally:
        if not clean_exit:
            # We hit some kind of exception, handled or otherwise. We need
            # to throw the connection away unless explicitly told not to.
            # Close the connection, set the variable to None, and make sure
            # we put the None back in the pool to avoid leaking it.
            if conn:
                conn.close()
                conn = None
            release_this_conn = True

        if release_this_conn:
            # Put the connection back to be reused. If the connection is
            # expired then it will be None, which will get replaced with a
            # fresh connection during _get_conn.
            self._put_conn(conn)

    if not conn:
        # Try again
        log.warning(
            "Retrying (%r) after connection broken by '%r': %s", retries, err, url
        )
>       return self.urlopen(
            method, url, body, headers, retries, redirect, assert_same_host,
            timeout=timeout, pool_timeout=pool_timeout, release_conn=release_conn,
            chunked=chunked, body_pos=body_pos, preload_content=preload_content,
            decode_content=decode_content, **response_kw,
        )

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:871:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
method = 'GET'
url = '/api/v1/namespaces/kserve-ci-e2e-test/configmaps/odh-trusted-ca-bundle'
body = None
headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
retries = Retry(total=0, connect=None, read=None, redirect=None, status=None)
redirect = False, assert_same_host = False, timeout = None, pool_timeout = None
release_conn = True, chunked = False, body_pos = None, preload_content = True
decode_content = True, response_kw = {}
parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/api/v1/namespaces/kserve-ci-e2e-test/configmaps/odh-trusted-ca-bundle', query=None, fragment=None)
destination_scheme = None, conn = None, release_this_conn = True
http_tunnel_required = False, err = None, clean_exit = False

def urlopen(  # type: ignore[override]
    self,
    method: str,
    url: str,
    body: _TYPE_BODY | None = None,
    headers: typing.Mapping[str, str] | None = None,
    retries: Retry | bool | int | None = None,
    redirect: bool = True,
    assert_same_host: bool = True,
    timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
    pool_timeout: int | None = None,
    release_conn: bool | None = None,
    chunked: bool = False,
    body_pos: _TYPE_BODY_POSITION | None = None,
    preload_content: bool = True,
    decode_content: bool = True,
    **response_kw: typing.Any,
) -> BaseHTTPResponse:
    """
    Get a connection from the pool and perform an HTTP request. This is the
    lowest level call for making a request, so you'll need to specify all
    the raw details.

    .. note::

       More commonly, it's appropriate to use a convenience method
       such as :meth:`request`.

    .. note::

       `release_conn` will only behave as expected if
       `preload_content=False` because we want to make
       `preload_content=False` the default behaviour someday soon without
       breaking backwards compatibility.

    :param method:
        HTTP request method (such as GET, POST, PUT, etc.)

    :param url:
        The URL to perform the request on.
    :param body:
        Data to send in the request body, either :class:`str`, :class:`bytes`,
        an iterable of :class:`str`/:class:`bytes`, or a file-like object.

    :param headers:
        Dictionary of custom headers to send, such as User-Agent,
        If-None-Match, etc. If None, pool headers are used. If provided,
        these headers completely replace any pool-specific headers.

    :param retries:
        Configure the number of retries to allow before raising a
        :class:`~urllib3.exceptions.MaxRetryError` exception.

        If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a
        :class:`~urllib3.util.retry.Retry` object for fine-grained control
        over different types of retries.
        Pass an integer number to retry connection errors that many times,
        but no other types of errors. Pass zero to never retry.

        If ``False``, then retries are disabled and any exception is raised
        immediately. Also, instead of raising a MaxRetryError on redirects,
        the redirect response will be returned.

    :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.

    :param redirect:
        If True, automatically handle redirects (status codes 301, 302,
        303, 307, 308). Each redirect counts as a retry. Disabling retries
        will disable redirect, too.

    :param assert_same_host:
        If ``True``, will make sure that the host of the pool requests is
        consistent else will raise HostChangedError. When ``False``, you can
        use the pool on an HTTP proxy and request foreign hosts.

    :param timeout:
        If specified, overrides the default timeout for this one
        request. It may be a float (in seconds) or an instance of
        :class:`urllib3.util.Timeout`.

    :param pool_timeout:
        If set and the pool is set to block=True, then this method will
        block for ``pool_timeout`` seconds and raise EmptyPoolError if no
        connection is available within the time period.

    :param bool preload_content:
        If True, the response's body will be preloaded into memory.

    :param bool decode_content:
        If True, will attempt to decode the body based on the
        'content-encoding' header.

    :param release_conn:
        If False, then the urlopen call will not release the connection
        back into the pool once a response is received (but will release if
        you read the entire contents of the response such as when
        `preload_content=True`). This is useful if you're not preloading
        the response's content immediately. You will need to call
        ``r.release_conn()`` on the response ``r`` to return the connection
        back into the pool. If None, it takes the value of ``preload_content``
        which defaults to ``True``.

    :param bool chunked:
        If True, urllib3 will send the body using chunked transfer
        encoding. Otherwise, urllib3 will send the body using the standard
        content-length form. Defaults to False.

    :param int body_pos:
        Position to seek to in file-like body in the event of a retry or
        redirect. Typically this won't need to be set because urllib3 will
        auto-populate the value when needed.
    """
    parsed_url = parse_url(url)
    destination_scheme = parsed_url.scheme

    if headers is None:
        headers = self.headers

    if not isinstance(retries, Retry):
        retries = Retry.from_int(retries, redirect=redirect, default=self.retries)

    if release_conn is None:
        release_conn = preload_content

    # Check host
    if assert_same_host and not self.is_same_host(url):
        raise HostChangedError(self, url, retries)

    # Ensure that the URL we're connecting to is properly encoded
    if url.startswith("/"):
        url = to_str(_encode_target(url))
    else:
        url = to_str(parsed_url.url)

    conn = None

    # Track whether `conn` needs to be released before
    # returning/raising/recursing. Update this variable if necessary, and
    # leave `release_conn` constant throughout the function. That way, if
    # the function recurses, the original value of `release_conn` will be
    # passed down into the recursive call, and its value will be respected.
    #
    # See issue #651 [1] for details.
    #
    # [1]
    release_this_conn = release_conn

    http_tunnel_required = connection_requires_http_tunnel(
        self.proxy, self.proxy_config, destination_scheme
    )

    # Merge the proxy headers. Only done when not using HTTP CONNECT. We
    # have to copy the headers dict so we can safely change it without those
    # changes being reflected in anyone else's copy.
    if not http_tunnel_required:
        headers = headers.copy()  # type: ignore[attr-defined]
        headers.update(self.proxy_headers)  # type: ignore[union-attr]

    # Must keep the exception bound to a separate variable or else Python 3
    # complains about UnboundLocalError.
    err = None

    # Keep track of whether we cleanly exited the except block. This
    # ensures we do proper cleanup in finally.
    clean_exit = False

    # Rewind body position, if needed. Record current position
    # for future rewinds in the event of a redirect/retry.
    body_pos = set_file_position(body, body_pos)

    try:
        # Request a connection from the queue.
        timeout_obj = self._get_timeout(timeout)
        conn = self._get_conn(timeout=pool_timeout)

        conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]

        # Is this a closed/new connection that requires CONNECT tunnelling?
        if self.proxy is not None and http_tunnel_required and conn.is_closed:
            try:
                self._prepare_proxy(conn)
            except (BaseSSLError, OSError, SocketTimeout) as e:
                self._raise_timeout(
                    err=e, url=self.proxy.url, timeout_value=conn.timeout
                )
                raise

        # If we're going to release the connection in ``finally:``, then
        # the response doesn't need to know about the connection. Otherwise
        # it will also try to release it and we'll have a double-release
        # mess.
        response_conn = conn if not release_conn else None

        # Make the request on the HTTPConnection object
        response = self._make_request(
            conn, method, url, timeout=timeout_obj, body=body, headers=headers,
            chunked=chunked, retries=retries, response_conn=response_conn,
            preload_content=preload_content, decode_content=decode_content,
            **response_kw,
        )

        # Everything went great!
        clean_exit = True

    except EmptyPoolError:
        # Didn't get a connection from the pool, no need to clean up
        clean_exit = True
        release_this_conn = False
        raise

    except (
        TimeoutError, HTTPException, OSError, ProtocolError,
        BaseSSLError, SSLError, CertificateError, ProxyError,
    ) as e:
        # Discard the connection for these exceptions. It will be
        # replaced during the next _get_conn() call.
        clean_exit = False
        new_e: Exception = e
        if isinstance(e, (BaseSSLError, CertificateError)):
            new_e = SSLError(e)
        if isinstance(
            new_e,
            (OSError, NewConnectionError, TimeoutError, SSLError, HTTPException),
        ) and (conn and conn.proxy and not conn.has_connected_to_proxy):
            new_e = _wrap_proxy_error(new_e, conn.proxy.scheme)
        elif isinstance(new_e, (OSError, HTTPException)):
            new_e = ProtocolError("Connection aborted.", new_e)

>       retries = retries.increment(
            method, url, error=new_e, _pool=self, _stacktrace=sys.exc_info()[2]
        )

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:841:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = Retry(total=0, connect=None, read=None, redirect=None, status=None)
method = 'GET'
url = '/api/v1/namespaces/kserve-ci-e2e-test/configmaps/odh-trusted-ca-bundle'
response = None
error = NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.c...a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")
_pool =
_stacktrace =

def increment(
    self,
    method: str | None = None,
    url: str | None = None,
    response: BaseHTTPResponse | None = None,
    error: Exception | None = None,
    _pool: ConnectionPool | None = None,
    _stacktrace: TracebackType | None = None,
) -> Self:
    """Return a new Retry object with incremented retry counters.

    :param response: A response object, or None, if the server did not
        return a response.
    :type response: :class:`~urllib3.response.BaseHTTPResponse`
    :param Exception error: An error encountered during the request, or
        None if the response was received successfully.

    :return: A new ``Retry`` object.
    """
    if self.total is False and error:
        # Disabled, indicate to re-raise the error.
        raise reraise(type(error), error, _stacktrace)

    total = self.total
    if total is not None:
        total -= 1

    connect = self.connect
    read = self.read
    redirect = self.redirect
    status_count = self.status
    other = self.other
    cause = "unknown"
    status = None
    redirect_location = None

    if error and self._is_connection_error(error):
        # Connect retry?
        if connect is False:
            raise reraise(type(error), error, _stacktrace)
        elif connect is not None:
            connect -= 1

    elif error and self._is_read_error(error):
        # Read retry?
        if read is False or method is None or not self._is_method_retryable(method):
            raise reraise(type(error), error, _stacktrace)
        elif read is not None:
            read -= 1

    elif error:
        # Other retry?
        if other is not None:
            other -= 1

    elif response and response.get_redirect_location():
        # Redirect retry?
        if redirect is not None:
            redirect -= 1
        cause = "too many redirects"
        response_redirect_location = response.get_redirect_location()
        if response_redirect_location:
            redirect_location = response_redirect_location
        status = response.status

    else:
        # Incrementing because of a server error like a 500 in
        # status_forcelist and the given method is in the allowed_methods
        cause = ResponseError.GENERIC_ERROR
        if response and response.status:
            if status_count is not None:
                status_count -= 1
            cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
            status = response.status

    history = self.history + (
        RequestHistory(method, url, error, status, redirect_location),
    )

    new_retry = self.new(
        total=total, connect=connect, read=read, redirect=redirect,
        status=status_count, other=other, history=history,
    )

    if new_retry.is_exhausted():
        reason = error or ResponseError(cause)
>       raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
E       urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Max retries exceeded with url: /api/v1/namespaces/kserve-ci-e2e-test/configmaps/odh-trusted-ca-bundle (Caused by
NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")) [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError [e2e-predictor] =================================== FAILURES =================================== [e2e-predictor] _________________________________ test_batcher _________________________________ [e2e-predictor] [gw0] linux -- Python 3.11.13 /workspace/source/python/kserve/.venv/bin/python [e2e-predictor] [e2e-predictor] rest_v1_client = [e2e-predictor] [e2e-predictor] @pytest.mark.predictor [e2e-predictor] @pytest.mark.asyncio(scope="session") [e2e-predictor] async def test_batcher(rest_v1_client): [e2e-predictor] service_name = "isvc-sklearn-batcher" [e2e-predictor] [e2e-predictor] predictor = V1beta1PredictorSpec( [e2e-predictor] batcher=V1beta1Batcher( [e2e-predictor] max_batch_size=32, [e2e-predictor] max_latency=5000, [e2e-predictor] ), [e2e-predictor] min_replicas=1, [e2e-predictor] sklearn=V1beta1SKLearnSpec( [e2e-predictor] storage_uri="gs://kfserving-examples/models/sklearn/1.0/model", [e2e-predictor] resources=V1ResourceRequirements( [e2e-predictor] requests={"cpu": "50m", "memory": "128Mi"}, [e2e-predictor] limits={"cpu": "100m", "memory": "256Mi"}, [e2e-predictor] ), [e2e-predictor] ), [e2e-predictor] ) [e2e-predictor] [e2e-predictor] isvc = V1beta1InferenceService( [e2e-predictor] api_version=constants.KSERVE_V1BETA1, [e2e-predictor] kind=constants.KSERVE_KIND_INFERENCESERVICE, [e2e-predictor] metadata=client.V1ObjectMeta( [e2e-predictor] name=service_name, [e2e-predictor] namespace=KSERVE_TEST_NAMESPACE, [e2e-predictor] labels={ [e2e-predictor] constants.KSERVE_LABEL_NETWORKING_VISIBILITY: constants.KSERVE_LABEL_NETWORKING_VISIBILITY_EXPOSED, 
                },
            ),
            spec=V1beta1InferenceServiceSpec(predictor=predictor),
        )
        kserve_client.create(isvc)
        try:
>           kserve_client.wait_isvc_ready(service_name, namespace=KSERVE_TEST_NAMESPACE)

batcher/test_batcher.py:65:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
name = 'isvc-sklearn-batcher', namespace = 'kserve-ci-e2e-test', watch = False
timeout_seconds = 600, polling_interval = 10, version = 'v1beta1'
expected_generation = None

    def wait_isvc_ready(
        self,
        name,
        namespace=None,  # pylint:disable=too-many-arguments
        watch=False,
        timeout_seconds=600,
        polling_interval=10,
        version=constants.KSERVE_V1BETA1_VERSION,
        expected_generation=None,
    ):
        """
        Waiting for inference service ready, print out the inference service if timeout.
        :param name: inference service name
        :param namespace: defaults to current or default namespace
        :param watch: True to watch the service until timeout elapsed or status is ready
        :param timeout_seconds: timeout seconds for waiting, default to 600s.
            Print out the InferenceService if timeout.
        :param polling_interval: The time interval to poll status
        :param version: api group version
        :param expected_generation: optional minimum observed generation to consider ready
        :return:
        """
        if watch:
            isvc_watch(
                name=name,
                namespace=namespace,
                timeout_seconds=timeout_seconds,
                generation=expected_generation or 0,
            )
        else:
            for _ in range(round(timeout_seconds / polling_interval)):
                time.sleep(polling_interval)
                if self.is_isvc_ready(
                    name,
                    namespace=namespace,
                    version=version,
                    expected_generation=expected_generation,
                ):
                    return

            current_isvc = self.get(name, namespace=namespace, version=version)
            if expected_generation is None:
>               raise RuntimeError(
                    "Timeout to start the InferenceService {}. \
                    The InferenceService is as following: {}".format(
                        name, current_isvc
                    )
                )
E               RuntimeError: Timeout to start the InferenceService isvc-sklearn-batcher.
E               The InferenceService is as following: {'apiVersion': 'serving.kserve.io/v1beta1', 'kind': 'InferenceService', 'metadata': {'annotations': {'modelFormat': 'sklearn', 'serving.kserve.io/deploymentMode': 'Standard'}, 'creationTimestamp': '2026-04-22T18:47:10Z', 'finalizers': ['inferenceservice.finalizers', 'odh.inferenceservice.finalizers'], 'generation': 1, 'labels': {'networking.kserve.io/visibility': 'exposed'}, 'managedFields': [{'apiVersion': 'serving.kserve.io/v1beta1', 'fieldsType': 'FieldsV1', 'fieldsV1': {'f:metadata': {'f:labels': {'.': {}, 'f:networking.kserve.io/visibility': {}}}, 'f:spec': {'.': {}, 'f:predictor': {'.': {}, 'f:batcher': {'.': {}, 'f:maxBatchSize': {}, 'f:maxLatency': {}}, 'f:minReplicas': {}, 'f:sklearn': {'.': {}, 'f:name': {}, 'f:resources': {'.': {}, 'f:limits': {'.': {}, 'f:cpu': {}, 'f:memory': {}}, 'f:requests': {'.': {}, 'f:cpu': {}, 'f:memory': {}}}, 'f:storageUri': {}}}}}, 'manager': 'OpenAPI-Generator', 'operation': 'Update', 'time': '2026-04-22T18:47:10Z'}, {'apiVersion': 'serving.kserve.io/v1beta1', 'fieldsType': 'FieldsV1', 'fieldsV1': {'f:metadata': {'f:finalizers': {'.': {}, 'v:"inferenceservice.finalizers"': {}, 'v:"odh.inferenceservice.finalizers"': {}}}}, 'manager': 'manager', 'operation': 'Update', 'time': '2026-04-22T18:47:10Z'}], 'name': 'isvc-sklearn-batcher', 'namespace': 'kserve-ci-e2e-test', 'resourceVersion': '14232', 'uid': 'e652a56f-c2fd-4159-afce-3ad1f7ff2797'}, 'spec': {'predictor': {'automountServiceAccountToken': False, 'batcher': {'maxBatchSize': 32, 'maxLatency': 5000}, 'minReplicas': 1, 'model': {'modelFormat': {'name': 'sklearn'}, 'name': '', 'resources': {'limits': {'cpu': '100m', 'memory': '256Mi'}, 'requests': {'cpu': '50m', 'memory': '128Mi'}}, 'storageUri': 'gs://kfserving-examples/models/sklearn/1.0/model'}}}}

../../python/kserve/kserve/api/kserve_client.py:418: RuntimeError

During handling of the above exception, another exception occurred:

rest_v1_client =

    @pytest.mark.predictor
    @pytest.mark.asyncio(scope="session")
    async def test_batcher(rest_v1_client):
        service_name = "isvc-sklearn-batcher"

        predictor = V1beta1PredictorSpec(
            batcher=V1beta1Batcher(
                max_batch_size=32,
                max_latency=5000,
            ),
            min_replicas=1,
            sklearn=V1beta1SKLearnSpec(
                storage_uri="gs://kfserving-examples/models/sklearn/1.0/model",
                resources=V1ResourceRequirements(
                    requests={"cpu": "50m", "memory": "128Mi"},
                    limits={"cpu": "100m", "memory": "256Mi"},
                ),
            ),
        )

        isvc = V1beta1InferenceService(
            api_version=constants.KSERVE_V1BETA1,
            kind=constants.KSERVE_KIND_INFERENCESERVICE,
            metadata=client.V1ObjectMeta(
                name=service_name,
                namespace=KSERVE_TEST_NAMESPACE,
                labels={
                    constants.KSERVE_LABEL_NETWORKING_VISIBILITY: constants.KSERVE_LABEL_NETWORKING_VISIBILITY_EXPOSED,
                },
            ),
            spec=V1beta1InferenceServiceSpec(predictor=predictor),
        )
        kserve_client.create(isvc)
        try:
            kserve_client.wait_isvc_ready(service_name, namespace=KSERVE_TEST_NAMESPACE)
        except RuntimeError as e:
            print(
>               kserve_client.api_instance.get_namespaced_custom_object(
                    "serving.knative.dev",
                    "v1",
                    KSERVE_TEST_NAMESPACE,
                    "services",
                    service_name + "-predictor",
                )
            )

batcher/test_batcher.py:68:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
group = 'serving.knative.dev', version = 'v1', namespace = 'kserve-ci-e2e-test'
plural = 'services', name = 'isvc-sklearn-batcher-predictor'
kwargs = {'_return_http_data_only': True}

    def get_namespaced_custom_object(self, group, version, namespace, plural, name, **kwargs):  # noqa: E501
        """get_namespaced_custom_object  # noqa: E501

        Returns a namespace scoped custom object  # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.get_namespaced_custom_object(group, version, namespace, plural, name, async_req=True)
        >>> result = thread.get()

        :param async_req bool: execute request asynchronously
        :param str group: the custom resource's group (required)
        :param str version: the custom resource's version (required)
        :param str namespace: The custom resource's namespace (required)
        :param str plural: the custom resource's plural name. For TPRs this would be lowercase plural kind. (required)
        :param str name: the custom object's name (required)
        :param _preload_content: if False, the urllib3.HTTPResponse object will
                                 be returned without reading/decoding response
                                 data. Default is True.
        :param _request_timeout: timeout setting for this request. If one
                                 number provided, it will be total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
        :return: object
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
>       return self.get_namespaced_custom_object_with_http_info(group, version, namespace, plural, name, **kwargs)  # noqa: E501

../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api/custom_objects_api.py:1632:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
group = 'serving.knative.dev', version = 'v1', namespace = 'kserve-ci-e2e-test'
plural = 'services', name = 'isvc-sklearn-batcher-predictor'
kwargs = {'_return_http_data_only': True}
local_var_params = {'_return_http_data_only': True, 'all_params': ['group', 'version', 'namespace', 'plural', 'name', 'async_req', ...], 'auth_settings': ['BearerToken'], 'body_params': None, ...}
all_params = ['group', 'version', 'namespace', 'plural', 'name', 'async_req', ...]
key = '_return_http_data_only', val = True, collection_formats = {}
path_params = {'group': 'serving.knative.dev', 'name': 'isvc-sklearn-batcher-predictor', 'namespace': 'kserve-ci-e2e-test', 'plural': 'services', ...}
query_params = []

    def get_namespaced_custom_object_with_http_info(self, group, version, namespace, plural, name, **kwargs):  # noqa: E501
        """get_namespaced_custom_object  # noqa: E501

        Returns a namespace scoped custom object  # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.get_namespaced_custom_object_with_http_info(group, version, namespace, plural, name, async_req=True)
        >>> result = thread.get()

        :param async_req bool: execute request asynchronously
        :param str group: the custom resource's group (required)
        :param str version: the custom resource's version (required)
        :param str namespace: The custom resource's namespace (required)
        :param str plural: the custom resource's plural name. For TPRs this would be lowercase plural kind. (required)
        :param str name: the custom object's name (required)
        :param _return_http_data_only: response data without head status code
                                       and headers
        :param _preload_content: if False, the urllib3.HTTPResponse object will
                                 be returned without reading/decoding response
                                 data. Default is True.
        :param _request_timeout: timeout setting for this request. If one
                                 number provided, it will be total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
        :return: tuple(object, status_code(int), headers(HTTPHeaderDict))
                 If the method is called asynchronously,
                 returns the request thread.
        """

        local_var_params = locals()

        all_params = [
            'group',
            'version',
            'namespace',
            'plural',
            'name'
        ]
        all_params.extend(
            [
                'async_req',
                '_return_http_data_only',
                '_preload_content',
                '_request_timeout'
            ]
        )

        for key, val in six.iteritems(local_var_params['kwargs']):
            if key not in all_params:
                raise ApiTypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method get_namespaced_custom_object" % key
                )
            local_var_params[key] = val
        del local_var_params['kwargs']
        # verify the required parameter 'group' is set
        if self.api_client.client_side_validation and ('group' not in local_var_params or  # noqa: E501
                                                       local_var_params['group'] is None):  # noqa: E501
            raise ApiValueError("Missing the required parameter `group` when calling `get_namespaced_custom_object`")  # noqa: E501
        # verify the required parameter 'version' is set
        if self.api_client.client_side_validation and ('version' not in local_var_params or  # noqa: E501
                                                       local_var_params['version'] is None):  # noqa: E501
            raise ApiValueError("Missing the required parameter `version` when calling `get_namespaced_custom_object`")  # noqa: E501
        # verify the required parameter 'namespace' is set
        if self.api_client.client_side_validation and ('namespace' not in local_var_params or  # noqa: E501
                                                       local_var_params['namespace'] is None):  # noqa: E501
            raise ApiValueError("Missing the required parameter `namespace` when calling `get_namespaced_custom_object`")  # noqa: E501
        # verify the required parameter 'plural' is set
        if self.api_client.client_side_validation and ('plural' not in local_var_params or  # noqa: E501
                                                       local_var_params['plural'] is None):  # noqa: E501
            raise ApiValueError("Missing the required parameter `plural` when calling `get_namespaced_custom_object`")  # noqa: E501
        # verify the required parameter 'name' is set
        if self.api_client.client_side_validation and ('name' not in local_var_params or  # noqa: E501
                                                       local_var_params['name'] is None):  # noqa: E501
            raise ApiValueError("Missing the required parameter `name` when calling `get_namespaced_custom_object`")  # noqa: E501

        collection_formats = {}

        path_params = {}
        if 'group' in local_var_params:
            path_params['group'] = local_var_params['group']  # noqa: E501
        if 'version' in local_var_params:
            path_params['version'] = local_var_params['version']  # noqa: E501
        if 'namespace' in local_var_params:
            path_params['namespace'] = local_var_params['namespace']  # noqa: E501
        if 'plural' in local_var_params:
            path_params['plural'] = local_var_params['plural']  # noqa: E501
        if 'name' in local_var_params:
            path_params['name'] = local_var_params['name']  # noqa: E501

        query_params = []

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None
        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.select_header_accept(
            ['application/json'])  # noqa: E501

        # Authentication setting
        auth_settings = ['BearerToken']  # noqa: E501

>       return self.api_client.call_api(
            '/apis/{group}/{version}/namespaces/{namespace}/{plural}/{name}', 'GET',
            path_params,
            query_params,
            header_params,
            body=body_params,
            post_params=form_params,
            files=local_var_files,
            response_type='object',  # noqa: E501
            auth_settings=auth_settings,
            async_req=local_var_params.get('async_req'),
            _return_http_data_only=local_var_params.get('_return_http_data_only'),  # noqa: E501
            _preload_content=local_var_params.get('_preload_content', True),
            _request_timeout=local_var_params.get('_request_timeout'),
            collection_formats=collection_formats)

../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api/custom_objects_api.py:1739:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
resource_path = '/apis/{group}/{version}/namespaces/{namespace}/{plural}/{name}'
method = 'GET'
path_params = {'group': 'serving.knative.dev', 'name': 'isvc-sklearn-batcher-predictor', 'namespace': 'kserve-ci-e2e-test', 'plural': 'services', ...}
query_params = []
header_params = {'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
body = None, post_params = [], files = {}, response_type = 'object'
auth_settings = ['BearerToken'], async_req = None, _return_http_data_only = True
collection_formats = {}, _preload_content = True, _request_timeout = None
_host = None

    def call_api(self, resource_path,
                 method,
                 path_params=None, query_params=None, header_params=None,
                 body=None, post_params=None, files=None,
                 response_type=None, auth_settings=None, async_req=None,
                 _return_http_data_only=None, collection_formats=None,
                 _preload_content=True, _request_timeout=None, _host=None):
        """Makes the HTTP request (synchronous) and returns deserialized data.

        To make an async_req request, set the async_req parameter.

        :param resource_path: Path to method endpoint.
        :param method: Method to call.
        :param path_params: Path parameters in the url.
        :param query_params: Query parameters in the url.
        :param header_params: Header parameters to be
            placed in the request header.
        :param body: Request body.
        :param post_params dict: Request post form parameters,
            for `application/x-www-form-urlencoded`, `multipart/form-data`.
        :param auth_settings list: Auth Settings names for the request.
        :param response: Response data type.
        :param files dict: key -> filename, value -> filepath,
            for `multipart/form-data`.
        :param async_req bool: execute request asynchronously
        :param _return_http_data_only: response data without head status code
                                       and headers
        :param collection_formats: dict of collection formats for path, query,
            header, and post parameters.
        :param _preload_content: if False, the urllib3.HTTPResponse object will
                                 be returned without reading/decoding response
                                 data. Default is True.
        :param _request_timeout: timeout setting for this request. If one
                                 number provided, it will be total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
        :return:
            If async_req parameter is True,
            the request will be called asynchronously.
            The method will return the request thread.
            If parameter async_req is False or missing,
            then the method will return the response directly.
        """
        if not async_req:
>           return self.__call_api(resource_path, method,
                                   path_params, query_params, header_params,
                                   body, post_params, files,
                                   response_type, auth_settings,
                                   _return_http_data_only, collection_formats,
                                   _preload_content, _request_timeout, _host)

../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api_client.py:348:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
resource_path = '/apis/serving.knative.dev/v1/namespaces/kserve-ci-e2e-test/services/isvc-sklearn-batcher-predictor'
method = 'GET'
path_params = [('group', 'serving.knative.dev'), ('version', 'v1'), ('namespace', 'kserve-ci-e2e-test'), ('plural', 'services'), ('name', 'isvc-sklearn-batcher-predictor')]
query_params = []
header_params = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
body = None, post_params = [], files = {}, response_type = 'object'
auth_settings = ['BearerToken'], _return_http_data_only = True
collection_formats = {}, _preload_content = True, _request_timeout = None
_host = None

    def __call_api(
            self, resource_path, method, path_params=None,
            query_params=None, header_params=None, body=None, post_params=None,
            files=None, response_type=None, auth_settings=None,
            _return_http_data_only=None, collection_formats=None,
            _preload_content=True, _request_timeout=None, _host=None):

        config = self.configuration

        # header parameters
        header_params = header_params or {}
        header_params.update(self.default_headers)
        if self.cookie:
            header_params['Cookie'] = self.cookie
        if header_params:
            header_params = self.sanitize_for_serialization(header_params)
            header_params = dict(self.parameters_to_tuples(header_params,
                                                           collection_formats))

        # path parameters
        if path_params:
            path_params = self.sanitize_for_serialization(path_params)
            path_params = self.parameters_to_tuples(path_params,
                                                    collection_formats)
            for k, v in path_params:
                # specified safe chars, encode everything
                resource_path = resource_path.replace(
                    '{%s}' % k,
                    quote(str(v), safe=config.safe_chars_for_path_param)
                )

        # query parameters
        if query_params:
            query_params = self.sanitize_for_serialization(query_params)
            query_params = self.parameters_to_tuples(query_params,
                                                     collection_formats)

        # post parameters
        if post_params or files:
            post_params = post_params if post_params else []
            post_params = self.sanitize_for_serialization(post_params)
            post_params = self.parameters_to_tuples(post_params,
                                                    collection_formats)
            post_params.extend(self.files_parameters(files))

        # auth setting
        self.update_params_for_auth(header_params, query_params, auth_settings)

        # body
        if body:
            body = self.sanitize_for_serialization(body)

        # request url
        if _host is None:
            url = self.configuration.host + resource_path
        else:
            # use server/host defined in path or operation instead
            url = _host + resource_path

        # perform request and return response
>       response_data = self.request(
            method, url, query_params=query_params, headers=header_params,
            post_params=post_params, body=body,
            _preload_content=_preload_content,
            _request_timeout=_request_timeout)

../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api_client.py:180:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
method = 'GET'
url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.knative.dev/v1/namespaces/kserve-ci-e2e-test/services/isvc-sklearn-batcher-predictor'
query_params = []
headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
post_params = [], body = None, _preload_content = True, _request_timeout = None

    def request(self, method, url, query_params=None, headers=None,
                post_params=None, body=None, _preload_content=True,
                _request_timeout=None):
        """Makes the HTTP request using RESTClient."""
        if method == "GET":
>           return self.rest_client.GET(url,
                                        query_params=query_params,
                                        _preload_content=_preload_content,
                                        _request_timeout=_request_timeout,

../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api_client.py:373:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.knative.dev/v1/namespaces/kserve-ci-e2e-test/services/isvc-sklearn-batcher-predictor'
headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
query_params = [], _preload_content = True, _request_timeout = None

    def GET(self, url, headers=None, query_params=None, _preload_content=True,
            _request_timeout=None):
>       return self.request("GET", url,
                            headers=headers,
                            _preload_content=_preload_content,
                            _request_timeout=_request_timeout,
                            query_params=query_params)

../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/rest.py:244:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
method = 'GET'
url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.knative.dev/v1/namespaces/kserve-ci-e2e-test/services/isvc-sklearn-batcher-predictor'
query_params = []
headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent':
'OpenAPI-Generator/32.0.1/python'} [e2e-predictor] body = None, post_params = {}, _preload_content = True, _request_timeout = None [e2e-predictor] [e2e-predictor] def request(self, method, url, query_params=None, headers=None, [e2e-predictor] body=None, post_params=None, _preload_content=True, [e2e-predictor] _request_timeout=None): [e2e-predictor] """Perform requests. [e2e-predictor] [e2e-predictor] :param method: http request method [e2e-predictor] :param url: http request url [e2e-predictor] :param query_params: query parameters in the url [e2e-predictor] :param headers: http request headers [e2e-predictor] :param body: request json body, for `application/json` [e2e-predictor] :param post_params: request post parameters, [e2e-predictor] `application/x-www-form-urlencoded` [e2e-predictor] and `multipart/form-data` [e2e-predictor] :param _preload_content: if False, the urllib3.HTTPResponse object will [e2e-predictor] be returned without reading/decoding response [e2e-predictor] data. Default is True. [e2e-predictor] :param _request_timeout: timeout setting for this request. If one [e2e-predictor] number provided, it will be total request [e2e-predictor] timeout. It can also be a pair (tuple) of [e2e-predictor] (connection, read) timeouts. [e2e-predictor] """ [e2e-predictor] method = method.upper() [e2e-predictor] assert method in ['GET', 'HEAD', 'DELETE', 'POST', 'PUT', [e2e-predictor] 'PATCH', 'OPTIONS'] [e2e-predictor] [e2e-predictor] if post_params and body: [e2e-predictor] raise ApiValueError( [e2e-predictor] "body parameter cannot be used with post_params parameter." 
        )

        post_params = post_params or {}
        headers = headers or {}

        timeout = None
        if _request_timeout:
            if isinstance(_request_timeout, (int, ) if six.PY3 else (int, long)):  # noqa: E501,F821
                timeout = urllib3.Timeout(total=_request_timeout)
            elif (isinstance(_request_timeout, tuple) and
                  len(_request_timeout) == 2):
                timeout = urllib3.Timeout(
                    connect=_request_timeout[0], read=_request_timeout[1])

        if 'Content-Type' not in headers:
            headers['Content-Type'] = 'application/json'

        try:
            # For `POST`, `PUT`, `PATCH`, `OPTIONS`, `DELETE`
            if method in ['POST', 'PUT', 'PATCH', 'OPTIONS', 'DELETE']:
                if query_params:
                    url += '?' + urlencode(query_params)
                if (re.search('json', headers['Content-Type'], re.IGNORECASE) or
                        headers['Content-Type'] == 'application/apply-patch+yaml'):
                    if headers['Content-Type'] == 'application/json-patch+json':
                        if not isinstance(body, list):
                            headers['Content-Type'] = \
                                'application/strategic-merge-patch+json'
                    request_body = None
                    if body is not None:
                        request_body = json.dumps(body)
                    r = self.pool_manager.request(
                        method, url,
                        body=request_body,
                        preload_content=_preload_content,
                        timeout=timeout,
                        headers=headers)
                elif headers['Content-Type'] == 'application/x-www-form-urlencoded':  # noqa: E501
                    r = self.pool_manager.request(
                        method, url,
                        fields=post_params,
                        encode_multipart=False,
                        preload_content=_preload_content,
                        timeout=timeout,
                        headers=headers)
                elif headers['Content-Type'] == 'multipart/form-data':
                    # must del headers['Content-Type'], or the correct
                    # Content-Type which generated by urllib3 will be
                    # overwritten.
                    del headers['Content-Type']
                    r = self.pool_manager.request(
                        method, url,
                        fields=post_params,
                        encode_multipart=True,
                        preload_content=_preload_content,
                        timeout=timeout,
                        headers=headers)
                # Pass a `string` parameter directly in the body to support
                # other content types than Json when `body` argument is
                # provided in serialized form
                elif isinstance(body, str) or isinstance(body, bytes):
                    request_body = body
                    r = self.pool_manager.request(
                        method, url,
                        body=request_body,
                        preload_content=_preload_content,
                        timeout=timeout,
                        headers=headers)
                else:
                    # Cannot generate the request from given parameters
                    msg = """Cannot prepare a request message for provided
                             arguments. Please check that your arguments match
                             declared content type."""
                    raise ApiException(status=0, reason=msg)
            # For `GET`, `HEAD`
            else:
                r = self.pool_manager.request(method, url,
                                              fields=query_params,
                                              preload_content=_preload_content,
                                              timeout=timeout,
                                              headers=headers)
        except urllib3.exceptions.SSLError as e:
            msg = "{0}\n{1}".format(type(e).__name__, str(e))
            raise ApiException(status=0, reason=msg)

        if _preload_content:
            r = RESTResponse(r)

            # In the python 3, the response.data is bytes.
            # we need to decode it to string.
            if six.PY3:
                r.data = r.data.decode('utf8')

            # log response body
            logger.debug("response body: %s", r.data)

        if not 200 <= r.status <= 299:
>           raise ApiException(http_resp=r)
E           kubernetes.client.exceptions.ApiException: (404)
E           Reason: Not Found
E           HTTP response headers: HTTPHeaderDict({'Audit-Id': 'f459039d-cf75-4ae5-80bc-555faff62b55', 'Cache-Control': 'no-cache, private', 'Content-Type': 'text/plain; charset=utf-8', 'Strict-Transport-Security': 'max-age=31536000; includeSubDomains; preload', 'X-Content-Type-Options': 'nosniff', 'X-Kubernetes-Pf-Flowschema-Uid': '43eb87b7-b788-4fb9-a139-5d84c6b6c6b1', 'X-Kubernetes-Pf-Prioritylevel-Uid': '7bbf62a0-ac58-4885-b6fa-1cd5dc03d6b7', 'Date': 'Wed, 22 Apr 2026 18:57:10 GMT', 'Content-Length': '19'})
E           HTTP response body: 404 page not found

../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/rest.py:238: ApiException
------------------------------ Captured log setup ------------------------------
INFO     kserve:conftest.py:40 Logger configured
___________________________ test_batcher_custom_port ___________________________
[gw0] linux -- Python 3.11.13 /workspace/source/python/kserve/.venv/bin/python

rest_v1_client =

    @pytest.mark.predictor
    @pytest.mark.asyncio(scope="session")
    async def test_batcher_custom_port(rest_v1_client):
        service_name = "isvc-sklearn-batcher-custom"

        predictor = V1beta1PredictorSpec(
            batcher=V1beta1Batcher(
                max_batch_size=32,
                max_latency=5000,
            ),
            min_replicas=1,
            sklearn=V1beta1SKLearnSpec(
                args=["--http_port=5000"],
                storage_uri="gs://kfserving-examples/models/sklearn/1.0/model",
                resources=V1ResourceRequirements(
                    requests={"cpu": "50m", "memory": "128Mi"},
                    limits={"cpu": "100m", "memory": "256Mi"},
                ),
                ports=[V1ContainerPort(container_port=5000, protocol="TCP")],
            ),
        )

        isvc = V1beta1InferenceService(
            api_version=constants.KSERVE_V1BETA1,
            kind=constants.KSERVE_KIND_INFERENCESERVICE,
            metadata=client.V1ObjectMeta(
                name=service_name,
                namespace=KSERVE_TEST_NAMESPACE,
                labels={
                    constants.KSERVE_LABEL_NETWORKING_VISIBILITY: constants.KSERVE_LABEL_NETWORKING_VISIBILITY_EXPOSED,
                },
            ),
            spec=V1beta1InferenceServiceSpec(predictor=predictor),
        )
        kserve_client.create(isvc)
        try:
>           kserve_client.wait_isvc_ready(service_name,
                                          namespace=KSERVE_TEST_NAMESPACE)

batcher/test_batcher_custom_port.py:69:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
name = 'isvc-sklearn-batcher-custom', namespace = 'kserve-ci-e2e-test'
watch = False, timeout_seconds = 600, polling_interval = 10, version = 'v1beta1'
expected_generation = None

    def wait_isvc_ready(
        self,
        name,
        namespace=None,  # pylint:disable=too-many-arguments
        watch=False,
        timeout_seconds=600,
        polling_interval=10,
        version=constants.KSERVE_V1BETA1_VERSION,
        expected_generation=None,
    ):
        """
        Waiting for inference service ready, print out the inference service if timeout.
        :param name: inference service name
        :param namespace: defaults to current or default namespace
        :param watch: True to watch the service until timeout elapsed or status is ready
        :param timeout_seconds: timeout seconds for waiting, default to 600s.
               Print out the InferenceService if timeout.
        :param polling_interval: The time interval to poll status
        :param version: api group version
        :param expected_generation: optional minimum observed generation to consider ready
        :return:
        """
        if watch:
            isvc_watch(
                name=name,
                namespace=namespace,
                timeout_seconds=timeout_seconds,
                generation=expected_generation or 0,
            )
        else:
            for _ in range(round(timeout_seconds / polling_interval)):
                time.sleep(polling_interval)
                if self.is_isvc_ready(
                    name,
                    namespace=namespace,
                    version=version,
                    expected_generation=expected_generation,
                ):
                    return

            current_isvc = self.get(name, namespace=namespace, version=version)
            if expected_generation is None:
>               raise RuntimeError(
                    "Timeout to start the InferenceService {}. \
                    The InferenceService is as following: {}".format(
                        name, current_isvc
                    )
                )
E               RuntimeError: Timeout to start the InferenceService isvc-sklearn-batcher-custom.                     The InferenceService is as following: {'apiVersion': 'serving.kserve.io/v1beta1', 'kind': 'InferenceService', 'metadata': {'annotations': {'modelFormat': 'sklearn', 'serving.kserve.io/deploymentMode': 'Standard'}, 'creationTimestamp': '2026-04-22T18:57:11Z', 'finalizers': ['odh.inferenceservice.finalizers'], 'generation': 1, 'labels': {'networking.kserve.io/visibility': 'exposed'}, 'managedFields': [{'apiVersion': 'serving.kserve.io/v1beta1', 'fieldsType': 'FieldsV1', 'fieldsV1': {'f:metadata': {'f:labels': {'.': {}, 'f:networking.kserve.io/visibility': {}}}, 'f:spec': {'.': {}, 'f:predictor': {'.': {}, 'f:batcher': {'.': {}, 'f:maxBatchSize': {}, 'f:maxLatency': {}}, 'f:minReplicas': {}, 'f:sklearn': {'.': {}, 'f:args': {}, 'f:name': {}, 'f:ports': {'.': {}, 'k:{"containerPort":5000,"protocol":"TCP"}': {'.': {}, 'f:containerPort': {}, 'f:protocol': {}}}, 'f:resources': {'.': {}, 'f:limits': {'.': {}, 'f:cpu': {}, 'f:memory': {}}, 'f:requests': {'.': {}, 'f:cpu': {}, 'f:memory': {}}}, 'f:storageUri': {}}}}}, 'manager': 'OpenAPI-Generator', 'operation': 'Update', 'time': '2026-04-22T18:57:11Z'}, {'apiVersion': 'serving.kserve.io/v1beta1', 'fieldsType': 'FieldsV1', 'fieldsV1': {'f:metadata': {'f:finalizers': {'.': {}, 'v:"odh.inferenceservice.finalizers"': {}}}}, 'manager': 'manager', 'operation': 'Update', 'time': '2026-04-22T18:57:11Z'}], 'name': 'isvc-sklearn-batcher-custom', 'namespace': 'kserve-ci-e2e-test', 'resourceVersion': '18388', 'uid': '60427353-d100-4dc6-a5b8-45035f05e55d'}, 'spec': {'predictor': {'automountServiceAccountToken': False, 'batcher': {'maxBatchSize': 32, 'maxLatency': 5000}, 'minReplicas': 1, 'model': {'args': ['--http_port=5000'], 'modelFormat': {'name': 'sklearn'}, 'name': '', 'ports': [{'containerPort': 5000, 'protocol': 'TCP'}], 'resources': {'limits': {'cpu': '100m', 'memory': '256Mi'}, 'requests': {'cpu': '50m', 'memory': '128Mi'}}, 'storageUri': 'gs://kfserving-examples/models/sklearn/1.0/model'}}}}
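Aside: the timeout raised above comes from a plain sleep-then-check poll loop in `wait_isvc_ready`. The same shape can be isolated in a few lines; this is an illustrative sketch, not the KServe client itself (`wait_until_ready` and `is_ready` are hypothetical names):

```python
import time


def wait_until_ready(is_ready, timeout_seconds=600, polling_interval=10,
                     sleep=time.sleep):
    # Sleep first, then check, for timeout/interval iterations --
    # the same structure as the loop in wait_isvc_ready above.
    for _ in range(round(timeout_seconds / polling_interval)):
        sleep(polling_interval)
        if is_ready():
            return True
    # The real client raises RuntimeError with the InferenceService dump;
    # this sketch just reports the timeout.
    return False
```

Injecting `sleep` makes the pattern testable without waiting out the real interval.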
../../python/kserve/kserve/api/kserve_client.py:418: RuntimeError

During handling of the above exception, another exception occurred:

rest_v1_client =

    @pytest.mark.predictor
    @pytest.mark.asyncio(scope="session")
    async def test_batcher_custom_port(rest_v1_client):
        service_name = "isvc-sklearn-batcher-custom"

        predictor = V1beta1PredictorSpec(
            batcher=V1beta1Batcher(
                max_batch_size=32,
                max_latency=5000,
            ),
            min_replicas=1,
            sklearn=V1beta1SKLearnSpec(
                args=["--http_port=5000"],
                storage_uri="gs://kfserving-examples/models/sklearn/1.0/model",
                resources=V1ResourceRequirements(
                    requests={"cpu": "50m", "memory": "128Mi"},
                    limits={"cpu": "100m", "memory": "256Mi"},
                ),
                ports=[V1ContainerPort(container_port=5000, protocol="TCP")],
            ),
        )

        isvc = V1beta1InferenceService(
            api_version=constants.KSERVE_V1BETA1,
            kind=constants.KSERVE_KIND_INFERENCESERVICE,
            metadata=client.V1ObjectMeta(
                name=service_name,
                namespace=KSERVE_TEST_NAMESPACE,
                labels={
                    constants.KSERVE_LABEL_NETWORKING_VISIBILITY: constants.KSERVE_LABEL_NETWORKING_VISIBILITY_EXPOSED,
                },
            ),
            spec=V1beta1InferenceServiceSpec(predictor=predictor),
        )
        kserve_client.create(isvc)
        try:
            kserve_client.wait_isvc_ready(service_name, namespace=KSERVE_TEST_NAMESPACE)
        except RuntimeError as e:
            print(
>               kserve_client.api_instance.get_namespaced_custom_object(
                    "serving.knative.dev",
                    "v1",
                    KSERVE_TEST_NAMESPACE,
                    "services",
                    service_name + "-predictor",
                )
            )

batcher/test_batcher_custom_port.py:72:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
group = 'serving.knative.dev', version = 'v1', namespace = 'kserve-ci-e2e-test'
plural = 'services', name = 'isvc-sklearn-batcher-custom-predictor'
kwargs = {'_return_http_data_only': True}

    def get_namespaced_custom_object(self, group, version, namespace, plural, name, **kwargs):  # noqa: E501
        """get_namespaced_custom_object  # noqa: E501

        Returns a namespace scoped custom object  # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.get_namespaced_custom_object(group, version, namespace, plural, name, async_req=True)
        >>> result = thread.get()

        :param async_req bool: execute request asynchronously
        :param str group: the custom resource's group (required)
        :param str version: the custom resource's version (required)
        :param str namespace: The custom resource's namespace (required)
        :param str plural: the custom resource's plural name. For TPRs this would be lowercase plural kind. (required)
        :param str name: the custom object's name (required)
        :param _preload_content: if False, the urllib3.HTTPResponse object will
                                 be returned without reading/decoding response
                                 data.
                                 Default is True.
        :param _request_timeout: timeout setting for this request. If one
                                 number provided, it will be total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
        :return: object
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
>       return self.get_namespaced_custom_object_with_http_info(group, version, namespace, plural, name, **kwargs)  # noqa: E501

../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api/custom_objects_api.py:1632:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
group = 'serving.knative.dev', version = 'v1', namespace = 'kserve-ci-e2e-test'
plural = 'services', name = 'isvc-sklearn-batcher-custom-predictor'
kwargs = {'_return_http_data_only': True}
local_var_params = {'_return_http_data_only': True, 'all_params': ['group', 'version', 'namespace', 'plural', 'name', 'async_req', ...], 'auth_settings': ['BearerToken'], 'body_params': None, ...}
all_params = ['group', 'version', 'namespace', 'plural', 'name', 'async_req', ...]
key = '_return_http_data_only', val = True, collection_formats = {}
path_params = {'group': 'serving.knative.dev', 'name': 'isvc-sklearn-batcher-custom-predictor', 'namespace': 'kserve-ci-e2e-test', 'plural': 'services', ...}
query_params = []

    def get_namespaced_custom_object_with_http_info(self, group, version, namespace, plural, name, **kwargs):  # noqa: E501
        """get_namespaced_custom_object  # noqa: E501

        Returns a namespace scoped custom object  # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.get_namespaced_custom_object_with_http_info(group, version, namespace, plural, name, async_req=True)
        >>> result = thread.get()

        :param async_req bool: execute request asynchronously
        :param str group: the custom resource's group (required)
        :param str version: the custom resource's version (required)
        :param str namespace: The custom resource's namespace (required)
        :param str plural: the custom resource's plural name. For TPRs this would be lowercase plural kind. (required)
        :param str name: the custom object's name (required)
        :param _return_http_data_only: response data without head status code
                                       and headers
        :param _preload_content: if False, the urllib3.HTTPResponse object will
                                 be returned without reading/decoding response
                                 data. Default is True.
        :param _request_timeout: timeout setting for this request. If one
                                 number provided, it will be total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
        :return: tuple(object, status_code(int), headers(HTTPHeaderDict))
                 If the method is called asynchronously,
                 returns the request thread.
        """

        local_var_params = locals()

        all_params = [
            'group',
            'version',
            'namespace',
            'plural',
            'name'
        ]
        all_params.extend(
            [
                'async_req',
                '_return_http_data_only',
                '_preload_content',
                '_request_timeout'
            ]
        )

        for key, val in six.iteritems(local_var_params['kwargs']):
            if key not in all_params:
                raise ApiTypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method get_namespaced_custom_object" % key
                )
            local_var_params[key] = val
        del local_var_params['kwargs']
        # verify the required parameter 'group' is set
        if self.api_client.client_side_validation and ('group' not in local_var_params or  # noqa: E501
                                                       local_var_params['group'] is None):  # noqa: E501
            raise ApiValueError("Missing the required parameter `group` when calling `get_namespaced_custom_object`")  # noqa: E501
        # verify the required parameter 'version' is set
        if self.api_client.client_side_validation and ('version' not in local_var_params or  # noqa: E501
                                                       local_var_params['version'] is None):  # noqa: E501
            raise ApiValueError("Missing the required parameter `version` when calling `get_namespaced_custom_object`")  # noqa: E501
        # verify the required parameter 'namespace' is set
        if self.api_client.client_side_validation and ('namespace' not in local_var_params or  # noqa: E501
                                                       local_var_params['namespace'] is None):  # noqa: E501
            raise ApiValueError("Missing the required parameter `namespace` when calling `get_namespaced_custom_object`")  # noqa: E501
        # verify the required parameter 'plural' is set
        if self.api_client.client_side_validation and ('plural' not in local_var_params or  # noqa: E501
                                                       local_var_params['plural'] is None):  # noqa: E501
            raise ApiValueError("Missing the required parameter `plural` when calling `get_namespaced_custom_object`")  # noqa: E501
        # verify the required parameter 'name' is set
        if self.api_client.client_side_validation and ('name' not in local_var_params or  # noqa: E501
                                                       local_var_params['name'] is None):  # noqa: E501
            raise ApiValueError("Missing the required parameter `name` when calling `get_namespaced_custom_object`")  # noqa: E501

        collection_formats = {}

        path_params = {}
        if 'group' in local_var_params:
            path_params['group'] = local_var_params['group']  # noqa: E501
        if 'version' in local_var_params:
            path_params['version'] = local_var_params['version']  # noqa: E501
        if 'namespace' in local_var_params:
            path_params['namespace'] = local_var_params['namespace']  # noqa: E501
        if 'plural' in local_var_params:
            path_params['plural'] = local_var_params['plural']  # noqa: E501
        if 'name' in local_var_params:
            path_params['name'] = local_var_params['name']  # noqa: E501

        query_params = []

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None
        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.select_header_accept(
            ['application/json'])  # noqa: E501

        # Authentication setting
        auth_settings = ['BearerToken']  # noqa: E501

>       return self.api_client.call_api(
            '/apis/{group}/{version}/namespaces/{namespace}/{plural}/{name}', 'GET',
            path_params,
            query_params,
            header_params,
            body=body_params,
            post_params=form_params,
            files=local_var_files,
            response_type='object',  # noqa: E501
            auth_settings=auth_settings,
            async_req=local_var_params.get('async_req'),
            _return_http_data_only=local_var_params.get('_return_http_data_only'),  # noqa: E501
            _preload_content=local_var_params.get('_preload_content', True),
            _request_timeout=local_var_params.get('_request_timeout'),
            collection_formats=collection_formats)

../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api/custom_objects_api.py:1739:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
resource_path = '/apis/{group}/{version}/namespaces/{namespace}/{plural}/{name}'
method = 'GET'
path_params = {'group': 'serving.knative.dev', 'name': 'isvc-sklearn-batcher-custom-predictor', 'namespace': 'kserve-ci-e2e-test', 'plural': 'services', ...}
query_params = []
header_params = {'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
body = None, post_params = [], files = {}, response_type = 'object'
auth_settings = ['BearerToken'], async_req = None, _return_http_data_only = True
collection_formats = {}, _preload_content = True, _request_timeout = None
_host = None

    def call_api(self, resource_path, method,
                 path_params=None, query_params=None, header_params=None,
                 body=None, post_params=None, files=None,
                 response_type=None, auth_settings=None, async_req=None,
                 _return_http_data_only=None, collection_formats=None,
                 _preload_content=True, _request_timeout=None, _host=None):
        """Makes the HTTP request (synchronous) and returns deserialized data.

        To make an async_req request, set the async_req parameter.

        :param resource_path: Path to method endpoint.
        :param method: Method to call.
        :param path_params: Path parameters in the url.
        :param query_params: Query parameters in the url.
        :param header_params: Header parameters to be
            placed in the request header.
        :param body: Request body.
        :param post_params dict: Request post form parameters,
            for `application/x-www-form-urlencoded`, `multipart/form-data`.
        :param auth_settings list: Auth Settings names for the request.
        :param response: Response data type.
        :param files dict: key -> filename, value -> filepath,
            for `multipart/form-data`.
        :param async_req bool: execute request asynchronously
        :param _return_http_data_only: response data without head status code
                                       and headers
        :param collection_formats: dict of collection formats for path, query,
            header, and post parameters.
        :param _preload_content: if False, the urllib3.HTTPResponse object will
                                 be returned without reading/decoding response
                                 data. Default is True.
        :param _request_timeout: timeout setting for this request. If one
                                 number provided, it will be total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
        :return:
            If async_req parameter is True,
            the request will be called asynchronously.
            The method will return the request thread.
            If parameter async_req is False or missing,
            then the method will return the response directly.
        """
        if not async_req:
>           return self.__call_api(resource_path, method,
                                   path_params, query_params, header_params,
                                   body, post_params, files,
                                   response_type, auth_settings,
                                   _return_http_data_only, collection_formats,
                                   _preload_content, _request_timeout, _host)

../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api_client.py:348:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
resource_path = '/apis/serving.knative.dev/v1/namespaces/kserve-ci-e2e-test/services/isvc-sklearn-batcher-custom-predictor'
method = 'GET'
path_params = [('group', 'serving.knative.dev'), ('version', 'v1'), ('namespace', 'kserve-ci-e2e-test'), ('plural', 'services'), ('name', 'isvc-sklearn-batcher-custom-predictor')]
query_params = []
header_params = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
body = None, post_params = [], files = {}, response_type = 'object'
auth_settings = ['BearerToken'], _return_http_data_only = True
collection_formats = {}, _preload_content = True, _request_timeout = None
_host = None

    def __call_api(
            self, resource_path, method, path_params=None,
            query_params=None, header_params=None, body=None, post_params=None,
            files=None, response_type=None, auth_settings=None,
            _return_http_data_only=None, collection_formats=None,
            _preload_content=True, _request_timeout=None, _host=None):

        config = self.configuration

        # header parameters
        header_params = header_params or {}
        header_params.update(self.default_headers)
        if self.cookie:
            header_params['Cookie'] = self.cookie
        if header_params:
            header_params = self.sanitize_for_serialization(header_params)
            header_params = dict(self.parameters_to_tuples(header_params,
                                                           collection_formats))

        # path parameters
        if path_params:
            path_params = self.sanitize_for_serialization(path_params)
            path_params = self.parameters_to_tuples(path_params,
                                                    collection_formats)
            for k, v in path_params:
                # specified safe chars, encode everything
                resource_path = resource_path.replace(
                    '{%s}' % k,
                    quote(str(v), safe=config.safe_chars_for_path_param)
                )

        # query parameters
        if query_params:
            query_params = self.sanitize_for_serialization(query_params)
            query_params = self.parameters_to_tuples(query_params,
                                                     collection_formats)

        # post parameters
        if post_params or files:
            post_params = post_params if post_params else []
            post_params = self.sanitize_for_serialization(post_params)
            post_params = self.parameters_to_tuples(post_params,
                                                    collection_formats)
            post_params.extend(self.files_parameters(files))

        # auth setting
        self.update_params_for_auth(header_params, query_params, auth_settings)

        # body
        if body:
            body = self.sanitize_for_serialization(body)

        # request url
        if _host is None:
            url = self.configuration.host + resource_path
        else:
            # use server/host defined in path or operation instead
            url = _host + resource_path

        # perform request and return response
>       response_data = self.request(
            method, url, query_params=query_params, headers=header_params,
            post_params=post_params, body=body,
            _preload_content=_preload_content,
            _request_timeout=_request_timeout)

../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api_client.py:180:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
method = 'GET'
url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.knative.dev/v1/namespaces/kserve-ci-e2e-test/services/isvc-sklearn-batcher-custom-predictor'
query_params = []
headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
post_params = [],
body = None, _preload_content = True, _request_timeout = None

    def request(self, method, url, query_params=None, headers=None,
                post_params=None, body=None, _preload_content=True,
                _request_timeout=None):
        """Makes the HTTP request using RESTClient."""
        if method == "GET":
>           return self.rest_client.GET(url,
                                        query_params=query_params,
                                        _preload_content=_preload_content,
                                        _request_timeout=_request_timeout,

../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api_client.py:373:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.knative.dev/v1/namespaces/kserve-ci-e2e-test/services/isvc-sklearn-batcher-custom-predictor'
headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
query_params = [], _preload_content = True, _request_timeout = None

    def GET(self, url, headers=None, query_params=None, _preload_content=True,
            _request_timeout=None):
>       return self.request("GET", url,
                            headers=headers,
                            _preload_content=_preload_content,
                            _request_timeout=_request_timeout,
                            query_params=query_params)

../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/rest.py:244:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
method = 'GET'
url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.knative.dev/v1/namespaces/kserve-ci-e2e-test/services/isvc-sklearn-batcher-custom-predictor'
query_params = []
headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
body = None, post_params = {}, _preload_content = True, _request_timeout = None

    def request(self, method, url, query_params=None, headers=None,
                body=None, post_params=None, _preload_content=True,
                _request_timeout=None):
        """Perform requests.

        :param method: http request method
        :param url: http request url
        :param query_params: query parameters in the url
        :param headers: http request headers
        :param body: request json body, for `application/json`
        :param post_params: request post parameters,
                            `application/x-www-form-urlencoded`
                            and `multipart/form-data`
        :param _preload_content: if False, the urllib3.HTTPResponse object will
                                 be returned without reading/decoding response
                                 data. Default is True.
        :param _request_timeout: timeout setting for this request. If one
                                 number provided, it will be total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
        """
        method = method.upper()
        assert method in ['GET', 'HEAD', 'DELETE', 'POST', 'PUT',
                          'PATCH', 'OPTIONS']

        if post_params and body:
            raise ApiValueError(
                "body parameter cannot be used with post_params parameter."
            )

        post_params = post_params or {}
        headers = headers or {}

        timeout = None
        if _request_timeout:
            if isinstance(_request_timeout, (int, ) if six.PY3 else (int, long)):  # noqa: E501,F821
                timeout = urllib3.Timeout(total=_request_timeout)
            elif (isinstance(_request_timeout, tuple) and
                  len(_request_timeout) == 2):
                timeout = urllib3.Timeout(
                    connect=_request_timeout[0], read=_request_timeout[1])

        if 'Content-Type' not in headers:
            headers['Content-Type'] = 'application/json'

        try:
            # For `POST`, `PUT`, `PATCH`, `OPTIONS`, `DELETE`
            if method in ['POST', 'PUT', 'PATCH', 'OPTIONS', 'DELETE']:
                if query_params:
                    url += '?' + urlencode(query_params)
                if (re.search('json', headers['Content-Type'], re.IGNORECASE) or
                        headers['Content-Type'] == 'application/apply-patch+yaml'):
                    if headers['Content-Type'] == 'application/json-patch+json':
                        if not isinstance(body, list):
                            headers['Content-Type'] = \
                                'application/strategic-merge-patch+json'
                    request_body = None
                    if body is not None:
                        request_body = json.dumps(body)
                    r = self.pool_manager.request(
                        method, url,
                        body=request_body,
                        preload_content=_preload_content,
                        timeout=timeout,
                        headers=headers)
                elif headers['Content-Type'] == 'application/x-www-form-urlencoded':  # noqa: E501
                    r = self.pool_manager.request(
                        method, url,
                        fields=post_params,
                        encode_multipart=False,
                        preload_content=_preload_content,
                        timeout=timeout,
                        headers=headers)
                elif headers['Content-Type'] == 'multipart/form-data':
                    # must del headers['Content-Type'], or the correct
                    # Content-Type which generated by urllib3 will be
                    # overwritten.
                    del headers['Content-Type']
                    r = self.pool_manager.request(
                        method, url,
                        fields=post_params,
                        encode_multipart=True,
                        preload_content=_preload_content,
                        timeout=timeout,
                        headers=headers)
                # Pass a `string` parameter directly in the body to support
                # other content types than Json when `body` argument is
                # provided in serialized form
                elif isinstance(body, str) or isinstance(body, bytes):
                    request_body = body
                    r = self.pool_manager.request(
                        method, url,
                        body=request_body,
                        preload_content=_preload_content,
                        timeout=timeout,
                        headers=headers)
                else:
                    # Cannot generate the request from given parameters
                    msg = """Cannot prepare a request message for provided
                             arguments.
Please check that your arguments match [e2e-predictor] declared content type.""" [e2e-predictor] raise ApiException(status=0, reason=msg) [e2e-predictor] # For `GET`, `HEAD` [e2e-predictor] else: [e2e-predictor] r = self.pool_manager.request(method, url, [e2e-predictor] fields=query_params, [e2e-predictor] preload_content=_preload_content, [e2e-predictor] timeout=timeout, [e2e-predictor] headers=headers) [e2e-predictor] except urllib3.exceptions.SSLError as e: [e2e-predictor] msg = "{0}\n{1}".format(type(e).__name__, str(e)) [e2e-predictor] raise ApiException(status=0, reason=msg) [e2e-predictor] [e2e-predictor] if _preload_content: [e2e-predictor] r = RESTResponse(r) [e2e-predictor] [e2e-predictor] # In the python 3, the response.data is bytes. [e2e-predictor] # we need to decode it to string. [e2e-predictor] if six.PY3: [e2e-predictor] r.data = r.data.decode('utf8') [e2e-predictor] [e2e-predictor] # log response body [e2e-predictor] logger.debug("response body: %s", r.data) [e2e-predictor] [e2e-predictor] if not 200 <= r.status <= 299: [e2e-predictor] > raise ApiException(http_resp=r) [e2e-predictor] E kubernetes.client.exceptions.ApiException: (404) [e2e-predictor] E Reason: Not Found [e2e-predictor] E HTTP response headers: HTTPHeaderDict({'Audit-Id': '5d5412fe-6437-44c5-884c-a6d4aea5ea99', 'Cache-Control': 'no-cache, private', 'Content-Type': 'text/plain; charset=utf-8', 'Strict-Transport-Security': 'max-age=31536000; includeSubDomains; preload', 'X-Content-Type-Options': 'nosniff', 'X-Kubernetes-Pf-Flowschema-Uid': '43eb87b7-b788-4fb9-a139-5d84c6b6c6b1', 'X-Kubernetes-Pf-Prioritylevel-Uid': '7bbf62a0-ac58-4885-b6fa-1cd5dc03d6b7', 'Date': 'Wed, 22 Apr 2026 19:07:11 GMT', 'Content-Length': '19'}) [e2e-predictor] E HTTP response body: 404 page not found [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/rest.py:238: ApiException [e2e-predictor] ______________________________ test_kserve_logger 
______________________________ [e2e-predictor] [gw0] linux -- Python 3.11.13 /workspace/source/python/kserve/.venv/bin/python [e2e-predictor] [e2e-predictor] rest_v1_client = [e2e-predictor] [e2e-predictor] @pytest.mark.predictor [e2e-predictor] @pytest.mark.path_based_routing [e2e-predictor] @pytest.mark.asyncio(scope="session") [e2e-predictor] async def test_kserve_logger(rest_v1_client): [e2e-predictor] msg_dumper = "message-dumper" [e2e-predictor] predictor = V1beta1PredictorSpec( [e2e-predictor] min_replicas=1, [e2e-predictor] containers=[ [e2e-predictor] V1Container( [e2e-predictor] name="kserve-container", [e2e-predictor] image="gcr.io/knative-releases/knative.dev/eventing-contrib/cmd/event_display", [e2e-predictor] resources=V1ResourceRequirements( [e2e-predictor] requests={"cpu": "10m", "memory": "128Mi"}, [e2e-predictor] limits={"cpu": "100m", "memory": "256Mi"}, [e2e-predictor] ), [e2e-predictor] ) [e2e-predictor] ], [e2e-predictor] ) [e2e-predictor] [e2e-predictor] isvc = V1beta1InferenceService( [e2e-predictor] api_version=constants.KSERVE_V1BETA1, [e2e-predictor] kind=constants.KSERVE_KIND_INFERENCESERVICE, [e2e-predictor] metadata=client.V1ObjectMeta( [e2e-predictor] name=msg_dumper, [e2e-predictor] namespace=KSERVE_TEST_NAMESPACE, [e2e-predictor] labels={ [e2e-predictor] constants.KSERVE_LABEL_NETWORKING_VISIBILITY: constants.KSERVE_LABEL_NETWORKING_VISIBILITY_EXPOSED, [e2e-predictor] }, [e2e-predictor] ), [e2e-predictor] spec=V1beta1InferenceServiceSpec(predictor=predictor), [e2e-predictor] ) [e2e-predictor] [e2e-predictor] kserve_client.create(isvc) [e2e-predictor] > kserve_client.wait_isvc_ready(msg_dumper, namespace=KSERVE_TEST_NAMESPACE) [e2e-predictor] [e2e-predictor] logger/test_logger.py:67: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] name = 'message-dumper', namespace = 'kserve-ci-e2e-test', watch = False [e2e-predictor] 
timeout_seconds = 600, polling_interval = 10, version = 'v1beta1' [e2e-predictor] expected_generation = None [e2e-predictor] [e2e-predictor] def wait_isvc_ready( [e2e-predictor] self, [e2e-predictor] name, [e2e-predictor] namespace=None, # pylint:disable=too-many-arguments [e2e-predictor] watch=False, [e2e-predictor] timeout_seconds=600, [e2e-predictor] polling_interval=10, [e2e-predictor] version=constants.KSERVE_V1BETA1_VERSION, [e2e-predictor] expected_generation=None, [e2e-predictor] ): [e2e-predictor] """ [e2e-predictor] Waiting for inference service ready, print out the inference service if timeout. [e2e-predictor] :param name: inference service name [e2e-predictor] :param namespace: defaults to current or default namespace [e2e-predictor] :param watch: True to watch the service until timeout elapsed or status is ready [e2e-predictor] :param timeout_seconds: timeout seconds for waiting, default to 600s. [e2e-predictor] Print out the InferenceService if timeout. [e2e-predictor] :param polling_interval: The time interval to poll status [e2e-predictor] :param version: api group version [e2e-predictor] :param expected_generation: optional minimum observed generation to consider ready [e2e-predictor] :return: [e2e-predictor] """ [e2e-predictor] if watch: [e2e-predictor] isvc_watch( [e2e-predictor] name=name, [e2e-predictor] namespace=namespace, [e2e-predictor] timeout_seconds=timeout_seconds, [e2e-predictor] generation=expected_generation or 0, [e2e-predictor] ) [e2e-predictor] else: [e2e-predictor] for _ in range(round(timeout_seconds / polling_interval)): [e2e-predictor] time.sleep(polling_interval) [e2e-predictor] if self.is_isvc_ready( [e2e-predictor] name, [e2e-predictor] namespace=namespace, [e2e-predictor] version=version, [e2e-predictor] expected_generation=expected_generation, [e2e-predictor] ): [e2e-predictor] return [e2e-predictor] [e2e-predictor] current_isvc = self.get(name, namespace=namespace, version=version) [e2e-predictor] if expected_generation 
is None: [e2e-predictor] > raise RuntimeError( [e2e-predictor] "Timeout to start the InferenceService {}. \ [e2e-predictor] The InferenceService is as following: {}".format( [e2e-predictor] name, current_isvc [e2e-predictor] ) [e2e-predictor] ) [e2e-predictor] E RuntimeError: Timeout to start the InferenceService message-dumper. The InferenceService is as following: {'apiVersion': 'serving.kserve.io/v1beta1', 'kind': 'InferenceService', 'metadata': {'annotations': {'serving.kserve.io/deploymentMode': 'Standard'}, 'creationTimestamp': '2026-04-22T19:07:11Z', 'finalizers': ['odh.inferenceservice.finalizers'], 'generation': 1, 'labels': {'networking.kserve.io/visibility': 'exposed'}, 'managedFields': [{'apiVersion': 'serving.kserve.io/v1beta1', 'fieldsType': 'FieldsV1', 'fieldsV1': {'f:metadata': {'f:labels': {'.': {}, 'f:networking.kserve.io/visibility': {}}}, 'f:spec': {'.': {}, 'f:predictor': {'.': {}, 'f:containers': {}, 'f:minReplicas': {}}}}, 'manager': 'OpenAPI-Generator', 'operation': 'Update', 'time': '2026-04-22T19:07:11Z'}, {'apiVersion': 'serving.kserve.io/v1beta1', 'fieldsType': 'FieldsV1', 'fieldsV1': {'f:metadata': {'f:finalizers': {'.': {}, 'v:"odh.inferenceservice.finalizers"': {}}}}, 'manager': 'manager', 'operation': 'Update', 'time': '2026-04-22T19:07:11Z'}], 'name': 'message-dumper', 'namespace': 'kserve-ci-e2e-test', 'resourceVersion': '22458', 'uid': '68f95871-2b3c-4811-943c-d8bcb83f95f4'}, 'spec': {'predictor': {'automountServiceAccountToken': False, 'containers': [{'image': 'gcr.io/knative-releases/knative.dev/eventing-contrib/cmd/event_display', 'name': 'kserve-container', 'resources': {'limits': {'cpu': '100m', 'memory': '256Mi'}, 'requests': {'cpu': '10m', 'memory': '128Mi'}}}], 'minReplicas': 1}}} [e2e-predictor] [e2e-predictor] ../../python/kserve/kserve/api/kserve_client.py:418: RuntimeError [e2e-predictor] _____________________________ test_lightgbm_kserve _____________________________ [e2e-predictor] [gw0] linux -- Python 3.11.13 
/workspace/source/python/kserve/.venv/bin/python [e2e-predictor] [e2e-predictor] rest_v1_client = [e2e-predictor] [e2e-predictor] @pytest.mark.predictor [e2e-predictor] @pytest.mark.path_based_routing [e2e-predictor] @pytest.mark.asyncio(scope="session") [e2e-predictor] async def test_lightgbm_kserve(rest_v1_client): [e2e-predictor] service_name = "isvc-lightgbm" [e2e-predictor] predictor = V1beta1PredictorSpec( [e2e-predictor] min_replicas=1, [e2e-predictor] lightgbm=V1beta1LightGBMSpec( [e2e-predictor] storage_uri="gs://kfserving-examples/models/lightgbm/iris", [e2e-predictor] resources=V1ResourceRequirements( [e2e-predictor] requests={"cpu": "50m", "memory": "128Mi"}, [e2e-predictor] limits={"cpu": "100m", "memory": "256Mi"}, [e2e-predictor] ), [e2e-predictor] ), [e2e-predictor] ) [e2e-predictor] [e2e-predictor] isvc = V1beta1InferenceService( [e2e-predictor] api_version=constants.KSERVE_V1BETA1, [e2e-predictor] kind=constants.KSERVE_KIND_INFERENCESERVICE, [e2e-predictor] metadata=client.V1ObjectMeta( [e2e-predictor] name=service_name, [e2e-predictor] namespace=KSERVE_TEST_NAMESPACE, [e2e-predictor] labels={ [e2e-predictor] constants.KSERVE_LABEL_NETWORKING_VISIBILITY: constants.KSERVE_LABEL_NETWORKING_VISIBILITY_EXPOSED, [e2e-predictor] }, [e2e-predictor] ), [e2e-predictor] spec=V1beta1InferenceServiceSpec(predictor=predictor), [e2e-predictor] ) [e2e-predictor] [e2e-predictor] kserve_client = KServeClient( [e2e-predictor] config_file=os.environ.get("KUBECONFIG", "~/.kube/config") [e2e-predictor] ) [e2e-predictor] kserve_client.create(isvc) [e2e-predictor] > kserve_client.wait_isvc_ready(service_name, namespace=KSERVE_TEST_NAMESPACE) [e2e-predictor] [e2e-predictor] predictor/test_lightgbm.py:70: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] name = 'isvc-lightgbm', namespace = 'kserve-ci-e2e-test', watch = False [e2e-predictor] timeout_seconds = 600, 
polling_interval = 10, version = 'v1beta1' [e2e-predictor] expected_generation = None [e2e-predictor] [e2e-predictor] def wait_isvc_ready( [e2e-predictor] self, [e2e-predictor] name, [e2e-predictor] namespace=None, # pylint:disable=too-many-arguments [e2e-predictor] watch=False, [e2e-predictor] timeout_seconds=600, [e2e-predictor] polling_interval=10, [e2e-predictor] version=constants.KSERVE_V1BETA1_VERSION, [e2e-predictor] expected_generation=None, [e2e-predictor] ): [e2e-predictor] """ [e2e-predictor] Waiting for inference service ready, print out the inference service if timeout. [e2e-predictor] :param name: inference service name [e2e-predictor] :param namespace: defaults to current or default namespace [e2e-predictor] :param watch: True to watch the service until timeout elapsed or status is ready [e2e-predictor] :param timeout_seconds: timeout seconds for waiting, default to 600s. [e2e-predictor] Print out the InferenceService if timeout. [e2e-predictor] :param polling_interval: The time interval to poll status [e2e-predictor] :param version: api group version [e2e-predictor] :param expected_generation: optional minimum observed generation to consider ready [e2e-predictor] :return: [e2e-predictor] """ [e2e-predictor] if watch: [e2e-predictor] isvc_watch( [e2e-predictor] name=name, [e2e-predictor] namespace=namespace, [e2e-predictor] timeout_seconds=timeout_seconds, [e2e-predictor] generation=expected_generation or 0, [e2e-predictor] ) [e2e-predictor] else: [e2e-predictor] for _ in range(round(timeout_seconds / polling_interval)): [e2e-predictor] time.sleep(polling_interval) [e2e-predictor] if self.is_isvc_ready( [e2e-predictor] name, [e2e-predictor] namespace=namespace, [e2e-predictor] version=version, [e2e-predictor] expected_generation=expected_generation, [e2e-predictor] ): [e2e-predictor] return [e2e-predictor] [e2e-predictor] current_isvc = self.get(name, namespace=namespace, version=version) [e2e-predictor] if expected_generation is None: 
[e2e-predictor] > raise RuntimeError( [e2e-predictor] "Timeout to start the InferenceService {}. \ [e2e-predictor] The InferenceService is as following: {}".format( [e2e-predictor] name, current_isvc [e2e-predictor] ) [e2e-predictor] ) [e2e-predictor] E RuntimeError: Timeout to start the InferenceService isvc-lightgbm. The InferenceService is as following: {'apiVersion': 'serving.kserve.io/v1beta1', 'kind': 'InferenceService', 'metadata': {'annotations': {'modelFormat': 'lightgbm', 'serving.kserve.io/deploymentMode': 'Standard'}, 'creationTimestamp': '2026-04-22T19:17:12Z', 'finalizers': ['odh.inferenceservice.finalizers'], 'generation': 1, 'labels': {'networking.kserve.io/visibility': 'exposed'}, 'managedFields': [{'apiVersion': 'serving.kserve.io/v1beta1', 'fieldsType': 'FieldsV1', 'fieldsV1': {'f:metadata': {'f:labels': {'.': {}, 'f:networking.kserve.io/visibility': {}}}, 'f:spec': {'.': {}, 'f:predictor': {'.': {}, 'f:lightgbm': {'.': {}, 'f:name': {}, 'f:resources': {'.': {}, 'f:limits': {'.': {}, 'f:cpu': {}, 'f:memory': {}}, 'f:requests': {'.': {}, 'f:cpu': {}, 'f:memory': {}}}, 'f:storageUri': {}}, 'f:minReplicas': {}}}}, 'manager': 'OpenAPI-Generator', 'operation': 'Update', 'time': '2026-04-22T19:17:12Z'}, {'apiVersion': 'serving.kserve.io/v1beta1', 'fieldsType': 'FieldsV1', 'fieldsV1': {'f:metadata': {'f:finalizers': {'.': {}, 'v:"odh.inferenceservice.finalizers"': {}}}}, 'manager': 'manager', 'operation': 'Update', 'time': '2026-04-22T19:17:12Z'}], 'name': 'isvc-lightgbm', 'namespace': 'kserve-ci-e2e-test', 'resourceVersion': '26847', 'uid': '15470665-9c13-4436-9229-266cdacd22f1'}, 'spec': {'predictor': {'automountServiceAccountToken': False, 'minReplicas': 1, 'model': {'modelFormat': {'name': 'lightgbm'}, 'name': '', 'resources': {'limits': {'cpu': '100m', 'memory': '256Mi'}, 'requests': {'cpu': '50m', 'memory': '128Mi'}}, 'storageUri': 'gs://kfserving-examples/models/lightgbm/iris'}}}} [e2e-predictor] [e2e-predictor] 
../../python/kserve/kserve/api/kserve_client.py:418: RuntimeError [e2e-predictor] _________________________ test_lightgbm_runtime_kserve _________________________ [e2e-predictor] [gw0] linux -- Python 3.11.13 /workspace/source/python/kserve/.venv/bin/python [e2e-predictor] [e2e-predictor] rest_v1_client = [e2e-predictor] [e2e-predictor] @pytest.mark.predictor [e2e-predictor] @pytest.mark.path_based_routing [e2e-predictor] @pytest.mark.asyncio(scope="session") [e2e-predictor] async def test_lightgbm_runtime_kserve(rest_v1_client): [e2e-predictor] service_name = "isvc-lightgbm-runtime" [e2e-predictor] predictor = V1beta1PredictorSpec( [e2e-predictor] min_replicas=1, [e2e-predictor] model=V1beta1ModelSpec( [e2e-predictor] model_format=V1beta1ModelFormat( [e2e-predictor] name="lightgbm", [e2e-predictor] ), [e2e-predictor] storage_uri="gs://kfserving-examples/models/lightgbm/iris", [e2e-predictor] resources=V1ResourceRequirements( [e2e-predictor] requests={"cpu": "50m", "memory": "128Mi"}, [e2e-predictor] limits={"cpu": "100m", "memory": "256Mi"}, [e2e-predictor] ), [e2e-predictor] ), [e2e-predictor] ) [e2e-predictor] [e2e-predictor] isvc = V1beta1InferenceService( [e2e-predictor] api_version=constants.KSERVE_V1BETA1, [e2e-predictor] kind=constants.KSERVE_KIND_INFERENCESERVICE, [e2e-predictor] metadata=client.V1ObjectMeta( [e2e-predictor] name=service_name, [e2e-predictor] namespace=KSERVE_TEST_NAMESPACE, [e2e-predictor] labels={ [e2e-predictor] constants.KSERVE_LABEL_NETWORKING_VISIBILITY: constants.KSERVE_LABEL_NETWORKING_VISIBILITY_EXPOSED, [e2e-predictor] }, [e2e-predictor] ), [e2e-predictor] spec=V1beta1InferenceServiceSpec(predictor=predictor), [e2e-predictor] ) [e2e-predictor] [e2e-predictor] kserve_client = KServeClient( [e2e-predictor] config_file=os.environ.get("KUBECONFIG", "~/.kube/config") [e2e-predictor] ) [e2e-predictor] kserve_client.create(isvc) [e2e-predictor] > kserve_client.wait_isvc_ready(service_name, namespace=KSERVE_TEST_NAMESPACE) 
[e2e-predictor] [e2e-predictor] predictor/test_lightgbm.py:113: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] name = 'isvc-lightgbm-runtime', namespace = 'kserve-ci-e2e-test', watch = False [e2e-predictor] timeout_seconds = 600, polling_interval = 10, version = 'v1beta1' [e2e-predictor] expected_generation = None [e2e-predictor] [e2e-predictor] def wait_isvc_ready( [e2e-predictor] self, [e2e-predictor] name, [e2e-predictor] namespace=None, # pylint:disable=too-many-arguments [e2e-predictor] watch=False, [e2e-predictor] timeout_seconds=600, [e2e-predictor] polling_interval=10, [e2e-predictor] version=constants.KSERVE_V1BETA1_VERSION, [e2e-predictor] expected_generation=None, [e2e-predictor] ): [e2e-predictor] """ [e2e-predictor] Waiting for inference service ready, print out the inference service if timeout. [e2e-predictor] :param name: inference service name [e2e-predictor] :param namespace: defaults to current or default namespace [e2e-predictor] :param watch: True to watch the service until timeout elapsed or status is ready [e2e-predictor] :param timeout_seconds: timeout seconds for waiting, default to 600s. [e2e-predictor] Print out the InferenceService if timeout. 
[e2e-predictor] :param polling_interval: The time interval to poll status [e2e-predictor] :param version: api group version [e2e-predictor] :param expected_generation: optional minimum observed generation to consider ready [e2e-predictor] :return: [e2e-predictor] """ [e2e-predictor] if watch: [e2e-predictor] isvc_watch( [e2e-predictor] name=name, [e2e-predictor] namespace=namespace, [e2e-predictor] timeout_seconds=timeout_seconds, [e2e-predictor] generation=expected_generation or 0, [e2e-predictor] ) [e2e-predictor] else: [e2e-predictor] for _ in range(round(timeout_seconds / polling_interval)): [e2e-predictor] time.sleep(polling_interval) [e2e-predictor] if self.is_isvc_ready( [e2e-predictor] name, [e2e-predictor] namespace=namespace, [e2e-predictor] version=version, [e2e-predictor] expected_generation=expected_generation, [e2e-predictor] ): [e2e-predictor] return [e2e-predictor] [e2e-predictor] current_isvc = self.get(name, namespace=namespace, version=version) [e2e-predictor] if expected_generation is None: [e2e-predictor] > raise RuntimeError( [e2e-predictor] "Timeout to start the InferenceService {}. \ [e2e-predictor] The InferenceService is as following: {}".format( [e2e-predictor] name, current_isvc [e2e-predictor] ) [e2e-predictor] ) [e2e-predictor] E RuntimeError: Timeout to start the InferenceService isvc-lightgbm-runtime. 
The InferenceService is as following: {'apiVersion': 'serving.kserve.io/v1beta1', 'kind': 'InferenceService', 'metadata': {'annotations': {'modelFormat': 'lightgbm', 'serving.kserve.io/deploymentMode': 'Standard'}, 'creationTimestamp': '2026-04-22T19:27:12Z', 'finalizers': ['odh.inferenceservice.finalizers'], 'generation': 1, 'labels': {'networking.kserve.io/visibility': 'exposed'}, 'managedFields': [{'apiVersion': 'serving.kserve.io/v1beta1', 'fieldsType': 'FieldsV1', 'fieldsV1': {'f:metadata': {'f:labels': {'.': {}, 'f:networking.kserve.io/visibility': {}}}, 'f:spec': {'.': {}, 'f:predictor': {'.': {}, 'f:minReplicas': {}, 'f:model': {'.': {}, 'f:modelFormat': {'.': {}, 'f:name': {}}, 'f:name': {}, 'f:resources': {'.': {}, 'f:limits': {'.': {}, 'f:cpu': {}, 'f:memory': {}}, 'f:requests': {'.': {}, 'f:cpu': {}, 'f:memory': {}}}, 'f:storageUri': {}}}}}, 'manager': 'OpenAPI-Generator', 'operation': 'Update', 'time': '2026-04-22T19:27:12Z'}, {'apiVersion': 'serving.kserve.io/v1beta1', 'fieldsType': 'FieldsV1', 'fieldsV1': {'f:metadata': {'f:finalizers': {'.': {}, 'v:"odh.inferenceservice.finalizers"': {}}}}, 'manager': 'manager', 'operation': 'Update', 'time': '2026-04-22T19:27:12Z'}], 'name': 'isvc-lightgbm-runtime', 'namespace': 'kserve-ci-e2e-test', 'resourceVersion': '30931', 'uid': '8a8bc3bc-220a-40c2-8226-3fe4022bd573'}, 'spec': {'predictor': {'automountServiceAccountToken': False, 'minReplicas': 1, 'model': {'modelFormat': {'name': 'lightgbm'}, 'name': '', 'resources': {'limits': {'cpu': '100m', 'memory': '256Mi'}, 'requests': {'cpu': '50m', 'memory': '128Mi'}}, 'storageUri': 'gs://kfserving-examples/models/lightgbm/iris'}}}} [e2e-predictor] [e2e-predictor] ../../python/kserve/kserve/api/kserve_client.py:418: RuntimeError [e2e-predictor] ______________________ test_lightgbm_v2_runtime_mlserver _______________________ [e2e-predictor] [gw0] linux -- Python 3.11.13 /workspace/source/python/kserve/.venv/bin/python [e2e-predictor] [e2e-predictor] rest_v2_client = 
[e2e-predictor] [e2e-predictor] @pytest.mark.predictor [e2e-predictor] @pytest.mark.path_based_routing [e2e-predictor] @pytest.mark.asyncio(scope="session") [e2e-predictor] async def test_lightgbm_v2_runtime_mlserver(rest_v2_client): [e2e-predictor] service_name = "isvc-lightgbm-v2-runtime" [e2e-predictor] protocol_version = "v2" [e2e-predictor] [e2e-predictor] predictor = V1beta1PredictorSpec( [e2e-predictor] min_replicas=1, [e2e-predictor] model=V1beta1ModelSpec( [e2e-predictor] model_format=V1beta1ModelFormat( [e2e-predictor] name="lightgbm", [e2e-predictor] ), [e2e-predictor] runtime="kserve-mlserver", [e2e-predictor] storage_uri="gs://kfserving-examples/models/lightgbm/v2/iris", [e2e-predictor] protocol_version=protocol_version, [e2e-predictor] resources=V1ResourceRequirements( [e2e-predictor] requests={"cpu": "50m", "memory": "128Mi"}, [e2e-predictor] limits={"cpu": "1", "memory": "1Gi"}, [e2e-predictor] ), [e2e-predictor] readiness_probe=client.V1Probe( [e2e-predictor] http_get=client.V1HTTPGetAction( [e2e-predictor] path=f"/v2/models/{service_name}/ready", port=8080 [e2e-predictor] ), [e2e-predictor] initial_delay_seconds=30, [e2e-predictor] ), [e2e-predictor] ), [e2e-predictor] ) [e2e-predictor] [e2e-predictor] isvc = V1beta1InferenceService( [e2e-predictor] api_version=constants.KSERVE_V1BETA1, [e2e-predictor] kind=constants.KSERVE_KIND_INFERENCESERVICE, [e2e-predictor] metadata=client.V1ObjectMeta( [e2e-predictor] name=service_name, [e2e-predictor] namespace=KSERVE_TEST_NAMESPACE, [e2e-predictor] labels={ [e2e-predictor] constants.KSERVE_LABEL_NETWORKING_VISIBILITY: constants.KSERVE_LABEL_NETWORKING_VISIBILITY_EXPOSED, [e2e-predictor] }, [e2e-predictor] ), [e2e-predictor] spec=V1beta1InferenceServiceSpec(predictor=predictor), [e2e-predictor] ) [e2e-predictor] [e2e-predictor] kserve_client = KServeClient( [e2e-predictor] config_file=os.environ.get("KUBECONFIG", "~/.kube/config") [e2e-predictor] ) [e2e-predictor] kserve_client.create(isvc) [e2e-predictor] 
> kserve_client.wait_isvc_ready(service_name, namespace=KSERVE_TEST_NAMESPACE) [e2e-predictor] [e2e-predictor] predictor/test_lightgbm.py:172: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] name = 'isvc-lightgbm-v2-runtime', namespace = 'kserve-ci-e2e-test' [e2e-predictor] watch = False, timeout_seconds = 600, polling_interval = 10, version = 'v1beta1' [e2e-predictor] expected_generation = None [e2e-predictor] [e2e-predictor] def wait_isvc_ready( [e2e-predictor] self, [e2e-predictor] name, [e2e-predictor] namespace=None, # pylint:disable=too-many-arguments [e2e-predictor] watch=False, [e2e-predictor] timeout_seconds=600, [e2e-predictor] polling_interval=10, [e2e-predictor] version=constants.KSERVE_V1BETA1_VERSION, [e2e-predictor] expected_generation=None, [e2e-predictor] ): [e2e-predictor] """ [e2e-predictor] Waiting for inference service ready, print out the inference service if timeout. [e2e-predictor] :param name: inference service name [e2e-predictor] :param namespace: defaults to current or default namespace [e2e-predictor] :param watch: True to watch the service until timeout elapsed or status is ready [e2e-predictor] :param timeout_seconds: timeout seconds for waiting, default to 600s. [e2e-predictor] Print out the InferenceService if timeout. 
[e2e-predictor] :param polling_interval: The time interval to poll status [e2e-predictor] :param version: api group version [e2e-predictor] :param expected_generation: optional minimum observed generation to consider ready [e2e-predictor] :return: [e2e-predictor] """ [e2e-predictor] if watch: [e2e-predictor] isvc_watch( [e2e-predictor] name=name, [e2e-predictor] namespace=namespace, [e2e-predictor] timeout_seconds=timeout_seconds, [e2e-predictor] generation=expected_generation or 0, [e2e-predictor] ) [e2e-predictor] else: [e2e-predictor] for _ in range(round(timeout_seconds / polling_interval)): [e2e-predictor] time.sleep(polling_interval) [e2e-predictor] if self.is_isvc_ready( [e2e-predictor] name, [e2e-predictor] namespace=namespace, [e2e-predictor] version=version, [e2e-predictor] expected_generation=expected_generation, [e2e-predictor] ): [e2e-predictor] return [e2e-predictor] [e2e-predictor] current_isvc = self.get(name, namespace=namespace, version=version) [e2e-predictor] if expected_generation is None: [e2e-predictor] > raise RuntimeError( [e2e-predictor] "Timeout to start the InferenceService {}. \ [e2e-predictor] The InferenceService is as following: {}".format( [e2e-predictor] name, current_isvc [e2e-predictor] ) [e2e-predictor] ) [e2e-predictor] E RuntimeError: Timeout to start the InferenceService isvc-lightgbm-v2-runtime. 
The InferenceService is as following: {'apiVersion': 'serving.kserve.io/v1beta1', 'kind': 'InferenceService', 'metadata': {'annotations': {'modelFormat': 'lightgbm', 'serving.kserve.io/deploymentMode': 'Standard'}, 'creationTimestamp': '2026-04-22T19:37:12Z', 'finalizers': ['odh.inferenceservice.finalizers'], 'generation': 1, 'labels': {'networking.kserve.io/visibility': 'exposed'}, 'managedFields': [{'apiVersion': 'serving.kserve.io/v1beta1', 'fieldsType': 'FieldsV1', 'fieldsV1': {'f:metadata': {'f:labels': {'.': {}, 'f:networking.kserve.io/visibility': {}}}, 'f:spec': {'.': {}, 'f:predictor': {'.': {}, 'f:minReplicas': {}, 'f:model': {'.': {}, 'f:modelFormat': {'.': {}, 'f:name': {}}, 'f:name': {}, 'f:protocolVersion': {}, 'f:readinessProbe': {'.': {}, 'f:httpGet': {'.': {}, 'f:path': {}, 'f:port': {}}, 'f:initialDelaySeconds': {}}, 'f:resources': {'.': {}, 'f:limits': {'.': {}, 'f:cpu': {}, 'f:memory': {}}, 'f:requests': {'.': {}, 'f:cpu': {}, 'f:memory': {}}}, 'f:runtime': {}, 'f:storageUri': {}}}}}, 'manager': 'OpenAPI-Generator', 'operation': 'Update', 'time': '2026-04-22T19:37:12Z'}, {'apiVersion': 'serving.kserve.io/v1beta1', 'fieldsType': 'FieldsV1', 'fieldsV1': {'f:metadata': {'f:finalizers': {'.': {}, 'v:"odh.inferenceservice.finalizers"': {}}}}, 'manager': 'manager', 'operation': 'Update', 'time': '2026-04-22T19:37:12Z'}], 'name': 'isvc-lightgbm-v2-runtime', 'namespace': 'kserve-ci-e2e-test', 'resourceVersion': '35010', 'uid': '1d74470f-5ac5-4b32-9d15-03c0bb48cc41'}, 'spec': {'predictor': {'automountServiceAccountToken': False, 'minReplicas': 1, 'model': {'modelFormat': {'name': 'lightgbm'}, 'name': '', 'protocolVersion': 'v2', 'readinessProbe': {'httpGet': {'path': '/v2/models/isvc-lightgbm-v2-runtime/ready', 'port': 8080}, 'initialDelaySeconds': 30}, 'resources': {'limits': {'cpu': '1', 'memory': '1Gi'}, 'requests': {'cpu': '50m', 'memory': '128Mi'}}, 'runtime': 'kserve-mlserver', 'storageUri': 'gs://kfserving-examples/models/lightgbm/v2/iris'}}}} 
[e2e-predictor]
[e2e-predictor] ../../python/kserve/kserve/api/kserve_client.py:418: RuntimeError
[e2e-predictor] ___________________________ test_lightgbm_v2_kserve ____________________________
[e2e-predictor] [gw0] linux -- Python 3.11.13 /workspace/source/python/kserve/.venv/bin/python
[e2e-predictor]
[e2e-predictor] rest_v2_client =
[e2e-predictor]
[e2e-predictor]     @pytest.mark.predictor
[e2e-predictor]     @pytest.mark.path_based_routing
[e2e-predictor]     @pytest.mark.asyncio(scope="session")
[e2e-predictor]     async def test_lightgbm_v2_kserve(rest_v2_client):
[e2e-predictor]         service_name = "isvc-lightgbm-v2-kserve"
[e2e-predictor]
[e2e-predictor]         predictor = V1beta1PredictorSpec(
[e2e-predictor]             min_replicas=1,
[e2e-predictor]             model=V1beta1ModelSpec(
[e2e-predictor]                 model_format=V1beta1ModelFormat(
[e2e-predictor]                     name="lightgbm",
[e2e-predictor]                 ),
[e2e-predictor]                 runtime="kserve-lgbserver",
[e2e-predictor]                 storage_uri="gs://kfserving-examples/models/lightgbm/v2/iris",
[e2e-predictor]                 resources=V1ResourceRequirements(
[e2e-predictor]                     requests={"cpu": "50m", "memory": "128Mi"},
[e2e-predictor]                     limits={"cpu": "1", "memory": "1Gi"},
[e2e-predictor]                 ),
[e2e-predictor]             ),
[e2e-predictor]         )
[e2e-predictor]
[e2e-predictor]         isvc = V1beta1InferenceService(
[e2e-predictor]             api_version=constants.KSERVE_V1BETA1,
[e2e-predictor]             kind=constants.KSERVE_KIND_INFERENCESERVICE,
[e2e-predictor]             metadata=client.V1ObjectMeta(
[e2e-predictor]                 name=service_name,
[e2e-predictor]                 namespace=KSERVE_TEST_NAMESPACE,
[e2e-predictor]                 labels={
[e2e-predictor]                     constants.KSERVE_LABEL_NETWORKING_VISIBILITY: constants.KSERVE_LABEL_NETWORKING_VISIBILITY_EXPOSED,
[e2e-predictor]                 },
[e2e-predictor]             ),
[e2e-predictor]             spec=V1beta1InferenceServiceSpec(predictor=predictor),
[e2e-predictor]         )
[e2e-predictor]
[e2e-predictor]         kserve_client = KServeClient(
[e2e-predictor]             config_file=os.environ.get("KUBECONFIG", "~/.kube/config")
[e2e-predictor]         )
[e2e-predictor]         kserve_client.create(isvc)
[e2e-predictor] >       kserve_client.wait_isvc_ready(service_name, namespace=KSERVE_TEST_NAMESPACE)
[e2e-predictor]
[e2e-predictor] predictor/test_lightgbm.py:229:
[e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
[e2e-predictor]
[e2e-predictor] self =
[e2e-predictor] name = 'isvc-lightgbm-v2-kserve', namespace = 'kserve-ci-e2e-test'
[e2e-predictor] watch = False, timeout_seconds = 600, polling_interval = 10, version = 'v1beta1'
[e2e-predictor] expected_generation = None
[e2e-predictor]
[e2e-predictor]     def wait_isvc_ready(
[e2e-predictor]         self,
[e2e-predictor]         name,
[e2e-predictor]         namespace=None,  # pylint:disable=too-many-arguments
[e2e-predictor]         watch=False,
[e2e-predictor]         timeout_seconds=600,
[e2e-predictor]         polling_interval=10,
[e2e-predictor]         version=constants.KSERVE_V1BETA1_VERSION,
[e2e-predictor]         expected_generation=None,
[e2e-predictor]     ):
[e2e-predictor]         """
[e2e-predictor]         Waiting for inference service ready, print out the inference service if timeout.
[e2e-predictor]         :param name: inference service name
[e2e-predictor]         :param namespace: defaults to current or default namespace
[e2e-predictor]         :param watch: True to watch the service until timeout elapsed or status is ready
[e2e-predictor]         :param timeout_seconds: timeout seconds for waiting, default to 600s.
[e2e-predictor]             Print out the InferenceService if timeout.
[e2e-predictor]         :param polling_interval: The time interval to poll status
[e2e-predictor]         :param version: api group version
[e2e-predictor]         :param expected_generation: optional minimum observed generation to consider ready
[e2e-predictor]         :return:
[e2e-predictor]         """
[e2e-predictor]         if watch:
[e2e-predictor]             isvc_watch(
[e2e-predictor]                 name=name,
[e2e-predictor]                 namespace=namespace,
[e2e-predictor]                 timeout_seconds=timeout_seconds,
[e2e-predictor]                 generation=expected_generation or 0,
[e2e-predictor]             )
[e2e-predictor]         else:
[e2e-predictor]             for _ in range(round(timeout_seconds / polling_interval)):
[e2e-predictor]                 time.sleep(polling_interval)
[e2e-predictor]                 if self.is_isvc_ready(
[e2e-predictor]                     name,
[e2e-predictor]                     namespace=namespace,
[e2e-predictor]                     version=version,
[e2e-predictor]                     expected_generation=expected_generation,
[e2e-predictor]                 ):
[e2e-predictor]                     return
[e2e-predictor]
[e2e-predictor]             current_isvc = self.get(name, namespace=namespace, version=version)
[e2e-predictor]             if expected_generation is None:
[e2e-predictor] >               raise RuntimeError(
[e2e-predictor]                     "Timeout to start the InferenceService {}. \
[e2e-predictor]                     The InferenceService is as following: {}".format(
[e2e-predictor]                         name, current_isvc
[e2e-predictor]                     )
[e2e-predictor]                 )
[e2e-predictor] E       RuntimeError: Timeout to start the InferenceService isvc-lightgbm-v2-kserve.
The InferenceService is as following: {'apiVersion': 'serving.kserve.io/v1beta1', 'kind': 'InferenceService', 'metadata': {'annotations': {'modelFormat': 'lightgbm', 'serving.kserve.io/deploymentMode': 'Standard'}, 'creationTimestamp': '2026-04-22T19:47:13Z', 'finalizers': ['odh.inferenceservice.finalizers'], 'generation': 1, 'labels': {'networking.kserve.io/visibility': 'exposed'}, 'managedFields': [{'apiVersion': 'serving.kserve.io/v1beta1', 'fieldsType': 'FieldsV1', 'fieldsV1': {'f:metadata': {'f:labels': {'.': {}, 'f:networking.kserve.io/visibility': {}}}, 'f:spec': {'.': {}, 'f:predictor': {'.': {}, 'f:minReplicas': {}, 'f:model': {'.': {}, 'f:modelFormat': {'.': {}, 'f:name': {}}, 'f:name': {}, 'f:resources': {'.': {}, 'f:limits': {'.': {}, 'f:cpu': {}, 'f:memory': {}}, 'f:requests': {'.': {}, 'f:cpu': {}, 'f:memory': {}}}, 'f:runtime': {}, 'f:storageUri': {}}}}}, 'manager': 'OpenAPI-Generator', 'operation': 'Update', 'time': '2026-04-22T19:47:13Z'}, {'apiVersion': 'serving.kserve.io/v1beta1', 'fieldsType': 'FieldsV1', 'fieldsV1': {'f:metadata': {'f:finalizers': {'.': {}, 'v:"odh.inferenceservice.finalizers"': {}}}}, 'manager': 'manager', 'operation': 'Update', 'time': '2026-04-22T19:47:13Z'}], 'name': 'isvc-lightgbm-v2-kserve', 'namespace': 'kserve-ci-e2e-test', 'resourceVersion': '39400', 'uid': 'e9e3a3af-e558-4884-99b7-0d860de0c882'}, 'spec': {'predictor': {'automountServiceAccountToken': False, 'minReplicas': 1, 'model': {'modelFormat': {'name': 'lightgbm'}, 'name': '', 'resources': {'limits': {'cpu': '1', 'memory': '1Gi'}, 'requests': {'cpu': '50m', 'memory': '128Mi'}}, 'runtime': 'kserve-lgbserver', 'storageUri': 'gs://kfserving-examples/models/lightgbm/v2/iris'}}}} [e2e-predictor] [e2e-predictor] ../../python/kserve/kserve/api/kserve_client.py:418: RuntimeError [e2e-predictor] ________________________ test_mlflow_v2_runtime_kserve _________________________ [e2e-predictor] [gw0] linux -- Python 3.11.13 /workspace/source/python/kserve/.venv/bin/python 
[e2e-predictor] [e2e-predictor] rest_v2_client = [e2e-predictor] [e2e-predictor] @pytest.mark.predictor [e2e-predictor] @pytest.mark.asyncio(scope="session") [e2e-predictor] async def test_mlflow_v2_runtime_kserve(rest_v2_client): [e2e-predictor] service_name = "isvc-mlflow-v2-runtime" [e2e-predictor] protocol_version = "v2" [e2e-predictor] [e2e-predictor] predictor = V1beta1PredictorSpec( [e2e-predictor] min_replicas=1, [e2e-predictor] model=V1beta1ModelSpec( [e2e-predictor] model_format=V1beta1ModelFormat( [e2e-predictor] name="mlflow", [e2e-predictor] ), [e2e-predictor] storage_uri="gs://kfserving-examples/models/mlflow/wine", [e2e-predictor] protocol_version=protocol_version, [e2e-predictor] resources=V1ResourceRequirements( [e2e-predictor] requests={"cpu": "50m", "memory": "128Mi"}, [e2e-predictor] limits={"cpu": "1", "memory": "1Gi"}, [e2e-predictor] ), [e2e-predictor] readiness_probe=client.V1Probe( [e2e-predictor] http_get=client.V1HTTPGetAction( [e2e-predictor] path=f"/v2/models/{service_name}/ready", port=8080 [e2e-predictor] ), [e2e-predictor] initial_delay_seconds=30, [e2e-predictor] ), [e2e-predictor] ), [e2e-predictor] ) [e2e-predictor] [e2e-predictor] isvc = V1beta1InferenceService( [e2e-predictor] api_version=constants.KSERVE_V1BETA1, [e2e-predictor] kind=constants.KSERVE_KIND_INFERENCESERVICE, [e2e-predictor] metadata=client.V1ObjectMeta( [e2e-predictor] name=service_name, [e2e-predictor] namespace=KSERVE_TEST_NAMESPACE, [e2e-predictor] labels={ [e2e-predictor] constants.KSERVE_LABEL_NETWORKING_VISIBILITY: constants.KSERVE_LABEL_NETWORKING_VISIBILITY_EXPOSED, [e2e-predictor] }, [e2e-predictor] ), [e2e-predictor] spec=V1beta1InferenceServiceSpec(predictor=predictor), [e2e-predictor] ) [e2e-predictor] [e2e-predictor] kserve_client = KServeClient( [e2e-predictor] config_file=os.environ.get("KUBECONFIG", "~/.kube/config") [e2e-predictor] ) [e2e-predictor] kserve_client.create(isvc) [e2e-predictor] > kserve_client.wait_isvc_ready(service_name, 
namespace=KSERVE_TEST_NAMESPACE) [e2e-predictor] [e2e-predictor] predictor/test_mlflow.py:77: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] name = 'isvc-mlflow-v2-runtime', namespace = 'kserve-ci-e2e-test', watch = False [e2e-predictor] timeout_seconds = 600, polling_interval = 10, version = 'v1beta1' [e2e-predictor] expected_generation = None [e2e-predictor] [e2e-predictor] def wait_isvc_ready( [e2e-predictor] self, [e2e-predictor] name, [e2e-predictor] namespace=None, # pylint:disable=too-many-arguments [e2e-predictor] watch=False, [e2e-predictor] timeout_seconds=600, [e2e-predictor] polling_interval=10, [e2e-predictor] version=constants.KSERVE_V1BETA1_VERSION, [e2e-predictor] expected_generation=None, [e2e-predictor] ): [e2e-predictor] """ [e2e-predictor] Waiting for inference service ready, print out the inference service if timeout. [e2e-predictor] :param name: inference service name [e2e-predictor] :param namespace: defaults to current or default namespace [e2e-predictor] :param watch: True to watch the service until timeout elapsed or status is ready [e2e-predictor] :param timeout_seconds: timeout seconds for waiting, default to 600s. [e2e-predictor] Print out the InferenceService if timeout. 
[e2e-predictor] :param polling_interval: The time interval to poll status [e2e-predictor] :param version: api group version [e2e-predictor] :param expected_generation: optional minimum observed generation to consider ready [e2e-predictor] :return: [e2e-predictor] """ [e2e-predictor] if watch: [e2e-predictor] isvc_watch( [e2e-predictor] name=name, [e2e-predictor] namespace=namespace, [e2e-predictor] timeout_seconds=timeout_seconds, [e2e-predictor] generation=expected_generation or 0, [e2e-predictor] ) [e2e-predictor] else: [e2e-predictor] for _ in range(round(timeout_seconds / polling_interval)): [e2e-predictor] time.sleep(polling_interval) [e2e-predictor] if self.is_isvc_ready( [e2e-predictor] name, [e2e-predictor] namespace=namespace, [e2e-predictor] version=version, [e2e-predictor] expected_generation=expected_generation, [e2e-predictor] ): [e2e-predictor] return [e2e-predictor] [e2e-predictor] current_isvc = self.get(name, namespace=namespace, version=version) [e2e-predictor] if expected_generation is None: [e2e-predictor] > raise RuntimeError( [e2e-predictor] "Timeout to start the InferenceService {}. \ [e2e-predictor] The InferenceService is as following: {}".format( [e2e-predictor] name, current_isvc [e2e-predictor] ) [e2e-predictor] ) [e2e-predictor] E RuntimeError: Timeout to start the InferenceService isvc-mlflow-v2-runtime. 
The InferenceService is as following: {'apiVersion': 'serving.kserve.io/v1beta1', 'kind': 'InferenceService', 'metadata': {'annotations': {'modelFormat': 'mlflow', 'serving.kserve.io/deploymentMode': 'Standard'}, 'creationTimestamp': '2026-04-22T19:57:13Z', 'finalizers': ['odh.inferenceservice.finalizers'], 'generation': 1, 'labels': {'networking.kserve.io/visibility': 'exposed'}, 'managedFields': [{'apiVersion': 'serving.kserve.io/v1beta1', 'fieldsType': 'FieldsV1', 'fieldsV1': {'f:metadata': {'f:labels': {'.': {}, 'f:networking.kserve.io/visibility': {}}}, 'f:spec': {'.': {}, 'f:predictor': {'.': {}, 'f:minReplicas': {}, 'f:model': {'.': {}, 'f:modelFormat': {'.': {}, 'f:name': {}}, 'f:name': {}, 'f:protocolVersion': {}, 'f:readinessProbe': {'.': {}, 'f:httpGet': {'.': {}, 'f:path': {}, 'f:port': {}}, 'f:initialDelaySeconds': {}}, 'f:resources': {'.': {}, 'f:limits': {'.': {}, 'f:cpu': {}, 'f:memory': {}}, 'f:requests': {'.': {}, 'f:cpu': {}, 'f:memory': {}}}, 'f:storageUri': {}}}}}, 'manager': 'OpenAPI-Generator', 'operation': 'Update', 'time': '2026-04-22T19:57:13Z'}, {'apiVersion': 'serving.kserve.io/v1beta1', 'fieldsType': 'FieldsV1', 'fieldsV1': {'f:metadata': {'f:finalizers': {'.': {}, 'v:"odh.inferenceservice.finalizers"': {}}}}, 'manager': 'manager', 'operation': 'Update', 'time': '2026-04-22T19:57:13Z'}], 'name': 'isvc-mlflow-v2-runtime', 'namespace': 'kserve-ci-e2e-test', 'resourceVersion': '43504', 'uid': 'b46780d9-0a0e-4c73-a87a-cc33f252726a'}, 'spec': {'predictor': {'automountServiceAccountToken': False, 'minReplicas': 1, 'model': {'modelFormat': {'name': 'mlflow'}, 'name': '', 'protocolVersion': 'v2', 'readinessProbe': {'httpGet': {'path': '/v2/models/isvc-mlflow-v2-runtime/ready', 'port': 8080}, 'initialDelaySeconds': 30}, 'resources': {'limits': {'cpu': '1', 'memory': '1Gi'}, 'requests': {'cpu': '50m', 'memory': '128Mi'}}, 'storageUri': 'gs://kfserving-examples/models/mlflow/wine'}}}} [e2e-predictor] [e2e-predictor] 
../../python/kserve/kserve/api/kserve_client.py:418: RuntimeError
[e2e-predictor] _________________________ test_multi_container_probing _________________________
[e2e-predictor] [gw0] linux -- Python 3.11.13 /workspace/source/python/kserve/.venv/bin/python
[e2e-predictor]
[e2e-predictor] rest_v1_client =
[e2e-predictor]
[e2e-predictor]     @pytest.mark.kserve_on_openshift
[e2e-predictor]     @pytest.mark.asyncio(scope="session")
[e2e-predictor]     async def test_multi_container_probing(rest_v1_client):
[e2e-predictor]         service_name = "isvc-sklearn-mcp"
[e2e-predictor]         logger.info("Creating InferenceService %s", service_name)
[e2e-predictor]
[e2e-predictor]         # Create the main predictor container
[e2e-predictor]         predictor = V1beta1PredictorSpec(
[e2e-predictor]             min_replicas=1,
[e2e-predictor]             max_replicas=1,
[e2e-predictor]             sklearn=V1beta1SKLearnSpec(
[e2e-predictor]                 storage_uri="gs://kfserving-examples/models/sklearn/1.0/model",
[e2e-predictor]                 resources=V1ResourceRequirements(
[e2e-predictor]                     requests={"cpu": "100m", "memory": "128Mi"},
[e2e-predictor]                     limits={"cpu": "200m", "memory": "256Mi"},
[e2e-predictor]                 ),
[e2e-predictor]                 liveness_probe=V1Probe(
[e2e-predictor]                     http_get=V1HTTPGetAction(
[e2e-predictor]                         path="/v1/models/" + service_name, port=8080, scheme="HTTP"
[e2e-predictor]                     ),
[e2e-predictor]                     initial_delay_seconds=30,
[e2e-predictor]                     period_seconds=10,
[e2e-predictor]                 ),
[e2e-predictor]                 readiness_probe=V1Probe(
[e2e-predictor]                     http_get=V1HTTPGetAction(
[e2e-predictor]                         path="/v1/models/" + service_name, port=8080, scheme="HTTP"
[e2e-predictor]                     ),
[e2e-predictor]                     initial_delay_seconds=30,
[e2e-predictor]                     period_seconds=10,
[e2e-predictor]                 ),
[e2e-predictor]             ),
[e2e-predictor]             containers=[
[e2e-predictor]                 V1Container(
[e2e-predictor]                     name="kserve-agent",
[e2e-predictor]                     image="quay.io/opendatahub/kserve-agent:latest",
[e2e-predictor]                     ports=[V1ContainerPort(container_port=9081, protocol="TCP")],
[e2e-predictor]                     env=[
[e2e-predictor]                         V1EnvVar(name="AGENT_TARGET_PORT", value="8080"),
[e2e-predictor]                         V1EnvVar(name="AGENT_TARGET_HOST", value="localhost"),
[e2e-predictor]                         V1EnvVar(
[e2e-predictor]                             name="SERVING_READINESS_PROBE",
[e2e-predictor]                             value='{"tcpSocket":{"port":8080},"initialDelaySeconds":60,"periodSeconds":10}',
[e2e-predictor]                         ),
[e2e-predictor]                     ],
[e2e-predictor]                     resources=V1ResourceRequirements(
[e2e-predictor]                         requests={"cpu": "50m", "memory": "128Mi"},
[e2e-predictor]                         limits={"cpu": "100m", "memory": "256Mi"},
[e2e-predictor]                     ),
[e2e-predictor]                     liveness_probe=V1Probe(
[e2e-predictor]                         tcp_socket=V1TCPSocketAction(
[e2e-predictor]                             port=9081,
[e2e-predictor]                         ),
[e2e-predictor]                         initial_delay_seconds=60,
[e2e-predictor]                         period_seconds=10,
[e2e-predictor]                     ),
[e2e-predictor]                     readiness_probe=V1Probe(
[e2e-predictor]                         tcp_socket=V1TCPSocketAction(
[e2e-predictor]                             port=9081,
[e2e-predictor]                         ),
[e2e-predictor]                         initial_delay_seconds=60,
[e2e-predictor]                         period_seconds=10,
[e2e-predictor]                     ),
[e2e-predictor]                 )
[e2e-predictor]             ],
[e2e-predictor]         )
[e2e-predictor]
[e2e-predictor]         isvc = V1beta1InferenceService(
[e2e-predictor]             api_version=constants.KSERVE_V1BETA1,
[e2e-predictor]             kind=constants.KSERVE_KIND_INFERENCESERVICE,
[e2e-predictor]             metadata=client.V1ObjectMeta(
[e2e-predictor]                 name=service_name,
[e2e-predictor]                 namespace=KSERVE_TEST_NAMESPACE,
[e2e-predictor]                 annotations={
[e2e-predictor]                     "serving.kserve.io/autoscalerClass": "none",
[e2e-predictor]                     "serving.kserve.io/DeploymentMode": "RawDeployment",
[e2e-predictor]                 },
[e2e-predictor]                 labels={
[e2e-predictor]                     constants.KSERVE_LABEL_NETWORKING_VISIBILITY: constants.KSERVE_LABEL_NETWORKING_VISIBILITY_EXPOSED,
[e2e-predictor]                 },
[e2e-predictor]             ),
[e2e-predictor]             spec=V1beta1InferenceServiceSpec(
[e2e-predictor]                 predictor=predictor,
[e2e-predictor]             ),
[e2e-predictor]         )
[e2e-predictor]
[e2e-predictor]         kserve_client.create(isvc)
[e2e-predictor] >       kserve_client.wait_isvc_ready(service_name, KSERVE_TEST_NAMESPACE)
[e2e-predictor]
[e2e-predictor] 
predictor/test_multi_container_probing.py:145: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] name = 'isvc-sklearn-mcp', namespace = 'kserve-ci-e2e-test', watch = False [e2e-predictor] timeout_seconds = 600, polling_interval = 10, version = 'v1beta1' [e2e-predictor] expected_generation = None [e2e-predictor] [e2e-predictor] def wait_isvc_ready( [e2e-predictor] self, [e2e-predictor] name, [e2e-predictor] namespace=None, # pylint:disable=too-many-arguments [e2e-predictor] watch=False, [e2e-predictor] timeout_seconds=600, [e2e-predictor] polling_interval=10, [e2e-predictor] version=constants.KSERVE_V1BETA1_VERSION, [e2e-predictor] expected_generation=None, [e2e-predictor] ): [e2e-predictor] """ [e2e-predictor] Waiting for inference service ready, print out the inference service if timeout. [e2e-predictor] :param name: inference service name [e2e-predictor] :param namespace: defaults to current or default namespace [e2e-predictor] :param watch: True to watch the service until timeout elapsed or status is ready [e2e-predictor] :param timeout_seconds: timeout seconds for waiting, default to 600s. [e2e-predictor] Print out the InferenceService if timeout. 
[e2e-predictor] :param polling_interval: The time interval to poll status [e2e-predictor] :param version: api group version [e2e-predictor] :param expected_generation: optional minimum observed generation to consider ready [e2e-predictor] :return: [e2e-predictor] """ [e2e-predictor] if watch: [e2e-predictor] isvc_watch( [e2e-predictor] name=name, [e2e-predictor] namespace=namespace, [e2e-predictor] timeout_seconds=timeout_seconds, [e2e-predictor] generation=expected_generation or 0, [e2e-predictor] ) [e2e-predictor] else: [e2e-predictor] for _ in range(round(timeout_seconds / polling_interval)): [e2e-predictor] time.sleep(polling_interval) [e2e-predictor] if self.is_isvc_ready( [e2e-predictor] name, [e2e-predictor] namespace=namespace, [e2e-predictor] version=version, [e2e-predictor] expected_generation=expected_generation, [e2e-predictor] ): [e2e-predictor] return [e2e-predictor] [e2e-predictor] current_isvc = self.get(name, namespace=namespace, version=version) [e2e-predictor] if expected_generation is None: [e2e-predictor] > raise RuntimeError( [e2e-predictor] "Timeout to start the InferenceService {}. \ [e2e-predictor] The InferenceService is as following: {}".format( [e2e-predictor] name, current_isvc [e2e-predictor] ) [e2e-predictor] ) [e2e-predictor] E RuntimeError: Timeout to start the InferenceService isvc-sklearn-mcp. 
The InferenceService is as following: {'apiVersion': 'serving.kserve.io/v1beta1', 'kind': 'InferenceService', 'metadata': {'annotations': {'modelFormat': 'sklearn', 'serving.kserve.io/DeploymentMode': 'RawDeployment', 'serving.kserve.io/autoscalerClass': 'none', 'serving.kserve.io/deploymentMode': 'Standard'}, 'creationTimestamp': '2026-04-22T20:07:14Z', 'finalizers': ['odh.inferenceservice.finalizers'], 'generation': 1, 'labels': {'networking.kserve.io/visibility': 'exposed'}, 'managedFields': [{'apiVersion': 'serving.kserve.io/v1beta1', 'fieldsType': 'FieldsV1', 'fieldsV1': {'f:metadata': {'f:annotations': {'.': {}, 'f:serving.kserve.io/DeploymentMode': {}, 'f:serving.kserve.io/autoscalerClass': {}}, 'f:labels': {'.': {}, 'f:networking.kserve.io/visibility': {}}}, 'f:spec': {'.': {}, 'f:predictor': {'.': {}, 'f:containers': {}, 'f:maxReplicas': {}, 'f:minReplicas': {}, 'f:sklearn': {'.': {}, 'f:livenessProbe': {'.': {}, 'f:httpGet': {'.': {}, 'f:path': {}, 'f:port': {}, 'f:scheme': {}}, 'f:initialDelaySeconds': {}, 'f:periodSeconds': {}}, 'f:name': {}, 'f:readinessProbe': {'.': {}, 'f:httpGet': {'.': {}, 'f:path': {}, 'f:port': {}, 'f:scheme': {}}, 'f:initialDelaySeconds': {}, 'f:periodSeconds': {}}, 'f:resources': {'.': {}, 'f:limits': {'.': {}, 'f:cpu': {}, 'f:memory': {}}, 'f:requests': {'.': {}, 'f:cpu': {}, 'f:memory': {}}}, 'f:storageUri': {}}}}}, 'manager': 'OpenAPI-Generator', 'operation': 'Update', 'time': '2026-04-22T20:07:13Z'}, {'apiVersion': 'serving.kserve.io/v1beta1', 'fieldsType': 'FieldsV1', 'fieldsV1': {'f:metadata': {'f:finalizers': {'.': {}, 'v:"odh.inferenceservice.finalizers"': {}}}}, 'manager': 'manager', 'operation': 'Update', 'time': '2026-04-22T20:07:14Z'}], 'name': 'isvc-sklearn-mcp', 'namespace': 'kserve-ci-e2e-test', 'resourceVersion': '47577', 'uid': '9ab57a30-6a31-4e05-bb55-8774963a2760'}, 'spec': {'predictor': {'automountServiceAccountToken': False, 'containers': [{'env': [{'name': 'AGENT_TARGET_PORT', 'value': '8080'}, {'name': 
'AGENT_TARGET_HOST', 'value': 'localhost'}, {'name': 'SERVING_READINESS_PROBE', 'value': '{"tcpSocket":{"port":8080},"initialDelaySeconds":60,"periodSeconds":10}'}], 'image': 'quay.io/opendatahub/kserve-agent:latest', 'livenessProbe': {'initialDelaySeconds': 60, 'periodSeconds': 10, 'tcpSocket': {'port': 9081}}, 'name': 'kserve-agent', 'ports': [{'containerPort': 9081, 'protocol': 'TCP'}], 'readinessProbe': {'initialDelaySeconds': 60, 'periodSeconds': 10, 'tcpSocket': {'port': 9081}}, 'resources': {'limits': {'cpu': '100m', 'memory': '256Mi'}, 'requests': {'cpu': '50m', 'memory': '128Mi'}}}], 'maxReplicas': 1, 'minReplicas': 1, 'model': {'livenessProbe': {'httpGet': {'path': '/v1/models/isvc-sklearn-mcp', 'port': 8080, 'scheme': 'HTTP'}, 'initialDelaySeconds': 30, 'periodSeconds': 10}, 'modelFormat': {'name': 'sklearn'}, 'name': '', 'readinessProbe': {'httpGet': {'path': '/v1/models/isvc-sklearn-mcp', 'port': 8080, 'scheme': 'HTTP'}, 'initialDelaySeconds': 30, 'periodSeconds': 10}, 'resources': {'limits': {'cpu': '200m', 'memory': '256Mi'}, 'requests': {'cpu': '100m', 'memory': '128Mi'}}, 'storageUri': 'gs://kfserving-examples/models/sklearn/1.0/model'}}}} [e2e-predictor] [e2e-predictor] ../../python/kserve/kserve/api/kserve_client.py:418: RuntimeError [e2e-predictor] ------------------------------ Captured log call ------------------------------- [e2e-predictor] INFO e2e.predictor.test_multi_container_probing:test_multi_container_probing.py:63 Creating InferenceService isvc-sklearn-mcp [e2e-predictor] _________________________________ test_paddle __________________________________ [e2e-predictor] [gw0] linux -- Python 3.11.13 /workspace/source/python/kserve/.venv/bin/python [e2e-predictor] [e2e-predictor] rest_v1_client = [e2e-predictor] [e2e-predictor] @pytest.mark.predictor [e2e-predictor] @pytest.mark.asyncio(scope="session") [e2e-predictor] async def test_paddle(rest_v1_client): [e2e-predictor] predictor = V1beta1PredictorSpec( [e2e-predictor] min_replicas=1, 
[e2e-predictor] paddle=V1beta1PaddleServerSpec( [e2e-predictor] storage_uri="gs://kfserving-examples/models/paddle/resnet", [e2e-predictor] resources=V1ResourceRequirements( [e2e-predictor] requests={"cpu": "200m", "memory": "256Mi"}, [e2e-predictor] limits={"cpu": "200m", "memory": "1Gi"}, [e2e-predictor] ), [e2e-predictor] ), [e2e-predictor] ) [e2e-predictor] [e2e-predictor] service_name = "isvc-paddle" [e2e-predictor] isvc = V1beta1InferenceService( [e2e-predictor] api_version=constants.KSERVE_V1BETA1, [e2e-predictor] kind=constants.KSERVE_KIND_INFERENCESERVICE, [e2e-predictor] metadata=V1ObjectMeta( [e2e-predictor] name=service_name, [e2e-predictor] namespace=KSERVE_TEST_NAMESPACE, [e2e-predictor] labels={ [e2e-predictor] constants.KSERVE_LABEL_NETWORKING_VISIBILITY: constants.KSERVE_LABEL_NETWORKING_VISIBILITY_EXPOSED, [e2e-predictor] }, [e2e-predictor] ), [e2e-predictor] spec=V1beta1InferenceServiceSpec(predictor=predictor), [e2e-predictor] ) [e2e-predictor] [e2e-predictor] kserve_client = KServeClient( [e2e-predictor] config_file=os.environ.get("KUBECONFIG", "~/.kube/config") [e2e-predictor] ) [e2e-predictor] kserve_client.create(isvc) [e2e-predictor] try: [e2e-predictor] kserve_client.wait_isvc_ready( [e2e-predictor] service_name, namespace=KSERVE_TEST_NAMESPACE, timeout_seconds=720 [e2e-predictor] ) [e2e-predictor] except RuntimeError as e: [e2e-predictor] pods = kserve_client.core_api.list_namespaced_pod( [e2e-predictor] KSERVE_TEST_NAMESPACE, [e2e-predictor] label_selector="serving.kserve.io/inferenceservice={}".format(service_name), [e2e-predictor] ) [e2e-predictor] for pod in pods.items: [e2e-predictor] logging.info(pod) [e2e-predictor] > raise e [e2e-predictor] [e2e-predictor] predictor/test_paddle.py:80: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] rest_v1_client = [e2e-predictor] [e2e-predictor] @pytest.mark.predictor [e2e-predictor] 
@pytest.mark.asyncio(scope="session") [e2e-predictor] async def test_paddle(rest_v1_client): [e2e-predictor] predictor = V1beta1PredictorSpec( [e2e-predictor] min_replicas=1, [e2e-predictor] paddle=V1beta1PaddleServerSpec( [e2e-predictor] storage_uri="gs://kfserving-examples/models/paddle/resnet", [e2e-predictor] resources=V1ResourceRequirements( [e2e-predictor] requests={"cpu": "200m", "memory": "256Mi"}, [e2e-predictor] limits={"cpu": "200m", "memory": "1Gi"}, [e2e-predictor] ), [e2e-predictor] ), [e2e-predictor] ) [e2e-predictor] [e2e-predictor] service_name = "isvc-paddle" [e2e-predictor] isvc = V1beta1InferenceService( [e2e-predictor] api_version=constants.KSERVE_V1BETA1, [e2e-predictor] kind=constants.KSERVE_KIND_INFERENCESERVICE, [e2e-predictor] metadata=V1ObjectMeta( [e2e-predictor] name=service_name, [e2e-predictor] namespace=KSERVE_TEST_NAMESPACE, [e2e-predictor] labels={ [e2e-predictor] constants.KSERVE_LABEL_NETWORKING_VISIBILITY: constants.KSERVE_LABEL_NETWORKING_VISIBILITY_EXPOSED, [e2e-predictor] }, [e2e-predictor] ), [e2e-predictor] spec=V1beta1InferenceServiceSpec(predictor=predictor), [e2e-predictor] ) [e2e-predictor] [e2e-predictor] kserve_client = KServeClient( [e2e-predictor] config_file=os.environ.get("KUBECONFIG", "~/.kube/config") [e2e-predictor] ) [e2e-predictor] kserve_client.create(isvc) [e2e-predictor] try: [e2e-predictor] > kserve_client.wait_isvc_ready( [e2e-predictor] service_name, namespace=KSERVE_TEST_NAMESPACE, timeout_seconds=720 [e2e-predictor] ) [e2e-predictor] [e2e-predictor] predictor/test_paddle.py:70: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] name = 'isvc-paddle', namespace = 'kserve-ci-e2e-test', watch = False [e2e-predictor] timeout_seconds = 720, polling_interval = 10, version = 'v1beta1' [e2e-predictor] expected_generation = None [e2e-predictor] [e2e-predictor] def wait_isvc_ready( [e2e-predictor] self, 
[e2e-predictor] name, [e2e-predictor] namespace=None, # pylint:disable=too-many-arguments [e2e-predictor] watch=False, [e2e-predictor] timeout_seconds=600, [e2e-predictor] polling_interval=10, [e2e-predictor] version=constants.KSERVE_V1BETA1_VERSION, [e2e-predictor] expected_generation=None, [e2e-predictor] ): [e2e-predictor] """ [e2e-predictor] Waiting for inference service ready, print out the inference service if timeout. [e2e-predictor] :param name: inference service name [e2e-predictor] :param namespace: defaults to current or default namespace [e2e-predictor] :param watch: True to watch the service until timeout elapsed or status is ready [e2e-predictor] :param timeout_seconds: timeout seconds for waiting, default to 600s. [e2e-predictor] Print out the InferenceService if timeout. [e2e-predictor] :param polling_interval: The time interval to poll status [e2e-predictor] :param version: api group version [e2e-predictor] :param expected_generation: optional minimum observed generation to consider ready [e2e-predictor] :return: [e2e-predictor] """ [e2e-predictor] if watch: [e2e-predictor] isvc_watch( [e2e-predictor] name=name, [e2e-predictor] namespace=namespace, [e2e-predictor] timeout_seconds=timeout_seconds, [e2e-predictor] generation=expected_generation or 0, [e2e-predictor] ) [e2e-predictor] else: [e2e-predictor] for _ in range(round(timeout_seconds / polling_interval)): [e2e-predictor] time.sleep(polling_interval) [e2e-predictor] if self.is_isvc_ready( [e2e-predictor] name, [e2e-predictor] namespace=namespace, [e2e-predictor] version=version, [e2e-predictor] expected_generation=expected_generation, [e2e-predictor] ): [e2e-predictor] return [e2e-predictor] [e2e-predictor] current_isvc = self.get(name, namespace=namespace, version=version) [e2e-predictor] if expected_generation is None: [e2e-predictor] > raise RuntimeError( [e2e-predictor] "Timeout to start the InferenceService {}. 
\ [e2e-predictor] The InferenceService is as following: {}".format( [e2e-predictor] name, current_isvc [e2e-predictor] ) [e2e-predictor] ) [e2e-predictor] E RuntimeError: Timeout to start the InferenceService isvc-paddle. The InferenceService is as following: {'apiVersion': 'serving.kserve.io/v1beta1', 'kind': 'InferenceService', 'metadata': {'annotations': {'modelFormat': 'paddle', 'serving.kserve.io/deploymentMode': 'Standard'}, 'creationTimestamp': '2026-04-22T20:17:14Z', 'finalizers': ['odh.inferenceservice.finalizers'], 'generation': 1, 'labels': {'networking.kserve.io/visibility': 'exposed'}, 'managedFields': [{'apiVersion': 'serving.kserve.io/v1beta1', 'fieldsType': 'FieldsV1', 'fieldsV1': {'f:metadata': {'f:labels': {'.': {}, 'f:networking.kserve.io/visibility': {}}}, 'f:spec': {'.': {}, 'f:predictor': {'.': {}, 'f:minReplicas': {}, 'f:paddle': {'.': {}, 'f:name': {}, 'f:resources': {'.': {}, 'f:limits': {'.': {}, 'f:cpu': {}, 'f:memory': {}}, 'f:requests': {'.': {}, 'f:cpu': {}, 'f:memory': {}}}, 'f:storageUri': {}}}}}, 'manager': 'OpenAPI-Generator', 'operation': 'Update', 'time': '2026-04-22T20:17:14Z'}, {'apiVersion': 'serving.kserve.io/v1beta1', 'fieldsType': 'FieldsV1', 'fieldsV1': {'f:metadata': {'f:finalizers': {'.': {}, 'v:"odh.inferenceservice.finalizers"': {}}}}, 'manager': 'manager', 'operation': 'Update', 'time': '2026-04-22T20:17:14Z'}], 'name': 'isvc-paddle', 'namespace': 'kserve-ci-e2e-test', 'resourceVersion': '51656', 'uid': '18eb1f5b-40aa-4900-9161-7215ac587b4e'}, 'spec': {'predictor': {'automountServiceAccountToken': False, 'minReplicas': 1, 'model': {'modelFormat': {'name': 'paddle'}, 'name': '', 'resources': {'limits': {'cpu': '200m', 'memory': '1Gi'}, 'requests': {'cpu': '200m', 'memory': '256Mi'}}, 'storageUri': 'gs://kfserving-examples/models/paddle/resnet'}}}} [e2e-predictor] [e2e-predictor] ../../python/kserve/kserve/api/kserve_client.py:418: RuntimeError [e2e-predictor] _____________________________ test_paddle_runtime 
______________________________
[e2e-predictor] [gw0] linux -- Python 3.11.13 /workspace/source/python/kserve/.venv/bin/python
[e2e-predictor]
[e2e-predictor] self =
[e2e-predictor]
[e2e-predictor]     def _new_conn(self) -> socket.socket:
[e2e-predictor]         """Establish a socket connection and set nodelay settings on it.
[e2e-predictor]
[e2e-predictor]         :return: New socket connection.
[e2e-predictor]         """
[e2e-predictor]         try:
[e2e-predictor] >           sock = connection.create_connection(
[e2e-predictor]                 (self._dns_host, self.port),
[e2e-predictor]                 self.timeout,
[e2e-predictor]                 source_address=self.source_address,
[e2e-predictor]                 socket_options=self.socket_options,
[e2e-predictor]             )
[e2e-predictor]
[e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connection.py:204:
[e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
[e2e-predictor]
[e2e-predictor] address = ('a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', 6443)
[e2e-predictor] timeout = None, source_address = None, socket_options = [(6, 1, 1)]
[e2e-predictor]
[e2e-predictor]     def create_connection(
[e2e-predictor]         address: tuple[str, int],
[e2e-predictor]         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
[e2e-predictor]         source_address: tuple[str, int] | None = None,
[e2e-predictor]         socket_options: _TYPE_SOCKET_OPTIONS | None = None,
[e2e-predictor]     ) -> socket.socket:
[e2e-predictor]         """Connect to *address* and return the socket object.
[e2e-predictor]
[e2e-predictor]         Convenience function. Connect to *address* (a 2-tuple ``(host,
[e2e-predictor]         port)``) and return the socket object. Passing the optional
[e2e-predictor]         *timeout* parameter will set the timeout on the socket instance
[e2e-predictor]         before attempting to connect. If no *timeout* is supplied, the
[e2e-predictor]         global default timeout setting returned by :func:`socket.getdefaulttimeout`
[e2e-predictor]         is used. If *source_address* is set it must be a tuple of (host, port)
[e2e-predictor]         for the socket to bind as a source address before making the connection.
[e2e-predictor]         An host of '' or port 0 tells the OS to use the default.
[e2e-predictor]         """
[e2e-predictor]
[e2e-predictor]         host, port = address
[e2e-predictor]         if host.startswith("["):
[e2e-predictor]             host = host.strip("[]")
[e2e-predictor]         err = None
[e2e-predictor]
[e2e-predictor]         # Using the value from allowed_gai_family() in the context of getaddrinfo lets
[e2e-predictor]         # us select whether to work with IPv4 DNS records, IPv6 records, or both.
[e2e-predictor]         # The original create_connection function always returns all records.
[e2e-predictor]         family = allowed_gai_family()
[e2e-predictor]
[e2e-predictor]         try:
[e2e-predictor]             host.encode("idna")
[e2e-predictor]         except UnicodeError:
[e2e-predictor]             raise LocationParseError(f"'{host}', label empty or too long") from None
[e2e-predictor]
[e2e-predictor] >       for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
[e2e-predictor]
[e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/util/connection.py:60:
[e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
[e2e-predictor]
[e2e-predictor] host = 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com'
[e2e-predictor] port = 6443, family =
[e2e-predictor] type = , proto = 0, flags = 0
[e2e-predictor]
[e2e-predictor]     def getaddrinfo(host, port, family=0, type=0, proto=0, flags=0):
[e2e-predictor]         """Resolve host and port into list of address info entries.
[e2e-predictor]
[e2e-predictor]         Translate the host/port argument into a sequence of 5-tuples that contain
[e2e-predictor]         all the necessary arguments for creating a socket connected to that service.
[e2e-predictor]         host is a domain name, a string representation of an IPv4/v6 address or
[e2e-predictor]         None. port is a string service name such as 'http', a numeric port number or
[e2e-predictor]         None. By passing None as the value of host and port, you can pass NULL to
[e2e-predictor]         the underlying C API.
[e2e-predictor]
[e2e-predictor]         The family, type and proto arguments can be optionally specified in order to
[e2e-predictor]         narrow the list of addresses returned. Passing zero as a value for each of
[e2e-predictor]         these arguments selects the full range of results.
[e2e-predictor]         """
[e2e-predictor]         # We override this function since we want to translate the numeric family
[e2e-predictor]         # and socket type values to enum constants.
[e2e-predictor]         addrlist = []
[e2e-predictor] >       for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
[e2e-predictor] E       socket.gaierror: [Errno -2] Name or service not known
[e2e-predictor]
[e2e-predictor] /usr/lib64/python3.11/socket.py:974: gaierror
[e2e-predictor]
[e2e-predictor] The above exception was the direct cause of the following exception:
[e2e-predictor]
[e2e-predictor] self =
[e2e-predictor] method = 'GET'
[e2e-predictor] url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices/isvc-paddle-runtime'
[e2e-predictor] body = None
[e2e-predictor] headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
[e2e-predictor] retries = Retry(total=0, connect=None, read=None, redirect=None, status=None)
[e2e-predictor] redirect = False, assert_same_host = False, timeout = None, pool_timeout = None
[e2e-predictor] release_conn = True, chunked = False, body_pos = None, preload_content = True
[e2e-predictor] decode_content = True, response_kw = {}
[e2e-predictor] parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices/isvc-paddle-runtime', query=None, fragment=None)
[e2e-predictor] destination_scheme = None, conn = None, release_this_conn
= True [e2e-predictor] http_tunnel_required = False, err = None, clean_exit = False [e2e-predictor] [e2e-predictor] def urlopen( # type: ignore[override] [e2e-predictor] self, [e2e-predictor] method: str, [e2e-predictor] url: str, [e2e-predictor] body: _TYPE_BODY | None = None, [e2e-predictor] headers: typing.Mapping[str, str] | None = None, [e2e-predictor] retries: Retry | bool | int | None = None, [e2e-predictor] redirect: bool = True, [e2e-predictor] assert_same_host: bool = True, [e2e-predictor] timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, [e2e-predictor] pool_timeout: int | None = None, [e2e-predictor] release_conn: bool | None = None, [e2e-predictor] chunked: bool = False, [e2e-predictor] body_pos: _TYPE_BODY_POSITION | None = None, [e2e-predictor] preload_content: bool = True, [e2e-predictor] decode_content: bool = True, [e2e-predictor] **response_kw: typing.Any, [e2e-predictor] ) -> BaseHTTPResponse: [e2e-predictor] """ [e2e-predictor] Get a connection from the pool and perform an HTTP request. This is the [e2e-predictor] lowest level call for making a request, so you'll need to specify all [e2e-predictor] the raw details. [e2e-predictor] [e2e-predictor] .. note:: [e2e-predictor] [e2e-predictor] More commonly, it's appropriate to use a convenience method [e2e-predictor] such as :meth:`request`. [e2e-predictor] [e2e-predictor] .. note:: [e2e-predictor] [e2e-predictor] `release_conn` will only behave as expected if [e2e-predictor] `preload_content=False` because we want to make [e2e-predictor] `preload_content=False` the default behaviour someday soon without [e2e-predictor] breaking backwards compatibility. [e2e-predictor] [e2e-predictor] :param method: [e2e-predictor] HTTP request method (such as GET, POST, PUT, etc.) [e2e-predictor] [e2e-predictor] :param url: [e2e-predictor] The URL to perform the request on. 
        :param body:
            Data to send in the request body, either :class:`str`, :class:`bytes`,
            an iterable of :class:`str`/:class:`bytes`, or a file-like object.

        :param headers:
            Dictionary of custom headers to send, such as User-Agent,
            If-None-Match, etc. If None, pool headers are used. If provided,
            these headers completely replace any pool-specific headers.

        :param retries:
            Configure the number of retries to allow before raising a
            :class:`~urllib3.exceptions.MaxRetryError` exception.

            If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a
            :class:`~urllib3.util.retry.Retry` object for fine-grained control
            over different types of retries.
            Pass an integer number to retry connection errors that many times,
            but no other types of errors. Pass zero to never retry.

            If ``False``, then retries are disabled and any exception is raised
            immediately. Also, instead of raising a MaxRetryError on redirects,
            the redirect response will be returned.

        :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.

        ...
        """
        parsed_url = parse_url(url)
        destination_scheme = parsed_url.scheme

        if headers is None:
            headers = self.headers

        if not isinstance(retries, Retry):
            retries = Retry.from_int(retries, redirect=redirect, default=self.retries)

        if release_conn is None:
            release_conn = preload_content

        # Check host
        if assert_same_host and not self.is_same_host(url):
            raise HostChangedError(self, url, retries)

        ...

        try:
            # Request a connection from the queue.
            timeout_obj = self._get_timeout(timeout)
            conn = self._get_conn(timeout=pool_timeout)

            conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]

            ...

            # If we're going to release the connection in ``finally:``, then
            # the response doesn't need to know about the connection. Otherwise
            # it will also try to release it and we'll have a double-release
            # mess.
            response_conn = conn if not release_conn else None

            # Make the request on the HTTPConnection object
>           response = self._make_request(
                conn,
                method,
                url,
                timeout=timeout_obj,
                body=body,
                headers=headers,
                chunked=chunked,
                retries=retries,
                response_conn=response_conn,
                preload_content=preload_content,
                decode_content=decode_content,
                **response_kw,
            )

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:787:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
conn =
method = 'GET'
url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices/isvc-paddle-runtime'
body = None
headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
retries = Retry(total=0, connect=None, read=None, redirect=None, status=None)
timeout = Timeout(connect=None, read=None, total=None), chunked = False
response_conn = None, preload_content = True, decode_content = True
enforce_content_length = True

    def _make_request(
        self,
        conn: BaseHTTPConnection,
        method: str,
        url: str,
        body: _TYPE_BODY | None = None,
        headers: typing.Mapping[str, str] | None = None,
        retries: Retry | None = None,
        timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
        chunked: bool = False,
        response_conn: BaseHTTPConnection | None = None,
        preload_content: bool = True,
        decode_content: bool = True,
        enforce_content_length: bool = True,
    ) -> BaseHTTPResponse:
        """
        Perform a request on a given urllib connection object taken from our
        pool.

        ...
        """
        self.num_requests += 1

        timeout_obj = self._get_timeout(timeout)
        timeout_obj.start_connect()
        conn.timeout = Timeout.resolve_default_timeout(timeout_obj.connect_timeout)

        try:
            # Trigger any extra validation we need to do.
            try:
                self._validate_conn(conn)
            except (SocketTimeout, BaseSSLError) as e:
                self._raise_timeout(err=e, url=url, timeout_value=conn.timeout)
                raise

        # _validate_conn() starts the connection to an HTTPS proxy
        # so we need to wrap errors with 'ProxyError' here too.
        except (
            OSError,
            NewConnectionError,
            TimeoutError,
            BaseSSLError,
            CertificateError,
            SSLError,
        ) as e:
            new_e: Exception = e
            if isinstance(e, (BaseSSLError, CertificateError)):
                new_e = SSLError(e)
            # If the connection didn't successfully connect to it's proxy
            # then there
            if isinstance(
                new_e, (OSError, NewConnectionError, TimeoutError, SSLError)
            ) and (conn and conn.proxy and not conn.has_connected_to_proxy):
                new_e = _wrap_proxy_error(new_e, conn.proxy.scheme)
>           raise new_e

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:488:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
conn =
method = 'GET'
url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices/isvc-paddle-runtime'
body = None
headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
retries = Retry(total=0, connect=None, read=None, redirect=None, status=None)
timeout = Timeout(connect=None, read=None, total=None), chunked = False
response_conn = None, preload_content = True, decode_content = True
enforce_content_length = True

    def _make_request(
        self,
        conn: BaseHTTPConnection,
        method: str,
        url: str,
        body: _TYPE_BODY | None = None,
        headers: typing.Mapping[str, str] | None = None,
        retries: Retry | None = None,
        timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
        chunked: bool = False,
        response_conn: BaseHTTPConnection | None = None,
        preload_content: bool = True,
        decode_content: bool = True,
        enforce_content_length: bool = True,
    ) -> BaseHTTPResponse:
        """
        Perform a request on a given urllib connection object taken from our
        pool.

        ...
        """
        self.num_requests += 1

        timeout_obj = self._get_timeout(timeout)
        timeout_obj.start_connect()
        conn.timeout = Timeout.resolve_default_timeout(timeout_obj.connect_timeout)

        try:
            # Trigger any extra validation we need to do.
            try:
>               self._validate_conn(conn)

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:464:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
conn =

    def _validate_conn(self, conn: BaseHTTPConnection) -> None:
        """
        Called right before a request is made, after the socket is created.
        """
        super()._validate_conn(conn)

        # Force connect early to allow us to validate the connection.
        if conn.is_closed:
>           conn.connect()

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:1093:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =

    def connect(self) -> None:
        # Today we don't need to be doing this step before the /actual/ socket
        # connection, however in the future we'll need to decide whether to
        # create a new socket or re-use an existing "shared" socket as a part
        # of the HTTP/2 handshake dance.
        ...
        try:
            sock: socket.socket | ssl.SSLSocket
>           self.sock = sock = self._new_conn()

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connection.py:759:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =

    def _new_conn(self) -> socket.socket:
        """Establish a socket connection and set nodelay settings on it.

        :return: New socket connection.
        """
        try:
            sock = connection.create_connection(
                (self._dns_host, self.port),
                self.timeout,
                source_address=self.source_address,
                socket_options=self.socket_options,
            )
        except socket.gaierror as e:
>           raise NameResolutionError(self.host, self, e) from e
E           urllib3.exceptions.NameResolutionError: HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connection.py:211: NameResolutionError

The above exception was the direct cause of the following exception:

rest_v1_client =

    @pytest.mark.predictor
    @pytest.mark.asyncio(scope="session")
    async def test_paddle_runtime(rest_v1_client):
        predictor = V1beta1PredictorSpec(
            min_replicas=1,
            model=V1beta1ModelSpec(
                model_format=V1beta1ModelFormat(
                    name="paddle",
                ),
                storage_uri="gs://kfserving-examples/models/paddle/resnet",
                resources=V1ResourceRequirements(
                    requests={"cpu": "200m", "memory": "256Mi"},
                    limits={"cpu": "200m", "memory": "1Gi"},
                ),
            ),
        )

        service_name = "isvc-paddle-runtime"
        isvc = V1beta1InferenceService(
            api_version=constants.KSERVE_V1BETA1,
            kind=constants.KSERVE_KIND_INFERENCESERVICE,
            metadata=V1ObjectMeta(
                name=service_name,
                namespace=KSERVE_TEST_NAMESPACE,
                labels={
                    constants.KSERVE_LABEL_NETWORKING_VISIBILITY: constants.KSERVE_LABEL_NETWORKING_VISIBILITY_EXPOSED,
                },
            ),
            spec=V1beta1InferenceServiceSpec(predictor=predictor),
        )

        kserve_client = KServeClient(
            config_file=os.environ.get("KUBECONFIG", "~/.kube/config")
        )
        kserve_client.create(isvc)
        try:
>           kserve_client.wait_isvc_ready(
                service_name, namespace=KSERVE_TEST_NAMESPACE, timeout_seconds=720
            )

predictor/test_paddle.py:124:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
name = 'isvc-paddle-runtime', namespace = 'kserve-ci-e2e-test', watch = False
timeout_seconds = 720, polling_interval = 10, version = 'v1beta1'
expected_generation = None

    def wait_isvc_ready(
        self,
        name,
        namespace=None,  # pylint:disable=too-many-arguments
        watch=False,
        timeout_seconds=600,
        polling_interval=10,
        version=constants.KSERVE_V1BETA1_VERSION,
        expected_generation=None,
    ):
        """
        Waiting for inference service ready, print out the inference service if timeout.
        :param name: inference service name
        :param namespace: defaults to current or default namespace
        :param watch: True to watch the service until timeout elapsed or status is ready
        :param timeout_seconds: timeout seconds for waiting, default to 600s.
            Print out the InferenceService if timeout.
        :param polling_interval: The time interval to poll status
        :param version: api group version
        :param expected_generation: optional minimum observed generation to consider ready
        :return:
        """
        if watch:
            isvc_watch(
                name=name,
                namespace=namespace,
                timeout_seconds=timeout_seconds,
                generation=expected_generation or 0,
            )
        else:
            for _ in range(round(timeout_seconds / polling_interval)):
                time.sleep(polling_interval)
>               if self.is_isvc_ready(
                    name,
                    namespace=namespace,
                    version=version,
                    expected_generation=expected_generation,
                ):

../../python/kserve/kserve/api/kserve_client.py:408:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
name = 'isvc-paddle-runtime', namespace = 'kserve-ci-e2e-test'
version = 'v1beta1', expected_generation = None

    def is_isvc_ready(
        self,
        name,
        namespace=None,
        version=constants.KSERVE_V1BETA1_VERSION,
        expected_generation=None,
    ):  # pylint:disable=inconsistent-return-statements
        """
        Check if the inference service is ready.
        :param version:
        :param name: inference service name
        :param namespace: defaults to current or default namespace
        :param expected_generation: optional minimum observed generation to consider ready
        :return:
        """
>       kfsvc_status = self.get(name, namespace=namespace, version=version)

../../python/kserve/kserve/api/kserve_client.py:361:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
name = 'isvc-paddle-runtime', namespace = 'kserve-ci-e2e-test', watch = False
timeout_seconds = 600, version = 'v1beta1'

    def get(
        self,
        name=None,
        namespace=None,
        watch=False,
        timeout_seconds=600,
        version=constants.KSERVE_V1BETA1_VERSION,
    ):  # pylint:disable=inconsistent-return-statements
        """
        Get the inference service
        :param name: existing inference service name
        :param namespace: defaults to current or default namespace
        :param watch: True to watch the service until timeout elapsed or status is ready
        :param timeout_seconds: timeout seconds for watch, default to 600s
        :param version: api group version
        :return: inference service
        """

        if namespace is None:
            namespace = utils.get_default_target_namespace()

        if name:
            if watch:
                isvc_watch(
                    name=name, namespace=namespace, timeout_seconds=timeout_seconds
                )
            else:
                try:
>                   return self.api_instance.get_namespaced_custom_object(
                        constants.KSERVE_GROUP,
                        version,
                        namespace,
                        constants.KSERVE_PLURAL_INFERENCESERVICE,
                        name,
                    )

../../python/kserve/kserve/api/kserve_client.py:196:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
group = 'serving.kserve.io', version = 'v1beta1'
namespace = 'kserve-ci-e2e-test', plural = 'inferenceservices'
name = 'isvc-paddle-runtime', kwargs = {'_return_http_data_only': True}

    def get_namespaced_custom_object(self, group, version, namespace, plural, name, **kwargs):  # noqa: E501
        """get_namespaced_custom_object  # noqa: E501

        Returns a namespace scoped custom object  # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.get_namespaced_custom_object(group, version, namespace, plural, name, async_req=True)
        >>> result = thread.get()

        :param async_req bool: execute request asynchronously
        :param str group: the custom resource's group (required)
        :param str version: the custom resource's version (required)
        :param str namespace: The custom resource's namespace (required)
        :param str plural: the custom resource's plural name. For TPRs this would be lowercase plural kind. (required)
        :param str name: the custom object's name (required)
        :param _preload_content: if False, the urllib3.HTTPResponse object will
                                 be returned without reading/decoding response
                                 data. Default is True.
        :param _request_timeout: timeout setting for this request. If one
                                 number provided, it will be total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
        :return: object
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
>       return self.get_namespaced_custom_object_with_http_info(group, version, namespace, plural, name, **kwargs)  # noqa: E501

../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api/custom_objects_api.py:1632:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
group = 'serving.kserve.io', version = 'v1beta1'
namespace = 'kserve-ci-e2e-test', plural = 'inferenceservices'
name = 'isvc-paddle-runtime', kwargs = {'_return_http_data_only': True}
local_var_params = {'_return_http_data_only': True, 'all_params': ['group', 'version', 'namespace', 'plural', 'name', 'async_req', ...], 'auth_settings': ['BearerToken'], 'body_params': None, ...}
all_params = ['group', 'version', 'namespace', 'plural', 'name', 'async_req', ...]
key = '_return_http_data_only', val = True, collection_formats = {}
path_params = {'group': 'serving.kserve.io', 'name': 'isvc-paddle-runtime', 'namespace': 'kserve-ci-e2e-test', 'plural': 'inferenceservices', ...}
query_params = []

    def get_namespaced_custom_object_with_http_info(self, group, version, namespace, plural, name, **kwargs):  # noqa: E501
        """get_namespaced_custom_object  # noqa: E501

        Returns a namespace scoped custom object  # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.get_namespaced_custom_object_with_http_info(group, version, namespace, plural, name, async_req=True)
        >>> result = thread.get()

        :param async_req bool: execute request asynchronously
        :param str group: the custom resource's group (required)
        :param str version: the custom resource's version (required)
        :param str namespace: The custom resource's namespace (required)
        :param str plural: the custom resource's plural name. For TPRs this would be lowercase plural kind. (required)
        :param str name: the custom object's name (required)
        :param _return_http_data_only: response data without head status code
                                       and headers
        :param _preload_content: if False, the urllib3.HTTPResponse object will
                                 be returned without reading/decoding response
                                 data. Default is True.
        :param _request_timeout: timeout setting for this request. If one
                                 number provided, it will be total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
        :return: tuple(object, status_code(int), headers(HTTPHeaderDict))
                 If the method is called asynchronously,
                 returns the request thread.
        """

        local_var_params = locals()

        all_params = [
            'group',
            'version',
            'namespace',
            'plural',
            'name'
        ]
        all_params.extend(
            [
                'async_req',
                '_return_http_data_only',
                '_preload_content',
                '_request_timeout'
            ]
        )

        for key, val in six.iteritems(local_var_params['kwargs']):
            if key not in all_params:
                raise ApiTypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method get_namespaced_custom_object" % key
                )
            local_var_params[key] = val
        del local_var_params['kwargs']
        # verify the required parameter 'group' is set
        if self.api_client.client_side_validation and ('group' not in local_var_params or  # noqa: E501
                                                       local_var_params['group'] is None):  # noqa: E501
            raise ApiValueError("Missing the required parameter `group` when calling `get_namespaced_custom_object`")  # noqa: E501
        # verify the required parameter 'version' is set
        if self.api_client.client_side_validation and ('version' not in local_var_params or  # noqa: E501
                                                       local_var_params['version'] is None):  # noqa: E501
            raise ApiValueError("Missing the required parameter `version` when calling `get_namespaced_custom_object`")  # noqa: E501
        # verify the required parameter 'namespace' is set
        if self.api_client.client_side_validation and ('namespace' not in local_var_params or  # noqa: E501
                                                       local_var_params['namespace'] is None):  # noqa: E501
            raise ApiValueError("Missing the required parameter `namespace` when calling `get_namespaced_custom_object`")  # noqa: E501
        # verify the required parameter 'plural' is set
        if self.api_client.client_side_validation and ('plural' not in local_var_params or  # noqa: E501
                                                       local_var_params['plural'] is None):  # noqa: E501
            raise ApiValueError("Missing the required parameter `plural` when calling `get_namespaced_custom_object`")  # noqa: E501
        # verify the required parameter 'name' is set
        if self.api_client.client_side_validation and ('name' not in local_var_params or  # noqa: E501
                                                       local_var_params['name'] is None):  # noqa: E501
            raise ApiValueError("Missing the required parameter `name` when calling `get_namespaced_custom_object`")  # noqa: E501

        collection_formats = {}

        path_params = {}
        if 'group' in local_var_params:
            path_params['group'] = local_var_params['group']  # noqa: E501
        if 'version' in local_var_params:
            path_params['version'] = local_var_params['version']  # noqa: E501
        if 'namespace' in local_var_params:
            path_params['namespace'] = local_var_params['namespace']  # noqa: E501
        if 'plural' in local_var_params:
            path_params['plural'] = local_var_params['plural']  # noqa: E501
        if 'name' in local_var_params:
            path_params['name'] = local_var_params['name']  # noqa: E501

        query_params = []

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None
        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.select_header_accept(
            ['application/json'])  # noqa: E501

        # Authentication setting
        auth_settings = ['BearerToken']  # noqa: E501

>       return self.api_client.call_api(
            '/apis/{group}/{version}/namespaces/{namespace}/{plural}/{name}', 'GET',
            path_params,
            query_params,
            header_params,
            body=body_params,
            post_params=form_params,
            files=local_var_files,
            response_type='object',  # noqa: E501
            auth_settings=auth_settings,
            async_req=local_var_params.get('async_req'),
            _return_http_data_only=local_var_params.get('_return_http_data_only'),  # noqa: E501
            _preload_content=local_var_params.get('_preload_content', True),
            _request_timeout=local_var_params.get('_request_timeout'),
            collection_formats=collection_formats)

../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api/custom_objects_api.py:1739:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
resource_path = '/apis/{group}/{version}/namespaces/{namespace}/{plural}/{name}'
method = 'GET'
path_params = {'group': 'serving.kserve.io', 'name': 'isvc-paddle-runtime', 'namespace': 'kserve-ci-e2e-test', 'plural': 'inferenceservices', ...}
query_params = []
header_params = {'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
body = None, post_params = [], files = {}, response_type = 'object'
auth_settings = ['BearerToken'], async_req = None, _return_http_data_only = True
collection_formats = {}, _preload_content = True, _request_timeout = None
_host = None

    def call_api(self, resource_path, method,
                 path_params=None, query_params=None, header_params=None,
                 body=None, post_params=None, files=None,
                 response_type=None, auth_settings=None, async_req=None,
                 _return_http_data_only=None, collection_formats=None,
                 _preload_content=True, _request_timeout=None, _host=None):
        """Makes the HTTP request (synchronous) and returns deserialized data.

        To make an async_req request, set the async_req parameter.

        :param resource_path: Path to method endpoint.
        :param method: Method to call.
        :param path_params: Path parameters in the url.
        :param query_params: Query parameters in the url.
        :param header_params: Header parameters to be
            placed in the request header.
        :param body: Request body.
        :param post_params dict: Request post form parameters,
            for `application/x-www-form-urlencoded`, `multipart/form-data`.
        :param auth_settings list: Auth Settings names for the request.
        :param response: Response data type.
        :param files dict: key -> filename, value -> filepath,
            for `multipart/form-data`.
        :param async_req bool: execute request asynchronously
        :param _return_http_data_only: response data without head status code
                                       and headers
        :param collection_formats: dict of collection formats for path, query,
            header, and post parameters.
        :param _preload_content: if False, the urllib3.HTTPResponse object will
                                 be returned without reading/decoding response
                                 data. Default is True.
        :param _request_timeout: timeout setting for this request. If one
                                 number provided, it will be total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
        :return:
            If async_req parameter is True,
            the request will be called asynchronously.
            The method will return the request thread.
            If parameter async_req is False or missing,
            then the method will return the response directly.
        """
        if not async_req:
>           return self.__call_api(resource_path, method,
                                   path_params, query_params, header_params,
                                   body, post_params, files,
                                   response_type, auth_settings,
                                   _return_http_data_only, collection_formats,
                                   _preload_content, _request_timeout, _host)

../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api_client.py:348:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
resource_path = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices/isvc-paddle-runtime'
method = 'GET'
path_params = [('group', 'serving.kserve.io'), ('version', 'v1beta1'), ('namespace', 'kserve-ci-e2e-test'), ('plural', 'inferenceservices'), ('name', 'isvc-paddle-runtime')]
query_params = []
header_params = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
body = None, post_params = [], files = {}, response_type = 'object'
auth_settings = ['BearerToken'], _return_http_data_only = True
collection_formats = {}, _preload_content = True, _request_timeout = None
_host = None

    def __call_api(
            self, resource_path, method, path_params=None,
            query_params=None, header_params=None, body=None, post_params=None,
            files=None, response_type=None, auth_settings=None,
            _return_http_data_only=None, collection_formats=None,
            _preload_content=True, _request_timeout=None, _host=None):

        config = self.configuration

        # header parameters
        header_params = header_params or {}
        header_params.update(self.default_headers)
        if self.cookie:
            header_params['Cookie'] = self.cookie
        if header_params:
            header_params = self.sanitize_for_serialization(header_params)
            header_params = dict(self.parameters_to_tuples(header_params,
                                                           collection_formats))

        # path parameters
        if path_params:
            path_params = self.sanitize_for_serialization(path_params)
            path_params = self.parameters_to_tuples(path_params,
                                                    collection_formats)
            for k, v in path_params:
                # specified safe chars, encode everything
                resource_path = resource_path.replace(
                    '{%s}' % k,
                    quote(str(v), safe=config.safe_chars_for_path_param)
                )

        # query parameters
        if query_params:
            query_params = self.sanitize_for_serialization(query_params)
            query_params = self.parameters_to_tuples(query_params,
                                                     collection_formats)

        # post parameters
        if post_params or files:
            post_params = post_params if post_params else []
            post_params = self.sanitize_for_serialization(post_params)
            post_params = self.parameters_to_tuples(post_params,
                                                    collection_formats)
            post_params.extend(self.files_parameters(files))

        # auth setting
        self.update_params_for_auth(header_params, query_params, auth_settings)

        # body
        if body:
            body = self.sanitize_for_serialization(body)

        # request url
        if _host is None:
            url = self.configuration.host + resource_path
        else:
            # use server/host defined in path or operation instead
            url = _host + resource_path

        # perform request and return response
>       response_data = self.request(
            method, url, query_params=query_params, headers=header_params,
            post_params=post_params, body=body,
            _preload_content=_preload_content,
            _request_timeout=_request_timeout)

../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api_client.py:180:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
method = 'GET'
url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices/isvc-paddle-runtime'
query_params = []
headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
post_params = [], body = None, _preload_content = True, _request_timeout = None

    def request(self, method, url, query_params=None, headers=None,
                post_params=None, body=None, _preload_content=True,
                _request_timeout=None):
        """Makes the HTTP request using RESTClient."""
        if method == "GET":
>           return self.rest_client.GET(url,
                                        query_params=query_params,
                                        _preload_content=_preload_content,
                                        _request_timeout=_request_timeout,

../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api_client.py:373:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices/isvc-paddle-runtime'
headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
query_params = [], _preload_content = True, _request_timeout = None

    def GET(self, url, headers=None, query_params=None, _preload_content=True,
            _request_timeout=None):
>       return self.request("GET", url,
                            headers=headers,
                            _preload_content=_preload_content,
                            _request_timeout=_request_timeout,
                            query_params=query_params)

../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/rest.py:244:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
method = 'GET'
url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices/isvc-paddle-runtime'
query_params = []
headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
body = None, post_params = {}, _preload_content = True, _request_timeout = None

    def request(self, method, url, query_params=None, headers=None,
                body=None, post_params=None, _preload_content=True,
                _request_timeout=None):
        """Perform requests.

        :param method: http request method
        :param url: http request url
        :param query_params: query parameters in the url
        :param headers: http request headers
        :param body: request json body, for `application/json`
        :param post_params: request post parameters,
                            `application/x-www-form-urlencoded`
                            and `multipart/form-data`
        :param _preload_content: if False, the urllib3.HTTPResponse object will
                                 be returned without reading/decoding response
                                 data. Default is True.
        :param _request_timeout: timeout setting for this request. If one
                                 number provided, it will be total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
        """
        method = method.upper()
        assert method in ['GET', 'HEAD', 'DELETE', 'POST', 'PUT',
                          'PATCH', 'OPTIONS']

        if post_params and body:
            raise ApiValueError(
                "body parameter cannot be used with post_params parameter."
            )

        post_params = post_params or {}
        headers = headers or {}

        timeout = None
        if _request_timeout:
            if isinstance(_request_timeout, (int, ) if six.PY3 else (int, long)):  # noqa: E501,F821
                timeout = urllib3.Timeout(total=_request_timeout)
            elif (isinstance(_request_timeout, tuple) and
                  len(_request_timeout) == 2):
                timeout = urllib3.Timeout(
                    connect=_request_timeout[0], read=_request_timeout[1])

        if 'Content-Type' not in headers:
            headers['Content-Type'] = 'application/json'

        try:
            # For `POST`, `PUT`, `PATCH`, `OPTIONS`, `DELETE`
            if method in ['POST', 'PUT', 'PATCH', 'OPTIONS', 'DELETE']:
                if query_params:
                    url += '?' + urlencode(query_params)
                if (re.search('json', headers['Content-Type'], re.IGNORECASE) or
                        headers['Content-Type'] == 'application/apply-patch+yaml'):
                    if headers['Content-Type'] == 'application/json-patch+json':
                        if not isinstance(body, list):
                            headers['Content-Type'] = \
                                'application/strategic-merge-patch+json'
                    request_body = None
                    if body is not None:
                        request_body = json.dumps(body)
                    r = self.pool_manager.request(
                        method, url,
                        body=request_body,
                        preload_content=_preload_content,
                        timeout=timeout,
                        headers=headers)
                elif headers['Content-Type'] == 'application/x-www-form-urlencoded':  # noqa: E501
                    r = self.pool_manager.request(
                        method, url,
                        fields=post_params,
                        encode_multipart=False,
                        preload_content=_preload_content,
                        timeout=timeout,
                        headers=headers)
                elif headers['Content-Type'] == 'multipart/form-data':
                    # must del headers['Content-Type'], or the correct
                    # Content-Type which generated by urllib3 will be
                    # overwritten.
                    del headers['Content-Type']
                    r = self.pool_manager.request(
                        method, url,
                        fields=post_params,
                        encode_multipart=True,
                        preload_content=_preload_content,
                        timeout=timeout,
                        headers=headers)
                # Pass a `string` parameter directly in the body to support
                # other content types than Json when `body` argument is
                # provided in serialized form
                elif isinstance(body, str) or isinstance(body, bytes):
                    request_body = body
                    r = self.pool_manager.request(
                        method, url,
                        body=request_body,
                        preload_content=_preload_content,
                        timeout=timeout,
                        headers=headers)
                else:
                    # Cannot generate the request from given parameters
                    msg = """Cannot prepare a request message for provided
                             arguments. Please check that your arguments match
                             declared content type."""
                    raise ApiException(status=0, reason=msg)
            # For `GET`, `HEAD`
            else:
>               r = self.pool_manager.request(method, url,
                                              fields=query_params,
                                              preload_content=_preload_content,
                                              timeout=timeout,
                                              headers=headers)

../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/rest.py:217:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
method = 'GET'
url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices/isvc-paddle-runtime'
body = None, fields = []
headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
json = None, urlopen_kw = {'preload_content': True, 'timeout': None}

    def request(
        self,
        method: str,
        url: str,
        body: _TYPE_BODY | None = None,
        fields: _TYPE_FIELDS | None = None,
        headers: typing.Mapping[str, str] | None = None,
        json: typing.Any | None = None,
        **urlopen_kw: typing.Any,
    ) -> BaseHTTPResponse:
        """
        Make a request using :meth:`urlopen` with the appropriate encoding of
        ``fields`` based on the ``method`` used.

        This is a convenience method that requires the least amount of manual
        effort. It can be used in most situations, while still having the
        option to drop down to more specific methods when necessary, such as
        :meth:`request_encode_url`, :meth:`request_encode_body`,
        or even the lowest level :meth:`urlopen`.

        :param method:
            HTTP request method (such as GET, POST, PUT, etc.)

        :param url:
            The URL to perform the request on.

        :param body:
            Data to send in the request body, either :class:`str`, :class:`bytes`,
            an iterable of :class:`str`/:class:`bytes`, or a file-like object.

        :param fields:
            Data to encode and send in the URL or request body, depending on ``method``.

        :param headers:
            Dictionary of custom headers to send, such as User-Agent,
            If-None-Match, etc. If None, pool headers are used. If provided,
            these headers completely replace any pool-specific headers.

        :param json:
            Data to encode and send as JSON with UTF-encoded in the request body.
            The ``"Content-Type"`` header will be set to ``"application/json"``
            unless specified otherwise.
        """
        method = method.upper()

        if json is not None and body is not None:
            raise TypeError(
                "request got values for both 'body' and 'json' parameters which are mutually exclusive"
            )

        if json is not None:
            if headers is None:
                headers = self.headers

            if not ("content-type" in map(str.lower, headers.keys())):
                headers = HTTPHeaderDict(headers)
                headers["Content-Type"] = "application/json"

            body = _json.dumps(json, separators=(",", ":"), ensure_ascii=False).encode(
                "utf-8"
            )

        if body is not None:
            urlopen_kw["body"] = body

        if method in self._encode_url_methods:
>           return self.request_encode_url(
                method,
                url,
                fields=fields,  # type: ignore[arg-type]
                headers=headers,
                **urlopen_kw,
            )

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/_request_methods.py:135:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
method = 'GET'
url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices/isvc-paddle-runtime'
fields = []
headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
urlopen_kw = {'preload_content': True, 'timeout': None}
extra_kw = {'headers': {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}, 'preload_content': True, 'timeout': None}

    def request_encode_url(
        self,
        method: str,
        url: str,
        fields: _TYPE_ENCODE_URL_FIELDS | None = None,
        headers: typing.Mapping[str, str] | None = None,
        **urlopen_kw: str,
    ) -> BaseHTTPResponse:
        """
        Make a request using :meth:`urlopen` with the ``fields`` encoded in
        the url. This is useful for request methods like GET, HEAD, DELETE, etc.

        :param method:
            HTTP request method (such as GET, POST, PUT, etc.)

        :param url:
            The URL to perform the request on.

        :param fields:
            Data to encode and send in the URL.

        :param headers:
            Dictionary of custom headers to send, such as User-Agent,
            If-None-Match, etc. If None, pool headers are used. If provided,
            these headers completely replace any pool-specific headers.
        """
        if headers is None:
            headers = self.headers

        extra_kw: dict[str, typing.Any] = {"headers": headers}
        extra_kw.update(urlopen_kw)

        if fields:
            url += "?" + urlencode(fields)

>       return self.urlopen(method, url, **extra_kw)

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/_request_methods.py:182:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
method = 'GET'
url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices/isvc-paddle-runtime'
redirect = True
kw = {'assert_same_host': False, 'headers': {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}, 'preload_content': True, 'redirect': False, ...}
u = Url(scheme='https', auth=None, host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', p...ving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices/isvc-paddle-runtime', query=None, fragment=None)
conn =

    def urlopen(  # type: ignore[override]
        self, method: str, url: str, redirect: bool = True, **kw: typing.Any
    ) -> BaseHTTPResponse:
        """
        Same as :meth:`urllib3.HTTPConnectionPool.urlopen`
        with custom cross-host redirect logic and only sends the request-uri
        portion of the ``url``.

        The given ``url`` parameter must be absolute, such that an appropriate
        :class:`urllib3.connectionpool.ConnectionPool` can be chosen for it.
        """
        u = parse_url(url)

        if u.scheme is None:
            warnings.warn(
                "URLs without a scheme (ie 'https://') are deprecated and will raise an error "
                "in a future version of urllib3. To avoid this DeprecationWarning ensure all URLs "
                "start with 'https://' or 'http://'. Read more in this issue: "
                "https://github.com/urllib3/urllib3/issues/2920",
                category=DeprecationWarning,
                stacklevel=2,
            )

        conn = self.connection_from_host(u.host, port=u.port, scheme=u.scheme)

        kw["assert_same_host"] = False
        kw["redirect"] = False

        if "headers" not in kw:
            kw["headers"] = self.headers

        if self._proxy_requires_url_absolute_form(u):
            response = conn.urlopen(method, url, **kw)
        else:
>           response = conn.urlopen(method, u.request_uri, **kw)

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/poolmanager.py:457:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
method = 'GET'
url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices/isvc-paddle-runtime'
body = None
headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
retries = Retry(total=2, connect=None, read=None, redirect=None, status=None)
redirect = False, assert_same_host = False, timeout = None, pool_timeout = None
release_conn = True, chunked = False, body_pos = None, preload_content = True
decode_content = True, response_kw = {}
parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices/isvc-paddle-runtime', query=None, fragment=None)
destination_scheme = None, conn = None, release_this_conn = True
http_tunnel_required = False
err = ConnectionResetError(104, 'Connection reset by peer'), clean_exit = False

    def urlopen(  # type: ignore[override]
        self,
        method: str,
        url: str,
        body: _TYPE_BODY | None = None,
        headers: typing.Mapping[str, str] | None = None,
        retries: Retry | bool | int | None = None,
        redirect: bool = True,
        assert_same_host: bool = True,
        timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
        pool_timeout: int | None = None,
        release_conn: bool | None = None,
        chunked: bool = False,
        body_pos: _TYPE_BODY_POSITION | None = None,
        preload_content: bool = True,
        decode_content: bool = True,
        **response_kw: typing.Any,
    ) -> BaseHTTPResponse:
        """
        Get a connection from the pool and perform an HTTP request. This is the
        lowest level call for making a request, so you'll need to specify all
        the raw details.

        .. note::

           More commonly, it's appropriate to use a convenience method
           such as :meth:`request`.

        .. note::

           `release_conn` will only behave as expected if
           `preload_content=False` because we want to make
           `preload_content=False` the default behaviour someday soon without
           breaking backwards compatibility.

        :param method:
            HTTP request method (such as GET, POST, PUT, etc.)

        :param url:
            The URL to perform the request on.

        :param body:
            Data to send in the request body, either :class:`str`, :class:`bytes`,
            an iterable of :class:`str`/:class:`bytes`, or a file-like object.

        :param headers:
            Dictionary of custom headers to send, such as User-Agent,
            If-None-Match, etc. If None, pool headers are used. If provided,
            these headers completely replace any pool-specific headers.

        :param retries:
            Configure the number of retries to allow before raising a
            :class:`~urllib3.exceptions.MaxRetryError` exception.

            If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a
            :class:`~urllib3.util.retry.Retry` object for fine-grained control
            over different types of retries.
            Pass an integer number to retry connection errors that many times,
            but no other types of errors. Pass zero to never retry.

            If ``False``, then retries are disabled and any exception is raised
            immediately. Also, instead of raising a MaxRetryError on redirects,
            the redirect response will be returned.

        :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.

        :param redirect:
            If True, automatically handle redirects (status codes 301, 302,
            303, 307, 308). Each redirect counts as a retry. Disabling retries
            will disable redirect, too.

        :param assert_same_host:
            If ``True``, will make sure that the host of the pool requests is
            consistent else will raise HostChangedError. When ``False``, you can
            use the pool on an HTTP proxy and request foreign hosts.

        :param timeout:
            If specified, overrides the default timeout for this one
            request. It may be a float (in seconds) or an instance of
            :class:`urllib3.util.Timeout`.

        :param pool_timeout:
            If set and the pool is set to block=True, then this method will
            block for ``pool_timeout`` seconds and raise EmptyPoolError if no
            connection is available within the time period.

        :param bool preload_content:
            If True, the response's body will be preloaded into memory.

        :param bool decode_content:
            If True, will attempt to decode the body based on the
            'content-encoding' header.

        :param release_conn:
            If False, then the urlopen call will not release the connection
            back into the pool once a response is received (but will release if
            you read the entire contents of the response such as when
            `preload_content=True`). This is useful if you're not preloading
            the response's content immediately. You will need to call
            ``r.release_conn()`` on the response ``r`` to return the connection
            back into the pool. If None, it takes the value of ``preload_content``
            which defaults to ``True``.

        :param bool chunked:
            If True, urllib3 will send the body using chunked transfer
            encoding. Otherwise, urllib3 will send the body using the standard
            content-length form. Defaults to False.

        :param int body_pos:
            Position to seek to in file-like body in the event of a retry or
            redirect. Typically this won't need to be set because urllib3 will
            auto-populate the value when needed.
        """
        parsed_url = parse_url(url)
        destination_scheme = parsed_url.scheme

        if headers is None:
            headers = self.headers

        if not isinstance(retries, Retry):
            retries = Retry.from_int(retries, redirect=redirect, default=self.retries)

        if release_conn is None:
            release_conn = preload_content

        # Check host
        if assert_same_host and not self.is_same_host(url):
            raise HostChangedError(self, url, retries)

        # Ensure that the URL we're connecting to is properly encoded
        if url.startswith("/"):
            url = to_str(_encode_target(url))
        else:
            url = to_str(parsed_url.url)

        conn = None

        # Track whether `conn` needs to be released before
        # returning/raising/recursing. Update this variable if necessary, and
        # leave `release_conn` constant throughout the function. That way, if
        # the function recurses, the original value of `release_conn` will be
        # passed down into the recursive call, and its value will be respected.
        #
        # See issue #651 [1] for details.
        #
        # [1]
        release_this_conn = release_conn

        http_tunnel_required = connection_requires_http_tunnel(
            self.proxy, self.proxy_config, destination_scheme
        )

        # Merge the proxy headers. Only done when not using HTTP CONNECT. We
        # have to copy the headers dict so we can safely change it without those
        # changes being reflected in anyone else's copy.
        if not http_tunnel_required:
            headers = headers.copy()  # type: ignore[attr-defined]
            headers.update(self.proxy_headers)  # type: ignore[union-attr]

        # Must keep the exception bound to a separate variable or else Python 3
        # complains about UnboundLocalError.
        err = None

        # Keep track of whether we cleanly exited the except block. This
        # ensures we do proper cleanup in finally.
        clean_exit = False

        # Rewind body position, if needed. Record current position
        # for future rewinds in the event of a redirect/retry.
        body_pos = set_file_position(body, body_pos)

        try:
            # Request a connection from the queue.
            timeout_obj = self._get_timeout(timeout)
            conn = self._get_conn(timeout=pool_timeout)

            conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]

            # Is this a closed/new connection that requires CONNECT tunnelling?
            if self.proxy is not None and http_tunnel_required and conn.is_closed:
                try:
                    self._prepare_proxy(conn)
                except (BaseSSLError, OSError, SocketTimeout) as e:
                    self._raise_timeout(
                        err=e, url=self.proxy.url, timeout_value=conn.timeout
                    )
                    raise

            # If we're going to release the connection in ``finally:``, then
            # the response doesn't need to know about the connection. Otherwise
            # it will also try to release it and we'll have a double-release
            # mess.
            response_conn = conn if not release_conn else None

            # Make the request on the HTTPConnection object
            response = self._make_request(
                conn,
                method,
                url,
                timeout=timeout_obj,
                body=body,
                headers=headers,
                chunked=chunked,
                retries=retries,
                response_conn=response_conn,
                preload_content=preload_content,
                decode_content=decode_content,
                **response_kw,
            )

            # Everything went great!
            clean_exit = True

        except EmptyPoolError:
            # Didn't get a connection from the pool, no need to clean up
            clean_exit = True
            release_this_conn = False
            raise

        except (
            TimeoutError,
            HTTPException,
            OSError,
            ProtocolError,
            BaseSSLError,
            SSLError,
            CertificateError,
            ProxyError,
        ) as e:
            # Discard the connection for these exceptions. It will be
            # replaced during the next _get_conn() call.
            clean_exit = False
            new_e: Exception = e
            if isinstance(e, (BaseSSLError, CertificateError)):
                new_e = SSLError(e)
            if isinstance(
                new_e,
                (
                    OSError,
                    NewConnectionError,
                    TimeoutError,
                    SSLError,
                    HTTPException,
                ),
            ) and (conn and conn.proxy and not conn.has_connected_to_proxy):
                new_e = _wrap_proxy_error(new_e, conn.proxy.scheme)
            elif isinstance(new_e, (OSError, HTTPException)):
                new_e = ProtocolError("Connection aborted.", new_e)

            retries = retries.increment(
                method, url, error=new_e, _pool=self, _stacktrace=sys.exc_info()[2]
            )
            retries.sleep()

            # Keep track of the error for the retry warning.
            err = e

        finally:
            if not clean_exit:
                # We hit some kind of exception, handled or otherwise. We need
                # to throw the connection away unless explicitly told not to.
                # Close the connection, set the variable to None, and make sure
                # we put the None back in the pool to avoid leaking it.
                if conn:
                    conn.close()
                    conn = None
                release_this_conn = True

            if release_this_conn:
                # Put the connection back to be reused. If the connection is
                # expired then it will be None, which will get replaced with a
                # fresh connection during _get_conn.
                self._put_conn(conn)

        if not conn:
            # Try again
            log.warning(
                "Retrying (%r) after connection broken by '%r': %s", retries, err, url
            )
>           return self.urlopen(
                method,
                url,
                body,
                headers,
                retries,
                redirect,
                assert_same_host,
                timeout=timeout,
                pool_timeout=pool_timeout,
                release_conn=release_conn,
                chunked=chunked,
                body_pos=body_pos,
                preload_content=preload_content,
                decode_content=decode_content,
                **response_kw,
            )

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:871:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
method = 'GET'
url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices/isvc-paddle-runtime'
body = None
headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
retries = Retry(total=1, connect=None, read=None, redirect=None, status=None)
redirect = False, assert_same_host = False, timeout = None, pool_timeout = None
release_conn = True, chunked = False, body_pos = None, preload_content = True
decode_content = True, response_kw = {}
parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices/isvc-paddle-runtime', query=None, fragment=None)
destination_scheme = None, conn = None, release_this_conn = True
http_tunnel_required = False
err = ConnectTimeoutError(...), clean_exit = False

    def urlopen(  # type: ignore[override]
        self,
        method: str,
        url: str,
        body: _TYPE_BODY | None = None,
        headers: typing.Mapping[str, str] | None = None,
        retries: Retry | bool | int | None = None,
        redirect: bool = True,
        assert_same_host: bool = True,
        timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
        pool_timeout: int | None = None,
        release_conn: bool | None = None,
        chunked: bool = False,
        body_pos: _TYPE_BODY_POSITION | None = None,
        preload_content: bool = True,
        decode_content: bool = True,
        **response_kw: typing.Any,
    ) -> BaseHTTPResponse:
        """
        Get a connection from the pool and perform an HTTP request. This is the
        lowest level call for making a request, so you'll need to specify all
        the raw details.

        .. note::

           More commonly, it's appropriate to use a convenience method
           such as :meth:`request`.

        .. note::

           `release_conn` will only behave as expected if
           `preload_content=False` because we want to make
           `preload_content=False` the default behaviour someday soon without
           breaking backwards compatibility.

        :param method:
            HTTP request method (such as GET, POST, PUT, etc.)

        :param url:
            The URL to perform the request on.

        :param body:
            Data to send in the request body, either :class:`str`, :class:`bytes`,
            an iterable of :class:`str`/:class:`bytes`, or a file-like object.

        :param headers:
            Dictionary of custom headers to send, such as User-Agent,
            If-None-Match, etc. If None, pool headers are used. If provided,
            these headers completely replace any pool-specific headers.

        :param retries:
            Configure the number of retries to allow before raising a
            :class:`~urllib3.exceptions.MaxRetryError` exception.

            If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a
            :class:`~urllib3.util.retry.Retry` object for fine-grained control
            over different types of retries.
            Pass an integer number to retry connection errors that many times,
            but no other types of errors. Pass zero to never retry.

            If ``False``, then retries are disabled and any exception is raised
            immediately. Also, instead of raising a MaxRetryError on redirects,
            the redirect response will be returned.

        :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.

        :param redirect:
            If True, automatically handle redirects (status codes 301, 302,
            303, 307, 308). Each redirect counts as a retry. Disabling retries
            will disable redirect, too.

        :param assert_same_host:
            If ``True``, will make sure that the host of the pool requests is
            consistent else will raise HostChangedError. When ``False``, you can
            use the pool on an HTTP proxy and request foreign hosts.

        :param timeout:
            If specified, overrides the default timeout for this one
            request. It may be a float (in seconds) or an instance of
            :class:`urllib3.util.Timeout`.

        :param pool_timeout:
            If set and the pool is set to block=True, then this method will
            block for ``pool_timeout`` seconds and raise EmptyPoolError if no
            connection is available within the time period.

        :param bool preload_content:
            If True, the response's body will be preloaded into memory.

        :param bool decode_content:
            If True, will attempt to decode the body based on the
            'content-encoding' header.
        :param release_conn:
            If False, then the urlopen call will not release the connection
            back into the pool once a response is received (but will release if
            you read the entire contents of the response such as when
            `preload_content=True`). This is useful if you're not preloading
            the response's content immediately. You will need to call
            ``r.release_conn()`` on the response ``r`` to return the connection
            back into the pool. If None, it takes the value of ``preload_content``
            which defaults to ``True``.

        :param bool chunked:
            If True, urllib3 will send the body using chunked transfer
            encoding. Otherwise, urllib3 will send the body using the standard
            content-length form. Defaults to False.

        :param int body_pos:
            Position to seek to in file-like body in the event of a retry or
            redirect. Typically this won't need to be set because urllib3 will
            auto-populate the value when needed.
        """
        parsed_url = parse_url(url)
        destination_scheme = parsed_url.scheme

        if headers is None:
            headers = self.headers

        if not isinstance(retries, Retry):
            retries = Retry.from_int(retries, redirect=redirect, default=self.retries)

        if release_conn is None:
            release_conn = preload_content

        # Check host
        if assert_same_host and not self.is_same_host(url):
            raise HostChangedError(self, url, retries)

        # Ensure that the URL we're connecting to is properly encoded
        if url.startswith("/"):
            url = to_str(_encode_target(url))
        else:
            url = to_str(parsed_url.url)

        conn = None

        # Track whether `conn` needs to be released before
        # returning/raising/recursing. Update this variable if necessary, and
        # leave `release_conn` constant throughout the function. That way, if
        # the function recurses, the original value of `release_conn` will be
        # passed down into the recursive call, and its value will be respected.
        #
        # See issue #651 [1] for details.
        #
        # [1]
        release_this_conn = release_conn

        http_tunnel_required = connection_requires_http_tunnel(
            self.proxy, self.proxy_config, destination_scheme
        )

        # Merge the proxy headers. Only done when not using HTTP CONNECT. We
        # have to copy the headers dict so we can safely change it without those
        # changes being reflected in anyone else's copy.
        if not http_tunnel_required:
            headers = headers.copy()  # type: ignore[attr-defined]
            headers.update(self.proxy_headers)  # type: ignore[union-attr]

        # Must keep the exception bound to a separate variable or else Python 3
        # complains about UnboundLocalError.
        err = None

        # Keep track of whether we cleanly exited the except block. This
        # ensures we do proper cleanup in finally.
        clean_exit = False

        # Rewind body position, if needed. Record current position
        # for future rewinds in the event of a redirect/retry.
        body_pos = set_file_position(body, body_pos)

        try:
            # Request a connection from the queue.
            timeout_obj = self._get_timeout(timeout)
            conn = self._get_conn(timeout=pool_timeout)

            conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]

            # Is this a closed/new connection that requires CONNECT tunnelling?
            if self.proxy is not None and http_tunnel_required and conn.is_closed:
                try:
                    self._prepare_proxy(conn)
                except (BaseSSLError, OSError, SocketTimeout) as e:
                    self._raise_timeout(
                        err=e, url=self.proxy.url, timeout_value=conn.timeout
                    )
                    raise

            # If we're going to release the connection in ``finally:``, then
            # the response doesn't need to know about the connection. Otherwise
            # it will also try to release it and we'll have a double-release
            # mess.
            response_conn = conn if not release_conn else None

            # Make the request on the HTTPConnection object
            response = self._make_request(
                conn,
                method,
                url,
                timeout=timeout_obj,
                body=body,
                headers=headers,
                chunked=chunked,
                retries=retries,
                response_conn=response_conn,
                preload_content=preload_content,
                decode_content=decode_content,
                **response_kw,
            )

            # Everything went great!
            clean_exit = True

        except EmptyPoolError:
            # Didn't get a connection from the pool, no need to clean up
            clean_exit = True
            release_this_conn = False
            raise

        except (
            TimeoutError,
            HTTPException,
            OSError,
            ProtocolError,
            BaseSSLError,
            SSLError,
            CertificateError,
            ProxyError,
        ) as e:
            # Discard the connection for these exceptions. It will be
            # replaced during the next _get_conn() call.
            clean_exit = False
            new_e: Exception = e
            if isinstance(e, (BaseSSLError, CertificateError)):
                new_e = SSLError(e)
            if isinstance(
                new_e,
                (
                    OSError,
                    NewConnectionError,
                    TimeoutError,
                    SSLError,
                    HTTPException,
                ),
            ) and (conn and conn.proxy and not conn.has_connected_to_proxy):
                new_e = _wrap_proxy_error(new_e, conn.proxy.scheme)
            elif isinstance(new_e, (OSError, HTTPException)):
                new_e = ProtocolError("Connection aborted.", new_e)

            retries = retries.increment(
                method, url, error=new_e, _pool=self, _stacktrace=sys.exc_info()[2]
            )
            retries.sleep()

            # Keep track of the error for the retry warning.
            err = e

        finally:
            if not clean_exit:
                # We hit some kind of exception, handled or otherwise. We need
                # to throw the connection away unless explicitly told not to.
                # Close the connection, set the variable to None, and make sure
                # we put the None back in the pool to avoid leaking it.
                if conn:
                    conn.close()
                    conn = None
                release_this_conn = True

            if release_this_conn:
                # Put the connection back to be reused. If the connection is
                # expired then it will be None, which will get replaced with a
                # fresh connection during _get_conn.
[e2e-predictor] self._put_conn(conn) [e2e-predictor] [e2e-predictor] if not conn: [e2e-predictor] # Try again [e2e-predictor] log.warning( [e2e-predictor] "Retrying (%r) after connection broken by '%r': %s", retries, err, url [e2e-predictor] ) [e2e-predictor] > return self.urlopen( [e2e-predictor] method, [e2e-predictor] url, [e2e-predictor] body, [e2e-predictor] headers, [e2e-predictor] retries, [e2e-predictor] redirect, [e2e-predictor] assert_same_host, [e2e-predictor] timeout=timeout, [e2e-predictor] pool_timeout=pool_timeout, [e2e-predictor] release_conn=release_conn, [e2e-predictor] chunked=chunked, [e2e-predictor] body_pos=body_pos, [e2e-predictor] preload_content=preload_content, [e2e-predictor] decode_content=decode_content, [e2e-predictor] **response_kw, [e2e-predictor] ) [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:871: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] method = 'GET' [e2e-predictor] url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices/isvc-paddle-runtime' [e2e-predictor] body = None [e2e-predictor] headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'} [e2e-predictor] retries = Retry(total=0, connect=None, read=None, redirect=None, status=None) [e2e-predictor] redirect = False, assert_same_host = False, timeout = None, pool_timeout = None [e2e-predictor] release_conn = True, chunked = False, body_pos = None, preload_content = True [e2e-predictor] decode_content = True, response_kw = {} [e2e-predictor] parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices/isvc-paddle-runtime', query=None, fragment=None) [e2e-predictor] destination_scheme = None, conn = None, release_this_conn 
= True [e2e-predictor] http_tunnel_required = False [e2e-predictor] err = NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.c...a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)") [e2e-predictor] clean_exit = False [e2e-predictor] [e2e-predictor] def urlopen( # type: ignore[override] [e2e-predictor] self, [e2e-predictor] method: str, [e2e-predictor] url: str, [e2e-predictor] body: _TYPE_BODY | None = None, [e2e-predictor] headers: typing.Mapping[str, str] | None = None, [e2e-predictor] retries: Retry | bool | int | None = None, [e2e-predictor] redirect: bool = True, [e2e-predictor] assert_same_host: bool = True, [e2e-predictor] timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, [e2e-predictor] pool_timeout: int | None = None, [e2e-predictor] release_conn: bool | None = None, [e2e-predictor] chunked: bool = False, [e2e-predictor] body_pos: _TYPE_BODY_POSITION | None = None, [e2e-predictor] preload_content: bool = True, [e2e-predictor] decode_content: bool = True, [e2e-predictor] **response_kw: typing.Any, [e2e-predictor] ) -> BaseHTTPResponse: [e2e-predictor] """ [e2e-predictor] Get a connection from the pool and perform an HTTP request. This is the [e2e-predictor] lowest level call for making a request, so you'll need to specify all [e2e-predictor] the raw details. [e2e-predictor] [e2e-predictor] .. note:: [e2e-predictor] [e2e-predictor] More commonly, it's appropriate to use a convenience method [e2e-predictor] such as :meth:`request`. [e2e-predictor] [e2e-predictor] .. note:: [e2e-predictor] [e2e-predictor] `release_conn` will only behave as expected if [e2e-predictor] `preload_content=False` because we want to make [e2e-predictor] `preload_content=False` the default behaviour someday soon without [e2e-predictor] breaking backwards compatibility. 
[e2e-predictor] [e2e-predictor] :param method: [e2e-predictor] HTTP request method (such as GET, POST, PUT, etc.) [e2e-predictor] [e2e-predictor] :param url: [e2e-predictor] The URL to perform the request on. [e2e-predictor] [e2e-predictor] :param body: [e2e-predictor] Data to send in the request body, either :class:`str`, :class:`bytes`, [e2e-predictor] an iterable of :class:`str`/:class:`bytes`, or a file-like object. [e2e-predictor] [e2e-predictor] :param headers: [e2e-predictor] Dictionary of custom headers to send, such as User-Agent, [e2e-predictor] If-None-Match, etc. If None, pool headers are used. If provided, [e2e-predictor] these headers completely replace any pool-specific headers. [e2e-predictor] [e2e-predictor] :param retries: [e2e-predictor] Configure the number of retries to allow before raising a [e2e-predictor] :class:`~urllib3.exceptions.MaxRetryError` exception. [e2e-predictor] [e2e-predictor] If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a [e2e-predictor] :class:`~urllib3.util.retry.Retry` object for fine-grained control [e2e-predictor] over different types of retries. [e2e-predictor] Pass an integer number to retry connection errors that many times, [e2e-predictor] but no other types of errors. Pass zero to never retry. [e2e-predictor] [e2e-predictor] If ``False``, then retries are disabled and any exception is raised [e2e-predictor] immediately. Also, instead of raising a MaxRetryError on redirects, [e2e-predictor] the redirect response will be returned. [e2e-predictor] [e2e-predictor] :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. [e2e-predictor] [e2e-predictor] :param redirect: [e2e-predictor] If True, automatically handle redirects (status codes 301, 302, [e2e-predictor] 303, 307, 308). Each redirect counts as a retry. Disabling retries [e2e-predictor] will disable redirect, too. 
[e2e-predictor] [e2e-predictor] :param assert_same_host: [e2e-predictor] If ``True``, will make sure that the host of the pool requests is [e2e-predictor] consistent else will raise HostChangedError. When ``False``, you can [e2e-predictor] use the pool on an HTTP proxy and request foreign hosts. [e2e-predictor] [e2e-predictor] :param timeout: [e2e-predictor] If specified, overrides the default timeout for this one [e2e-predictor] request. It may be a float (in seconds) or an instance of [e2e-predictor] :class:`urllib3.util.Timeout`. [e2e-predictor] [e2e-predictor] :param pool_timeout: [e2e-predictor] If set and the pool is set to block=True, then this method will [e2e-predictor] block for ``pool_timeout`` seconds and raise EmptyPoolError if no [e2e-predictor] connection is available within the time period. [e2e-predictor] [e2e-predictor] :param bool preload_content: [e2e-predictor] If True, the response's body will be preloaded into memory. [e2e-predictor] [e2e-predictor] :param bool decode_content: [e2e-predictor] If True, will attempt to decode the body based on the [e2e-predictor] 'content-encoding' header. [e2e-predictor] [e2e-predictor] :param release_conn: [e2e-predictor] If False, then the urlopen call will not release the connection [e2e-predictor] back into the pool once a response is received (but will release if [e2e-predictor] you read the entire contents of the response such as when [e2e-predictor] `preload_content=True`). This is useful if you're not preloading [e2e-predictor] the response's content immediately. You will need to call [e2e-predictor] ``r.release_conn()`` on the response ``r`` to return the connection [e2e-predictor] back into the pool. If None, it takes the value of ``preload_content`` [e2e-predictor] which defaults to ``True``. [e2e-predictor] [e2e-predictor] :param bool chunked: [e2e-predictor] If True, urllib3 will send the body using chunked transfer [e2e-predictor] encoding. 
Otherwise, urllib3 will send the body using the standard [e2e-predictor] content-length form. Defaults to False. [e2e-predictor] [e2e-predictor] :param int body_pos: [e2e-predictor] Position to seek to in file-like body in the event of a retry or [e2e-predictor] redirect. Typically this won't need to be set because urllib3 will [e2e-predictor] auto-populate the value when needed. [e2e-predictor] """ [e2e-predictor] parsed_url = parse_url(url) [e2e-predictor] destination_scheme = parsed_url.scheme [e2e-predictor] [e2e-predictor] if headers is None: [e2e-predictor] headers = self.headers [e2e-predictor] [e2e-predictor] if not isinstance(retries, Retry): [e2e-predictor] retries = Retry.from_int(retries, redirect=redirect, default=self.retries) [e2e-predictor] [e2e-predictor] if release_conn is None: [e2e-predictor] release_conn = preload_content [e2e-predictor] [e2e-predictor] # Check host [e2e-predictor] if assert_same_host and not self.is_same_host(url): [e2e-predictor] raise HostChangedError(self, url, retries) [e2e-predictor] [e2e-predictor] # Ensure that the URL we're connecting to is properly encoded [e2e-predictor] if url.startswith("/"): [e2e-predictor] url = to_str(_encode_target(url)) [e2e-predictor] else: [e2e-predictor] url = to_str(parsed_url.url) [e2e-predictor] [e2e-predictor] conn = None [e2e-predictor] [e2e-predictor] # Track whether `conn` needs to be released before [e2e-predictor] # returning/raising/recursing. Update this variable if necessary, and [e2e-predictor] # leave `release_conn` constant throughout the function. That way, if [e2e-predictor] # the function recurses, the original value of `release_conn` will be [e2e-predictor] # passed down into the recursive call, and its value will be respected. [e2e-predictor] # [e2e-predictor] # See issue #651 [1] for details. 
[e2e-predictor] # [e2e-predictor] # [1] [e2e-predictor] release_this_conn = release_conn [e2e-predictor] [e2e-predictor] http_tunnel_required = connection_requires_http_tunnel( [e2e-predictor] self.proxy, self.proxy_config, destination_scheme [e2e-predictor] ) [e2e-predictor] [e2e-predictor] # Merge the proxy headers. Only done when not using HTTP CONNECT. We [e2e-predictor] # have to copy the headers dict so we can safely change it without those [e2e-predictor] # changes being reflected in anyone else's copy. [e2e-predictor] if not http_tunnel_required: [e2e-predictor] headers = headers.copy() # type: ignore[attr-defined] [e2e-predictor] headers.update(self.proxy_headers) # type: ignore[union-attr] [e2e-predictor] [e2e-predictor] # Must keep the exception bound to a separate variable or else Python 3 [e2e-predictor] # complains about UnboundLocalError. [e2e-predictor] err = None [e2e-predictor] [e2e-predictor] # Keep track of whether we cleanly exited the except block. This [e2e-predictor] # ensures we do proper cleanup in finally. [e2e-predictor] clean_exit = False [e2e-predictor] [e2e-predictor] # Rewind body position, if needed. Record current position [e2e-predictor] # for future rewinds in the event of a redirect/retry. [e2e-predictor] body_pos = set_file_position(body, body_pos) [e2e-predictor] [e2e-predictor] try: [e2e-predictor] # Request a connection from the queue. [e2e-predictor] timeout_obj = self._get_timeout(timeout) [e2e-predictor] conn = self._get_conn(timeout=pool_timeout) [e2e-predictor] [e2e-predictor] conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] [e2e-predictor] [e2e-predictor] # Is this a closed/new connection that requires CONNECT tunnelling? 
[e2e-predictor] if self.proxy is not None and http_tunnel_required and conn.is_closed: [e2e-predictor] try: [e2e-predictor] self._prepare_proxy(conn) [e2e-predictor] except (BaseSSLError, OSError, SocketTimeout) as e: [e2e-predictor] self._raise_timeout( [e2e-predictor] err=e, url=self.proxy.url, timeout_value=conn.timeout [e2e-predictor] ) [e2e-predictor] raise [e2e-predictor] [e2e-predictor] # If we're going to release the connection in ``finally:``, then [e2e-predictor] # the response doesn't need to know about the connection. Otherwise [e2e-predictor] # it will also try to release it and we'll have a double-release [e2e-predictor] # mess. [e2e-predictor] response_conn = conn if not release_conn else None [e2e-predictor] [e2e-predictor] # Make the request on the HTTPConnection object [e2e-predictor] response = self._make_request( [e2e-predictor] conn, [e2e-predictor] method, [e2e-predictor] url, [e2e-predictor] timeout=timeout_obj, [e2e-predictor] body=body, [e2e-predictor] headers=headers, [e2e-predictor] chunked=chunked, [e2e-predictor] retries=retries, [e2e-predictor] response_conn=response_conn, [e2e-predictor] preload_content=preload_content, [e2e-predictor] decode_content=decode_content, [e2e-predictor] **response_kw, [e2e-predictor] ) [e2e-predictor] [e2e-predictor] # Everything went great! [e2e-predictor] clean_exit = True [e2e-predictor] [e2e-predictor] except EmptyPoolError: [e2e-predictor] # Didn't get a connection from the pool, no need to clean up [e2e-predictor] clean_exit = True [e2e-predictor] release_this_conn = False [e2e-predictor] raise [e2e-predictor] [e2e-predictor] except ( [e2e-predictor] TimeoutError, [e2e-predictor] HTTPException, [e2e-predictor] OSError, [e2e-predictor] ProtocolError, [e2e-predictor] BaseSSLError, [e2e-predictor] SSLError, [e2e-predictor] CertificateError, [e2e-predictor] ProxyError, [e2e-predictor] ) as e: [e2e-predictor] # Discard the connection for these exceptions. 
It will be [e2e-predictor] # replaced during the next _get_conn() call. [e2e-predictor] clean_exit = False [e2e-predictor] new_e: Exception = e [e2e-predictor] if isinstance(e, (BaseSSLError, CertificateError)): [e2e-predictor] new_e = SSLError(e) [e2e-predictor] if isinstance( [e2e-predictor] new_e, [e2e-predictor] ( [e2e-predictor] OSError, [e2e-predictor] NewConnectionError, [e2e-predictor] TimeoutError, [e2e-predictor] SSLError, [e2e-predictor] HTTPException, [e2e-predictor] ), [e2e-predictor] ) and (conn and conn.proxy and not conn.has_connected_to_proxy): [e2e-predictor] new_e = _wrap_proxy_error(new_e, conn.proxy.scheme) [e2e-predictor] elif isinstance(new_e, (OSError, HTTPException)): [e2e-predictor] new_e = ProtocolError("Connection aborted.", new_e) [e2e-predictor] [e2e-predictor] retries = retries.increment( [e2e-predictor] method, url, error=new_e, _pool=self, _stacktrace=sys.exc_info()[2] [e2e-predictor] ) [e2e-predictor] retries.sleep() [e2e-predictor] [e2e-predictor] # Keep track of the error for the retry warning. [e2e-predictor] err = e [e2e-predictor] [e2e-predictor] finally: [e2e-predictor] if not clean_exit: [e2e-predictor] # We hit some kind of exception, handled or otherwise. We need [e2e-predictor] # to throw the connection away unless explicitly told not to. [e2e-predictor] # Close the connection, set the variable to None, and make sure [e2e-predictor] # we put the None back in the pool to avoid leaking it. [e2e-predictor] if conn: [e2e-predictor] conn.close() [e2e-predictor] conn = None [e2e-predictor] release_this_conn = True [e2e-predictor] [e2e-predictor] if release_this_conn: [e2e-predictor] # Put the connection back to be reused. If the connection is [e2e-predictor] # expired then it will be None, which will get replaced with a [e2e-predictor] # fresh connection during _get_conn. 
[e2e-predictor] self._put_conn(conn) [e2e-predictor] [e2e-predictor] if not conn: [e2e-predictor] # Try again [e2e-predictor] log.warning( [e2e-predictor] "Retrying (%r) after connection broken by '%r': %s", retries, err, url [e2e-predictor] ) [e2e-predictor] > return self.urlopen( [e2e-predictor] method, [e2e-predictor] url, [e2e-predictor] body, [e2e-predictor] headers, [e2e-predictor] retries, [e2e-predictor] redirect, [e2e-predictor] assert_same_host, [e2e-predictor] timeout=timeout, [e2e-predictor] pool_timeout=pool_timeout, [e2e-predictor] release_conn=release_conn, [e2e-predictor] chunked=chunked, [e2e-predictor] body_pos=body_pos, [e2e-predictor] preload_content=preload_content, [e2e-predictor] decode_content=decode_content, [e2e-predictor] **response_kw, [e2e-predictor] ) [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:871: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] method = 'GET' [e2e-predictor] url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices/isvc-paddle-runtime' [e2e-predictor] body = None [e2e-predictor] headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'} [e2e-predictor] retries = Retry(total=0, connect=None, read=None, redirect=None, status=None) [e2e-predictor] redirect = False, assert_same_host = False, timeout = None, pool_timeout = None [e2e-predictor] release_conn = True, chunked = False, body_pos = None, preload_content = True [e2e-predictor] decode_content = True, response_kw = {} [e2e-predictor] parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices/isvc-paddle-runtime', query=None, fragment=None) [e2e-predictor] destination_scheme = None, conn = None, release_this_conn 
= True [e2e-predictor] http_tunnel_required = False, err = None, clean_exit = False [e2e-predictor] [e2e-predictor] def urlopen( # type: ignore[override] [e2e-predictor] self, [e2e-predictor] method: str, [e2e-predictor] url: str, [e2e-predictor] body: _TYPE_BODY | None = None, [e2e-predictor] headers: typing.Mapping[str, str] | None = None, [e2e-predictor] retries: Retry | bool | int | None = None, [e2e-predictor] redirect: bool = True, [e2e-predictor] assert_same_host: bool = True, [e2e-predictor] timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, [e2e-predictor] pool_timeout: int | None = None, [e2e-predictor] release_conn: bool | None = None, [e2e-predictor] chunked: bool = False, [e2e-predictor] body_pos: _TYPE_BODY_POSITION | None = None, [e2e-predictor] preload_content: bool = True, [e2e-predictor] decode_content: bool = True, [e2e-predictor] **response_kw: typing.Any, [e2e-predictor] ) -> BaseHTTPResponse: [e2e-predictor] """ [e2e-predictor] Get a connection from the pool and perform an HTTP request. This is the [e2e-predictor] lowest level call for making a request, so you'll need to specify all [e2e-predictor] the raw details. [e2e-predictor] [e2e-predictor] .. note:: [e2e-predictor] [e2e-predictor] More commonly, it's appropriate to use a convenience method [e2e-predictor] such as :meth:`request`. [e2e-predictor] [e2e-predictor] .. note:: [e2e-predictor] [e2e-predictor] `release_conn` will only behave as expected if [e2e-predictor] `preload_content=False` because we want to make [e2e-predictor] `preload_content=False` the default behaviour someday soon without [e2e-predictor] breaking backwards compatibility. [e2e-predictor] [e2e-predictor] :param method: [e2e-predictor] HTTP request method (such as GET, POST, PUT, etc.) [e2e-predictor] [e2e-predictor] :param url: [e2e-predictor] The URL to perform the request on. 
[e2e-predictor] [e2e-predictor] :param body: [e2e-predictor] Data to send in the request body, either :class:`str`, :class:`bytes`, [e2e-predictor] an iterable of :class:`str`/:class:`bytes`, or a file-like object. [e2e-predictor] [e2e-predictor] :param headers: [e2e-predictor] Dictionary of custom headers to send, such as User-Agent, [e2e-predictor] If-None-Match, etc. If None, pool headers are used. If provided, [e2e-predictor] these headers completely replace any pool-specific headers. [e2e-predictor] [e2e-predictor] :param retries: [e2e-predictor] Configure the number of retries to allow before raising a [e2e-predictor] :class:`~urllib3.exceptions.MaxRetryError` exception. [e2e-predictor] [e2e-predictor] If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a [e2e-predictor] :class:`~urllib3.util.retry.Retry` object for fine-grained control [e2e-predictor] over different types of retries. [e2e-predictor] Pass an integer number to retry connection errors that many times, [e2e-predictor] but no other types of errors. Pass zero to never retry. [e2e-predictor] [e2e-predictor] If ``False``, then retries are disabled and any exception is raised [e2e-predictor] immediately. Also, instead of raising a MaxRetryError on redirects, [e2e-predictor] the redirect response will be returned. [e2e-predictor] [e2e-predictor] :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. [e2e-predictor] [e2e-predictor] :param redirect: [e2e-predictor] If True, automatically handle redirects (status codes 301, 302, [e2e-predictor] 303, 307, 308). Each redirect counts as a retry. Disabling retries [e2e-predictor] will disable redirect, too. [e2e-predictor] [e2e-predictor] :param assert_same_host: [e2e-predictor] If ``True``, will make sure that the host of the pool requests is [e2e-predictor] consistent else will raise HostChangedError. When ``False``, you can [e2e-predictor] use the pool on an HTTP proxy and request foreign hosts. 
[e2e-predictor] [e2e-predictor] :param timeout: [e2e-predictor] If specified, overrides the default timeout for this one [e2e-predictor] request. It may be a float (in seconds) or an instance of [e2e-predictor] :class:`urllib3.util.Timeout`. [e2e-predictor] [e2e-predictor] :param pool_timeout: [e2e-predictor] If set and the pool is set to block=True, then this method will [e2e-predictor] block for ``pool_timeout`` seconds and raise EmptyPoolError if no [e2e-predictor] connection is available within the time period. [e2e-predictor] [e2e-predictor] :param bool preload_content: [e2e-predictor] If True, the response's body will be preloaded into memory. [e2e-predictor] [e2e-predictor] :param bool decode_content: [e2e-predictor] If True, will attempt to decode the body based on the [e2e-predictor] 'content-encoding' header. [e2e-predictor] [e2e-predictor] :param release_conn: [e2e-predictor] If False, then the urlopen call will not release the connection [e2e-predictor] back into the pool once a response is received (but will release if [e2e-predictor] you read the entire contents of the response such as when [e2e-predictor] `preload_content=True`). This is useful if you're not preloading [e2e-predictor] the response's content immediately. You will need to call [e2e-predictor] ``r.release_conn()`` on the response ``r`` to return the connection [e2e-predictor] back into the pool. If None, it takes the value of ``preload_content`` [e2e-predictor] which defaults to ``True``. [e2e-predictor] [e2e-predictor] :param bool chunked: [e2e-predictor] If True, urllib3 will send the body using chunked transfer [e2e-predictor] encoding. Otherwise, urllib3 will send the body using the standard [e2e-predictor] content-length form. Defaults to False. [e2e-predictor] [e2e-predictor] :param int body_pos: [e2e-predictor] Position to seek to in file-like body in the event of a retry or [e2e-predictor] redirect. 
Typically this won't need to be set because urllib3 will [e2e-predictor] auto-populate the value when needed. [e2e-predictor] """ [e2e-predictor] parsed_url = parse_url(url) [e2e-predictor] destination_scheme = parsed_url.scheme [e2e-predictor] [e2e-predictor] if headers is None: [e2e-predictor] headers = self.headers [e2e-predictor] [e2e-predictor] if not isinstance(retries, Retry): [e2e-predictor] retries = Retry.from_int(retries, redirect=redirect, default=self.retries) [e2e-predictor] [e2e-predictor] if release_conn is None: [e2e-predictor] release_conn = preload_content [e2e-predictor] [e2e-predictor] # Check host [e2e-predictor] if assert_same_host and not self.is_same_host(url): [e2e-predictor] raise HostChangedError(self, url, retries) [e2e-predictor] [e2e-predictor] # Ensure that the URL we're connecting to is properly encoded [e2e-predictor] if url.startswith("/"): [e2e-predictor] url = to_str(_encode_target(url)) [e2e-predictor] else: [e2e-predictor] url = to_str(parsed_url.url) [e2e-predictor] [e2e-predictor] conn = None [e2e-predictor] [e2e-predictor] # Track whether `conn` needs to be released before [e2e-predictor] # returning/raising/recursing. Update this variable if necessary, and [e2e-predictor] # leave `release_conn` constant throughout the function. That way, if [e2e-predictor] # the function recurses, the original value of `release_conn` will be [e2e-predictor] # passed down into the recursive call, and its value will be respected. [e2e-predictor] # [e2e-predictor] # See issue #651 [1] for details. [e2e-predictor] # [e2e-predictor] # [1] [e2e-predictor] release_this_conn = release_conn [e2e-predictor] [e2e-predictor] http_tunnel_required = connection_requires_http_tunnel( [e2e-predictor] self.proxy, self.proxy_config, destination_scheme [e2e-predictor] ) [e2e-predictor] [e2e-predictor] # Merge the proxy headers. Only done when not using HTTP CONNECT. 
We [e2e-predictor] # have to copy the headers dict so we can safely change it without those [e2e-predictor] # changes being reflected in anyone else's copy. [e2e-predictor] if not http_tunnel_required: [e2e-predictor] headers = headers.copy() # type: ignore[attr-defined] [e2e-predictor] headers.update(self.proxy_headers) # type: ignore[union-attr] [e2e-predictor] [e2e-predictor] # Must keep the exception bound to a separate variable or else Python 3 [e2e-predictor] # complains about UnboundLocalError. [e2e-predictor] err = None [e2e-predictor] [e2e-predictor] # Keep track of whether we cleanly exited the except block. This [e2e-predictor] # ensures we do proper cleanup in finally. [e2e-predictor] clean_exit = False [e2e-predictor] [e2e-predictor] # Rewind body position, if needed. Record current position [e2e-predictor] # for future rewinds in the event of a redirect/retry. [e2e-predictor] body_pos = set_file_position(body, body_pos) [e2e-predictor] [e2e-predictor] try: [e2e-predictor] # Request a connection from the queue. [e2e-predictor] timeout_obj = self._get_timeout(timeout) [e2e-predictor] conn = self._get_conn(timeout=pool_timeout) [e2e-predictor] [e2e-predictor] conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] [e2e-predictor] [e2e-predictor] # Is this a closed/new connection that requires CONNECT tunnelling? [e2e-predictor] if self.proxy is not None and http_tunnel_required and conn.is_closed: [e2e-predictor] try: [e2e-predictor] self._prepare_proxy(conn) [e2e-predictor] except (BaseSSLError, OSError, SocketTimeout) as e: [e2e-predictor] self._raise_timeout( [e2e-predictor] err=e, url=self.proxy.url, timeout_value=conn.timeout [e2e-predictor] ) [e2e-predictor] raise [e2e-predictor] [e2e-predictor] # If we're going to release the connection in ``finally:``, then [e2e-predictor] # the response doesn't need to know about the connection. 
Otherwise [e2e-predictor] # it will also try to release it and we'll have a double-release [e2e-predictor] # mess. [e2e-predictor] response_conn = conn if not release_conn else None [e2e-predictor] [e2e-predictor] # Make the request on the HTTPConnection object [e2e-predictor] response = self._make_request( [e2e-predictor] conn, [e2e-predictor] method, [e2e-predictor] url, [e2e-predictor] timeout=timeout_obj, [e2e-predictor] body=body, [e2e-predictor] headers=headers, [e2e-predictor] chunked=chunked, [e2e-predictor] retries=retries, [e2e-predictor] response_conn=response_conn, [e2e-predictor] preload_content=preload_content, [e2e-predictor] decode_content=decode_content, [e2e-predictor] **response_kw, [e2e-predictor] ) [e2e-predictor] [e2e-predictor] # Everything went great! [e2e-predictor] clean_exit = True [e2e-predictor] [e2e-predictor] except EmptyPoolError: [e2e-predictor] # Didn't get a connection from the pool, no need to clean up [e2e-predictor] clean_exit = True [e2e-predictor] release_this_conn = False [e2e-predictor] raise [e2e-predictor] [e2e-predictor] except ( [e2e-predictor] TimeoutError, [e2e-predictor] HTTPException, [e2e-predictor] OSError, [e2e-predictor] ProtocolError, [e2e-predictor] BaseSSLError, [e2e-predictor] SSLError, [e2e-predictor] CertificateError, [e2e-predictor] ProxyError, [e2e-predictor] ) as e: [e2e-predictor] # Discard the connection for these exceptions. It will be [e2e-predictor] # replaced during the next _get_conn() call. 
[e2e-predictor]             clean_exit = False
[e2e-predictor]             new_e: Exception = e
[e2e-predictor]             if isinstance(e, (BaseSSLError, CertificateError)):
[e2e-predictor]                 new_e = SSLError(e)
[e2e-predictor]             if isinstance(
[e2e-predictor]                 new_e,
[e2e-predictor]                 (
[e2e-predictor]                     OSError,
[e2e-predictor]                     NewConnectionError,
[e2e-predictor]                     TimeoutError,
[e2e-predictor]                     SSLError,
[e2e-predictor]                     HTTPException,
[e2e-predictor]                 ),
[e2e-predictor]             ) and (conn and conn.proxy and not conn.has_connected_to_proxy):
[e2e-predictor]                 new_e = _wrap_proxy_error(new_e, conn.proxy.scheme)
[e2e-predictor]             elif isinstance(new_e, (OSError, HTTPException)):
[e2e-predictor]                 new_e = ProtocolError("Connection aborted.", new_e)
[e2e-predictor] 
[e2e-predictor] >           retries = retries.increment(
[e2e-predictor]                 method, url, error=new_e, _pool=self, _stacktrace=sys.exc_info()[2]
[e2e-predictor]             )
[e2e-predictor] 
[e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:841: 
[e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
[e2e-predictor] 
[e2e-predictor] self = Retry(total=0, connect=None, read=None, redirect=None, status=None)
[e2e-predictor] method = 'GET'
[e2e-predictor] url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices/isvc-paddle-runtime'
[e2e-predictor] response = None
[e2e-predictor] error = NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.c...a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")
[e2e-predictor] _pool = 
[e2e-predictor] _stacktrace = 
[e2e-predictor] 
[e2e-predictor]     def increment(
[e2e-predictor]         self,
[e2e-predictor]         method: str | None = None,
[e2e-predictor]         url: str | None = None,
[e2e-predictor]         response: BaseHTTPResponse | None = None,
[e2e-predictor]         error: Exception | None = None,
[e2e-predictor]         _pool: ConnectionPool | None = None,
[e2e-predictor]         _stacktrace: TracebackType | None = None,
[e2e-predictor]     ) -> Self:
[e2e-predictor]         """Return a new Retry object with incremented retry counters.
[e2e-predictor] 
[e2e-predictor]         :param response: A response object, or None, if the server did not
[e2e-predictor]             return a response.
[e2e-predictor]         :type response: :class:`~urllib3.response.BaseHTTPResponse`
[e2e-predictor]         :param Exception error: An error encountered during the request, or
[e2e-predictor]             None if the response was received successfully.
[e2e-predictor] 
[e2e-predictor]         :return: A new ``Retry`` object.
[e2e-predictor]         """
[e2e-predictor]         if self.total is False and error:
[e2e-predictor]             # Disabled, indicate to re-raise the error.
[e2e-predictor]             raise reraise(type(error), error, _stacktrace)
[e2e-predictor] 
[e2e-predictor]         total = self.total
[e2e-predictor]         if total is not None:
[e2e-predictor]             total -= 1
[e2e-predictor] 
[e2e-predictor]         connect = self.connect
[e2e-predictor]         read = self.read
[e2e-predictor]         redirect = self.redirect
[e2e-predictor]         status_count = self.status
[e2e-predictor]         other = self.other
[e2e-predictor]         cause = "unknown"
[e2e-predictor]         status = None
[e2e-predictor]         redirect_location = None
[e2e-predictor] 
[e2e-predictor]         if error and self._is_connection_error(error):
[e2e-predictor]             # Connect retry?
[e2e-predictor]             if connect is False:
[e2e-predictor]                 raise reraise(type(error), error, _stacktrace)
[e2e-predictor]             elif connect is not None:
[e2e-predictor]                 connect -= 1
[e2e-predictor] 
[e2e-predictor]         elif error and self._is_read_error(error):
[e2e-predictor]             # Read retry?
[e2e-predictor]             if read is False or method is None or not self._is_method_retryable(method):
[e2e-predictor]                 raise reraise(type(error), error, _stacktrace)
[e2e-predictor]             elif read is not None:
[e2e-predictor]                 read -= 1
[e2e-predictor] 
[e2e-predictor]         elif error:
[e2e-predictor]             # Other retry?
[e2e-predictor]             if other is not None:
[e2e-predictor]                 other -= 1
[e2e-predictor] 
[e2e-predictor]         elif response and response.get_redirect_location():
[e2e-predictor]             # Redirect retry?
[e2e-predictor]             if redirect is not None:
[e2e-predictor]                 redirect -= 1
[e2e-predictor]             cause = "too many redirects"
[e2e-predictor]             response_redirect_location = response.get_redirect_location()
[e2e-predictor]             if response_redirect_location:
[e2e-predictor]                 redirect_location = response_redirect_location
[e2e-predictor]             status = response.status
[e2e-predictor] 
[e2e-predictor]         else:
[e2e-predictor]             # Incrementing because of a server error like a 500 in
[e2e-predictor]             # status_forcelist and the given method is in the allowed_methods
[e2e-predictor]             cause = ResponseError.GENERIC_ERROR
[e2e-predictor]             if response and response.status:
[e2e-predictor]                 if status_count is not None:
[e2e-predictor]                     status_count -= 1
[e2e-predictor]                 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
[e2e-predictor]                 status = response.status
[e2e-predictor] 
[e2e-predictor]         history = self.history + (
[e2e-predictor]             RequestHistory(method, url, error, status, redirect_location),
[e2e-predictor]         )
[e2e-predictor] 
[e2e-predictor]         new_retry = self.new(
[e2e-predictor]             total=total,
[e2e-predictor]             connect=connect,
[e2e-predictor]             read=read,
[e2e-predictor]             redirect=redirect,
[e2e-predictor]             status=status_count,
[e2e-predictor]             other=other,
[e2e-predictor]             history=history,
[e2e-predictor]         )
[e2e-predictor] 
[e2e-predictor]         if new_retry.is_exhausted():
[e2e-predictor]             reason = error or ResponseError(cause)
[e2e-predictor] >           raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
[e2e-predictor] E           urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Max retries exceeded with url: /apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices/isvc-paddle-runtime (Caused by NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)"))
[e2e-predictor] 
[e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError
[e2e-predictor] ------------------------------ Captured log call -------------------------------
[e2e-predictor] WARNING urllib3.connectionpool:connectionpool.py:868 Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ConnectionResetError(104, 'Connection reset by peer')': /apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices/isvc-paddle-runtime
[e2e-predictor] WARNING urllib3.connectionpool:connectionpool.py:868 Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ConnectTimeoutError(, 'Connection to a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com timed out. (connect timeout=None)')': /apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices/isvc-paddle-runtime
[e2e-predictor] WARNING urllib3.connectionpool:connectionpool.py:868 Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")': /apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices/isvc-paddle-runtime
[e2e-predictor] ____________________________ test_paddle_v2_kserve _____________________________
[e2e-predictor] [gw0] linux -- Python 3.11.13 /workspace/source/python/kserve/.venv/bin/python
[e2e-predictor] 
[e2e-predictor] self = 
[e2e-predictor] 
[e2e-predictor]     def _new_conn(self) -> socket.socket:
[e2e-predictor]         """Establish a socket connection and set nodelay settings on it.
[e2e-predictor] 
[e2e-predictor]         :return: New socket connection.
[e2e-predictor]         """
[e2e-predictor]         try:
[e2e-predictor] >           sock = connection.create_connection(
[e2e-predictor]                 (self._dns_host, self.port),
[e2e-predictor]                 self.timeout,
[e2e-predictor]                 source_address=self.source_address,
[e2e-predictor]                 socket_options=self.socket_options,
[e2e-predictor]             )
[e2e-predictor]
[e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connection.py:204:
[e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
[e2e-predictor]
[e2e-predictor] address = ('a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', 6443)
[e2e-predictor] timeout = None, source_address = None, socket_options = [(6, 1, 1)]
[e2e-predictor]
[e2e-predictor]     def create_connection(
[e2e-predictor]         address: tuple[str, int],
[e2e-predictor]         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
[e2e-predictor]         source_address: tuple[str, int] | None = None,
[e2e-predictor]         socket_options: _TYPE_SOCKET_OPTIONS | None = None,
[e2e-predictor]     ) -> socket.socket:
[e2e-predictor]         """Connect to *address* and return the socket object.
[e2e-predictor]
[e2e-predictor]         Convenience function. Connect to *address* (a 2-tuple ``(host,
[e2e-predictor]         port)``) and return the socket object. Passing the optional
[e2e-predictor]         *timeout* parameter will set the timeout on the socket instance
[e2e-predictor]         before attempting to connect. If no *timeout* is supplied, the
[e2e-predictor]         global default timeout setting returned by :func:`socket.getdefaulttimeout`
[e2e-predictor]         is used. If *source_address* is set it must be a tuple of (host, port)
[e2e-predictor]         for the socket to bind as a source address before making the connection.
[e2e-predictor]         An host of '' or port 0 tells the OS to use the default.
[e2e-predictor]         """
[e2e-predictor]
[e2e-predictor]         host, port = address
[e2e-predictor]         if host.startswith("["):
[e2e-predictor]             host = host.strip("[]")
[e2e-predictor]         err = None
[e2e-predictor]
[e2e-predictor]         # Using the value from allowed_gai_family() in the context of getaddrinfo lets
[e2e-predictor]         # us select whether to work with IPv4 DNS records, IPv6 records, or both.
[e2e-predictor]         # The original create_connection function always returns all records.
[e2e-predictor]         family = allowed_gai_family()
[e2e-predictor]
[e2e-predictor]         try:
[e2e-predictor]             host.encode("idna")
[e2e-predictor]         except UnicodeError:
[e2e-predictor]             raise LocationParseError(f"'{host}', label empty or too long") from None
[e2e-predictor]
[e2e-predictor] >       for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
[e2e-predictor]
[e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/util/connection.py:60:
[e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
[e2e-predictor]
[e2e-predictor] host = 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com'
[e2e-predictor] port = 6443, family = 
[e2e-predictor] type = , proto = 0, flags = 0
[e2e-predictor]
[e2e-predictor]     def getaddrinfo(host, port, family=0, type=0, proto=0, flags=0):
[e2e-predictor]         """Resolve host and port into list of address info entries.
[e2e-predictor]
[e2e-predictor]         Translate the host/port argument into a sequence of 5-tuples that contain
[e2e-predictor]         all the necessary arguments for creating a socket connected to that service.
[e2e-predictor]         host is a domain name, a string representation of an IPv4/v6 address or
[e2e-predictor]         None. port is a string service name such as 'http', a numeric port number or
[e2e-predictor]         None. By passing None as the value of host and port, you can pass NULL to
[e2e-predictor]         the underlying C API.
[e2e-predictor]
[e2e-predictor]         The family, type and proto arguments can be optionally specified in order to
[e2e-predictor]         narrow the list of addresses returned. Passing zero as a value for each of
[e2e-predictor]         these arguments selects the full range of results.
[e2e-predictor]         """
[e2e-predictor]         # We override this function since we want to translate the numeric family
[e2e-predictor]         # and socket type values to enum constants.
[e2e-predictor]         addrlist = []
[e2e-predictor] >       for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
[e2e-predictor] E       socket.gaierror: [Errno -2] Name or service not known
[e2e-predictor]
[e2e-predictor] /usr/lib64/python3.11/socket.py:974: gaierror
[e2e-predictor]
[e2e-predictor] The above exception was the direct cause of the following exception:
[e2e-predictor]
[e2e-predictor] self = 
[e2e-predictor] method = 'POST'
[e2e-predictor] url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
[e2e-predictor] body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io...memory": "256Mi"}}, "runtime": "kserve-paddleserver", "storageUri": "gs://kfserving-examples/models/paddle/resnet"}}}}'
[e2e-predictor] headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'})
[e2e-predictor] retries = Retry(total=0, connect=None, read=None, redirect=None, status=None)
[e2e-predictor] redirect = False, assert_same_host = False, timeout = None, pool_timeout = None
[e2e-predictor] release_conn = True, chunked = False, body_pos = None, preload_content = True
[e2e-predictor] decode_content = True, response_kw = {}
[e2e-predictor] parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices', query=None, fragment=None)
[e2e-predictor] destination_scheme = None, conn = None,
[e2e-predictor] release_this_conn = True
[e2e-predictor] http_tunnel_required = False, err = None, clean_exit = False
[e2e-predictor]
[e2e-predictor]     def urlopen(  # type: ignore[override]
[e2e-predictor]         self,
[e2e-predictor]         method: str,
[e2e-predictor]         url: str,
[e2e-predictor]         body: _TYPE_BODY | None = None,
[e2e-predictor]         headers: typing.Mapping[str, str] | None = None,
[e2e-predictor]         retries: Retry | bool | int | None = None,
[e2e-predictor]         redirect: bool = True,
[e2e-predictor]         assert_same_host: bool = True,
[e2e-predictor]         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
[e2e-predictor]         pool_timeout: int | None = None,
[e2e-predictor]         release_conn: bool | None = None,
[e2e-predictor]         chunked: bool = False,
[e2e-predictor]         body_pos: _TYPE_BODY_POSITION | None = None,
[e2e-predictor]         preload_content: bool = True,
[e2e-predictor]         decode_content: bool = True,
[e2e-predictor]         **response_kw: typing.Any,
[e2e-predictor]     ) -> BaseHTTPResponse:
[e2e-predictor]         """
[e2e-predictor]         Get a connection from the pool and perform an HTTP request. This is the
[e2e-predictor]         lowest level call for making a request, so you'll need to specify all
[e2e-predictor]         the raw details.
[e2e-predictor]
[e2e-predictor]         .. note::
[e2e-predictor]
[e2e-predictor]            More commonly, it's appropriate to use a convenience method
[e2e-predictor]            such as :meth:`request`.
[e2e-predictor]
[e2e-predictor]         .. note::
[e2e-predictor]
[e2e-predictor]            `release_conn` will only behave as expected if
[e2e-predictor]            `preload_content=False` because we want to make
[e2e-predictor]            `preload_content=False` the default behaviour someday soon without
[e2e-predictor]            breaking backwards compatibility.
[e2e-predictor]
[e2e-predictor]         :param method:
[e2e-predictor]             HTTP request method (such as GET, POST, PUT, etc.)
[e2e-predictor]
[e2e-predictor]         :param url:
[e2e-predictor]             The URL to perform the request on.
[e2e-predictor]
[e2e-predictor]         :param body:
[e2e-predictor]             Data to send in the request body, either :class:`str`, :class:`bytes`,
[e2e-predictor]             an iterable of :class:`str`/:class:`bytes`, or a file-like object.
[e2e-predictor]
[e2e-predictor]         :param headers:
[e2e-predictor]             Dictionary of custom headers to send, such as User-Agent,
[e2e-predictor]             If-None-Match, etc. If None, pool headers are used. If provided,
[e2e-predictor]             these headers completely replace any pool-specific headers.
[e2e-predictor]
[e2e-predictor]         :param retries:
[e2e-predictor]             Configure the number of retries to allow before raising a
[e2e-predictor]             :class:`~urllib3.exceptions.MaxRetryError` exception.
[e2e-predictor]
[e2e-predictor]             If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a
[e2e-predictor]             :class:`~urllib3.util.retry.Retry` object for fine-grained control
[e2e-predictor]             over different types of retries.
[e2e-predictor]             Pass an integer number to retry connection errors that many times,
[e2e-predictor]             but no other types of errors. Pass zero to never retry.
[e2e-predictor]
[e2e-predictor]             If ``False``, then retries are disabled and any exception is raised
[e2e-predictor]             immediately. Also, instead of raising a MaxRetryError on redirects,
[e2e-predictor]             the redirect response will be returned.
[e2e-predictor]
[e2e-predictor]         :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
[e2e-predictor]
[e2e-predictor]         :param redirect:
[e2e-predictor]             If True, automatically handle redirects (status codes 301, 302,
[e2e-predictor]             303, 307, 308). Each redirect counts as a retry. Disabling retries
[e2e-predictor]             will disable redirect, too.
[e2e-predictor]
[e2e-predictor]         :param assert_same_host:
[e2e-predictor]             If ``True``, will make sure that the host of the pool requests is
[e2e-predictor]             consistent else will raise HostChangedError. When ``False``, you can
[e2e-predictor]             use the pool on an HTTP proxy and request foreign hosts.
[e2e-predictor]
[e2e-predictor]         :param timeout:
[e2e-predictor]             If specified, overrides the default timeout for this one
[e2e-predictor]             request. It may be a float (in seconds) or an instance of
[e2e-predictor]             :class:`urllib3.util.Timeout`.
[e2e-predictor]
[e2e-predictor]         :param pool_timeout:
[e2e-predictor]             If set and the pool is set to block=True, then this method will
[e2e-predictor]             block for ``pool_timeout`` seconds and raise EmptyPoolError if no
[e2e-predictor]             connection is available within the time period.
[e2e-predictor]
[e2e-predictor]         :param bool preload_content:
[e2e-predictor]             If True, the response's body will be preloaded into memory.
[e2e-predictor]
[e2e-predictor]         :param bool decode_content:
[e2e-predictor]             If True, will attempt to decode the body based on the
[e2e-predictor]             'content-encoding' header.
[e2e-predictor]
[e2e-predictor]         :param release_conn:
[e2e-predictor]             If False, then the urlopen call will not release the connection
[e2e-predictor]             back into the pool once a response is received (but will release if
[e2e-predictor]             you read the entire contents of the response such as when
[e2e-predictor]             `preload_content=True`). This is useful if you're not preloading
[e2e-predictor]             the response's content immediately. You will need to call
[e2e-predictor]             ``r.release_conn()`` on the response ``r`` to return the connection
[e2e-predictor]             back into the pool. If None, it takes the value of ``preload_content``
[e2e-predictor]             which defaults to ``True``.
[e2e-predictor]
[e2e-predictor]         :param bool chunked:
[e2e-predictor]             If True, urllib3 will send the body using chunked transfer
[e2e-predictor]             encoding. Otherwise, urllib3 will send the body using the standard
[e2e-predictor]             content-length form. Defaults to False.
[e2e-predictor]
[e2e-predictor]         :param int body_pos:
[e2e-predictor]             Position to seek to in file-like body in the event of a retry or
[e2e-predictor]             redirect.
[e2e-predictor]             Typically this won't need to be set because urllib3 will
[e2e-predictor]             auto-populate the value when needed.
[e2e-predictor]         """
[e2e-predictor]         parsed_url = parse_url(url)
[e2e-predictor]         destination_scheme = parsed_url.scheme
[e2e-predictor]
[e2e-predictor]         if headers is None:
[e2e-predictor]             headers = self.headers
[e2e-predictor]
[e2e-predictor]         if not isinstance(retries, Retry):
[e2e-predictor]             retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
[e2e-predictor]
[e2e-predictor]         if release_conn is None:
[e2e-predictor]             release_conn = preload_content
[e2e-predictor]
[e2e-predictor]         # Check host
[e2e-predictor]         if assert_same_host and not self.is_same_host(url):
[e2e-predictor]             raise HostChangedError(self, url, retries)
[e2e-predictor]
[e2e-predictor]         # Ensure that the URL we're connecting to is properly encoded
[e2e-predictor]         if url.startswith("/"):
[e2e-predictor]             url = to_str(_encode_target(url))
[e2e-predictor]         else:
[e2e-predictor]             url = to_str(parsed_url.url)
[e2e-predictor]
[e2e-predictor]         conn = None
[e2e-predictor]
[e2e-predictor]         # Track whether `conn` needs to be released before
[e2e-predictor]         # returning/raising/recursing. Update this variable if necessary, and
[e2e-predictor]         # leave `release_conn` constant throughout the function. That way, if
[e2e-predictor]         # the function recurses, the original value of `release_conn` will be
[e2e-predictor]         # passed down into the recursive call, and its value will be respected.
[e2e-predictor]         #
[e2e-predictor]         # See issue #651 [1] for details.
[e2e-predictor]         #
[e2e-predictor]         # [1] 
[e2e-predictor]         release_this_conn = release_conn
[e2e-predictor]
[e2e-predictor]         http_tunnel_required = connection_requires_http_tunnel(
[e2e-predictor]             self.proxy, self.proxy_config, destination_scheme
[e2e-predictor]         )
[e2e-predictor]
[e2e-predictor]         # Merge the proxy headers. Only done when not using HTTP CONNECT. We
[e2e-predictor]         # have to copy the headers dict so we can safely change it without those
[e2e-predictor]         # changes being reflected in anyone else's copy.
[e2e-predictor]         if not http_tunnel_required:
[e2e-predictor]             headers = headers.copy()  # type: ignore[attr-defined]
[e2e-predictor]             headers.update(self.proxy_headers)  # type: ignore[union-attr]
[e2e-predictor]
[e2e-predictor]         # Must keep the exception bound to a separate variable or else Python 3
[e2e-predictor]         # complains about UnboundLocalError.
[e2e-predictor]         err = None
[e2e-predictor]
[e2e-predictor]         # Keep track of whether we cleanly exited the except block. This
[e2e-predictor]         # ensures we do proper cleanup in finally.
[e2e-predictor]         clean_exit = False
[e2e-predictor]
[e2e-predictor]         # Rewind body position, if needed. Record current position
[e2e-predictor]         # for future rewinds in the event of a redirect/retry.
[e2e-predictor]         body_pos = set_file_position(body, body_pos)
[e2e-predictor]
[e2e-predictor]         try:
[e2e-predictor]             # Request a connection from the queue.
[e2e-predictor]             timeout_obj = self._get_timeout(timeout)
[e2e-predictor]             conn = self._get_conn(timeout=pool_timeout)
[e2e-predictor]
[e2e-predictor]             conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]
[e2e-predictor]
[e2e-predictor]             # Is this a closed/new connection that requires CONNECT tunnelling?
[e2e-predictor]             if self.proxy is not None and http_tunnel_required and conn.is_closed:
[e2e-predictor]                 try:
[e2e-predictor]                     self._prepare_proxy(conn)
[e2e-predictor]                 except (BaseSSLError, OSError, SocketTimeout) as e:
[e2e-predictor]                     self._raise_timeout(
[e2e-predictor]                         err=e, url=self.proxy.url, timeout_value=conn.timeout
[e2e-predictor]                     )
[e2e-predictor]                     raise
[e2e-predictor]
[e2e-predictor]             # If we're going to release the connection in ``finally:``, then
[e2e-predictor]             # the response doesn't need to know about the connection. Otherwise
[e2e-predictor]             # it will also try to release it and we'll have a double-release
[e2e-predictor]             # mess.
[e2e-predictor]             response_conn = conn if not release_conn else None
[e2e-predictor]
[e2e-predictor]             # Make the request on the HTTPConnection object
[e2e-predictor] >           response = self._make_request(
[e2e-predictor]                 conn,
[e2e-predictor]                 method,
[e2e-predictor]                 url,
[e2e-predictor]                 timeout=timeout_obj,
[e2e-predictor]                 body=body,
[e2e-predictor]                 headers=headers,
[e2e-predictor]                 chunked=chunked,
[e2e-predictor]                 retries=retries,
[e2e-predictor]                 response_conn=response_conn,
[e2e-predictor]                 preload_content=preload_content,
[e2e-predictor]                 decode_content=decode_content,
[e2e-predictor]                 **response_kw,
[e2e-predictor]             )
[e2e-predictor]
[e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:787:
[e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
[e2e-predictor]
[e2e-predictor] self = 
[e2e-predictor] conn = 
[e2e-predictor] method = 'POST'
[e2e-predictor] url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
[e2e-predictor] body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io...memory": "256Mi"}}, "runtime": "kserve-paddleserver", "storageUri": "gs://kfserving-examples/models/paddle/resnet"}}}}'
[e2e-predictor] headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'})
[e2e-predictor] retries = Retry(total=0, connect=None, read=None, redirect=None, status=None)
[e2e-predictor] timeout = Timeout(connect=None, read=None, total=None), chunked = False
[e2e-predictor] response_conn = None, preload_content = True, decode_content = True
[e2e-predictor] enforce_content_length = True
[e2e-predictor]
[e2e-predictor]     def _make_request(
[e2e-predictor]         self,
[e2e-predictor]         conn: BaseHTTPConnection,
[e2e-predictor]         method: str,
[e2e-predictor]         url: str,
[e2e-predictor]         body: _TYPE_BODY | None = None,
[e2e-predictor]         headers: typing.Mapping[str, str] | None = None,
[e2e-predictor]         retries: Retry | None = None,
[e2e-predictor]         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
[e2e-predictor]         chunked: bool = False,
[e2e-predictor]         response_conn: BaseHTTPConnection | None = None,
[e2e-predictor]         preload_content: bool = True,
[e2e-predictor]         decode_content: bool = True,
[e2e-predictor]         enforce_content_length: bool = True,
[e2e-predictor]     ) -> BaseHTTPResponse:
[e2e-predictor]         """
[e2e-predictor]         Perform a request on a given urllib connection object taken from our
[e2e-predictor]         pool.
[e2e-predictor]
[e2e-predictor]         :param conn:
[e2e-predictor]             a connection from one of our connection pools
[e2e-predictor]
[e2e-predictor]         :param method:
[e2e-predictor]             HTTP request method (such as GET, POST, PUT, etc.)
[e2e-predictor]
[e2e-predictor]         :param url:
[e2e-predictor]             The URL to perform the request on.
[e2e-predictor]
[e2e-predictor]         :param body:
[e2e-predictor]             Data to send in the request body, either :class:`str`, :class:`bytes`,
[e2e-predictor]             an iterable of :class:`str`/:class:`bytes`, or a file-like object.
[e2e-predictor]
[e2e-predictor]         :param headers:
[e2e-predictor]             Dictionary of custom headers to send, such as User-Agent,
[e2e-predictor]             If-None-Match, etc. If None, pool headers are used. If provided,
[e2e-predictor]             these headers completely replace any pool-specific headers.
[e2e-predictor]
[e2e-predictor]         :param retries:
[e2e-predictor]             Configure the number of retries to allow before raising a
[e2e-predictor]             :class:`~urllib3.exceptions.MaxRetryError` exception.
[e2e-predictor]
[e2e-predictor]             Pass ``None`` to retry until you receive a response. Pass a
[e2e-predictor]             :class:`~urllib3.util.retry.Retry` object for fine-grained control
[e2e-predictor]             over different types of retries.
[e2e-predictor]             Pass an integer number to retry connection errors that many times,
[e2e-predictor]             but no other types of errors. Pass zero to never retry.
[e2e-predictor]
[e2e-predictor]             If ``False``, then retries are disabled and any exception is raised
[e2e-predictor]             immediately. Also, instead of raising a MaxRetryError on redirects,
[e2e-predictor]             the redirect response will be returned.
[e2e-predictor]
[e2e-predictor]         :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
[e2e-predictor]
[e2e-predictor]         :param timeout:
[e2e-predictor]             If specified, overrides the default timeout for this one
[e2e-predictor]             request. It may be a float (in seconds) or an instance of
[e2e-predictor]             :class:`urllib3.util.Timeout`.
[e2e-predictor]
[e2e-predictor]         :param chunked:
[e2e-predictor]             If True, urllib3 will send the body using chunked transfer
[e2e-predictor]             encoding. Otherwise, urllib3 will send the body using the standard
[e2e-predictor]             content-length form. Defaults to False.
[e2e-predictor]
[e2e-predictor]         :param response_conn:
[e2e-predictor]             Set this to ``None`` if you will handle releasing the connection or
[e2e-predictor]             set the connection to have the response release it.
[e2e-predictor]
[e2e-predictor]         :param preload_content:
[e2e-predictor]             If True, the response's body will be preloaded during construction.
[e2e-predictor]
[e2e-predictor]         :param decode_content:
[e2e-predictor]             If True, will attempt to decode the body based on the
[e2e-predictor]             'content-encoding' header.
[e2e-predictor]
[e2e-predictor]         :param enforce_content_length:
[e2e-predictor]             Enforce content length checking. Body returned by server must match
[e2e-predictor]             value of Content-Length header, if present. Otherwise, raise error.
[e2e-predictor]         """
[e2e-predictor]         self.num_requests += 1
[e2e-predictor]
[e2e-predictor]         timeout_obj = self._get_timeout(timeout)
[e2e-predictor]         timeout_obj.start_connect()
[e2e-predictor]         conn.timeout = Timeout.resolve_default_timeout(timeout_obj.connect_timeout)
[e2e-predictor]
[e2e-predictor]         try:
[e2e-predictor]             # Trigger any extra validation we need to do.
[e2e-predictor]             try:
[e2e-predictor]                 self._validate_conn(conn)
[e2e-predictor]             except (SocketTimeout, BaseSSLError) as e:
[e2e-predictor]                 self._raise_timeout(err=e, url=url, timeout_value=conn.timeout)
[e2e-predictor]                 raise
[e2e-predictor]
[e2e-predictor]         # _validate_conn() starts the connection to an HTTPS proxy
[e2e-predictor]         # so we need to wrap errors with 'ProxyError' here too.
[e2e-predictor]         except (
[e2e-predictor]             OSError,
[e2e-predictor]             NewConnectionError,
[e2e-predictor]             TimeoutError,
[e2e-predictor]             BaseSSLError,
[e2e-predictor]             CertificateError,
[e2e-predictor]             SSLError,
[e2e-predictor]         ) as e:
[e2e-predictor]             new_e: Exception = e
[e2e-predictor]             if isinstance(e, (BaseSSLError, CertificateError)):
[e2e-predictor]                 new_e = SSLError(e)
[e2e-predictor]             # If the connection didn't successfully connect to it's proxy
[e2e-predictor]             # then there
[e2e-predictor]             if isinstance(
[e2e-predictor]                 new_e, (OSError, NewConnectionError, TimeoutError, SSLError)
[e2e-predictor]             ) and (conn and conn.proxy and not conn.has_connected_to_proxy):
[e2e-predictor]                 new_e = _wrap_proxy_error(new_e, conn.proxy.scheme)
[e2e-predictor] >           raise new_e
[e2e-predictor]
[e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:488:
[e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
[e2e-predictor]
[e2e-predictor] self = 
[e2e-predictor] conn = 
[e2e-predictor] method = 'POST'
[e2e-predictor] url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
[e2e-predictor] body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io...memory": "256Mi"}}, "runtime": "kserve-paddleserver", "storageUri": "gs://kfserving-examples/models/paddle/resnet"}}}}'
[e2e-predictor] headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'})
[e2e-predictor] retries = Retry(total=0, connect=None, read=None, redirect=None, status=None)
[e2e-predictor] timeout = Timeout(connect=None, read=None, total=None), chunked = False
[e2e-predictor] response_conn = None, preload_content = True, decode_content = True
[e2e-predictor] enforce_content_length = True
[e2e-predictor]
[e2e-predictor]     def _make_request(
[e2e-predictor]         self,
[e2e-predictor]         conn: BaseHTTPConnection,
[e2e-predictor]         method: str,
[e2e-predictor]         url: str,
[e2e-predictor]         body: _TYPE_BODY | None = None,
[e2e-predictor]         headers: typing.Mapping[str, str] | None = None,
[e2e-predictor]         retries: Retry | None = None,
[e2e-predictor]         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
[e2e-predictor]         chunked: bool = False,
[e2e-predictor]         response_conn: BaseHTTPConnection | None = None,
[e2e-predictor]         preload_content: bool = True,
[e2e-predictor]         decode_content: bool = True,
[e2e-predictor]         enforce_content_length: bool = True,
[e2e-predictor]     ) -> BaseHTTPResponse:
[e2e-predictor]         """
[e2e-predictor]         Perform a request on a given urllib connection object taken from our
[e2e-predictor]         pool.
[e2e-predictor]
[e2e-predictor]         :param conn:
[e2e-predictor]             a connection from one of our connection pools
[e2e-predictor]
[e2e-predictor]         :param method:
[e2e-predictor]             HTTP request method (such as GET, POST, PUT, etc.)
[e2e-predictor]
[e2e-predictor]         :param url:
[e2e-predictor]             The URL to perform the request on.
[e2e-predictor]
[e2e-predictor]         :param body:
[e2e-predictor]             Data to send in the request body, either :class:`str`, :class:`bytes`,
[e2e-predictor]             an iterable of :class:`str`/:class:`bytes`, or a file-like object.
[e2e-predictor]
[e2e-predictor]         :param headers:
[e2e-predictor]             Dictionary of custom headers to send, such as User-Agent,
[e2e-predictor]             If-None-Match, etc. If None, pool headers are used. If provided,
[e2e-predictor]             these headers completely replace any pool-specific headers.
[e2e-predictor]
[e2e-predictor]         :param retries:
[e2e-predictor]             Configure the number of retries to allow before raising a
[e2e-predictor]             :class:`~urllib3.exceptions.MaxRetryError` exception.
[e2e-predictor]
[e2e-predictor]             Pass ``None`` to retry until you receive a response. Pass a
[e2e-predictor]             :class:`~urllib3.util.retry.Retry` object for fine-grained control
[e2e-predictor]             over different types of retries.
[e2e-predictor]             Pass an integer number to retry connection errors that many times,
[e2e-predictor]             but no other types of errors. Pass zero to never retry.
[e2e-predictor]
[e2e-predictor]             If ``False``, then retries are disabled and any exception is raised
[e2e-predictor]             immediately. Also, instead of raising a MaxRetryError on redirects,
[e2e-predictor]             the redirect response will be returned.
[e2e-predictor]
[e2e-predictor]         :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
[e2e-predictor]
[e2e-predictor]         :param timeout:
[e2e-predictor]             If specified, overrides the default timeout for this one
[e2e-predictor]             request. It may be a float (in seconds) or an instance of
[e2e-predictor]             :class:`urllib3.util.Timeout`.
[e2e-predictor]
[e2e-predictor]         :param chunked:
[e2e-predictor]             If True, urllib3 will send the body using chunked transfer
[e2e-predictor]             encoding. Otherwise, urllib3 will send the body using the standard
[e2e-predictor]             content-length form. Defaults to False.
[e2e-predictor]
[e2e-predictor]         :param response_conn:
[e2e-predictor]             Set this to ``None`` if you will handle releasing the connection or
[e2e-predictor]             set the connection to have the response release it.
[e2e-predictor]
[e2e-predictor]         :param preload_content:
[e2e-predictor]             If True, the response's body will be preloaded during construction.
[e2e-predictor]
[e2e-predictor]         :param decode_content:
[e2e-predictor]             If True, will attempt to decode the body based on the
[e2e-predictor]             'content-encoding' header.
[e2e-predictor]
[e2e-predictor]         :param enforce_content_length:
[e2e-predictor]             Enforce content length checking. Body returned by server must match
[e2e-predictor]             value of Content-Length header, if present. Otherwise, raise error.
[e2e-predictor]         """
[e2e-predictor]         self.num_requests += 1
[e2e-predictor]
[e2e-predictor]         timeout_obj = self._get_timeout(timeout)
[e2e-predictor]         timeout_obj.start_connect()
[e2e-predictor]         conn.timeout = Timeout.resolve_default_timeout(timeout_obj.connect_timeout)
[e2e-predictor]
[e2e-predictor]         try:
[e2e-predictor]             # Trigger any extra validation we need to do.
[e2e-predictor]             try:
[e2e-predictor] >               self._validate_conn(conn)
[e2e-predictor]
[e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:464:
[e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
[e2e-predictor]
[e2e-predictor] self = 
[e2e-predictor] conn = 
[e2e-predictor]
[e2e-predictor]     def _validate_conn(self, conn: BaseHTTPConnection) -> None:
[e2e-predictor]         """
[e2e-predictor]         Called right before a request is made, after the socket is created.
[e2e-predictor]         """
[e2e-predictor]         super()._validate_conn(conn)
[e2e-predictor]
[e2e-predictor]         # Force connect early to allow us to validate the connection.
[e2e-predictor]         if conn.is_closed:
[e2e-predictor] >           conn.connect()
[e2e-predictor]
[e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:1093:
[e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
[e2e-predictor]
[e2e-predictor] self = 
[e2e-predictor]
[e2e-predictor]     def connect(self) -> None:
[e2e-predictor]         # Today we don't need to be doing this step before the /actual/ socket
[e2e-predictor]         # connection, however in the future we'll need to decide whether to
[e2e-predictor]         # create a new socket or re-use an existing "shared" socket as a part
[e2e-predictor]         # of the HTTP/2 handshake dance.
[e2e-predictor]         if self._tunnel_host is not None and self._tunnel_port is not None:
[e2e-predictor]             probe_http2_host = self._tunnel_host
[e2e-predictor]             probe_http2_port = self._tunnel_port
[e2e-predictor]         else:
[e2e-predictor]             probe_http2_host = self.host
[e2e-predictor]             probe_http2_port = self.port
[e2e-predictor]
[e2e-predictor]         # Check if the target origin supports HTTP/2.
[e2e-predictor]         # If the value comes back as 'None' it means that the current thread
[e2e-predictor]         # is probing for HTTP/2 support. Otherwise, we're waiting for another
[e2e-predictor]         # probe to complete, or we get a value right away.
[e2e-predictor]         target_supports_http2: bool | None
[e2e-predictor]         if "h2" in ssl_.ALPN_PROTOCOLS:
[e2e-predictor]             target_supports_http2 = http2_probe.acquire_and_get(
[e2e-predictor]                 host=probe_http2_host, port=probe_http2_port
[e2e-predictor]             )
[e2e-predictor]         else:
[e2e-predictor]             # If HTTP/2 isn't going to be offered it doesn't matter if
[e2e-predictor]             # the target supports HTTP/2. Don't want to make a probe.
[e2e-predictor]             target_supports_http2 = False
[e2e-predictor]
[e2e-predictor]         if self._connect_callback is not None:
[e2e-predictor]             self._connect_callback(
[e2e-predictor]                 "before connect",
[e2e-predictor]                 thread_id=threading.get_ident(),
[e2e-predictor]                 target_supports_http2=target_supports_http2,
[e2e-predictor]             )
[e2e-predictor]
[e2e-predictor]         try:
[e2e-predictor]             sock: socket.socket | ssl.SSLSocket
[e2e-predictor] >           self.sock = sock = self._new_conn()
[e2e-predictor]
[e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connection.py:759:
[e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
[e2e-predictor]
[e2e-predictor] self = 
[e2e-predictor]
[e2e-predictor]     def _new_conn(self) -> socket.socket:
[e2e-predictor]         """Establish a socket connection and set nodelay settings on it.
[e2e-predictor]
[e2e-predictor]         :return: New socket connection.
[e2e-predictor]         """
[e2e-predictor]         try:
[e2e-predictor]             sock = connection.create_connection(
[e2e-predictor]                 (self._dns_host, self.port),
[e2e-predictor]                 self.timeout,
[e2e-predictor]                 source_address=self.source_address,
[e2e-predictor]                 socket_options=self.socket_options,
[e2e-predictor]             )
[e2e-predictor]         except socket.gaierror as e:
[e2e-predictor] >           raise NameResolutionError(self.host, self, e) from e
[e2e-predictor] E           urllib3.exceptions.NameResolutionError: HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)
[e2e-predictor]
[e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connection.py:211: NameResolutionError
[e2e-predictor]
[e2e-predictor] The above exception was the direct cause of the following exception:
[e2e-predictor]
[e2e-predictor] rest_v2_client = 
[e2e-predictor]
[e2e-predictor]     @pytest.mark.predictor
[e2e-predictor]     @pytest.mark.asyncio(scope="session")
[e2e-predictor]     async def test_paddle_v2_kserve(rest_v2_client):
[e2e-predictor]         predictor = V1beta1PredictorSpec(
[e2e-predictor]             min_replicas=1,
[e2e-predictor]             model=V1beta1ModelSpec(
[e2e-predictor]                 model_format=V1beta1ModelFormat(
[e2e-predictor]                     name="paddle",
[e2e-predictor]                 ),
[e2e-predictor]                 runtime="kserve-paddleserver",
[e2e-predictor]                 storage_uri="gs://kfserving-examples/models/paddle/resnet",
[e2e-predictor]                 resources=V1ResourceRequirements(
[e2e-predictor]                     requests={"cpu": "200m", "memory": "256Mi"},
[e2e-predictor]                     limits={"cpu": "200m", "memory": "1Gi"},
[e2e-predictor]                 ),
[e2e-predictor]             ),
[e2e-predictor]         )
[e2e-predictor] 
[e2e-predictor]         service_name = "isvc-paddle-v2-kserve"
[e2e-predictor]         isvc = V1beta1InferenceService(
[e2e-predictor]             api_version=constants.KSERVE_V1BETA1,
[e2e-predictor]             kind=constants.KSERVE_KIND_INFERENCESERVICE,
[e2e-predictor]             metadata=V1ObjectMeta(
[e2e-predictor]                 name=service_name,
[e2e-predictor]                 namespace=KSERVE_TEST_NAMESPACE,
[e2e-predictor]                 labels={
[e2e-predictor]                     constants.KSERVE_LABEL_NETWORKING_VISIBILITY: constants.KSERVE_LABEL_NETWORKING_VISIBILITY_EXPOSED,
[e2e-predictor]                 },
[e2e-predictor]             ),
[e2e-predictor]             spec=V1beta1InferenceServiceSpec(predictor=predictor),
[e2e-predictor]         )
[e2e-predictor] 
[e2e-predictor]         kserve_client = KServeClient(
[e2e-predictor]             config_file=os.environ.get("KUBECONFIG", "~/.kube/config")
[e2e-predictor]         )
[e2e-predictor] >       kserve_client.create(isvc)
[e2e-predictor] 
[e2e-predictor] predictor/test_paddle.py:177: 
[e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
[e2e-predictor] 
[e2e-predictor] self = 
[e2e-predictor] inferenceservice = {'api_version': 'serving.kserve.io/v1beta1',
[e2e-predictor]  'kind': 'InferenceService',
[e2e-predictor]  'metadata': {'annotations': None,
[e2e-predictor] ...
[e2e-predictor]  'worker_spec': None,
[e2e-predictor]  'xgboost': None},
[e2e-predictor]  'transformer': None},
[e2e-predictor]  'status': None}
[e2e-predictor] namespace = 'kserve-ci-e2e-test', watch = False, timeout_seconds = 600
[e2e-predictor] 
[e2e-predictor]     def create(
[e2e-predictor]         self, inferenceservice, namespace=None, watch=False, timeout_seconds=600
[e2e-predictor]     ):  # pylint:disable=inconsistent-return-statements
[e2e-predictor]         """
[e2e-predictor]         Create the inference service
[e2e-predictor]         :param inferenceservice: inference service object
[e2e-predictor]         :param namespace: defaults to current or default namespace
[e2e-predictor]         :param watch: True to watch the created service until timeout elapsed or status is ready
[e2e-predictor]         :param timeout_seconds: timeout seconds for watch, default to 600s
[e2e-predictor]         :return: created inference service
[e2e-predictor]         """
[e2e-predictor] 
[e2e-predictor]         version = inferenceservice.api_version.split("/")[1]
[e2e-predictor] 
[e2e-predictor]         if namespace is None:
[e2e-predictor]             namespace = utils.get_isvc_namespace(inferenceservice)
[e2e-predictor] 
[e2e-predictor]         try:
[e2e-predictor] >           outputs = self.api_instance.create_namespaced_custom_object(
[e2e-predictor]                 constants.KSERVE_GROUP,
[e2e-predictor]                 version,
[e2e-predictor]                 namespace,
[e2e-predictor]                 constants.KSERVE_PLURAL_INFERENCESERVICE,
[e2e-predictor]                 inferenceservice,
[e2e-predictor]             )
[e2e-predictor] 
[e2e-predictor] ../../python/kserve/kserve/api/kserve_client.py:145: 
[e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
[e2e-predictor] 
[e2e-predictor] self = 
[e2e-predictor] group = 'serving.kserve.io', version = 'v1beta1'
[e2e-predictor] namespace = 'kserve-ci-e2e-test', plural = 'inferenceservices'
[e2e-predictor] body = {'api_version': 'serving.kserve.io/v1beta1',
[e2e-predictor]  'kind': 'InferenceService',
[e2e-predictor]  'metadata': {'annotations': None,
[e2e-predictor] ...
[e2e-predictor]  'worker_spec': None,
[e2e-predictor]  'xgboost': None},
[e2e-predictor]  'transformer': None},
[e2e-predictor]  'status': None}
[e2e-predictor] kwargs = {'_return_http_data_only': True}
[e2e-predictor] 
[e2e-predictor]     def create_namespaced_custom_object(self, group, version, namespace, plural, body, **kwargs):  # noqa: E501
[e2e-predictor]         """create_namespaced_custom_object  # noqa: E501
[e2e-predictor] 
[e2e-predictor]         Creates a namespace scoped Custom object  # noqa: E501
[e2e-predictor]         This method makes a synchronous HTTP request by default. To make an
[e2e-predictor]         asynchronous HTTP request, please pass async_req=True
[e2e-predictor]         >>> thread = api.create_namespaced_custom_object(group, version, namespace, plural, body, async_req=True)
[e2e-predictor]         >>> result = thread.get()
[e2e-predictor] 
[e2e-predictor]         :param async_req bool: execute request asynchronously
[e2e-predictor]         :param str group: The custom resource's group name (required)
[e2e-predictor]         :param str version: The custom resource's version (required)
[e2e-predictor]         :param str namespace: The custom resource's namespace (required)
[e2e-predictor]         :param str plural: The custom resource's plural name. For TPRs this would be lowercase plural kind. (required)
[e2e-predictor]         :param object body: The JSON schema of the Resource to create. (required)
[e2e-predictor]         :param str pretty: If 'true', then the output is pretty printed.
[e2e-predictor]         :param str dry_run: When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed
[e2e-predictor]         :param str field_manager: fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint.
[e2e-predictor]         :param str field_validation: fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. (optional)
[e2e-predictor]         :param _preload_content: if False, the urllib3.HTTPResponse object will
[e2e-predictor]                                  be returned without reading/decoding response
[e2e-predictor]                                  data. Default is True.
[e2e-predictor]         :param _request_timeout: timeout setting for this request. If one
[e2e-predictor]                                  number provided, it will be total request
[e2e-predictor]                                  timeout. It can also be a pair (tuple) of
[e2e-predictor]                                  (connection, read) timeouts.
[e2e-predictor]         :return: object
[e2e-predictor]                  If the method is called asynchronously,
[e2e-predictor]                  returns the request thread.
[e2e-predictor]         """
[e2e-predictor]         kwargs['_return_http_data_only'] = True
[e2e-predictor] >       return self.create_namespaced_custom_object_with_http_info(group, version, namespace, plural, body, **kwargs)  # noqa: E501
[e2e-predictor] 
[e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api/custom_objects_api.py:231: 
[e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
[e2e-predictor] 
[e2e-predictor] self = 
[e2e-predictor] group = 'serving.kserve.io', version = 'v1beta1'
[e2e-predictor] namespace = 'kserve-ci-e2e-test', plural = 'inferenceservices'
[e2e-predictor] body = {'api_version': 'serving.kserve.io/v1beta1',
[e2e-predictor]  'kind': 'InferenceService',
[e2e-predictor]  'metadata': {'annotations': None,
[e2e-predictor] ... 'worker_spec': None,
[e2e-predictor]  'xgboost': None},
[e2e-predictor]  'transformer': None},
[e2e-predictor]  'status': None}
[e2e-predictor] kwargs = {'_return_http_data_only': True}
[e2e-predictor] local_var_params = {'_return_http_data_only': True, 'all_params': ['group', 'version', 'namespace', 'plural', 'body', 'pretty', ...], 'au... 'worker_spec': None,
[e2e-predictor]  'xgboost': None},
[e2e-predictor]  'transformer': None},
[e2e-predictor]  'status': None}, ...}
[e2e-predictor] all_params = ['group', 'version', 'namespace', 'plural', 'body', 'pretty', ...]
[e2e-predictor] key = '_return_http_data_only', val = True, collection_formats = {}
[e2e-predictor] path_params = {'group': 'serving.kserve.io', 'namespace': 'kserve-ci-e2e-test', 'plural': 'inferenceservices', 'version': 'v1beta1'}
[e2e-predictor] query_params = []
[e2e-predictor] 
[e2e-predictor]     def create_namespaced_custom_object_with_http_info(self, group, version, namespace, plural, body, **kwargs):  # noqa: E501
[e2e-predictor]         """create_namespaced_custom_object  # noqa: E501
[e2e-predictor] 
[e2e-predictor]         Creates a namespace scoped Custom object  # noqa: E501
[e2e-predictor]         This method makes a synchronous HTTP request by default. To make an
[e2e-predictor]         asynchronous HTTP request, please pass async_req=True
[e2e-predictor]         >>> thread = api.create_namespaced_custom_object_with_http_info(group, version, namespace, plural, body, async_req=True)
[e2e-predictor]         >>> result = thread.get()
[e2e-predictor] 
[e2e-predictor]         :param async_req bool: execute request asynchronously
[e2e-predictor]         :param str group: The custom resource's group name (required)
[e2e-predictor]         :param str version: The custom resource's version (required)
[e2e-predictor]         :param str namespace: The custom resource's namespace (required)
[e2e-predictor]         :param str plural: The custom resource's plural name. For TPRs this would be lowercase plural kind. (required)
[e2e-predictor]         :param object body: The JSON schema of the Resource to create. (required)
[e2e-predictor]         :param str pretty: If 'true', then the output is pretty printed.
[e2e-predictor]         :param str dry_run: When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed
[e2e-predictor]         :param str field_manager: fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint.
[e2e-predictor]         :param str field_validation: fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. (optional)
[e2e-predictor]         :param _return_http_data_only: response data without head status code
[e2e-predictor]                                        and headers
[e2e-predictor]         :param _preload_content: if False, the urllib3.HTTPResponse object will
[e2e-predictor]                                  be returned without reading/decoding response
[e2e-predictor]                                  data. Default is True.
[e2e-predictor]         :param _request_timeout: timeout setting for this request. If one
[e2e-predictor]                                  number provided, it will be total request
[e2e-predictor]                                  timeout. It can also be a pair (tuple) of
[e2e-predictor]                                  (connection, read) timeouts.
[e2e-predictor]         :return: tuple(object, status_code(int), headers(HTTPHeaderDict))
[e2e-predictor]                  If the method is called asynchronously,
[e2e-predictor]                  returns the request thread.
[e2e-predictor]         """
[e2e-predictor] 
[e2e-predictor]         local_var_params = locals()
[e2e-predictor] 
[e2e-predictor]         all_params = [
[e2e-predictor]             'group',
[e2e-predictor]             'version',
[e2e-predictor]             'namespace',
[e2e-predictor]             'plural',
[e2e-predictor]             'body',
[e2e-predictor]             'pretty',
[e2e-predictor]             'dry_run',
[e2e-predictor]             'field_manager',
[e2e-predictor]             'field_validation'
[e2e-predictor]         ]
[e2e-predictor]         all_params.extend(
[e2e-predictor]             [
[e2e-predictor]                 'async_req',
[e2e-predictor]                 '_return_http_data_only',
[e2e-predictor]                 '_preload_content',
[e2e-predictor]                 '_request_timeout'
[e2e-predictor]             ]
[e2e-predictor]         )
[e2e-predictor] 
[e2e-predictor]         for key, val in six.iteritems(local_var_params['kwargs']):
[e2e-predictor]             if key not in all_params:
[e2e-predictor]                 raise ApiTypeError(
[e2e-predictor]                     "Got an unexpected keyword argument '%s'"
[e2e-predictor]                     " to method create_namespaced_custom_object" % key
[e2e-predictor]                 )
[e2e-predictor]             local_var_params[key] = val
[e2e-predictor]         del local_var_params['kwargs']
[e2e-predictor]         # verify the required parameter 'group' is set
[e2e-predictor]         if self.api_client.client_side_validation and ('group' not in local_var_params or  # noqa: E501
[e2e-predictor]                                                         local_var_params['group'] is None):  # noqa: E501
[e2e-predictor]             raise ApiValueError("Missing the required parameter `group` when calling `create_namespaced_custom_object`")  # noqa: E501
[e2e-predictor]         # verify the required parameter 'version' is set
[e2e-predictor]         if self.api_client.client_side_validation and ('version' not in local_var_params or  # noqa: E501
[e2e-predictor]                                                         local_var_params['version'] is None):  # noqa: E501
[e2e-predictor]             raise ApiValueError("Missing the required parameter `version` when calling `create_namespaced_custom_object`")  # noqa: E501
[e2e-predictor]         # verify the required parameter 'namespace' is set
[e2e-predictor]         if self.api_client.client_side_validation and ('namespace' not in local_var_params or  # noqa: E501
[e2e-predictor]                                                         local_var_params['namespace'] is None):  # noqa: E501
[e2e-predictor]             raise ApiValueError("Missing the required parameter `namespace` when calling `create_namespaced_custom_object`")  # noqa: E501
[e2e-predictor]         # verify the required parameter 'plural' is set
[e2e-predictor]         if self.api_client.client_side_validation and ('plural' not in local_var_params or  # noqa: E501
[e2e-predictor]                                                         local_var_params['plural'] is None):  # noqa: E501
[e2e-predictor]             raise ApiValueError("Missing the required parameter `plural` when calling `create_namespaced_custom_object`")  # noqa: E501
[e2e-predictor]         # verify the required parameter 'body' is set
[e2e-predictor]         if self.api_client.client_side_validation and ('body' not in local_var_params or  # noqa: E501
[e2e-predictor]                                                         local_var_params['body'] is None):  # noqa: E501
[e2e-predictor]             raise ApiValueError("Missing the required parameter `body` when calling `create_namespaced_custom_object`")  # noqa: E501
[e2e-predictor] 
[e2e-predictor]         collection_formats = {}
[e2e-predictor] 
[e2e-predictor]         path_params = {}
[e2e-predictor]         if 'group' in local_var_params:
[e2e-predictor]             path_params['group'] = local_var_params['group']  # noqa: E501
[e2e-predictor]         if 'version' in local_var_params:
[e2e-predictor]             path_params['version'] = local_var_params['version']  # noqa: E501
[e2e-predictor]         if 'namespace' in local_var_params:
[e2e-predictor]             path_params['namespace'] = local_var_params['namespace']  # noqa: E501
[e2e-predictor]         if 'plural' in local_var_params:
[e2e-predictor]             path_params['plural'] = local_var_params['plural']  # noqa: E501
[e2e-predictor] 
[e2e-predictor]         query_params = []
[e2e-predictor]         if 'pretty' in local_var_params and local_var_params['pretty'] is not None:  # noqa: E501
[e2e-predictor]             query_params.append(('pretty', local_var_params['pretty']))  # noqa: E501
[e2e-predictor]         if 'dry_run' in local_var_params and local_var_params['dry_run'] is not None:  # noqa: E501
[e2e-predictor]             query_params.append(('dryRun', local_var_params['dry_run']))  # noqa: E501
[e2e-predictor]         if 'field_manager' in local_var_params and local_var_params['field_manager'] is not None:  # noqa: E501
[e2e-predictor]             query_params.append(('fieldManager', local_var_params['field_manager']))  # noqa: E501
[e2e-predictor]         if 'field_validation' in local_var_params and local_var_params['field_validation'] is not None:  # noqa: E501
[e2e-predictor]             query_params.append(('fieldValidation', local_var_params['field_validation']))  # noqa: E501
[e2e-predictor] 
[e2e-predictor]         header_params = {}
[e2e-predictor] 
[e2e-predictor]         form_params = []
[e2e-predictor]         local_var_files = {}
[e2e-predictor] 
[e2e-predictor]         body_params = None
[e2e-predictor]         if 'body' in local_var_params:
[e2e-predictor]             body_params = local_var_params['body']
[e2e-predictor]         # HTTP header `Accept`
[e2e-predictor]         header_params['Accept'] = self.api_client.select_header_accept(
[e2e-predictor]             ['application/json'])  # noqa: E501
[e2e-predictor] 
[e2e-predictor]         # Authentication setting
[e2e-predictor]         auth_settings = ['BearerToken']  # noqa: E501
[e2e-predictor] 
[e2e-predictor] >       return self.api_client.call_api(
[e2e-predictor]             '/apis/{group}/{version}/namespaces/{namespace}/{plural}', 'POST',
[e2e-predictor]             path_params,
[e2e-predictor]             query_params,
[e2e-predictor]             header_params,
[e2e-predictor]             body=body_params,
[e2e-predictor]             post_params=form_params,
[e2e-predictor]             files=local_var_files,
[e2e-predictor]             response_type='object',  # noqa: E501
[e2e-predictor]             auth_settings=auth_settings,
[e2e-predictor]             async_req=local_var_params.get('async_req'),
[e2e-predictor]             _return_http_data_only=local_var_params.get('_return_http_data_only'),  # noqa: E501
[e2e-predictor]             _preload_content=local_var_params.get('_preload_content', True),
[e2e-predictor]             _request_timeout=local_var_params.get('_request_timeout'),
[e2e-predictor]             collection_formats=collection_formats)
[e2e-predictor] 
[e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api/custom_objects_api.py:354: 
[e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
[e2e-predictor] 
[e2e-predictor] self = 
[e2e-predictor] resource_path = '/apis/{group}/{version}/namespaces/{namespace}/{plural}'
[e2e-predictor] method = 'POST'
[e2e-predictor] path_params = {'group': 'serving.kserve.io', 'namespace': 'kserve-ci-e2e-test', 'plural': 'inferenceservices', 'version': 'v1beta1'}
[e2e-predictor] query_params = []
[e2e-predictor] header_params = {'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
[e2e-predictor] body = {'api_version': 'serving.kserve.io/v1beta1',
[e2e-predictor]  'kind': 'InferenceService',
[e2e-predictor]  'metadata': {'annotations': None,
[e2e-predictor] ... 'worker_spec': None,
[e2e-predictor]  'xgboost': None},
[e2e-predictor]  'transformer': None},
[e2e-predictor]  'status': None}
[e2e-predictor] post_params = [], files = {}, response_type = 'object'
[e2e-predictor] auth_settings = ['BearerToken'], async_req = None, _return_http_data_only = True
[e2e-predictor] collection_formats = {}, _preload_content = True, _request_timeout = None
[e2e-predictor] _host = None
[e2e-predictor] 
[e2e-predictor]     def call_api(self, resource_path, method,
[e2e-predictor]                  path_params=None, query_params=None, header_params=None,
[e2e-predictor]                  body=None, post_params=None, files=None,
[e2e-predictor]                  response_type=None, auth_settings=None, async_req=None,
[e2e-predictor]                  _return_http_data_only=None, collection_formats=None,
[e2e-predictor]                  _preload_content=True, _request_timeout=None, _host=None):
[e2e-predictor]         """Makes the HTTP request (synchronous) and returns deserialized data.
[e2e-predictor] 
[e2e-predictor]         To make an async_req request, set the async_req parameter.
[e2e-predictor] 
[e2e-predictor]         :param resource_path: Path to method endpoint.
[e2e-predictor]         :param method: Method to call.
[e2e-predictor]         :param path_params: Path parameters in the url.
[e2e-predictor]         :param query_params: Query parameters in the url.
[e2e-predictor]         :param header_params: Header parameters to be
[e2e-predictor]             placed in the request header.
[e2e-predictor]         :param body: Request body.
[e2e-predictor]         :param post_params dict: Request post form parameters,
[e2e-predictor]             for `application/x-www-form-urlencoded`, `multipart/form-data`.
[e2e-predictor]         :param auth_settings list: Auth Settings names for the request.
[e2e-predictor]         :param response: Response data type.
[e2e-predictor]         :param files dict: key -> filename, value -> filepath,
[e2e-predictor]             for `multipart/form-data`.
[e2e-predictor]         :param async_req bool: execute request asynchronously
[e2e-predictor]         :param _return_http_data_only: response data without head status code
[e2e-predictor]                                        and headers
[e2e-predictor]         :param collection_formats: dict of collection formats for path, query,
[e2e-predictor]             header, and post parameters.
[e2e-predictor]         :param _preload_content: if False, the urllib3.HTTPResponse object will
[e2e-predictor]                                  be returned without reading/decoding response
[e2e-predictor]                                  data. Default is True.
[e2e-predictor]         :param _request_timeout: timeout setting for this request. If one
[e2e-predictor]                                  number provided, it will be total request
[e2e-predictor]                                  timeout. It can also be a pair (tuple) of
[e2e-predictor]                                  (connection, read) timeouts.
[e2e-predictor]         :return:
[e2e-predictor]             If async_req parameter is True,
[e2e-predictor]             the request will be called asynchronously.
[e2e-predictor]             The method will return the request thread.
[e2e-predictor]             If parameter async_req is False or missing,
[e2e-predictor]             then the method will return the response directly.
[e2e-predictor]         """
[e2e-predictor]         if not async_req:
[e2e-predictor] >           return self.__call_api(resource_path, method,
[e2e-predictor]                                    path_params, query_params, header_params,
[e2e-predictor]                                    body, post_params, files,
[e2e-predictor]                                    response_type, auth_settings,
[e2e-predictor]                                    _return_http_data_only, collection_formats,
[e2e-predictor]                                    _preload_content, _request_timeout, _host)
[e2e-predictor] 
[e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api_client.py:348: 
[e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
[e2e-predictor] 
[e2e-predictor] self = 
[e2e-predictor] resource_path = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
[e2e-predictor] method = 'POST'
[e2e-predictor] path_params = [('group', 'serving.kserve.io'), ('version', 'v1beta1'), ('namespace', 'kserve-ci-e2e-test'), ('plural', 'inferenceservices')]
[e2e-predictor] query_params = []
[e2e-predictor] header_params = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
[e2e-predictor] body = {'apiVersion': 'serving.kserve.io/v1beta1', 'kind': 'InferenceService', 'metadata': {'labels': {'networking.kserve.io/...': '200m', 'memory': '1Gi'}, 'requests': {'cpu': '200m', 'memory': '256Mi'}}, 'runtime': 'kserve-paddleserver', ...}}}}
[e2e-predictor] post_params = [], files = {}, response_type = 'object'
[e2e-predictor] auth_settings = ['BearerToken'], _return_http_data_only = True
[e2e-predictor] collection_formats = {}, _preload_content = True, _request_timeout = None
[e2e-predictor] _host = None
[e2e-predictor] 
[e2e-predictor]     def __call_api(
[e2e-predictor]             self, resource_path, method, path_params=None,
[e2e-predictor]             query_params=None, header_params=None, body=None, post_params=None,
[e2e-predictor]             files=None, response_type=None, auth_settings=None,
[e2e-predictor]             _return_http_data_only=None, collection_formats=None,
[e2e-predictor]             _preload_content=True, _request_timeout=None, _host=None):
[e2e-predictor] 
[e2e-predictor]         config = self.configuration
[e2e-predictor] 
[e2e-predictor]         # header parameters
[e2e-predictor]         header_params = header_params or {}
[e2e-predictor]         header_params.update(self.default_headers)
[e2e-predictor]         if self.cookie:
[e2e-predictor]             header_params['Cookie'] = self.cookie
[e2e-predictor]         if header_params:
[e2e-predictor]             header_params = self.sanitize_for_serialization(header_params)
[e2e-predictor]             header_params = dict(self.parameters_to_tuples(header_params,
[e2e-predictor]                                                            collection_formats))
[e2e-predictor] 
[e2e-predictor]         # path parameters
[e2e-predictor]         if path_params:
[e2e-predictor]             path_params = self.sanitize_for_serialization(path_params)
[e2e-predictor]             path_params = self.parameters_to_tuples(path_params,
[e2e-predictor]                                                     collection_formats)
[e2e-predictor]             for k, v in path_params:
[e2e-predictor]                 # specified safe chars, encode everything
[e2e-predictor]                 resource_path = resource_path.replace(
[e2e-predictor]                     '{%s}' % k,
[e2e-predictor]                     quote(str(v), safe=config.safe_chars_for_path_param)
[e2e-predictor]                 )
[e2e-predictor] 
[e2e-predictor]         # query parameters
[e2e-predictor]         if query_params:
[e2e-predictor]             query_params = self.sanitize_for_serialization(query_params)
[e2e-predictor]             query_params = self.parameters_to_tuples(query_params,
[e2e-predictor]                                                      collection_formats)
[e2e-predictor] 
[e2e-predictor]         # post parameters
[e2e-predictor]         if post_params or files:
[e2e-predictor]             post_params = post_params if post_params else []
[e2e-predictor]             post_params = self.sanitize_for_serialization(post_params)
[e2e-predictor]             post_params = self.parameters_to_tuples(post_params,
[e2e-predictor]                                                     collection_formats)
[e2e-predictor]             post_params.extend(self.files_parameters(files))
[e2e-predictor] 
[e2e-predictor]         # auth setting
[e2e-predictor]         self.update_params_for_auth(header_params, query_params, auth_settings)
[e2e-predictor] 
[e2e-predictor]         # body
[e2e-predictor]         if body:
[e2e-predictor]             body = self.sanitize_for_serialization(body)
[e2e-predictor] 
[e2e-predictor]         # request url
[e2e-predictor]         if _host is None:
[e2e-predictor]             url = self.configuration.host + resource_path
[e2e-predictor]         else:
[e2e-predictor]             # use server/host defined in path or operation instead
[e2e-predictor]             url = _host + resource_path
[e2e-predictor] 
[e2e-predictor]         # perform request and return response
[e2e-predictor] >       response_data = self.request(
[e2e-predictor]             method, url, query_params=query_params, headers=header_params,
[e2e-predictor]             post_params=post_params, body=body,
[e2e-predictor]             _preload_content=_preload_content,
[e2e-predictor]             _request_timeout=_request_timeout)
[e2e-predictor] 
[e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api_client.py:180: 
[e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
[e2e-predictor] 
[e2e-predictor] self = 
[e2e-predictor] method = 'POST'
[e2e-predictor] url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
[e2e-predictor] query_params = []
[e2e-predictor] headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
[e2e-predictor] post_params = []
[e2e-predictor] body = {'apiVersion': 'serving.kserve.io/v1beta1', 'kind': 'InferenceService', 'metadata': {'labels': {'networking.kserve.io/...': '200m', 'memory': '1Gi'}, 'requests': {'cpu': '200m', 'memory': '256Mi'}}, 'runtime': 'kserve-paddleserver', ...}}}}
[e2e-predictor] _preload_content = True, _request_timeout = None
[e2e-predictor] 
[e2e-predictor]     def request(self, method, url, query_params=None, headers=None,
[e2e-predictor]                 post_params=None, body=None, _preload_content=True,
[e2e-predictor]                 _request_timeout=None):
[e2e-predictor]         """Makes the HTTP request using RESTClient."""
[e2e-predictor]         if method == "GET":
[e2e-predictor]             return self.rest_client.GET(url,
[e2e-predictor]                                         query_params=query_params,
[e2e-predictor]                                         _preload_content=_preload_content,
[e2e-predictor]                                         _request_timeout=_request_timeout,
[e2e-predictor]                                         headers=headers)
[e2e-predictor]         elif method == "HEAD":
[e2e-predictor]             return self.rest_client.HEAD(url,
[e2e-predictor]                                          query_params=query_params,
[e2e-predictor]                                          _preload_content=_preload_content,
[e2e-predictor]                                          _request_timeout=_request_timeout,
[e2e-predictor]                                          headers=headers)
[e2e-predictor]         elif method == "OPTIONS":
[e2e-predictor]             return self.rest_client.OPTIONS(url,
[e2e-predictor]                                             query_params=query_params,
[e2e-predictor]                                             headers=headers,
[e2e-predictor]                                             _preload_content=_preload_content,
[e2e-predictor]                                             _request_timeout=_request_timeout)
[e2e-predictor]         elif method == "POST":
[e2e-predictor] >           return self.rest_client.POST(url,
[e2e-predictor]                                          query_params=query_params,
[e2e-predictor]                                          headers=headers,
[e2e-predictor]                                          post_params=post_params,
[e2e-predictor]                                          _preload_content=_preload_content,
[e2e-predictor]                                          _request_timeout=_request_timeout,
[e2e-predictor] 
[e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api_client.py:391: 
[e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
[e2e-predictor] 
[e2e-predictor] self = 
[e2e-predictor] url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
[e2e-predictor] headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
[e2e-predictor] query_params = [], post_params = []
[e2e-predictor] body = {'apiVersion': 'serving.kserve.io/v1beta1', 'kind': 'InferenceService', 'metadata': {'labels': {'networking.kserve.io/...': '200m', 'memory': '1Gi'}, 'requests': {'cpu': '200m', 'memory': '256Mi'}}, 'runtime': 'kserve-paddleserver', ...}}}}
[e2e-predictor] _preload_content = True, _request_timeout = None
[e2e-predictor] 
[e2e-predictor]     def POST(self, url, headers=None, query_params=None, post_params=None,
[e2e-predictor]              body=None, _preload_content=True, _request_timeout=None):
[e2e-predictor] >       return self.request("POST", url,
[e2e-predictor]                             headers=headers,
[e2e-predictor]                             query_params=query_params,
[e2e-predictor]                             post_params=post_params,
[e2e-predictor]                             _preload_content=_preload_content,
[e2e-predictor]                             _request_timeout=_request_timeout,
[e2e-predictor]                             body=body)
[e2e-predictor] 
[e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/rest.py:279: 
[e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
[e2e-predictor] 
[e2e-predictor] self = 
[e2e-predictor] method = 'POST'
[e2e-predictor] url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
[e2e-predictor] query_params = []
[e2e-predictor] headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
[e2e-predictor] body = {'apiVersion': 'serving.kserve.io/v1beta1', 'kind': 'InferenceService', 'metadata': {'labels': {'networking.kserve.io/...': '200m', 'memory': '1Gi'}, 'requests': {'cpu': '200m', 'memory': '256Mi'}},
[e2e-predictor] post_params = {}, _preload_content = True, _request_timeout = None
[e2e-predictor] 
[e2e-predictor]     def request(self, method, url, query_params=None, headers=None,
[e2e-predictor]                 body=None, post_params=None, _preload_content=True,
[e2e-predictor]                 _request_timeout=None):
[e2e-predictor]         """Perform requests.
[e2e-predictor] 
[e2e-predictor]         :param method: http request method
[e2e-predictor]         :param url: http request url
[e2e-predictor]         :param query_params: query parameters in the url
[e2e-predictor]         :param headers: http request headers
[e2e-predictor]         :param body: request json body, for `application/json`
[e2e-predictor]         :param post_params: request post parameters,
[e2e-predictor]                             `application/x-www-form-urlencoded`
[e2e-predictor]                             and `multipart/form-data`
[e2e-predictor]         :param _preload_content: if False, the urllib3.HTTPResponse object will
[e2e-predictor]                                  be returned without reading/decoding response
[e2e-predictor]                                  data. Default is True.
[e2e-predictor]         :param _request_timeout: timeout setting for this request. If one
[e2e-predictor]                                  number provided, it will be total request
[e2e-predictor]                                  timeout. It can also be a pair (tuple) of
[e2e-predictor]                                  (connection, read) timeouts.
[e2e-predictor]         """
[e2e-predictor]         method = method.upper()
[e2e-predictor]         assert method in ['GET', 'HEAD', 'DELETE', 'POST', 'PUT',
[e2e-predictor]                           'PATCH', 'OPTIONS']
[e2e-predictor] 
[e2e-predictor]         if post_params and body:
[e2e-predictor]             raise ApiValueError(
[e2e-predictor]                 "body parameter cannot be used with post_params parameter."
[e2e-predictor] ) [e2e-predictor] [e2e-predictor] post_params = post_params or {} [e2e-predictor] headers = headers or {} [e2e-predictor] [e2e-predictor] timeout = None [e2e-predictor] if _request_timeout: [e2e-predictor] if isinstance(_request_timeout, (int, ) if six.PY3 else (int, long)): # noqa: E501,F821 [e2e-predictor] timeout = urllib3.Timeout(total=_request_timeout) [e2e-predictor] elif (isinstance(_request_timeout, tuple) and [e2e-predictor] len(_request_timeout) == 2): [e2e-predictor] timeout = urllib3.Timeout( [e2e-predictor] connect=_request_timeout[0], read=_request_timeout[1]) [e2e-predictor] [e2e-predictor] if 'Content-Type' not in headers: [e2e-predictor] headers['Content-Type'] = 'application/json' [e2e-predictor] [e2e-predictor] try: [e2e-predictor] # For `POST`, `PUT`, `PATCH`, `OPTIONS`, `DELETE` [e2e-predictor] if method in ['POST', 'PUT', 'PATCH', 'OPTIONS', 'DELETE']: [e2e-predictor] if query_params: [e2e-predictor] url += '?' + urlencode(query_params) [e2e-predictor] if (re.search('json', headers['Content-Type'], re.IGNORECASE) or [e2e-predictor] headers['Content-Type'] == 'application/apply-patch+yaml'): [e2e-predictor] if headers['Content-Type'] == 'application/json-patch+json': [e2e-predictor] if not isinstance(body, list): [e2e-predictor] headers['Content-Type'] = \ [e2e-predictor] 'application/strategic-merge-patch+json' [e2e-predictor] request_body = None [e2e-predictor] if body is not None: [e2e-predictor] request_body = json.dumps(body) [e2e-predictor] > r = self.pool_manager.request( [e2e-predictor] method, url, [e2e-predictor] body=request_body, [e2e-predictor] preload_content=_preload_content, [e2e-predictor] timeout=timeout, [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/rest.py:172: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] method = 'POST' [e2e-predictor] url = 
'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices' [e2e-predictor] body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io...memory": "256Mi"}}, "runtime": "kserve-paddleserver", "storageUri": "gs://kfserving-examples/models/paddle/resnet"}}}}' [e2e-predictor] fields = None [e2e-predictor] headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'} [e2e-predictor] json = None [e2e-predictor] urlopen_kw = {'body': '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking....leserver", "storageUri": "gs://kfserving-examples/models/paddle/resnet"}}}}', 'preload_content': True, 'timeout': None} [e2e-predictor] [e2e-predictor] def request( [e2e-predictor] self, [e2e-predictor] method: str, [e2e-predictor] url: str, [e2e-predictor] body: _TYPE_BODY | None = None, [e2e-predictor] fields: _TYPE_FIELDS | None = None, [e2e-predictor] headers: typing.Mapping[str, str] | None = None, [e2e-predictor] json: typing.Any | None = None, [e2e-predictor] **urlopen_kw: typing.Any, [e2e-predictor] ) -> BaseHTTPResponse: [e2e-predictor] """ [e2e-predictor] Make a request using :meth:`urlopen` with the appropriate encoding of [e2e-predictor] ``fields`` based on the ``method`` used. [e2e-predictor] [e2e-predictor] This is a convenience method that requires the least amount of manual [e2e-predictor] effort. It can be used in most situations, while still having the [e2e-predictor] option to drop down to more specific methods when necessary, such as [e2e-predictor] :meth:`request_encode_url`, :meth:`request_encode_body`, [e2e-predictor] or even the lowest level :meth:`urlopen`. [e2e-predictor] [e2e-predictor] :param method: [e2e-predictor] HTTP request method (such as GET, POST, PUT, etc.) 
[e2e-predictor] [e2e-predictor] :param url: [e2e-predictor] The URL to perform the request on. [e2e-predictor] [e2e-predictor] :param body: [e2e-predictor] Data to send in the request body, either :class:`str`, :class:`bytes`, [e2e-predictor] an iterable of :class:`str`/:class:`bytes`, or a file-like object. [e2e-predictor] [e2e-predictor] :param fields: [e2e-predictor] Data to encode and send in the URL or request body, depending on ``method``. [e2e-predictor] [e2e-predictor] :param headers: [e2e-predictor] Dictionary of custom headers to send, such as User-Agent, [e2e-predictor] If-None-Match, etc. If None, pool headers are used. If provided, [e2e-predictor] these headers completely replace any pool-specific headers. [e2e-predictor] [e2e-predictor] :param json: [e2e-predictor] Data to encode and send as JSON with UTF-encoded in the request body. [e2e-predictor] The ``"Content-Type"`` header will be set to ``"application/json"`` [e2e-predictor] unless specified otherwise. [e2e-predictor] """ [e2e-predictor] method = method.upper() [e2e-predictor] [e2e-predictor] if json is not None and body is not None: [e2e-predictor] raise TypeError( [e2e-predictor] "request got values for both 'body' and 'json' parameters which are mutually exclusive" [e2e-predictor] ) [e2e-predictor] [e2e-predictor] if json is not None: [e2e-predictor] if headers is None: [e2e-predictor] headers = self.headers [e2e-predictor] [e2e-predictor] if not ("content-type" in map(str.lower, headers.keys())): [e2e-predictor] headers = HTTPHeaderDict(headers) [e2e-predictor] headers["Content-Type"] = "application/json" [e2e-predictor] [e2e-predictor] body = _json.dumps(json, separators=(",", ":"), ensure_ascii=False).encode( [e2e-predictor] "utf-8" [e2e-predictor] ) [e2e-predictor] [e2e-predictor] if body is not None: [e2e-predictor] urlopen_kw["body"] = body [e2e-predictor] [e2e-predictor] if method in self._encode_url_methods: [e2e-predictor] return self.request_encode_url( [e2e-predictor] method, 
[e2e-predictor] url, [e2e-predictor] fields=fields, # type: ignore[arg-type] [e2e-predictor] headers=headers, [e2e-predictor] **urlopen_kw, [e2e-predictor] ) [e2e-predictor] else: [e2e-predictor] > return self.request_encode_body( [e2e-predictor] method, url, fields=fields, headers=headers, **urlopen_kw [e2e-predictor] ) [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/_request_methods.py:143: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] method = 'POST' [e2e-predictor] url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices' [e2e-predictor] fields = None [e2e-predictor] headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'} [e2e-predictor] encode_multipart = True, multipart_boundary = None [e2e-predictor] urlopen_kw = {'body': '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking....leserver", "storageUri": "gs://kfserving-examples/models/paddle/resnet"}}}}', 'preload_content': True, 'timeout': None} [e2e-predictor] extra_kw = {'body': '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking....nt': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'}), 'preload_content': True, 'timeout': None} [e2e-predictor] [e2e-predictor] def request_encode_body( [e2e-predictor] self, [e2e-predictor] method: str, [e2e-predictor] url: str, [e2e-predictor] fields: _TYPE_FIELDS | None = None, [e2e-predictor] headers: typing.Mapping[str, str] | None = None, [e2e-predictor] encode_multipart: bool = True, [e2e-predictor] multipart_boundary: str | None = None, [e2e-predictor] **urlopen_kw: str, [e2e-predictor] ) -> 
BaseHTTPResponse: [e2e-predictor] """ [e2e-predictor] Make a request using :meth:`urlopen` with the ``fields`` encoded in [e2e-predictor] the body. This is useful for request methods like POST, PUT, PATCH, etc. [e2e-predictor] [e2e-predictor] When ``encode_multipart=True`` (default), then [e2e-predictor] :func:`urllib3.encode_multipart_formdata` is used to encode [e2e-predictor] the payload with the appropriate content type. Otherwise [e2e-predictor] :func:`urllib.parse.urlencode` is used with the [e2e-predictor] 'application/x-www-form-urlencoded' content type. [e2e-predictor] [e2e-predictor] Multipart encoding must be used when posting files, and it's reasonably [e2e-predictor] safe to use it in other times too. However, it may break request [e2e-predictor] signing, such as with OAuth. [e2e-predictor] [e2e-predictor] Supports an optional ``fields`` parameter of key/value strings AND [e2e-predictor] key/filetuple. A filetuple is a (filename, data, MIME type) tuple where [e2e-predictor] the MIME type is optional. For example:: [e2e-predictor] [e2e-predictor] fields = { [e2e-predictor] 'foo': 'bar', [e2e-predictor] 'fakefile': ('foofile.txt', 'contents of foofile'), [e2e-predictor] 'realfile': ('barfile.txt', open('realfile').read()), [e2e-predictor] 'typedfile': ('bazfile.bin', open('bazfile').read(), [e2e-predictor] 'image/jpeg'), [e2e-predictor] 'nonamefile': 'contents of nonamefile field', [e2e-predictor] } [e2e-predictor] [e2e-predictor] When uploading a file, providing a filename (the first parameter of the [e2e-predictor] tuple) is optional but recommended to best mimic behavior of browsers. [e2e-predictor] [e2e-predictor] Note that if ``headers`` are supplied, the 'Content-Type' header will [e2e-predictor] be overwritten because it depends on the dynamic random boundary string [e2e-predictor] which is used to compose the body of the request. The random boundary [e2e-predictor] string can be explicitly set with the ``multipart_boundary`` parameter. 
[e2e-predictor] [e2e-predictor] :param method: [e2e-predictor] HTTP request method (such as GET, POST, PUT, etc.) [e2e-predictor] [e2e-predictor] :param url: [e2e-predictor] The URL to perform the request on. [e2e-predictor] [e2e-predictor] :param fields: [e2e-predictor] Data to encode and send in the request body. [e2e-predictor] [e2e-predictor] :param headers: [e2e-predictor] Dictionary of custom headers to send, such as User-Agent, [e2e-predictor] If-None-Match, etc. If None, pool headers are used. If provided, [e2e-predictor] these headers completely replace any pool-specific headers. [e2e-predictor] [e2e-predictor] :param encode_multipart: [e2e-predictor] If True, encode the ``fields`` using the multipart/form-data MIME [e2e-predictor] format. [e2e-predictor] [e2e-predictor] :param multipart_boundary: [e2e-predictor] If not specified, then a random boundary will be generated using [e2e-predictor] :func:`urllib3.filepost.choose_boundary`. [e2e-predictor] """ [e2e-predictor] if headers is None: [e2e-predictor] headers = self.headers [e2e-predictor] [e2e-predictor] extra_kw: dict[str, typing.Any] = {"headers": HTTPHeaderDict(headers)} [e2e-predictor] body: bytes | str [e2e-predictor] [e2e-predictor] if fields: [e2e-predictor] if "body" in urlopen_kw: [e2e-predictor] raise TypeError( [e2e-predictor] "request got values for both 'fields' and 'body', can only specify one." 
[e2e-predictor] ) [e2e-predictor] [e2e-predictor] if encode_multipart: [e2e-predictor] body, content_type = encode_multipart_formdata( [e2e-predictor] fields, boundary=multipart_boundary [e2e-predictor] ) [e2e-predictor] else: [e2e-predictor] body, content_type = ( [e2e-predictor] urlencode(fields), # type: ignore[arg-type] [e2e-predictor] "application/x-www-form-urlencoded", [e2e-predictor] ) [e2e-predictor] [e2e-predictor] extra_kw["body"] = body [e2e-predictor] extra_kw["headers"].setdefault("Content-Type", content_type) [e2e-predictor] [e2e-predictor] extra_kw.update(urlopen_kw) [e2e-predictor] [e2e-predictor] > return self.urlopen(method, url, **extra_kw) [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/_request_methods.py:278: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] method = 'POST' [e2e-predictor] url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices' [e2e-predictor] redirect = True [e2e-predictor] kw = {'assert_same_host': False, 'body': '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata...', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'}), 'preload_content': True, ...} [e2e-predictor] u = Url(scheme='https', auth=None, host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', p...443, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices', query=None, fragment=None) [e2e-predictor] conn = [e2e-predictor] [e2e-predictor] def urlopen( # type: ignore[override] [e2e-predictor] self, method: str, url: str, redirect: bool = True, **kw: typing.Any [e2e-predictor] ) -> BaseHTTPResponse: [e2e-predictor] """ [e2e-predictor] Same as 
:meth:`urllib3.HTTPConnectionPool.urlopen` [e2e-predictor] with custom cross-host redirect logic and only sends the request-uri [e2e-predictor] portion of the ``url``. [e2e-predictor] [e2e-predictor] The given ``url`` parameter must be absolute, such that an appropriate [e2e-predictor] :class:`urllib3.connectionpool.ConnectionPool` can be chosen for it. [e2e-predictor] """ [e2e-predictor] u = parse_url(url) [e2e-predictor] [e2e-predictor] if u.scheme is None: [e2e-predictor] warnings.warn( [e2e-predictor] "URLs without a scheme (ie 'https://') are deprecated and will raise an error " [e2e-predictor] "in a future version of urllib3. To avoid this DeprecationWarning ensure all URLs " [e2e-predictor] "start with 'https://' or 'http://'. Read more in this issue: " [e2e-predictor] "https://github.com/urllib3/urllib3/issues/2920", [e2e-predictor] category=DeprecationWarning, [e2e-predictor] stacklevel=2, [e2e-predictor] ) [e2e-predictor] [e2e-predictor] conn = self.connection_from_host(u.host, port=u.port, scheme=u.scheme) [e2e-predictor] [e2e-predictor] kw["assert_same_host"] = False [e2e-predictor] kw["redirect"] = False [e2e-predictor] [e2e-predictor] if "headers" not in kw: [e2e-predictor] kw["headers"] = self.headers [e2e-predictor] [e2e-predictor] if self._proxy_requires_url_absolute_form(u): [e2e-predictor] response = conn.urlopen(method, url, **kw) [e2e-predictor] else: [e2e-predictor] > response = conn.urlopen(method, u.request_uri, **kw) [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/poolmanager.py:457: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] method = 'POST' [e2e-predictor] url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices' [e2e-predictor] body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": 
{"networking.kserve.io...memory": "256Mi"}}, "runtime": "kserve-paddleserver", "storageUri": "gs://kfserving-examples/models/paddle/resnet"}}}}' [e2e-predictor] headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'}) [e2e-predictor] retries = Retry(total=2, connect=None, read=None, redirect=None, status=None) [e2e-predictor] redirect = False, assert_same_host = False, timeout = None, pool_timeout = None [e2e-predictor] release_conn = True, chunked = False, body_pos = None, preload_content = True [e2e-predictor] decode_content = True, response_kw = {} [e2e-predictor] parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices', query=None, fragment=None) [e2e-predictor] destination_scheme = None, conn = None, release_this_conn = True [e2e-predictor] http_tunnel_required = False [e2e-predictor] err = NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.c...a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)") [e2e-predictor] clean_exit = False [e2e-predictor] [e2e-predictor] def urlopen( # type: ignore[override] [e2e-predictor] self, [e2e-predictor] method: str, [e2e-predictor] url: str, [e2e-predictor] body: _TYPE_BODY | None = None, [e2e-predictor] headers: typing.Mapping[str, str] | None = None, [e2e-predictor] retries: Retry | bool | int | None = None, [e2e-predictor] redirect: bool = True, [e2e-predictor] assert_same_host: bool = True, [e2e-predictor] timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, [e2e-predictor] pool_timeout: int | None = None, [e2e-predictor] release_conn: bool | None = None, [e2e-predictor] chunked: bool = False, [e2e-predictor] body_pos: _TYPE_BODY_POSITION | None = None, [e2e-predictor] preload_content: bool = True, [e2e-predictor] 
decode_content: bool = True, [e2e-predictor] **response_kw: typing.Any, [e2e-predictor] ) -> BaseHTTPResponse: [e2e-predictor] """ [e2e-predictor] Get a connection from the pool and perform an HTTP request. This is the [e2e-predictor] lowest level call for making a request, so you'll need to specify all [e2e-predictor] the raw details. [e2e-predictor] [e2e-predictor] .. note:: [e2e-predictor] [e2e-predictor] More commonly, it's appropriate to use a convenience method [e2e-predictor] such as :meth:`request`. [e2e-predictor] [e2e-predictor] .. note:: [e2e-predictor] [e2e-predictor] `release_conn` will only behave as expected if [e2e-predictor] `preload_content=False` because we want to make [e2e-predictor] `preload_content=False` the default behaviour someday soon without [e2e-predictor] breaking backwards compatibility. [e2e-predictor] [e2e-predictor] :param method: [e2e-predictor] HTTP request method (such as GET, POST, PUT, etc.) [e2e-predictor] [e2e-predictor] :param url: [e2e-predictor] The URL to perform the request on. [e2e-predictor] [e2e-predictor] :param body: [e2e-predictor] Data to send in the request body, either :class:`str`, :class:`bytes`, [e2e-predictor] an iterable of :class:`str`/:class:`bytes`, or a file-like object. [e2e-predictor] [e2e-predictor] :param headers: [e2e-predictor] Dictionary of custom headers to send, such as User-Agent, [e2e-predictor] If-None-Match, etc. If None, pool headers are used. If provided, [e2e-predictor] these headers completely replace any pool-specific headers. [e2e-predictor] [e2e-predictor] :param retries: [e2e-predictor] Configure the number of retries to allow before raising a [e2e-predictor] :class:`~urllib3.exceptions.MaxRetryError` exception. [e2e-predictor] [e2e-predictor] If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a [e2e-predictor] :class:`~urllib3.util.retry.Retry` object for fine-grained control [e2e-predictor] over different types of retries. 
[e2e-predictor] Pass an integer number to retry connection errors that many times, [e2e-predictor] but no other types of errors. Pass zero to never retry. [e2e-predictor] [e2e-predictor] If ``False``, then retries are disabled and any exception is raised [e2e-predictor] immediately. Also, instead of raising a MaxRetryError on redirects, [e2e-predictor] the redirect response will be returned. [e2e-predictor] [e2e-predictor] :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. [e2e-predictor] [e2e-predictor] :param redirect: [e2e-predictor] If True, automatically handle redirects (status codes 301, 302, [e2e-predictor] 303, 307, 308). Each redirect counts as a retry. Disabling retries [e2e-predictor] will disable redirect, too. [e2e-predictor] [e2e-predictor] :param assert_same_host: [e2e-predictor] If ``True``, will make sure that the host of the pool requests is [e2e-predictor] consistent else will raise HostChangedError. When ``False``, you can [e2e-predictor] use the pool on an HTTP proxy and request foreign hosts. [e2e-predictor] [e2e-predictor] :param timeout: [e2e-predictor] If specified, overrides the default timeout for this one [e2e-predictor] request. It may be a float (in seconds) or an instance of [e2e-predictor] :class:`urllib3.util.Timeout`. [e2e-predictor] [e2e-predictor] :param pool_timeout: [e2e-predictor] If set and the pool is set to block=True, then this method will [e2e-predictor] block for ``pool_timeout`` seconds and raise EmptyPoolError if no [e2e-predictor] connection is available within the time period. [e2e-predictor] [e2e-predictor] :param bool preload_content: [e2e-predictor] If True, the response's body will be preloaded into memory. [e2e-predictor] [e2e-predictor] :param bool decode_content: [e2e-predictor] If True, will attempt to decode the body based on the [e2e-predictor] 'content-encoding' header. 
[e2e-predictor] [e2e-predictor] :param release_conn: [e2e-predictor] If False, then the urlopen call will not release the connection [e2e-predictor] back into the pool once a response is received (but will release if [e2e-predictor] you read the entire contents of the response such as when [e2e-predictor] `preload_content=True`). This is useful if you're not preloading [e2e-predictor] the response's content immediately. You will need to call [e2e-predictor] ``r.release_conn()`` on the response ``r`` to return the connection [e2e-predictor] back into the pool. If None, it takes the value of ``preload_content`` [e2e-predictor] which defaults to ``True``. [e2e-predictor] [e2e-predictor] :param bool chunked: [e2e-predictor] If True, urllib3 will send the body using chunked transfer [e2e-predictor] encoding. Otherwise, urllib3 will send the body using the standard [e2e-predictor] content-length form. Defaults to False. [e2e-predictor] [e2e-predictor] :param int body_pos: [e2e-predictor] Position to seek to in file-like body in the event of a retry or [e2e-predictor] redirect. Typically this won't need to be set because urllib3 will [e2e-predictor] auto-populate the value when needed. 
[e2e-predictor] """ [e2e-predictor] parsed_url = parse_url(url) [e2e-predictor] destination_scheme = parsed_url.scheme [e2e-predictor] [e2e-predictor] if headers is None: [e2e-predictor] headers = self.headers [e2e-predictor] [e2e-predictor] if not isinstance(retries, Retry): [e2e-predictor] retries = Retry.from_int(retries, redirect=redirect, default=self.retries) [e2e-predictor] [e2e-predictor] if release_conn is None: [e2e-predictor] release_conn = preload_content [e2e-predictor] [e2e-predictor] # Check host [e2e-predictor] if assert_same_host and not self.is_same_host(url): [e2e-predictor] raise HostChangedError(self, url, retries) [e2e-predictor] [e2e-predictor] # Ensure that the URL we're connecting to is properly encoded [e2e-predictor] if url.startswith("/"): [e2e-predictor] url = to_str(_encode_target(url)) [e2e-predictor] else: [e2e-predictor] url = to_str(parsed_url.url) [e2e-predictor] [e2e-predictor] conn = None [e2e-predictor] [e2e-predictor] # Track whether `conn` needs to be released before [e2e-predictor] # returning/raising/recursing. Update this variable if necessary, and [e2e-predictor] # leave `release_conn` constant throughout the function. That way, if [e2e-predictor] # the function recurses, the original value of `release_conn` will be [e2e-predictor] # passed down into the recursive call, and its value will be respected. [e2e-predictor] # [e2e-predictor] # See issue #651 [1] for details. [e2e-predictor] # [e2e-predictor] # [1] [e2e-predictor] release_this_conn = release_conn [e2e-predictor] [e2e-predictor] http_tunnel_required = connection_requires_http_tunnel( [e2e-predictor] self.proxy, self.proxy_config, destination_scheme [e2e-predictor] ) [e2e-predictor] [e2e-predictor] # Merge the proxy headers. Only done when not using HTTP CONNECT. We [e2e-predictor] # have to copy the headers dict so we can safely change it without those [e2e-predictor] # changes being reflected in anyone else's copy. 
[e2e-predictor] if not http_tunnel_required: [e2e-predictor] headers = headers.copy() # type: ignore[attr-defined] [e2e-predictor] headers.update(self.proxy_headers) # type: ignore[union-attr] [e2e-predictor] [e2e-predictor] # Must keep the exception bound to a separate variable or else Python 3 [e2e-predictor] # complains about UnboundLocalError. [e2e-predictor] err = None [e2e-predictor] [e2e-predictor] # Keep track of whether we cleanly exited the except block. This [e2e-predictor] # ensures we do proper cleanup in finally. [e2e-predictor] clean_exit = False [e2e-predictor] [e2e-predictor] # Rewind body position, if needed. Record current position [e2e-predictor] # for future rewinds in the event of a redirect/retry. [e2e-predictor] body_pos = set_file_position(body, body_pos) [e2e-predictor] [e2e-predictor] try: [e2e-predictor] # Request a connection from the queue. [e2e-predictor] timeout_obj = self._get_timeout(timeout) [e2e-predictor] conn = self._get_conn(timeout=pool_timeout) [e2e-predictor] [e2e-predictor] conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] [e2e-predictor] [e2e-predictor] # Is this a closed/new connection that requires CONNECT tunnelling? [e2e-predictor] if self.proxy is not None and http_tunnel_required and conn.is_closed: [e2e-predictor] try: [e2e-predictor] self._prepare_proxy(conn) [e2e-predictor] except (BaseSSLError, OSError, SocketTimeout) as e: [e2e-predictor] self._raise_timeout( [e2e-predictor] err=e, url=self.proxy.url, timeout_value=conn.timeout [e2e-predictor] ) [e2e-predictor] raise [e2e-predictor] [e2e-predictor] # If we're going to release the connection in ``finally:``, then [e2e-predictor] # the response doesn't need to know about the connection. Otherwise [e2e-predictor] # it will also try to release it and we'll have a double-release [e2e-predictor] # mess. 
[e2e-predictor] response_conn = conn if not release_conn else None [e2e-predictor] [e2e-predictor] # Make the request on the HTTPConnection object [e2e-predictor] response = self._make_request( [e2e-predictor] conn, [e2e-predictor] method, [e2e-predictor] url, [e2e-predictor] timeout=timeout_obj, [e2e-predictor] body=body, [e2e-predictor] headers=headers, [e2e-predictor] chunked=chunked, [e2e-predictor] retries=retries, [e2e-predictor] response_conn=response_conn, [e2e-predictor] preload_content=preload_content, [e2e-predictor] decode_content=decode_content, [e2e-predictor] **response_kw, [e2e-predictor] ) [e2e-predictor] [e2e-predictor] # Everything went great! [e2e-predictor] clean_exit = True [e2e-predictor] [e2e-predictor] except EmptyPoolError: [e2e-predictor] # Didn't get a connection from the pool, no need to clean up [e2e-predictor] clean_exit = True [e2e-predictor] release_this_conn = False [e2e-predictor] raise [e2e-predictor] [e2e-predictor] except ( [e2e-predictor] TimeoutError, [e2e-predictor] HTTPException, [e2e-predictor] OSError, [e2e-predictor] ProtocolError, [e2e-predictor] BaseSSLError, [e2e-predictor] SSLError, [e2e-predictor] CertificateError, [e2e-predictor] ProxyError, [e2e-predictor] ) as e: [e2e-predictor] # Discard the connection for these exceptions. It will be [e2e-predictor] # replaced during the next _get_conn() call. 
[e2e-predictor] clean_exit = False [e2e-predictor] new_e: Exception = e [e2e-predictor] if isinstance(e, (BaseSSLError, CertificateError)): [e2e-predictor] new_e = SSLError(e) [e2e-predictor] if isinstance( [e2e-predictor] new_e, [e2e-predictor] ( [e2e-predictor] OSError, [e2e-predictor] NewConnectionError, [e2e-predictor] TimeoutError, [e2e-predictor] SSLError, [e2e-predictor] HTTPException, [e2e-predictor] ), [e2e-predictor] ) and (conn and conn.proxy and not conn.has_connected_to_proxy): [e2e-predictor] new_e = _wrap_proxy_error(new_e, conn.proxy.scheme) [e2e-predictor] elif isinstance(new_e, (OSError, HTTPException)): [e2e-predictor] new_e = ProtocolError("Connection aborted.", new_e) [e2e-predictor] [e2e-predictor] retries = retries.increment( [e2e-predictor] method, url, error=new_e, _pool=self, _stacktrace=sys.exc_info()[2] [e2e-predictor] ) [e2e-predictor] retries.sleep() [e2e-predictor] [e2e-predictor] # Keep track of the error for the retry warning. [e2e-predictor] err = e [e2e-predictor] [e2e-predictor] finally: [e2e-predictor] if not clean_exit: [e2e-predictor] # We hit some kind of exception, handled or otherwise. We need [e2e-predictor] # to throw the connection away unless explicitly told not to. [e2e-predictor] # Close the connection, set the variable to None, and make sure [e2e-predictor] # we put the None back in the pool to avoid leaking it. [e2e-predictor] if conn: [e2e-predictor] conn.close() [e2e-predictor] conn = None [e2e-predictor] release_this_conn = True [e2e-predictor] [e2e-predictor] if release_this_conn: [e2e-predictor] # Put the connection back to be reused. If the connection is [e2e-predictor] # expired then it will be None, which will get replaced with a [e2e-predictor] # fresh connection during _get_conn. 
[e2e-predictor] self._put_conn(conn) [e2e-predictor] [e2e-predictor] if not conn: [e2e-predictor] # Try again [e2e-predictor] log.warning( [e2e-predictor] "Retrying (%r) after connection broken by '%r': %s", retries, err, url [e2e-predictor] ) [e2e-predictor] > return self.urlopen( [e2e-predictor] method, [e2e-predictor] url, [e2e-predictor] body, [e2e-predictor] headers, [e2e-predictor] retries, [e2e-predictor] redirect, [e2e-predictor] assert_same_host, [e2e-predictor] timeout=timeout, [e2e-predictor] pool_timeout=pool_timeout, [e2e-predictor] release_conn=release_conn, [e2e-predictor] chunked=chunked, [e2e-predictor] body_pos=body_pos, [e2e-predictor] preload_content=preload_content, [e2e-predictor] decode_content=decode_content, [e2e-predictor] **response_kw, [e2e-predictor] ) [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:871: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] method = 'POST' [e2e-predictor] url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices' [e2e-predictor] body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io...memory": "256Mi"}}, "runtime": "kserve-paddleserver", "storageUri": "gs://kfserving-examples/models/paddle/resnet"}}}}' [e2e-predictor] headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'}) [e2e-predictor] retries = Retry(total=1, connect=None, read=None, redirect=None, status=None) [e2e-predictor] redirect = False, assert_same_host = False, timeout = None, pool_timeout = None [e2e-predictor] release_conn = True, chunked = False, body_pos = None, preload_content = True [e2e-predictor] decode_content = True, response_kw = {} [e2e-predictor] parsed_url = Url(scheme=None, 
auth=None, host=None, port=None, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices', query=None, fragment=None) [e2e-predictor] destination_scheme = None, conn = None, release_this_conn = True [e2e-predictor] http_tunnel_required = False [e2e-predictor] err = NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.c...a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)") [e2e-predictor] clean_exit = False [e2e-predictor] [e2e-predictor] def urlopen( # type: ignore[override] [e2e-predictor] self, [e2e-predictor] method: str, [e2e-predictor] url: str, [e2e-predictor] body: _TYPE_BODY | None = None, [e2e-predictor] headers: typing.Mapping[str, str] | None = None, [e2e-predictor] retries: Retry | bool | int | None = None, [e2e-predictor] redirect: bool = True, [e2e-predictor] assert_same_host: bool = True, [e2e-predictor] timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, [e2e-predictor] pool_timeout: int | None = None, [e2e-predictor] release_conn: bool | None = None, [e2e-predictor] chunked: bool = False, [e2e-predictor] body_pos: _TYPE_BODY_POSITION | None = None, [e2e-predictor] preload_content: bool = True, [e2e-predictor] decode_content: bool = True, [e2e-predictor] **response_kw: typing.Any, [e2e-predictor] ) -> BaseHTTPResponse: [e2e-predictor] """ [e2e-predictor] Get a connection from the pool and perform an HTTP request. This is the [e2e-predictor] lowest level call for making a request, so you'll need to specify all [e2e-predictor] the raw details. [e2e-predictor] [e2e-predictor] .. note:: [e2e-predictor] [e2e-predictor] More commonly, it's appropriate to use a convenience method [e2e-predictor] such as :meth:`request`. [e2e-predictor] [e2e-predictor] .. 
note:: [e2e-predictor] [e2e-predictor] `release_conn` will only behave as expected if [e2e-predictor] `preload_content=False` because we want to make [e2e-predictor] `preload_content=False` the default behaviour someday soon without [e2e-predictor] breaking backwards compatibility. [e2e-predictor] [e2e-predictor] :param method: [e2e-predictor] HTTP request method (such as GET, POST, PUT, etc.) [e2e-predictor] [e2e-predictor] :param url: [e2e-predictor] The URL to perform the request on. [e2e-predictor] [e2e-predictor] :param body: [e2e-predictor] Data to send in the request body, either :class:`str`, :class:`bytes`, [e2e-predictor] an iterable of :class:`str`/:class:`bytes`, or a file-like object. [e2e-predictor] [e2e-predictor] :param headers: [e2e-predictor] Dictionary of custom headers to send, such as User-Agent, [e2e-predictor] If-None-Match, etc. If None, pool headers are used. If provided, [e2e-predictor] these headers completely replace any pool-specific headers. [e2e-predictor] [e2e-predictor] :param retries: [e2e-predictor] Configure the number of retries to allow before raising a [e2e-predictor] :class:`~urllib3.exceptions.MaxRetryError` exception. [e2e-predictor] [e2e-predictor] If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a [e2e-predictor] :class:`~urllib3.util.retry.Retry` object for fine-grained control [e2e-predictor] over different types of retries. [e2e-predictor] Pass an integer number to retry connection errors that many times, [e2e-predictor] but no other types of errors. Pass zero to never retry. [e2e-predictor] [e2e-predictor] If ``False``, then retries are disabled and any exception is raised [e2e-predictor] immediately. Also, instead of raising a MaxRetryError on redirects, [e2e-predictor] the redirect response will be returned. [e2e-predictor] [e2e-predictor] :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 
[e2e-predictor] [e2e-predictor] :param redirect: [e2e-predictor] If True, automatically handle redirects (status codes 301, 302, [e2e-predictor] 303, 307, 308). Each redirect counts as a retry. Disabling retries [e2e-predictor] will disable redirect, too. [e2e-predictor] [e2e-predictor] :param assert_same_host: [e2e-predictor] If ``True``, will make sure that the host of the pool requests is [e2e-predictor] consistent else will raise HostChangedError. When ``False``, you can [e2e-predictor] use the pool on an HTTP proxy and request foreign hosts. [e2e-predictor] [e2e-predictor] :param timeout: [e2e-predictor] If specified, overrides the default timeout for this one [e2e-predictor] request. It may be a float (in seconds) or an instance of [e2e-predictor] :class:`urllib3.util.Timeout`. [e2e-predictor] [e2e-predictor] :param pool_timeout: [e2e-predictor] If set and the pool is set to block=True, then this method will [e2e-predictor] block for ``pool_timeout`` seconds and raise EmptyPoolError if no [e2e-predictor] connection is available within the time period. [e2e-predictor] [e2e-predictor] :param bool preload_content: [e2e-predictor] If True, the response's body will be preloaded into memory. [e2e-predictor] [e2e-predictor] :param bool decode_content: [e2e-predictor] If True, will attempt to decode the body based on the [e2e-predictor] 'content-encoding' header. [e2e-predictor] [e2e-predictor] :param release_conn: [e2e-predictor] If False, then the urlopen call will not release the connection [e2e-predictor] back into the pool once a response is received (but will release if [e2e-predictor] you read the entire contents of the response such as when [e2e-predictor] `preload_content=True`). This is useful if you're not preloading [e2e-predictor] the response's content immediately. You will need to call [e2e-predictor] ``r.release_conn()`` on the response ``r`` to return the connection [e2e-predictor] back into the pool. 
If None, it takes the value of ``preload_content`` [e2e-predictor] which defaults to ``True``. [e2e-predictor] [e2e-predictor] :param bool chunked: [e2e-predictor] If True, urllib3 will send the body using chunked transfer [e2e-predictor] encoding. Otherwise, urllib3 will send the body using the standard [e2e-predictor] content-length form. Defaults to False. [e2e-predictor] [e2e-predictor] :param int body_pos: [e2e-predictor] Position to seek to in file-like body in the event of a retry or [e2e-predictor] redirect. Typically this won't need to be set because urllib3 will [e2e-predictor] auto-populate the value when needed. [e2e-predictor] """ [e2e-predictor] parsed_url = parse_url(url) [e2e-predictor] destination_scheme = parsed_url.scheme [e2e-predictor] [e2e-predictor] if headers is None: [e2e-predictor] headers = self.headers [e2e-predictor] [e2e-predictor] if not isinstance(retries, Retry): [e2e-predictor] retries = Retry.from_int(retries, redirect=redirect, default=self.retries) [e2e-predictor] [e2e-predictor] if release_conn is None: [e2e-predictor] release_conn = preload_content [e2e-predictor] [e2e-predictor] # Check host [e2e-predictor] if assert_same_host and not self.is_same_host(url): [e2e-predictor] raise HostChangedError(self, url, retries) [e2e-predictor] [e2e-predictor] # Ensure that the URL we're connecting to is properly encoded [e2e-predictor] if url.startswith("/"): [e2e-predictor] url = to_str(_encode_target(url)) [e2e-predictor] else: [e2e-predictor] url = to_str(parsed_url.url) [e2e-predictor] [e2e-predictor] conn = None [e2e-predictor] [e2e-predictor] # Track whether `conn` needs to be released before [e2e-predictor] # returning/raising/recursing. Update this variable if necessary, and [e2e-predictor] # leave `release_conn` constant throughout the function. That way, if [e2e-predictor] # the function recurses, the original value of `release_conn` will be [e2e-predictor] # passed down into the recursive call, and its value will be respected. 
[e2e-predictor] # [e2e-predictor] # See issue #651 [1] for details. [e2e-predictor] # [e2e-predictor] # [1] [e2e-predictor] release_this_conn = release_conn [e2e-predictor] [e2e-predictor] http_tunnel_required = connection_requires_http_tunnel( [e2e-predictor] self.proxy, self.proxy_config, destination_scheme [e2e-predictor] ) [e2e-predictor] [e2e-predictor] # Merge the proxy headers. Only done when not using HTTP CONNECT. We [e2e-predictor] # have to copy the headers dict so we can safely change it without those [e2e-predictor] # changes being reflected in anyone else's copy. [e2e-predictor] if not http_tunnel_required: [e2e-predictor] headers = headers.copy() # type: ignore[attr-defined] [e2e-predictor] headers.update(self.proxy_headers) # type: ignore[union-attr] [e2e-predictor] [e2e-predictor] # Must keep the exception bound to a separate variable or else Python 3 [e2e-predictor] # complains about UnboundLocalError. [e2e-predictor] err = None [e2e-predictor] [e2e-predictor] # Keep track of whether we cleanly exited the except block. This [e2e-predictor] # ensures we do proper cleanup in finally. [e2e-predictor] clean_exit = False [e2e-predictor] [e2e-predictor] # Rewind body position, if needed. Record current position [e2e-predictor] # for future rewinds in the event of a redirect/retry. [e2e-predictor] body_pos = set_file_position(body, body_pos) [e2e-predictor] [e2e-predictor] try: [e2e-predictor] # Request a connection from the queue. [e2e-predictor] timeout_obj = self._get_timeout(timeout) [e2e-predictor] conn = self._get_conn(timeout=pool_timeout) [e2e-predictor] [e2e-predictor] conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] [e2e-predictor] [e2e-predictor] # Is this a closed/new connection that requires CONNECT tunnelling? 
[e2e-predictor] if self.proxy is not None and http_tunnel_required and conn.is_closed: [e2e-predictor] try: [e2e-predictor] self._prepare_proxy(conn) [e2e-predictor] except (BaseSSLError, OSError, SocketTimeout) as e: [e2e-predictor] self._raise_timeout( [e2e-predictor] err=e, url=self.proxy.url, timeout_value=conn.timeout [e2e-predictor] ) [e2e-predictor] raise [e2e-predictor] [e2e-predictor] # If we're going to release the connection in ``finally:``, then [e2e-predictor] # the response doesn't need to know about the connection. Otherwise [e2e-predictor] # it will also try to release it and we'll have a double-release [e2e-predictor] # mess. [e2e-predictor] response_conn = conn if not release_conn else None [e2e-predictor] [e2e-predictor] # Make the request on the HTTPConnection object [e2e-predictor] response = self._make_request( [e2e-predictor] conn, [e2e-predictor] method, [e2e-predictor] url, [e2e-predictor] timeout=timeout_obj, [e2e-predictor] body=body, [e2e-predictor] headers=headers, [e2e-predictor] chunked=chunked, [e2e-predictor] retries=retries, [e2e-predictor] response_conn=response_conn, [e2e-predictor] preload_content=preload_content, [e2e-predictor] decode_content=decode_content, [e2e-predictor] **response_kw, [e2e-predictor] ) [e2e-predictor] [e2e-predictor] # Everything went great! [e2e-predictor] clean_exit = True [e2e-predictor] [e2e-predictor] except EmptyPoolError: [e2e-predictor] # Didn't get a connection from the pool, no need to clean up [e2e-predictor] clean_exit = True [e2e-predictor] release_this_conn = False [e2e-predictor] raise [e2e-predictor] [e2e-predictor] except ( [e2e-predictor] TimeoutError, [e2e-predictor] HTTPException, [e2e-predictor] OSError, [e2e-predictor] ProtocolError, [e2e-predictor] BaseSSLError, [e2e-predictor] SSLError, [e2e-predictor] CertificateError, [e2e-predictor] ProxyError, [e2e-predictor] ) as e: [e2e-predictor] # Discard the connection for these exceptions. 
It will be [e2e-predictor] # replaced during the next _get_conn() call. [e2e-predictor] clean_exit = False [e2e-predictor] new_e: Exception = e [e2e-predictor] if isinstance(e, (BaseSSLError, CertificateError)): [e2e-predictor] new_e = SSLError(e) [e2e-predictor] if isinstance( [e2e-predictor] new_e, [e2e-predictor] ( [e2e-predictor] OSError, [e2e-predictor] NewConnectionError, [e2e-predictor] TimeoutError, [e2e-predictor] SSLError, [e2e-predictor] HTTPException, [e2e-predictor] ), [e2e-predictor] ) and (conn and conn.proxy and not conn.has_connected_to_proxy): [e2e-predictor] new_e = _wrap_proxy_error(new_e, conn.proxy.scheme) [e2e-predictor] elif isinstance(new_e, (OSError, HTTPException)): [e2e-predictor] new_e = ProtocolError("Connection aborted.", new_e) [e2e-predictor] [e2e-predictor] retries = retries.increment( [e2e-predictor] method, url, error=new_e, _pool=self, _stacktrace=sys.exc_info()[2] [e2e-predictor] ) [e2e-predictor] retries.sleep() [e2e-predictor] [e2e-predictor] # Keep track of the error for the retry warning. [e2e-predictor] err = e [e2e-predictor] [e2e-predictor] finally: [e2e-predictor] if not clean_exit: [e2e-predictor] # We hit some kind of exception, handled or otherwise. We need [e2e-predictor] # to throw the connection away unless explicitly told not to. [e2e-predictor] # Close the connection, set the variable to None, and make sure [e2e-predictor] # we put the None back in the pool to avoid leaking it. [e2e-predictor] if conn: [e2e-predictor] conn.close() [e2e-predictor] conn = None [e2e-predictor] release_this_conn = True [e2e-predictor] [e2e-predictor] if release_this_conn: [e2e-predictor] # Put the connection back to be reused. If the connection is [e2e-predictor] # expired then it will be None, which will get replaced with a [e2e-predictor] # fresh connection during _get_conn. 
[e2e-predictor] self._put_conn(conn) [e2e-predictor] [e2e-predictor] if not conn: [e2e-predictor] # Try again [e2e-predictor] log.warning( [e2e-predictor] "Retrying (%r) after connection broken by '%r': %s", retries, err, url [e2e-predictor] ) [e2e-predictor] > return self.urlopen( [e2e-predictor] method, [e2e-predictor] url, [e2e-predictor] body, [e2e-predictor] headers, [e2e-predictor] retries, [e2e-predictor] redirect, [e2e-predictor] assert_same_host, [e2e-predictor] timeout=timeout, [e2e-predictor] pool_timeout=pool_timeout, [e2e-predictor] release_conn=release_conn, [e2e-predictor] chunked=chunked, [e2e-predictor] body_pos=body_pos, [e2e-predictor] preload_content=preload_content, [e2e-predictor] decode_content=decode_content, [e2e-predictor] **response_kw, [e2e-predictor] ) [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:871: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] method = 'POST' [e2e-predictor] url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices' [e2e-predictor] body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io...memory": "256Mi"}}, "runtime": "kserve-paddleserver", "storageUri": "gs://kfserving-examples/models/paddle/resnet"}}}}' [e2e-predictor] headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'}) [e2e-predictor] retries = Retry(total=0, connect=None, read=None, redirect=None, status=None) [e2e-predictor] redirect = False, assert_same_host = False, timeout = None, pool_timeout = None [e2e-predictor] release_conn = True, chunked = False, body_pos = None, preload_content = True [e2e-predictor] decode_content = True, response_kw = {} [e2e-predictor] parsed_url = Url(scheme=None, 
auth=None, host=None, port=None, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices', query=None, fragment=None) [e2e-predictor] destination_scheme = None, conn = None, release_this_conn = True [e2e-predictor] http_tunnel_required = False [e2e-predictor] err = NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.c...a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)") [e2e-predictor] clean_exit = False [e2e-predictor] [e2e-predictor] def urlopen( # type: ignore[override] [e2e-predictor] self, [e2e-predictor] method: str, [e2e-predictor] url: str, [e2e-predictor] body: _TYPE_BODY | None = None, [e2e-predictor] headers: typing.Mapping[str, str] | None = None, [e2e-predictor] retries: Retry | bool | int | None = None, [e2e-predictor] redirect: bool = True, [e2e-predictor] assert_same_host: bool = True, [e2e-predictor] timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, [e2e-predictor] pool_timeout: int | None = None, [e2e-predictor] release_conn: bool | None = None, [e2e-predictor] chunked: bool = False, [e2e-predictor] body_pos: _TYPE_BODY_POSITION | None = None, [e2e-predictor] preload_content: bool = True, [e2e-predictor] decode_content: bool = True, [e2e-predictor] **response_kw: typing.Any, [e2e-predictor] ) -> BaseHTTPResponse: [e2e-predictor] """ [e2e-predictor] Get a connection from the pool and perform an HTTP request. This is the [e2e-predictor] lowest level call for making a request, so you'll need to specify all [e2e-predictor] the raw details. [e2e-predictor] [e2e-predictor] .. note:: [e2e-predictor] [e2e-predictor] More commonly, it's appropriate to use a convenience method [e2e-predictor] such as :meth:`request`. [e2e-predictor] [e2e-predictor] .. 
note:: [e2e-predictor] [e2e-predictor] `release_conn` will only behave as expected if [e2e-predictor] `preload_content=False` because we want to make [e2e-predictor] `preload_content=False` the default behaviour someday soon without [e2e-predictor] breaking backwards compatibility. [e2e-predictor] [e2e-predictor] :param method: [e2e-predictor] HTTP request method (such as GET, POST, PUT, etc.) [e2e-predictor] [e2e-predictor] :param url: [e2e-predictor] The URL to perform the request on. [e2e-predictor] [e2e-predictor] :param body: [e2e-predictor] Data to send in the request body, either :class:`str`, :class:`bytes`, [e2e-predictor] an iterable of :class:`str`/:class:`bytes`, or a file-like object. [e2e-predictor] [e2e-predictor] :param headers: [e2e-predictor] Dictionary of custom headers to send, such as User-Agent, [e2e-predictor] If-None-Match, etc. If None, pool headers are used. If provided, [e2e-predictor] these headers completely replace any pool-specific headers. [e2e-predictor] [e2e-predictor] :param retries: [e2e-predictor] Configure the number of retries to allow before raising a [e2e-predictor] :class:`~urllib3.exceptions.MaxRetryError` exception. [e2e-predictor] [e2e-predictor] If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a [e2e-predictor] :class:`~urllib3.util.retry.Retry` object for fine-grained control [e2e-predictor] over different types of retries. [e2e-predictor] Pass an integer number to retry connection errors that many times, [e2e-predictor] but no other types of errors. Pass zero to never retry. [e2e-predictor] [e2e-predictor] If ``False``, then retries are disabled and any exception is raised [e2e-predictor] immediately. Also, instead of raising a MaxRetryError on redirects, [e2e-predictor] the redirect response will be returned. [e2e-predictor] [e2e-predictor] :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 
[e2e-predictor] [e2e-predictor] :param redirect: [e2e-predictor] If True, automatically handle redirects (status codes 301, 302, [e2e-predictor] 303, 307, 308). Each redirect counts as a retry. Disabling retries [e2e-predictor] will disable redirect, too. [e2e-predictor] [e2e-predictor] :param assert_same_host: [e2e-predictor] If ``True``, will make sure that the host of the pool requests is [e2e-predictor] consistent else will raise HostChangedError. When ``False``, you can [e2e-predictor] use the pool on an HTTP proxy and request foreign hosts. [e2e-predictor] [e2e-predictor] :param timeout: [e2e-predictor] If specified, overrides the default timeout for this one [e2e-predictor] request. It may be a float (in seconds) or an instance of [e2e-predictor] :class:`urllib3.util.Timeout`. [e2e-predictor] [e2e-predictor] :param pool_timeout: [e2e-predictor] If set and the pool is set to block=True, then this method will [e2e-predictor] block for ``pool_timeout`` seconds and raise EmptyPoolError if no [e2e-predictor] connection is available within the time period. [e2e-predictor] [e2e-predictor] :param bool preload_content: [e2e-predictor] If True, the response's body will be preloaded into memory. [e2e-predictor] [e2e-predictor] :param bool decode_content: [e2e-predictor] If True, will attempt to decode the body based on the [e2e-predictor] 'content-encoding' header. [e2e-predictor] [e2e-predictor] :param release_conn: [e2e-predictor] If False, then the urlopen call will not release the connection [e2e-predictor] back into the pool once a response is received (but will release if [e2e-predictor] you read the entire contents of the response such as when [e2e-predictor] `preload_content=True`). This is useful if you're not preloading [e2e-predictor] the response's content immediately. You will need to call [e2e-predictor] ``r.release_conn()`` on the response ``r`` to return the connection [e2e-predictor] back into the pool. 
If None, it takes the value of ``preload_content`` [e2e-predictor] which defaults to ``True``. [e2e-predictor] [e2e-predictor] :param bool chunked: [e2e-predictor] If True, urllib3 will send the body using chunked transfer [e2e-predictor] encoding. Otherwise, urllib3 will send the body using the standard [e2e-predictor] content-length form. Defaults to False. [e2e-predictor] [e2e-predictor] :param int body_pos: [e2e-predictor] Position to seek to in file-like body in the event of a retry or [e2e-predictor] redirect. Typically this won't need to be set because urllib3 will [e2e-predictor] auto-populate the value when needed. [e2e-predictor] """ [e2e-predictor] parsed_url = parse_url(url) [e2e-predictor] destination_scheme = parsed_url.scheme [e2e-predictor] [e2e-predictor] if headers is None: [e2e-predictor] headers = self.headers [e2e-predictor] [e2e-predictor] if not isinstance(retries, Retry): [e2e-predictor] retries = Retry.from_int(retries, redirect=redirect, default=self.retries) [e2e-predictor] [e2e-predictor] if release_conn is None: [e2e-predictor] release_conn = preload_content [e2e-predictor] [e2e-predictor] # Check host [e2e-predictor] if assert_same_host and not self.is_same_host(url): [e2e-predictor] raise HostChangedError(self, url, retries) [e2e-predictor] [e2e-predictor] # Ensure that the URL we're connecting to is properly encoded [e2e-predictor] if url.startswith("/"): [e2e-predictor] url = to_str(_encode_target(url)) [e2e-predictor] else: [e2e-predictor] url = to_str(parsed_url.url) [e2e-predictor] [e2e-predictor] conn = None [e2e-predictor] [e2e-predictor] # Track whether `conn` needs to be released before [e2e-predictor] # returning/raising/recursing. Update this variable if necessary, and [e2e-predictor] # leave `release_conn` constant throughout the function. That way, if [e2e-predictor] # the function recurses, the original value of `release_conn` will be [e2e-predictor] # passed down into the recursive call, and its value will be respected. 
[e2e-predictor] # [e2e-predictor] # See issue #651 [1] for details. [e2e-predictor] # [e2e-predictor] # [1] [e2e-predictor] release_this_conn = release_conn [e2e-predictor] [e2e-predictor] http_tunnel_required = connection_requires_http_tunnel( [e2e-predictor] self.proxy, self.proxy_config, destination_scheme [e2e-predictor] ) [e2e-predictor] [e2e-predictor] # Merge the proxy headers. Only done when not using HTTP CONNECT. We [e2e-predictor] # have to copy the headers dict so we can safely change it without those [e2e-predictor] # changes being reflected in anyone else's copy. [e2e-predictor] if not http_tunnel_required: [e2e-predictor] headers = headers.copy() # type: ignore[attr-defined] [e2e-predictor] headers.update(self.proxy_headers) # type: ignore[union-attr] [e2e-predictor] [e2e-predictor] # Must keep the exception bound to a separate variable or else Python 3 [e2e-predictor] # complains about UnboundLocalError. [e2e-predictor] err = None [e2e-predictor] [e2e-predictor] # Keep track of whether we cleanly exited the except block. This [e2e-predictor] # ensures we do proper cleanup in finally. [e2e-predictor] clean_exit = False [e2e-predictor] [e2e-predictor] # Rewind body position, if needed. Record current position [e2e-predictor] # for future rewinds in the event of a redirect/retry. [e2e-predictor] body_pos = set_file_position(body, body_pos) [e2e-predictor] [e2e-predictor] try: [e2e-predictor] # Request a connection from the queue. [e2e-predictor] timeout_obj = self._get_timeout(timeout) [e2e-predictor] conn = self._get_conn(timeout=pool_timeout) [e2e-predictor] [e2e-predictor] conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] [e2e-predictor] [e2e-predictor] # Is this a closed/new connection that requires CONNECT tunnelling? 
[e2e-predictor] if self.proxy is not None and http_tunnel_required and conn.is_closed: [e2e-predictor] try: [e2e-predictor] self._prepare_proxy(conn) [e2e-predictor] except (BaseSSLError, OSError, SocketTimeout) as e: [e2e-predictor] self._raise_timeout( [e2e-predictor] err=e, url=self.proxy.url, timeout_value=conn.timeout [e2e-predictor] ) [e2e-predictor] raise [e2e-predictor] [e2e-predictor] # If we're going to release the connection in ``finally:``, then [e2e-predictor] # the response doesn't need to know about the connection. Otherwise [e2e-predictor] # it will also try to release it and we'll have a double-release [e2e-predictor] # mess. [e2e-predictor] response_conn = conn if not release_conn else None [e2e-predictor] [e2e-predictor] # Make the request on the HTTPConnection object [e2e-predictor] response = self._make_request( [e2e-predictor] conn, [e2e-predictor] method, [e2e-predictor] url, [e2e-predictor] timeout=timeout_obj, [e2e-predictor] body=body, [e2e-predictor] headers=headers, [e2e-predictor] chunked=chunked, [e2e-predictor] retries=retries, [e2e-predictor] response_conn=response_conn, [e2e-predictor] preload_content=preload_content, [e2e-predictor] decode_content=decode_content, [e2e-predictor] **response_kw, [e2e-predictor] ) [e2e-predictor] [e2e-predictor] # Everything went great! [e2e-predictor] clean_exit = True [e2e-predictor] [e2e-predictor] except EmptyPoolError: [e2e-predictor] # Didn't get a connection from the pool, no need to clean up [e2e-predictor] clean_exit = True [e2e-predictor] release_this_conn = False [e2e-predictor] raise [e2e-predictor] [e2e-predictor] except ( [e2e-predictor] TimeoutError, [e2e-predictor] HTTPException, [e2e-predictor] OSError, [e2e-predictor] ProtocolError, [e2e-predictor] BaseSSLError, [e2e-predictor] SSLError, [e2e-predictor] CertificateError, [e2e-predictor] ProxyError, [e2e-predictor] ) as e: [e2e-predictor] # Discard the connection for these exceptions. 
It will be [e2e-predictor] # replaced during the next _get_conn() call. [e2e-predictor] clean_exit = False [e2e-predictor] new_e: Exception = e [e2e-predictor] if isinstance(e, (BaseSSLError, CertificateError)): [e2e-predictor] new_e = SSLError(e) [e2e-predictor] if isinstance( [e2e-predictor] new_e, [e2e-predictor] ( [e2e-predictor] OSError, [e2e-predictor] NewConnectionError, [e2e-predictor] TimeoutError, [e2e-predictor] SSLError, [e2e-predictor] HTTPException, [e2e-predictor] ), [e2e-predictor] ) and (conn and conn.proxy and not conn.has_connected_to_proxy): [e2e-predictor] new_e = _wrap_proxy_error(new_e, conn.proxy.scheme) [e2e-predictor] elif isinstance(new_e, (OSError, HTTPException)): [e2e-predictor] new_e = ProtocolError("Connection aborted.", new_e) [e2e-predictor] [e2e-predictor] retries = retries.increment( [e2e-predictor] method, url, error=new_e, _pool=self, _stacktrace=sys.exc_info()[2] [e2e-predictor] ) [e2e-predictor] retries.sleep() [e2e-predictor] [e2e-predictor] # Keep track of the error for the retry warning. [e2e-predictor] err = e [e2e-predictor] [e2e-predictor] finally: [e2e-predictor] if not clean_exit: [e2e-predictor] # We hit some kind of exception, handled or otherwise. We need [e2e-predictor] # to throw the connection away unless explicitly told not to. [e2e-predictor] # Close the connection, set the variable to None, and make sure [e2e-predictor] # we put the None back in the pool to avoid leaking it. [e2e-predictor] if conn: [e2e-predictor] conn.close() [e2e-predictor] conn = None [e2e-predictor] release_this_conn = True [e2e-predictor] [e2e-predictor] if release_this_conn: [e2e-predictor] # Put the connection back to be reused. If the connection is [e2e-predictor] # expired then it will be None, which will get replaced with a [e2e-predictor] # fresh connection during _get_conn. 
[e2e-predictor] self._put_conn(conn) [e2e-predictor] [e2e-predictor] if not conn: [e2e-predictor] # Try again [e2e-predictor] log.warning( [e2e-predictor] "Retrying (%r) after connection broken by '%r': %s", retries, err, url [e2e-predictor] ) [e2e-predictor] > return self.urlopen( [e2e-predictor] method, [e2e-predictor] url, [e2e-predictor] body, [e2e-predictor] headers, [e2e-predictor] retries, [e2e-predictor] redirect, [e2e-predictor] assert_same_host, [e2e-predictor] timeout=timeout, [e2e-predictor] pool_timeout=pool_timeout, [e2e-predictor] release_conn=release_conn, [e2e-predictor] chunked=chunked, [e2e-predictor] body_pos=body_pos, [e2e-predictor] preload_content=preload_content, [e2e-predictor] decode_content=decode_content, [e2e-predictor] **response_kw, [e2e-predictor] ) [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:871: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] method = 'POST' [e2e-predictor] url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices' [e2e-predictor] body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io...memory": "256Mi"}}, "runtime": "kserve-paddleserver", "storageUri": "gs://kfserving-examples/models/paddle/resnet"}}}}' [e2e-predictor] headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'}) [e2e-predictor] retries = Retry(total=0, connect=None, read=None, redirect=None, status=None) [e2e-predictor] redirect = False, assert_same_host = False, timeout = None, pool_timeout = None [e2e-predictor] release_conn = True, chunked = False, body_pos = None, preload_content = True [e2e-predictor] decode_content = True, response_kw = {} [e2e-predictor] parsed_url = Url(scheme=None, 
auth=None, host=None, port=None, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices', query=None, fragment=None)
destination_scheme = None, conn = None, release_this_conn = True
http_tunnel_required = False, err = None, clean_exit = False

    def urlopen(  # type: ignore[override]
        self,
        method: str,
        url: str,
        body: _TYPE_BODY | None = None,
        headers: typing.Mapping[str, str] | None = None,
        retries: Retry | bool | int | None = None,
        redirect: bool = True,
        assert_same_host: bool = True,
        timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
        pool_timeout: int | None = None,
        release_conn: bool | None = None,
        chunked: bool = False,
        body_pos: _TYPE_BODY_POSITION | None = None,
        preload_content: bool = True,
        decode_content: bool = True,
        **response_kw: typing.Any,
    ) -> BaseHTTPResponse:
        """
        Get a connection from the pool and perform an HTTP request. This is the
        lowest level call for making a request, so you'll need to specify all
        the raw details.

        .. note::

           More commonly, it's appropriate to use a convenience method
           such as :meth:`request`.

        .. note::

           `release_conn` will only behave as expected if
           `preload_content=False` because we want to make
           `preload_content=False` the default behaviour someday soon without
           breaking backwards compatibility.

        :param method:
            HTTP request method (such as GET, POST, PUT, etc.)

        :param url:
            The URL to perform the request on.

        :param body:
            Data to send in the request body, either :class:`str`, :class:`bytes`,
            an iterable of :class:`str`/:class:`bytes`, or a file-like object.

        :param headers:
            Dictionary of custom headers to send, such as User-Agent,
            If-None-Match, etc. If None, pool headers are used. If provided,
            these headers completely replace any pool-specific headers.

        :param retries:
            Configure the number of retries to allow before raising a
            :class:`~urllib3.exceptions.MaxRetryError` exception.

            If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a
            :class:`~urllib3.util.retry.Retry` object for fine-grained control
            over different types of retries.
            Pass an integer number to retry connection errors that many times,
            but no other types of errors. Pass zero to never retry.

            If ``False``, then retries are disabled and any exception is raised
            immediately. Also, instead of raising a MaxRetryError on redirects,
            the redirect response will be returned.

        :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.

        :param redirect:
            If True, automatically handle redirects (status codes 301, 302,
            303, 307, 308). Each redirect counts as a retry. Disabling retries
            will disable redirect, too.

        :param assert_same_host:
            If ``True``, will make sure that the host of the pool requests is
            consistent else will raise HostChangedError. When ``False``, you can
            use the pool on an HTTP proxy and request foreign hosts.

        :param timeout:
            If specified, overrides the default timeout for this one
            request. It may be a float (in seconds) or an instance of
            :class:`urllib3.util.Timeout`.

        :param pool_timeout:
            If set and the pool is set to block=True, then this method will
            block for ``pool_timeout`` seconds and raise EmptyPoolError if no
            connection is available within the time period.

        :param bool preload_content:
            If True, the response's body will be preloaded into memory.

        :param bool decode_content:
            If True, will attempt to decode the body based on the
            'content-encoding' header.

        :param release_conn:
            If False, then the urlopen call will not release the connection
            back into the pool once a response is received (but will release if
            you read the entire contents of the response such as when
            `preload_content=True`). This is useful if you're not preloading
            the response's content immediately. You will need to call
            ``r.release_conn()`` on the response ``r`` to return the connection
            back into the pool. If None, it takes the value of ``preload_content``
            which defaults to ``True``.

        :param bool chunked:
            If True, urllib3 will send the body using chunked transfer
            encoding. Otherwise, urllib3 will send the body using the standard
            content-length form. Defaults to False.

        :param int body_pos:
            Position to seek to in file-like body in the event of a retry or
            redirect. Typically this won't need to be set because urllib3 will
            auto-populate the value when needed.
        """
        parsed_url = parse_url(url)
        destination_scheme = parsed_url.scheme

        if headers is None:
            headers = self.headers

        if not isinstance(retries, Retry):
            retries = Retry.from_int(retries, redirect=redirect, default=self.retries)

        if release_conn is None:
            release_conn = preload_content

        # Check host
        if assert_same_host and not self.is_same_host(url):
            raise HostChangedError(self, url, retries)

        # Ensure that the URL we're connecting to is properly encoded
        if url.startswith("/"):
            url = to_str(_encode_target(url))
        else:
            url = to_str(parsed_url.url)

        conn = None

        # Track whether `conn` needs to be released before
        # returning/raising/recursing. Update this variable if necessary, and
        # leave `release_conn` constant throughout the function. That way, if
        # the function recurses, the original value of `release_conn` will be
        # passed down into the recursive call, and its value will be respected.
        #
        # See issue #651 [1] for details.
        #
        # [1]
        release_this_conn = release_conn

        http_tunnel_required = connection_requires_http_tunnel(
            self.proxy, self.proxy_config, destination_scheme
        )

        # Merge the proxy headers. Only done when not using HTTP CONNECT. We
        # have to copy the headers dict so we can safely change it without those
        # changes being reflected in anyone else's copy.
        if not http_tunnel_required:
            headers = headers.copy()  # type: ignore[attr-defined]
            headers.update(self.proxy_headers)  # type: ignore[union-attr]

        # Must keep the exception bound to a separate variable or else Python 3
        # complains about UnboundLocalError.
        err = None

        # Keep track of whether we cleanly exited the except block. This
        # ensures we do proper cleanup in finally.
        clean_exit = False

        # Rewind body position, if needed. Record current position
        # for future rewinds in the event of a redirect/retry.
        body_pos = set_file_position(body, body_pos)

        try:
            # Request a connection from the queue.
            timeout_obj = self._get_timeout(timeout)
            conn = self._get_conn(timeout=pool_timeout)

            conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]

            # Is this a closed/new connection that requires CONNECT tunnelling?
            if self.proxy is not None and http_tunnel_required and conn.is_closed:
                try:
                    self._prepare_proxy(conn)
                except (BaseSSLError, OSError, SocketTimeout) as e:
                    self._raise_timeout(
                        err=e, url=self.proxy.url, timeout_value=conn.timeout
                    )
                    raise

            # If we're going to release the connection in ``finally:``, then
            # the response doesn't need to know about the connection. Otherwise
            # it will also try to release it and we'll have a double-release
            # mess.
            response_conn = conn if not release_conn else None

            # Make the request on the HTTPConnection object
            response = self._make_request(
                conn,
                method,
                url,
                timeout=timeout_obj,
                body=body,
                headers=headers,
                chunked=chunked,
                retries=retries,
                response_conn=response_conn,
                preload_content=preload_content,
                decode_content=decode_content,
                **response_kw,
            )

            # Everything went great!
            clean_exit = True

        except EmptyPoolError:
            # Didn't get a connection from the pool, no need to clean up
            clean_exit = True
            release_this_conn = False
            raise

        except (
            TimeoutError,
            HTTPException,
            OSError,
            ProtocolError,
            BaseSSLError,
            SSLError,
            CertificateError,
            ProxyError,
        ) as e:
            # Discard the connection for these exceptions.
It will be
            # replaced during the next _get_conn() call.
            clean_exit = False
            new_e: Exception = e
            if isinstance(e, (BaseSSLError, CertificateError)):
                new_e = SSLError(e)
            if isinstance(
                new_e,
                (
                    OSError,
                    NewConnectionError,
                    TimeoutError,
                    SSLError,
                    HTTPException,
                ),
            ) and (conn and conn.proxy and not conn.has_connected_to_proxy):
                new_e = _wrap_proxy_error(new_e, conn.proxy.scheme)
            elif isinstance(new_e, (OSError, HTTPException)):
                new_e = ProtocolError("Connection aborted.", new_e)

>           retries = retries.increment(
                method, url, error=new_e, _pool=self, _stacktrace=sys.exc_info()[2]
            )

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:841:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = Retry(total=0, connect=None, read=None, redirect=None, status=None)
method = 'POST'
url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
response = None
error = NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.c...a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")
_pool = <urllib3.connectionpool.HTTPSConnectionPool object at ...>
_stacktrace = <traceback object at ...>

    def increment(
        self,
        method: str | None = None,
        url: str | None = None,
        response: BaseHTTPResponse | None = None,
        error: Exception | None = None,
        _pool: ConnectionPool | None = None,
        _stacktrace: TracebackType | None = None,
    ) -> Self:
        """Return a new Retry object with incremented retry counters.

        :param response: A response object, or None, if the server did not
            return a response.
        :type response: :class:`~urllib3.response.BaseHTTPResponse`
        :param Exception error: An error encountered during the request, or
            None if the response was received successfully.

        :return: A new ``Retry`` object.
        """
        if self.total is False and error:
            # Disabled, indicate to re-raise the error.
            raise reraise(type(error), error, _stacktrace)

        total = self.total
        if total is not None:
            total -= 1

        connect = self.connect
        read = self.read
        redirect = self.redirect
        status_count = self.status
        other = self.other
        cause = "unknown"
        status = None
        redirect_location = None

        if error and self._is_connection_error(error):
            # Connect retry?
            if connect is False:
                raise reraise(type(error), error, _stacktrace)
            elif connect is not None:
                connect -= 1

        elif error and self._is_read_error(error):
            # Read retry?
            if read is False or method is None or not self._is_method_retryable(method):
                raise reraise(type(error), error, _stacktrace)
            elif read is not None:
                read -= 1

        elif error:
            # Other retry?
            if other is not None:
                other -= 1

        elif response and response.get_redirect_location():
            # Redirect retry?
            if redirect is not None:
                redirect -= 1
            cause = "too many redirects"
            response_redirect_location = response.get_redirect_location()
            if response_redirect_location:
                redirect_location = response_redirect_location
            status = response.status

        else:
            # Incrementing because of a server error like a 500 in
            # status_forcelist and the given method is in the allowed_methods
            cause = ResponseError.GENERIC_ERROR
            if response and response.status:
                if status_count is not None:
                    status_count -= 1
                cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
                status = response.status

        history = self.history + (
            RequestHistory(method, url, error, status, redirect_location),
        )

        new_retry = self.new(
            total=total,
            connect=connect,
            read=read,
            redirect=redirect,
            status=status_count,
            other=other,
            history=history,
        )

        if new_retry.is_exhausted():
            reason = error or ResponseError(cause)
>           raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
E           urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Max retries exceeded with url: /apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices (Caused by NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)"))

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError
------------------------------ Captured log call -------------------------------
WARNING  urllib3.connectionpool:connectionpool.py:868 Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")': /apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices
WARNING  urllib3.connectionpool:connectionpool.py:868 Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")': /apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices
WARNING  urllib3.connectionpool:connectionpool.py:868 Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or
service not known)")': /apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices
_______________________________ test_pmml_kserve _______________________________
[gw0] linux -- Python 3.11.13 /workspace/source/python/kserve/.venv/bin/python

self = <urllib3.connection.HTTPSConnection object at ...>

    def _new_conn(self) -> socket.socket:
        """Establish a socket connection and set nodelay settings on it.

        :return: New socket connection.
        """
        try:
>           sock = connection.create_connection(
                (self._dns_host, self.port),
                self.timeout,
                source_address=self.source_address,
                socket_options=self.socket_options,
            )

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connection.py:204:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

address = ('a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', 6443)
timeout = None, source_address = None, socket_options = [(6, 1, 1)]

    def create_connection(
        address: tuple[str, int],
        timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
        source_address: tuple[str, int] | None = None,
        socket_options: _TYPE_SOCKET_OPTIONS | None = None,
    ) -> socket.socket:
        """Connect to *address* and return the socket object.

        Convenience function. Connect to *address* (a 2-tuple ``(host,
        port)``) and return the socket object. Passing the optional
        *timeout* parameter will set the timeout on the socket instance
        before attempting to connect. If no *timeout* is supplied, the
        global default timeout setting returned by :func:`socket.getdefaulttimeout`
        is used. If *source_address* is set it must be a tuple of (host, port)
        for the socket to bind as a source address before making the connection.
        An host of '' or port 0 tells the OS to use the default.
        """

        host, port = address
        if host.startswith("["):
            host = host.strip("[]")
        err = None

        # Using the value from allowed_gai_family() in the context of getaddrinfo lets
        # us select whether to work with IPv4 DNS records, IPv6 records, or both.
        # The original create_connection function always returns all records.
        family = allowed_gai_family()

        try:
            host.encode("idna")
        except UnicodeError:
            raise LocationParseError(f"'{host}', label empty or too long") from None

>       for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/util/connection.py:60:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

host = 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com'
port = 6443, family = <...>
type = <SocketKind.SOCK_STREAM: 1>, proto = 0, flags = 0

    def getaddrinfo(host, port, family=0, type=0, proto=0, flags=0):
        """Resolve host and port into list of address info entries.

        Translate the host/port argument into a sequence of 5-tuples that contain
        all the necessary arguments for creating a socket connected to that service.
        host is a domain name, a string representation of an IPv4/v6 address or
        None. port is a string service name such as 'http', a numeric port number or
        None. By passing None as the value of host and port, you can pass NULL to
        the underlying C API.

        The family, type and proto arguments can be optionally specified in order to
        narrow the list of addresses returned. Passing zero as a value for each of
        these arguments selects the full range of results.
        """
        # We override this function since we want to translate the numeric family
        # and socket type values to enum constants.
        addrlist = []
>       for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
E       socket.gaierror: [Errno -2] Name or service not known

/usr/lib64/python3.11/socket.py:974: gaierror

The above exception was the direct cause of the following exception:

self = <urllib3.connectionpool.HTTPSConnectionPool object at ...>
method = 'POST'
url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io...ry": "512Mi"}, "requests": {"cpu": "10m", "memory": "256Mi"}}, "storageUri": "gs://kfserving-examples/models/pmml"}}}}'
headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'})
retries = Retry(total=0, connect=None, read=None, redirect=None, status=None)
redirect = False, assert_same_host = False, timeout = None, pool_timeout = None
release_conn = True, chunked = False, body_pos = None, preload_content = True
[e2e-predictor] decode_content = True, response_kw = {} [e2e-predictor] parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices', query=None, fragment=None) [e2e-predictor] destination_scheme = None, conn = None, release_this_conn = True [e2e-predictor] http_tunnel_required = False, err = None, clean_exit = False [e2e-predictor] [e2e-predictor] def urlopen( # type: ignore[override] [e2e-predictor] self, [e2e-predictor] method: str, [e2e-predictor] url: str, [e2e-predictor] body: _TYPE_BODY | None = None, [e2e-predictor] headers: typing.Mapping[str, str] | None = None, [e2e-predictor] retries: Retry | bool | int | None = None, [e2e-predictor] redirect: bool = True, [e2e-predictor] assert_same_host: bool = True, [e2e-predictor] timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, [e2e-predictor] pool_timeout: int | None = None, [e2e-predictor] release_conn: bool | None = None, [e2e-predictor] chunked: bool = False, [e2e-predictor] body_pos: _TYPE_BODY_POSITION | None = None, [e2e-predictor] preload_content: bool = True, [e2e-predictor] decode_content: bool = True, [e2e-predictor] **response_kw: typing.Any, [e2e-predictor] ) -> BaseHTTPResponse: [e2e-predictor] """ [e2e-predictor] Get a connection from the pool and perform an HTTP request. This is the [e2e-predictor] lowest level call for making a request, so you'll need to specify all [e2e-predictor] the raw details. [e2e-predictor] [e2e-predictor] .. note:: [e2e-predictor] [e2e-predictor] More commonly, it's appropriate to use a convenience method [e2e-predictor] such as :meth:`request`. [e2e-predictor] [e2e-predictor] .. note:: [e2e-predictor] [e2e-predictor] `release_conn` will only behave as expected if [e2e-predictor] `preload_content=False` because we want to make [e2e-predictor] `preload_content=False` the default behaviour someday soon without [e2e-predictor] breaking backwards compatibility. 
[e2e-predictor] [e2e-predictor] :param method: [e2e-predictor] HTTP request method (such as GET, POST, PUT, etc.) [e2e-predictor] [e2e-predictor] :param url: [e2e-predictor] The URL to perform the request on. [e2e-predictor] [e2e-predictor] :param body: [e2e-predictor] Data to send in the request body, either :class:`str`, :class:`bytes`, [e2e-predictor] an iterable of :class:`str`/:class:`bytes`, or a file-like object. [e2e-predictor] [e2e-predictor] :param headers: [e2e-predictor] Dictionary of custom headers to send, such as User-Agent, [e2e-predictor] If-None-Match, etc. If None, pool headers are used. If provided, [e2e-predictor] these headers completely replace any pool-specific headers. [e2e-predictor] [e2e-predictor] :param retries: [e2e-predictor] Configure the number of retries to allow before raising a [e2e-predictor] :class:`~urllib3.exceptions.MaxRetryError` exception. [e2e-predictor] [e2e-predictor] If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a [e2e-predictor] :class:`~urllib3.util.retry.Retry` object for fine-grained control [e2e-predictor] over different types of retries. [e2e-predictor] Pass an integer number to retry connection errors that many times, [e2e-predictor] but no other types of errors. Pass zero to never retry. [e2e-predictor] [e2e-predictor] If ``False``, then retries are disabled and any exception is raised [e2e-predictor] immediately. Also, instead of raising a MaxRetryError on redirects, [e2e-predictor] the redirect response will be returned. [e2e-predictor] [e2e-predictor] :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. [e2e-predictor] [e2e-predictor] :param redirect: [e2e-predictor] If True, automatically handle redirects (status codes 301, 302, [e2e-predictor] 303, 307, 308). Each redirect counts as a retry. Disabling retries [e2e-predictor] will disable redirect, too. 
[e2e-predictor] [e2e-predictor] :param assert_same_host: [e2e-predictor] If ``True``, will make sure that the host of the pool requests is [e2e-predictor] consistent else will raise HostChangedError. When ``False``, you can [e2e-predictor] use the pool on an HTTP proxy and request foreign hosts. [e2e-predictor] [e2e-predictor] :param timeout: [e2e-predictor] If specified, overrides the default timeout for this one [e2e-predictor] request. It may be a float (in seconds) or an instance of [e2e-predictor] :class:`urllib3.util.Timeout`. [e2e-predictor] [e2e-predictor] :param pool_timeout: [e2e-predictor] If set and the pool is set to block=True, then this method will [e2e-predictor] block for ``pool_timeout`` seconds and raise EmptyPoolError if no [e2e-predictor] connection is available within the time period. [e2e-predictor] [e2e-predictor] :param bool preload_content: [e2e-predictor] If True, the response's body will be preloaded into memory. [e2e-predictor] [e2e-predictor] :param bool decode_content: [e2e-predictor] If True, will attempt to decode the body based on the [e2e-predictor] 'content-encoding' header. [e2e-predictor] [e2e-predictor] :param release_conn: [e2e-predictor] If False, then the urlopen call will not release the connection [e2e-predictor] back into the pool once a response is received (but will release if [e2e-predictor] you read the entire contents of the response such as when [e2e-predictor] `preload_content=True`). This is useful if you're not preloading [e2e-predictor] the response's content immediately. You will need to call [e2e-predictor] ``r.release_conn()`` on the response ``r`` to return the connection [e2e-predictor] back into the pool. If None, it takes the value of ``preload_content`` [e2e-predictor] which defaults to ``True``. [e2e-predictor] [e2e-predictor] :param bool chunked: [e2e-predictor] If True, urllib3 will send the body using chunked transfer [e2e-predictor] encoding. 
Otherwise, urllib3 will send the body using the standard [e2e-predictor] content-length form. Defaults to False. [e2e-predictor] [e2e-predictor] :param int body_pos: [e2e-predictor] Position to seek to in file-like body in the event of a retry or [e2e-predictor] redirect. Typically this won't need to be set because urllib3 will [e2e-predictor] auto-populate the value when needed. [e2e-predictor] """ [e2e-predictor] parsed_url = parse_url(url) [e2e-predictor] destination_scheme = parsed_url.scheme [e2e-predictor] [e2e-predictor] if headers is None: [e2e-predictor] headers = self.headers [e2e-predictor] [e2e-predictor] if not isinstance(retries, Retry): [e2e-predictor] retries = Retry.from_int(retries, redirect=redirect, default=self.retries) [e2e-predictor] [e2e-predictor] if release_conn is None: [e2e-predictor] release_conn = preload_content [e2e-predictor] [e2e-predictor] # Check host [e2e-predictor] if assert_same_host and not self.is_same_host(url): [e2e-predictor] raise HostChangedError(self, url, retries) [e2e-predictor] [e2e-predictor] # Ensure that the URL we're connecting to is properly encoded [e2e-predictor] if url.startswith("/"): [e2e-predictor] url = to_str(_encode_target(url)) [e2e-predictor] else: [e2e-predictor] url = to_str(parsed_url.url) [e2e-predictor] [e2e-predictor] conn = None [e2e-predictor] [e2e-predictor] # Track whether `conn` needs to be released before [e2e-predictor] # returning/raising/recursing. Update this variable if necessary, and [e2e-predictor] # leave `release_conn` constant throughout the function. That way, if [e2e-predictor] # the function recurses, the original value of `release_conn` will be [e2e-predictor] # passed down into the recursive call, and its value will be respected. [e2e-predictor] # [e2e-predictor] # See issue #651 [1] for details. 
        #
        # [1]
        release_this_conn = release_conn

        http_tunnel_required = connection_requires_http_tunnel(
            self.proxy, self.proxy_config, destination_scheme
        )

        # Merge the proxy headers. Only done when not using HTTP CONNECT. We
        # have to copy the headers dict so we can safely change it without those
        # changes being reflected in anyone else's copy.
        if not http_tunnel_required:
            headers = headers.copy()  # type: ignore[attr-defined]
            headers.update(self.proxy_headers)  # type: ignore[union-attr]

        # Must keep the exception bound to a separate variable or else Python 3
        # complains about UnboundLocalError.
        err = None

        # Keep track of whether we cleanly exited the except block. This
        # ensures we do proper cleanup in finally.
        clean_exit = False

        # Rewind body position, if needed. Record current position
        # for future rewinds in the event of a redirect/retry.
        body_pos = set_file_position(body, body_pos)

        try:
            # Request a connection from the queue.
            timeout_obj = self._get_timeout(timeout)
            conn = self._get_conn(timeout=pool_timeout)

            conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]

            # Is this a closed/new connection that requires CONNECT tunnelling?
            if self.proxy is not None and http_tunnel_required and conn.is_closed:
                try:
                    self._prepare_proxy(conn)
                except (BaseSSLError, OSError, SocketTimeout) as e:
                    self._raise_timeout(
                        err=e, url=self.proxy.url, timeout_value=conn.timeout
                    )
                    raise

            # If we're going to release the connection in ``finally:``, then
            # the response doesn't need to know about the connection. Otherwise
            # it will also try to release it and we'll have a double-release
            # mess.
            response_conn = conn if not release_conn else None

            # Make the request on the HTTPConnection object
>           response = self._make_request(
                conn,
                method,
                url,
                timeout=timeout_obj,
                body=body,
                headers=headers,
                chunked=chunked,
                retries=retries,
                response_conn=response_conn,
                preload_content=preload_content,
                decode_content=decode_content,
                **response_kw,
            )

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:787:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
conn =
method = 'POST'
url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io...ry": "512Mi"}, "requests": {"cpu": "10m", "memory": "256Mi"}}, "storageUri": "gs://kfserving-examples/models/pmml"}}}}'
headers = HTTPHeaderDict({'Accept':
'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'})
retries = Retry(total=0, connect=None, read=None, redirect=None, status=None)
timeout = Timeout(connect=None, read=None, total=None), chunked = False
response_conn = None, preload_content = True, decode_content = True
enforce_content_length = True

    def _make_request(
        self,
        conn: BaseHTTPConnection,
        method: str,
        url: str,
        body: _TYPE_BODY | None = None,
        headers: typing.Mapping[str, str] | None = None,
        retries: Retry | None = None,
        timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
        chunked: bool = False,
        response_conn: BaseHTTPConnection | None = None,
        preload_content: bool = True,
        decode_content: bool = True,
        enforce_content_length: bool = True,
    ) -> BaseHTTPResponse:
        """
        Perform a request on a given urllib connection object taken from our
        pool.

        :param conn:
            a connection from one of our connection pools

        :param method:
            HTTP request method (such as GET, POST, PUT, etc.)

        :param url:
            The URL to perform the request on.

        :param body:
            Data to send in the request body, either :class:`str`, :class:`bytes`,
            an iterable of :class:`str`/:class:`bytes`, or a file-like object.

        :param headers:
            Dictionary of custom headers to send, such as User-Agent,
            If-None-Match, etc. If None, pool headers are used.
            If provided,
            these headers completely replace any pool-specific headers.

        :param retries:
            Configure the number of retries to allow before raising a
            :class:`~urllib3.exceptions.MaxRetryError` exception.

            Pass ``None`` to retry until you receive a response. Pass a
            :class:`~urllib3.util.retry.Retry` object for fine-grained control
            over different types of retries.
            Pass an integer number to retry connection errors that many times,
            but no other types of errors. Pass zero to never retry.

            If ``False``, then retries are disabled and any exception is raised
            immediately. Also, instead of raising a MaxRetryError on redirects,
            the redirect response will be returned.

        :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.

        :param timeout:
            If specified, overrides the default timeout for this one
            request. It may be a float (in seconds) or an instance of
            :class:`urllib3.util.Timeout`.

        :param chunked:
            If True, urllib3 will send the body using chunked transfer
            encoding. Otherwise, urllib3 will send the body using the standard
            content-length form. Defaults to False.

        :param response_conn:
            Set this to ``None`` if you will handle releasing the connection or
            set the connection to have the response release it.

        :param preload_content:
            If True, the response's body will be preloaded during construction.
        :param decode_content:
            If True, will attempt to decode the body based on the
            'content-encoding' header.

        :param enforce_content_length:
            Enforce content length checking. Body returned by server must match
            value of Content-Length header, if present. Otherwise, raise error.
        """
        self.num_requests += 1

        timeout_obj = self._get_timeout(timeout)
        timeout_obj.start_connect()
        conn.timeout = Timeout.resolve_default_timeout(timeout_obj.connect_timeout)

        try:
            # Trigger any extra validation we need to do.
            try:
                self._validate_conn(conn)
            except (SocketTimeout, BaseSSLError) as e:
                self._raise_timeout(err=e, url=url, timeout_value=conn.timeout)
                raise

        # _validate_conn() starts the connection to an HTTPS proxy
        # so we need to wrap errors with 'ProxyError' here too.
        except (
            OSError,
            NewConnectionError,
            TimeoutError,
            BaseSSLError,
            CertificateError,
            SSLError,
        ) as e:
            new_e: Exception = e
            if isinstance(e, (BaseSSLError, CertificateError)):
                new_e = SSLError(e)
            # If the connection didn't successfully connect to it's proxy
            # then there
            if isinstance(
                new_e, (OSError, NewConnectionError, TimeoutError, SSLError)
            ) and (conn and conn.proxy and not conn.has_connected_to_proxy):
                new_e = _wrap_proxy_error(new_e, conn.proxy.scheme)
>           raise new_e

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:488:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
conn =
method = 'POST'
url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io...ry": "512Mi"}, "requests": {"cpu": "10m", "memory": "256Mi"}}, "storageUri": "gs://kfserving-examples/models/pmml"}}}}'
headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'})
retries = Retry(total=0, connect=None, read=None, redirect=None, status=None)
timeout = Timeout(connect=None, read=None, total=None), chunked = False
response_conn = None, preload_content = True, decode_content = True
enforce_content_length = True

    def _make_request(
        self,
        conn: BaseHTTPConnection,
        method: str,
        url: str,
        body: _TYPE_BODY | None = None,
        headers: typing.Mapping[str, str] | None = None,
        retries: Retry | None = None,
        timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
        chunked: bool = False,
        response_conn: BaseHTTPConnection | None = None,
        preload_content: bool = True,
        decode_content: bool = True,
        enforce_content_length: bool = True,
    ) -> BaseHTTPResponse:
        """
        Perform a request on a given urllib connection object taken from our
        pool.

        :param conn:
            a connection from one of our connection pools

        :param method:
            HTTP request method (such as GET, POST, PUT, etc.)

        :param url:
            The URL to perform the request on.

        :param body:
            Data to send in the request body, either :class:`str`, :class:`bytes`,
            an iterable of :class:`str`/:class:`bytes`, or a file-like object.

        :param headers:
            Dictionary of custom headers to send, such as User-Agent,
            If-None-Match, etc. If None, pool headers are used. If provided,
            these headers completely replace any pool-specific headers.

        :param retries:
            Configure the number of retries to allow before raising a
            :class:`~urllib3.exceptions.MaxRetryError` exception.

            Pass ``None`` to retry until you receive a response. Pass a
            :class:`~urllib3.util.retry.Retry` object for fine-grained control
            over different types of retries.
            Pass an integer number to retry connection errors that many times,
            but no other types of errors. Pass zero to never retry.

            If ``False``, then retries are disabled and any exception is raised
            immediately. Also, instead of raising a MaxRetryError on redirects,
            the redirect response will be returned.

        :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.

        :param timeout:
            If specified, overrides the default timeout for this one
            request. It may be a float (in seconds) or an instance of
            :class:`urllib3.util.Timeout`.

        :param chunked:
            If True, urllib3 will send the body using chunked transfer
            encoding. Otherwise, urllib3 will send the body using the standard
            content-length form. Defaults to False.

        :param response_conn:
            Set this to ``None`` if you will handle releasing the connection or
            set the connection to have the response release it.

        :param preload_content:
            If True, the response's body will be preloaded during construction.

        :param decode_content:
            If True, will attempt to decode the body based on the
            'content-encoding' header.

        :param enforce_content_length:
            Enforce content length checking. Body returned by server must match
            value of Content-Length header, if present. Otherwise, raise error.
        """
        self.num_requests += 1

        timeout_obj = self._get_timeout(timeout)
        timeout_obj.start_connect()
        conn.timeout = Timeout.resolve_default_timeout(timeout_obj.connect_timeout)

        try:
            # Trigger any extra validation we need to do.
            try:
>               self._validate_conn(conn)

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:464:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
conn =

    def _validate_conn(self, conn: BaseHTTPConnection) -> None:
        """
        Called right before a request is made, after the socket is created.
        """
        super()._validate_conn(conn)

        # Force connect early to allow us to validate the connection.
        if conn.is_closed:
>           conn.connect()

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:1093:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =

    def connect(self) -> None:
        # Today we don't need to be doing this step before the /actual/ socket
        # connection, however in the future we'll need to decide whether to
        # create a new socket or re-use an existing "shared" socket as a part
        # of the HTTP/2 handshake dance.
        if self._tunnel_host is not None and self._tunnel_port is not None:
            probe_http2_host = self._tunnel_host
            probe_http2_port = self._tunnel_port
        else:
            probe_http2_host = self.host
            probe_http2_port = self.port

        # Check if the target origin supports HTTP/2.
        # If the value comes back as 'None' it means that the current thread
        # is probing for HTTP/2 support. Otherwise, we're waiting for another
        # probe to complete, or we get a value right away.
        target_supports_http2: bool | None
        if "h2" in ssl_.ALPN_PROTOCOLS:
            target_supports_http2 = http2_probe.acquire_and_get(
                host=probe_http2_host, port=probe_http2_port
            )
        else:
            # If HTTP/2 isn't going to be offered it doesn't matter if
            # the target supports HTTP/2. Don't want to make a probe.
            target_supports_http2 = False

        if self._connect_callback is not None:
            self._connect_callback(
                "before connect",
                thread_id=threading.get_ident(),
                target_supports_http2=target_supports_http2,
            )

        try:
            sock: socket.socket | ssl.SSLSocket
>           self.sock = sock = self._new_conn()

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connection.py:759:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =

    def _new_conn(self) -> socket.socket:
        """Establish a socket connection and set nodelay settings on it.

        :return: New socket connection.
        """
        try:
            sock = connection.create_connection(
                (self._dns_host, self.port),
                self.timeout,
                source_address=self.source_address,
                socket_options=self.socket_options,
            )
        except socket.gaierror as e:
>           raise NameResolutionError(self.host, self, e) from e
E           urllib3.exceptions.NameResolutionError: HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connection.py:211: NameResolutionError

The above exception was the direct cause of the following exception:

rest_v1_client =

    @pytest.mark.predictor
    @pytest.mark.asyncio(scope="session")
    async def test_pmml_kserve(rest_v1_client):
        service_name = "isvc-pmml"
        predictor = V1beta1PredictorSpec(
            min_replicas=1,
            pmml=V1beta1PMMLSpec(
                storage_uri="gs://kfserving-examples/models/pmml",
                resources=V1ResourceRequirements(
                    requests={"cpu": "10m", "memory": "256Mi"},
                    limits={"cpu": "100m", "memory": "512Mi"},
                ),
            ),
        )

        isvc = V1beta1InferenceService(
            api_version=constants.KSERVE_V1BETA1,
            kind=constants.KSERVE_KIND_INFERENCESERVICE,
            metadata=client.V1ObjectMeta(
                name=service_name,
                namespace=KSERVE_TEST_NAMESPACE,
                labels={
                    constants.KSERVE_LABEL_NETWORKING_VISIBILITY: constants.KSERVE_LABEL_NETWORKING_VISIBILITY_EXPOSED,
                },
            ),
            spec=V1beta1InferenceServiceSpec(predictor=predictor),
        )

        kserve_client = KServeClient(
            config_file=os.environ.get("KUBECONFIG", "~/.kube/config")
        )
>       kserve_client.create(isvc)

predictor/test_pmml.py:66:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
inferenceservice = {'api_version': 'serving.kserve.io/v1beta1',
 'kind': 'InferenceService',
 'metadata': {'annotations': None,
 ... 'worker_spec': None,
              'xgboost': None},
              'transformer': None},
 'status': None}
namespace = 'kserve-ci-e2e-test', watch = False, timeout_seconds = 600

    def create(
        self, inferenceservice, namespace=None, watch=False, timeout_seconds=600
    ):  # pylint:disable=inconsistent-return-statements
        """
        Create the inference service
        :param inferenceservice: inference service object
        :param namespace: defaults to current or default namespace
        :param watch: True to watch the created service until timeout elapsed or status is ready
        :param timeout_seconds: timeout seconds for watch, default to 600s
        :return: created inference service
        """

        version = inferenceservice.api_version.split("/")[1]

        if namespace is None:
            namespace = utils.get_isvc_namespace(inferenceservice)

        try:
>           outputs = self.api_instance.create_namespaced_custom_object(
                constants.KSERVE_GROUP,
                version,
                namespace,
                constants.KSERVE_PLURAL_INFERENCESERVICE,
                inferenceservice,
            )

../../python/kserve/kserve/api/kserve_client.py:145:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
group = 'serving.kserve.io', version = 'v1beta1'
namespace = 'kserve-ci-e2e-test', plural = 'inferenceservices'
body = {'api_version': 'serving.kserve.io/v1beta1',
 'kind': 'InferenceService',
 'metadata': {'annotations': None,
 ... 'worker_spec': None,
              'xgboost': None},
              'transformer': None},
 'status': None}
kwargs = {'_return_http_data_only': True}

    def create_namespaced_custom_object(self, group, version, namespace, plural, body, **kwargs):  # noqa: E501
        """create_namespaced_custom_object  # noqa: E501

        Creates a namespace scoped Custom object  # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.create_namespaced_custom_object(group, version, namespace, plural, body, async_req=True)
        >>> result = thread.get()

        :param async_req bool: execute request asynchronously
        :param str group: The custom resource's group name (required)
        :param str version: The custom resource's version (required)
        :param str namespace: The custom resource's namespace (required)
        :param str plural: The custom resource's plural name.
            For TPRs this would be lowercase plural kind. (required)
        :param object body: The JSON schema of the Resource to create. (required)
        :param str pretty: If 'true', then the output is pretty printed.
        :param str dry_run: When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed
        :param str field_manager: fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint.
        :param str field_validation: fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. (optional)
        :param _preload_content: if False, the urllib3.HTTPResponse object will
                                 be returned without reading/decoding response
                                 data. Default is True.
        :param _request_timeout: timeout setting for this request. If one
                                 number provided, it will be total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
        :return: object
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
>       return self.create_namespaced_custom_object_with_http_info(group, version, namespace, plural, body, **kwargs)  # noqa: E501

../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api/custom_objects_api.py:231:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
group = 'serving.kserve.io', version = 'v1beta1'
namespace = 'kserve-ci-e2e-test', plural = 'inferenceservices'
body = {'api_version': 'serving.kserve.io/v1beta1',
 'kind': 'InferenceService',
 'metadata': {'annotations': None,
 ... 'worker_spec': None,
              'xgboost': None},
              'transformer': None},
 'status': None}
kwargs = {'_return_http_data_only': True}
local_var_params = {'_return_http_data_only': True, 'all_params': ['group', 'version', 'namespace', 'plural', 'body', 'pretty', ...], 'au... 'worker_spec': None,
 'xgboost': None},
 'transformer': None},
 'status': None}, ...}
all_params = ['group', 'version', 'namespace', 'plural', 'body', 'pretty', ...]
key = '_return_http_data_only', val = True, collection_formats = {}
path_params = {'group': 'serving.kserve.io', 'namespace': 'kserve-ci-e2e-test', 'plural': 'inferenceservices', 'version': 'v1beta1'}
query_params = []

    def create_namespaced_custom_object_with_http_info(self, group, version, namespace, plural, body, **kwargs):  # noqa: E501
        """create_namespaced_custom_object  # noqa: E501

        Creates a namespace scoped Custom object  # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.create_namespaced_custom_object_with_http_info(group, version, namespace, plural, body, async_req=True)
        >>> result = thread.get()

        :param async_req bool: execute request asynchronously
        :param str group: The custom resource's group name (required)
        :param str version: The custom resource's version (required)
        :param str namespace: The custom resource's namespace (required)
        :param str plural: The custom resource's plural name. For TPRs this would be lowercase plural kind. (required)
        :param object body: The JSON schema of the Resource to create. (required)
        :param str pretty: If 'true', then the output is pretty printed.
        :param str dry_run: When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed
        :param str field_manager: fieldManager is a name associated with the actor or entity that is making these changes.
            The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint.
        :param str field_validation: fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. (optional)
        :param _return_http_data_only: response data without head status code
                                       and headers
        :param _preload_content: if False, the urllib3.HTTPResponse object will
                                 be returned without reading/decoding response
                                 data. Default is True.
        :param _request_timeout: timeout setting for this request. If one
                                 number provided, it will be total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
        :return: tuple(object, status_code(int), headers(HTTPHeaderDict))
                 If the method is called asynchronously,
                 returns the request thread.
        """

        local_var_params = locals()

        all_params = [
            'group',
            'version',
            'namespace',
            'plural',
            'body',
            'pretty',
            'dry_run',
            'field_manager',
            'field_validation'
        ]
        all_params.extend(
            [
                'async_req',
                '_return_http_data_only',
                '_preload_content',
                '_request_timeout'
            ]
        )

        for key, val in six.iteritems(local_var_params['kwargs']):
            if key not in all_params:
                raise ApiTypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method create_namespaced_custom_object" % key
                )
            local_var_params[key] = val
        del local_var_params['kwargs']
        # verify the required parameter 'group' is set
        if self.api_client.client_side_validation and ('group' not in local_var_params or  # noqa: E501
                                                        local_var_params['group'] is None):  # noqa: E501
            raise ApiValueError("Missing the required parameter `group` when calling `create_namespaced_custom_object`")  # noqa: E501
        # verify the required parameter 'version' is set
        if self.api_client.client_side_validation and ('version' not in local_var_params or  # noqa: E501
                                                        local_var_params['version'] is None):  # noqa: E501
            raise ApiValueError("Missing the required parameter `version` when calling `create_namespaced_custom_object`")  # noqa: E501
        # verify the required parameter 'namespace' is set
        if self.api_client.client_side_validation and ('namespace' not in local_var_params or  # noqa: E501
local_var_params['namespace'] is None): # noqa: E501 [e2e-predictor] raise ApiValueError("Missing the required parameter `namespace` when calling `create_namespaced_custom_object`") # noqa: E501 [e2e-predictor] # verify the required parameter 'plural' is set [e2e-predictor] if self.api_client.client_side_validation and ('plural' not in local_var_params or # noqa: E501 [e2e-predictor] local_var_params['plural'] is None): # noqa: E501 [e2e-predictor] raise ApiValueError("Missing the required parameter `plural` when calling `create_namespaced_custom_object`") # noqa: E501 [e2e-predictor] # verify the required parameter 'body' is set [e2e-predictor] if self.api_client.client_side_validation and ('body' not in local_var_params or # noqa: E501 [e2e-predictor] local_var_params['body'] is None): # noqa: E501 [e2e-predictor] raise ApiValueError("Missing the required parameter `body` when calling `create_namespaced_custom_object`") # noqa: E501 [e2e-predictor] [e2e-predictor] collection_formats = {} [e2e-predictor] [e2e-predictor] path_params = {} [e2e-predictor] if 'group' in local_var_params: [e2e-predictor] path_params['group'] = local_var_params['group'] # noqa: E501 [e2e-predictor] if 'version' in local_var_params: [e2e-predictor] path_params['version'] = local_var_params['version'] # noqa: E501 [e2e-predictor] if 'namespace' in local_var_params: [e2e-predictor] path_params['namespace'] = local_var_params['namespace'] # noqa: E501 [e2e-predictor] if 'plural' in local_var_params: [e2e-predictor] path_params['plural'] = local_var_params['plural'] # noqa: E501 [e2e-predictor] [e2e-predictor] query_params = [] [e2e-predictor] if 'pretty' in local_var_params and local_var_params['pretty'] is not None: # noqa: E501 [e2e-predictor] query_params.append(('pretty', local_var_params['pretty'])) # noqa: E501 [e2e-predictor] if 'dry_run' in local_var_params and local_var_params['dry_run'] is not None: # noqa: E501 [e2e-predictor] query_params.append(('dryRun', 
local_var_params['dry_run'])) # noqa: E501 [e2e-predictor] if 'field_manager' in local_var_params and local_var_params['field_manager'] is not None: # noqa: E501 [e2e-predictor] query_params.append(('fieldManager', local_var_params['field_manager'])) # noqa: E501 [e2e-predictor] if 'field_validation' in local_var_params and local_var_params['field_validation'] is not None: # noqa: E501 [e2e-predictor] query_params.append(('fieldValidation', local_var_params['field_validation'])) # noqa: E501 [e2e-predictor] [e2e-predictor] header_params = {} [e2e-predictor] [e2e-predictor] form_params = [] [e2e-predictor] local_var_files = {} [e2e-predictor] [e2e-predictor] body_params = None [e2e-predictor] if 'body' in local_var_params: [e2e-predictor] body_params = local_var_params['body'] [e2e-predictor] # HTTP header `Accept` [e2e-predictor] header_params['Accept'] = self.api_client.select_header_accept( [e2e-predictor] ['application/json']) # noqa: E501 [e2e-predictor] [e2e-predictor] # Authentication setting [e2e-predictor] auth_settings = ['BearerToken'] # noqa: E501 [e2e-predictor] [e2e-predictor] > return self.api_client.call_api( [e2e-predictor] '/apis/{group}/{version}/namespaces/{namespace}/{plural}', 'POST', [e2e-predictor] path_params, [e2e-predictor] query_params, [e2e-predictor] header_params, [e2e-predictor] body=body_params, [e2e-predictor] post_params=form_params, [e2e-predictor] files=local_var_files, [e2e-predictor] response_type='object', # noqa: E501 [e2e-predictor] auth_settings=auth_settings, [e2e-predictor] async_req=local_var_params.get('async_req'), [e2e-predictor] _return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501 [e2e-predictor] _preload_content=local_var_params.get('_preload_content', True), [e2e-predictor] _request_timeout=local_var_params.get('_request_timeout'), [e2e-predictor] collection_formats=collection_formats) [e2e-predictor] [e2e-predictor] 
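The frame above is the generated `create_namespaced_custom_object` wrapper assembling a POST to `/apis/{group}/{version}/namespaces/{namespace}/{plural}`, with `field_validation` mapped to the `fieldValidation` query parameter documented in the docstring. A minimal sketch of the same call from test code (the manifest values and the `create_isvc` helper are illustrative, not taken from this run):

```python
from typing import Any

# Illustrative InferenceService manifest (not the one from this failing run).
inference_service: dict[str, Any] = {
    "apiVersion": "serving.kserve.io/v1beta1",
    "kind": "InferenceService",
    "metadata": {"name": "sklearn-example", "namespace": "kserve-ci-e2e-test"},
    "spec": {"predictor": {"model": {
        "modelFormat": {"name": "sklearn"},
        "storageUri": "gs://kfserving-examples/models/sklearn/1.0/model",
    }}},
}


def create_isvc(custom_objects_api: Any) -> dict:
    """Create the CR via a kubernetes.client.CustomObjectsApi instance.

    group/version/namespace/plural are interpolated into the request path
    /apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices.
    """
    return custom_objects_api.create_namespaced_custom_object(
        group="serving.kserve.io",
        version="v1beta1",
        namespace=inference_service["metadata"]["namespace"],
        plural="inferenceservices",
        body=inference_service,
        field_validation="Strict",  # becomes the fieldValidation=Strict query param
    )
```

With a real `kubernetes.client.CustomObjectsApi` (e.g. after `kubernetes.config.load_kube_config()`), these keyword arguments produce exactly the resource path seen in the frames below; `dry_run` and `field_manager` are accepted the same way, per the `all_params` list above.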
../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api/custom_objects_api.py:354:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <...>
resource_path = '/apis/{group}/{version}/namespaces/{namespace}/{plural}'
method = 'POST'
path_params = {'group': 'serving.kserve.io', 'namespace': 'kserve-ci-e2e-test', 'plural': 'inferenceservices', 'version': 'v1beta1'}
query_params = []
header_params = {'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
body = {'api_version': 'serving.kserve.io/v1beta1',
 'kind': 'InferenceService',
 'metadata': {'annotations': None,
 ...
 'worker_spec': None,
 'xgboost': None},
 'transformer': None},
 'status': None}
post_params = [], files = {}, response_type = 'object'
auth_settings = ['BearerToken'], async_req = None, _return_http_data_only = True
collection_formats = {}, _preload_content = True, _request_timeout = None
_host = None

    def call_api(self, resource_path, method,
                 path_params=None, query_params=None, header_params=None,
                 body=None, post_params=None, files=None,
                 response_type=None, auth_settings=None, async_req=None,
                 _return_http_data_only=None, collection_formats=None,
                 _preload_content=True, _request_timeout=None, _host=None):
        """Makes the HTTP request (synchronous) and returns deserialized data.

        To make an async_req request, set the async_req parameter.

        :param resource_path: Path to method endpoint.
        :param method: Method to call.
        :param path_params: Path parameters in the url.
        :param query_params: Query parameters in the url.
        :param header_params: Header parameters to be
            placed in the request header.
        :param body: Request body.
        :param post_params dict: Request post form parameters,
            for `application/x-www-form-urlencoded`, `multipart/form-data`.
        :param auth_settings list: Auth Settings names for the request.
        :param response: Response data type.
        :param files dict: key -> filename, value -> filepath,
            for `multipart/form-data`.
        :param async_req bool: execute request asynchronously
        :param _return_http_data_only: response data without head status code
                                       and headers
        :param collection_formats: dict of collection formats for path, query,
            header, and post parameters.
        :param _preload_content: if False, the urllib3.HTTPResponse object will
                                 be returned without reading/decoding response
                                 data. Default is True.
        :param _request_timeout: timeout setting for this request. If one
                                 number provided, it will be total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
        :return:
            If async_req parameter is True,
            the request will be called asynchronously.
            The method will return the request thread.
            If parameter async_req is False or missing,
            then the method will return the response directly.
        """
        if not async_req:
>           return self.__call_api(resource_path, method,
                                   path_params, query_params, header_params,
                                   body, post_params, files,
                                   response_type, auth_settings,
                                   _return_http_data_only, collection_formats,
                                   _preload_content, _request_timeout, _host)

../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api_client.py:348:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <...>
resource_path = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
method = 'POST'
path_params = [('group', 'serving.kserve.io'), ('version', 'v1beta1'), ('namespace', 'kserve-ci-e2e-test'), ('plural', 'inferenceservices')]
query_params = []
header_params = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
body = {'apiVersion': 'serving.kserve.io/v1beta1', 'kind': 'InferenceService', 'metadata': {'labels': {'networking.kserve.io/...ory': '512Mi'}, 'requests': {'cpu': '10m', 'memory': '256Mi'}}, 'storageUri': 'gs://kfserving-examples/models/pmml'}}}}
post_params = [], files = {}, response_type = 'object'
auth_settings = ['BearerToken'], _return_http_data_only = True
collection_formats = {}, _preload_content = True, _request_timeout = None
_host = None

    def __call_api(
            self, resource_path, method, path_params=None,
            query_params=None, header_params=None, body=None, post_params=None,
            files=None, response_type=None, auth_settings=None,
            _return_http_data_only=None, collection_formats=None,
            _preload_content=True, _request_timeout=None, _host=None):

        config = self.configuration

        # header parameters
        header_params = header_params or {}
        header_params.update(self.default_headers)
        if self.cookie:
            header_params['Cookie'] = self.cookie
        if header_params:
            header_params = self.sanitize_for_serialization(header_params)
            header_params = dict(self.parameters_to_tuples(header_params,
                                                           collection_formats))

        # path parameters
        if path_params:
            path_params = self.sanitize_for_serialization(path_params)
            path_params = self.parameters_to_tuples(path_params,
                                                    collection_formats)
            for k, v in path_params:
                # specified safe chars, encode everything
                resource_path = resource_path.replace(
                    '{%s}' % k,
                    quote(str(v), safe=config.safe_chars_for_path_param)
                )

        # query parameters
        if query_params:
            query_params = self.sanitize_for_serialization(query_params)
            query_params = self.parameters_to_tuples(query_params,
                                                     collection_formats)

        # post parameters
        if post_params or files:
            post_params = post_params if post_params else []
            post_params = self.sanitize_for_serialization(post_params)
            post_params = self.parameters_to_tuples(post_params,
                                                    collection_formats)
            post_params.extend(self.files_parameters(files))

        # auth setting
        self.update_params_for_auth(header_params, query_params, auth_settings)

        # body
        if body:
            body = self.sanitize_for_serialization(body)

        # request url
        if _host is None:
            url = self.configuration.host + resource_path
        else:
            # use server/host defined in path or operation instead
            url = _host + resource_path

        # perform request and return response
>       response_data = self.request(
            method, url, query_params=query_params, headers=header_params,
            post_params=post_params, body=body,
            _preload_content=_preload_content,
            _request_timeout=_request_timeout)

../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api_client.py:180:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <...>
method = 'POST'
url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
query_params = []
headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
post_params = []
body = {'apiVersion': 'serving.kserve.io/v1beta1', 'kind': 'InferenceService', 'metadata': {'labels': {'networking.kserve.io/...ory': '512Mi'}, 'requests': {'cpu': '10m', 'memory': '256Mi'}}, 'storageUri': 'gs://kfserving-examples/models/pmml'}}}}
_preload_content = True, _request_timeout = None

    def request(self, method, url, query_params=None, headers=None,
                post_params=None, body=None, _preload_content=True,
                _request_timeout=None):
        """Makes the HTTP request using RESTClient."""
        if method == "GET":
            return self.rest_client.GET(url,
                                        query_params=query_params,
                                        _preload_content=_preload_content,
                                        _request_timeout=_request_timeout,
                                        headers=headers)
        elif method == "HEAD":
            return self.rest_client.HEAD(url,
                                         query_params=query_params,
                                         _preload_content=_preload_content,
                                         _request_timeout=_request_timeout,
                                         headers=headers)
        elif method == "OPTIONS":
            return self.rest_client.OPTIONS(url,
                                            query_params=query_params,
                                            headers=headers,
                                            _preload_content=_preload_content,
                                            _request_timeout=_request_timeout)
        elif method == "POST":
>           return self.rest_client.POST(url,
                                         query_params=query_params,
                                         headers=headers,
                                         post_params=post_params,
                                         _preload_content=_preload_content,
                                         _request_timeout=_request_timeout,

../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api_client.py:391:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <...>
url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
query_params = [], post_params = []
body = {'apiVersion': 'serving.kserve.io/v1beta1', 'kind': 'InferenceService', 'metadata': {'labels': {'networking.kserve.io/...ory': '512Mi'}, 'requests': {'cpu': '10m', 'memory': '256Mi'}}, 'storageUri': 'gs://kfserving-examples/models/pmml'}}}}
_preload_content = True, _request_timeout = None

    def POST(self, url, headers=None, query_params=None, post_params=None,
             body=None, _preload_content=True, _request_timeout=None):
>       return self.request("POST", url,
                            headers=headers,
                            query_params=query_params,
                            post_params=post_params,
                            _preload_content=_preload_content,
                            _request_timeout=_request_timeout,
                            body=body)

../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/rest.py:279:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <...>
method = 'POST'
url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
query_params = []
headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
body = {'apiVersion': 'serving.kserve.io/v1beta1', 'kind': 'InferenceService', 'metadata': {'labels': {'networking.kserve.io/...ory': '512Mi'}, 'requests': {'cpu': '10m', 'memory': '256Mi'}}, 'storageUri': 'gs://kfserving-examples/models/pmml'}}}}
post_params = {}, _preload_content = True, _request_timeout = None

    def request(self, method, url, query_params=None, headers=None,
                body=None, post_params=None, _preload_content=True,
                _request_timeout=None):
        """Perform requests.

        :param method: http request method
        :param url: http request url
        :param query_params: query parameters in the url
        :param headers: http request headers
        :param body: request json body, for `application/json`
        :param post_params: request post parameters,
                            `application/x-www-form-urlencoded`
                            and `multipart/form-data`
        :param _preload_content: if False, the urllib3.HTTPResponse object will
                                 be returned without reading/decoding response
                                 data. Default is True.
        :param _request_timeout: timeout setting for this request. If one
                                 number provided, it will be total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
        """
        method = method.upper()
        assert method in ['GET', 'HEAD', 'DELETE', 'POST', 'PUT',
                          'PATCH', 'OPTIONS']

        if post_params and body:
            raise ApiValueError(
                "body parameter cannot be used with post_params parameter."
            )

        post_params = post_params or {}
        headers = headers or {}

        timeout = None
        if _request_timeout:
            if isinstance(_request_timeout, (int, ) if six.PY3 else (int, long)):  # noqa: E501,F821
                timeout = urllib3.Timeout(total=_request_timeout)
            elif (isinstance(_request_timeout, tuple) and
                  len(_request_timeout) == 2):
                timeout = urllib3.Timeout(
                    connect=_request_timeout[0], read=_request_timeout[1])

        if 'Content-Type' not in headers:
            headers['Content-Type'] = 'application/json'

        try:
            # For `POST`, `PUT`, `PATCH`, `OPTIONS`, `DELETE`
            if method in ['POST', 'PUT', 'PATCH', 'OPTIONS', 'DELETE']:
                if query_params:
                    url += '?' + urlencode(query_params)
                if (re.search('json', headers['Content-Type'], re.IGNORECASE) or
                        headers['Content-Type'] == 'application/apply-patch+yaml'):
                    if headers['Content-Type'] == 'application/json-patch+json':
                        if not isinstance(body, list):
                            headers['Content-Type'] = \
                                'application/strategic-merge-patch+json'
                    request_body = None
                    if body is not None:
                        request_body = json.dumps(body)
>                   r = self.pool_manager.request(
                        method, url,
                        body=request_body,
                        preload_content=_preload_content,
                        timeout=timeout,

../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/rest.py:172:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <...>
method = 'POST'
url =
'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io...ry": "512Mi"}, "requests": {"cpu": "10m", "memory": "256Mi"}}, "storageUri": "gs://kfserving-examples/models/pmml"}}}}'
fields = None
headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
json = None
urlopen_kw = {'body': '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking....memory": "256Mi"}}, "storageUri": "gs://kfserving-examples/models/pmml"}}}}', 'preload_content': True, 'timeout': None}

    def request(
        self,
        method: str,
        url: str,
        body: _TYPE_BODY | None = None,
        fields: _TYPE_FIELDS | None = None,
        headers: typing.Mapping[str, str] | None = None,
        json: typing.Any | None = None,
        **urlopen_kw: typing.Any,
    ) -> BaseHTTPResponse:
        """
        Make a request using :meth:`urlopen` with the appropriate encoding of
        ``fields`` based on the ``method`` used.

        This is a convenience method that requires the least amount of manual
        effort. It can be used in most situations, while still having the
        option to drop down to more specific methods when necessary, such as
        :meth:`request_encode_url`, :meth:`request_encode_body`,
        or even the lowest level :meth:`urlopen`.

        :param method:
            HTTP request method (such as GET, POST, PUT, etc.)

        :param url:
            The URL to perform the request on.

        :param body:
            Data to send in the request body, either :class:`str`, :class:`bytes`,
            an iterable of :class:`str`/:class:`bytes`, or a file-like object.

        :param fields:
            Data to encode and send in the URL or request body, depending on ``method``.

        :param headers:
            Dictionary of custom headers to send, such as User-Agent,
            If-None-Match, etc. If None, pool headers are used. If provided,
            these headers completely replace any pool-specific headers.

        :param json:
            Data to encode and send as JSON with UTF-encoded in the request body.
            The ``"Content-Type"`` header will be set to ``"application/json"``
            unless specified otherwise.
        """
        method = method.upper()

        if json is not None and body is not None:
            raise TypeError(
                "request got values for both 'body' and 'json' parameters which are mutually exclusive"
            )

        if json is not None:
            if headers is None:
                headers = self.headers

            if not ("content-type" in map(str.lower, headers.keys())):
                headers = HTTPHeaderDict(headers)
                headers["Content-Type"] = "application/json"

            body = _json.dumps(json, separators=(",", ":"), ensure_ascii=False).encode(
                "utf-8"
            )

        if body is not None:
            urlopen_kw["body"] = body

        if method in self._encode_url_methods:
            return self.request_encode_url(
                method,
                url,
                fields=fields,  # type: ignore[arg-type]
                headers=headers,
                **urlopen_kw,
            )
        else:
>           return self.request_encode_body(
                method, url, fields=fields, headers=headers, **urlopen_kw
            )

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/_request_methods.py:143:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <...>
method = 'POST'
url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
fields = None
headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
encode_multipart = True, multipart_boundary = None
urlopen_kw = {'body': '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking....memory": "256Mi"}}, "storageUri": "gs://kfserving-examples/models/pmml"}}}}', 'preload_content': True, 'timeout': None}
extra_kw = {'body': '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking....nt': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'}), 'preload_content': True, 'timeout': None}

    def request_encode_body(
        self,
        method: str,
        url: str,
        fields: _TYPE_FIELDS | None = None,
        headers: typing.Mapping[str, str] | None = None,
        encode_multipart: bool = True,
        multipart_boundary: str | None = None,
        **urlopen_kw: str,
    ) -> BaseHTTPResponse:
        """
        Make a request using :meth:`urlopen` with the ``fields`` encoded in
        the body. This is useful for request methods like POST, PUT, PATCH, etc.

        When ``encode_multipart=True`` (default), then
        :func:`urllib3.encode_multipart_formdata` is used to encode
        the payload with the appropriate content type. Otherwise
        :func:`urllib.parse.urlencode` is used with the
        'application/x-www-form-urlencoded' content type.

        Multipart encoding must be used when posting files, and it's reasonably
        safe to use it in other times too. However, it may break request
        signing, such as with OAuth.

        Supports an optional ``fields`` parameter of key/value strings AND
        key/filetuple. A filetuple is a (filename, data, MIME type) tuple where
        the MIME type is optional. For example::

            fields = {
                'foo': 'bar',
                'fakefile': ('foofile.txt', 'contents of foofile'),
                'realfile': ('barfile.txt', open('realfile').read()),
                'typedfile': ('bazfile.bin', open('bazfile').read(),
                              'image/jpeg'),
                'nonamefile': 'contents of nonamefile field',
            }

        When uploading a file, providing a filename (the first parameter of the
        tuple) is optional but recommended to best mimic behavior of browsers.

        Note that if ``headers`` are supplied, the 'Content-Type' header will
        be overwritten because it depends on the dynamic random boundary string
        which is used to compose the body of the request. The random boundary
        string can be explicitly set with the ``multipart_boundary`` parameter.

        :param method:
            HTTP request method (such as GET, POST, PUT, etc.)

        :param url:
            The URL to perform the request on.

        :param fields:
            Data to encode and send in the request body.

        :param headers:
            Dictionary of custom headers to send, such as User-Agent,
            If-None-Match, etc. If None, pool headers are used. If provided,
            these headers completely replace any pool-specific headers.

        :param encode_multipart:
            If True, encode the ``fields`` using the multipart/form-data MIME
            format.

        :param multipart_boundary:
            If not specified, then a random boundary will be generated using
            :func:`urllib3.filepost.choose_boundary`.
        """
        if headers is None:
            headers = self.headers

        extra_kw: dict[str, typing.Any] = {"headers": HTTPHeaderDict(headers)}
        body: bytes | str

        if fields:
            if "body" in urlopen_kw:
                raise TypeError(
                    "request got values for both 'fields' and 'body', can only specify one."
                )

            if encode_multipart:
                body, content_type = encode_multipart_formdata(
                    fields, boundary=multipart_boundary
                )
            else:
                body, content_type = (
                    urlencode(fields),  # type: ignore[arg-type]
                    "application/x-www-form-urlencoded",
                )

            extra_kw["body"] = body
            extra_kw["headers"].setdefault("Content-Type", content_type)

        extra_kw.update(urlopen_kw)

>       return self.urlopen(method, url, **extra_kw)

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/_request_methods.py:278:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <...>
method = 'POST'
url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
redirect = True
kw = {'assert_same_host': False, 'body': '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata...', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'}), 'preload_content': True, ...}
u = Url(scheme='https', auth=None, host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', p...443, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices', query=None, fragment=None)
conn = <...>

    def urlopen(  # type: ignore[override]
        self, method: str, url: str, redirect: bool = True, **kw: typing.Any
    ) -> BaseHTTPResponse:
        """
        Same as :meth:`urllib3.HTTPConnectionPool.urlopen`
        with custom cross-host redirect logic and only sends the request-uri
        portion of the ``url``.

        The given ``url`` parameter must be absolute, such that an appropriate
        :class:`urllib3.connectionpool.ConnectionPool` can be chosen for it.
        """
        u = parse_url(url)

        if u.scheme is None:
            warnings.warn(
                "URLs without a scheme (ie 'https://') are deprecated and will raise an error "
                "in a future version of urllib3. To avoid this DeprecationWarning ensure all URLs "
                "start with 'https://' or 'http://'. Read more in this issue: "
                "https://github.com/urllib3/urllib3/issues/2920",
                category=DeprecationWarning,
                stacklevel=2,
            )

        conn = self.connection_from_host(u.host, port=u.port, scheme=u.scheme)

        kw["assert_same_host"] = False
        kw["redirect"] = False

        if "headers" not in kw:
            kw["headers"] = self.headers

        if self._proxy_requires_url_absolute_form(u):
            response = conn.urlopen(method, url, **kw)
        else:
>           response = conn.urlopen(method, u.request_uri, **kw)

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/poolmanager.py:457:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <...>
method = 'POST'
url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io...ry": "512Mi"}, "requests": {"cpu": "10m", "memory": "256Mi"}}, "storageUri": "gs://kfserving-examples/models/pmml"}}}}'
headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'})
retries = Retry(total=2, connect=None, read=None, redirect=None, status=None)
redirect = False, assert_same_host = False, timeout = None, pool_timeout = None
release_conn = True, chunked = False, body_pos = None, preload_content = True
decode_content = True, response_kw = {}
parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices', query=None, fragment=None)
destination_scheme = None, conn = None, release_this_conn = True
http_tunnel_required = False
err = NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.c...a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")
clean_exit = False

    def urlopen(  # type: ignore[override]
        self,
        method: str,
        url: str,
        body: _TYPE_BODY | None = None,
        headers: typing.Mapping[str, str] | None = None,
        retries: Retry | bool | int | None = None,
        redirect: bool = True,
        assert_same_host: bool = True,
        timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
        pool_timeout: int | None = None,
        release_conn: bool | None = None,
        chunked: bool = False,
        body_pos: _TYPE_BODY_POSITION | None = None,
        preload_content: bool = True,
decode_content: bool = True, [e2e-predictor] **response_kw: typing.Any, [e2e-predictor] ) -> BaseHTTPResponse: [e2e-predictor] """ [e2e-predictor] Get a connection from the pool and perform an HTTP request. This is the [e2e-predictor] lowest level call for making a request, so you'll need to specify all [e2e-predictor] the raw details. [e2e-predictor] [e2e-predictor] .. note:: [e2e-predictor] [e2e-predictor] More commonly, it's appropriate to use a convenience method [e2e-predictor] such as :meth:`request`. [e2e-predictor] [e2e-predictor] .. note:: [e2e-predictor] [e2e-predictor] `release_conn` will only behave as expected if [e2e-predictor] `preload_content=False` because we want to make [e2e-predictor] `preload_content=False` the default behaviour someday soon without [e2e-predictor] breaking backwards compatibility. [e2e-predictor] [e2e-predictor] :param method: [e2e-predictor] HTTP request method (such as GET, POST, PUT, etc.) [e2e-predictor] [e2e-predictor] :param url: [e2e-predictor] The URL to perform the request on. [e2e-predictor] [e2e-predictor] :param body: [e2e-predictor] Data to send in the request body, either :class:`str`, :class:`bytes`, [e2e-predictor] an iterable of :class:`str`/:class:`bytes`, or a file-like object. [e2e-predictor] [e2e-predictor] :param headers: [e2e-predictor] Dictionary of custom headers to send, such as User-Agent, [e2e-predictor] If-None-Match, etc. If None, pool headers are used. If provided, [e2e-predictor] these headers completely replace any pool-specific headers. [e2e-predictor] [e2e-predictor] :param retries: [e2e-predictor] Configure the number of retries to allow before raising a [e2e-predictor] :class:`~urllib3.exceptions.MaxRetryError` exception. [e2e-predictor] [e2e-predictor] If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a [e2e-predictor] :class:`~urllib3.util.retry.Retry` object for fine-grained control [e2e-predictor] over different types of retries. 
[e2e-predictor] Pass an integer number to retry connection errors that many times, [e2e-predictor] but no other types of errors. Pass zero to never retry. [e2e-predictor] [e2e-predictor] If ``False``, then retries are disabled and any exception is raised [e2e-predictor] immediately. Also, instead of raising a MaxRetryError on redirects, [e2e-predictor] the redirect response will be returned. [e2e-predictor] [e2e-predictor] :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. [e2e-predictor] [e2e-predictor] :param redirect: [e2e-predictor] If True, automatically handle redirects (status codes 301, 302, [e2e-predictor] 303, 307, 308). Each redirect counts as a retry. Disabling retries [e2e-predictor] will disable redirect, too. [e2e-predictor] [e2e-predictor] :param assert_same_host: [e2e-predictor] If ``True``, will make sure that the host of the pool requests is [e2e-predictor] consistent else will raise HostChangedError. When ``False``, you can [e2e-predictor] use the pool on an HTTP proxy and request foreign hosts. [e2e-predictor] [e2e-predictor] :param timeout: [e2e-predictor] If specified, overrides the default timeout for this one [e2e-predictor] request. It may be a float (in seconds) or an instance of [e2e-predictor] :class:`urllib3.util.Timeout`. [e2e-predictor] [e2e-predictor] :param pool_timeout: [e2e-predictor] If set and the pool is set to block=True, then this method will [e2e-predictor] block for ``pool_timeout`` seconds and raise EmptyPoolError if no [e2e-predictor] connection is available within the time period. [e2e-predictor] [e2e-predictor] :param bool preload_content: [e2e-predictor] If True, the response's body will be preloaded into memory. [e2e-predictor] [e2e-predictor] :param bool decode_content: [e2e-predictor] If True, will attempt to decode the body based on the [e2e-predictor] 'content-encoding' header. 
[e2e-predictor] [e2e-predictor] :param release_conn: [e2e-predictor] If False, then the urlopen call will not release the connection [e2e-predictor] back into the pool once a response is received (but will release if [e2e-predictor] you read the entire contents of the response such as when [e2e-predictor] `preload_content=True`). This is useful if you're not preloading [e2e-predictor] the response's content immediately. You will need to call [e2e-predictor] ``r.release_conn()`` on the response ``r`` to return the connection [e2e-predictor] back into the pool. If None, it takes the value of ``preload_content`` [e2e-predictor] which defaults to ``True``. [e2e-predictor] [e2e-predictor] :param bool chunked: [e2e-predictor] If True, urllib3 will send the body using chunked transfer [e2e-predictor] encoding. Otherwise, urllib3 will send the body using the standard [e2e-predictor] content-length form. Defaults to False. [e2e-predictor] [e2e-predictor] :param int body_pos: [e2e-predictor] Position to seek to in file-like body in the event of a retry or [e2e-predictor] redirect. Typically this won't need to be set because urllib3 will [e2e-predictor] auto-populate the value when needed. 
[e2e-predictor] """ [e2e-predictor] parsed_url = parse_url(url) [e2e-predictor] destination_scheme = parsed_url.scheme [e2e-predictor] [e2e-predictor] if headers is None: [e2e-predictor] headers = self.headers [e2e-predictor] [e2e-predictor] if not isinstance(retries, Retry): [e2e-predictor] retries = Retry.from_int(retries, redirect=redirect, default=self.retries) [e2e-predictor] [e2e-predictor] if release_conn is None: [e2e-predictor] release_conn = preload_content [e2e-predictor] [e2e-predictor] # Check host [e2e-predictor] if assert_same_host and not self.is_same_host(url): [e2e-predictor] raise HostChangedError(self, url, retries) [e2e-predictor] [e2e-predictor] # Ensure that the URL we're connecting to is properly encoded [e2e-predictor] if url.startswith("/"): [e2e-predictor] url = to_str(_encode_target(url)) [e2e-predictor] else: [e2e-predictor] url = to_str(parsed_url.url) [e2e-predictor] [e2e-predictor] conn = None [e2e-predictor] [e2e-predictor] # Track whether `conn` needs to be released before [e2e-predictor] # returning/raising/recursing. Update this variable if necessary, and [e2e-predictor] # leave `release_conn` constant throughout the function. That way, if [e2e-predictor] # the function recurses, the original value of `release_conn` will be [e2e-predictor] # passed down into the recursive call, and its value will be respected. [e2e-predictor] # [e2e-predictor] # See issue #651 [1] for details. [e2e-predictor] # [e2e-predictor] # [1] [e2e-predictor] release_this_conn = release_conn [e2e-predictor] [e2e-predictor] http_tunnel_required = connection_requires_http_tunnel( [e2e-predictor] self.proxy, self.proxy_config, destination_scheme [e2e-predictor] ) [e2e-predictor] [e2e-predictor] # Merge the proxy headers. Only done when not using HTTP CONNECT. We [e2e-predictor] # have to copy the headers dict so we can safely change it without those [e2e-predictor] # changes being reflected in anyone else's copy. 
[e2e-predictor] if not http_tunnel_required: [e2e-predictor] headers = headers.copy() # type: ignore[attr-defined] [e2e-predictor] headers.update(self.proxy_headers) # type: ignore[union-attr] [e2e-predictor] [e2e-predictor] # Must keep the exception bound to a separate variable or else Python 3 [e2e-predictor] # complains about UnboundLocalError. [e2e-predictor] err = None [e2e-predictor] [e2e-predictor] # Keep track of whether we cleanly exited the except block. This [e2e-predictor] # ensures we do proper cleanup in finally. [e2e-predictor] clean_exit = False [e2e-predictor] [e2e-predictor] # Rewind body position, if needed. Record current position [e2e-predictor] # for future rewinds in the event of a redirect/retry. [e2e-predictor] body_pos = set_file_position(body, body_pos) [e2e-predictor] [e2e-predictor] try: [e2e-predictor] # Request a connection from the queue. [e2e-predictor] timeout_obj = self._get_timeout(timeout) [e2e-predictor] conn = self._get_conn(timeout=pool_timeout) [e2e-predictor] [e2e-predictor] conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] [e2e-predictor] [e2e-predictor] # Is this a closed/new connection that requires CONNECT tunnelling? [e2e-predictor] if self.proxy is not None and http_tunnel_required and conn.is_closed: [e2e-predictor] try: [e2e-predictor] self._prepare_proxy(conn) [e2e-predictor] except (BaseSSLError, OSError, SocketTimeout) as e: [e2e-predictor] self._raise_timeout( [e2e-predictor] err=e, url=self.proxy.url, timeout_value=conn.timeout [e2e-predictor] ) [e2e-predictor] raise [e2e-predictor] [e2e-predictor] # If we're going to release the connection in ``finally:``, then [e2e-predictor] # the response doesn't need to know about the connection. Otherwise [e2e-predictor] # it will also try to release it and we'll have a double-release [e2e-predictor] # mess. 
[e2e-predictor] response_conn = conn if not release_conn else None [e2e-predictor] [e2e-predictor] # Make the request on the HTTPConnection object [e2e-predictor] response = self._make_request( [e2e-predictor] conn, [e2e-predictor] method, [e2e-predictor] url, [e2e-predictor] timeout=timeout_obj, [e2e-predictor] body=body, [e2e-predictor] headers=headers, [e2e-predictor] chunked=chunked, [e2e-predictor] retries=retries, [e2e-predictor] response_conn=response_conn, [e2e-predictor] preload_content=preload_content, [e2e-predictor] decode_content=decode_content, [e2e-predictor] **response_kw, [e2e-predictor] ) [e2e-predictor] [e2e-predictor] # Everything went great! [e2e-predictor] clean_exit = True [e2e-predictor] [e2e-predictor] except EmptyPoolError: [e2e-predictor] # Didn't get a connection from the pool, no need to clean up [e2e-predictor] clean_exit = True [e2e-predictor] release_this_conn = False [e2e-predictor] raise [e2e-predictor] [e2e-predictor] except ( [e2e-predictor] TimeoutError, [e2e-predictor] HTTPException, [e2e-predictor] OSError, [e2e-predictor] ProtocolError, [e2e-predictor] BaseSSLError, [e2e-predictor] SSLError, [e2e-predictor] CertificateError, [e2e-predictor] ProxyError, [e2e-predictor] ) as e: [e2e-predictor] # Discard the connection for these exceptions. It will be [e2e-predictor] # replaced during the next _get_conn() call. 
[e2e-predictor] clean_exit = False [e2e-predictor] new_e: Exception = e [e2e-predictor] if isinstance(e, (BaseSSLError, CertificateError)): [e2e-predictor] new_e = SSLError(e) [e2e-predictor] if isinstance( [e2e-predictor] new_e, [e2e-predictor] ( [e2e-predictor] OSError, [e2e-predictor] NewConnectionError, [e2e-predictor] TimeoutError, [e2e-predictor] SSLError, [e2e-predictor] HTTPException, [e2e-predictor] ), [e2e-predictor] ) and (conn and conn.proxy and not conn.has_connected_to_proxy): [e2e-predictor] new_e = _wrap_proxy_error(new_e, conn.proxy.scheme) [e2e-predictor] elif isinstance(new_e, (OSError, HTTPException)): [e2e-predictor] new_e = ProtocolError("Connection aborted.", new_e) [e2e-predictor] [e2e-predictor] retries = retries.increment( [e2e-predictor] method, url, error=new_e, _pool=self, _stacktrace=sys.exc_info()[2] [e2e-predictor] ) [e2e-predictor] retries.sleep() [e2e-predictor] [e2e-predictor] # Keep track of the error for the retry warning. [e2e-predictor] err = e [e2e-predictor] [e2e-predictor] finally: [e2e-predictor] if not clean_exit: [e2e-predictor] # We hit some kind of exception, handled or otherwise. We need [e2e-predictor] # to throw the connection away unless explicitly told not to. [e2e-predictor] # Close the connection, set the variable to None, and make sure [e2e-predictor] # we put the None back in the pool to avoid leaking it. [e2e-predictor] if conn: [e2e-predictor] conn.close() [e2e-predictor] conn = None [e2e-predictor] release_this_conn = True [e2e-predictor] [e2e-predictor] if release_this_conn: [e2e-predictor] # Put the connection back to be reused. If the connection is [e2e-predictor] # expired then it will be None, which will get replaced with a [e2e-predictor] # fresh connection during _get_conn. 
[e2e-predictor] self._put_conn(conn) [e2e-predictor] [e2e-predictor] if not conn: [e2e-predictor] # Try again [e2e-predictor] log.warning( [e2e-predictor] "Retrying (%r) after connection broken by '%r': %s", retries, err, url [e2e-predictor] ) [e2e-predictor] > return self.urlopen( [e2e-predictor] method, [e2e-predictor] url, [e2e-predictor] body, [e2e-predictor] headers, [e2e-predictor] retries, [e2e-predictor] redirect, [e2e-predictor] assert_same_host, [e2e-predictor] timeout=timeout, [e2e-predictor] pool_timeout=pool_timeout, [e2e-predictor] release_conn=release_conn, [e2e-predictor] chunked=chunked, [e2e-predictor] body_pos=body_pos, [e2e-predictor] preload_content=preload_content, [e2e-predictor] decode_content=decode_content, [e2e-predictor] **response_kw, [e2e-predictor] ) [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:871: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] method = 'POST' [e2e-predictor] url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices' [e2e-predictor] body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io...ry": "512Mi"}, "requests": {"cpu": "10m", "memory": "256Mi"}}, "storageUri": "gs://kfserving-examples/models/pmml"}}}}' [e2e-predictor] headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'}) [e2e-predictor] retries = Retry(total=1, connect=None, read=None, redirect=None, status=None) [e2e-predictor] redirect = False, assert_same_host = False, timeout = None, pool_timeout = None [e2e-predictor] release_conn = True, chunked = False, body_pos = None, preload_content = True [e2e-predictor] decode_content = True, response_kw = {} [e2e-predictor] parsed_url = Url(scheme=None, 
auth=None, host=None, port=None, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices', query=None, fragment=None) [e2e-predictor] destination_scheme = None, conn = None, release_this_conn = True [e2e-predictor] http_tunnel_required = False [e2e-predictor] err = NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.c...a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)") [e2e-predictor] clean_exit = False [e2e-predictor] [e2e-predictor] def urlopen( # type: ignore[override] [e2e-predictor] self, [e2e-predictor] method: str, [e2e-predictor] url: str, [e2e-predictor] body: _TYPE_BODY | None = None, [e2e-predictor] headers: typing.Mapping[str, str] | None = None, [e2e-predictor] retries: Retry | bool | int | None = None, [e2e-predictor] redirect: bool = True, [e2e-predictor] assert_same_host: bool = True, [e2e-predictor] timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, [e2e-predictor] pool_timeout: int | None = None, [e2e-predictor] release_conn: bool | None = None, [e2e-predictor] chunked: bool = False, [e2e-predictor] body_pos: _TYPE_BODY_POSITION | None = None, [e2e-predictor] preload_content: bool = True, [e2e-predictor] decode_content: bool = True, [e2e-predictor] **response_kw: typing.Any, [e2e-predictor] ) -> BaseHTTPResponse: [e2e-predictor] """ [e2e-predictor] Get a connection from the pool and perform an HTTP request. This is the [e2e-predictor] lowest level call for making a request, so you'll need to specify all [e2e-predictor] the raw details. [e2e-predictor] [e2e-predictor] .. note:: [e2e-predictor] [e2e-predictor] More commonly, it's appropriate to use a convenience method [e2e-predictor] such as :meth:`request`. [e2e-predictor] [e2e-predictor] .. 
note:: [e2e-predictor] [e2e-predictor] `release_conn` will only behave as expected if [e2e-predictor] `preload_content=False` because we want to make [e2e-predictor] `preload_content=False` the default behaviour someday soon without [e2e-predictor] breaking backwards compatibility. [e2e-predictor] [e2e-predictor] :param method: [e2e-predictor] HTTP request method (such as GET, POST, PUT, etc.) [e2e-predictor] [e2e-predictor] :param url: [e2e-predictor] The URL to perform the request on. [e2e-predictor] [e2e-predictor] :param body: [e2e-predictor] Data to send in the request body, either :class:`str`, :class:`bytes`, [e2e-predictor] an iterable of :class:`str`/:class:`bytes`, or a file-like object. [e2e-predictor] [e2e-predictor] :param headers: [e2e-predictor] Dictionary of custom headers to send, such as User-Agent, [e2e-predictor] If-None-Match, etc. If None, pool headers are used. If provided, [e2e-predictor] these headers completely replace any pool-specific headers. [e2e-predictor] [e2e-predictor] :param retries: [e2e-predictor] Configure the number of retries to allow before raising a [e2e-predictor] :class:`~urllib3.exceptions.MaxRetryError` exception. [e2e-predictor] [e2e-predictor] If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a [e2e-predictor] :class:`~urllib3.util.retry.Retry` object for fine-grained control [e2e-predictor] over different types of retries. [e2e-predictor] Pass an integer number to retry connection errors that many times, [e2e-predictor] but no other types of errors. Pass zero to never retry. [e2e-predictor] [e2e-predictor] If ``False``, then retries are disabled and any exception is raised [e2e-predictor] immediately. Also, instead of raising a MaxRetryError on redirects, [e2e-predictor] the redirect response will be returned. [e2e-predictor] [e2e-predictor] :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 
[e2e-predictor] [e2e-predictor] :param redirect: [e2e-predictor] If True, automatically handle redirects (status codes 301, 302, [e2e-predictor] 303, 307, 308). Each redirect counts as a retry. Disabling retries [e2e-predictor] will disable redirect, too. [e2e-predictor] [e2e-predictor] :param assert_same_host: [e2e-predictor] If ``True``, will make sure that the host of the pool requests is [e2e-predictor] consistent else will raise HostChangedError. When ``False``, you can [e2e-predictor] use the pool on an HTTP proxy and request foreign hosts. [e2e-predictor] [e2e-predictor] :param timeout: [e2e-predictor] If specified, overrides the default timeout for this one [e2e-predictor] request. It may be a float (in seconds) or an instance of [e2e-predictor] :class:`urllib3.util.Timeout`. [e2e-predictor] [e2e-predictor] :param pool_timeout: [e2e-predictor] If set and the pool is set to block=True, then this method will [e2e-predictor] block for ``pool_timeout`` seconds and raise EmptyPoolError if no [e2e-predictor] connection is available within the time period. [e2e-predictor] [e2e-predictor] :param bool preload_content: [e2e-predictor] If True, the response's body will be preloaded into memory. [e2e-predictor] [e2e-predictor] :param bool decode_content: [e2e-predictor] If True, will attempt to decode the body based on the [e2e-predictor] 'content-encoding' header. [e2e-predictor] [e2e-predictor] :param release_conn: [e2e-predictor] If False, then the urlopen call will not release the connection [e2e-predictor] back into the pool once a response is received (but will release if [e2e-predictor] you read the entire contents of the response such as when [e2e-predictor] `preload_content=True`). This is useful if you're not preloading [e2e-predictor] the response's content immediately. You will need to call [e2e-predictor] ``r.release_conn()`` on the response ``r`` to return the connection [e2e-predictor] back into the pool. 
If None, it takes the value of ``preload_content`` [e2e-predictor] which defaults to ``True``. [e2e-predictor] [e2e-predictor] :param bool chunked: [e2e-predictor] If True, urllib3 will send the body using chunked transfer [e2e-predictor] encoding. Otherwise, urllib3 will send the body using the standard [e2e-predictor] content-length form. Defaults to False. [e2e-predictor] [e2e-predictor] :param int body_pos: [e2e-predictor] Position to seek to in file-like body in the event of a retry or [e2e-predictor] redirect. Typically this won't need to be set because urllib3 will [e2e-predictor] auto-populate the value when needed. [e2e-predictor] """ [e2e-predictor] parsed_url = parse_url(url) [e2e-predictor] destination_scheme = parsed_url.scheme [e2e-predictor] [e2e-predictor] if headers is None: [e2e-predictor] headers = self.headers [e2e-predictor] [e2e-predictor] if not isinstance(retries, Retry): [e2e-predictor] retries = Retry.from_int(retries, redirect=redirect, default=self.retries) [e2e-predictor] [e2e-predictor] if release_conn is None: [e2e-predictor] release_conn = preload_content [e2e-predictor] [e2e-predictor] # Check host [e2e-predictor] if assert_same_host and not self.is_same_host(url): [e2e-predictor] raise HostChangedError(self, url, retries) [e2e-predictor] [e2e-predictor] # Ensure that the URL we're connecting to is properly encoded [e2e-predictor] if url.startswith("/"): [e2e-predictor] url = to_str(_encode_target(url)) [e2e-predictor] else: [e2e-predictor] url = to_str(parsed_url.url) [e2e-predictor] [e2e-predictor] conn = None [e2e-predictor] [e2e-predictor] # Track whether `conn` needs to be released before [e2e-predictor] # returning/raising/recursing. Update this variable if necessary, and [e2e-predictor] # leave `release_conn` constant throughout the function. That way, if [e2e-predictor] # the function recurses, the original value of `release_conn` will be [e2e-predictor] # passed down into the recursive call, and its value will be respected. 
[e2e-predictor] # [e2e-predictor] # See issue #651 [1] for details. [e2e-predictor] # [e2e-predictor] # [1] [e2e-predictor] release_this_conn = release_conn [e2e-predictor] [e2e-predictor] http_tunnel_required = connection_requires_http_tunnel( [e2e-predictor] self.proxy, self.proxy_config, destination_scheme [e2e-predictor] ) [e2e-predictor] [e2e-predictor] # Merge the proxy headers. Only done when not using HTTP CONNECT. We [e2e-predictor] # have to copy the headers dict so we can safely change it without those [e2e-predictor] # changes being reflected in anyone else's copy. [e2e-predictor] if not http_tunnel_required: [e2e-predictor] headers = headers.copy() # type: ignore[attr-defined] [e2e-predictor] headers.update(self.proxy_headers) # type: ignore[union-attr] [e2e-predictor] [e2e-predictor] # Must keep the exception bound to a separate variable or else Python 3 [e2e-predictor] # complains about UnboundLocalError. [e2e-predictor] err = None [e2e-predictor] [e2e-predictor] # Keep track of whether we cleanly exited the except block. This [e2e-predictor] # ensures we do proper cleanup in finally. [e2e-predictor] clean_exit = False [e2e-predictor] [e2e-predictor] # Rewind body position, if needed. Record current position [e2e-predictor] # for future rewinds in the event of a redirect/retry. [e2e-predictor] body_pos = set_file_position(body, body_pos) [e2e-predictor] [e2e-predictor] try: [e2e-predictor] # Request a connection from the queue. [e2e-predictor] timeout_obj = self._get_timeout(timeout) [e2e-predictor] conn = self._get_conn(timeout=pool_timeout) [e2e-predictor] [e2e-predictor] conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] [e2e-predictor] [e2e-predictor] # Is this a closed/new connection that requires CONNECT tunnelling? 
[e2e-predictor] if self.proxy is not None and http_tunnel_required and conn.is_closed: [e2e-predictor] try: [e2e-predictor] self._prepare_proxy(conn) [e2e-predictor] except (BaseSSLError, OSError, SocketTimeout) as e: [e2e-predictor] self._raise_timeout( [e2e-predictor] err=e, url=self.proxy.url, timeout_value=conn.timeout [e2e-predictor] ) [e2e-predictor] raise [e2e-predictor] [e2e-predictor] # If we're going to release the connection in ``finally:``, then [e2e-predictor] # the response doesn't need to know about the connection. Otherwise [e2e-predictor] # it will also try to release it and we'll have a double-release [e2e-predictor] # mess. [e2e-predictor] response_conn = conn if not release_conn else None [e2e-predictor] [e2e-predictor] # Make the request on the HTTPConnection object [e2e-predictor] response = self._make_request( [e2e-predictor] conn, [e2e-predictor] method, [e2e-predictor] url, [e2e-predictor] timeout=timeout_obj, [e2e-predictor] body=body, [e2e-predictor] headers=headers, [e2e-predictor] chunked=chunked, [e2e-predictor] retries=retries, [e2e-predictor] response_conn=response_conn, [e2e-predictor] preload_content=preload_content, [e2e-predictor] decode_content=decode_content, [e2e-predictor] **response_kw, [e2e-predictor] ) [e2e-predictor] [e2e-predictor] # Everything went great! [e2e-predictor] clean_exit = True [e2e-predictor] [e2e-predictor] except EmptyPoolError: [e2e-predictor] # Didn't get a connection from the pool, no need to clean up [e2e-predictor] clean_exit = True [e2e-predictor] release_this_conn = False [e2e-predictor] raise [e2e-predictor] [e2e-predictor] except ( [e2e-predictor] TimeoutError, [e2e-predictor] HTTPException, [e2e-predictor] OSError, [e2e-predictor] ProtocolError, [e2e-predictor] BaseSSLError, [e2e-predictor] SSLError, [e2e-predictor] CertificateError, [e2e-predictor] ProxyError, [e2e-predictor] ) as e: [e2e-predictor] # Discard the connection for these exceptions. 
It will be
        # replaced during the next _get_conn() call.
        clean_exit = False
        new_e: Exception = e
        if isinstance(e, (BaseSSLError, CertificateError)):
            new_e = SSLError(e)
        if isinstance(
            new_e,
            (
                OSError,
                NewConnectionError,
                TimeoutError,
                SSLError,
                HTTPException,
            ),
        ) and (conn and conn.proxy and not conn.has_connected_to_proxy):
            new_e = _wrap_proxy_error(new_e, conn.proxy.scheme)
        elif isinstance(new_e, (OSError, HTTPException)):
            new_e = ProtocolError("Connection aborted.", new_e)

        retries = retries.increment(
            method, url, error=new_e, _pool=self, _stacktrace=sys.exc_info()[2]
        )
        retries.sleep()

        # Keep track of the error for the retry warning.
        err = e

    finally:
        if not clean_exit:
            # We hit some kind of exception, handled or otherwise. We need
            # to throw the connection away unless explicitly told not to.
            # Close the connection, set the variable to None, and make sure
            # we put the None back in the pool to avoid leaking it.
            if conn:
                conn.close()
                conn = None
            release_this_conn = True

        if release_this_conn:
            # Put the connection back to be reused. If the connection is
            # expired then it will be None, which will get replaced with a
            # fresh connection during _get_conn.
            self._put_conn(conn)

    if not conn:
        # Try again
        log.warning(
            "Retrying (%r) after connection broken by '%r': %s", retries, err, url
        )
>       return self.urlopen(
            method,
            url,
            body,
            headers,
            retries,
            redirect,
            assert_same_host,
            timeout=timeout,
            pool_timeout=pool_timeout,
            release_conn=release_conn,
            chunked=chunked,
            body_pos=body_pos,
            preload_content=preload_content,
            decode_content=decode_content,
            **response_kw,
        )

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:871:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
method = 'POST'
url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io...ry": "512Mi"}, "requests": {"cpu": "10m", "memory": "256Mi"}}, "storageUri": "gs://kfserving-examples/models/pmml"}}}}'
headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'})
retries = Retry(total=0, connect=None, read=None, redirect=None, status=None)
redirect = False, assert_same_host = False, timeout = None, pool_timeout = None
release_conn = True, chunked = False, body_pos = None, preload_content = True
decode_content = True, response_kw = {}
parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices', query=None, fragment=None)
destination_scheme = None, conn = None, release_this_conn = True
http_tunnel_required = False
err = NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.c...a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")
clean_exit = False

    def urlopen(  # type: ignore[override]
        self,
        method: str,
        url: str,
        body: _TYPE_BODY | None = None,
        headers: typing.Mapping[str, str] | None = None,
        retries: Retry | bool | int | None = None,
        redirect: bool = True,
        assert_same_host: bool = True,
        timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
        pool_timeout: int | None = None,
        release_conn: bool | None = None,
        chunked: bool = False,
        body_pos: _TYPE_BODY_POSITION | None = None,
        preload_content: bool = True,
        decode_content: bool = True,
        **response_kw: typing.Any,
    ) -> BaseHTTPResponse:
        """
        Get a connection from the pool and perform an HTTP request. This is the
        lowest level call for making a request, so you'll need to specify all
        the raw details.

        .. note::

           More commonly, it's appropriate to use a convenience method
           such as :meth:`request`.

        .. note::

           `release_conn` will only behave as expected if
           `preload_content=False` because we want to make
           `preload_content=False` the default behaviour someday soon without
           breaking backwards compatibility.

        :param method:
            HTTP request method (such as GET, POST, PUT, etc.)

        :param url:
            The URL to perform the request on.

        :param body:
            Data to send in the request body, either :class:`str`, :class:`bytes`,
            an iterable of :class:`str`/:class:`bytes`, or a file-like object.

        :param headers:
            Dictionary of custom headers to send, such as User-Agent,
            If-None-Match, etc. If None, pool headers are used. If provided,
            these headers completely replace any pool-specific headers.

        :param retries:
            Configure the number of retries to allow before raising a
            :class:`~urllib3.exceptions.MaxRetryError` exception.

            If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a
            :class:`~urllib3.util.retry.Retry` object for fine-grained control
            over different types of retries.
            Pass an integer number to retry connection errors that many times,
            but no other types of errors. Pass zero to never retry.

            If ``False``, then retries are disabled and any exception is raised
            immediately. Also, instead of raising a MaxRetryError on redirects,
            the redirect response will be returned.

        :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.

        :param redirect:
            If True, automatically handle redirects (status codes 301, 302,
            303, 307, 308). Each redirect counts as a retry. Disabling retries
            will disable redirect, too.

        :param assert_same_host:
            If ``True``, will make sure that the host of the pool requests is
            consistent else will raise HostChangedError. When ``False``, you can
            use the pool on an HTTP proxy and request foreign hosts.

        :param timeout:
            If specified, overrides the default timeout for this one
            request. It may be a float (in seconds) or an instance of
            :class:`urllib3.util.Timeout`.

        :param pool_timeout:
            If set and the pool is set to block=True, then this method will
            block for ``pool_timeout`` seconds and raise EmptyPoolError if no
            connection is available within the time period.

        :param bool preload_content:
            If True, the response's body will be preloaded into memory.

        :param bool decode_content:
            If True, will attempt to decode the body based on the
            'content-encoding' header.

        :param release_conn:
            If False, then the urlopen call will not release the connection
            back into the pool once a response is received (but will release if
            you read the entire contents of the response such as when
            `preload_content=True`). This is useful if you're not preloading
            the response's content immediately. You will need to call
            ``r.release_conn()`` on the response ``r`` to return the connection
            back into the pool. If None, it takes the value of ``preload_content``
            which defaults to ``True``.

        :param bool chunked:
            If True, urllib3 will send the body using chunked transfer
            encoding. Otherwise, urllib3 will send the body using the standard
            content-length form. Defaults to False.

        :param int body_pos:
            Position to seek to in file-like body in the event of a retry or
            redirect. Typically this won't need to be set because urllib3 will
            auto-populate the value when needed.
        """
        parsed_url = parse_url(url)
        destination_scheme = parsed_url.scheme

        if headers is None:
            headers = self.headers

        if not isinstance(retries, Retry):
            retries = Retry.from_int(retries, redirect=redirect, default=self.retries)

        if release_conn is None:
            release_conn = preload_content

        # Check host
        if assert_same_host and not self.is_same_host(url):
            raise HostChangedError(self, url, retries)

        # Ensure that the URL we're connecting to is properly encoded
        if url.startswith("/"):
            url = to_str(_encode_target(url))
        else:
            url = to_str(parsed_url.url)

        conn = None

        # Track whether `conn` needs to be released before
        # returning/raising/recursing. Update this variable if necessary, and
        # leave `release_conn` constant throughout the function. That way, if
        # the function recurses, the original value of `release_conn` will be
        # passed down into the recursive call, and its value will be respected.
        #
        # See issue #651 [1] for details.
        #
        # [1]
        release_this_conn = release_conn

        http_tunnel_required = connection_requires_http_tunnel(
            self.proxy, self.proxy_config, destination_scheme
        )

        # Merge the proxy headers. Only done when not using HTTP CONNECT. We
        # have to copy the headers dict so we can safely change it without those
        # changes being reflected in anyone else's copy.
        if not http_tunnel_required:
            headers = headers.copy()  # type: ignore[attr-defined]
            headers.update(self.proxy_headers)  # type: ignore[union-attr]

        # Must keep the exception bound to a separate variable or else Python 3
        # complains about UnboundLocalError.
        err = None

        # Keep track of whether we cleanly exited the except block. This
        # ensures we do proper cleanup in finally.
        clean_exit = False

        # Rewind body position, if needed. Record current position
        # for future rewinds in the event of a redirect/retry.
        body_pos = set_file_position(body, body_pos)

        try:
            # Request a connection from the queue.
            timeout_obj = self._get_timeout(timeout)
            conn = self._get_conn(timeout=pool_timeout)

            conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]

            # Is this a closed/new connection that requires CONNECT tunnelling?
            if self.proxy is not None and http_tunnel_required and conn.is_closed:
                try:
                    self._prepare_proxy(conn)
                except (BaseSSLError, OSError, SocketTimeout) as e:
                    self._raise_timeout(
                        err=e, url=self.proxy.url, timeout_value=conn.timeout
                    )
                    raise

            # If we're going to release the connection in ``finally:``, then
            # the response doesn't need to know about the connection. Otherwise
            # it will also try to release it and we'll have a double-release
            # mess.
            response_conn = conn if not release_conn else None

            # Make the request on the HTTPConnection object
            response = self._make_request(
                conn,
                method,
                url,
                timeout=timeout_obj,
                body=body,
                headers=headers,
                chunked=chunked,
                retries=retries,
                response_conn=response_conn,
                preload_content=preload_content,
                decode_content=decode_content,
                **response_kw,
            )

            # Everything went great!
            clean_exit = True

        except EmptyPoolError:
            # Didn't get a connection from the pool, no need to clean up
            clean_exit = True
            release_this_conn = False
            raise

        except (
            TimeoutError,
            HTTPException,
            OSError,
            ProtocolError,
            BaseSSLError,
            SSLError,
            CertificateError,
            ProxyError,
        ) as e:
            # Discard the connection for these exceptions.
It will be
        # replaced during the next _get_conn() call.
        clean_exit = False
        new_e: Exception = e
        if isinstance(e, (BaseSSLError, CertificateError)):
            new_e = SSLError(e)
        if isinstance(
            new_e,
            (
                OSError,
                NewConnectionError,
                TimeoutError,
                SSLError,
                HTTPException,
            ),
        ) and (conn and conn.proxy and not conn.has_connected_to_proxy):
            new_e = _wrap_proxy_error(new_e, conn.proxy.scheme)
        elif isinstance(new_e, (OSError, HTTPException)):
            new_e = ProtocolError("Connection aborted.", new_e)

>       retries = retries.increment(
            method, url, error=new_e, _pool=self, _stacktrace=sys.exc_info()[2]
        )

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:841:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = Retry(total=0, connect=None, read=None, redirect=None, status=None)
method = 'POST'
url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
response = None
error = NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.c...a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")
_pool =
_stacktrace =

    def increment(
        self,
        method: str | None = None,
        url: str | None = None,
        response: BaseHTTPResponse | None = None,
        error: Exception | None = None,
        _pool: ConnectionPool | None = None,
        _stacktrace: TracebackType | None = None,
    ) -> Self:
        """Return a new Retry object with incremented retry counters.

        :param response: A response object, or None, if the server did not
            return a response.
        :type response: :class:`~urllib3.response.BaseHTTPResponse`
        :param Exception error: An error encountered during the request, or
            None if the response was received successfully.

        :return: A new ``Retry`` object.
        """
        if self.total is False and error:
            # Disabled, indicate to re-raise the error.
            raise reraise(type(error), error, _stacktrace)

        total = self.total
        if total is not None:
            total -= 1

        connect = self.connect
        read = self.read
        redirect = self.redirect
        status_count = self.status
        other = self.other
        cause = "unknown"
        status = None
        redirect_location = None

        if error and self._is_connection_error(error):
            # Connect retry?
            if connect is False:
                raise reraise(type(error), error, _stacktrace)
            elif connect is not None:
                connect -= 1

        elif error and self._is_read_error(error):
            # Read retry?
            if read is False or method is None or not self._is_method_retryable(method):
                raise reraise(type(error), error, _stacktrace)
            elif read is not None:
                read -= 1

        elif error:
            # Other retry?
            if other is not None:
                other -= 1

        elif response and response.get_redirect_location():
            # Redirect retry?
            if redirect is not None:
                redirect -= 1
            cause = "too many redirects"
            response_redirect_location = response.get_redirect_location()
            if response_redirect_location:
                redirect_location = response_redirect_location
            status = response.status

        else:
            # Incrementing because of a server error like a 500 in
            # status_forcelist and the given method is in the allowed_methods
            cause = ResponseError.GENERIC_ERROR
            if response and response.status:
                if status_count is not None:
                    status_count -= 1
                cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
                status = response.status

        history = self.history + (
            RequestHistory(method, url, error, status, redirect_location),
        )

        new_retry = self.new(
            total=total,
            connect=connect,
            read=read,
            redirect=redirect,
            status=status_count,
            other=other,
            history=history,
        )

        if new_retry.is_exhausted():
            reason = error or ResponseError(cause)
>           raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
E           urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Max retries exceeded with url: /apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices (Caused by NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)"))

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError
------------------------------ Captured log call -------------------------------
WARNING  urllib3.connectionpool:connectionpool.py:868 Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")': /apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices
WARNING  urllib3.connectionpool:connectionpool.py:868 Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")': /apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices
WARNING  urllib3.connectionpool:connectionpool.py:868 Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")': /apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices
___________________________ test_pmml_runtime_kserve ___________________________
[gw0] linux -- Python 3.11.13 /workspace/source/python/kserve/.venv/bin/python

self =

    def _new_conn(self) -> socket.socket:
        """Establish a socket connection and set nodelay settings on it.

        :return: New socket connection.
        """
        try:
>           sock = connection.create_connection(
                (self._dns_host, self.port),
                self.timeout,
                source_address=self.source_address,
                socket_options=self.socket_options,
            )

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connection.py:204:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

address = ('a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', 6443)
timeout = None, source_address = None, socket_options = [(6, 1, 1)]

    def create_connection(
        address: tuple[str, int],
        timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
        source_address: tuple[str, int] | None = None,
        socket_options: _TYPE_SOCKET_OPTIONS | None = None,
    ) -> socket.socket:
        """Connect to *address* and return the socket object.

        Convenience function. Connect to *address* (a 2-tuple ``(host,
        port)``) and return the socket object. Passing the optional
        *timeout* parameter will set the timeout on the socket instance
        before attempting to connect.
If no *timeout* is supplied, the [e2e-predictor] global default timeout setting returned by :func:`socket.getdefaulttimeout` [e2e-predictor] is used. If *source_address* is set it must be a tuple of (host, port) [e2e-predictor] for the socket to bind as a source address before making the connection. [e2e-predictor] An host of '' or port 0 tells the OS to use the default. [e2e-predictor] """ [e2e-predictor] [e2e-predictor] host, port = address [e2e-predictor] if host.startswith("["): [e2e-predictor] host = host.strip("[]") [e2e-predictor] err = None [e2e-predictor] [e2e-predictor] # Using the value from allowed_gai_family() in the context of getaddrinfo lets [e2e-predictor] # us select whether to work with IPv4 DNS records, IPv6 records, or both. [e2e-predictor] # The original create_connection function always returns all records. [e2e-predictor] family = allowed_gai_family() [e2e-predictor] [e2e-predictor] try: [e2e-predictor] host.encode("idna") [e2e-predictor] except UnicodeError: [e2e-predictor] raise LocationParseError(f"'{host}', label empty or too long") from None [e2e-predictor] [e2e-predictor] > for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/util/connection.py:60: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] host = 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' [e2e-predictor] port = 6443, family = [e2e-predictor] type = , proto = 0, flags = 0 [e2e-predictor] [e2e-predictor] def getaddrinfo(host, port, family=0, type=0, proto=0, flags=0): [e2e-predictor] """Resolve host and port into list of address info entries. [e2e-predictor] [e2e-predictor] Translate the host/port argument into a sequence of 5-tuples that contain [e2e-predictor] all the necessary arguments for creating a socket connected to that service. 
[e2e-predictor] host is a domain name, a string representation of an IPv4/v6 address or [e2e-predictor] None. port is a string service name such as 'http', a numeric port number or [e2e-predictor] None. By passing None as the value of host and port, you can pass NULL to [e2e-predictor] the underlying C API. [e2e-predictor] [e2e-predictor] The family, type and proto arguments can be optionally specified in order to [e2e-predictor] narrow the list of addresses returned. Passing zero as a value for each of [e2e-predictor] these arguments selects the full range of results. [e2e-predictor] """ [e2e-predictor] # We override this function since we want to translate the numeric family [e2e-predictor] # and socket type values to enum constants. [e2e-predictor] addrlist = [] [e2e-predictor] > for res in _socket.getaddrinfo(host, port, family, type, proto, flags): [e2e-predictor] E socket.gaierror: [Errno -2] Name or service not known [e2e-predictor] [e2e-predictor] /usr/lib64/python3.11/socket.py:974: gaierror [e2e-predictor] [e2e-predictor] The above exception was the direct cause of the following exception: [e2e-predictor] [e2e-predictor] self = [e2e-predictor] method = 'POST' [e2e-predictor] url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices' [e2e-predictor] body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io...ry": "512Mi"}, "requests": {"cpu": "10m", "memory": "256Mi"}}, "storageUri": "gs://kfserving-examples/models/pmml"}}}}' [e2e-predictor] headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'}) [e2e-predictor] retries = Retry(total=0, connect=None, read=None, redirect=None, status=None) [e2e-predictor] redirect = False, assert_same_host = False, timeout = None, pool_timeout = None [e2e-predictor] release_conn = True, chunked = False, body_pos = None, preload_content = True 
decode_content = True, response_kw = {}
parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices', query=None, fragment=None)
destination_scheme = None, conn = None, release_this_conn = True
http_tunnel_required = False, err = None, clean_exit = False

    def urlopen(  # type: ignore[override]
        self,
        method: str,
        url: str,
        body: _TYPE_BODY | None = None,
        headers: typing.Mapping[str, str] | None = None,
        retries: Retry | bool | int | None = None,
        redirect: bool = True,
        assert_same_host: bool = True,
        timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
        pool_timeout: int | None = None,
        release_conn: bool | None = None,
        chunked: bool = False,
        body_pos: _TYPE_BODY_POSITION | None = None,
        preload_content: bool = True,
        decode_content: bool = True,
        **response_kw: typing.Any,
    ) -> BaseHTTPResponse:
        """
        Get a connection from the pool and perform an HTTP request. This is the
        lowest level call for making a request, so you'll need to specify all
        the raw details.

        .. note::

           More commonly, it's appropriate to use a convenience method
           such as :meth:`request`.

        .. note::

           `release_conn` will only behave as expected if
           `preload_content=False` because we want to make
           `preload_content=False` the default behaviour someday soon without
           breaking backwards compatibility.

        :param method:
            HTTP request method (such as GET, POST, PUT, etc.)

        :param url:
            The URL to perform the request on.

        :param body:
            Data to send in the request body, either :class:`str`, :class:`bytes`,
            an iterable of :class:`str`/:class:`bytes`, or a file-like object.

        :param headers:
            Dictionary of custom headers to send, such as User-Agent,
            If-None-Match, etc. If None, pool headers are used. If provided,
            these headers completely replace any pool-specific headers.

        :param retries:
            Configure the number of retries to allow before raising a
            :class:`~urllib3.exceptions.MaxRetryError` exception.

            If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a
            :class:`~urllib3.util.retry.Retry` object for fine-grained control
            over different types of retries.
            Pass an integer number to retry connection errors that many times,
            but no other types of errors. Pass zero to never retry.

            If ``False``, then retries are disabled and any exception is raised
            immediately. Also, instead of raising a MaxRetryError on redirects,
            the redirect response will be returned.

        :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.

        :param redirect:
            If True, automatically handle redirects (status codes 301, 302,
            303, 307, 308). Each redirect counts as a retry. Disabling retries
            will disable redirect, too.

        :param assert_same_host:
            If ``True``, will make sure that the host of the pool requests is
            consistent else will raise HostChangedError. When ``False``, you can
            use the pool on an HTTP proxy and request foreign hosts.

        :param timeout:
            If specified, overrides the default timeout for this one
            request. It may be a float (in seconds) or an instance of
            :class:`urllib3.util.Timeout`.

        :param pool_timeout:
            If set and the pool is set to block=True, then this method will
            block for ``pool_timeout`` seconds and raise EmptyPoolError if no
            connection is available within the time period.

        :param bool preload_content:
            If True, the response's body will be preloaded into memory.

        :param bool decode_content:
            If True, will attempt to decode the body based on the
            'content-encoding' header.

        :param release_conn:
            If False, then the urlopen call will not release the connection
            back into the pool once a response is received (but will release if
            you read the entire contents of the response such as when
            `preload_content=True`). This is useful if you're not preloading
            the response's content immediately. You will need to call
            ``r.release_conn()`` on the response ``r`` to return the connection
            back into the pool. If None, it takes the value of ``preload_content``
            which defaults to ``True``.

        :param bool chunked:
            If True, urllib3 will send the body using chunked transfer
            encoding. Otherwise, urllib3 will send the body using the standard
            content-length form. Defaults to False.

        :param int body_pos:
            Position to seek to in file-like body in the event of a retry or
            redirect. Typically this won't need to be set because urllib3 will
            auto-populate the value when needed.
        """
        parsed_url = parse_url(url)
        destination_scheme = parsed_url.scheme

        if headers is None:
            headers = self.headers

        if not isinstance(retries, Retry):
            retries = Retry.from_int(retries, redirect=redirect, default=self.retries)

        if release_conn is None:
            release_conn = preload_content

        # Check host
        if assert_same_host and not self.is_same_host(url):
            raise HostChangedError(self, url, retries)

        # Ensure that the URL we're connecting to is properly encoded
        if url.startswith("/"):
            url = to_str(_encode_target(url))
        else:
            url = to_str(parsed_url.url)

        conn = None

        # Track whether `conn` needs to be released before
        # returning/raising/recursing. Update this variable if necessary, and
        # leave `release_conn` constant throughout the function. That way, if
        # the function recurses, the original value of `release_conn` will be
        # passed down into the recursive call, and its value will be respected.
        #
        # See issue #651 [1] for details.
        #
        # [1]
        release_this_conn = release_conn

        http_tunnel_required = connection_requires_http_tunnel(
            self.proxy, self.proxy_config, destination_scheme
        )

        # Merge the proxy headers. Only done when not using HTTP CONNECT. We
        # have to copy the headers dict so we can safely change it without those
        # changes being reflected in anyone else's copy.
        if not http_tunnel_required:
            headers = headers.copy()  # type: ignore[attr-defined]
            headers.update(self.proxy_headers)  # type: ignore[union-attr]

        # Must keep the exception bound to a separate variable or else Python 3
        # complains about UnboundLocalError.
        err = None

        # Keep track of whether we cleanly exited the except block. This
        # ensures we do proper cleanup in finally.
        clean_exit = False

        # Rewind body position, if needed. Record current position
        # for future rewinds in the event of a redirect/retry.
        body_pos = set_file_position(body, body_pos)

        try:
            # Request a connection from the queue.
            timeout_obj = self._get_timeout(timeout)
            conn = self._get_conn(timeout=pool_timeout)

            conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]

            # Is this a closed/new connection that requires CONNECT tunnelling?
            if self.proxy is not None and http_tunnel_required and conn.is_closed:
                try:
                    self._prepare_proxy(conn)
                except (BaseSSLError, OSError, SocketTimeout) as e:
                    self._raise_timeout(
                        err=e, url=self.proxy.url, timeout_value=conn.timeout
                    )
                    raise

            # If we're going to release the connection in ``finally:``, then
            # the response doesn't need to know about the connection. Otherwise
            # it will also try to release it and we'll have a double-release
            # mess.
            response_conn = conn if not release_conn else None

            # Make the request on the HTTPConnection object
>           response = self._make_request(
                conn,
                method,
                url,
                timeout=timeout_obj,
                body=body,
                headers=headers,
                chunked=chunked,
                retries=retries,
                response_conn=response_conn,
                preload_content=preload_content,
                decode_content=decode_content,
                **response_kw,
            )

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:787:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
conn =
method = 'POST'
url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io...ry": "512Mi"}, "requests": {"cpu": "10m", "memory": "256Mi"}}, "storageUri": "gs://kfserving-examples/models/pmml"}}}}'
headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'})
retries = Retry(total=0, connect=None, read=None, redirect=None, status=None)
timeout = Timeout(connect=None, read=None, total=None), chunked = False
response_conn = None, preload_content = True, decode_content = True
enforce_content_length = True

    def _make_request(
        self,
        conn: BaseHTTPConnection,
        method: str,
        url: str,
        body: _TYPE_BODY | None = None,
        headers: typing.Mapping[str, str] | None = None,
        retries: Retry | None = None,
        timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
        chunked: bool = False,
        response_conn: BaseHTTPConnection | None = None,
        preload_content: bool = True,
        decode_content: bool = True,
        enforce_content_length: bool = True,
    ) -> BaseHTTPResponse:
        """
        Perform a request on a given urllib connection object taken from our
        pool.

        :param conn:
            a connection from one of our connection pools

        :param method:
            HTTP request method (such as GET, POST, PUT, etc.)

        :param url:
            The URL to perform the request on.

        :param body:
            Data to send in the request body, either :class:`str`, :class:`bytes`,
            an iterable of :class:`str`/:class:`bytes`, or a file-like object.

        :param headers:
            Dictionary of custom headers to send, such as User-Agent,
            If-None-Match, etc. If None, pool headers are used. If provided,
            these headers completely replace any pool-specific headers.

        :param retries:
            Configure the number of retries to allow before raising a
            :class:`~urllib3.exceptions.MaxRetryError` exception.

            Pass ``None`` to retry until you receive a response. Pass a
            :class:`~urllib3.util.retry.Retry` object for fine-grained control
            over different types of retries.
            Pass an integer number to retry connection errors that many times,
            but no other types of errors. Pass zero to never retry.

            If ``False``, then retries are disabled and any exception is raised
            immediately. Also, instead of raising a MaxRetryError on redirects,
            the redirect response will be returned.

        :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.

        :param timeout:
            If specified, overrides the default timeout for this one
            request. It may be a float (in seconds) or an instance of
            :class:`urllib3.util.Timeout`.

        :param chunked:
            If True, urllib3 will send the body using chunked transfer
            encoding. Otherwise, urllib3 will send the body using the standard
            content-length form. Defaults to False.

        :param response_conn:
            Set this to ``None`` if you will handle releasing the connection or
            set the connection to have the response release it.

        :param preload_content:
            If True, the response's body will be preloaded during construction.
        :param decode_content:
            If True, will attempt to decode the body based on the
            'content-encoding' header.

        :param enforce_content_length:
            Enforce content length checking. Body returned by server must match
            value of Content-Length header, if present. Otherwise, raise error.
        """
        self.num_requests += 1

        timeout_obj = self._get_timeout(timeout)
        timeout_obj.start_connect()
        conn.timeout = Timeout.resolve_default_timeout(timeout_obj.connect_timeout)

        try:
            # Trigger any extra validation we need to do.
            try:
                self._validate_conn(conn)
            except (SocketTimeout, BaseSSLError) as e:
                self._raise_timeout(err=e, url=url, timeout_value=conn.timeout)
                raise

        # _validate_conn() starts the connection to an HTTPS proxy
        # so we need to wrap errors with 'ProxyError' here too.
        except (
            OSError,
            NewConnectionError,
            TimeoutError,
            BaseSSLError,
            CertificateError,
            SSLError,
        ) as e:
            new_e: Exception = e
            if isinstance(e, (BaseSSLError, CertificateError)):
                new_e = SSLError(e)
            # If the connection didn't successfully connect to it's proxy
            # then there
            if isinstance(
                new_e, (OSError, NewConnectionError, TimeoutError, SSLError)
            ) and (conn and conn.proxy and not conn.has_connected_to_proxy):
                new_e = _wrap_proxy_error(new_e, conn.proxy.scheme)
>           raise new_e

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:488:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
conn =
method = 'POST'
url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io...ry": "512Mi"}, "requests": {"cpu": "10m", "memory": "256Mi"}}, "storageUri": "gs://kfserving-examples/models/pmml"}}}}'
headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'})
retries = Retry(total=0, connect=None, read=None, redirect=None, status=None)
timeout = Timeout(connect=None, read=None, total=None), chunked = False
response_conn = None, preload_content = True, decode_content = True
enforce_content_length = True

    def _make_request(
        self,
        conn: BaseHTTPConnection,
        method: str,
        url: str,
        body: _TYPE_BODY | None = None,
        headers: typing.Mapping[str, str] | None = None,
        retries: Retry | None = None,
        timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
        chunked: bool = False,
        response_conn: BaseHTTPConnection | None = None,
        preload_content: bool = True,
        decode_content: bool = True,
        enforce_content_length: bool = True,
    ) -> BaseHTTPResponse:
        """
        Perform a request on a given urllib connection object taken from our
        pool.

        :param conn:
            a connection from one of our connection pools

        :param method:
            HTTP request method (such as GET, POST, PUT, etc.)

        :param url:
            The URL to perform the request on.

        :param body:
            Data to send in the request body, either :class:`str`, :class:`bytes`,
            an iterable of :class:`str`/:class:`bytes`, or a file-like object.

        :param headers:
            Dictionary of custom headers to send, such as User-Agent,
            If-None-Match, etc. If None, pool headers are used. If provided,
            these headers completely replace any pool-specific headers.

        :param retries:
            Configure the number of retries to allow before raising a
            :class:`~urllib3.exceptions.MaxRetryError` exception.

            Pass ``None`` to retry until you receive a response. Pass a
            :class:`~urllib3.util.retry.Retry` object for fine-grained control
            over different types of retries.
            Pass an integer number to retry connection errors that many times,
            but no other types of errors. Pass zero to never retry.

            If ``False``, then retries are disabled and any exception is raised
            immediately. Also, instead of raising a MaxRetryError on redirects,
            the redirect response will be returned.

        :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.

        :param timeout:
            If specified, overrides the default timeout for this one
            request. It may be a float (in seconds) or an instance of
            :class:`urllib3.util.Timeout`.

        :param chunked:
            If True, urllib3 will send the body using chunked transfer
            encoding. Otherwise, urllib3 will send the body using the standard
            content-length form. Defaults to False.

        :param response_conn:
            Set this to ``None`` if you will handle releasing the connection or
            set the connection to have the response release it.

        :param preload_content:
            If True, the response's body will be preloaded during construction.

        :param decode_content:
            If True, will attempt to decode the body based on the
            'content-encoding' header.

        :param enforce_content_length:
            Enforce content length checking. Body returned by server must match
            value of Content-Length header, if present. Otherwise, raise error.
        """
        self.num_requests += 1

        timeout_obj = self._get_timeout(timeout)
        timeout_obj.start_connect()
        conn.timeout = Timeout.resolve_default_timeout(timeout_obj.connect_timeout)

        try:
            # Trigger any extra validation we need to do.
            try:
>               self._validate_conn(conn)

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:464:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
conn =

    def _validate_conn(self, conn: BaseHTTPConnection) -> None:
        """
        Called right before a request is made, after the socket is created.
        """
        super()._validate_conn(conn)

        # Force connect early to allow us to validate the connection.
        if conn.is_closed:
>           conn.connect()

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:1093:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =

    def connect(self) -> None:
        # Today we don't need to be doing this step before the /actual/ socket
        # connection, however in the future we'll need to decide whether to
        # create a new socket or re-use an existing "shared" socket as a part
        # of the HTTP/2 handshake dance.
        if self._tunnel_host is not None and self._tunnel_port is not None:
            probe_http2_host = self._tunnel_host
            probe_http2_port = self._tunnel_port
        else:
            probe_http2_host = self.host
            probe_http2_port = self.port

        # Check if the target origin supports HTTP/2.
        # If the value comes back as 'None' it means that the current thread
        # is probing for HTTP/2 support. Otherwise, we're waiting for another
        # probe to complete, or we get a value right away.
        target_supports_http2: bool | None
        if "h2" in ssl_.ALPN_PROTOCOLS:
            target_supports_http2 = http2_probe.acquire_and_get(
                host=probe_http2_host, port=probe_http2_port
            )
        else:
            # If HTTP/2 isn't going to be offered it doesn't matter if
            # the target supports HTTP/2. Don't want to make a probe.
            target_supports_http2 = False

        if self._connect_callback is not None:
            self._connect_callback(
                "before connect",
                thread_id=threading.get_ident(),
                target_supports_http2=target_supports_http2,
            )

        try:
            sock: socket.socket | ssl.SSLSocket
>           self.sock = sock = self._new_conn()

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connection.py:759:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =

    def _new_conn(self) -> socket.socket:
        """Establish a socket connection and set nodelay settings on it.

        :return: New socket connection.
        """
        try:
            sock = connection.create_connection(
                (self._dns_host, self.port),
                self.timeout,
                source_address=self.source_address,
                socket_options=self.socket_options,
            )
        except socket.gaierror as e:
>           raise NameResolutionError(self.host, self, e) from e
E           urllib3.exceptions.NameResolutionError: HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connection.py:211: NameResolutionError

The above exception was the direct cause of the following exception:

rest_v1_client =

    @pytest.mark.predictor
    @pytest.mark.asyncio(scope="session")
    async def test_pmml_runtime_kserve(rest_v1_client):
        service_name = "isvc-pmml-runtime"
        predictor = V1beta1PredictorSpec(
            min_replicas=1,
            model=V1beta1ModelSpec(
                model_format=V1beta1ModelFormat(
                    name="pmml",
                ),
                storage_uri="gs://kfserving-examples/models/pmml",
                resources=V1ResourceRequirements(
                    requests={"cpu": "10m", "memory": "256Mi"},
                    limits={"cpu": "100m", "memory": "512Mi"},
                ),
            ),
        )

        isvc = V1beta1InferenceService(
            api_version=constants.KSERVE_V1BETA1,
            kind=constants.KSERVE_KIND_INFERENCESERVICE,
            metadata=client.V1ObjectMeta(
                name=service_name,
                namespace=KSERVE_TEST_NAMESPACE,
                labels={
                    constants.KSERVE_LABEL_NETWORKING_VISIBILITY: constants.KSERVE_LABEL_NETWORKING_VISIBILITY_EXPOSED,
                },
            ),
            spec=V1beta1InferenceServiceSpec(predictor=predictor),
        )

        kserve_client = KServeClient(
            config_file=os.environ.get("KUBECONFIG", "~/.kube/config")
        )
>       kserve_client.create(isvc)

predictor/test_pmml.py:119:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
inferenceservice = {'api_version': 'serving.kserve.io/v1beta1',
 'kind': 'InferenceService',
 'metadata': {'annotations': None,
 ... 'worker_spec': None,
 'xgboost': None},
 'transformer': None},
 'status': None}
namespace = 'kserve-ci-e2e-test', watch = False, timeout_seconds = 600

    def create(
        self, inferenceservice, namespace=None, watch=False, timeout_seconds=600
    ):  # pylint:disable=inconsistent-return-statements
        """
        Create the inference service
        :param inferenceservice: inference service object
        :param namespace: defaults to current or default namespace
        :param watch: True to watch the created service until timeout elapsed or status is ready
        :param timeout_seconds: timeout seconds for watch, default to 600s
        :return: created inference service
        """

        version = inferenceservice.api_version.split("/")[1]

        if namespace is None:
            namespace =
utils.get_isvc_namespace(inferenceservice) [e2e-predictor] [e2e-predictor] try: [e2e-predictor] > outputs = self.api_instance.create_namespaced_custom_object( [e2e-predictor] constants.KSERVE_GROUP, [e2e-predictor] version, [e2e-predictor] namespace, [e2e-predictor] constants.KSERVE_PLURAL_INFERENCESERVICE, [e2e-predictor] inferenceservice, [e2e-predictor] ) [e2e-predictor] [e2e-predictor] ../../python/kserve/kserve/api/kserve_client.py:145: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] group = 'serving.kserve.io', version = 'v1beta1' [e2e-predictor] namespace = 'kserve-ci-e2e-test', plural = 'inferenceservices' [e2e-predictor] body = {'api_version': 'serving.kserve.io/v1beta1', [e2e-predictor] 'kind': 'InferenceService', [e2e-predictor] 'metadata': {'annotations': None, [e2e-predictor] ... 'worker_spec': None, [e2e-predictor] 'xgboost': None}, [e2e-predictor] 'transformer': None}, [e2e-predictor] 'status': None} [e2e-predictor] kwargs = {'_return_http_data_only': True} [e2e-predictor] [e2e-predictor] def create_namespaced_custom_object(self, group, version, namespace, plural, body, **kwargs): # noqa: E501 [e2e-predictor] """create_namespaced_custom_object # noqa: E501 [e2e-predictor] [e2e-predictor] Creates a namespace scoped Custom object # noqa: E501 [e2e-predictor] This method makes a synchronous HTTP request by default. 
To make an [e2e-predictor] asynchronous HTTP request, please pass async_req=True [e2e-predictor] >>> thread = api.create_namespaced_custom_object(group, version, namespace, plural, body, async_req=True) [e2e-predictor] >>> result = thread.get() [e2e-predictor] [e2e-predictor] :param async_req bool: execute request asynchronously [e2e-predictor] :param str group: The custom resource's group name (required) [e2e-predictor] :param str version: The custom resource's version (required) [e2e-predictor] :param str namespace: The custom resource's namespace (required) [e2e-predictor] :param str plural: The custom resource's plural name. For TPRs this would be lowercase plural kind. (required) [e2e-predictor] :param object body: The JSON schema of the Resource to create. (required) [e2e-predictor] :param str pretty: If 'true', then the output is pretty printed. [e2e-predictor] :param str dry_run: When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed [e2e-predictor] :param str field_manager: fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint. [e2e-predictor] :param str field_validation: fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. 
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. (optional) [e2e-predictor] :param _preload_content: if False, the urllib3.HTTPResponse object will [e2e-predictor] be returned without reading/decoding response [e2e-predictor] data. Default is True. [e2e-predictor] :param _request_timeout: timeout setting for this request. If one [e2e-predictor] number provided, it will be total request [e2e-predictor] timeout. It can also be a pair (tuple) of [e2e-predictor] (connection, read) timeouts. [e2e-predictor] :return: object [e2e-predictor] If the method is called asynchronously, [e2e-predictor] returns the request thread. [e2e-predictor] """ [e2e-predictor] kwargs['_return_http_data_only'] = True [e2e-predictor] > return self.create_namespaced_custom_object_with_http_info(group, version, namespace, plural, body, **kwargs) # noqa: E501 [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api/custom_objects_api.py:231: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] group = 'serving.kserve.io', version = 'v1beta1' [e2e-predictor] namespace = 'kserve-ci-e2e-test', plural = 'inferenceservices' [e2e-predictor] body = {'api_version': 'serving.kserve.io/v1beta1', [e2e-predictor] 'kind': 'InferenceService', [e2e-predictor] 'metadata': {'annotations': None, [e2e-predictor] ... 
'worker_spec': None, [e2e-predictor] 'xgboost': None}, [e2e-predictor] 'transformer': None}, [e2e-predictor] 'status': None} [e2e-predictor] kwargs = {'_return_http_data_only': True} [e2e-predictor] local_var_params = {'_return_http_data_only': True, 'all_params': ['group', 'version', 'namespace', 'plural', 'body', 'pretty', ...], 'au... 'worker_spec': None, [e2e-predictor] 'xgboost': None}, [e2e-predictor] 'transformer': None}, [e2e-predictor] 'status': None}, ...} [e2e-predictor] all_params = ['group', 'version', 'namespace', 'plural', 'body', 'pretty', ...] [e2e-predictor] key = '_return_http_data_only', val = True, collection_formats = {} [e2e-predictor] path_params = {'group': 'serving.kserve.io', 'namespace': 'kserve-ci-e2e-test', 'plural': 'inferenceservices', 'version': 'v1beta1'} [e2e-predictor] query_params = [] [e2e-predictor] [e2e-predictor] def create_namespaced_custom_object_with_http_info(self, group, version, namespace, plural, body, **kwargs): # noqa: E501 [e2e-predictor] """create_namespaced_custom_object # noqa: E501 [e2e-predictor] [e2e-predictor] Creates a namespace scoped Custom object # noqa: E501 [e2e-predictor] This method makes a synchronous HTTP request by default. To make an [e2e-predictor] asynchronous HTTP request, please pass async_req=True [e2e-predictor] >>> thread = api.create_namespaced_custom_object_with_http_info(group, version, namespace, plural, body, async_req=True) [e2e-predictor] >>> result = thread.get() [e2e-predictor] [e2e-predictor] :param async_req bool: execute request asynchronously [e2e-predictor] :param str group: The custom resource's group name (required) [e2e-predictor] :param str version: The custom resource's version (required) [e2e-predictor] :param str namespace: The custom resource's namespace (required) [e2e-predictor] :param str plural: The custom resource's plural name. For TPRs this would be lowercase plural kind. (required) [e2e-predictor] :param object body: The JSON schema of the Resource to create. 
(required) [e2e-predictor] :param str pretty: If 'true', then the output is pretty printed. [e2e-predictor] :param str dry_run: When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed [e2e-predictor] :param str field_manager: fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint. [e2e-predictor] :param str field_validation: fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. (optional) [e2e-predictor] :param _return_http_data_only: response data without head status code [e2e-predictor] and headers [e2e-predictor] :param _preload_content: if False, the urllib3.HTTPResponse object will [e2e-predictor] be returned without reading/decoding response [e2e-predictor] data. Default is True. 
[e2e-predictor] :param _request_timeout: timeout setting for this request. If one [e2e-predictor] number provided, it will be total request [e2e-predictor] timeout. It can also be a pair (tuple) of [e2e-predictor] (connection, read) timeouts. [e2e-predictor] :return: tuple(object, status_code(int), headers(HTTPHeaderDict)) [e2e-predictor] If the method is called asynchronously, [e2e-predictor] returns the request thread. [e2e-predictor] """ [e2e-predictor] [e2e-predictor] local_var_params = locals() [e2e-predictor] [e2e-predictor] all_params = [ [e2e-predictor] 'group', [e2e-predictor] 'version', [e2e-predictor] 'namespace', [e2e-predictor] 'plural', [e2e-predictor] 'body', [e2e-predictor] 'pretty', [e2e-predictor] 'dry_run', [e2e-predictor] 'field_manager', [e2e-predictor] 'field_validation' [e2e-predictor] ] [e2e-predictor] all_params.extend( [e2e-predictor] [ [e2e-predictor] 'async_req', [e2e-predictor] '_return_http_data_only', [e2e-predictor] '_preload_content', [e2e-predictor] '_request_timeout' [e2e-predictor] ] [e2e-predictor] ) [e2e-predictor] [e2e-predictor] for key, val in six.iteritems(local_var_params['kwargs']): [e2e-predictor] if key not in all_params: [e2e-predictor] raise ApiTypeError( [e2e-predictor] "Got an unexpected keyword argument '%s'" [e2e-predictor] " to method create_namespaced_custom_object" % key [e2e-predictor] ) [e2e-predictor] local_var_params[key] = val [e2e-predictor] del local_var_params['kwargs'] [e2e-predictor] # verify the required parameter 'group' is set [e2e-predictor] if self.api_client.client_side_validation and ('group' not in local_var_params or # noqa: E501 [e2e-predictor] local_var_params['group'] is None): # noqa: E501 [e2e-predictor] raise ApiValueError("Missing the required parameter `group` when calling `create_namespaced_custom_object`") # noqa: E501 [e2e-predictor] # verify the required parameter 'version' is set [e2e-predictor] if self.api_client.client_side_validation and ('version' not in local_var_params or # 
noqa: E501 [e2e-predictor] local_var_params['version'] is None): # noqa: E501 [e2e-predictor] raise ApiValueError("Missing the required parameter `version` when calling `create_namespaced_custom_object`") # noqa: E501 [e2e-predictor] # verify the required parameter 'namespace' is set [e2e-predictor] if self.api_client.client_side_validation and ('namespace' not in local_var_params or # noqa: E501 [e2e-predictor] local_var_params['namespace'] is None): # noqa: E501 [e2e-predictor] raise ApiValueError("Missing the required parameter `namespace` when calling `create_namespaced_custom_object`") # noqa: E501 [e2e-predictor] # verify the required parameter 'plural' is set [e2e-predictor] if self.api_client.client_side_validation and ('plural' not in local_var_params or # noqa: E501 [e2e-predictor] local_var_params['plural'] is None): # noqa: E501 [e2e-predictor] raise ApiValueError("Missing the required parameter `plural` when calling `create_namespaced_custom_object`") # noqa: E501 [e2e-predictor] # verify the required parameter 'body' is set [e2e-predictor] if self.api_client.client_side_validation and ('body' not in local_var_params or # noqa: E501 [e2e-predictor] local_var_params['body'] is None): # noqa: E501 [e2e-predictor] raise ApiValueError("Missing the required parameter `body` when calling `create_namespaced_custom_object`") # noqa: E501 [e2e-predictor] [e2e-predictor] collection_formats = {} [e2e-predictor] [e2e-predictor] path_params = {} [e2e-predictor] if 'group' in local_var_params: [e2e-predictor] path_params['group'] = local_var_params['group'] # noqa: E501 [e2e-predictor] if 'version' in local_var_params: [e2e-predictor] path_params['version'] = local_var_params['version'] # noqa: E501 [e2e-predictor] if 'namespace' in local_var_params: [e2e-predictor] path_params['namespace'] = local_var_params['namespace'] # noqa: E501 [e2e-predictor] if 'plural' in local_var_params: [e2e-predictor] path_params['plural'] = local_var_params['plural'] # noqa: E501 
[e2e-predictor] [e2e-predictor] query_params = [] [e2e-predictor] if 'pretty' in local_var_params and local_var_params['pretty'] is not None: # noqa: E501 [e2e-predictor] query_params.append(('pretty', local_var_params['pretty'])) # noqa: E501 [e2e-predictor] if 'dry_run' in local_var_params and local_var_params['dry_run'] is not None: # noqa: E501 [e2e-predictor] query_params.append(('dryRun', local_var_params['dry_run'])) # noqa: E501 [e2e-predictor] if 'field_manager' in local_var_params and local_var_params['field_manager'] is not None: # noqa: E501 [e2e-predictor] query_params.append(('fieldManager', local_var_params['field_manager'])) # noqa: E501 [e2e-predictor] if 'field_validation' in local_var_params and local_var_params['field_validation'] is not None: # noqa: E501 [e2e-predictor] query_params.append(('fieldValidation', local_var_params['field_validation'])) # noqa: E501 [e2e-predictor] [e2e-predictor] header_params = {} [e2e-predictor] [e2e-predictor] form_params = [] [e2e-predictor] local_var_files = {} [e2e-predictor] [e2e-predictor] body_params = None [e2e-predictor] if 'body' in local_var_params: [e2e-predictor] body_params = local_var_params['body'] [e2e-predictor] # HTTP header `Accept` [e2e-predictor] header_params['Accept'] = self.api_client.select_header_accept( [e2e-predictor] ['application/json']) # noqa: E501 [e2e-predictor] [e2e-predictor] # Authentication setting [e2e-predictor] auth_settings = ['BearerToken'] # noqa: E501 [e2e-predictor] [e2e-predictor] > return self.api_client.call_api( [e2e-predictor] '/apis/{group}/{version}/namespaces/{namespace}/{plural}', 'POST', [e2e-predictor] path_params, [e2e-predictor] query_params, [e2e-predictor] header_params, [e2e-predictor] body=body_params, [e2e-predictor] post_params=form_params, [e2e-predictor] files=local_var_files, [e2e-predictor] response_type='object', # noqa: E501 [e2e-predictor] auth_settings=auth_settings, [e2e-predictor] async_req=local_var_params.get('async_req'), 
[e2e-predictor] _return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501 [e2e-predictor] _preload_content=local_var_params.get('_preload_content', True), [e2e-predictor] _request_timeout=local_var_params.get('_request_timeout'), [e2e-predictor] collection_formats=collection_formats) [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api/custom_objects_api.py:354: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] resource_path = '/apis/{group}/{version}/namespaces/{namespace}/{plural}' [e2e-predictor] method = 'POST' [e2e-predictor] path_params = {'group': 'serving.kserve.io', 'namespace': 'kserve-ci-e2e-test', 'plural': 'inferenceservices', 'version': 'v1beta1'} [e2e-predictor] query_params = [] [e2e-predictor] header_params = {'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'} [e2e-predictor] body = {'api_version': 'serving.kserve.io/v1beta1', [e2e-predictor] 'kind': 'InferenceService', [e2e-predictor] 'metadata': {'annotations': None, [e2e-predictor] ... 
'worker_spec': None, [e2e-predictor] 'xgboost': None}, [e2e-predictor] 'transformer': None}, [e2e-predictor] 'status': None} [e2e-predictor] post_params = [], files = {}, response_type = 'object' [e2e-predictor] auth_settings = ['BearerToken'], async_req = None, _return_http_data_only = True [e2e-predictor] collection_formats = {}, _preload_content = True, _request_timeout = None [e2e-predictor] _host = None [e2e-predictor] [e2e-predictor] def call_api(self, resource_path, method, [e2e-predictor] path_params=None, query_params=None, header_params=None, [e2e-predictor] body=None, post_params=None, files=None, [e2e-predictor] response_type=None, auth_settings=None, async_req=None, [e2e-predictor] _return_http_data_only=None, collection_formats=None, [e2e-predictor] _preload_content=True, _request_timeout=None, _host=None): [e2e-predictor] """Makes the HTTP request (synchronous) and returns deserialized data. [e2e-predictor] [e2e-predictor] To make an async_req request, set the async_req parameter. [e2e-predictor] [e2e-predictor] :param resource_path: Path to method endpoint. [e2e-predictor] :param method: Method to call. [e2e-predictor] :param path_params: Path parameters in the url. [e2e-predictor] :param query_params: Query parameters in the url. [e2e-predictor] :param header_params: Header parameters to be [e2e-predictor] placed in the request header. [e2e-predictor] :param body: Request body. [e2e-predictor] :param post_params dict: Request post form parameters, [e2e-predictor] for `application/x-www-form-urlencoded`, `multipart/form-data`. [e2e-predictor] :param auth_settings list: Auth Settings names for the request. [e2e-predictor] :param response: Response data type. [e2e-predictor] :param files dict: key -> filename, value -> filepath, [e2e-predictor] for `multipart/form-data`. 
[e2e-predictor] :param async_req bool: execute request asynchronously [e2e-predictor] :param _return_http_data_only: response data without head status code [e2e-predictor] and headers [e2e-predictor] :param collection_formats: dict of collection formats for path, query, [e2e-predictor] header, and post parameters. [e2e-predictor] :param _preload_content: if False, the urllib3.HTTPResponse object will [e2e-predictor] be returned without reading/decoding response [e2e-predictor] data. Default is True. [e2e-predictor] :param _request_timeout: timeout setting for this request. If one [e2e-predictor] number provided, it will be total request [e2e-predictor] timeout. It can also be a pair (tuple) of [e2e-predictor] (connection, read) timeouts. [e2e-predictor] :return: [e2e-predictor] If async_req parameter is True, [e2e-predictor] the request will be called asynchronously. [e2e-predictor] The method will return the request thread. [e2e-predictor] If parameter async_req is False or missing, [e2e-predictor] then the method will return the response directly. 
[e2e-predictor] """ [e2e-predictor] if not async_req: [e2e-predictor] > return self.__call_api(resource_path, method, [e2e-predictor] path_params, query_params, header_params, [e2e-predictor] body, post_params, files, [e2e-predictor] response_type, auth_settings, [e2e-predictor] _return_http_data_only, collection_formats, [e2e-predictor] _preload_content, _request_timeout, _host) [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api_client.py:348: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] resource_path = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices' [e2e-predictor] method = 'POST' [e2e-predictor] path_params = [('group', 'serving.kserve.io'), ('version', 'v1beta1'), ('namespace', 'kserve-ci-e2e-test'), ('plural', 'inferenceservices')] [e2e-predictor] query_params = [] [e2e-predictor] header_params = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'} [e2e-predictor] body = {'apiVersion': 'serving.kserve.io/v1beta1', 'kind': 'InferenceService', 'metadata': {'labels': {'networking.kserve.io/...ory': '512Mi'}, 'requests': {'cpu': '10m', 'memory': '256Mi'}}, 'storageUri': 'gs://kfserving-examples/models/pmml'}}}} [e2e-predictor] post_params = [], files = {}, response_type = 'object' [e2e-predictor] auth_settings = ['BearerToken'], _return_http_data_only = True [e2e-predictor] collection_formats = {}, _preload_content = True, _request_timeout = None [e2e-predictor] _host = None [e2e-predictor] [e2e-predictor] def __call_api( [e2e-predictor] self, resource_path, method, path_params=None, [e2e-predictor] query_params=None, header_params=None, body=None, post_params=None, [e2e-predictor] files=None, response_type=None, auth_settings=None, [e2e-predictor] _return_http_data_only=None, collection_formats=None, 
[e2e-predictor] _preload_content=True, _request_timeout=None, _host=None): [e2e-predictor] [e2e-predictor] config = self.configuration [e2e-predictor] [e2e-predictor] # header parameters [e2e-predictor] header_params = header_params or {} [e2e-predictor] header_params.update(self.default_headers) [e2e-predictor] if self.cookie: [e2e-predictor] header_params['Cookie'] = self.cookie [e2e-predictor] if header_params: [e2e-predictor] header_params = self.sanitize_for_serialization(header_params) [e2e-predictor] header_params = dict(self.parameters_to_tuples(header_params, [e2e-predictor] collection_formats)) [e2e-predictor] [e2e-predictor] # path parameters [e2e-predictor] if path_params: [e2e-predictor] path_params = self.sanitize_for_serialization(path_params) [e2e-predictor] path_params = self.parameters_to_tuples(path_params, [e2e-predictor] collection_formats) [e2e-predictor] for k, v in path_params: [e2e-predictor] # specified safe chars, encode everything [e2e-predictor] resource_path = resource_path.replace( [e2e-predictor] '{%s}' % k, [e2e-predictor] quote(str(v), safe=config.safe_chars_for_path_param) [e2e-predictor] ) [e2e-predictor] [e2e-predictor] # query parameters [e2e-predictor] if query_params: [e2e-predictor] query_params = self.sanitize_for_serialization(query_params) [e2e-predictor] query_params = self.parameters_to_tuples(query_params, [e2e-predictor] collection_formats) [e2e-predictor] [e2e-predictor] # post parameters [e2e-predictor] if post_params or files: [e2e-predictor] post_params = post_params if post_params else [] [e2e-predictor] post_params = self.sanitize_for_serialization(post_params) [e2e-predictor] post_params = self.parameters_to_tuples(post_params, [e2e-predictor] collection_formats) [e2e-predictor] post_params.extend(self.files_parameters(files)) [e2e-predictor] [e2e-predictor] # auth setting [e2e-predictor] self.update_params_for_auth(header_params, query_params, auth_settings) [e2e-predictor] [e2e-predictor] # body 
[e2e-predictor] if body: [e2e-predictor] body = self.sanitize_for_serialization(body) [e2e-predictor] [e2e-predictor] # request url [e2e-predictor] if _host is None: [e2e-predictor] url = self.configuration.host + resource_path [e2e-predictor] else: [e2e-predictor] # use server/host defined in path or operation instead [e2e-predictor] url = _host + resource_path [e2e-predictor] [e2e-predictor] # perform request and return response [e2e-predictor] > response_data = self.request( [e2e-predictor] method, url, query_params=query_params, headers=header_params, [e2e-predictor] post_params=post_params, body=body, [e2e-predictor] _preload_content=_preload_content, [e2e-predictor] _request_timeout=_request_timeout) [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api_client.py:180: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] method = 'POST' [e2e-predictor] url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices' [e2e-predictor] query_params = [] [e2e-predictor] headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'} [e2e-predictor] post_params = [] [e2e-predictor] body = {'apiVersion': 'serving.kserve.io/v1beta1', 'kind': 'InferenceService', 'metadata': {'labels': {'networking.kserve.io/...ory': '512Mi'}, 'requests': {'cpu': '10m', 'memory': '256Mi'}}, 'storageUri': 'gs://kfserving-examples/models/pmml'}}}} [e2e-predictor] _preload_content = True, _request_timeout = None [e2e-predictor] [e2e-predictor] def request(self, method, url, query_params=None, headers=None, [e2e-predictor] post_params=None, body=None, _preload_content=True, [e2e-predictor] _request_timeout=None): [e2e-predictor] """Makes the HTTP request using 
                                                      RESTClient."""
        if method == "GET":
            return self.rest_client.GET(url,
                                        query_params=query_params,
                                        _preload_content=_preload_content,
                                        _request_timeout=_request_timeout,
                                        headers=headers)
        elif method == "HEAD":
            return self.rest_client.HEAD(url,
                                         query_params=query_params,
                                         _preload_content=_preload_content,
                                         _request_timeout=_request_timeout,
                                         headers=headers)
        elif method == "OPTIONS":
            return self.rest_client.OPTIONS(url,
                                            query_params=query_params,
                                            headers=headers,
                                            _preload_content=_preload_content,
                                            _request_timeout=_request_timeout)
        elif method == "POST":
>           return self.rest_client.POST(url,
                                         query_params=query_params,
                                         headers=headers,
                                         post_params=post_params,
                                         _preload_content=_preload_content,
                                         _request_timeout=_request_timeout,

../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api_client.py:391:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
query_params = [], post_params = []
body = {'apiVersion': 'serving.kserve.io/v1beta1', 'kind': 'InferenceService', 'metadata': {'labels': {'networking.kserve.io/...ory': '512Mi'}, 'requests': {'cpu': '10m', 'memory': '256Mi'}}, 'storageUri': 'gs://kfserving-examples/models/pmml'}}}}
_preload_content = True, _request_timeout = None

    def POST(self, url, headers=None, query_params=None, post_params=None,
             body=None, _preload_content=True, _request_timeout=None):
>       return self.request("POST", url,
                            headers=headers,
                            query_params=query_params,
                            post_params=post_params,
                            _preload_content=_preload_content,
                            _request_timeout=_request_timeout,
                            body=body)

../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/rest.py:279:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
method = 'POST'
url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
query_params = []
headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
body = {'apiVersion': 'serving.kserve.io/v1beta1', 'kind': 'InferenceService', 'metadata': {'labels': {'networking.kserve.io/...ory': '512Mi'}, 'requests': {'cpu': '10m', 'memory': '256Mi'}}, 'storageUri': 'gs://kfserving-examples/models/pmml'}}}}
post_params = {}, _preload_content = True, _request_timeout = None

    def request(self, method, url, query_params=None, headers=None,
                body=None, post_params=None, _preload_content=True,
                _request_timeout=None):
        """Perform requests.

        :param method: http request method
        :param url: http request url
        :param query_params: query parameters in the url
        :param headers: http request headers
        :param body: request json body, for `application/json`
        :param post_params: request post parameters,
                            `application/x-www-form-urlencoded`
                            and `multipart/form-data`
        :param _preload_content: if False, the urllib3.HTTPResponse object will
                                 be returned without reading/decoding response
                                 data. Default is True.
        :param _request_timeout: timeout setting for this request. If one
                                 number provided, it will be total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
        """
        method = method.upper()
        assert method in ['GET', 'HEAD', 'DELETE', 'POST', 'PUT',
                          'PATCH', 'OPTIONS']

        if post_params and body:
            raise ApiValueError(
                "body parameter cannot be used with post_params parameter."
            )

        post_params = post_params or {}
        headers = headers or {}

        timeout = None
        if _request_timeout:
            if isinstance(_request_timeout, (int, ) if six.PY3 else (int, long)):  # noqa: E501,F821
                timeout = urllib3.Timeout(total=_request_timeout)
            elif (isinstance(_request_timeout, tuple) and
                  len(_request_timeout) == 2):
                timeout = urllib3.Timeout(
                    connect=_request_timeout[0], read=_request_timeout[1])

        if 'Content-Type' not in headers:
            headers['Content-Type'] = 'application/json'

        try:
            # For `POST`, `PUT`, `PATCH`, `OPTIONS`, `DELETE`
            if method in ['POST', 'PUT', 'PATCH', 'OPTIONS', 'DELETE']:
                if query_params:
                    url += '?' + urlencode(query_params)
                if (re.search('json', headers['Content-Type'], re.IGNORECASE) or
                        headers['Content-Type'] == 'application/apply-patch+yaml'):
                    if headers['Content-Type'] == 'application/json-patch+json':
                        if not isinstance(body, list):
                            headers['Content-Type'] = \
                                'application/strategic-merge-patch+json'
                    request_body = None
                    if body is not None:
                        request_body = json.dumps(body)
>                   r = self.pool_manager.request(
                        method, url,
                        body=request_body,
                        preload_content=_preload_content,
                        timeout=timeout,

../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/rest.py:172:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
method = 'POST'
url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io...ry": "512Mi"}, "requests": {"cpu": "10m", "memory": "256Mi"}}, "storageUri": "gs://kfserving-examples/models/pmml"}}}}'
fields = None
headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
json = None
urlopen_kw = {'body': '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking....memory": "256Mi"}}, "storageUri": "gs://kfserving-examples/models/pmml"}}}}', 'preload_content': True, 'timeout': None}

    def request(
        self,
        method: str,
        url: str,
        body: _TYPE_BODY | None = None,
        fields: _TYPE_FIELDS | None = None,
        headers: typing.Mapping[str, str] | None = None,
        json: typing.Any | None = None,
        **urlopen_kw: typing.Any,
    ) -> BaseHTTPResponse:
        """
        Make a request using :meth:`urlopen` with the appropriate encoding of
        ``fields`` based on the ``method`` used.

        This is a convenience method that requires the least amount of manual
        effort. It can be used in most situations, while still having the
        option to drop down to more specific methods when necessary, such as
        :meth:`request_encode_url`, :meth:`request_encode_body`,
        or even the lowest level :meth:`urlopen`.

        :param method:
            HTTP request method (such as GET, POST, PUT, etc.)

        :param url:
            The URL to perform the request on.

        :param body:
            Data to send in the request body, either :class:`str`, :class:`bytes`,
            an iterable of :class:`str`/:class:`bytes`, or a file-like object.

        :param fields:
            Data to encode and send in the URL or request body, depending on ``method``.

        :param headers:
            Dictionary of custom headers to send, such as User-Agent,
            If-None-Match, etc. If None, pool headers are used. If provided,
            these headers completely replace any pool-specific headers.

        :param json:
            Data to encode and send as JSON with UTF-encoded in the request body.
            The ``"Content-Type"`` header will be set to ``"application/json"``
            unless specified otherwise.
        """
        method = method.upper()

        if json is not None and body is not None:
            raise TypeError(
                "request got values for both 'body' and 'json' parameters which are mutually exclusive"
            )

        if json is not None:
            if headers is None:
                headers = self.headers

            if not ("content-type" in map(str.lower, headers.keys())):
                headers = HTTPHeaderDict(headers)
                headers["Content-Type"] = "application/json"

            body = _json.dumps(json, separators=(",", ":"), ensure_ascii=False).encode(
                "utf-8"
            )

        if body is not None:
            urlopen_kw["body"] = body

        if method in self._encode_url_methods:
            return self.request_encode_url(
                method,
                url,
                fields=fields,  # type: ignore[arg-type]
                headers=headers,
                **urlopen_kw,
            )
        else:
>           return self.request_encode_body(
                method, url, fields=fields, headers=headers, **urlopen_kw
            )

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/_request_methods.py:143:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
method = 'POST'
url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
fields = None
headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
encode_multipart = True, multipart_boundary = None
urlopen_kw = {'body': '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking....memory": "256Mi"}}, "storageUri": "gs://kfserving-examples/models/pmml"}}}}', 'preload_content': True, 'timeout': None}
extra_kw = {'body': '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking....nt': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'}), 'preload_content': True, 'timeout': None}

    def request_encode_body(
        self,
        method: str,
        url: str,
        fields: _TYPE_FIELDS | None = None,
        headers: typing.Mapping[str, str] | None = None,
        encode_multipart: bool = True,
        multipart_boundary: str | None = None,
        **urlopen_kw: str,
    ) -> BaseHTTPResponse:
        """
        Make a request using :meth:`urlopen` with the ``fields`` encoded in
        the body. This is useful for request methods like POST, PUT, PATCH, etc.

        When ``encode_multipart=True`` (default), then
        :func:`urllib3.encode_multipart_formdata` is used to encode
        the payload with the appropriate content type. Otherwise
        :func:`urllib.parse.urlencode` is used with the
        'application/x-www-form-urlencoded' content type.

        Multipart encoding must be used when posting files, and it's reasonably
        safe to use it in other times too. However, it may break request
        signing, such as with OAuth.

        Supports an optional ``fields`` parameter of key/value strings AND
        key/filetuple. A filetuple is a (filename, data, MIME type) tuple where
        the MIME type is optional. For example::

            fields = {
                'foo': 'bar',
                'fakefile': ('foofile.txt', 'contents of foofile'),
                'realfile': ('barfile.txt', open('realfile').read()),
                'typedfile': ('bazfile.bin', open('bazfile').read(),
                              'image/jpeg'),
                'nonamefile': 'contents of nonamefile field',
            }

        When uploading a file, providing a filename (the first parameter of the
        tuple) is optional but recommended to best mimic behavior of browsers.

        Note that if ``headers`` are supplied, the 'Content-Type' header will
        be overwritten because it depends on the dynamic random boundary string
        which is used to compose the body of the request. The random boundary
        string can be explicitly set with the ``multipart_boundary`` parameter.

        :param method:
            HTTP request method (such as GET, POST, PUT, etc.)

        :param url:
            The URL to perform the request on.

        :param fields:
            Data to encode and send in the request body.

        :param headers:
            Dictionary of custom headers to send, such as User-Agent,
            If-None-Match, etc. If None, pool headers are used. If provided,
            these headers completely replace any pool-specific headers.

        :param encode_multipart:
            If True, encode the ``fields`` using the multipart/form-data MIME
            format.

        :param multipart_boundary:
            If not specified, then a random boundary will be generated using
            :func:`urllib3.filepost.choose_boundary`.
        """
        if headers is None:
            headers = self.headers

        extra_kw: dict[str, typing.Any] = {"headers": HTTPHeaderDict(headers)}
        body: bytes | str

        if fields:
            if "body" in urlopen_kw:
                raise TypeError(
                    "request got values for both 'fields' and 'body', can only specify one."
                )

            if encode_multipart:
                body, content_type = encode_multipart_formdata(
                    fields, boundary=multipart_boundary
                )
            else:
                body, content_type = (
                    urlencode(fields),  # type: ignore[arg-type]
                    "application/x-www-form-urlencoded",
                )

            extra_kw["body"] = body
            extra_kw["headers"].setdefault("Content-Type", content_type)

        extra_kw.update(urlopen_kw)

>       return self.urlopen(method, url, **extra_kw)

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/_request_methods.py:278:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
method = 'POST'
url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
redirect = True
kw = {'assert_same_host': False, 'body': '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata...', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'}), 'preload_content': True, ...}
u = Url(scheme='https', auth=None, host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', p...443, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices', query=None, fragment=None)
conn =

    def urlopen(  # type: ignore[override]
        self, method: str, url: str, redirect: bool = True, **kw: typing.Any
    ) -> BaseHTTPResponse:
        """
        Same as :meth:`urllib3.HTTPConnectionPool.urlopen`
        with custom cross-host redirect logic and only sends the request-uri
        portion of the ``url``.

        The given ``url`` parameter must be absolute, such that an appropriate
        :class:`urllib3.connectionpool.ConnectionPool` can be chosen for it.
        """
        u = parse_url(url)

        if u.scheme is None:
            warnings.warn(
                "URLs without a scheme (ie 'https://') are deprecated and will raise an error "
                "in a future version of urllib3. To avoid this DeprecationWarning ensure all URLs "
                "start with 'https://' or 'http://'. Read more in this issue: "
                "https://github.com/urllib3/urllib3/issues/2920",
                category=DeprecationWarning,
                stacklevel=2,
            )

        conn = self.connection_from_host(u.host, port=u.port, scheme=u.scheme)

        kw["assert_same_host"] = False
        kw["redirect"] = False

        if "headers" not in kw:
            kw["headers"] = self.headers

        if self._proxy_requires_url_absolute_form(u):
            response = conn.urlopen(method, url, **kw)
        else:
>           response = conn.urlopen(method, u.request_uri, **kw)

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/poolmanager.py:457:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
method = 'POST'
url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io...ry": "512Mi"}, "requests": {"cpu": "10m", "memory": "256Mi"}}, "storageUri": "gs://kfserving-examples/models/pmml"}}}}'
headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'})
retries = Retry(total=2, connect=None, read=None, redirect=None, status=None)
redirect = False, assert_same_host = False, timeout = None, pool_timeout = None
release_conn = True, chunked = False, body_pos = None, preload_content = True
decode_content = True, response_kw = {}
parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices', query=None, fragment=None)
destination_scheme = None, conn = None, release_this_conn = True
http_tunnel_required = False
err = NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.c...a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")
clean_exit = False

    def urlopen(  # type: ignore[override]
        self,
        method: str,
        url: str,
        body: _TYPE_BODY | None = None,
        headers: typing.Mapping[str, str] | None = None,
        retries: Retry | bool | int | None = None,
        redirect: bool = True,
        assert_same_host: bool = True,
        timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
        pool_timeout: int | None = None,
        release_conn: bool | None = None,
        chunked: bool = False,
        body_pos: _TYPE_BODY_POSITION | None = None,
        preload_content: bool = True,
        decode_content: bool = True,
        **response_kw: typing.Any,
    ) -> BaseHTTPResponse:
        """
        Get a connection from the pool and perform an HTTP request. This is the
        lowest level call for making a request, so you'll need to specify all
        the raw details.

        .. note::

           More commonly, it's appropriate to use a convenience method
           such as :meth:`request`.

        .. note::

           `release_conn` will only behave as expected if
           `preload_content=False` because we want to make
           `preload_content=False` the default behaviour someday soon without
           breaking backwards compatibility.

        :param method:
            HTTP request method (such as GET, POST, PUT, etc.)

        :param url:
            The URL to perform the request on.

        :param body:
            Data to send in the request body, either :class:`str`, :class:`bytes`,
            an iterable of :class:`str`/:class:`bytes`, or a file-like object.

        :param headers:
            Dictionary of custom headers to send, such as User-Agent,
            If-None-Match, etc. If None, pool headers are used. If provided,
            these headers completely replace any pool-specific headers.

        :param retries:
            Configure the number of retries to allow before raising a
            :class:`~urllib3.exceptions.MaxRetryError` exception.

            If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a
            :class:`~urllib3.util.retry.Retry` object for fine-grained control
            over different types of retries.
            Pass an integer number to retry connection errors that many times,
            but no other types of errors. Pass zero to never retry.

            If ``False``, then retries are disabled and any exception is raised
            immediately. Also, instead of raising a MaxRetryError on redirects,
            the redirect response will be returned.

        :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.

        :param redirect:
            If True, automatically handle redirects (status codes 301, 302,
            303, 307, 308). Each redirect counts as a retry. Disabling retries
            will disable redirect, too.

        :param assert_same_host:
            If ``True``, will make sure that the host of the pool requests is
            consistent else will raise HostChangedError. When ``False``, you can
            use the pool on an HTTP proxy and request foreign hosts.

        :param timeout:
            If specified, overrides the default timeout for this one
            request. It may be a float (in seconds) or an instance of
            :class:`urllib3.util.Timeout`.

        :param pool_timeout:
            If set and the pool is set to block=True, then this method will
            block for ``pool_timeout`` seconds and raise EmptyPoolError if no
            connection is available within the time period.

        :param bool preload_content:
            If True, the response's body will be preloaded into memory.

        :param bool decode_content:
            If True, will attempt to decode the body based on the
            'content-encoding' header.

        :param release_conn:
            If False, then the urlopen call will not release the connection
            back into the pool once a response is received (but will release if
            you read the entire contents of the response such as when
            `preload_content=True`). This is useful if you're not preloading
            the response's content immediately. You will need to call
            ``r.release_conn()`` on the response ``r`` to return the connection
            back into the pool. If None, it takes the value of ``preload_content``
            which defaults to ``True``.

        :param bool chunked:
            If True, urllib3 will send the body using chunked transfer
            encoding. Otherwise, urllib3 will send the body using the standard
            content-length form. Defaults to False.

        :param int body_pos:
            Position to seek to in file-like body in the event of a retry or
            redirect. Typically this won't need to be set because urllib3 will
            auto-populate the value when needed.
        """
        parsed_url = parse_url(url)
        destination_scheme = parsed_url.scheme

        if headers is None:
            headers = self.headers

        if not isinstance(retries, Retry):
            retries = Retry.from_int(retries, redirect=redirect, default=self.retries)

        if release_conn is None:
            release_conn = preload_content

        # Check host
        if assert_same_host and not self.is_same_host(url):
            raise HostChangedError(self, url, retries)

        # Ensure that the URL we're connecting to is properly encoded
        if url.startswith("/"):
            url = to_str(_encode_target(url))
        else:
            url = to_str(parsed_url.url)

        conn = None

        # Track whether `conn` needs to be released before
        # returning/raising/recursing. Update this variable if necessary, and
        # leave `release_conn` constant throughout the function. That way, if
        # the function recurses, the original value of `release_conn` will be
        # passed down into the recursive call, and its value will be respected.
        #
        # See issue #651 [1] for details.
        #
        # [1]
        release_this_conn = release_conn

        http_tunnel_required = connection_requires_http_tunnel(
            self.proxy, self.proxy_config, destination_scheme
        )

        # Merge the proxy headers. Only done when not using HTTP CONNECT. We
        # have to copy the headers dict so we can safely change it without those
        # changes being reflected in anyone else's copy.
        if not http_tunnel_required:
            headers = headers.copy()  # type: ignore[attr-defined]
            headers.update(self.proxy_headers)  # type: ignore[union-attr]

        # Must keep the exception bound to a separate variable or else Python 3
        # complains about UnboundLocalError.
        err = None

        # Keep track of whether we cleanly exited the except block. This
        # ensures we do proper cleanup in finally.
        clean_exit = False

        # Rewind body position, if needed. Record current position
        # for future rewinds in the event of a redirect/retry.
        body_pos = set_file_position(body, body_pos)

        try:
            # Request a connection from the queue.
            timeout_obj = self._get_timeout(timeout)
            conn = self._get_conn(timeout=pool_timeout)

            conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]

            # Is this a closed/new connection that requires CONNECT tunnelling?
            if self.proxy is not None and http_tunnel_required and conn.is_closed:
                try:
                    self._prepare_proxy(conn)
                except (BaseSSLError, OSError, SocketTimeout) as e:
                    self._raise_timeout(
                        err=e, url=self.proxy.url, timeout_value=conn.timeout
                    )
                    raise

            # If we're going to release the connection in ``finally:``, then
            # the response doesn't need to know about the connection. Otherwise
            # it will also try to release it and we'll have a double-release
            # mess.
            response_conn = conn if not release_conn else None

            # Make the request on the HTTPConnection object
            response = self._make_request(
                conn,
                method,
                url,
                timeout=timeout_obj,
                body=body,
                headers=headers,
                chunked=chunked,
                retries=retries,
                response_conn=response_conn,
                preload_content=preload_content,
                decode_content=decode_content,
                **response_kw,
            )

            # Everything went great!
            clean_exit = True

        except EmptyPoolError:
            # Didn't get a connection from the pool, no need to clean up
            clean_exit = True
            release_this_conn = False
            raise

        except (
            TimeoutError,
            HTTPException,
            OSError,
            ProtocolError,
            BaseSSLError,
            SSLError,
            CertificateError,
            ProxyError,
        ) as e:
            # Discard the connection for these exceptions. It will be
            # replaced during the next _get_conn() call.
            clean_exit = False
            new_e: Exception = e
            if isinstance(e, (BaseSSLError, CertificateError)):
                new_e = SSLError(e)
            if isinstance(
                new_e,
                (
                    OSError,
                    NewConnectionError,
                    TimeoutError,
                    SSLError,
                    HTTPException,
                ),
            ) and (conn and conn.proxy and not conn.has_connected_to_proxy):
                new_e = _wrap_proxy_error(new_e, conn.proxy.scheme)
            elif isinstance(new_e, (OSError, HTTPException)):
                new_e = ProtocolError("Connection aborted.", new_e)

            retries = retries.increment(
                method, url, error=new_e, _pool=self, _stacktrace=sys.exc_info()[2]
            )
            retries.sleep()

            # Keep track of the error for the retry warning.
            err = e

        finally:
            if not clean_exit:
                # We hit some kind of exception, handled or otherwise. We need
                # to throw the connection away unless explicitly told not to.
                # Close the connection, set the variable to None, and make sure
                # we put the None back in the pool to avoid leaking it.
                if conn:
                    conn.close()
                    conn = None
                release_this_conn = True

            if release_this_conn:
                # Put the connection back to be reused. If the connection is
                # expired then it will be None, which will get replaced with a
                # fresh connection during _get_conn.
                self._put_conn(conn)

        if not conn:
            # Try again
            log.warning(
                "Retrying (%r) after connection broken by '%r': %s", retries, err, url
            )
>           return self.urlopen(
                method,
                url,
                body,
                headers,
                retries,
                redirect,
                assert_same_host,
                timeout=timeout,
                pool_timeout=pool_timeout,
                release_conn=release_conn,
                chunked=chunked,
                body_pos=body_pos,
                preload_content=preload_content,
                decode_content=decode_content,
                **response_kw,
            )

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:871:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
method = 'POST'
url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io...ry": "512Mi"}, "requests": {"cpu": "10m", "memory": "256Mi"}}, "storageUri": "gs://kfserving-examples/models/pmml"}}}}'
headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'})
retries = Retry(total=1, connect=None, read=None, redirect=None, status=None)
redirect = False, assert_same_host = False, timeout = None, pool_timeout = None
release_conn = True, chunked = False, body_pos = None, preload_content = True
decode_content = True, response_kw = {}
parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices', query=None, fragment=None)
destination_scheme = None, conn = None, release_this_conn = True
http_tunnel_required = False
err = NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.c...a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")
clean_exit = False

    def urlopen(  # type: ignore[override]
        self,
        method: str,
        url: str,
        body: _TYPE_BODY | None = None,
        headers: typing.Mapping[str, str] | None = None,
        retries: Retry | bool | int | None = None,
        redirect: bool = True,
        assert_same_host: bool = True,
        timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
        pool_timeout: int | None = None,
        release_conn: bool | None = None,
        chunked: bool = False,
        body_pos: _TYPE_BODY_POSITION | None = None,
        preload_content: bool = True,
        decode_content: bool = True,
        **response_kw: typing.Any,
    ) -> BaseHTTPResponse:
        """
        Get a connection from the pool and perform an HTTP request. This is the
        lowest level call for making a request, so you'll need to specify all
        the raw details.

        .. note::

           More commonly, it's appropriate to use a convenience method
           such as :meth:`request`.

        .. note::

           `release_conn` will only behave as expected if
           `preload_content=False` because we want to make
           `preload_content=False` the default behaviour someday soon without
           breaking backwards compatibility.

        :param method:
            HTTP request method (such as GET, POST, PUT, etc.)

        :param url:
            The URL to perform the request on.

        :param body:
            Data to send in the request body, either :class:`str`, :class:`bytes`,
            an iterable of :class:`str`/:class:`bytes`, or a file-like object.

        :param headers:
            Dictionary of custom headers to send, such as User-Agent,
            If-None-Match, etc. If None, pool headers are used. If provided,
            these headers completely replace any pool-specific headers.

        :param retries:
            Configure the number of retries to allow before raising a
            :class:`~urllib3.exceptions.MaxRetryError` exception.

            If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a
            :class:`~urllib3.util.retry.Retry` object for fine-grained control
            over different types of retries.
            Pass an integer number to retry connection errors that many times,
            but no other types of errors. Pass zero to never retry.

            If ``False``, then retries are disabled and any exception is raised
            immediately. Also, instead of raising a MaxRetryError on redirects,
            the redirect response will be returned.

        :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
[e2e-predictor] [e2e-predictor] :param redirect: [e2e-predictor] If True, automatically handle redirects (status codes 301, 302, [e2e-predictor] 303, 307, 308). Each redirect counts as a retry. Disabling retries [e2e-predictor] will disable redirect, too. [e2e-predictor] [e2e-predictor] :param assert_same_host: [e2e-predictor] If ``True``, will make sure that the host of the pool requests is [e2e-predictor] consistent else will raise HostChangedError. When ``False``, you can [e2e-predictor] use the pool on an HTTP proxy and request foreign hosts. [e2e-predictor] [e2e-predictor] :param timeout: [e2e-predictor] If specified, overrides the default timeout for this one [e2e-predictor] request. It may be a float (in seconds) or an instance of [e2e-predictor] :class:`urllib3.util.Timeout`. [e2e-predictor] [e2e-predictor] :param pool_timeout: [e2e-predictor] If set and the pool is set to block=True, then this method will [e2e-predictor] block for ``pool_timeout`` seconds and raise EmptyPoolError if no [e2e-predictor] connection is available within the time period. [e2e-predictor] [e2e-predictor] :param bool preload_content: [e2e-predictor] If True, the response's body will be preloaded into memory. [e2e-predictor] [e2e-predictor] :param bool decode_content: [e2e-predictor] If True, will attempt to decode the body based on the [e2e-predictor] 'content-encoding' header. [e2e-predictor] [e2e-predictor] :param release_conn: [e2e-predictor] If False, then the urlopen call will not release the connection [e2e-predictor] back into the pool once a response is received (but will release if [e2e-predictor] you read the entire contents of the response such as when [e2e-predictor] `preload_content=True`). This is useful if you're not preloading [e2e-predictor] the response's content immediately. You will need to call [e2e-predictor] ``r.release_conn()`` on the response ``r`` to return the connection [e2e-predictor] back into the pool. 
If None, it takes the value of ``preload_content`` [e2e-predictor] which defaults to ``True``. [e2e-predictor] [e2e-predictor] :param bool chunked: [e2e-predictor] If True, urllib3 will send the body using chunked transfer [e2e-predictor] encoding. Otherwise, urllib3 will send the body using the standard [e2e-predictor] content-length form. Defaults to False. [e2e-predictor] [e2e-predictor] :param int body_pos: [e2e-predictor] Position to seek to in file-like body in the event of a retry or [e2e-predictor] redirect. Typically this won't need to be set because urllib3 will [e2e-predictor] auto-populate the value when needed. [e2e-predictor] """ [e2e-predictor] parsed_url = parse_url(url) [e2e-predictor] destination_scheme = parsed_url.scheme [e2e-predictor] [e2e-predictor] if headers is None: [e2e-predictor] headers = self.headers [e2e-predictor] [e2e-predictor] if not isinstance(retries, Retry): [e2e-predictor] retries = Retry.from_int(retries, redirect=redirect, default=self.retries) [e2e-predictor] [e2e-predictor] if release_conn is None: [e2e-predictor] release_conn = preload_content [e2e-predictor] [e2e-predictor] # Check host [e2e-predictor] if assert_same_host and not self.is_same_host(url): [e2e-predictor] raise HostChangedError(self, url, retries) [e2e-predictor] [e2e-predictor] # Ensure that the URL we're connecting to is properly encoded [e2e-predictor] if url.startswith("/"): [e2e-predictor] url = to_str(_encode_target(url)) [e2e-predictor] else: [e2e-predictor] url = to_str(parsed_url.url) [e2e-predictor] [e2e-predictor] conn = None [e2e-predictor] [e2e-predictor] # Track whether `conn` needs to be released before [e2e-predictor] # returning/raising/recursing. Update this variable if necessary, and [e2e-predictor] # leave `release_conn` constant throughout the function. That way, if [e2e-predictor] # the function recurses, the original value of `release_conn` will be [e2e-predictor] # passed down into the recursive call, and its value will be respected. 
[e2e-predictor] # [e2e-predictor] # See issue #651 [1] for details. [e2e-predictor] # [e2e-predictor] # [1] [e2e-predictor] release_this_conn = release_conn [e2e-predictor] [e2e-predictor] http_tunnel_required = connection_requires_http_tunnel( [e2e-predictor] self.proxy, self.proxy_config, destination_scheme [e2e-predictor] ) [e2e-predictor] [e2e-predictor] # Merge the proxy headers. Only done when not using HTTP CONNECT. We [e2e-predictor] # have to copy the headers dict so we can safely change it without those [e2e-predictor] # changes being reflected in anyone else's copy. [e2e-predictor] if not http_tunnel_required: [e2e-predictor] headers = headers.copy() # type: ignore[attr-defined] [e2e-predictor] headers.update(self.proxy_headers) # type: ignore[union-attr] [e2e-predictor] [e2e-predictor] # Must keep the exception bound to a separate variable or else Python 3 [e2e-predictor] # complains about UnboundLocalError. [e2e-predictor] err = None [e2e-predictor] [e2e-predictor] # Keep track of whether we cleanly exited the except block. This [e2e-predictor] # ensures we do proper cleanup in finally. [e2e-predictor] clean_exit = False [e2e-predictor] [e2e-predictor] # Rewind body position, if needed. Record current position [e2e-predictor] # for future rewinds in the event of a redirect/retry. [e2e-predictor] body_pos = set_file_position(body, body_pos) [e2e-predictor] [e2e-predictor] try: [e2e-predictor] # Request a connection from the queue. [e2e-predictor] timeout_obj = self._get_timeout(timeout) [e2e-predictor] conn = self._get_conn(timeout=pool_timeout) [e2e-predictor] [e2e-predictor] conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] [e2e-predictor] [e2e-predictor] # Is this a closed/new connection that requires CONNECT tunnelling? 
[e2e-predictor] if self.proxy is not None and http_tunnel_required and conn.is_closed: [e2e-predictor] try: [e2e-predictor] self._prepare_proxy(conn) [e2e-predictor] except (BaseSSLError, OSError, SocketTimeout) as e: [e2e-predictor] self._raise_timeout( [e2e-predictor] err=e, url=self.proxy.url, timeout_value=conn.timeout [e2e-predictor] ) [e2e-predictor] raise [e2e-predictor] [e2e-predictor] # If we're going to release the connection in ``finally:``, then [e2e-predictor] # the response doesn't need to know about the connection. Otherwise [e2e-predictor] # it will also try to release it and we'll have a double-release [e2e-predictor] # mess. [e2e-predictor] response_conn = conn if not release_conn else None [e2e-predictor] [e2e-predictor] # Make the request on the HTTPConnection object [e2e-predictor] response = self._make_request( [e2e-predictor] conn, [e2e-predictor] method, [e2e-predictor] url, [e2e-predictor] timeout=timeout_obj, [e2e-predictor] body=body, [e2e-predictor] headers=headers, [e2e-predictor] chunked=chunked, [e2e-predictor] retries=retries, [e2e-predictor] response_conn=response_conn, [e2e-predictor] preload_content=preload_content, [e2e-predictor] decode_content=decode_content, [e2e-predictor] **response_kw, [e2e-predictor] ) [e2e-predictor] [e2e-predictor] # Everything went great! [e2e-predictor] clean_exit = True [e2e-predictor] [e2e-predictor] except EmptyPoolError: [e2e-predictor] # Didn't get a connection from the pool, no need to clean up [e2e-predictor] clean_exit = True [e2e-predictor] release_this_conn = False [e2e-predictor] raise [e2e-predictor] [e2e-predictor] except ( [e2e-predictor] TimeoutError, [e2e-predictor] HTTPException, [e2e-predictor] OSError, [e2e-predictor] ProtocolError, [e2e-predictor] BaseSSLError, [e2e-predictor] SSLError, [e2e-predictor] CertificateError, [e2e-predictor] ProxyError, [e2e-predictor] ) as e: [e2e-predictor] # Discard the connection for these exceptions. 
It will be [e2e-predictor] # replaced during the next _get_conn() call. [e2e-predictor] clean_exit = False [e2e-predictor] new_e: Exception = e [e2e-predictor] if isinstance(e, (BaseSSLError, CertificateError)): [e2e-predictor] new_e = SSLError(e) [e2e-predictor] if isinstance( [e2e-predictor] new_e, [e2e-predictor] ( [e2e-predictor] OSError, [e2e-predictor] NewConnectionError, [e2e-predictor] TimeoutError, [e2e-predictor] SSLError, [e2e-predictor] HTTPException, [e2e-predictor] ), [e2e-predictor] ) and (conn and conn.proxy and not conn.has_connected_to_proxy): [e2e-predictor] new_e = _wrap_proxy_error(new_e, conn.proxy.scheme) [e2e-predictor] elif isinstance(new_e, (OSError, HTTPException)): [e2e-predictor] new_e = ProtocolError("Connection aborted.", new_e) [e2e-predictor] [e2e-predictor] retries = retries.increment( [e2e-predictor] method, url, error=new_e, _pool=self, _stacktrace=sys.exc_info()[2] [e2e-predictor] ) [e2e-predictor] retries.sleep() [e2e-predictor] [e2e-predictor] # Keep track of the error for the retry warning. [e2e-predictor] err = e [e2e-predictor] [e2e-predictor] finally: [e2e-predictor] if not clean_exit: [e2e-predictor] # We hit some kind of exception, handled or otherwise. We need [e2e-predictor] # to throw the connection away unless explicitly told not to. [e2e-predictor] # Close the connection, set the variable to None, and make sure [e2e-predictor] # we put the None back in the pool to avoid leaking it. [e2e-predictor] if conn: [e2e-predictor] conn.close() [e2e-predictor] conn = None [e2e-predictor] release_this_conn = True [e2e-predictor] [e2e-predictor] if release_this_conn: [e2e-predictor] # Put the connection back to be reused. If the connection is [e2e-predictor] # expired then it will be None, which will get replaced with a [e2e-predictor] # fresh connection during _get_conn. 
[e2e-predictor] self._put_conn(conn) [e2e-predictor] [e2e-predictor] if not conn: [e2e-predictor] # Try again [e2e-predictor] log.warning( [e2e-predictor] "Retrying (%r) after connection broken by '%r': %s", retries, err, url [e2e-predictor] ) [e2e-predictor] > return self.urlopen( [e2e-predictor] method, [e2e-predictor] url, [e2e-predictor] body, [e2e-predictor] headers, [e2e-predictor] retries, [e2e-predictor] redirect, [e2e-predictor] assert_same_host, [e2e-predictor] timeout=timeout, [e2e-predictor] pool_timeout=pool_timeout, [e2e-predictor] release_conn=release_conn, [e2e-predictor] chunked=chunked, [e2e-predictor] body_pos=body_pos, [e2e-predictor] preload_content=preload_content, [e2e-predictor] decode_content=decode_content, [e2e-predictor] **response_kw, [e2e-predictor] ) [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:871: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] method = 'POST' [e2e-predictor] url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices' [e2e-predictor] body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io...ry": "512Mi"}, "requests": {"cpu": "10m", "memory": "256Mi"}}, "storageUri": "gs://kfserving-examples/models/pmml"}}}}' [e2e-predictor] headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'}) [e2e-predictor] retries = Retry(total=0, connect=None, read=None, redirect=None, status=None) [e2e-predictor] redirect = False, assert_same_host = False, timeout = None, pool_timeout = None [e2e-predictor] release_conn = True, chunked = False, body_pos = None, preload_content = True [e2e-predictor] decode_content = True, response_kw = {} [e2e-predictor] parsed_url = Url(scheme=None, 
auth=None, host=None, port=None, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices', query=None, fragment=None) [e2e-predictor] destination_scheme = None, conn = None, release_this_conn = True [e2e-predictor] http_tunnel_required = False [e2e-predictor] err = NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.c...a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)") [e2e-predictor] clean_exit = False [e2e-predictor] [e2e-predictor] def urlopen( # type: ignore[override] [e2e-predictor] self, [e2e-predictor] method: str, [e2e-predictor] url: str, [e2e-predictor] body: _TYPE_BODY | None = None, [e2e-predictor] headers: typing.Mapping[str, str] | None = None, [e2e-predictor] retries: Retry | bool | int | None = None, [e2e-predictor] redirect: bool = True, [e2e-predictor] assert_same_host: bool = True, [e2e-predictor] timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, [e2e-predictor] pool_timeout: int | None = None, [e2e-predictor] release_conn: bool | None = None, [e2e-predictor] chunked: bool = False, [e2e-predictor] body_pos: _TYPE_BODY_POSITION | None = None, [e2e-predictor] preload_content: bool = True, [e2e-predictor] decode_content: bool = True, [e2e-predictor] **response_kw: typing.Any, [e2e-predictor] ) -> BaseHTTPResponse: [e2e-predictor] """ [e2e-predictor] Get a connection from the pool and perform an HTTP request. This is the [e2e-predictor] lowest level call for making a request, so you'll need to specify all [e2e-predictor] the raw details. [e2e-predictor] [e2e-predictor] .. note:: [e2e-predictor] [e2e-predictor] More commonly, it's appropriate to use a convenience method [e2e-predictor] such as :meth:`request`. [e2e-predictor] [e2e-predictor] .. 
note:: [e2e-predictor] [e2e-predictor] `release_conn` will only behave as expected if [e2e-predictor] `preload_content=False` because we want to make [e2e-predictor] `preload_content=False` the default behaviour someday soon without [e2e-predictor] breaking backwards compatibility. [e2e-predictor] [e2e-predictor] :param method: [e2e-predictor] HTTP request method (such as GET, POST, PUT, etc.) [e2e-predictor] [e2e-predictor] :param url: [e2e-predictor] The URL to perform the request on. [e2e-predictor] [e2e-predictor] :param body: [e2e-predictor] Data to send in the request body, either :class:`str`, :class:`bytes`, [e2e-predictor] an iterable of :class:`str`/:class:`bytes`, or a file-like object. [e2e-predictor] [e2e-predictor] :param headers: [e2e-predictor] Dictionary of custom headers to send, such as User-Agent, [e2e-predictor] If-None-Match, etc. If None, pool headers are used. If provided, [e2e-predictor] these headers completely replace any pool-specific headers. [e2e-predictor] [e2e-predictor] :param retries: [e2e-predictor] Configure the number of retries to allow before raising a [e2e-predictor] :class:`~urllib3.exceptions.MaxRetryError` exception. [e2e-predictor] [e2e-predictor] If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a [e2e-predictor] :class:`~urllib3.util.retry.Retry` object for fine-grained control [e2e-predictor] over different types of retries. [e2e-predictor] Pass an integer number to retry connection errors that many times, [e2e-predictor] but no other types of errors. Pass zero to never retry. [e2e-predictor] [e2e-predictor] If ``False``, then retries are disabled and any exception is raised [e2e-predictor] immediately. Also, instead of raising a MaxRetryError on redirects, [e2e-predictor] the redirect response will be returned. [e2e-predictor] [e2e-predictor] :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 
[e2e-predictor] [e2e-predictor] :param redirect: [e2e-predictor] If True, automatically handle redirects (status codes 301, 302, [e2e-predictor] 303, 307, 308). Each redirect counts as a retry. Disabling retries [e2e-predictor] will disable redirect, too. [e2e-predictor] [e2e-predictor] :param assert_same_host: [e2e-predictor] If ``True``, will make sure that the host of the pool requests is [e2e-predictor] consistent else will raise HostChangedError. When ``False``, you can [e2e-predictor] use the pool on an HTTP proxy and request foreign hosts. [e2e-predictor] [e2e-predictor] :param timeout: [e2e-predictor] If specified, overrides the default timeout for this one [e2e-predictor] request. It may be a float (in seconds) or an instance of [e2e-predictor] :class:`urllib3.util.Timeout`. [e2e-predictor] [e2e-predictor] :param pool_timeout: [e2e-predictor] If set and the pool is set to block=True, then this method will [e2e-predictor] block for ``pool_timeout`` seconds and raise EmptyPoolError if no [e2e-predictor] connection is available within the time period. [e2e-predictor] [e2e-predictor] :param bool preload_content: [e2e-predictor] If True, the response's body will be preloaded into memory. [e2e-predictor] [e2e-predictor] :param bool decode_content: [e2e-predictor] If True, will attempt to decode the body based on the [e2e-predictor] 'content-encoding' header. [e2e-predictor] [e2e-predictor] :param release_conn: [e2e-predictor] If False, then the urlopen call will not release the connection [e2e-predictor] back into the pool once a response is received (but will release if [e2e-predictor] you read the entire contents of the response such as when [e2e-predictor] `preload_content=True`). This is useful if you're not preloading [e2e-predictor] the response's content immediately. You will need to call [e2e-predictor] ``r.release_conn()`` on the response ``r`` to return the connection [e2e-predictor] back into the pool. 
If None, it takes the value of ``preload_content`` [e2e-predictor] which defaults to ``True``. [e2e-predictor] [e2e-predictor] :param bool chunked: [e2e-predictor] If True, urllib3 will send the body using chunked transfer [e2e-predictor] encoding. Otherwise, urllib3 will send the body using the standard [e2e-predictor] content-length form. Defaults to False. [e2e-predictor] [e2e-predictor] :param int body_pos: [e2e-predictor] Position to seek to in file-like body in the event of a retry or [e2e-predictor] redirect. Typically this won't need to be set because urllib3 will [e2e-predictor] auto-populate the value when needed. [e2e-predictor] """ [e2e-predictor] parsed_url = parse_url(url) [e2e-predictor] destination_scheme = parsed_url.scheme [e2e-predictor] [e2e-predictor] if headers is None: [e2e-predictor] headers = self.headers [e2e-predictor] [e2e-predictor] if not isinstance(retries, Retry): [e2e-predictor] retries = Retry.from_int(retries, redirect=redirect, default=self.retries) [e2e-predictor] [e2e-predictor] if release_conn is None: [e2e-predictor] release_conn = preload_content [e2e-predictor] [e2e-predictor] # Check host [e2e-predictor] if assert_same_host and not self.is_same_host(url): [e2e-predictor] raise HostChangedError(self, url, retries) [e2e-predictor] [e2e-predictor] # Ensure that the URL we're connecting to is properly encoded [e2e-predictor] if url.startswith("/"): [e2e-predictor] url = to_str(_encode_target(url)) [e2e-predictor] else: [e2e-predictor] url = to_str(parsed_url.url) [e2e-predictor] [e2e-predictor] conn = None [e2e-predictor] [e2e-predictor] # Track whether `conn` needs to be released before [e2e-predictor] # returning/raising/recursing. Update this variable if necessary, and [e2e-predictor] # leave `release_conn` constant throughout the function. That way, if [e2e-predictor] # the function recurses, the original value of `release_conn` will be [e2e-predictor] # passed down into the recursive call, and its value will be respected. 
[e2e-predictor] # [e2e-predictor] # See issue #651 [1] for details. [e2e-predictor] # [e2e-predictor] # [1] [e2e-predictor] release_this_conn = release_conn [e2e-predictor] [e2e-predictor] http_tunnel_required = connection_requires_http_tunnel( [e2e-predictor] self.proxy, self.proxy_config, destination_scheme [e2e-predictor] ) [e2e-predictor] [e2e-predictor] # Merge the proxy headers. Only done when not using HTTP CONNECT. We [e2e-predictor] # have to copy the headers dict so we can safely change it without those [e2e-predictor] # changes being reflected in anyone else's copy. [e2e-predictor] if not http_tunnel_required: [e2e-predictor] headers = headers.copy() # type: ignore[attr-defined] [e2e-predictor] headers.update(self.proxy_headers) # type: ignore[union-attr] [e2e-predictor] [e2e-predictor] # Must keep the exception bound to a separate variable or else Python 3 [e2e-predictor] # complains about UnboundLocalError. [e2e-predictor] err = None [e2e-predictor] [e2e-predictor] # Keep track of whether we cleanly exited the except block. This [e2e-predictor] # ensures we do proper cleanup in finally. [e2e-predictor] clean_exit = False [e2e-predictor] [e2e-predictor] # Rewind body position, if needed. Record current position [e2e-predictor] # for future rewinds in the event of a redirect/retry. [e2e-predictor] body_pos = set_file_position(body, body_pos) [e2e-predictor] [e2e-predictor] try: [e2e-predictor] # Request a connection from the queue. [e2e-predictor] timeout_obj = self._get_timeout(timeout) [e2e-predictor] conn = self._get_conn(timeout=pool_timeout) [e2e-predictor] [e2e-predictor] conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] [e2e-predictor] [e2e-predictor] # Is this a closed/new connection that requires CONNECT tunnelling? 
[e2e-predictor] if self.proxy is not None and http_tunnel_required and conn.is_closed: [e2e-predictor] try: [e2e-predictor] self._prepare_proxy(conn) [e2e-predictor] except (BaseSSLError, OSError, SocketTimeout) as e: [e2e-predictor] self._raise_timeout( [e2e-predictor] err=e, url=self.proxy.url, timeout_value=conn.timeout [e2e-predictor] ) [e2e-predictor] raise [e2e-predictor] [e2e-predictor] # If we're going to release the connection in ``finally:``, then [e2e-predictor] # the response doesn't need to know about the connection. Otherwise [e2e-predictor] # it will also try to release it and we'll have a double-release [e2e-predictor] # mess. [e2e-predictor] response_conn = conn if not release_conn else None [e2e-predictor] [e2e-predictor] # Make the request on the HTTPConnection object [e2e-predictor] response = self._make_request( [e2e-predictor] conn, [e2e-predictor] method, [e2e-predictor] url, [e2e-predictor] timeout=timeout_obj, [e2e-predictor] body=body, [e2e-predictor] headers=headers, [e2e-predictor] chunked=chunked, [e2e-predictor] retries=retries, [e2e-predictor] response_conn=response_conn, [e2e-predictor] preload_content=preload_content, [e2e-predictor] decode_content=decode_content, [e2e-predictor] **response_kw, [e2e-predictor] ) [e2e-predictor] [e2e-predictor] # Everything went great! [e2e-predictor] clean_exit = True [e2e-predictor] [e2e-predictor] except EmptyPoolError: [e2e-predictor] # Didn't get a connection from the pool, no need to clean up [e2e-predictor] clean_exit = True [e2e-predictor] release_this_conn = False [e2e-predictor] raise [e2e-predictor] [e2e-predictor] except ( [e2e-predictor] TimeoutError, [e2e-predictor] HTTPException, [e2e-predictor] OSError, [e2e-predictor] ProtocolError, [e2e-predictor] BaseSSLError, [e2e-predictor] SSLError, [e2e-predictor] CertificateError, [e2e-predictor] ProxyError, [e2e-predictor] ) as e: [e2e-predictor] # Discard the connection for these exceptions. 
It will be [e2e-predictor] # replaced during the next _get_conn() call. [e2e-predictor] clean_exit = False [e2e-predictor] new_e: Exception = e [e2e-predictor] if isinstance(e, (BaseSSLError, CertificateError)): [e2e-predictor] new_e = SSLError(e) [e2e-predictor] if isinstance( [e2e-predictor] new_e, [e2e-predictor] ( [e2e-predictor] OSError, [e2e-predictor] NewConnectionError, [e2e-predictor] TimeoutError, [e2e-predictor] SSLError, [e2e-predictor] HTTPException, [e2e-predictor] ), [e2e-predictor] ) and (conn and conn.proxy and not conn.has_connected_to_proxy): [e2e-predictor] new_e = _wrap_proxy_error(new_e, conn.proxy.scheme) [e2e-predictor] elif isinstance(new_e, (OSError, HTTPException)): [e2e-predictor] new_e = ProtocolError("Connection aborted.", new_e) [e2e-predictor] [e2e-predictor] retries = retries.increment( [e2e-predictor] method, url, error=new_e, _pool=self, _stacktrace=sys.exc_info()[2] [e2e-predictor] ) [e2e-predictor] retries.sleep() [e2e-predictor] [e2e-predictor] # Keep track of the error for the retry warning. [e2e-predictor] err = e [e2e-predictor] [e2e-predictor] finally: [e2e-predictor] if not clean_exit: [e2e-predictor] # We hit some kind of exception, handled or otherwise. We need [e2e-predictor] # to throw the connection away unless explicitly told not to. [e2e-predictor] # Close the connection, set the variable to None, and make sure [e2e-predictor] # we put the None back in the pool to avoid leaking it. [e2e-predictor] if conn: [e2e-predictor] conn.close() [e2e-predictor] conn = None [e2e-predictor] release_this_conn = True [e2e-predictor] [e2e-predictor] if release_this_conn: [e2e-predictor] # Put the connection back to be reused. If the connection is [e2e-predictor] # expired then it will be None, which will get replaced with a [e2e-predictor] # fresh connection during _get_conn. 
[e2e-predictor]                     self._put_conn(conn)
[e2e-predictor]
[e2e-predictor]             if not conn:
[e2e-predictor]                 # Try again
[e2e-predictor]                 log.warning(
[e2e-predictor]                     "Retrying (%r) after connection broken by '%r': %s", retries, err, url
[e2e-predictor]                 )
[e2e-predictor] >           return self.urlopen(
[e2e-predictor]                 method,
[e2e-predictor]                 url,
[e2e-predictor]                 body,
[e2e-predictor]                 headers,
[e2e-predictor]                 retries,
[e2e-predictor]                 redirect,
[e2e-predictor]                 assert_same_host,
[e2e-predictor]                 timeout=timeout,
[e2e-predictor]                 pool_timeout=pool_timeout,
[e2e-predictor]                 release_conn=release_conn,
[e2e-predictor]                 chunked=chunked,
[e2e-predictor]                 body_pos=body_pos,
[e2e-predictor]                 preload_content=preload_content,
[e2e-predictor]                 decode_content=decode_content,
[e2e-predictor]                 **response_kw,
[e2e-predictor]             )
[e2e-predictor]
[e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:871:
[e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
[e2e-predictor]
[e2e-predictor] self =
[e2e-predictor] method = 'POST'
[e2e-predictor] url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
[e2e-predictor] body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io...ry": "512Mi"}, "requests": {"cpu": "10m", "memory": "256Mi"}}, "storageUri": "gs://kfserving-examples/models/pmml"}}}}'
[e2e-predictor] headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'})
[e2e-predictor] retries = Retry(total=0, connect=None, read=None, redirect=None, status=None)
[e2e-predictor] redirect = False, assert_same_host = False, timeout = None, pool_timeout = None
[e2e-predictor] release_conn = True, chunked = False, body_pos = None, preload_content = True
[e2e-predictor] decode_content = True, response_kw = {}
[e2e-predictor] parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices', query=None, fragment=None)
[e2e-predictor] destination_scheme = None, conn = None, release_this_conn = True
[e2e-predictor] http_tunnel_required = False, err = None, clean_exit = False
[e2e-predictor]
[e2e-predictor]     def urlopen(  # type: ignore[override]
[e2e-predictor]         self,
[e2e-predictor]         method: str,
[e2e-predictor]         url: str,
[e2e-predictor]         body: _TYPE_BODY | None = None,
[e2e-predictor]         headers: typing.Mapping[str, str] | None = None,
[e2e-predictor]         retries: Retry | bool | int | None = None,
[e2e-predictor]         redirect: bool = True,
[e2e-predictor]         assert_same_host: bool = True,
[e2e-predictor]         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
[e2e-predictor]         pool_timeout: int | None = None,
[e2e-predictor]         release_conn: bool | None = None,
[e2e-predictor]         chunked: bool = False,
[e2e-predictor]         body_pos: _TYPE_BODY_POSITION | None = None,
[e2e-predictor]         preload_content: bool = True,
[e2e-predictor]         decode_content: bool = True,
[e2e-predictor]         **response_kw: typing.Any,
[e2e-predictor]     ) -> BaseHTTPResponse:
[e2e-predictor]         """
[e2e-predictor]         Get a connection from the pool and perform an HTTP request. This is the
[e2e-predictor]         lowest level call for making a request, so you'll need to specify all
[e2e-predictor]         the raw details.
[e2e-predictor]
[e2e-predictor]         .. note::
[e2e-predictor]
[e2e-predictor]            More commonly, it's appropriate to use a convenience method
[e2e-predictor]            such as :meth:`request`.
[e2e-predictor]
[e2e-predictor]         .. note::
[e2e-predictor]
[e2e-predictor]            `release_conn` will only behave as expected if
[e2e-predictor]            `preload_content=False` because we want to make
[e2e-predictor]            `preload_content=False` the default behaviour someday soon without
[e2e-predictor]            breaking backwards compatibility.
[e2e-predictor]
[e2e-predictor]         :param method:
[e2e-predictor]             HTTP request method (such as GET, POST, PUT, etc.)
[e2e-predictor]
[e2e-predictor]         :param url:
[e2e-predictor]             The URL to perform the request on.
[e2e-predictor]
[e2e-predictor]         :param body:
[e2e-predictor]             Data to send in the request body, either :class:`str`, :class:`bytes`,
[e2e-predictor]             an iterable of :class:`str`/:class:`bytes`, or a file-like object.
[e2e-predictor]
[e2e-predictor]         :param headers:
[e2e-predictor]             Dictionary of custom headers to send, such as User-Agent,
[e2e-predictor]             If-None-Match, etc. If None, pool headers are used. If provided,
[e2e-predictor]             these headers completely replace any pool-specific headers.
[e2e-predictor]
[e2e-predictor]         :param retries:
[e2e-predictor]             Configure the number of retries to allow before raising a
[e2e-predictor]             :class:`~urllib3.exceptions.MaxRetryError` exception.
[e2e-predictor]
[e2e-predictor]             If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a
[e2e-predictor]             :class:`~urllib3.util.retry.Retry` object for fine-grained control
[e2e-predictor]             over different types of retries.
[e2e-predictor]             Pass an integer number to retry connection errors that many times,
[e2e-predictor]             but no other types of errors. Pass zero to never retry.
[e2e-predictor]
[e2e-predictor]             If ``False``, then retries are disabled and any exception is raised
[e2e-predictor]             immediately. Also, instead of raising a MaxRetryError on redirects,
[e2e-predictor]             the redirect response will be returned.
[e2e-predictor]
[e2e-predictor]         :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
[e2e-predictor]
[e2e-predictor]         :param redirect:
[e2e-predictor]             If True, automatically handle redirects (status codes 301, 302,
[e2e-predictor]             303, 307, 308). Each redirect counts as a retry. Disabling retries
[e2e-predictor]             will disable redirect, too.
[e2e-predictor]
[e2e-predictor]         :param assert_same_host:
[e2e-predictor]             If ``True``, will make sure that the host of the pool requests is
[e2e-predictor]             consistent else will raise HostChangedError. When ``False``, you can
[e2e-predictor]             use the pool on an HTTP proxy and request foreign hosts.
[e2e-predictor]
[e2e-predictor]         :param timeout:
[e2e-predictor]             If specified, overrides the default timeout for this one
[e2e-predictor]             request. It may be a float (in seconds) or an instance of
[e2e-predictor]             :class:`urllib3.util.Timeout`.
[e2e-predictor]
[e2e-predictor]         :param pool_timeout:
[e2e-predictor]             If set and the pool is set to block=True, then this method will
[e2e-predictor]             block for ``pool_timeout`` seconds and raise EmptyPoolError if no
[e2e-predictor]             connection is available within the time period.
[e2e-predictor]
[e2e-predictor]         :param bool preload_content:
[e2e-predictor]             If True, the response's body will be preloaded into memory.
[e2e-predictor]
[e2e-predictor]         :param bool decode_content:
[e2e-predictor]             If True, will attempt to decode the body based on the
[e2e-predictor]             'content-encoding' header.
[e2e-predictor]
[e2e-predictor]         :param release_conn:
[e2e-predictor]             If False, then the urlopen call will not release the connection
[e2e-predictor]             back into the pool once a response is received (but will release if
[e2e-predictor]             you read the entire contents of the response such as when
[e2e-predictor]             `preload_content=True`). This is useful if you're not preloading
[e2e-predictor]             the response's content immediately. You will need to call
[e2e-predictor]             ``r.release_conn()`` on the response ``r`` to return the connection
[e2e-predictor]             back into the pool. If None, it takes the value of ``preload_content``
[e2e-predictor]             which defaults to ``True``.
[e2e-predictor]
[e2e-predictor]         :param bool chunked:
[e2e-predictor]             If True, urllib3 will send the body using chunked transfer
[e2e-predictor]             encoding. Otherwise, urllib3 will send the body using the standard
[e2e-predictor]             content-length form. Defaults to False.
[e2e-predictor]
[e2e-predictor]         :param int body_pos:
[e2e-predictor]             Position to seek to in file-like body in the event of a retry or
[e2e-predictor]             redirect. Typically this won't need to be set because urllib3 will
[e2e-predictor]             auto-populate the value when needed.
[e2e-predictor]         """
[e2e-predictor]         parsed_url = parse_url(url)
[e2e-predictor]         destination_scheme = parsed_url.scheme
[e2e-predictor]
[e2e-predictor]         if headers is None:
[e2e-predictor]             headers = self.headers
[e2e-predictor]
[e2e-predictor]         if not isinstance(retries, Retry):
[e2e-predictor]             retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
[e2e-predictor]
[e2e-predictor]         if release_conn is None:
[e2e-predictor]             release_conn = preload_content
[e2e-predictor]
[e2e-predictor]         # Check host
[e2e-predictor]         if assert_same_host and not self.is_same_host(url):
[e2e-predictor]             raise HostChangedError(self, url, retries)
[e2e-predictor]
[e2e-predictor]         # Ensure that the URL we're connecting to is properly encoded
[e2e-predictor]         if url.startswith("/"):
[e2e-predictor]             url = to_str(_encode_target(url))
[e2e-predictor]         else:
[e2e-predictor]             url = to_str(parsed_url.url)
[e2e-predictor]
[e2e-predictor]         conn = None
[e2e-predictor]
[e2e-predictor]         # Track whether `conn` needs to be released before
[e2e-predictor]         # returning/raising/recursing. Update this variable if necessary, and
[e2e-predictor]         # leave `release_conn` constant throughout the function. That way, if
[e2e-predictor]         # the function recurses, the original value of `release_conn` will be
[e2e-predictor]         # passed down into the recursive call, and its value will be respected.
[e2e-predictor]         #
[e2e-predictor]         # See issue #651 [1] for details.
[e2e-predictor]         #
[e2e-predictor]         # [1]
[e2e-predictor]         release_this_conn = release_conn
[e2e-predictor]
[e2e-predictor]         http_tunnel_required = connection_requires_http_tunnel(
[e2e-predictor]             self.proxy, self.proxy_config, destination_scheme
[e2e-predictor]         )
[e2e-predictor]
[e2e-predictor]         # Merge the proxy headers. Only done when not using HTTP CONNECT. We
[e2e-predictor]         # have to copy the headers dict so we can safely change it without those
[e2e-predictor]         # changes being reflected in anyone else's copy.
[e2e-predictor]         if not http_tunnel_required:
[e2e-predictor]             headers = headers.copy()  # type: ignore[attr-defined]
[e2e-predictor]             headers.update(self.proxy_headers)  # type: ignore[union-attr]
[e2e-predictor]
[e2e-predictor]         # Must keep the exception bound to a separate variable or else Python 3
[e2e-predictor]         # complains about UnboundLocalError.
[e2e-predictor]         err = None
[e2e-predictor]
[e2e-predictor]         # Keep track of whether we cleanly exited the except block. This
[e2e-predictor]         # ensures we do proper cleanup in finally.
[e2e-predictor]         clean_exit = False
[e2e-predictor]
[e2e-predictor]         # Rewind body position, if needed. Record current position
[e2e-predictor]         # for future rewinds in the event of a redirect/retry.
[e2e-predictor]         body_pos = set_file_position(body, body_pos)
[e2e-predictor]
[e2e-predictor]         try:
[e2e-predictor]             # Request a connection from the queue.
[e2e-predictor]             timeout_obj = self._get_timeout(timeout)
[e2e-predictor]             conn = self._get_conn(timeout=pool_timeout)
[e2e-predictor]
[e2e-predictor]             conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]
[e2e-predictor]
[e2e-predictor]             # Is this a closed/new connection that requires CONNECT tunnelling?
[e2e-predictor]             if self.proxy is not None and http_tunnel_required and conn.is_closed:
[e2e-predictor]                 try:
[e2e-predictor]                     self._prepare_proxy(conn)
[e2e-predictor]                 except (BaseSSLError, OSError, SocketTimeout) as e:
[e2e-predictor]                     self._raise_timeout(
[e2e-predictor]                         err=e, url=self.proxy.url, timeout_value=conn.timeout
[e2e-predictor]                     )
[e2e-predictor]                     raise
[e2e-predictor]
[e2e-predictor]             # If we're going to release the connection in ``finally:``, then
[e2e-predictor]             # the response doesn't need to know about the connection. Otherwise
[e2e-predictor]             # it will also try to release it and we'll have a double-release
[e2e-predictor]             # mess.
[e2e-predictor]             response_conn = conn if not release_conn else None
[e2e-predictor]
[e2e-predictor]             # Make the request on the HTTPConnection object
[e2e-predictor]             response = self._make_request(
[e2e-predictor]                 conn,
[e2e-predictor]                 method,
[e2e-predictor]                 url,
[e2e-predictor]                 timeout=timeout_obj,
[e2e-predictor]                 body=body,
[e2e-predictor]                 headers=headers,
[e2e-predictor]                 chunked=chunked,
[e2e-predictor]                 retries=retries,
[e2e-predictor]                 response_conn=response_conn,
[e2e-predictor]                 preload_content=preload_content,
[e2e-predictor]                 decode_content=decode_content,
[e2e-predictor]                 **response_kw,
[e2e-predictor]             )
[e2e-predictor]
[e2e-predictor]             # Everything went great!
[e2e-predictor]             clean_exit = True
[e2e-predictor]
[e2e-predictor]         except EmptyPoolError:
[e2e-predictor]             # Didn't get a connection from the pool, no need to clean up
[e2e-predictor]             clean_exit = True
[e2e-predictor]             release_this_conn = False
[e2e-predictor]             raise
[e2e-predictor]
[e2e-predictor]         except (
[e2e-predictor]             TimeoutError,
[e2e-predictor]             HTTPException,
[e2e-predictor]             OSError,
[e2e-predictor]             ProtocolError,
[e2e-predictor]             BaseSSLError,
[e2e-predictor]             SSLError,
[e2e-predictor]             CertificateError,
[e2e-predictor]             ProxyError,
[e2e-predictor]         ) as e:
[e2e-predictor]             # Discard the connection for these exceptions.
[e2e-predictor]             # It will be replaced during the next _get_conn() call.
[e2e-predictor]             clean_exit = False
[e2e-predictor]             new_e: Exception = e
[e2e-predictor]             if isinstance(e, (BaseSSLError, CertificateError)):
[e2e-predictor]                 new_e = SSLError(e)
[e2e-predictor]             if isinstance(
[e2e-predictor]                 new_e,
[e2e-predictor]                 (
[e2e-predictor]                     OSError,
[e2e-predictor]                     NewConnectionError,
[e2e-predictor]                     TimeoutError,
[e2e-predictor]                     SSLError,
[e2e-predictor]                     HTTPException,
[e2e-predictor]                 ),
[e2e-predictor]             ) and (conn and conn.proxy and not conn.has_connected_to_proxy):
[e2e-predictor]                 new_e = _wrap_proxy_error(new_e, conn.proxy.scheme)
[e2e-predictor]             elif isinstance(new_e, (OSError, HTTPException)):
[e2e-predictor]                 new_e = ProtocolError("Connection aborted.", new_e)
[e2e-predictor]
[e2e-predictor] >           retries = retries.increment(
[e2e-predictor]                 method, url, error=new_e, _pool=self, _stacktrace=sys.exc_info()[2]
[e2e-predictor]             )
[e2e-predictor]
[e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:841:
[e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
[e2e-predictor]
[e2e-predictor] self = Retry(total=0, connect=None, read=None, redirect=None, status=None)
[e2e-predictor] method = 'POST'
[e2e-predictor] url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
[e2e-predictor] response = None
[e2e-predictor] error = NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.c...a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")
[e2e-predictor] _pool =
[e2e-predictor] _stacktrace =
[e2e-predictor]
[e2e-predictor]     def increment(
[e2e-predictor]         self,
[e2e-predictor]         method: str | None = None,
[e2e-predictor]         url: str | None = None,
[e2e-predictor]         response: BaseHTTPResponse | None = None,
[e2e-predictor]         error: Exception | None = None,
[e2e-predictor]         _pool: ConnectionPool | None = None,
[e2e-predictor]         _stacktrace: TracebackType | None = None,
[e2e-predictor]     ) -> Self:
[e2e-predictor]         """Return a new Retry object with incremented retry counters.
[e2e-predictor]
[e2e-predictor]         :param response: A response object, or None, if the server did not
[e2e-predictor]             return a response.
[e2e-predictor]         :type response: :class:`~urllib3.response.BaseHTTPResponse`
[e2e-predictor]         :param Exception error: An error encountered during the request, or
[e2e-predictor]             None if the response was received successfully.
[e2e-predictor]
[e2e-predictor]         :return: A new ``Retry`` object.
[e2e-predictor]         """
[e2e-predictor]         if self.total is False and error:
[e2e-predictor]             # Disabled, indicate to re-raise the error.
[e2e-predictor]             raise reraise(type(error), error, _stacktrace)
[e2e-predictor]
[e2e-predictor]         total = self.total
[e2e-predictor]         if total is not None:
[e2e-predictor]             total -= 1
[e2e-predictor]
[e2e-predictor]         connect = self.connect
[e2e-predictor]         read = self.read
[e2e-predictor]         redirect = self.redirect
[e2e-predictor]         status_count = self.status
[e2e-predictor]         other = self.other
[e2e-predictor]         cause = "unknown"
[e2e-predictor]         status = None
[e2e-predictor]         redirect_location = None
[e2e-predictor]
[e2e-predictor]         if error and self._is_connection_error(error):
[e2e-predictor]             # Connect retry?
[e2e-predictor]             if connect is False:
[e2e-predictor]                 raise reraise(type(error), error, _stacktrace)
[e2e-predictor]             elif connect is not None:
[e2e-predictor]                 connect -= 1
[e2e-predictor]
[e2e-predictor]         elif error and self._is_read_error(error):
[e2e-predictor]             # Read retry?
[e2e-predictor]             if read is False or method is None or not self._is_method_retryable(method):
[e2e-predictor]                 raise reraise(type(error), error, _stacktrace)
[e2e-predictor]             elif read is not None:
[e2e-predictor]                 read -= 1
[e2e-predictor]
[e2e-predictor]         elif error:
[e2e-predictor]             # Other retry?
[e2e-predictor]             if other is not None:
[e2e-predictor]                 other -= 1
[e2e-predictor]
[e2e-predictor]         elif response and response.get_redirect_location():
[e2e-predictor]             # Redirect retry?
[e2e-predictor]             if redirect is not None:
[e2e-predictor]                 redirect -= 1
[e2e-predictor]             cause = "too many redirects"
[e2e-predictor]             response_redirect_location = response.get_redirect_location()
[e2e-predictor]             if response_redirect_location:
[e2e-predictor]                 redirect_location = response_redirect_location
[e2e-predictor]             status = response.status
[e2e-predictor]
[e2e-predictor]         else:
[e2e-predictor]             # Incrementing because of a server error like a 500 in
[e2e-predictor]             # status_forcelist and the given method is in the allowed_methods
[e2e-predictor]             cause = ResponseError.GENERIC_ERROR
[e2e-predictor]             if response and response.status:
[e2e-predictor]                 if status_count is not None:
[e2e-predictor]                     status_count -= 1
[e2e-predictor]                 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
[e2e-predictor]                 status = response.status
[e2e-predictor]
[e2e-predictor]         history = self.history + (
[e2e-predictor]             RequestHistory(method, url, error, status, redirect_location),
[e2e-predictor]         )
[e2e-predictor]
[e2e-predictor]         new_retry = self.new(
[e2e-predictor]             total=total,
[e2e-predictor]             connect=connect,
[e2e-predictor]             read=read,
[e2e-predictor]             redirect=redirect,
[e2e-predictor]             status=status_count,
[e2e-predictor]             other=other,
[e2e-predictor]             history=history,
[e2e-predictor]         )
[e2e-predictor]
[e2e-predictor]         if new_retry.is_exhausted():
[e2e-predictor]             reason = error or ResponseError(cause)
[e2e-predictor] >           raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
[e2e-predictor] E           urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Max retries exceeded with url: /apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices (Caused by NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)"))
[e2e-predictor]
[e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError
[e2e-predictor] ------------------------------ Captured log call -------------------------------
[e2e-predictor] WARNING urllib3.connectionpool:connectionpool.py:868 Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")': /apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices
[e2e-predictor] WARNING urllib3.connectionpool:connectionpool.py:868 Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")': /apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices
[e2e-predictor] WARNING urllib3.connectionpool:connectionpool.py:868 Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")': /apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices
[e2e-predictor] _____________________________ test_pmml_v2_kserve ______________________________
[e2e-predictor] [gw0] linux -- Python 3.11.13 /workspace/source/python/kserve/.venv/bin/python
[e2e-predictor]
[e2e-predictor] self =
[e2e-predictor]
[e2e-predictor]     def _new_conn(self) -> socket.socket:
[e2e-predictor]         """Establish a socket connection and set nodelay settings on it.
[e2e-predictor]
[e2e-predictor]         :return: New socket connection.
[e2e-predictor]         """
[e2e-predictor]         try:
[e2e-predictor] >           sock = connection.create_connection(
[e2e-predictor]                 (self._dns_host, self.port),
[e2e-predictor]                 self.timeout,
[e2e-predictor]                 source_address=self.source_address,
[e2e-predictor]                 socket_options=self.socket_options,
[e2e-predictor]             )
[e2e-predictor]
[e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connection.py:204:
[e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
[e2e-predictor]
[e2e-predictor] address = ('a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', 6443)
[e2e-predictor] timeout = None, source_address = None, socket_options = [(6, 1, 1)]
[e2e-predictor]
[e2e-predictor]     def create_connection(
[e2e-predictor]         address: tuple[str, int],
[e2e-predictor]         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
[e2e-predictor]         source_address: tuple[str, int] | None = None,
[e2e-predictor]         socket_options: _TYPE_SOCKET_OPTIONS | None = None,
[e2e-predictor]     ) -> socket.socket:
[e2e-predictor]         """Connect to *address* and return the socket object.
[e2e-predictor]
[e2e-predictor]         Convenience function. Connect to *address* (a 2-tuple ``(host,
[e2e-predictor]         port)``) and return the socket object. Passing the optional
[e2e-predictor]         *timeout* parameter will set the timeout on the socket instance
[e2e-predictor]         before attempting to connect.
[e2e-predictor]         If no *timeout* is supplied, the
[e2e-predictor]         global default timeout setting returned by :func:`socket.getdefaulttimeout`
[e2e-predictor]         is used. If *source_address* is set it must be a tuple of (host, port)
[e2e-predictor]         for the socket to bind as a source address before making the connection.
[e2e-predictor]         An host of '' or port 0 tells the OS to use the default.
[e2e-predictor]         """
[e2e-predictor]
[e2e-predictor]         host, port = address
[e2e-predictor]         if host.startswith("["):
[e2e-predictor]             host = host.strip("[]")
[e2e-predictor]         err = None
[e2e-predictor]
[e2e-predictor]         # Using the value from allowed_gai_family() in the context of getaddrinfo lets
[e2e-predictor]         # us select whether to work with IPv4 DNS records, IPv6 records, or both.
[e2e-predictor]         # The original create_connection function always returns all records.
[e2e-predictor]         family = allowed_gai_family()
[e2e-predictor]
[e2e-predictor]         try:
[e2e-predictor]             host.encode("idna")
[e2e-predictor]         except UnicodeError:
[e2e-predictor]             raise LocationParseError(f"'{host}', label empty or too long") from None
[e2e-predictor]
[e2e-predictor] >       for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
[e2e-predictor]
[e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/util/connection.py:60:
[e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
[e2e-predictor]
[e2e-predictor] host = 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com'
[e2e-predictor] port = 6443, family =
[e2e-predictor] type = , proto = 0, flags = 0
[e2e-predictor]
[e2e-predictor]     def getaddrinfo(host, port, family=0, type=0, proto=0, flags=0):
[e2e-predictor]         """Resolve host and port into list of address info entries.
[e2e-predictor]
[e2e-predictor]         Translate the host/port argument into a sequence of 5-tuples that contain
[e2e-predictor]         all the necessary arguments for creating a socket connected to that service.
[e2e-predictor]
[e2e-predictor]         host is a domain name, a string representation of an IPv4/v6 address or
[e2e-predictor]         None. port is a string service name such as 'http', a numeric port number or
[e2e-predictor]         None. By passing None as the value of host and port, you can pass NULL to
[e2e-predictor]         the underlying C API.
[e2e-predictor]
[e2e-predictor]         The family, type and proto arguments can be optionally specified in order to
[e2e-predictor]         narrow the list of addresses returned. Passing zero as a value for each of
[e2e-predictor]         these arguments selects the full range of results.
[e2e-predictor]         """
[e2e-predictor]         # We override this function since we want to translate the numeric family
[e2e-predictor]         # and socket type values to enum constants.
[e2e-predictor]         addrlist = []
[e2e-predictor] >       for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
[e2e-predictor] E       socket.gaierror: [Errno -2] Name or service not known
[e2e-predictor]
[e2e-predictor] /usr/lib64/python3.11/socket.py:974: gaierror
[e2e-predictor]
[e2e-predictor] The above exception was the direct cause of the following exception:
[e2e-predictor]
[e2e-predictor] self =
[e2e-predictor] method = 'POST'
[e2e-predictor] url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
[e2e-predictor] body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io...": "10m", "memory": "256Mi"}}, "runtime": "kserve-pmmlserver", "storageUri": "gs://kfserving-examples/models/pmml"}}}}'
[e2e-predictor] headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'})
[e2e-predictor] retries = Retry(total=0, connect=None, read=None, redirect=None, status=None)
[e2e-predictor] redirect = False, assert_same_host = False, timeout = None, pool_timeout = None
[e2e-predictor] release_conn = True, chunked = False, body_pos = None, preload_content = True
[e2e-predictor] decode_content = True, response_kw = {} [e2e-predictor] parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices', query=None, fragment=None) [e2e-predictor] destination_scheme = None, conn = None, release_this_conn = True [e2e-predictor] http_tunnel_required = False, err = None, clean_exit = False [e2e-predictor] [e2e-predictor] def urlopen( # type: ignore[override] [e2e-predictor] self, [e2e-predictor] method: str, [e2e-predictor] url: str, [e2e-predictor] body: _TYPE_BODY | None = None, [e2e-predictor] headers: typing.Mapping[str, str] | None = None, [e2e-predictor] retries: Retry | bool | int | None = None, [e2e-predictor] redirect: bool = True, [e2e-predictor] assert_same_host: bool = True, [e2e-predictor] timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, [e2e-predictor] pool_timeout: int | None = None, [e2e-predictor] release_conn: bool | None = None, [e2e-predictor] chunked: bool = False, [e2e-predictor] body_pos: _TYPE_BODY_POSITION | None = None, [e2e-predictor] preload_content: bool = True, [e2e-predictor] decode_content: bool = True, [e2e-predictor] **response_kw: typing.Any, [e2e-predictor] ) -> BaseHTTPResponse: [e2e-predictor] """ [e2e-predictor] Get a connection from the pool and perform an HTTP request. This is the [e2e-predictor] lowest level call for making a request, so you'll need to specify all [e2e-predictor] the raw details. [e2e-predictor] [e2e-predictor] .. note:: [e2e-predictor] [e2e-predictor] More commonly, it's appropriate to use a convenience method [e2e-predictor] such as :meth:`request`. [e2e-predictor] [e2e-predictor] .. note:: [e2e-predictor] [e2e-predictor] `release_conn` will only behave as expected if [e2e-predictor] `preload_content=False` because we want to make [e2e-predictor] `preload_content=False` the default behaviour someday soon without [e2e-predictor] breaking backwards compatibility. 
[e2e-predictor] [e2e-predictor] :param method: [e2e-predictor] HTTP request method (such as GET, POST, PUT, etc.) [e2e-predictor] [e2e-predictor] :param url: [e2e-predictor] The URL to perform the request on. [e2e-predictor] [e2e-predictor] :param body: [e2e-predictor] Data to send in the request body, either :class:`str`, :class:`bytes`, [e2e-predictor] an iterable of :class:`str`/:class:`bytes`, or a file-like object. [e2e-predictor] [e2e-predictor] :param headers: [e2e-predictor] Dictionary of custom headers to send, such as User-Agent, [e2e-predictor] If-None-Match, etc. If None, pool headers are used. If provided, [e2e-predictor] these headers completely replace any pool-specific headers. [e2e-predictor] [e2e-predictor] :param retries: [e2e-predictor] Configure the number of retries to allow before raising a [e2e-predictor] :class:`~urllib3.exceptions.MaxRetryError` exception. [e2e-predictor] [e2e-predictor] If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a [e2e-predictor] :class:`~urllib3.util.retry.Retry` object for fine-grained control [e2e-predictor] over different types of retries. [e2e-predictor] Pass an integer number to retry connection errors that many times, [e2e-predictor] but no other types of errors. Pass zero to never retry. [e2e-predictor] [e2e-predictor] If ``False``, then retries are disabled and any exception is raised [e2e-predictor] immediately. Also, instead of raising a MaxRetryError on redirects, [e2e-predictor] the redirect response will be returned. [e2e-predictor] [e2e-predictor] :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. [e2e-predictor] [e2e-predictor] :param redirect: [e2e-predictor] If True, automatically handle redirects (status codes 301, 302, [e2e-predictor] 303, 307, 308). Each redirect counts as a retry. Disabling retries [e2e-predictor] will disable redirect, too. 
[e2e-predictor] [e2e-predictor] :param assert_same_host: [e2e-predictor] If ``True``, will make sure that the host of the pool requests is [e2e-predictor] consistent else will raise HostChangedError. When ``False``, you can [e2e-predictor] use the pool on an HTTP proxy and request foreign hosts. [e2e-predictor] [e2e-predictor] :param timeout: [e2e-predictor] If specified, overrides the default timeout for this one [e2e-predictor] request. It may be a float (in seconds) or an instance of [e2e-predictor] :class:`urllib3.util.Timeout`. [e2e-predictor] [e2e-predictor] :param pool_timeout: [e2e-predictor] If set and the pool is set to block=True, then this method will [e2e-predictor] block for ``pool_timeout`` seconds and raise EmptyPoolError if no [e2e-predictor] connection is available within the time period. [e2e-predictor] [e2e-predictor] :param bool preload_content: [e2e-predictor] If True, the response's body will be preloaded into memory. [e2e-predictor] [e2e-predictor] :param bool decode_content: [e2e-predictor] If True, will attempt to decode the body based on the [e2e-predictor] 'content-encoding' header. [e2e-predictor] [e2e-predictor] :param release_conn: [e2e-predictor] If False, then the urlopen call will not release the connection [e2e-predictor] back into the pool once a response is received (but will release if [e2e-predictor] you read the entire contents of the response such as when [e2e-predictor] `preload_content=True`). This is useful if you're not preloading [e2e-predictor] the response's content immediately. You will need to call [e2e-predictor] ``r.release_conn()`` on the response ``r`` to return the connection [e2e-predictor] back into the pool. If None, it takes the value of ``preload_content`` [e2e-predictor] which defaults to ``True``. [e2e-predictor] [e2e-predictor] :param bool chunked: [e2e-predictor] If True, urllib3 will send the body using chunked transfer [e2e-predictor] encoding. 
            Otherwise, urllib3 will send the body using the standard
            content-length form. Defaults to False.

        :param int body_pos:
            Position to seek to in file-like body in the event of a retry or
            redirect. Typically this won't need to be set because urllib3 will
            auto-populate the value when needed.
        """
        parsed_url = parse_url(url)
        destination_scheme = parsed_url.scheme

        if headers is None:
            headers = self.headers

        if not isinstance(retries, Retry):
            retries = Retry.from_int(retries, redirect=redirect, default=self.retries)

        if release_conn is None:
            release_conn = preload_content

        # Check host
        if assert_same_host and not self.is_same_host(url):
            raise HostChangedError(self, url, retries)

        # Ensure that the URL we're connecting to is properly encoded
        if url.startswith("/"):
            url = to_str(_encode_target(url))
        else:
            url = to_str(parsed_url.url)

        conn = None

        # Track whether `conn` needs to be released before
        # returning/raising/recursing. Update this variable if necessary, and
        # leave `release_conn` constant throughout the function. That way, if
        # the function recurses, the original value of `release_conn` will be
        # passed down into the recursive call, and its value will be respected.
        #
        # See issue #651 [1] for details.
        #
        # [1]
        release_this_conn = release_conn

        http_tunnel_required = connection_requires_http_tunnel(
            self.proxy, self.proxy_config, destination_scheme
        )

        # Merge the proxy headers. Only done when not using HTTP CONNECT. We
        # have to copy the headers dict so we can safely change it without those
        # changes being reflected in anyone else's copy.
        if not http_tunnel_required:
            headers = headers.copy()  # type: ignore[attr-defined]
            headers.update(self.proxy_headers)  # type: ignore[union-attr]

        # Must keep the exception bound to a separate variable or else Python 3
        # complains about UnboundLocalError.
        err = None

        # Keep track of whether we cleanly exited the except block. This
        # ensures we do proper cleanup in finally.
        clean_exit = False

        # Rewind body position, if needed. Record current position
        # for future rewinds in the event of a redirect/retry.
        body_pos = set_file_position(body, body_pos)

        try:
            # Request a connection from the queue.
            timeout_obj = self._get_timeout(timeout)
            conn = self._get_conn(timeout=pool_timeout)

            conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]

            # Is this a closed/new connection that requires CONNECT tunnelling?
            if self.proxy is not None and http_tunnel_required and conn.is_closed:
                try:
                    self._prepare_proxy(conn)
                except (BaseSSLError, OSError, SocketTimeout) as e:
                    self._raise_timeout(
                        err=e, url=self.proxy.url, timeout_value=conn.timeout
                    )
                    raise

            # If we're going to release the connection in ``finally:``, then
            # the response doesn't need to know about the connection. Otherwise
            # it will also try to release it and we'll have a double-release
            # mess.
            response_conn = conn if not release_conn else None

            # Make the request on the HTTPConnection object
>           response = self._make_request(
                conn,
                method,
                url,
                timeout=timeout_obj,
                body=body,
                headers=headers,
                chunked=chunked,
                retries=retries,
                response_conn=response_conn,
                preload_content=preload_content,
                decode_content=decode_content,
                **response_kw,
            )

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:787:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
conn =
method = 'POST'
url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io...": "10m", "memory": "256Mi"}}, "runtime": "kserve-pmmlserver", "storageUri": "gs://kfserving-examples/models/pmml"}}}}'
headers = HTTPHeaderDict({'Accept':
'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'})
retries = Retry(total=0, connect=None, read=None, redirect=None, status=None)
timeout = Timeout(connect=None, read=None, total=None), chunked = False
response_conn = None, preload_content = True, decode_content = True
enforce_content_length = True

    def _make_request(
        self,
        conn: BaseHTTPConnection,
        method: str,
        url: str,
        body: _TYPE_BODY | None = None,
        headers: typing.Mapping[str, str] | None = None,
        retries: Retry | None = None,
        timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
        chunked: bool = False,
        response_conn: BaseHTTPConnection | None = None,
        preload_content: bool = True,
        decode_content: bool = True,
        enforce_content_length: bool = True,
    ) -> BaseHTTPResponse:
        """
        Perform a request on a given urllib connection object taken from our
        pool.

        :param conn:
            a connection from one of our connection pools

        :param method:
            HTTP request method (such as GET, POST, PUT, etc.)

        :param url:
            The URL to perform the request on.

        :param body:
            Data to send in the request body, either :class:`str`, :class:`bytes`,
            an iterable of :class:`str`/:class:`bytes`, or a file-like object.

        :param headers:
            Dictionary of custom headers to send, such as User-Agent,
            If-None-Match, etc. If None, pool headers are used. If provided,
            these headers completely replace any pool-specific headers.

        :param retries:
            Configure the number of retries to allow before raising a
            :class:`~urllib3.exceptions.MaxRetryError` exception.

            Pass ``None`` to retry until you receive a response. Pass a
            :class:`~urllib3.util.retry.Retry` object for fine-grained control
            over different types of retries.
            Pass an integer number to retry connection errors that many times,
            but no other types of errors. Pass zero to never retry.

            If ``False``, then retries are disabled and any exception is raised
            immediately. Also, instead of raising a MaxRetryError on redirects,
            the redirect response will be returned.

        :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.

        :param timeout:
            If specified, overrides the default timeout for this one
            request. It may be a float (in seconds) or an instance of
            :class:`urllib3.util.Timeout`.

        :param chunked:
            If True, urllib3 will send the body using chunked transfer
            encoding. Otherwise, urllib3 will send the body using the standard
            content-length form. Defaults to False.

        :param response_conn:
            Set this to ``None`` if you will handle releasing the connection or
            set the connection to have the response release it.

        :param preload_content:
            If True, the response's body will be preloaded during construction.

        :param decode_content:
            If True, will attempt to decode the body based on the
            'content-encoding' header.

        :param enforce_content_length:
            Enforce content length checking. Body returned by server must match
            value of Content-Length header, if present. Otherwise, raise error.
        """
        self.num_requests += 1

        timeout_obj = self._get_timeout(timeout)
        timeout_obj.start_connect()
        conn.timeout = Timeout.resolve_default_timeout(timeout_obj.connect_timeout)

        try:
            # Trigger any extra validation we need to do.
            try:
                self._validate_conn(conn)
            except (SocketTimeout, BaseSSLError) as e:
                self._raise_timeout(err=e, url=url, timeout_value=conn.timeout)
                raise

            # _validate_conn() starts the connection to an HTTPS proxy
            # so we need to wrap errors with 'ProxyError' here too.
        except (
            OSError,
            NewConnectionError,
            TimeoutError,
            BaseSSLError,
            CertificateError,
            SSLError,
        ) as e:
            new_e: Exception = e
            if isinstance(e, (BaseSSLError, CertificateError)):
                new_e = SSLError(e)
            # If the connection didn't successfully connect to it's proxy
            # then there
            if isinstance(
                new_e, (OSError, NewConnectionError, TimeoutError, SSLError)
            ) and (conn and conn.proxy and not conn.has_connected_to_proxy):
                new_e = _wrap_proxy_error(new_e, conn.proxy.scheme)
>           raise new_e

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:488:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
conn =
method = 'POST'
url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io...": "10m", "memory": "256Mi"}}, "runtime": "kserve-pmmlserver", "storageUri": "gs://kfserving-examples/models/pmml"}}}}'
headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'})
retries = Retry(total=0, connect=None, read=None, redirect=None, status=None)
timeout = Timeout(connect=None, read=None, total=None), chunked = False
response_conn = None, preload_content = True, decode_content = True
enforce_content_length = True

    def _make_request(
        self,
        conn: BaseHTTPConnection,
        method: str,
        url: str,
        body: _TYPE_BODY | None = None,
        headers: typing.Mapping[str, str] | None = None,
        retries: Retry | None = None,
        timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
        chunked: bool = False,
        response_conn: BaseHTTPConnection | None = None,
        preload_content: bool = True,
        decode_content: bool = True,
        enforce_content_length: bool = True,
    ) -> BaseHTTPResponse:
        """
        Perform a request on a given urllib connection object taken from our
        pool.

        [... docstring identical to the _make_request frame above; omitted ...]
        """
        self.num_requests += 1

        timeout_obj = self._get_timeout(timeout)
        timeout_obj.start_connect()
        conn.timeout = Timeout.resolve_default_timeout(timeout_obj.connect_timeout)

        try:
            # Trigger any extra validation we need to do.
            try:
>               self._validate_conn(conn)

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:464:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
conn =

    def _validate_conn(self, conn: BaseHTTPConnection) -> None:
        """
        Called right before a request is made, after the socket is created.
        """
        super()._validate_conn(conn)

        # Force connect early to allow us to validate the connection.
        if conn.is_closed:
>           conn.connect()

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:1093:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =

    def connect(self) -> None:
        # Today we don't need to be doing this step before the /actual/ socket
        # connection, however in the future we'll need to decide whether to
        # create a new socket or re-use an existing "shared" socket as a part
        # of the HTTP/2 handshake dance.
        if self._tunnel_host is not None and self._tunnel_port is not None:
            probe_http2_host = self._tunnel_host
            probe_http2_port = self._tunnel_port
        else:
            probe_http2_host = self.host
            probe_http2_port = self.port

        # Check if the target origin supports HTTP/2.
        # If the value comes back as 'None' it means that the current thread
        # is probing for HTTP/2 support. Otherwise, we're waiting for another
        # probe to complete, or we get a value right away.
        target_supports_http2: bool | None
        if "h2" in ssl_.ALPN_PROTOCOLS:
            target_supports_http2 = http2_probe.acquire_and_get(
                host=probe_http2_host, port=probe_http2_port
            )
        else:
            # If HTTP/2 isn't going to be offered it doesn't matter if
            # the target supports HTTP/2. Don't want to make a probe.
            target_supports_http2 = False

        if self._connect_callback is not None:
            self._connect_callback(
                "before connect",
                thread_id=threading.get_ident(),
                target_supports_http2=target_supports_http2,
            )

        try:
            sock: socket.socket | ssl.SSLSocket
>           self.sock = sock = self._new_conn()

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connection.py:759:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =

    def _new_conn(self) -> socket.socket:
        """Establish a socket connection and set nodelay settings on it.

        :return: New socket connection.
        """
        try:
            sock = connection.create_connection(
                (self._dns_host, self.port),
                self.timeout,
                source_address=self.source_address,
                socket_options=self.socket_options,
            )
        except socket.gaierror as e:
>           raise NameResolutionError(self.host, self, e) from e
E           urllib3.exceptions.NameResolutionError: HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connection.py:211: NameResolutionError

The above exception was the direct cause of the following exception:

rest_v2_client =

    @pytest.mark.predictor
    @pytest.mark.asyncio(scope="session")
    async def test_pmml_v2_kserve(rest_v2_client):
        service_name = "isvc-pmml-v2-kserve"
        predictor = V1beta1PredictorSpec(
            min_replicas=1,
            model=V1beta1ModelSpec(
                model_format=V1beta1ModelFormat(
                    name="pmml",
                ),
                runtime="kserve-pmmlserver",
                storage_uri="gs://kfserving-examples/models/pmml",
                resources=V1ResourceRequirements(
                    requests={"cpu": "10m", "memory": "256Mi"},
                    limits={"cpu": "100m", "memory": "512Mi"},
                ),
            ),
        )

        isvc = V1beta1InferenceService(
            api_version=constants.KSERVE_V1BETA1,
            kind=constants.KSERVE_KIND_INFERENCESERVICE,
            metadata=client.V1ObjectMeta(
                name=service_name,
                namespace=KSERVE_TEST_NAMESPACE,
                labels={
                    constants.KSERVE_LABEL_NETWORKING_VISIBILITY: constants.KSERVE_LABEL_NETWORKING_VISIBILITY_EXPOSED,
                },
            ),
            spec=V1beta1InferenceServiceSpec(predictor=predictor),
        )

        kserve_client = KServeClient(
            config_file=os.environ.get("KUBECONFIG", "~/.kube/config")
        )
>       kserve_client.create(isvc)

predictor/test_pmml.py:169:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
inferenceservice = {'api_version': 'serving.kserve.io/v1beta1',
 'kind': 'InferenceService',
 'metadata': {'annotations': None,
...
 'worker_spec': None,
 'xgboost': None},
 'transformer': None},
 'status': None}
namespace = 'kserve-ci-e2e-test', watch = False, timeout_seconds = 600

    def create(
        self, inferenceservice, namespace=None, watch=False, timeout_seconds=600
    ):  # pylint:disable=inconsistent-return-statements
        """
        Create the inference service
        :param inferenceservice: inference service object
        :param namespace: defaults to current or default namespace
        :param watch: True to watch the created service until timeout elapsed or status is ready
        :param timeout_seconds: timeout seconds for watch, default to 600s
        :return: created inference service
        """

        version = inferenceservice.api_version.split("/")[1]

        if namespace is None:
            namespace = utils.get_isvc_namespace(inferenceservice)

        try:
>           outputs = self.api_instance.create_namespaced_custom_object(
                constants.KSERVE_GROUP,
                version,
                namespace,
                constants.KSERVE_PLURAL_INFERENCESERVICE,
                inferenceservice,
            )

../../python/kserve/kserve/api/kserve_client.py:145:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
group = 'serving.kserve.io', version = 'v1beta1'
namespace = 'kserve-ci-e2e-test', plural = 'inferenceservices'
body = {'api_version': 'serving.kserve.io/v1beta1',
 'kind': 'InferenceService',
 'metadata': {'annotations': None,
...
 'worker_spec': None,
 'xgboost': None},
 'transformer': None},
 'status': None}
kwargs = {'_return_http_data_only': True}

    def create_namespaced_custom_object(self, group, version, namespace, plural, body, **kwargs):  # noqa: E501
        """create_namespaced_custom_object  # noqa: E501

        Creates a namespace scoped Custom object  # noqa: E501
        This method makes a synchronous HTTP request by default.
        To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.create_namespaced_custom_object(group, version, namespace, plural, body, async_req=True)
        >>> result = thread.get()

        :param async_req bool: execute request asynchronously
        :param str group: The custom resource's group name (required)
        :param str version: The custom resource's version (required)
        :param str namespace: The custom resource's namespace (required)
        :param str plural: The custom resource's plural name. For TPRs this would be lowercase plural kind. (required)
        :param object body: The JSON schema of the Resource to create. (required)
        :param str pretty: If 'true', then the output is pretty printed.
        :param str dry_run: When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed
        :param str field_manager: fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint.
        :param str field_validation: fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. (optional)
        :param _preload_content: if False, the urllib3.HTTPResponse object will
                                 be returned without reading/decoding response
                                 data. Default is True.
        :param _request_timeout: timeout setting for this request. If one
                                 number provided, it will be total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
        :return: object
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
>       return self.create_namespaced_custom_object_with_http_info(group, version, namespace, plural, body, **kwargs)  # noqa: E501

../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api/custom_objects_api.py:231:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
group = 'serving.kserve.io', version = 'v1beta1'
namespace = 'kserve-ci-e2e-test', plural = 'inferenceservices'
body = {'api_version': 'serving.kserve.io/v1beta1',
 'kind': 'InferenceService',
 'metadata': {'annotations': None,
...
 'worker_spec': None,
 'xgboost': None},
 'transformer': None},
 'status': None}
kwargs = {'_return_http_data_only': True}
local_var_params = {'_return_http_data_only': True, 'all_params': ['group', 'version', 'namespace', 'plural', 'body', 'pretty', ...], 'au...
 'worker_spec': None,
 'xgboost': None},
 'transformer': None},
 'status': None}, ...}
all_params = ['group', 'version', 'namespace', 'plural', 'body', 'pretty', ...]
key = '_return_http_data_only', val = True, collection_formats = {}
path_params = {'group': 'serving.kserve.io', 'namespace': 'kserve-ci-e2e-test', 'plural': 'inferenceservices', 'version': 'v1beta1'}
query_params = []

    def create_namespaced_custom_object_with_http_info(self, group, version, namespace, plural, body, **kwargs):  # noqa: E501
        """create_namespaced_custom_object  # noqa: E501

        Creates a namespace scoped Custom object  # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.create_namespaced_custom_object_with_http_info(group, version, namespace, plural, body, async_req=True)
        >>> result = thread.get()

        [... parameter documentation identical to create_namespaced_custom_object above; omitted, except for: ...]
        :param _return_http_data_only: response data without head status code
                                       and headers
        :return: tuple(object, status_code(int), headers(HTTPHeaderDict))
                 If the method is called asynchronously,
                 returns the request thread.
        """

        local_var_params = locals()

        all_params = [
            'group',
            'version',
            'namespace',
            'plural',
            'body',
            'pretty',
            'dry_run',
            'field_manager',
            'field_validation'
        ]
        all_params.extend(
            [
                'async_req',
                '_return_http_data_only',
                '_preload_content',
                '_request_timeout'
            ]
        )

        for key, val in six.iteritems(local_var_params['kwargs']):
            if key not in all_params:
                raise ApiTypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method create_namespaced_custom_object" % key
                )
            local_var_params[key] = val
        del local_var_params['kwargs']
        # verify the required parameter 'group' is set
        if self.api_client.client_side_validation and ('group' not in local_var_params or  # noqa: E501
                                                        local_var_params['group'] is None):  # noqa: E501
            raise ApiValueError("Missing the required parameter `group` when calling `create_namespaced_custom_object`")  # noqa: E501
        # verify the required parameter 'version' is set
        if self.api_client.client_side_validation and ('version' not in local_var_params or  # noqa: E501
noqa: E501 [e2e-predictor] local_var_params['version'] is None): # noqa: E501 [e2e-predictor] raise ApiValueError("Missing the required parameter `version` when calling `create_namespaced_custom_object`") # noqa: E501 [e2e-predictor] # verify the required parameter 'namespace' is set [e2e-predictor] if self.api_client.client_side_validation and ('namespace' not in local_var_params or # noqa: E501 [e2e-predictor] local_var_params['namespace'] is None): # noqa: E501 [e2e-predictor] raise ApiValueError("Missing the required parameter `namespace` when calling `create_namespaced_custom_object`") # noqa: E501 [e2e-predictor] # verify the required parameter 'plural' is set [e2e-predictor] if self.api_client.client_side_validation and ('plural' not in local_var_params or # noqa: E501 [e2e-predictor] local_var_params['plural'] is None): # noqa: E501 [e2e-predictor] raise ApiValueError("Missing the required parameter `plural` when calling `create_namespaced_custom_object`") # noqa: E501 [e2e-predictor] # verify the required parameter 'body' is set [e2e-predictor] if self.api_client.client_side_validation and ('body' not in local_var_params or # noqa: E501 [e2e-predictor] local_var_params['body'] is None): # noqa: E501 [e2e-predictor] raise ApiValueError("Missing the required parameter `body` when calling `create_namespaced_custom_object`") # noqa: E501 [e2e-predictor] [e2e-predictor] collection_formats = {} [e2e-predictor] [e2e-predictor] path_params = {} [e2e-predictor] if 'group' in local_var_params: [e2e-predictor] path_params['group'] = local_var_params['group'] # noqa: E501 [e2e-predictor] if 'version' in local_var_params: [e2e-predictor] path_params['version'] = local_var_params['version'] # noqa: E501 [e2e-predictor] if 'namespace' in local_var_params: [e2e-predictor] path_params['namespace'] = local_var_params['namespace'] # noqa: E501 [e2e-predictor] if 'plural' in local_var_params: [e2e-predictor] path_params['plural'] = local_var_params['plural'] # noqa: E501 

        query_params = []
        if 'pretty' in local_var_params and local_var_params['pretty'] is not None:  # noqa: E501
            query_params.append(('pretty', local_var_params['pretty']))  # noqa: E501
        if 'dry_run' in local_var_params and local_var_params['dry_run'] is not None:  # noqa: E501
            query_params.append(('dryRun', local_var_params['dry_run']))  # noqa: E501
        if 'field_manager' in local_var_params and local_var_params['field_manager'] is not None:  # noqa: E501
            query_params.append(('fieldManager', local_var_params['field_manager']))  # noqa: E501
        if 'field_validation' in local_var_params and local_var_params['field_validation'] is not None:  # noqa: E501
            query_params.append(('fieldValidation', local_var_params['field_validation']))  # noqa: E501

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None
        if 'body' in local_var_params:
            body_params = local_var_params['body']
        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.select_header_accept(
            ['application/json'])  # noqa: E501

        # Authentication setting
        auth_settings = ['BearerToken']  # noqa: E501

>       return self.api_client.call_api(
            '/apis/{group}/{version}/namespaces/{namespace}/{plural}', 'POST',
            path_params,
            query_params,
            header_params,
            body=body_params,
            post_params=form_params,
            files=local_var_files,
            response_type='object',  # noqa: E501
            auth_settings=auth_settings,
            async_req=local_var_params.get('async_req'),
            _return_http_data_only=local_var_params.get('_return_http_data_only'),  # noqa: E501
            _preload_content=local_var_params.get('_preload_content', True),
            _request_timeout=local_var_params.get('_request_timeout'),
            collection_formats=collection_formats)

../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api/custom_objects_api.py:354:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
resource_path = '/apis/{group}/{version}/namespaces/{namespace}/{plural}'
method = 'POST'
path_params = {'group': 'serving.kserve.io', 'namespace': 'kserve-ci-e2e-test', 'plural': 'inferenceservices', 'version': 'v1beta1'}
query_params = []
header_params = {'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
body = {'api_version': 'serving.kserve.io/v1beta1',
 'kind': 'InferenceService',
 'metadata': {'annotations': None,
 ...
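The query-parameter block above translates Python snake_case keyword arguments into the camelCase query keys the Kubernetes API expects (`dry_run` becomes `dryRun`, and so on), skipping unset values. A simplified sketch of that mapping (helper name and mapping table are illustrative):

```python
# Mapping assumed from the generated code shown above.
QUERY_KEY_MAP = {
    "pretty": "pretty",
    "dry_run": "dryRun",
    "field_manager": "fieldManager",
    "field_validation": "fieldValidation",
}

def build_query_params(**kwargs):
    """Translate snake_case kwargs to (camelCase, value) pairs, dropping None."""
    return [(QUERY_KEY_MAP[k], v) for k, v in kwargs.items()
            if k in QUERY_KEY_MAP and v is not None]

print(build_query_params(dry_run="All", field_manager="e2e-test", pretty=None))
# [('dryRun', 'All'), ('fieldManager', 'e2e-test')]
```

In the failing run the list is empty (`query_params = []`), so no query string is appended to the URL.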
              'worker_spec': None,
              'xgboost': None},
 'transformer': None},
 'status': None}
post_params = [], files = {}, response_type = 'object'
auth_settings = ['BearerToken'], async_req = None, _return_http_data_only = True
collection_formats = {}, _preload_content = True, _request_timeout = None
_host = None

    def call_api(self, resource_path, method,
                 path_params=None, query_params=None, header_params=None,
                 body=None, post_params=None, files=None,
                 response_type=None, auth_settings=None, async_req=None,
                 _return_http_data_only=None, collection_formats=None,
                 _preload_content=True, _request_timeout=None, _host=None):
        """Makes the HTTP request (synchronous) and returns deserialized data.

        To make an async_req request, set the async_req parameter.

        :param resource_path: Path to method endpoint.
        :param method: Method to call.
        :param path_params: Path parameters in the url.
        :param query_params: Query parameters in the url.
        :param header_params: Header parameters to be
            placed in the request header.
        :param body: Request body.
        :param post_params dict: Request post form parameters,
            for `application/x-www-form-urlencoded`, `multipart/form-data`.
        :param auth_settings list: Auth Settings names for the request.
        :param response: Response data type.
        :param files dict: key -> filename, value -> filepath,
            for `multipart/form-data`.
        :param async_req bool: execute request asynchronously
        :param _return_http_data_only: response data without head status code
                                       and headers
        :param collection_formats: dict of collection formats for path, query,
            header, and post parameters.
        :param _preload_content: if False, the urllib3.HTTPResponse object will
                                 be returned without reading/decoding response
                                 data. Default is True.
        :param _request_timeout: timeout setting for this request. If one
                                 number provided, it will be total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
        :return:
            If async_req parameter is True,
            the request will be called asynchronously.
            The method will return the request thread.
            If parameter async_req is False or missing,
            then the method will return the response directly.
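The `_request_timeout` convention documented above appears at several layers of this traceback: a single number means a total timeout, while a 2-tuple means separate `(connection, read)` timeouts. A pure-Python sketch of that interpretation (the helper is a stand-in for the `urllib3.Timeout` construction done later in `rest.py`, not the library's own code):

```python
def normalize_timeout(_request_timeout):
    """Interpret _request_timeout per the docstring convention above."""
    if _request_timeout is None:
        return {"total": None, "connect": None, "read": None}
    if isinstance(_request_timeout, (int, float)):
        # One number: total request timeout.
        return {"total": _request_timeout, "connect": None, "read": None}
    if isinstance(_request_timeout, tuple) and len(_request_timeout) == 2:
        # Pair: (connection, read) timeouts.
        connect, read = _request_timeout
        return {"total": None, "connect": connect, "read": read}
    raise TypeError("unsupported _request_timeout: %r" % (_request_timeout,))

print(normalize_timeout(60))
print(normalize_timeout((5, 30)))
```

In this run `_request_timeout = None`, so the request waits on the platform's default socket behavior.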
        """
        if not async_req:
>           return self.__call_api(resource_path, method,
                                   path_params, query_params, header_params,
                                   body, post_params, files,
                                   response_type, auth_settings,
                                   _return_http_data_only, collection_formats,
                                   _preload_content, _request_timeout, _host)

../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api_client.py:348:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
resource_path = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
method = 'POST'
path_params = [('group', 'serving.kserve.io'), ('version', 'v1beta1'), ('namespace', 'kserve-ci-e2e-test'), ('plural', 'inferenceservices')]
query_params = []
header_params = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
body = {'apiVersion': 'serving.kserve.io/v1beta1', 'kind': 'InferenceService', 'metadata': {'labels': {'networking.kserve.io/...u': '100m', 'memory': '512Mi'}}, 'requests': {'cpu': '10m', 'memory': '256Mi'}}, 'runtime': 'kserve-pmmlserver', ...}}}}
post_params = [], files = {}, response_type = 'object'
auth_settings = ['BearerToken'], _return_http_data_only = True
collection_formats = {}, _preload_content = True, _request_timeout = None
_host = None

    def __call_api(
            self, resource_path, method, path_params=None,
            query_params=None, header_params=None, body=None, post_params=None,
            files=None, response_type=None, auth_settings=None,
            _return_http_data_only=None, collection_formats=None,
            _preload_content=True, _request_timeout=None, _host=None):

        config = self.configuration

        # header parameters
        header_params = header_params or {}
        header_params.update(self.default_headers)
        if self.cookie:
            header_params['Cookie'] = self.cookie
        if header_params:
            header_params = self.sanitize_for_serialization(header_params)
            header_params = dict(self.parameters_to_tuples(header_params,
                                                           collection_formats))

        # path parameters
        if path_params:
            path_params = self.sanitize_for_serialization(path_params)
            path_params = self.parameters_to_tuples(path_params,
                                                    collection_formats)
            for k, v in path_params:
                # specified safe chars, encode everything
                resource_path = resource_path.replace(
                    '{%s}' % k,
                    quote(str(v), safe=config.safe_chars_for_path_param)
                )

        # query parameters
        if query_params:
            query_params = self.sanitize_for_serialization(query_params)
            query_params = self.parameters_to_tuples(query_params,
                                                     collection_formats)

        # post parameters
        if post_params or files:
            post_params = post_params if post_params else []
            post_params = self.sanitize_for_serialization(post_params)
            post_params = self.parameters_to_tuples(post_params,
                                                    collection_formats)
            post_params.extend(self.files_parameters(files))

        # auth setting
        self.update_params_for_auth(header_params, query_params, auth_settings)

        # body
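The path-parameter loop above expands each `{name}` placeholder in the resource template with the percent-encoded parameter value, which is how `/apis/{group}/{version}/namespaces/{namespace}/{plural}` becomes the concrete InferenceService URL seen in the frame locals. A stdlib-only sketch of that step (helper name is illustrative):

```python
from urllib.parse import quote

def expand_path(template: str, path_params: dict, safe: str = "") -> str:
    """Substitute {name} placeholders with percent-encoded values."""
    for k, v in path_params.items():
        template = template.replace("{%s}" % k, quote(str(v), safe=safe))
    return template

path = expand_path(
    "/apis/{group}/{version}/namespaces/{namespace}/{plural}",
    {"group": "serving.kserve.io", "version": "v1beta1",
     "namespace": "kserve-ci-e2e-test", "plural": "inferenceservices"},
)
print(path)
# /apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices
```

The real client additionally honors `configuration.safe_chars_for_path_param`, which is what the `safe=` argument stands in for here.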
        if body:
            body = self.sanitize_for_serialization(body)

        # request url
        if _host is None:
            url = self.configuration.host + resource_path
        else:
            # use server/host defined in path or operation instead
            url = _host + resource_path

        # perform request and return response
>       response_data = self.request(
            method, url, query_params=query_params, headers=header_params,
            post_params=post_params, body=body,
            _preload_content=_preload_content,
            _request_timeout=_request_timeout)

../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api_client.py:180:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
method = 'POST'
url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
query_params = []
headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
post_params = []
body = {'apiVersion': 'serving.kserve.io/v1beta1', 'kind': 'InferenceService', 'metadata': {'labels': {'networking.kserve.io/...u': '100m', 'memory': '512Mi'}}, 'requests': {'cpu': '10m', 'memory': '256Mi'}}, 'runtime': 'kserve-pmmlserver', ...}}}}
_preload_content = True, _request_timeout = None

    def request(self, method, url, query_params=None, headers=None,
                post_params=None, body=None, _preload_content=True,
                _request_timeout=None):
        """Makes the HTTP request using RESTClient."""
        if method == "GET":
            return self.rest_client.GET(url,
                                        query_params=query_params,
                                        _preload_content=_preload_content,
                                        _request_timeout=_request_timeout,
                                        headers=headers)
        elif method == "HEAD":
            return self.rest_client.HEAD(url,
                                         query_params=query_params,
                                         _preload_content=_preload_content,
                                         _request_timeout=_request_timeout,
                                         headers=headers)
        elif method == "OPTIONS":
            return self.rest_client.OPTIONS(url,
                                            query_params=query_params,
                                            headers=headers,
                                            _preload_content=_preload_content,
                                            _request_timeout=_request_timeout)
        elif method == "POST":
>           return self.rest_client.POST(url,
                                         query_params=query_params,
                                         headers=headers,
                                         post_params=post_params,
                                         _preload_content=_preload_content,
                                         _request_timeout=_request_timeout,

../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api_client.py:391:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
query_params = [], post_params = []
body = {'apiVersion': 'serving.kserve.io/v1beta1', 'kind': 'InferenceService', 'metadata': {'labels': {'networking.kserve.io/...u': '100m', 'memory': '512Mi'}}, 'requests': {'cpu': '10m', 'memory': '256Mi'}}, 'runtime': 'kserve-pmmlserver', ...}}}}
_preload_content = True, _request_timeout = None

    def POST(self, url, headers=None, query_params=None, post_params=None,
             body=None, _preload_content=True, _request_timeout=None):
>       return self.request("POST", url,
                            headers=headers,
                            query_params=query_params,
                            post_params=post_params,
                            _preload_content=_preload_content,
                            _request_timeout=_request_timeout,
                            body=body)

../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/rest.py:279:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
method = 'POST'
url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
query_params = []
headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
body = {'apiVersion': 'serving.kserve.io/v1beta1', 'kind': 'InferenceService', 'metadata': {'labels': {'networking.kserve.io/...u': '100m', 'memory': '512Mi'}}, 'requests': {'cpu': '10m', 'memory': '256Mi'}}, 'runtime': 'kserve-pmmlserver', ...}}}}
post_params = {}, _preload_content = True, _request_timeout = None

    def request(self, method, url, query_params=None, headers=None,
                body=None, post_params=None, _preload_content=True,
                _request_timeout=None):
        """Perform requests.

        :param method: http request method
        :param url: http request url
        :param query_params: query parameters in the url
        :param headers: http request headers
        :param body: request json body, for `application/json`
        :param post_params: request post parameters,
                            `application/x-www-form-urlencoded`
                            and `multipart/form-data`
        :param _preload_content: if False, the urllib3.HTTPResponse object will
                                 be returned without reading/decoding response
                                 data. Default is True.
        :param _request_timeout: timeout setting for this request. If one
                                 number provided, it will be total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
        """
        method = method.upper()
        assert method in ['GET', 'HEAD', 'DELETE', 'POST', 'PUT',
                          'PATCH', 'OPTIONS']

        if post_params and body:
            raise ApiValueError(
                "body parameter cannot be used with post_params parameter."
            )

        post_params = post_params or {}
        headers = headers or {}

        timeout = None
        if _request_timeout:
            if isinstance(_request_timeout, (int, ) if six.PY3 else (int, long)):  # noqa: E501,F821
                timeout = urllib3.Timeout(total=_request_timeout)
            elif (isinstance(_request_timeout, tuple) and
                  len(_request_timeout) == 2):
                timeout = urllib3.Timeout(
                    connect=_request_timeout[0], read=_request_timeout[1])

        if 'Content-Type' not in headers:
            headers['Content-Type'] = 'application/json'

        try:
            # For `POST`, `PUT`, `PATCH`, `OPTIONS`, `DELETE`
            if method in ['POST', 'PUT', 'PATCH', 'OPTIONS', 'DELETE']:
                if query_params:
                    url += '?' + urlencode(query_params)
                if (re.search('json', headers['Content-Type'], re.IGNORECASE) or
                        headers['Content-Type'] == 'application/apply-patch+yaml'):
                    if headers['Content-Type'] == 'application/json-patch+json':
                        if not isinstance(body, list):
                            headers['Content-Type'] = \
                                'application/strategic-merge-patch+json'
                    request_body = None
                    if body is not None:
                        request_body = json.dumps(body)
>                   r = self.pool_manager.request(
                        method, url,
                        body=request_body,
                        preload_content=_preload_content,
                        timeout=timeout,

../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/rest.py:172:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
method = 'POST'
url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io...": "10m", "memory": "256Mi"}}, "runtime": "kserve-pmmlserver", "storageUri": "gs://kfserving-examples/models/pmml"}}}}'
fields = None
headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
json = None
urlopen_kw = {'body': '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking....kserve-pmmlserver", "storageUri": "gs://kfserving-examples/models/pmml"}}}}', 'preload_content': True, 'timeout': None}

    def request(
        self,
        method: str,
        url: str,
        body: _TYPE_BODY | None = None,
        fields: _TYPE_FIELDS | None = None,
        headers: typing.Mapping[str, str] | None = None,
        json: typing.Any | None = None,
        **urlopen_kw: typing.Any,
    ) -> BaseHTTPResponse:
        """
        Make a request using :meth:`urlopen` with the appropriate encoding of
        ``fields`` based on the ``method`` used.

        This is a convenience method that requires the least amount of manual
        effort. It can be used in most situations, while still having the
        option to drop down to more specific methods when necessary, such as
        :meth:`request_encode_url`, :meth:`request_encode_body`,
        or even the lowest level :meth:`urlopen`.

        :param method:
            HTTP request method (such as GET, POST, PUT, etc.)
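Just before the failing `pool_manager.request` call above, `rest.py` defaults the `Content-Type` to `application/json` and, for JSON-ish content types, serializes the sanitized body dict with `json.dumps`. A simplified sketch of that serialization step (helper name is illustrative; patch-specific content-type rewriting is omitted):

```python
import json
import re

def prepare_json_request(body, headers):
    """Default Content-Type and serialize the body for JSON requests."""
    headers = dict(headers)
    headers.setdefault("Content-Type", "application/json")
    request_body = None
    if body is not None and re.search("json", headers["Content-Type"], re.IGNORECASE):
        request_body = json.dumps(body)
    return request_body, headers

body = {"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService"}
request_body, headers = prepare_json_request(body, {"Accept": "application/json"})
print(headers["Content-Type"])  # application/json
print(request_body)
```

This is why the next frames show `body` as a JSON string rather than a dict: the dict leaves the kubernetes client layer as serialized text.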

        :param url:
            The URL to perform the request on.

        :param body:
            Data to send in the request body, either :class:`str`, :class:`bytes`,
            an iterable of :class:`str`/:class:`bytes`, or a file-like object.

        :param fields:
            Data to encode and send in the URL or request body, depending on ``method``.

        :param headers:
            Dictionary of custom headers to send, such as User-Agent,
            If-None-Match, etc. If None, pool headers are used. If provided,
            these headers completely replace any pool-specific headers.

        :param json:
            Data to encode and send as JSON with UTF-encoded in the request body.
            The ``"Content-Type"`` header will be set to ``"application/json"``
            unless specified otherwise.
        """
        method = method.upper()

        if json is not None and body is not None:
            raise TypeError(
                "request got values for both 'body' and 'json' parameters which are mutually exclusive"
            )

        if json is not None:
            if headers is None:
                headers = self.headers

            if not ("content-type" in map(str.lower, headers.keys())):
                headers = HTTPHeaderDict(headers)
                headers["Content-Type"] = "application/json"

            body = _json.dumps(json, separators=(",", ":"), ensure_ascii=False).encode(
                "utf-8"
            )

        if body is not None:
            urlopen_kw["body"] = body

        if method in self._encode_url_methods:
            return self.request_encode_url(
                method,
                url,
                fields=fields,  # type: ignore[arg-type]
                headers=headers,
                **urlopen_kw,
            )
        else:
>           return self.request_encode_body(
                method, url, fields=fields, headers=headers, **urlopen_kw
            )

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/_request_methods.py:143:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
method = 'POST'
url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
fields = None
headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
encode_multipart = True, multipart_boundary = None
urlopen_kw = {'body': '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking....kserve-pmmlserver", "storageUri": "gs://kfserving-examples/models/pmml"}}}}', 'preload_content': True, 'timeout': None}
extra_kw = {'body': '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking....nt': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'}), 'preload_content': True, 'timeout': None}

    def request_encode_body(
        self,
        method: str,
        url: str,
        fields: _TYPE_FIELDS | None = None,
        headers: typing.Mapping[str, str] | None = None,
        encode_multipart: bool = True,
        multipart_boundary: str | None = None,
        **urlopen_kw: str,
    ) -> BaseHTTPResponse:
        """
        Make a request using :meth:`urlopen` with the ``fields`` encoded in
        the body. This is useful for request methods like POST, PUT, PATCH, etc.

        When ``encode_multipart=True`` (default), then
        :func:`urllib3.encode_multipart_formdata` is used to encode
        the payload with the appropriate content type. Otherwise
        :func:`urllib.parse.urlencode` is used with the
        'application/x-www-form-urlencoded' content type.

        Multipart encoding must be used when posting files, and it's reasonably
        safe to use it in other times too. However, it may break request
        signing, such as with OAuth.

        Supports an optional ``fields`` parameter of key/value strings AND
        key/filetuple. A filetuple is a (filename, data, MIME type) tuple where
        the MIME type is optional. For example::

            fields = {
                'foo': 'bar',
                'fakefile': ('foofile.txt', 'contents of foofile'),
                'realfile': ('barfile.txt', open('realfile').read()),
                'typedfile': ('bazfile.bin', open('bazfile').read(),
                              'image/jpeg'),
                'nonamefile': 'contents of nonamefile field',
            }

        When uploading a file, providing a filename (the first parameter of the
        tuple) is optional but recommended to best mimic behavior of browsers.

        Note that if ``headers`` are supplied, the 'Content-Type' header will
        be overwritten because it depends on the dynamic random boundary string
        which is used to compose the body of the request. The random boundary
        string can be explicitly set with the ``multipart_boundary`` parameter.

        :param method:
            HTTP request method (such as GET, POST, PUT, etc.)

        :param url:
            The URL to perform the request on.

        :param fields:
            Data to encode and send in the request body.

        :param headers:
            Dictionary of custom headers to send, such as User-Agent,
            If-None-Match, etc. If None, pool headers are used. If provided,
            these headers completely replace any pool-specific headers.

        :param encode_multipart:
            If True, encode the ``fields`` using the multipart/form-data MIME
            format.

        :param multipart_boundary:
            If not specified, then a random boundary will be generated using
            :func:`urllib3.filepost.choose_boundary`.
        """
        if headers is None:
            headers = self.headers

        extra_kw: dict[str, typing.Any] = {"headers": HTTPHeaderDict(headers)}
        body: bytes | str

        if fields:
            if "body" in urlopen_kw:
                raise TypeError(
                    "request got values for both 'fields' and 'body', can only specify one."
                )

            if encode_multipart:
                body, content_type = encode_multipart_formdata(
                    fields, boundary=multipart_boundary
                )
            else:
                body, content_type = (
                    urlencode(fields),  # type: ignore[arg-type]
                    "application/x-www-form-urlencoded",
                )

            extra_kw["body"] = body
            extra_kw["headers"].setdefault("Content-Type", content_type)

        extra_kw.update(urlopen_kw)

>       return self.urlopen(method, url, **extra_kw)

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/_request_methods.py:278:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
method = 'POST'
url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
redirect = True
kw = {'assert_same_host': False, 'body': '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata...', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'}), 'preload_content': True, ...}
u = Url(scheme='https', auth=None, host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', p...443, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices', query=None, fragment=None)
conn =

    def urlopen(  # type: ignore[override]
        self, method: str, url: str, redirect: bool = True, **kw: typing.Any
    ) -> BaseHTTPResponse:
        """
        Same as :meth:`urllib3.HTTPConnectionPool.urlopen`
        with custom cross-host redirect logic and only sends the request-uri
        portion of the ``url``.

        The given ``url`` parameter must be absolute, such that an appropriate
        :class:`urllib3.connectionpool.ConnectionPool` can be chosen for it.
        """
        u = parse_url(url)

        if u.scheme is None:
            warnings.warn(
                "URLs without a scheme (ie 'https://') are deprecated and will raise an error "
                "in a future version of urllib3. To avoid this DeprecationWarning ensure all URLs "
                "start with 'https://' or 'http://'. Read more in this issue: "
                "https://github.com/urllib3/urllib3/issues/2920",
                category=DeprecationWarning,
                stacklevel=2,
            )

        conn = self.connection_from_host(u.host, port=u.port, scheme=u.scheme)

        kw["assert_same_host"] = False
        kw["redirect"] = False

        if "headers" not in kw:
            kw["headers"] = self.headers

        if self._proxy_requires_url_absolute_form(u):
            response = conn.urlopen(method, url, **kw)
        else:
>           response = conn.urlopen(method, u.request_uri, **kw)

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/poolmanager.py:457:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
method = 'POST'
url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io...": "10m", "memory": "256Mi"}}, "runtime": "kserve-pmmlserver", "storageUri": "gs://kfserving-examples/models/pmml"}}}}'
headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'})
retries = Retry(total=2, connect=None, read=None, redirect=None, status=None)
redirect = False, assert_same_host = False, timeout = None, pool_timeout = None
release_conn = True, chunked = False, body_pos = None, preload_content = True
decode_content = True, response_kw = {}
parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices', query=None, fragment=None)
destination_scheme = None, conn = None, release_this_conn = True
http_tunnel_required = False
err = NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.c...a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")
clean_exit = False

    def urlopen(  # type: ignore[override]
        self,
        method: str,
        url: str,
        body: _TYPE_BODY | None = None,
        headers: typing.Mapping[str, str] | None = None,
        retries: Retry | bool | int | None = None,
        redirect: bool = True,
        assert_same_host: bool = True,
        timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
        pool_timeout: int | None = None,
        release_conn: bool | None = None,
        chunked: bool = False,
        body_pos: _TYPE_BODY_POSITION | None = None,
        preload_content: bool = True,
        decode_content: bool = True,
        **response_kw: typing.Any,
    ) -> BaseHTTPResponse:
        """
        Get a connection from the pool and perform an HTTP request. This is the
        lowest level call for making a request, so you'll need to specify all
        the raw details.

        .. note::

            More commonly, it's appropriate to use a convenience method
            such as :meth:`request`.

        .. note::

            `release_conn` will only behave as expected if
            `preload_content=False` because we want to make
            `preload_content=False` the default behaviour someday soon without
            breaking backwards compatibility.

        :param method:
            HTTP request method (such as GET, POST, PUT, etc.)

        :param url:
            The URL to perform the request on.

        :param body:
            Data to send in the request body, either :class:`str`, :class:`bytes`,
            an iterable of :class:`str`/:class:`bytes`, or a file-like object.

        :param headers:
            Dictionary of custom headers to send, such as User-Agent,
            If-None-Match, etc. If None, pool headers are used. If provided,
            these headers completely replace any pool-specific headers.

        :param retries:
            Configure the number of retries to allow before raising a
            :class:`~urllib3.exceptions.MaxRetryError` exception.

            If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a
            :class:`~urllib3.util.retry.Retry` object for fine-grained control
            over different types of retries.
[e2e-predictor] Pass an integer number to retry connection errors that many times, [e2e-predictor] but no other types of errors. Pass zero to never retry. [e2e-predictor] [e2e-predictor] If ``False``, then retries are disabled and any exception is raised [e2e-predictor] immediately. Also, instead of raising a MaxRetryError on redirects, [e2e-predictor] the redirect response will be returned. [e2e-predictor] [e2e-predictor] :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. [e2e-predictor] [e2e-predictor] :param redirect: [e2e-predictor] If True, automatically handle redirects (status codes 301, 302, [e2e-predictor] 303, 307, 308). Each redirect counts as a retry. Disabling retries [e2e-predictor] will disable redirect, too. [e2e-predictor] [e2e-predictor] :param assert_same_host: [e2e-predictor] If ``True``, will make sure that the host of the pool requests is [e2e-predictor] consistent else will raise HostChangedError. When ``False``, you can [e2e-predictor] use the pool on an HTTP proxy and request foreign hosts. [e2e-predictor] [e2e-predictor] :param timeout: [e2e-predictor] If specified, overrides the default timeout for this one [e2e-predictor] request. It may be a float (in seconds) or an instance of [e2e-predictor] :class:`urllib3.util.Timeout`. [e2e-predictor] [e2e-predictor] :param pool_timeout: [e2e-predictor] If set and the pool is set to block=True, then this method will [e2e-predictor] block for ``pool_timeout`` seconds and raise EmptyPoolError if no [e2e-predictor] connection is available within the time period. [e2e-predictor] [e2e-predictor] :param bool preload_content: [e2e-predictor] If True, the response's body will be preloaded into memory. [e2e-predictor] [e2e-predictor] :param bool decode_content: [e2e-predictor] If True, will attempt to decode the body based on the [e2e-predictor] 'content-encoding' header. 
[e2e-predictor] [e2e-predictor] :param release_conn: [e2e-predictor] If False, then the urlopen call will not release the connection [e2e-predictor] back into the pool once a response is received (but will release if [e2e-predictor] you read the entire contents of the response such as when [e2e-predictor] `preload_content=True`). This is useful if you're not preloading [e2e-predictor] the response's content immediately. You will need to call [e2e-predictor] ``r.release_conn()`` on the response ``r`` to return the connection [e2e-predictor] back into the pool. If None, it takes the value of ``preload_content`` [e2e-predictor] which defaults to ``True``. [e2e-predictor] [e2e-predictor] :param bool chunked: [e2e-predictor] If True, urllib3 will send the body using chunked transfer [e2e-predictor] encoding. Otherwise, urllib3 will send the body using the standard [e2e-predictor] content-length form. Defaults to False. [e2e-predictor] [e2e-predictor] :param int body_pos: [e2e-predictor] Position to seek to in file-like body in the event of a retry or [e2e-predictor] redirect. Typically this won't need to be set because urllib3 will [e2e-predictor] auto-populate the value when needed. 
[e2e-predictor] """ [e2e-predictor] parsed_url = parse_url(url) [e2e-predictor] destination_scheme = parsed_url.scheme [e2e-predictor] [e2e-predictor] if headers is None: [e2e-predictor] headers = self.headers [e2e-predictor] [e2e-predictor] if not isinstance(retries, Retry): [e2e-predictor] retries = Retry.from_int(retries, redirect=redirect, default=self.retries) [e2e-predictor] [e2e-predictor] if release_conn is None: [e2e-predictor] release_conn = preload_content [e2e-predictor] [e2e-predictor] # Check host [e2e-predictor] if assert_same_host and not self.is_same_host(url): [e2e-predictor] raise HostChangedError(self, url, retries) [e2e-predictor] [e2e-predictor] # Ensure that the URL we're connecting to is properly encoded [e2e-predictor] if url.startswith("/"): [e2e-predictor] url = to_str(_encode_target(url)) [e2e-predictor] else: [e2e-predictor] url = to_str(parsed_url.url) [e2e-predictor] [e2e-predictor] conn = None [e2e-predictor] [e2e-predictor] # Track whether `conn` needs to be released before [e2e-predictor] # returning/raising/recursing. Update this variable if necessary, and [e2e-predictor] # leave `release_conn` constant throughout the function. That way, if [e2e-predictor] # the function recurses, the original value of `release_conn` will be [e2e-predictor] # passed down into the recursive call, and its value will be respected. [e2e-predictor] # [e2e-predictor] # See issue #651 [1] for details. [e2e-predictor] # [e2e-predictor] # [1] [e2e-predictor] release_this_conn = release_conn [e2e-predictor] [e2e-predictor] http_tunnel_required = connection_requires_http_tunnel( [e2e-predictor] self.proxy, self.proxy_config, destination_scheme [e2e-predictor] ) [e2e-predictor] [e2e-predictor] # Merge the proxy headers. Only done when not using HTTP CONNECT. We [e2e-predictor] # have to copy the headers dict so we can safely change it without those [e2e-predictor] # changes being reflected in anyone else's copy. 
[e2e-predictor] if not http_tunnel_required: [e2e-predictor] headers = headers.copy() # type: ignore[attr-defined] [e2e-predictor] headers.update(self.proxy_headers) # type: ignore[union-attr] [e2e-predictor] [e2e-predictor] # Must keep the exception bound to a separate variable or else Python 3 [e2e-predictor] # complains about UnboundLocalError. [e2e-predictor] err = None [e2e-predictor] [e2e-predictor] # Keep track of whether we cleanly exited the except block. This [e2e-predictor] # ensures we do proper cleanup in finally. [e2e-predictor] clean_exit = False [e2e-predictor] [e2e-predictor] # Rewind body position, if needed. Record current position [e2e-predictor] # for future rewinds in the event of a redirect/retry. [e2e-predictor] body_pos = set_file_position(body, body_pos) [e2e-predictor] [e2e-predictor] try: [e2e-predictor] # Request a connection from the queue. [e2e-predictor] timeout_obj = self._get_timeout(timeout) [e2e-predictor] conn = self._get_conn(timeout=pool_timeout) [e2e-predictor] [e2e-predictor] conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] [e2e-predictor] [e2e-predictor] # Is this a closed/new connection that requires CONNECT tunnelling? [e2e-predictor] if self.proxy is not None and http_tunnel_required and conn.is_closed: [e2e-predictor] try: [e2e-predictor] self._prepare_proxy(conn) [e2e-predictor] except (BaseSSLError, OSError, SocketTimeout) as e: [e2e-predictor] self._raise_timeout( [e2e-predictor] err=e, url=self.proxy.url, timeout_value=conn.timeout [e2e-predictor] ) [e2e-predictor] raise [e2e-predictor] [e2e-predictor] # If we're going to release the connection in ``finally:``, then [e2e-predictor] # the response doesn't need to know about the connection. Otherwise [e2e-predictor] # it will also try to release it and we'll have a double-release [e2e-predictor] # mess. 
[e2e-predictor] response_conn = conn if not release_conn else None [e2e-predictor] [e2e-predictor] # Make the request on the HTTPConnection object [e2e-predictor] response = self._make_request( [e2e-predictor] conn, [e2e-predictor] method, [e2e-predictor] url, [e2e-predictor] timeout=timeout_obj, [e2e-predictor] body=body, [e2e-predictor] headers=headers, [e2e-predictor] chunked=chunked, [e2e-predictor] retries=retries, [e2e-predictor] response_conn=response_conn, [e2e-predictor] preload_content=preload_content, [e2e-predictor] decode_content=decode_content, [e2e-predictor] **response_kw, [e2e-predictor] ) [e2e-predictor] [e2e-predictor] # Everything went great! [e2e-predictor] clean_exit = True [e2e-predictor] [e2e-predictor] except EmptyPoolError: [e2e-predictor] # Didn't get a connection from the pool, no need to clean up [e2e-predictor] clean_exit = True [e2e-predictor] release_this_conn = False [e2e-predictor] raise [e2e-predictor] [e2e-predictor] except ( [e2e-predictor] TimeoutError, [e2e-predictor] HTTPException, [e2e-predictor] OSError, [e2e-predictor] ProtocolError, [e2e-predictor] BaseSSLError, [e2e-predictor] SSLError, [e2e-predictor] CertificateError, [e2e-predictor] ProxyError, [e2e-predictor] ) as e: [e2e-predictor] # Discard the connection for these exceptions. It will be [e2e-predictor] # replaced during the next _get_conn() call. 
[e2e-predictor] clean_exit = False [e2e-predictor] new_e: Exception = e [e2e-predictor] if isinstance(e, (BaseSSLError, CertificateError)): [e2e-predictor] new_e = SSLError(e) [e2e-predictor] if isinstance( [e2e-predictor] new_e, [e2e-predictor] ( [e2e-predictor] OSError, [e2e-predictor] NewConnectionError, [e2e-predictor] TimeoutError, [e2e-predictor] SSLError, [e2e-predictor] HTTPException, [e2e-predictor] ), [e2e-predictor] ) and (conn and conn.proxy and not conn.has_connected_to_proxy): [e2e-predictor] new_e = _wrap_proxy_error(new_e, conn.proxy.scheme) [e2e-predictor] elif isinstance(new_e, (OSError, HTTPException)): [e2e-predictor] new_e = ProtocolError("Connection aborted.", new_e) [e2e-predictor] [e2e-predictor] retries = retries.increment( [e2e-predictor] method, url, error=new_e, _pool=self, _stacktrace=sys.exc_info()[2] [e2e-predictor] ) [e2e-predictor] retries.sleep() [e2e-predictor] [e2e-predictor] # Keep track of the error for the retry warning. [e2e-predictor] err = e [e2e-predictor] [e2e-predictor] finally: [e2e-predictor] if not clean_exit: [e2e-predictor] # We hit some kind of exception, handled or otherwise. We need [e2e-predictor] # to throw the connection away unless explicitly told not to. [e2e-predictor] # Close the connection, set the variable to None, and make sure [e2e-predictor] # we put the None back in the pool to avoid leaking it. [e2e-predictor] if conn: [e2e-predictor] conn.close() [e2e-predictor] conn = None [e2e-predictor] release_this_conn = True [e2e-predictor] [e2e-predictor] if release_this_conn: [e2e-predictor] # Put the connection back to be reused. If the connection is [e2e-predictor] # expired then it will be None, which will get replaced with a [e2e-predictor] # fresh connection during _get_conn. 
[e2e-predictor] self._put_conn(conn) [e2e-predictor] [e2e-predictor] if not conn: [e2e-predictor] # Try again [e2e-predictor] log.warning( [e2e-predictor] "Retrying (%r) after connection broken by '%r': %s", retries, err, url [e2e-predictor] ) [e2e-predictor] > return self.urlopen( [e2e-predictor] method, [e2e-predictor] url, [e2e-predictor] body, [e2e-predictor] headers, [e2e-predictor] retries, [e2e-predictor] redirect, [e2e-predictor] assert_same_host, [e2e-predictor] timeout=timeout, [e2e-predictor] pool_timeout=pool_timeout, [e2e-predictor] release_conn=release_conn, [e2e-predictor] chunked=chunked, [e2e-predictor] body_pos=body_pos, [e2e-predictor] preload_content=preload_content, [e2e-predictor] decode_content=decode_content, [e2e-predictor] **response_kw, [e2e-predictor] ) [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:871: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] method = 'POST' [e2e-predictor] url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices' [e2e-predictor] body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io...": "10m", "memory": "256Mi"}}, "runtime": "kserve-pmmlserver", "storageUri": "gs://kfserving-examples/models/pmml"}}}}' [e2e-predictor] headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'}) [e2e-predictor] retries = Retry(total=1, connect=None, read=None, redirect=None, status=None) [e2e-predictor] redirect = False, assert_same_host = False, timeout = None, pool_timeout = None [e2e-predictor] release_conn = True, chunked = False, body_pos = None, preload_content = True [e2e-predictor] decode_content = True, response_kw = {} [e2e-predictor] parsed_url = Url(scheme=None, 
auth=None, host=None, port=None, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices', query=None, fragment=None) [e2e-predictor] destination_scheme = None, conn = None, release_this_conn = True [e2e-predictor] http_tunnel_required = False [e2e-predictor] err = NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.c...a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)") [e2e-predictor] clean_exit = False [e2e-predictor] [e2e-predictor] def urlopen( # type: ignore[override] [e2e-predictor] self, [e2e-predictor] method: str, [e2e-predictor] url: str, [e2e-predictor] body: _TYPE_BODY | None = None, [e2e-predictor] headers: typing.Mapping[str, str] | None = None, [e2e-predictor] retries: Retry | bool | int | None = None, [e2e-predictor] redirect: bool = True, [e2e-predictor] assert_same_host: bool = True, [e2e-predictor] timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, [e2e-predictor] pool_timeout: int | None = None, [e2e-predictor] release_conn: bool | None = None, [e2e-predictor] chunked: bool = False, [e2e-predictor] body_pos: _TYPE_BODY_POSITION | None = None, [e2e-predictor] preload_content: bool = True, [e2e-predictor] decode_content: bool = True, [e2e-predictor] **response_kw: typing.Any, [e2e-predictor] ) -> BaseHTTPResponse: [e2e-predictor] """ [e2e-predictor] Get a connection from the pool and perform an HTTP request. This is the [e2e-predictor] lowest level call for making a request, so you'll need to specify all [e2e-predictor] the raw details. [e2e-predictor] [e2e-predictor] .. note:: [e2e-predictor] [e2e-predictor] More commonly, it's appropriate to use a convenience method [e2e-predictor] such as :meth:`request`. [e2e-predictor] [e2e-predictor] .. 
note:: [e2e-predictor] [e2e-predictor] `release_conn` will only behave as expected if [e2e-predictor] `preload_content=False` because we want to make [e2e-predictor] `preload_content=False` the default behaviour someday soon without [e2e-predictor] breaking backwards compatibility. [e2e-predictor] [e2e-predictor] :param method: [e2e-predictor] HTTP request method (such as GET, POST, PUT, etc.) [e2e-predictor] [e2e-predictor] :param url: [e2e-predictor] The URL to perform the request on. [e2e-predictor] [e2e-predictor] :param body: [e2e-predictor] Data to send in the request body, either :class:`str`, :class:`bytes`, [e2e-predictor] an iterable of :class:`str`/:class:`bytes`, or a file-like object. [e2e-predictor] [e2e-predictor] :param headers: [e2e-predictor] Dictionary of custom headers to send, such as User-Agent, [e2e-predictor] If-None-Match, etc. If None, pool headers are used. If provided, [e2e-predictor] these headers completely replace any pool-specific headers. [e2e-predictor] [e2e-predictor] :param retries: [e2e-predictor] Configure the number of retries to allow before raising a [e2e-predictor] :class:`~urllib3.exceptions.MaxRetryError` exception. [e2e-predictor] [e2e-predictor] If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a [e2e-predictor] :class:`~urllib3.util.retry.Retry` object for fine-grained control [e2e-predictor] over different types of retries. [e2e-predictor] Pass an integer number to retry connection errors that many times, [e2e-predictor] but no other types of errors. Pass zero to never retry. [e2e-predictor] [e2e-predictor] If ``False``, then retries are disabled and any exception is raised [e2e-predictor] immediately. Also, instead of raising a MaxRetryError on redirects, [e2e-predictor] the redirect response will be returned. [e2e-predictor] [e2e-predictor] :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 
[e2e-predictor] [e2e-predictor] :param redirect: [e2e-predictor] If True, automatically handle redirects (status codes 301, 302, [e2e-predictor] 303, 307, 308). Each redirect counts as a retry. Disabling retries [e2e-predictor] will disable redirect, too. [e2e-predictor] [e2e-predictor] :param assert_same_host: [e2e-predictor] If ``True``, will make sure that the host of the pool requests is [e2e-predictor] consistent else will raise HostChangedError. When ``False``, you can [e2e-predictor] use the pool on an HTTP proxy and request foreign hosts. [e2e-predictor] [e2e-predictor] :param timeout: [e2e-predictor] If specified, overrides the default timeout for this one [e2e-predictor] request. It may be a float (in seconds) or an instance of [e2e-predictor] :class:`urllib3.util.Timeout`. [e2e-predictor] [e2e-predictor] :param pool_timeout: [e2e-predictor] If set and the pool is set to block=True, then this method will [e2e-predictor] block for ``pool_timeout`` seconds and raise EmptyPoolError if no [e2e-predictor] connection is available within the time period. [e2e-predictor] [e2e-predictor] :param bool preload_content: [e2e-predictor] If True, the response's body will be preloaded into memory. [e2e-predictor] [e2e-predictor] :param bool decode_content: [e2e-predictor] If True, will attempt to decode the body based on the [e2e-predictor] 'content-encoding' header. [e2e-predictor] [e2e-predictor] :param release_conn: [e2e-predictor] If False, then the urlopen call will not release the connection [e2e-predictor] back into the pool once a response is received (but will release if [e2e-predictor] you read the entire contents of the response such as when [e2e-predictor] `preload_content=True`). This is useful if you're not preloading [e2e-predictor] the response's content immediately. You will need to call [e2e-predictor] ``r.release_conn()`` on the response ``r`` to return the connection [e2e-predictor] back into the pool. 
If None, it takes the value of ``preload_content`` [e2e-predictor] which defaults to ``True``. [e2e-predictor] [e2e-predictor] :param bool chunked: [e2e-predictor] If True, urllib3 will send the body using chunked transfer [e2e-predictor] encoding. Otherwise, urllib3 will send the body using the standard [e2e-predictor] content-length form. Defaults to False. [e2e-predictor] [e2e-predictor] :param int body_pos: [e2e-predictor] Position to seek to in file-like body in the event of a retry or [e2e-predictor] redirect. Typically this won't need to be set because urllib3 will [e2e-predictor] auto-populate the value when needed. [e2e-predictor] """ [e2e-predictor] parsed_url = parse_url(url) [e2e-predictor] destination_scheme = parsed_url.scheme [e2e-predictor] [e2e-predictor] if headers is None: [e2e-predictor] headers = self.headers [e2e-predictor] [e2e-predictor] if not isinstance(retries, Retry): [e2e-predictor] retries = Retry.from_int(retries, redirect=redirect, default=self.retries) [e2e-predictor] [e2e-predictor] if release_conn is None: [e2e-predictor] release_conn = preload_content [e2e-predictor] [e2e-predictor] # Check host [e2e-predictor] if assert_same_host and not self.is_same_host(url): [e2e-predictor] raise HostChangedError(self, url, retries) [e2e-predictor] [e2e-predictor] # Ensure that the URL we're connecting to is properly encoded [e2e-predictor] if url.startswith("/"): [e2e-predictor] url = to_str(_encode_target(url)) [e2e-predictor] else: [e2e-predictor] url = to_str(parsed_url.url) [e2e-predictor] [e2e-predictor] conn = None [e2e-predictor] [e2e-predictor] # Track whether `conn` needs to be released before [e2e-predictor] # returning/raising/recursing. Update this variable if necessary, and [e2e-predictor] # leave `release_conn` constant throughout the function. That way, if [e2e-predictor] # the function recurses, the original value of `release_conn` will be [e2e-predictor] # passed down into the recursive call, and its value will be respected. 
[e2e-predictor] # [e2e-predictor] # See issue #651 [1] for details. [e2e-predictor] # [e2e-predictor] # [1] [e2e-predictor] release_this_conn = release_conn [e2e-predictor] [e2e-predictor] http_tunnel_required = connection_requires_http_tunnel( [e2e-predictor] self.proxy, self.proxy_config, destination_scheme [e2e-predictor] ) [e2e-predictor] [e2e-predictor] # Merge the proxy headers. Only done when not using HTTP CONNECT. We [e2e-predictor] # have to copy the headers dict so we can safely change it without those [e2e-predictor] # changes being reflected in anyone else's copy. [e2e-predictor] if not http_tunnel_required: [e2e-predictor] headers = headers.copy() # type: ignore[attr-defined] [e2e-predictor] headers.update(self.proxy_headers) # type: ignore[union-attr] [e2e-predictor] [e2e-predictor] # Must keep the exception bound to a separate variable or else Python 3 [e2e-predictor] # complains about UnboundLocalError. [e2e-predictor] err = None [e2e-predictor] [e2e-predictor] # Keep track of whether we cleanly exited the except block. This [e2e-predictor] # ensures we do proper cleanup in finally. [e2e-predictor] clean_exit = False [e2e-predictor] [e2e-predictor] # Rewind body position, if needed. Record current position [e2e-predictor] # for future rewinds in the event of a redirect/retry. [e2e-predictor] body_pos = set_file_position(body, body_pos) [e2e-predictor] [e2e-predictor] try: [e2e-predictor] # Request a connection from the queue. [e2e-predictor] timeout_obj = self._get_timeout(timeout) [e2e-predictor] conn = self._get_conn(timeout=pool_timeout) [e2e-predictor] [e2e-predictor] conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] [e2e-predictor] [e2e-predictor] # Is this a closed/new connection that requires CONNECT tunnelling? 
[e2e-predictor] if self.proxy is not None and http_tunnel_required and conn.is_closed: [e2e-predictor] try: [e2e-predictor] self._prepare_proxy(conn) [e2e-predictor] except (BaseSSLError, OSError, SocketTimeout) as e: [e2e-predictor] self._raise_timeout( [e2e-predictor] err=e, url=self.proxy.url, timeout_value=conn.timeout [e2e-predictor] ) [e2e-predictor] raise [e2e-predictor] [e2e-predictor] # If we're going to release the connection in ``finally:``, then [e2e-predictor] # the response doesn't need to know about the connection. Otherwise [e2e-predictor] # it will also try to release it and we'll have a double-release [e2e-predictor] # mess. [e2e-predictor] response_conn = conn if not release_conn else None [e2e-predictor] [e2e-predictor] # Make the request on the HTTPConnection object [e2e-predictor] response = self._make_request( [e2e-predictor] conn, [e2e-predictor] method, [e2e-predictor] url, [e2e-predictor] timeout=timeout_obj, [e2e-predictor] body=body, [e2e-predictor] headers=headers, [e2e-predictor] chunked=chunked, [e2e-predictor] retries=retries, [e2e-predictor] response_conn=response_conn, [e2e-predictor] preload_content=preload_content, [e2e-predictor] decode_content=decode_content, [e2e-predictor] **response_kw, [e2e-predictor] ) [e2e-predictor] [e2e-predictor] # Everything went great! [e2e-predictor] clean_exit = True [e2e-predictor] [e2e-predictor] except EmptyPoolError: [e2e-predictor] # Didn't get a connection from the pool, no need to clean up [e2e-predictor] clean_exit = True [e2e-predictor] release_this_conn = False [e2e-predictor] raise [e2e-predictor] [e2e-predictor] except ( [e2e-predictor] TimeoutError, [e2e-predictor] HTTPException, [e2e-predictor] OSError, [e2e-predictor] ProtocolError, [e2e-predictor] BaseSSLError, [e2e-predictor] SSLError, [e2e-predictor] CertificateError, [e2e-predictor] ProxyError, [e2e-predictor] ) as e: [e2e-predictor] # Discard the connection for these exceptions. 
It will be [e2e-predictor] # replaced during the next _get_conn() call. [e2e-predictor] clean_exit = False [e2e-predictor] new_e: Exception = e [e2e-predictor] if isinstance(e, (BaseSSLError, CertificateError)): [e2e-predictor] new_e = SSLError(e) [e2e-predictor] if isinstance( [e2e-predictor] new_e, [e2e-predictor] ( [e2e-predictor] OSError, [e2e-predictor] NewConnectionError, [e2e-predictor] TimeoutError, [e2e-predictor] SSLError, [e2e-predictor] HTTPException, [e2e-predictor] ), [e2e-predictor] ) and (conn and conn.proxy and not conn.has_connected_to_proxy): [e2e-predictor] new_e = _wrap_proxy_error(new_e, conn.proxy.scheme) [e2e-predictor] elif isinstance(new_e, (OSError, HTTPException)): [e2e-predictor] new_e = ProtocolError("Connection aborted.", new_e) [e2e-predictor] [e2e-predictor] retries = retries.increment( [e2e-predictor] method, url, error=new_e, _pool=self, _stacktrace=sys.exc_info()[2] [e2e-predictor] ) [e2e-predictor] retries.sleep() [e2e-predictor] [e2e-predictor] # Keep track of the error for the retry warning. [e2e-predictor] err = e [e2e-predictor] [e2e-predictor] finally: [e2e-predictor] if not clean_exit: [e2e-predictor] # We hit some kind of exception, handled or otherwise. We need [e2e-predictor] # to throw the connection away unless explicitly told not to. [e2e-predictor] # Close the connection, set the variable to None, and make sure [e2e-predictor] # we put the None back in the pool to avoid leaking it. [e2e-predictor] if conn: [e2e-predictor] conn.close() [e2e-predictor] conn = None [e2e-predictor] release_this_conn = True [e2e-predictor] [e2e-predictor] if release_this_conn: [e2e-predictor] # Put the connection back to be reused. If the connection is [e2e-predictor] # expired then it will be None, which will get replaced with a [e2e-predictor] # fresh connection during _get_conn. 
[e2e-predictor]
                self._put_conn(conn)

        if not conn:
            # Try again
            log.warning(
                "Retrying (%r) after connection broken by '%r': %s", retries, err, url
            )
>           return self.urlopen(
                method,
                url,
                body,
                headers,
                retries,
                redirect,
                assert_same_host,
                timeout=timeout,
                pool_timeout=pool_timeout,
                release_conn=release_conn,
                chunked=chunked,
                body_pos=body_pos,
                preload_content=preload_content,
                decode_content=decode_content,
                **response_kw,
            )

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:871:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
method = 'POST'
url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io...": "10m", "memory": "256Mi"}}, "runtime": "kserve-pmmlserver", "storageUri": "gs://kfserving-examples/models/pmml"}}}}'
headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'})
retries = Retry(total=0, connect=None, read=None, redirect=None, status=None)
redirect = False, assert_same_host = False, timeout = None, pool_timeout = None
release_conn = True, chunked = False, body_pos = None, preload_content = True
decode_content = True, response_kw = {}
parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices', query=None, fragment=None)
destination_scheme = None, conn = None, release_this_conn = True
http_tunnel_required = False
err = NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.c...a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")
clean_exit = False

    def urlopen(  # type: ignore[override]
        self,
        method: str,
        url: str,
        body: _TYPE_BODY | None = None,
        headers: typing.Mapping[str, str] | None = None,
        retries: Retry | bool | int | None = None,
        redirect: bool = True,
        assert_same_host: bool = True,
        timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
        pool_timeout: int | None = None,
        release_conn: bool | None = None,
        chunked: bool = False,
        body_pos: _TYPE_BODY_POSITION | None = None,
        preload_content: bool = True,
        decode_content: bool = True,
        **response_kw: typing.Any,
    ) -> BaseHTTPResponse:
        """
        Get a connection from the pool and perform an HTTP request. This is the
        lowest level call for making a request, so you'll need to specify all
        the raw details.

        .. note::

           More commonly, it's appropriate to use a convenience method
           such as :meth:`request`.

        .. note::

           `release_conn` will only behave as expected if
           `preload_content=False` because we want to make
           `preload_content=False` the default behaviour someday soon without
           breaking backwards compatibility.

        :param method:
            HTTP request method (such as GET, POST, PUT, etc.)

        :param url:
            The URL to perform the request on.

        :param body:
            Data to send in the request body, either :class:`str`, :class:`bytes`,
            an iterable of :class:`str`/:class:`bytes`, or a file-like object.

        :param headers:
            Dictionary of custom headers to send, such as User-Agent,
            If-None-Match, etc. If None, pool headers are used. If provided,
            these headers completely replace any pool-specific headers.

        :param retries:
            Configure the number of retries to allow before raising a
            :class:`~urllib3.exceptions.MaxRetryError` exception.

            If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a
            :class:`~urllib3.util.retry.Retry` object for fine-grained control
            over different types of retries.
            Pass an integer number to retry connection errors that many times,
            but no other types of errors. Pass zero to never retry.

            If ``False``, then retries are disabled and any exception is raised
            immediately. Also, instead of raising a MaxRetryError on redirects,
            the redirect response will be returned.

        :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.

        :param redirect:
            If True, automatically handle redirects (status codes 301, 302,
            303, 307, 308). Each redirect counts as a retry. Disabling retries
            will disable redirect, too.

        :param assert_same_host:
            If ``True``, will make sure that the host of the pool requests is
            consistent else will raise HostChangedError. When ``False``, you can
            use the pool on an HTTP proxy and request foreign hosts.

        :param timeout:
            If specified, overrides the default timeout for this one
            request. It may be a float (in seconds) or an instance of
            :class:`urllib3.util.Timeout`.

        :param pool_timeout:
            If set and the pool is set to block=True, then this method will
            block for ``pool_timeout`` seconds and raise EmptyPoolError if no
            connection is available within the time period.

        :param bool preload_content:
            If True, the response's body will be preloaded into memory.

        :param bool decode_content:
            If True, will attempt to decode the body based on the
            'content-encoding' header.

        :param release_conn:
            If False, then the urlopen call will not release the connection
            back into the pool once a response is received (but will release if
            you read the entire contents of the response such as when
            `preload_content=True`). This is useful if you're not preloading
            the response's content immediately. You will need to call
            ``r.release_conn()`` on the response ``r`` to return the connection
            back into the pool. If None, it takes the value of ``preload_content``
            which defaults to ``True``.

        :param bool chunked:
            If True, urllib3 will send the body using chunked transfer
            encoding. Otherwise, urllib3 will send the body using the standard
            content-length form. Defaults to False.

        :param int body_pos:
            Position to seek to in file-like body in the event of a retry or
            redirect. Typically this won't need to be set because urllib3 will
            auto-populate the value when needed.
        """
        parsed_url = parse_url(url)
        destination_scheme = parsed_url.scheme

        if headers is None:
            headers = self.headers

        if not isinstance(retries, Retry):
            retries = Retry.from_int(retries, redirect=redirect, default=self.retries)

        if release_conn is None:
            release_conn = preload_content

        # Check host
        if assert_same_host and not self.is_same_host(url):
            raise HostChangedError(self, url, retries)

        # Ensure that the URL we're connecting to is properly encoded
        if url.startswith("/"):
            url = to_str(_encode_target(url))
        else:
            url = to_str(parsed_url.url)

        conn = None

        # Track whether `conn` needs to be released before
        # returning/raising/recursing. Update this variable if necessary, and
        # leave `release_conn` constant throughout the function. That way, if
        # the function recurses, the original value of `release_conn` will be
        # passed down into the recursive call, and its value will be respected.
        #
        # See issue #651 [1] for details.
        #
        # [1]
        release_this_conn = release_conn

        http_tunnel_required = connection_requires_http_tunnel(
            self.proxy, self.proxy_config, destination_scheme
        )

        # Merge the proxy headers. Only done when not using HTTP CONNECT. We
        # have to copy the headers dict so we can safely change it without those
        # changes being reflected in anyone else's copy.
        if not http_tunnel_required:
            headers = headers.copy()  # type: ignore[attr-defined]
            headers.update(self.proxy_headers)  # type: ignore[union-attr]

        # Must keep the exception bound to a separate variable or else Python 3
        # complains about UnboundLocalError.
        err = None

        # Keep track of whether we cleanly exited the except block. This
        # ensures we do proper cleanup in finally.
        clean_exit = False

        # Rewind body position, if needed. Record current position
        # for future rewinds in the event of a redirect/retry.
        body_pos = set_file_position(body, body_pos)

        try:
            # Request a connection from the queue.
            timeout_obj = self._get_timeout(timeout)
            conn = self._get_conn(timeout=pool_timeout)

            conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]

            # Is this a closed/new connection that requires CONNECT tunnelling?
            if self.proxy is not None and http_tunnel_required and conn.is_closed:
                try:
                    self._prepare_proxy(conn)
                except (BaseSSLError, OSError, SocketTimeout) as e:
                    self._raise_timeout(
                        err=e, url=self.proxy.url, timeout_value=conn.timeout
                    )
                    raise

            # If we're going to release the connection in ``finally:``, then
            # the response doesn't need to know about the connection. Otherwise
            # it will also try to release it and we'll have a double-release
            # mess.
            response_conn = conn if not release_conn else None

            # Make the request on the HTTPConnection object
            response = self._make_request(
                conn,
                method,
                url,
                timeout=timeout_obj,
                body=body,
                headers=headers,
                chunked=chunked,
                retries=retries,
                response_conn=response_conn,
                preload_content=preload_content,
                decode_content=decode_content,
                **response_kw,
            )

            # Everything went great!
            clean_exit = True

        except EmptyPoolError:
            # Didn't get a connection from the pool, no need to clean up
            clean_exit = True
            release_this_conn = False
            raise

        except (
            TimeoutError,
            HTTPException,
            OSError,
            ProtocolError,
            BaseSSLError,
            SSLError,
            CertificateError,
            ProxyError,
        ) as e:
            # Discard the connection for these exceptions. It will be
            # replaced during the next _get_conn() call.
            clean_exit = False
            new_e: Exception = e
            if isinstance(e, (BaseSSLError, CertificateError)):
                new_e = SSLError(e)
            if isinstance(
                new_e,
                (
                    OSError,
                    NewConnectionError,
                    TimeoutError,
                    SSLError,
                    HTTPException,
                ),
            ) and (conn and conn.proxy and not conn.has_connected_to_proxy):
                new_e = _wrap_proxy_error(new_e, conn.proxy.scheme)
            elif isinstance(new_e, (OSError, HTTPException)):
                new_e = ProtocolError("Connection aborted.", new_e)

            retries = retries.increment(
                method, url, error=new_e, _pool=self, _stacktrace=sys.exc_info()[2]
            )
            retries.sleep()

            # Keep track of the error for the retry warning.
            err = e

        finally:
            if not clean_exit:
                # We hit some kind of exception, handled or otherwise. We need
                # to throw the connection away unless explicitly told not to.
                # Close the connection, set the variable to None, and make sure
                # we put the None back in the pool to avoid leaking it.
                if conn:
                    conn.close()
                    conn = None
                release_this_conn = True

            if release_this_conn:
                # Put the connection back to be reused. If the connection is
                # expired then it will be None, which will get replaced with a
                # fresh connection during _get_conn.
                self._put_conn(conn)

        if not conn:
            # Try again
            log.warning(
                "Retrying (%r) after connection broken by '%r': %s", retries, err, url
            )
>           return self.urlopen(
                method,
                url,
                body,
                headers,
                retries,
                redirect,
                assert_same_host,
                timeout=timeout,
                pool_timeout=pool_timeout,
                release_conn=release_conn,
                chunked=chunked,
                body_pos=body_pos,
                preload_content=preload_content,
                decode_content=decode_content,
                **response_kw,
            )

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:871:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
method = 'POST'
url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io...": "10m", "memory": "256Mi"}}, "runtime": "kserve-pmmlserver", "storageUri": "gs://kfserving-examples/models/pmml"}}}}'
headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'})
retries = Retry(total=0, connect=None, read=None, redirect=None, status=None)
redirect = False, assert_same_host = False, timeout = None, pool_timeout = None
release_conn = True, chunked = False, body_pos = None, preload_content = True
decode_content = True, response_kw = {}
parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices', query=None, fragment=None)
destination_scheme = None, conn = None, release_this_conn = True
http_tunnel_required = False, err = None, clean_exit = False

    def urlopen(  # type: ignore[override]
        self,
        method: str,
        url: str,
        body: _TYPE_BODY | None = None,
        headers: typing.Mapping[str, str] | None = None,
        retries: Retry | bool | int | None = None,
        redirect: bool = True,
        assert_same_host: bool = True,
        timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
        pool_timeout: int | None = None,
        release_conn: bool | None = None,
        chunked: bool = False,
        body_pos: _TYPE_BODY_POSITION | None = None,
        preload_content: bool = True,
        decode_content: bool = True,
        **response_kw: typing.Any,
    ) -> BaseHTTPResponse:
        """
        Get a connection from the pool and perform an HTTP request. This is the
        lowest level call for making a request, so you'll need to specify all
        the raw details.

        .. note::

           More commonly, it's appropriate to use a convenience method
           such as :meth:`request`.

        .. note::

           `release_conn` will only behave as expected if
           `preload_content=False` because we want to make
           `preload_content=False` the default behaviour someday soon without
           breaking backwards compatibility.

        :param method:
            HTTP request method (such as GET, POST, PUT, etc.)

        :param url:
            The URL to perform the request on.

        :param body:
            Data to send in the request body, either :class:`str`, :class:`bytes`,
            an iterable of :class:`str`/:class:`bytes`, or a file-like object.

        :param headers:
            Dictionary of custom headers to send, such as User-Agent,
            If-None-Match, etc. If None, pool headers are used. If provided,
            these headers completely replace any pool-specific headers.

        :param retries:
            Configure the number of retries to allow before raising a
            :class:`~urllib3.exceptions.MaxRetryError` exception.

            If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a
            :class:`~urllib3.util.retry.Retry` object for fine-grained control
            over different types of retries.
            Pass an integer number to retry connection errors that many times,
            but no other types of errors. Pass zero to never retry.

            If ``False``, then retries are disabled and any exception is raised
            immediately. Also, instead of raising a MaxRetryError on redirects,
            the redirect response will be returned.

        :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.

        :param redirect:
            If True, automatically handle redirects (status codes 301, 302,
            303, 307, 308). Each redirect counts as a retry. Disabling retries
            will disable redirect, too.

        :param assert_same_host:
            If ``True``, will make sure that the host of the pool requests is
            consistent else will raise HostChangedError. When ``False``, you can
            use the pool on an HTTP proxy and request foreign hosts.

        :param timeout:
            If specified, overrides the default timeout for this one
            request. It may be a float (in seconds) or an instance of
            :class:`urllib3.util.Timeout`.

        :param pool_timeout:
            If set and the pool is set to block=True, then this method will
            block for ``pool_timeout`` seconds and raise EmptyPoolError if no
            connection is available within the time period.

        :param bool preload_content:
            If True, the response's body will be preloaded into memory.

        :param bool decode_content:
            If True, will attempt to decode the body based on the
            'content-encoding' header.

        :param release_conn:
            If False, then the urlopen call will not release the connection
            back into the pool once a response is received (but will release if
            you read the entire contents of the response such as when
            `preload_content=True`). This is useful if you're not preloading
            the response's content immediately. You will need to call
            ``r.release_conn()`` on the response ``r`` to return the connection
            back into the pool. If None, it takes the value of ``preload_content``
            which defaults to ``True``.

        :param bool chunked:
            If True, urllib3 will send the body using chunked transfer
            encoding. Otherwise, urllib3 will send the body using the standard
            content-length form. Defaults to False.

        :param int body_pos:
            Position to seek to in file-like body in the event of a retry or
            redirect. Typically this won't need to be set because urllib3 will
            auto-populate the value when needed.
        """
        parsed_url = parse_url(url)
        destination_scheme = parsed_url.scheme

        if headers is None:
            headers = self.headers

        if not isinstance(retries, Retry):
            retries = Retry.from_int(retries, redirect=redirect, default=self.retries)

        if release_conn is None:
            release_conn = preload_content

        # Check host
        if assert_same_host and not self.is_same_host(url):
            raise HostChangedError(self, url, retries)

        # Ensure that the URL we're connecting to is properly encoded
        if url.startswith("/"):
            url = to_str(_encode_target(url))
        else:
            url = to_str(parsed_url.url)

        conn = None

        # Track whether `conn` needs to be released before
        # returning/raising/recursing. Update this variable if necessary, and
        # leave `release_conn` constant throughout the function. That way, if
        # the function recurses, the original value of `release_conn` will be
        # passed down into the recursive call, and its value will be respected.
        #
        # See issue #651 [1] for details.
        #
        # [1]
        release_this_conn = release_conn

        http_tunnel_required = connection_requires_http_tunnel(
            self.proxy, self.proxy_config, destination_scheme
        )

        # Merge the proxy headers. Only done when not using HTTP CONNECT. We
        # have to copy the headers dict so we can safely change it without those
        # changes being reflected in anyone else's copy.
        if not http_tunnel_required:
            headers = headers.copy()  # type: ignore[attr-defined]
            headers.update(self.proxy_headers)  # type: ignore[union-attr]

        # Must keep the exception bound to a separate variable or else Python 3
        # complains about UnboundLocalError.
        err = None

        # Keep track of whether we cleanly exited the except block. This
        # ensures we do proper cleanup in finally.
        clean_exit = False

        # Rewind body position, if needed. Record current position
        # for future rewinds in the event of a redirect/retry.
        body_pos = set_file_position(body, body_pos)

        try:
            # Request a connection from the queue.
            timeout_obj = self._get_timeout(timeout)
            conn = self._get_conn(timeout=pool_timeout)

            conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]

            # Is this a closed/new connection that requires CONNECT tunnelling?
            if self.proxy is not None and http_tunnel_required and conn.is_closed:
                try:
                    self._prepare_proxy(conn)
                except (BaseSSLError, OSError, SocketTimeout) as e:
                    self._raise_timeout(
                        err=e, url=self.proxy.url, timeout_value=conn.timeout
                    )
                    raise

            # If we're going to release the connection in ``finally:``, then
            # the response doesn't need to know about the connection. Otherwise
            # it will also try to release it and we'll have a double-release
            # mess.
            response_conn = conn if not release_conn else None

            # Make the request on the HTTPConnection object
            response = self._make_request(
                conn,
                method,
                url,
                timeout=timeout_obj,
                body=body,
                headers=headers,
                chunked=chunked,
                retries=retries,
                response_conn=response_conn,
                preload_content=preload_content,
                decode_content=decode_content,
                **response_kw,
            )

            # Everything went great!
            clean_exit = True

        except EmptyPoolError:
            # Didn't get a connection from the pool, no need to clean up
            clean_exit = True
            release_this_conn = False
            raise

        except (
            TimeoutError,
            HTTPException,
            OSError,
            ProtocolError,
            BaseSSLError,
            SSLError,
            CertificateError,
            ProxyError,
        ) as e:
            # Discard the connection for these exceptions. It will be
            # replaced during the next _get_conn() call.
            clean_exit = False
            new_e: Exception = e
            if isinstance(e, (BaseSSLError, CertificateError)):
                new_e = SSLError(e)
            if isinstance(
                new_e,
                (
                    OSError,
                    NewConnectionError,
                    TimeoutError,
                    SSLError,
                    HTTPException,
                ),
            ) and (conn and conn.proxy and not conn.has_connected_to_proxy):
                new_e = _wrap_proxy_error(new_e, conn.proxy.scheme)
            elif isinstance(new_e, (OSError, HTTPException)):
                new_e = ProtocolError("Connection aborted.", new_e)

>           retries = retries.increment(
                method, url, error=new_e, _pool=self, _stacktrace=sys.exc_info()[2]
            )

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:841:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = Retry(total=0, connect=None, read=None, redirect=None, status=None)
method = 'POST'
url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
response = None
error = NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.c...a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")
_pool =
_stacktrace =

    def increment(
        self,
        method: str | None = None,
        url: str | None = None,
        response: BaseHTTPResponse | None = None,
        error: Exception | None = None,
        _pool: ConnectionPool | None = None,
        _stacktrace: TracebackType | None = None,
    ) -> Self:
        """Return a new Retry object with incremented retry counters.

        :param response: A response object, or None, if the server did not
            return a response.
        :type response: :class:`~urllib3.response.BaseHTTPResponse`
        :param Exception error: An error encountered during the request, or
            None if the response was received successfully.

        :return: A new ``Retry`` object.
        """
        if self.total is False and error:
            # Disabled, indicate to re-raise the error.
            raise reraise(type(error), error, _stacktrace)

        total = self.total
        if total is not None:
            total -= 1

        connect = self.connect
        read = self.read
        redirect = self.redirect
        status_count = self.status
        other = self.other
        cause = "unknown"
        status = None
        redirect_location = None

        if error and self._is_connection_error(error):
            # Connect retry?
            if connect is False:
                raise reraise(type(error), error, _stacktrace)
            elif connect is not None:
                connect -= 1

        elif error and self._is_read_error(error):
            # Read retry?
            if read is False or method is None or not self._is_method_retryable(method):
                raise reraise(type(error), error, _stacktrace)
            elif read is not None:
                read -= 1

        elif error:
            # Other retry?
            if other is not None:
                other -= 1

        elif response and response.get_redirect_location():
            # Redirect retry?
            if redirect is not None:
                redirect -= 1
            cause = "too many redirects"
            response_redirect_location = response.get_redirect_location()
            if response_redirect_location:
                redirect_location = response_redirect_location
            status = response.status

        else:
            # Incrementing because of a server error like a 500 in
            # status_forcelist and the given method is in the allowed_methods
            cause = ResponseError.GENERIC_ERROR
            if response and response.status:
                if status_count is not None:
                    status_count -= 1
                cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
                status = response.status

        history = self.history + (
            RequestHistory(method, url, error, status, redirect_location),
        )

        new_retry = self.new(
            total=total,
            connect=connect,
            read=read,
            redirect=redirect,
            status=status_count,
            other=other,
            history=history,
        )

        if new_retry.is_exhausted():
            reason = error or ResponseError(cause)
>           raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
E           urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Max retries exceeded with url: /apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices (Caused by NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)"))

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError
------------------------------ Captured log call -------------------------------
WARNING  urllib3.connectionpool:connectionpool.py:868 Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")': /apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices
WARNING  urllib3.connectionpool:connectionpool.py:868 Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")': /apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices
WARNING  urllib3.connectionpool:connectionpool.py:868 Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")': /apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices
_____________ test_event_storm_prevention_init_container_isolation _____________
[gw0] linux -- Python 3.11.13 /workspace/source/python/kserve/.venv/bin/python

self =

    def _new_conn(self) -> socket.socket:
        """Establish a socket connection and set nodelay settings on it.

        :return: New socket connection.
        """
        try:
>           sock = connection.create_connection(
                (self._dns_host, self.port),
                self.timeout,
                source_address=self.source_address,
                socket_options=self.socket_options,
            )

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connection.py:204:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

address = ('a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', 6443)
timeout = None, source_address = None, socket_options = [(6, 1, 1)]

    def create_connection(
        address: tuple[str, int],
        timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
        source_address: tuple[str, int] | None = None,
        socket_options: _TYPE_SOCKET_OPTIONS | None = None,
    ) -> socket.socket:
        """Connect to *address* and return the socket object.

        Convenience function. Connect to *address* (a 2-tuple ``(host,
        port)``) and return the socket object. Passing the optional
        *timeout* parameter will set the timeout on the socket instance
        before attempting to connect. If no *timeout* is supplied, the
        global default timeout setting returned by :func:`socket.getdefaulttimeout`
        is used. If *source_address* is set it must be a tuple of (host, port)
        for the socket to bind as a source address before making the connection.
        An host of '' or port 0 tells the OS to use the default.
        """

        host, port = address
        if host.startswith("["):
            host = host.strip("[]")
        err = None

        # Using the value from allowed_gai_family() in the context of getaddrinfo lets
        # us select whether to work with IPv4 DNS records, IPv6 records, or both.
        # The original create_connection function always returns all records.
        family = allowed_gai_family()

        try:
            host.encode("idna")
        except UnicodeError:
            raise LocationParseError(f"'{host}', label empty or too long") from None

>       for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/util/connection.py:60:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

host = 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com'
port = 6443, family =
type = , proto = 0, flags = 0

    def getaddrinfo(host, port, family=0, type=0, proto=0, flags=0):
        """Resolve host and port into list of address info entries.

        Translate the host/port argument into a sequence of 5-tuples that contain
        all the necessary arguments for creating a socket connected to that service.
[e2e-predictor] host is a domain name, a string representation of an IPv4/v6 address or [e2e-predictor] None. port is a string service name such as 'http', a numeric port number or [e2e-predictor] None. By passing None as the value of host and port, you can pass NULL to [e2e-predictor] the underlying C API. [e2e-predictor] [e2e-predictor] The family, type and proto arguments can be optionally specified in order to [e2e-predictor] narrow the list of addresses returned. Passing zero as a value for each of [e2e-predictor] these arguments selects the full range of results. [e2e-predictor] """ [e2e-predictor] # We override this function since we want to translate the numeric family [e2e-predictor] # and socket type values to enum constants. [e2e-predictor] addrlist = [] [e2e-predictor] > for res in _socket.getaddrinfo(host, port, family, type, proto, flags): [e2e-predictor] E socket.gaierror: [Errno -2] Name or service not known [e2e-predictor] [e2e-predictor] /usr/lib64/python3.11/socket.py:974: gaierror [e2e-predictor] [e2e-predictor] The above exception was the direct cause of the following exception: [e2e-predictor] [e2e-predictor] self = [e2e-predictor] method = 'POST' [e2e-predictor] url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices' [e2e-predictor] body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"name": "isvc-primary-b95658", "..., "requests": {"cpu": "50m", "memory": "128Mi"}}, "storageUri": "gs://kfserving-examples/models/sklearn/1.0/model"}}}}' [e2e-predictor] headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'}) [e2e-predictor] retries = Retry(total=0, connect=None, read=None, redirect=None, status=None) [e2e-predictor] redirect = False, assert_same_host = False, timeout = None, pool_timeout = None [e2e-predictor] release_conn = True, chunked = False, body_pos = None, preload_content = True 
[e2e-predictor] decode_content = True, response_kw = {} [e2e-predictor] parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices', query=None, fragment=None) [e2e-predictor] destination_scheme = None, conn = None, release_this_conn = True [e2e-predictor] http_tunnel_required = False, err = None, clean_exit = False [e2e-predictor] [e2e-predictor] def urlopen( # type: ignore[override] [e2e-predictor] self, [e2e-predictor] method: str, [e2e-predictor] url: str, [e2e-predictor] body: _TYPE_BODY | None = None, [e2e-predictor] headers: typing.Mapping[str, str] | None = None, [e2e-predictor] retries: Retry | bool | int | None = None, [e2e-predictor] redirect: bool = True, [e2e-predictor] assert_same_host: bool = True, [e2e-predictor] timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, [e2e-predictor] pool_timeout: int | None = None, [e2e-predictor] release_conn: bool | None = None, [e2e-predictor] chunked: bool = False, [e2e-predictor] body_pos: _TYPE_BODY_POSITION | None = None, [e2e-predictor] preload_content: bool = True, [e2e-predictor] decode_content: bool = True, [e2e-predictor] **response_kw: typing.Any, [e2e-predictor] ) -> BaseHTTPResponse: [e2e-predictor] """ [e2e-predictor] Get a connection from the pool and perform an HTTP request. This is the [e2e-predictor] lowest level call for making a request, so you'll need to specify all [e2e-predictor] the raw details. [e2e-predictor] [e2e-predictor] .. note:: [e2e-predictor] [e2e-predictor] More commonly, it's appropriate to use a convenience method [e2e-predictor] such as :meth:`request`. [e2e-predictor] [e2e-predictor] .. note:: [e2e-predictor] [e2e-predictor] `release_conn` will only behave as expected if [e2e-predictor] `preload_content=False` because we want to make [e2e-predictor] `preload_content=False` the default behaviour someday soon without [e2e-predictor] breaking backwards compatibility. 
[e2e-predictor] [e2e-predictor] :param method: [e2e-predictor] HTTP request method (such as GET, POST, PUT, etc.) [e2e-predictor] [e2e-predictor] :param url: [e2e-predictor] The URL to perform the request on. [e2e-predictor] [e2e-predictor] :param body: [e2e-predictor] Data to send in the request body, either :class:`str`, :class:`bytes`, [e2e-predictor] an iterable of :class:`str`/:class:`bytes`, or a file-like object. [e2e-predictor] [e2e-predictor] :param headers: [e2e-predictor] Dictionary of custom headers to send, such as User-Agent, [e2e-predictor] If-None-Match, etc. If None, pool headers are used. If provided, [e2e-predictor] these headers completely replace any pool-specific headers. [e2e-predictor] [e2e-predictor] :param retries: [e2e-predictor] Configure the number of retries to allow before raising a [e2e-predictor] :class:`~urllib3.exceptions.MaxRetryError` exception. [e2e-predictor] [e2e-predictor] If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a [e2e-predictor] :class:`~urllib3.util.retry.Retry` object for fine-grained control [e2e-predictor] over different types of retries. [e2e-predictor] Pass an integer number to retry connection errors that many times, [e2e-predictor] but no other types of errors. Pass zero to never retry. [e2e-predictor] [e2e-predictor] If ``False``, then retries are disabled and any exception is raised [e2e-predictor] immediately. Also, instead of raising a MaxRetryError on redirects, [e2e-predictor] the redirect response will be returned. [e2e-predictor] [e2e-predictor] :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. [e2e-predictor] [e2e-predictor] :param redirect: [e2e-predictor] If True, automatically handle redirects (status codes 301, 302, [e2e-predictor] 303, 307, 308). Each redirect counts as a retry. Disabling retries [e2e-predictor] will disable redirect, too. 
[e2e-predictor] [e2e-predictor] :param assert_same_host: [e2e-predictor] If ``True``, will make sure that the host of the pool requests is [e2e-predictor] consistent else will raise HostChangedError. When ``False``, you can [e2e-predictor] use the pool on an HTTP proxy and request foreign hosts. [e2e-predictor] [e2e-predictor] :param timeout: [e2e-predictor] If specified, overrides the default timeout for this one [e2e-predictor] request. It may be a float (in seconds) or an instance of [e2e-predictor] :class:`urllib3.util.Timeout`. [e2e-predictor] [e2e-predictor] :param pool_timeout: [e2e-predictor] If set and the pool is set to block=True, then this method will [e2e-predictor] block for ``pool_timeout`` seconds and raise EmptyPoolError if no [e2e-predictor] connection is available within the time period. [e2e-predictor] [e2e-predictor] :param bool preload_content: [e2e-predictor] If True, the response's body will be preloaded into memory. [e2e-predictor] [e2e-predictor] :param bool decode_content: [e2e-predictor] If True, will attempt to decode the body based on the [e2e-predictor] 'content-encoding' header. [e2e-predictor] [e2e-predictor] :param release_conn: [e2e-predictor] If False, then the urlopen call will not release the connection [e2e-predictor] back into the pool once a response is received (but will release if [e2e-predictor] you read the entire contents of the response such as when [e2e-predictor] `preload_content=True`). This is useful if you're not preloading [e2e-predictor] the response's content immediately. You will need to call [e2e-predictor] ``r.release_conn()`` on the response ``r`` to return the connection [e2e-predictor] back into the pool. If None, it takes the value of ``preload_content`` [e2e-predictor] which defaults to ``True``. [e2e-predictor] [e2e-predictor] :param bool chunked: [e2e-predictor] If True, urllib3 will send the body using chunked transfer [e2e-predictor] encoding. 
Otherwise, urllib3 will send the body using the standard [e2e-predictor] content-length form. Defaults to False. [e2e-predictor] [e2e-predictor] :param int body_pos: [e2e-predictor] Position to seek to in file-like body in the event of a retry or [e2e-predictor] redirect. Typically this won't need to be set because urllib3 will [e2e-predictor] auto-populate the value when needed. [e2e-predictor] """ [e2e-predictor] parsed_url = parse_url(url) [e2e-predictor] destination_scheme = parsed_url.scheme [e2e-predictor] [e2e-predictor] if headers is None: [e2e-predictor] headers = self.headers [e2e-predictor] [e2e-predictor] if not isinstance(retries, Retry): [e2e-predictor] retries = Retry.from_int(retries, redirect=redirect, default=self.retries) [e2e-predictor] [e2e-predictor] if release_conn is None: [e2e-predictor] release_conn = preload_content [e2e-predictor] [e2e-predictor] # Check host [e2e-predictor] if assert_same_host and not self.is_same_host(url): [e2e-predictor] raise HostChangedError(self, url, retries) [e2e-predictor] [e2e-predictor] # Ensure that the URL we're connecting to is properly encoded [e2e-predictor] if url.startswith("/"): [e2e-predictor] url = to_str(_encode_target(url)) [e2e-predictor] else: [e2e-predictor] url = to_str(parsed_url.url) [e2e-predictor] [e2e-predictor] conn = None [e2e-predictor] [e2e-predictor] # Track whether `conn` needs to be released before [e2e-predictor] # returning/raising/recursing. Update this variable if necessary, and [e2e-predictor] # leave `release_conn` constant throughout the function. That way, if [e2e-predictor] # the function recurses, the original value of `release_conn` will be [e2e-predictor] # passed down into the recursive call, and its value will be respected. [e2e-predictor] # [e2e-predictor] # See issue #651 [1] for details. 
[e2e-predictor] # [e2e-predictor] # [1] [e2e-predictor] release_this_conn = release_conn [e2e-predictor] [e2e-predictor] http_tunnel_required = connection_requires_http_tunnel( [e2e-predictor] self.proxy, self.proxy_config, destination_scheme [e2e-predictor] ) [e2e-predictor] [e2e-predictor] # Merge the proxy headers. Only done when not using HTTP CONNECT. We [e2e-predictor] # have to copy the headers dict so we can safely change it without those [e2e-predictor] # changes being reflected in anyone else's copy. [e2e-predictor] if not http_tunnel_required: [e2e-predictor] headers = headers.copy() # type: ignore[attr-defined] [e2e-predictor] headers.update(self.proxy_headers) # type: ignore[union-attr] [e2e-predictor] [e2e-predictor] # Must keep the exception bound to a separate variable or else Python 3 [e2e-predictor] # complains about UnboundLocalError. [e2e-predictor] err = None [e2e-predictor] [e2e-predictor] # Keep track of whether we cleanly exited the except block. This [e2e-predictor] # ensures we do proper cleanup in finally. [e2e-predictor] clean_exit = False [e2e-predictor] [e2e-predictor] # Rewind body position, if needed. Record current position [e2e-predictor] # for future rewinds in the event of a redirect/retry. [e2e-predictor] body_pos = set_file_position(body, body_pos) [e2e-predictor] [e2e-predictor] try: [e2e-predictor] # Request a connection from the queue. [e2e-predictor] timeout_obj = self._get_timeout(timeout) [e2e-predictor] conn = self._get_conn(timeout=pool_timeout) [e2e-predictor] [e2e-predictor] conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] [e2e-predictor] [e2e-predictor] # Is this a closed/new connection that requires CONNECT tunnelling? 
[e2e-predictor] if self.proxy is not None and http_tunnel_required and conn.is_closed: [e2e-predictor] try: [e2e-predictor] self._prepare_proxy(conn) [e2e-predictor] except (BaseSSLError, OSError, SocketTimeout) as e: [e2e-predictor] self._raise_timeout( [e2e-predictor] err=e, url=self.proxy.url, timeout_value=conn.timeout [e2e-predictor] ) [e2e-predictor] raise [e2e-predictor] [e2e-predictor] # If we're going to release the connection in ``finally:``, then [e2e-predictor] # the response doesn't need to know about the connection. Otherwise [e2e-predictor] # it will also try to release it and we'll have a double-release [e2e-predictor] # mess. [e2e-predictor] response_conn = conn if not release_conn else None [e2e-predictor] [e2e-predictor] # Make the request on the HTTPConnection object [e2e-predictor] > response = self._make_request( [e2e-predictor] conn, [e2e-predictor] method, [e2e-predictor] url, [e2e-predictor] timeout=timeout_obj, [e2e-predictor] body=body, [e2e-predictor] headers=headers, [e2e-predictor] chunked=chunked, [e2e-predictor] retries=retries, [e2e-predictor] response_conn=response_conn, [e2e-predictor] preload_content=preload_content, [e2e-predictor] decode_content=decode_content, [e2e-predictor] **response_kw, [e2e-predictor] ) [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:787: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] conn = [e2e-predictor] method = 'POST' [e2e-predictor] url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices' [e2e-predictor] body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"name": "isvc-primary-b95658", "..., "requests": {"cpu": "50m", "memory": "128Mi"}}, "storageUri": "gs://kfserving-examples/models/sklearn/1.0/model"}}}}' [e2e-predictor] headers = HTTPHeaderDict({'Accept': 
'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'}) [e2e-predictor] retries = Retry(total=0, connect=None, read=None, redirect=None, status=None) [e2e-predictor] timeout = Timeout(connect=None, read=None, total=None), chunked = False [e2e-predictor] response_conn = None, preload_content = True, decode_content = True [e2e-predictor] enforce_content_length = True [e2e-predictor] [e2e-predictor] def _make_request( [e2e-predictor] self, [e2e-predictor] conn: BaseHTTPConnection, [e2e-predictor] method: str, [e2e-predictor] url: str, [e2e-predictor] body: _TYPE_BODY | None = None, [e2e-predictor] headers: typing.Mapping[str, str] | None = None, [e2e-predictor] retries: Retry | None = None, [e2e-predictor] timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, [e2e-predictor] chunked: bool = False, [e2e-predictor] response_conn: BaseHTTPConnection | None = None, [e2e-predictor] preload_content: bool = True, [e2e-predictor] decode_content: bool = True, [e2e-predictor] enforce_content_length: bool = True, [e2e-predictor] ) -> BaseHTTPResponse: [e2e-predictor] """ [e2e-predictor] Perform a request on a given urllib connection object taken from our [e2e-predictor] pool. [e2e-predictor] [e2e-predictor] :param conn: [e2e-predictor] a connection from one of our connection pools [e2e-predictor] [e2e-predictor] :param method: [e2e-predictor] HTTP request method (such as GET, POST, PUT, etc.) [e2e-predictor] [e2e-predictor] :param url: [e2e-predictor] The URL to perform the request on. [e2e-predictor] [e2e-predictor] :param body: [e2e-predictor] Data to send in the request body, either :class:`str`, :class:`bytes`, [e2e-predictor] an iterable of :class:`str`/:class:`bytes`, or a file-like object. [e2e-predictor] [e2e-predictor] :param headers: [e2e-predictor] Dictionary of custom headers to send, such as User-Agent, [e2e-predictor] If-None-Match, etc. If None, pool headers are used. 
If provided, [e2e-predictor] these headers completely replace any pool-specific headers. [e2e-predictor] [e2e-predictor] :param retries: [e2e-predictor] Configure the number of retries to allow before raising a [e2e-predictor] :class:`~urllib3.exceptions.MaxRetryError` exception. [e2e-predictor] [e2e-predictor] Pass ``None`` to retry until you receive a response. Pass a [e2e-predictor] :class:`~urllib3.util.retry.Retry` object for fine-grained control [e2e-predictor] over different types of retries. [e2e-predictor] Pass an integer number to retry connection errors that many times, [e2e-predictor] but no other types of errors. Pass zero to never retry. [e2e-predictor] [e2e-predictor] If ``False``, then retries are disabled and any exception is raised [e2e-predictor] immediately. Also, instead of raising a MaxRetryError on redirects, [e2e-predictor] the redirect response will be returned. [e2e-predictor] [e2e-predictor] :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. [e2e-predictor] [e2e-predictor] :param timeout: [e2e-predictor] If specified, overrides the default timeout for this one [e2e-predictor] request. It may be a float (in seconds) or an instance of [e2e-predictor] :class:`urllib3.util.Timeout`. [e2e-predictor] [e2e-predictor] :param chunked: [e2e-predictor] If True, urllib3 will send the body using chunked transfer [e2e-predictor] encoding. Otherwise, urllib3 will send the body using the standard [e2e-predictor] content-length form. Defaults to False. [e2e-predictor] [e2e-predictor] :param response_conn: [e2e-predictor] Set this to ``None`` if you will handle releasing the connection or [e2e-predictor] set the connection to have the response release it. [e2e-predictor] [e2e-predictor] :param preload_content: [e2e-predictor] If True, the response's body will be preloaded during construction. 
[e2e-predictor] [e2e-predictor] :param decode_content: [e2e-predictor] If True, will attempt to decode the body based on the [e2e-predictor] 'content-encoding' header. [e2e-predictor] [e2e-predictor] :param enforce_content_length: [e2e-predictor] Enforce content length checking. Body returned by server must match [e2e-predictor] value of Content-Length header, if present. Otherwise, raise error. [e2e-predictor] """ [e2e-predictor] self.num_requests += 1 [e2e-predictor] [e2e-predictor] timeout_obj = self._get_timeout(timeout) [e2e-predictor] timeout_obj.start_connect() [e2e-predictor] conn.timeout = Timeout.resolve_default_timeout(timeout_obj.connect_timeout) [e2e-predictor] [e2e-predictor] try: [e2e-predictor] # Trigger any extra validation we need to do. [e2e-predictor] try: [e2e-predictor] self._validate_conn(conn) [e2e-predictor] except (SocketTimeout, BaseSSLError) as e: [e2e-predictor] self._raise_timeout(err=e, url=url, timeout_value=conn.timeout) [e2e-predictor] raise [e2e-predictor] [e2e-predictor] # _validate_conn() starts the connection to an HTTPS proxy [e2e-predictor] # so we need to wrap errors with 'ProxyError' here too. 
[e2e-predictor] except ( [e2e-predictor] OSError, [e2e-predictor] NewConnectionError, [e2e-predictor] TimeoutError, [e2e-predictor] BaseSSLError, [e2e-predictor] CertificateError, [e2e-predictor] SSLError, [e2e-predictor] ) as e: [e2e-predictor] new_e: Exception = e [e2e-predictor] if isinstance(e, (BaseSSLError, CertificateError)): [e2e-predictor] new_e = SSLError(e) [e2e-predictor] # If the connection didn't successfully connect to it's proxy [e2e-predictor] # then there [e2e-predictor] if isinstance( [e2e-predictor] new_e, (OSError, NewConnectionError, TimeoutError, SSLError) [e2e-predictor] ) and (conn and conn.proxy and not conn.has_connected_to_proxy): [e2e-predictor] new_e = _wrap_proxy_error(new_e, conn.proxy.scheme) [e2e-predictor] > raise new_e [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:488: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] conn = [e2e-predictor] method = 'POST' [e2e-predictor] url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices' [e2e-predictor] body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"name": "isvc-primary-b95658", "..., "requests": {"cpu": "50m", "memory": "128Mi"}}, "storageUri": "gs://kfserving-examples/models/sklearn/1.0/model"}}}}' [e2e-predictor] headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'}) [e2e-predictor] retries = Retry(total=0, connect=None, read=None, redirect=None, status=None) [e2e-predictor] timeout = Timeout(connect=None, read=None, total=None), chunked = False [e2e-predictor] response_conn = None, preload_content = True, decode_content = True [e2e-predictor] enforce_content_length = True [e2e-predictor] [e2e-predictor] def _make_request( [e2e-predictor] self, 
[e2e-predictor] conn: BaseHTTPConnection, [e2e-predictor] method: str, [e2e-predictor] url: str, [e2e-predictor] body: _TYPE_BODY | None = None, [e2e-predictor] headers: typing.Mapping[str, str] | None = None, [e2e-predictor] retries: Retry | None = None, [e2e-predictor] timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, [e2e-predictor] chunked: bool = False, [e2e-predictor] response_conn: BaseHTTPConnection | None = None, [e2e-predictor] preload_content: bool = True, [e2e-predictor] decode_content: bool = True, [e2e-predictor] enforce_content_length: bool = True, [e2e-predictor] ) -> BaseHTTPResponse: [e2e-predictor] """ [e2e-predictor] Perform a request on a given urllib connection object taken from our [e2e-predictor] pool. [e2e-predictor] [e2e-predictor] :param conn: [e2e-predictor] a connection from one of our connection pools [e2e-predictor] [e2e-predictor] :param method: [e2e-predictor] HTTP request method (such as GET, POST, PUT, etc.) [e2e-predictor] [e2e-predictor] :param url: [e2e-predictor] The URL to perform the request on. [e2e-predictor] [e2e-predictor] :param body: [e2e-predictor] Data to send in the request body, either :class:`str`, :class:`bytes`, [e2e-predictor] an iterable of :class:`str`/:class:`bytes`, or a file-like object. [e2e-predictor] [e2e-predictor] :param headers: [e2e-predictor] Dictionary of custom headers to send, such as User-Agent, [e2e-predictor] If-None-Match, etc. If None, pool headers are used. If provided, [e2e-predictor] these headers completely replace any pool-specific headers. [e2e-predictor] [e2e-predictor] :param retries: [e2e-predictor] Configure the number of retries to allow before raising a [e2e-predictor] :class:`~urllib3.exceptions.MaxRetryError` exception. [e2e-predictor] [e2e-predictor] Pass ``None`` to retry until you receive a response. Pass a [e2e-predictor] :class:`~urllib3.util.retry.Retry` object for fine-grained control [e2e-predictor] over different types of retries. 
[e2e-predictor] Pass an integer number to retry connection errors that many times, [e2e-predictor] but no other types of errors. Pass zero to never retry. [e2e-predictor] [e2e-predictor] If ``False``, then retries are disabled and any exception is raised [e2e-predictor] immediately. Also, instead of raising a MaxRetryError on redirects, [e2e-predictor] the redirect response will be returned. [e2e-predictor] [e2e-predictor] :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. [e2e-predictor] [e2e-predictor] :param timeout: [e2e-predictor] If specified, overrides the default timeout for this one [e2e-predictor] request. It may be a float (in seconds) or an instance of [e2e-predictor] :class:`urllib3.util.Timeout`. [e2e-predictor] [e2e-predictor] :param chunked: [e2e-predictor] If True, urllib3 will send the body using chunked transfer [e2e-predictor] encoding. Otherwise, urllib3 will send the body using the standard [e2e-predictor] content-length form. Defaults to False. [e2e-predictor] [e2e-predictor] :param response_conn: [e2e-predictor] Set this to ``None`` if you will handle releasing the connection or [e2e-predictor] set the connection to have the response release it. [e2e-predictor] [e2e-predictor] :param preload_content: [e2e-predictor] If True, the response's body will be preloaded during construction. [e2e-predictor] [e2e-predictor] :param decode_content: [e2e-predictor] If True, will attempt to decode the body based on the [e2e-predictor] 'content-encoding' header. [e2e-predictor] [e2e-predictor] :param enforce_content_length: [e2e-predictor] Enforce content length checking. Body returned by server must match [e2e-predictor] value of Content-Length header, if present. Otherwise, raise error. 
        """
        self.num_requests += 1

        timeout_obj = self._get_timeout(timeout)
        timeout_obj.start_connect()
        conn.timeout = Timeout.resolve_default_timeout(timeout_obj.connect_timeout)

        try:
            # Trigger any extra validation we need to do.
            try:
>               self._validate_conn(conn)

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:464:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = 
conn = 

    def _validate_conn(self, conn: BaseHTTPConnection) -> None:
        """
        Called right before a request is made, after the socket is created.
        """
        super()._validate_conn(conn)

        # Force connect early to allow us to validate the connection.
        if conn.is_closed:
>           conn.connect()

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:1093:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = 

    def connect(self) -> None:
        # Today we don't need to be doing this step before the /actual/ socket
        # connection, however in the future we'll need to decide whether to
        # create a new socket or re-use an existing "shared" socket as a part
        # of the HTTP/2 handshake dance.
        if self._tunnel_host is not None and self._tunnel_port is not None:
            probe_http2_host = self._tunnel_host
            probe_http2_port = self._tunnel_port
        else:
            probe_http2_host = self.host
            probe_http2_port = self.port

        # Check if the target origin supports HTTP/2.
        # If the value comes back as 'None' it means that the current thread
        # is probing for HTTP/2 support. Otherwise, we're waiting for another
        # probe to complete, or we get a value right away.
        target_supports_http2: bool | None
        if "h2" in ssl_.ALPN_PROTOCOLS:
            target_supports_http2 = http2_probe.acquire_and_get(
                host=probe_http2_host, port=probe_http2_port
            )
        else:
            # If HTTP/2 isn't going to be offered it doesn't matter if
            # the target supports HTTP/2. Don't want to make a probe.
            target_supports_http2 = False

        if self._connect_callback is not None:
            self._connect_callback(
                "before connect",
                thread_id=threading.get_ident(),
                target_supports_http2=target_supports_http2,
            )

        try:
            sock: socket.socket | ssl.SSLSocket
>           self.sock = sock = self._new_conn()

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connection.py:759:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = 

    def _new_conn(self) -> socket.socket:
        """Establish a socket connection and set nodelay settings on it.

        :return: New socket connection.
        """
        try:
            sock = connection.create_connection(
                (self._dns_host, self.port),
                self.timeout,
                source_address=self.source_address,
                socket_options=self.socket_options,
            )
        except socket.gaierror as e:
>           raise NameResolutionError(self.host, self, e) from e
E           urllib3.exceptions.NameResolutionError: HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connection.py:211: NameResolutionError

The above exception was the direct cause of the following exception:

rest_v1_client = 

    @pytest.mark.predictor
    @pytest.mark.raw
    @pytest.mark.asyncio(scope="session")
    async def test_event_storm_prevention_init_container_isolation(rest_v1_client):
        """
        Test that init container status changes on one ISVC don't cause unwanted modifications
        to unrelated ISVCs (event storm prevention).

        The controller may reconcile an ISVC for legitimate reasons (e.g.,
        HTTPRoute status updates from Istio, deployment status changes) without making any
        changes. This is acceptable. The real concern is if the secondary ISVC's events
        cause the primary ISVC to be MODIFIED (resourceVersion change).

        Test flow:
        1. Creates a "primary" ISVC that will successfully load a model from GCS
        2. Waits for the primary ISVC to become ready
        3. Records baseline resourceVersion
        4. Creates a "secondary" ISVC with invalid S3 credentials that will fail
        5. Waits for the secondary ISVC to show failure status
        6. Verifies the primary ISVC's resourceVersion is unchanged
        """
        suffix = str(uuid.uuid4())[:6]
        primary_name = f"isvc-primary-{suffix}"
        secondary_name = f"isvc-secondary-{suffix}"
        invalid_sa_name = f"invalid-s3-sa-{suffix}"
        invalid_secret_name = f"invalid-s3-secret-{suffix}"

        kserve_client = KServeClient(
            config_file=os.environ.get("KUBECONFIG", "~/.kube/config")
        )

        # Create primary ISVC with a valid GCS storage URI (no credentials needed)
        primary_predictor = V1beta1PredictorSpec(
            min_replicas=1,
            sklearn=V1beta1SKLearnSpec(
                storage_uri="gs://kfserving-examples/models/sklearn/1.0/model",
                resources=V1ResourceRequirements(
                    requests={"cpu": "50m", "memory": "128Mi"},
                    limits={"cpu": "100m", "memory": "256Mi"},
                ),
            ),
        )

        primary_isvc = V1beta1InferenceService(
            api_version=constants.KSERVE_V1BETA1,
            kind=constants.KSERVE_KIND_INFERENCESERVICE,
            metadata=client.V1ObjectMeta(
                name=primary_name, namespace=KSERVE_TEST_NAMESPACE
            ),
            spec=V1beta1InferenceServiceSpec(predictor=primary_predictor),
        )

>       with managed_isvc(kserve_client, primary_isvc):

predictor/test_pod_watch.py:343:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
_ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] [e2e-predictor] def __enter__(self): [e2e-predictor] # do not keep args and kwds alive unnecessarily [e2e-predictor] # they are only needed for recreation, which is not possible anymore [e2e-predictor] del self.args, self.kwds, self.func [e2e-predictor] try: [e2e-predictor] > return next(self.gen) [e2e-predictor] [e2e-predictor] /usr/lib64/python3.11/contextlib.py:137: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] kserve_client = [e2e-predictor] isvc = {'api_version': 'serving.kserve.io/v1beta1', [e2e-predictor] 'kind': 'InferenceService', [e2e-predictor] 'metadata': {'annotations': None, [e2e-predictor] ... 'worker_spec': None, [e2e-predictor] 'xgboost': None}, [e2e-predictor] 'transformer': None}, [e2e-predictor] 'status': None} [e2e-predictor] [e2e-predictor] @contextmanager [e2e-predictor] def managed_isvc(kserve_client: KServeClient, isvc: V1beta1InferenceService): [e2e-predictor] """ [e2e-predictor] Context manager that handles ISVC lifecycle: creation, error dumping, and cleanup. [e2e-predictor] [e2e-predictor] Usage: [e2e-predictor] with managed_isvc(kserve_client, isvc): [e2e-predictor] # ISVC is already created [e2e-predictor] # ... test logic ... 
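The root failure above is urllib3 raising `NameResolutionError` because `socket.gaierror` ([Errno -2] Name or service not known) came back for the cluster's ELB hostname. As a minimal sketch of the same check (the `host_resolves` helper is ours, not part of the test suite), the condition can be probed directly with the stdlib:

```python
import socket


def host_resolves(host: str, port: int = 443) -> bool:
    """Return True if DNS can resolve host:port.

    urllib3's HTTPConnection._new_conn wraps the socket.gaierror
    raised here into urllib3.exceptions.NameResolutionError.
    """
    try:
        socket.getaddrinfo(host, port)
        return True
    except socket.gaierror:
        return False
```

A pre-flight check like this in the test harness would distinguish "cluster API DNS record gone" (e.g., the ephemeral cluster was torn down early) from an application-level failure.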
[e2e-predictor]             # On any exception: dumps debug info for the ISVC
[e2e-predictor]             # On exit: deletes the ISVC
[e2e-predictor]         """
[e2e-predictor]         assert isvc.metadata is not None, "ISVC must have metadata"
[e2e-predictor]         assert isvc.metadata.name is not None, "ISVC must have a name"
[e2e-predictor]         assert isvc.metadata.namespace is not None, "ISVC must have a namespace"
[e2e-predictor]         name = isvc.metadata.name
[e2e-predictor]         namespace = isvc.metadata.namespace
[e2e-predictor]         error_occurred = False
[e2e-predictor]         try:
[e2e-predictor] >           kserve_client.create(isvc)
[e2e-predictor]
[e2e-predictor] predictor/test_pod_watch.py:131:
[e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
[e2e-predictor]
[e2e-predictor] self =
[e2e-predictor] inferenceservice = {'api_version': 'serving.kserve.io/v1beta1',
[e2e-predictor]  'kind': 'InferenceService',
[e2e-predictor]  'metadata': {'annotations': None,
[e2e-predictor] ...
[e2e-predictor]  'worker_spec': None,
[e2e-predictor]  'xgboost': None},
[e2e-predictor]  'transformer': None},
[e2e-predictor]  'status': None}
[e2e-predictor] namespace = 'kserve-ci-e2e-test', watch = False, timeout_seconds = 600
[e2e-predictor]
[e2e-predictor]     def create(
[e2e-predictor]         self, inferenceservice, namespace=None, watch=False, timeout_seconds=600
[e2e-predictor]     ):  # pylint:disable=inconsistent-return-statements
[e2e-predictor]         """
[e2e-predictor]         Create the inference service
[e2e-predictor]         :param inferenceservice: inference service object
[e2e-predictor]         :param namespace: defaults to current or default namespace
[e2e-predictor]         :param watch: True to watch the created service until timeout elapsed or status is ready
[e2e-predictor]         :param timeout_seconds: timeout seconds for watch, default to 600s
[e2e-predictor]         :return: created inference service
[e2e-predictor]         """
[e2e-predictor]
[e2e-predictor]         version = inferenceservice.api_version.split("/")[1]
[e2e-predictor]
[e2e-predictor]         if namespace is None:
[e2e-predictor]             namespace = utils.get_isvc_namespace(inferenceservice)
[e2e-predictor]
[e2e-predictor]         try:
[e2e-predictor] >           outputs = self.api_instance.create_namespaced_custom_object(
[e2e-predictor]                 constants.KSERVE_GROUP,
[e2e-predictor]                 version,
[e2e-predictor]                 namespace,
[e2e-predictor]                 constants.KSERVE_PLURAL_INFERENCESERVICE,
[e2e-predictor]                 inferenceservice,
[e2e-predictor]             )
[e2e-predictor]
[e2e-predictor] ../../python/kserve/kserve/api/kserve_client.py:145:
[e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
[e2e-predictor]
[e2e-predictor] self =
[e2e-predictor] group = 'serving.kserve.io', version = 'v1beta1'
[e2e-predictor] namespace = 'kserve-ci-e2e-test', plural = 'inferenceservices'
[e2e-predictor] body = {'api_version': 'serving.kserve.io/v1beta1',
[e2e-predictor]  'kind': 'InferenceService',
[e2e-predictor]  'metadata': {'annotations': None,
[e2e-predictor] ...
[e2e-predictor]  'worker_spec': None,
[e2e-predictor]  'xgboost': None},
[e2e-predictor]  'transformer': None},
[e2e-predictor]  'status': None}
[e2e-predictor] kwargs = {'_return_http_data_only': True}
[e2e-predictor]
[e2e-predictor]     def create_namespaced_custom_object(self, group, version, namespace, plural, body, **kwargs):  # noqa: E501
[e2e-predictor]         """create_namespaced_custom_object  # noqa: E501
[e2e-predictor]
[e2e-predictor]         Creates a namespace scoped Custom object  # noqa: E501
[e2e-predictor]         This method makes a synchronous HTTP request by default. To make an
[e2e-predictor]         asynchronous HTTP request, please pass async_req=True
[e2e-predictor]         >>> thread = api.create_namespaced_custom_object(group, version, namespace, plural, body, async_req=True)
[e2e-predictor]         >>> result = thread.get()
[e2e-predictor]
[e2e-predictor]         :param async_req bool: execute request asynchronously
[e2e-predictor]         :param str group: The custom resource's group name (required)
[e2e-predictor]         :param str version: The custom resource's version (required)
[e2e-predictor]         :param str namespace: The custom resource's namespace (required)
[e2e-predictor]         :param str plural: The custom resource's plural name. For TPRs this would be lowercase plural kind. (required)
[e2e-predictor]         :param object body: The JSON schema of the Resource to create. (required)
[e2e-predictor]         :param str pretty: If 'true', then the output is pretty printed.
[e2e-predictor]         :param str dry_run: When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed
[e2e-predictor]         :param str field_manager: fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint.
[e2e-predictor]         :param str field_validation: fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. (optional)
[e2e-predictor]         :param _preload_content: if False, the urllib3.HTTPResponse object will
[e2e-predictor]                                  be returned without reading/decoding response
[e2e-predictor]                                  data. Default is True.
[e2e-predictor]         :param _request_timeout: timeout setting for this request. If one
[e2e-predictor]                                  number provided, it will be total request
[e2e-predictor]                                  timeout. It can also be a pair (tuple) of
[e2e-predictor]                                  (connection, read) timeouts.
[e2e-predictor]         :return: object
[e2e-predictor]                  If the method is called asynchronously,
[e2e-predictor]                  returns the request thread.
[e2e-predictor]         """
[e2e-predictor]         kwargs['_return_http_data_only'] = True
[e2e-predictor] >       return self.create_namespaced_custom_object_with_http_info(group, version, namespace, plural, body, **kwargs)  # noqa: E501
[e2e-predictor]
[e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api/custom_objects_api.py:231:
[e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
[e2e-predictor]
[e2e-predictor] self =
[e2e-predictor] group = 'serving.kserve.io', version = 'v1beta1'
[e2e-predictor] namespace = 'kserve-ci-e2e-test', plural = 'inferenceservices'
[e2e-predictor] body = {'api_version': 'serving.kserve.io/v1beta1',
[e2e-predictor]  'kind': 'InferenceService',
[e2e-predictor]  'metadata': {'annotations': None,
[e2e-predictor] ...
[e2e-predictor]  'worker_spec': None,
[e2e-predictor]  'xgboost': None},
[e2e-predictor]  'transformer': None},
[e2e-predictor]  'status': None}
[e2e-predictor] kwargs = {'_return_http_data_only': True}
[e2e-predictor] local_var_params = {'_return_http_data_only': True, 'all_params': ['group', 'version', 'namespace', 'plural', 'body', 'pretty', ...], 'au... 'worker_spec': None,
[e2e-predictor]  'xgboost': None},
[e2e-predictor]  'transformer': None},
[e2e-predictor]  'status': None}, ...}
[e2e-predictor] all_params = ['group', 'version', 'namespace', 'plural', 'body', 'pretty', ...]
[e2e-predictor] key = '_return_http_data_only', val = True, collection_formats = {}
[e2e-predictor] path_params = {'group': 'serving.kserve.io', 'namespace': 'kserve-ci-e2e-test', 'plural': 'inferenceservices', 'version': 'v1beta1'}
[e2e-predictor] query_params = []
[e2e-predictor]
[e2e-predictor]     def create_namespaced_custom_object_with_http_info(self, group, version, namespace, plural, body, **kwargs):  # noqa: E501
[e2e-predictor]         """create_namespaced_custom_object  # noqa: E501
[e2e-predictor]
[e2e-predictor]         Creates a namespace scoped Custom object  # noqa: E501
[e2e-predictor]         This method makes a synchronous HTTP request by default. To make an
[e2e-predictor]         asynchronous HTTP request, please pass async_req=True
[e2e-predictor]         >>> thread = api.create_namespaced_custom_object_with_http_info(group, version, namespace, plural, body, async_req=True)
[e2e-predictor]         >>> result = thread.get()
[e2e-predictor]
[e2e-predictor]         :param async_req bool: execute request asynchronously
[e2e-predictor]         :param str group: The custom resource's group name (required)
[e2e-predictor]         :param str version: The custom resource's version (required)
[e2e-predictor]         :param str namespace: The custom resource's namespace (required)
[e2e-predictor]         :param str plural: The custom resource's plural name. For TPRs this would be lowercase plural kind. (required)
[e2e-predictor]         :param object body: The JSON schema of the Resource to create. (required)
[e2e-predictor]         :param str pretty: If 'true', then the output is pretty printed.
[e2e-predictor]         :param str dry_run: When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed
[e2e-predictor]         :param str field_manager: fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint.
[e2e-predictor]         :param str field_validation: fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. (optional)
[e2e-predictor]         :param _return_http_data_only: response data without head status code
[e2e-predictor]                                        and headers
[e2e-predictor]         :param _preload_content: if False, the urllib3.HTTPResponse object will
[e2e-predictor]                                  be returned without reading/decoding response
[e2e-predictor]                                  data. Default is True.
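The `managed_isvc` helper shown earlier in the traceback is the standard `@contextmanager` create/dump-on-error/always-clean-up shape. A generic sketch of that lifecycle pattern (names here are illustrative, not taken from the test suite):

```python
from contextlib import contextmanager

# Recorded lifecycle events, for demonstration only.
events = []


@contextmanager
def managed_resource(name):
    """Create on entry; dump debug info on any exception; always delete on exit."""
    events.append(("create", name))
    try:
        yield name
    except Exception:
        # Mirrors managed_isvc's "on any exception: dumps debug info" step.
        events.append(("dump-debug", name))
        raise
    finally:
        # Mirrors "on exit: deletes the ISVC" — runs on success and failure alike.
        events.append(("delete", name))
```

The `finally` clause is what guarantees cleanup even when the test body raises, which is why `managed_isvc` fails inside `__enter__` here: the `create` step itself could not reach the API server.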
[e2e-predictor]         :param _request_timeout: timeout setting for this request. If one
[e2e-predictor]                                  number provided, it will be total request
[e2e-predictor]                                  timeout. It can also be a pair (tuple) of
[e2e-predictor]                                  (connection, read) timeouts.
[e2e-predictor]         :return: tuple(object, status_code(int), headers(HTTPHeaderDict))
[e2e-predictor]                  If the method is called asynchronously,
[e2e-predictor]                  returns the request thread.
[e2e-predictor]         """
[e2e-predictor]
[e2e-predictor]         local_var_params = locals()
[e2e-predictor]
[e2e-predictor]         all_params = [
[e2e-predictor]             'group',
[e2e-predictor]             'version',
[e2e-predictor]             'namespace',
[e2e-predictor]             'plural',
[e2e-predictor]             'body',
[e2e-predictor]             'pretty',
[e2e-predictor]             'dry_run',
[e2e-predictor]             'field_manager',
[e2e-predictor]             'field_validation'
[e2e-predictor]         ]
[e2e-predictor]         all_params.extend(
[e2e-predictor]             [
[e2e-predictor]                 'async_req',
[e2e-predictor]                 '_return_http_data_only',
[e2e-predictor]                 '_preload_content',
[e2e-predictor]                 '_request_timeout'
[e2e-predictor]             ]
[e2e-predictor]         )
[e2e-predictor]
[e2e-predictor]         for key, val in six.iteritems(local_var_params['kwargs']):
[e2e-predictor]             if key not in all_params:
[e2e-predictor]                 raise ApiTypeError(
[e2e-predictor]                     "Got an unexpected keyword argument '%s'"
[e2e-predictor]                     " to method create_namespaced_custom_object" % key
[e2e-predictor]                 )
[e2e-predictor]             local_var_params[key] = val
[e2e-predictor]         del local_var_params['kwargs']
[e2e-predictor]         # verify the required parameter 'group' is set
[e2e-predictor]         if self.api_client.client_side_validation and ('group' not in local_var_params or  # noqa: E501
[e2e-predictor]                                                        local_var_params['group'] is None):  # noqa: E501
[e2e-predictor]             raise ApiValueError("Missing the required parameter `group` when calling `create_namespaced_custom_object`")  # noqa: E501
[e2e-predictor]         # verify the required parameter 'version' is set
[e2e-predictor]         if self.api_client.client_side_validation and ('version' not in local_var_params or  # noqa: E501
[e2e-predictor]                                                        local_var_params['version'] is None):  # noqa: E501
[e2e-predictor]             raise ApiValueError("Missing the required parameter `version` when calling `create_namespaced_custom_object`")  # noqa: E501
[e2e-predictor]         # verify the required parameter 'namespace' is set
[e2e-predictor]         if self.api_client.client_side_validation and ('namespace' not in local_var_params or  # noqa: E501
[e2e-predictor]                                                        local_var_params['namespace'] is None):  # noqa: E501
[e2e-predictor]             raise ApiValueError("Missing the required parameter `namespace` when calling `create_namespaced_custom_object`")  # noqa: E501
[e2e-predictor]         # verify the required parameter 'plural' is set
[e2e-predictor]         if self.api_client.client_side_validation and ('plural' not in local_var_params or  # noqa: E501
[e2e-predictor]                                                        local_var_params['plural'] is None):  # noqa: E501
[e2e-predictor]             raise ApiValueError("Missing the required parameter `plural` when calling `create_namespaced_custom_object`")  # noqa: E501
[e2e-predictor]         # verify the required parameter 'body' is set
[e2e-predictor]         if self.api_client.client_side_validation and ('body' not in local_var_params or  # noqa: E501
[e2e-predictor]                                                        local_var_params['body'] is None):  # noqa: E501
[e2e-predictor]             raise ApiValueError("Missing the required parameter `body` when calling `create_namespaced_custom_object`")  # noqa: E501
[e2e-predictor]
[e2e-predictor]         collection_formats = {}
[e2e-predictor]
[e2e-predictor]         path_params = {}
[e2e-predictor]         if 'group' in local_var_params:
[e2e-predictor]             path_params['group'] = local_var_params['group']  # noqa: E501
[e2e-predictor]         if 'version' in local_var_params:
[e2e-predictor]             path_params['version'] = local_var_params['version']  # noqa: E501
[e2e-predictor]         if 'namespace' in local_var_params:
[e2e-predictor]             path_params['namespace'] = local_var_params['namespace']  # noqa: E501
[e2e-predictor]         if 'plural' in local_var_params:
[e2e-predictor]             path_params['plural'] = local_var_params['plural']  # noqa: E501
[e2e-predictor]
[e2e-predictor]         query_params = []
[e2e-predictor]         if 'pretty' in local_var_params and local_var_params['pretty'] is not None:  # noqa: E501
[e2e-predictor]             query_params.append(('pretty', local_var_params['pretty']))  # noqa: E501
[e2e-predictor]         if 'dry_run' in local_var_params and local_var_params['dry_run'] is not None:  # noqa: E501
[e2e-predictor]             query_params.append(('dryRun', local_var_params['dry_run']))  # noqa: E501
[e2e-predictor]         if 'field_manager' in local_var_params and local_var_params['field_manager'] is not None:  # noqa: E501
[e2e-predictor]             query_params.append(('fieldManager', local_var_params['field_manager']))  # noqa: E501
[e2e-predictor]         if 'field_validation' in local_var_params and local_var_params['field_validation'] is not None:  # noqa: E501
[e2e-predictor]             query_params.append(('fieldValidation', local_var_params['field_validation']))  # noqa: E501
[e2e-predictor]
[e2e-predictor]         header_params = {}
[e2e-predictor]
[e2e-predictor]         form_params = []
[e2e-predictor]         local_var_files = {}
[e2e-predictor]
[e2e-predictor]         body_params = None
[e2e-predictor]         if 'body' in local_var_params:
[e2e-predictor]             body_params = local_var_params['body']
[e2e-predictor]         # HTTP header `Accept`
[e2e-predictor]         header_params['Accept'] = self.api_client.select_header_accept(
[e2e-predictor]             ['application/json'])  # noqa: E501
[e2e-predictor]
[e2e-predictor]         # Authentication setting
[e2e-predictor]         auth_settings = ['BearerToken']  # noqa: E501
[e2e-predictor]
[e2e-predictor] >       return self.api_client.call_api(
[e2e-predictor]             '/apis/{group}/{version}/namespaces/{namespace}/{plural}', 'POST',
[e2e-predictor]             path_params,
[e2e-predictor]             query_params,
[e2e-predictor]             header_params,
[e2e-predictor]             body=body_params,
[e2e-predictor]             post_params=form_params,
[e2e-predictor]             files=local_var_files,
[e2e-predictor]             response_type='object',  # noqa: E501
[e2e-predictor]             auth_settings=auth_settings,
[e2e-predictor]             async_req=local_var_params.get('async_req'),
[e2e-predictor]             _return_http_data_only=local_var_params.get('_return_http_data_only'),  # noqa: E501
[e2e-predictor]             _preload_content=local_var_params.get('_preload_content', True),
[e2e-predictor]             _request_timeout=local_var_params.get('_request_timeout'),
[e2e-predictor]             collection_formats=collection_formats)
[e2e-predictor]
[e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api/custom_objects_api.py:354:
[e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
[e2e-predictor]
[e2e-predictor] self =
[e2e-predictor] resource_path = '/apis/{group}/{version}/namespaces/{namespace}/{plural}'
[e2e-predictor] method = 'POST'
[e2e-predictor] path_params = {'group': 'serving.kserve.io', 'namespace': 'kserve-ci-e2e-test', 'plural': 'inferenceservices', 'version': 'v1beta1'}
[e2e-predictor] query_params = []
[e2e-predictor] header_params = {'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
[e2e-predictor] body = {'api_version': 'serving.kserve.io/v1beta1',
[e2e-predictor]  'kind': 'InferenceService',
[e2e-predictor]  'metadata': {'annotations': None,
[e2e-predictor] ...
[e2e-predictor]  'worker_spec': None,
[e2e-predictor]  'xgboost': None},
[e2e-predictor]  'transformer': None},
[e2e-predictor]  'status': None}
[e2e-predictor] post_params = [], files = {}, response_type = 'object'
[e2e-predictor] auth_settings = ['BearerToken'], async_req = None, _return_http_data_only = True
[e2e-predictor] collection_formats = {}, _preload_content = True, _request_timeout = None
[e2e-predictor] _host = None
[e2e-predictor]
[e2e-predictor]     def call_api(self, resource_path, method,
[e2e-predictor]                  path_params=None, query_params=None, header_params=None,
[e2e-predictor]                  body=None, post_params=None, files=None,
[e2e-predictor]                  response_type=None, auth_settings=None, async_req=None,
[e2e-predictor]                  _return_http_data_only=None, collection_formats=None,
[e2e-predictor]                  _preload_content=True, _request_timeout=None, _host=None):
[e2e-predictor]         """Makes the HTTP request (synchronous) and returns deserialized data.
[e2e-predictor]
[e2e-predictor]         To make an async_req request, set the async_req parameter.
[e2e-predictor]
[e2e-predictor]         :param resource_path: Path to method endpoint.
[e2e-predictor]         :param method: Method to call.
[e2e-predictor]         :param path_params: Path parameters in the url.
[e2e-predictor]         :param query_params: Query parameters in the url.
[e2e-predictor]         :param header_params: Header parameters to be
[e2e-predictor]             placed in the request header.
[e2e-predictor]         :param body: Request body.
[e2e-predictor]         :param post_params dict: Request post form parameters,
[e2e-predictor]             for `application/x-www-form-urlencoded`, `multipart/form-data`.
[e2e-predictor]         :param auth_settings list: Auth Settings names for the request.
[e2e-predictor]         :param response: Response data type.
[e2e-predictor]         :param files dict: key -> filename, value -> filepath,
[e2e-predictor]             for `multipart/form-data`.
[e2e-predictor]         :param async_req bool: execute request asynchronously
[e2e-predictor]         :param _return_http_data_only: response data without head status code
[e2e-predictor]             and headers
[e2e-predictor]         :param collection_formats: dict of collection formats for path, query,
[e2e-predictor]             header, and post parameters.
[e2e-predictor]         :param _preload_content: if False, the urllib3.HTTPResponse object will
[e2e-predictor]             be returned without reading/decoding response
[e2e-predictor]             data. Default is True.
[e2e-predictor]         :param _request_timeout: timeout setting for this request. If one
[e2e-predictor]             number provided, it will be total request
[e2e-predictor]             timeout. It can also be a pair (tuple) of
[e2e-predictor]             (connection, read) timeouts.
[e2e-predictor]         :return:
[e2e-predictor]             If async_req parameter is True,
[e2e-predictor]             the request will be called asynchronously.
[e2e-predictor]             The method will return the request thread.
[e2e-predictor]             If parameter async_req is False or missing,
[e2e-predictor]             then the method will return the response directly.
[e2e-predictor]         """
[e2e-predictor]         if not async_req:
[e2e-predictor] >           return self.__call_api(resource_path, method,
[e2e-predictor]                                    path_params, query_params, header_params,
[e2e-predictor]                                    body, post_params, files,
[e2e-predictor]                                    response_type, auth_settings,
[e2e-predictor]                                    _return_http_data_only, collection_formats,
[e2e-predictor]                                    _preload_content, _request_timeout, _host)
[e2e-predictor]
[e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api_client.py:348:
[e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
[e2e-predictor]
[e2e-predictor] self =
[e2e-predictor] resource_path = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
[e2e-predictor] method = 'POST'
[e2e-predictor] path_params = [('group', 'serving.kserve.io'), ('version', 'v1beta1'), ('namespace', 'kserve-ci-e2e-test'), ('plural', 'inferenceservices')]
[e2e-predictor] query_params = []
[e2e-predictor] header_params = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
[e2e-predictor] body = {'apiVersion': 'serving.kserve.io/v1beta1', 'kind': 'InferenceService', 'metadata': {'name': 'isvc-primary-b95658', 'n...}, 'requests': {'cpu': '50m', 'memory': '128Mi'}}, 'storageUri': 'gs://kfserving-examples/models/sklearn/1.0/model'}}}}
[e2e-predictor] post_params = [], files = {}, response_type = 'object'
[e2e-predictor] auth_settings = ['BearerToken'], _return_http_data_only = True
[e2e-predictor] collection_formats = {}, _preload_content = True, _request_timeout = None
[e2e-predictor] _host = None
[e2e-predictor]
[e2e-predictor]     def __call_api(
[e2e-predictor]             self, resource_path, method, path_params=None,
[e2e-predictor]             query_params=None, header_params=None, body=None, post_params=None,
[e2e-predictor]             files=None, response_type=None, auth_settings=None,
[e2e-predictor]             _return_http_data_only=None, collection_formats=None,
[e2e-predictor]             _preload_content=True, _request_timeout=None, _host=None):
[e2e-predictor]
[e2e-predictor]         config = self.configuration
[e2e-predictor]
[e2e-predictor]         # header parameters
[e2e-predictor]         header_params = header_params or {}
[e2e-predictor]         header_params.update(self.default_headers)
[e2e-predictor]         if self.cookie:
[e2e-predictor]             header_params['Cookie'] = self.cookie
[e2e-predictor]         if header_params:
[e2e-predictor]             header_params = self.sanitize_for_serialization(header_params)
[e2e-predictor]             header_params = dict(self.parameters_to_tuples(header_params,
[e2e-predictor]                                                            collection_formats))
[e2e-predictor]
[e2e-predictor]         # path parameters
[e2e-predictor]         if path_params:
[e2e-predictor]             path_params = self.sanitize_for_serialization(path_params)
[e2e-predictor]             path_params = self.parameters_to_tuples(path_params,
[e2e-predictor]                                                     collection_formats)
[e2e-predictor]             for k, v in path_params:
[e2e-predictor]                 # specified safe chars, encode everything
[e2e-predictor]                 resource_path = resource_path.replace(
[e2e-predictor]                     '{%s}' % k,
[e2e-predictor]                     quote(str(v), safe=config.safe_chars_for_path_param)
[e2e-predictor]                 )
[e2e-predictor]
[e2e-predictor]         # query parameters
[e2e-predictor]         if query_params:
[e2e-predictor]             query_params = self.sanitize_for_serialization(query_params)
[e2e-predictor]             query_params = self.parameters_to_tuples(query_params,
[e2e-predictor]                                                      collection_formats)
[e2e-predictor]
[e2e-predictor]         # post parameters
[e2e-predictor]         if post_params or files:
[e2e-predictor]             post_params = post_params if post_params else []
[e2e-predictor]             post_params = self.sanitize_for_serialization(post_params)
[e2e-predictor]             post_params = self.parameters_to_tuples(post_params,
[e2e-predictor]                                                     collection_formats)
[e2e-predictor]             post_params.extend(self.files_parameters(files))
[e2e-predictor]
[e2e-predictor]         # auth setting
[e2e-predictor]         self.update_params_for_auth(header_params, query_params, auth_settings)
[e2e-predictor]
[e2e-predictor]         # body
[e2e-predictor]         if body:
[e2e-predictor]             body = self.sanitize_for_serialization(body)
[e2e-predictor]
[e2e-predictor]         # request url
[e2e-predictor]         if _host is None:
[e2e-predictor]             url = self.configuration.host + resource_path
[e2e-predictor]         else:
[e2e-predictor]             # use server/host defined in path or operation instead
[e2e-predictor]             url = _host + resource_path
[e2e-predictor]
[e2e-predictor]         # perform request and return response
[e2e-predictor] >       response_data = self.request(
[e2e-predictor]             method, url, query_params=query_params, headers=header_params,
[e2e-predictor]             post_params=post_params, body=body,
[e2e-predictor]             _preload_content=_preload_content,
[e2e-predictor]             _request_timeout=_request_timeout)
[e2e-predictor]
[e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api_client.py:180:
[e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
[e2e-predictor]
[e2e-predictor] self =
[e2e-predictor] method = 'POST'
[e2e-predictor] url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
[e2e-predictor] query_params = []
[e2e-predictor] headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
[e2e-predictor] post_params = []
[e2e-predictor] body = {'apiVersion': 'serving.kserve.io/v1beta1', 'kind': 'InferenceService', 'metadata': {'name': 'isvc-primary-b95658', 'n...}, 'requests': {'cpu': '50m', 'memory': '128Mi'}}, 'storageUri': 'gs://kfserving-examples/models/sklearn/1.0/model'}}}}
[e2e-predictor] _preload_content = True, _request_timeout = None
[e2e-predictor]
[e2e-predictor]     def request(self, method, url, query_params=None, headers=None,
[e2e-predictor]                 post_params=None, body=None, _preload_content=True,
[e2e-predictor]                 _request_timeout=None):
[e2e-predictor]         """Makes the HTTP request using RESTClient."""
[e2e-predictor]         if method == "GET":
[e2e-predictor]             return self.rest_client.GET(url,
[e2e-predictor]                                         query_params=query_params,
[e2e-predictor]                                         _preload_content=_preload_content,
[e2e-predictor]                                         _request_timeout=_request_timeout,
[e2e-predictor]                                         headers=headers)
[e2e-predictor]         elif method == "HEAD":
[e2e-predictor]             return self.rest_client.HEAD(url,
[e2e-predictor]                                          query_params=query_params,
[e2e-predictor]                                          _preload_content=_preload_content,
[e2e-predictor]                                          _request_timeout=_request_timeout,
[e2e-predictor]                                          headers=headers)
[e2e-predictor]         elif method == "OPTIONS":
[e2e-predictor]             return self.rest_client.OPTIONS(url,
[e2e-predictor]                                             query_params=query_params,
[e2e-predictor]                                             headers=headers,
[e2e-predictor]                                             _preload_content=_preload_content,
[e2e-predictor]                                             _request_timeout=_request_timeout)
[e2e-predictor]         elif method == "POST":
[e2e-predictor] >           return self.rest_client.POST(url,
[e2e-predictor]                                          query_params=query_params,
[e2e-predictor]                                          headers=headers,
[e2e-predictor]                                          post_params=post_params,
[e2e-predictor]                                          _preload_content=_preload_content,
[e2e-predictor]                                          _request_timeout=_request_timeout,
[e2e-predictor]
[e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api_client.py:391:
[e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
[e2e-predictor]
[e2e-predictor] self =
[e2e-predictor] url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
[e2e-predictor] headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
[e2e-predictor] query_params = [], post_params = []
[e2e-predictor] body = {'apiVersion': 'serving.kserve.io/v1beta1', 'kind': 'InferenceService', 'metadata': {'name': 'isvc-primary-b95658', 'n...}, 'requests': {'cpu': '50m', 'memory': '128Mi'}}, 'storageUri': 'gs://kfserving-examples/models/sklearn/1.0/model'}}}}
[e2e-predictor] _preload_content = True, _request_timeout = None
[e2e-predictor]
[e2e-predictor]     def POST(self, url, headers=None, query_params=None, post_params=None,
[e2e-predictor]              body=None, _preload_content=True, _request_timeout=None):
[e2e-predictor] >       return self.request("POST", url,
[e2e-predictor]                             headers=headers,
[e2e-predictor]                             query_params=query_params,
[e2e-predictor]                             post_params=post_params,
[e2e-predictor]                             _preload_content=_preload_content,
[e2e-predictor]                             _request_timeout=_request_timeout,
[e2e-predictor]                             body=body)
[e2e-predictor]
[e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/rest.py:279:
[e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
[e2e-predictor]
[e2e-predictor] self =
[e2e-predictor] method = 'POST'
[e2e-predictor] url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
[e2e-predictor] query_params = []
[e2e-predictor] headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
[e2e-predictor] body = {'apiVersion': 'serving.kserve.io/v1beta1', 'kind': 'InferenceService', 'metadata': {'name': 'isvc-primary-b95658', 'n...}, 'requests': {'cpu': '50m', 'memory': '128Mi'}}, 'storageUri': 'gs://kfserving-examples/models/sklearn/1.0/model'}}}}
[e2e-predictor] post_params = {}, _preload_content = True, _request_timeout = None
[e2e-predictor]
[e2e-predictor]     def request(self, method, url, query_params=None, headers=None,
[e2e-predictor]                 body=None, post_params=None, _preload_content=True,
[e2e-predictor]                 _request_timeout=None):
[e2e-predictor]         """Perform requests.
[e2e-predictor]
[e2e-predictor]         :param method: http request method
[e2e-predictor]         :param url: http request url
[e2e-predictor]         :param query_params: query parameters in the url
[e2e-predictor]         :param headers: http request headers
[e2e-predictor]         :param body: request json body, for `application/json`
[e2e-predictor]         :param post_params: request post parameters,
[e2e-predictor]                             `application/x-www-form-urlencoded`
[e2e-predictor]                             and `multipart/form-data`
[e2e-predictor]         :param _preload_content: if False, the urllib3.HTTPResponse object will
[e2e-predictor]                                  be returned without reading/decoding response
[e2e-predictor]                                  data. Default is True.
[e2e-predictor]         :param _request_timeout: timeout setting for this request. If one
[e2e-predictor]                                  number provided, it will be total request
[e2e-predictor]                                  timeout. It can also be a pair (tuple) of
[e2e-predictor]                                  (connection, read) timeouts.
[e2e-predictor]         """
[e2e-predictor]         method = method.upper()
[e2e-predictor]         assert method in ['GET', 'HEAD', 'DELETE', 'POST', 'PUT',
[e2e-predictor]                           'PATCH', 'OPTIONS']
[e2e-predictor]
[e2e-predictor]         if post_params and body:
[e2e-predictor]             raise ApiValueError(
[e2e-predictor]                 "body parameter cannot be used with post_params parameter."
[e2e-predictor] ) [e2e-predictor] [e2e-predictor] post_params = post_params or {} [e2e-predictor] headers = headers or {} [e2e-predictor] [e2e-predictor] timeout = None [e2e-predictor] if _request_timeout: [e2e-predictor] if isinstance(_request_timeout, (int, ) if six.PY3 else (int, long)): # noqa: E501,F821 [e2e-predictor] timeout = urllib3.Timeout(total=_request_timeout) [e2e-predictor] elif (isinstance(_request_timeout, tuple) and [e2e-predictor] len(_request_timeout) == 2): [e2e-predictor] timeout = urllib3.Timeout( [e2e-predictor] connect=_request_timeout[0], read=_request_timeout[1]) [e2e-predictor] [e2e-predictor] if 'Content-Type' not in headers: [e2e-predictor] headers['Content-Type'] = 'application/json' [e2e-predictor] [e2e-predictor] try: [e2e-predictor] # For `POST`, `PUT`, `PATCH`, `OPTIONS`, `DELETE` [e2e-predictor] if method in ['POST', 'PUT', 'PATCH', 'OPTIONS', 'DELETE']: [e2e-predictor] if query_params: [e2e-predictor] url += '?' + urlencode(query_params) [e2e-predictor] if (re.search('json', headers['Content-Type'], re.IGNORECASE) or [e2e-predictor] headers['Content-Type'] == 'application/apply-patch+yaml'): [e2e-predictor] if headers['Content-Type'] == 'application/json-patch+json': [e2e-predictor] if not isinstance(body, list): [e2e-predictor] headers['Content-Type'] = \ [e2e-predictor] 'application/strategic-merge-patch+json' [e2e-predictor] request_body = None [e2e-predictor] if body is not None: [e2e-predictor] request_body = json.dumps(body) [e2e-predictor] > r = self.pool_manager.request( [e2e-predictor] method, url, [e2e-predictor] body=request_body, [e2e-predictor] preload_content=_preload_content, [e2e-predictor] timeout=timeout, [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/rest.py:172: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] method = 'POST' [e2e-predictor] url = 
'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices' [e2e-predictor] body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"name": "isvc-primary-b95658", "..., "requests": {"cpu": "50m", "memory": "128Mi"}}, "storageUri": "gs://kfserving-examples/models/sklearn/1.0/model"}}}}' [e2e-predictor] fields = None [e2e-predictor] headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'} [e2e-predictor] json = None [e2e-predictor] urlopen_kw = {'body': '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"name": "isvc-primary-b...Mi"}}, "storageUri": "gs://kfserving-examples/models/sklearn/1.0/model"}}}}', 'preload_content': True, 'timeout': None} [e2e-predictor] [e2e-predictor] def request( [e2e-predictor] self, [e2e-predictor] method: str, [e2e-predictor] url: str, [e2e-predictor] body: _TYPE_BODY | None = None, [e2e-predictor] fields: _TYPE_FIELDS | None = None, [e2e-predictor] headers: typing.Mapping[str, str] | None = None, [e2e-predictor] json: typing.Any | None = None, [e2e-predictor] **urlopen_kw: typing.Any, [e2e-predictor] ) -> BaseHTTPResponse: [e2e-predictor] """ [e2e-predictor] Make a request using :meth:`urlopen` with the appropriate encoding of [e2e-predictor] ``fields`` based on the ``method`` used. [e2e-predictor] [e2e-predictor] This is a convenience method that requires the least amount of manual [e2e-predictor] effort. It can be used in most situations, while still having the [e2e-predictor] option to drop down to more specific methods when necessary, such as [e2e-predictor] :meth:`request_encode_url`, :meth:`request_encode_body`, [e2e-predictor] or even the lowest level :meth:`urlopen`. [e2e-predictor] [e2e-predictor] :param method: [e2e-predictor] HTTP request method (such as GET, POST, PUT, etc.) 
[e2e-predictor] [e2e-predictor] :param url: [e2e-predictor] The URL to perform the request on. [e2e-predictor] [e2e-predictor] :param body: [e2e-predictor] Data to send in the request body, either :class:`str`, :class:`bytes`, [e2e-predictor] an iterable of :class:`str`/:class:`bytes`, or a file-like object. [e2e-predictor] [e2e-predictor] :param fields: [e2e-predictor] Data to encode and send in the URL or request body, depending on ``method``. [e2e-predictor] [e2e-predictor] :param headers: [e2e-predictor] Dictionary of custom headers to send, such as User-Agent, [e2e-predictor] If-None-Match, etc. If None, pool headers are used. If provided, [e2e-predictor] these headers completely replace any pool-specific headers. [e2e-predictor] [e2e-predictor] :param json: [e2e-predictor] Data to encode and send as JSON with UTF-encoded in the request body. [e2e-predictor] The ``"Content-Type"`` header will be set to ``"application/json"`` [e2e-predictor] unless specified otherwise. [e2e-predictor] """ [e2e-predictor] method = method.upper() [e2e-predictor] [e2e-predictor] if json is not None and body is not None: [e2e-predictor] raise TypeError( [e2e-predictor] "request got values for both 'body' and 'json' parameters which are mutually exclusive" [e2e-predictor] ) [e2e-predictor] [e2e-predictor] if json is not None: [e2e-predictor] if headers is None: [e2e-predictor] headers = self.headers [e2e-predictor] [e2e-predictor] if not ("content-type" in map(str.lower, headers.keys())): [e2e-predictor] headers = HTTPHeaderDict(headers) [e2e-predictor] headers["Content-Type"] = "application/json" [e2e-predictor] [e2e-predictor] body = _json.dumps(json, separators=(",", ":"), ensure_ascii=False).encode( [e2e-predictor] "utf-8" [e2e-predictor] ) [e2e-predictor] [e2e-predictor] if body is not None: [e2e-predictor] urlopen_kw["body"] = body [e2e-predictor] [e2e-predictor] if method in self._encode_url_methods: [e2e-predictor] return self.request_encode_url( [e2e-predictor] method, 
[e2e-predictor] url, [e2e-predictor] fields=fields, # type: ignore[arg-type] [e2e-predictor] headers=headers, [e2e-predictor] **urlopen_kw, [e2e-predictor] ) [e2e-predictor] else: [e2e-predictor] > return self.request_encode_body( [e2e-predictor] method, url, fields=fields, headers=headers, **urlopen_kw [e2e-predictor] ) [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/_request_methods.py:143: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] method = 'POST' [e2e-predictor] url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices' [e2e-predictor] fields = None [e2e-predictor] headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'} [e2e-predictor] encode_multipart = True, multipart_boundary = None [e2e-predictor] urlopen_kw = {'body': '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"name": "isvc-primary-b...Mi"}}, "storageUri": "gs://kfserving-examples/models/sklearn/1.0/model"}}}}', 'preload_content': True, 'timeout': None} [e2e-predictor] extra_kw = {'body': '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"name": "isvc-primary-b...nt': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'}), 'preload_content': True, 'timeout': None} [e2e-predictor] [e2e-predictor] def request_encode_body( [e2e-predictor] self, [e2e-predictor] method: str, [e2e-predictor] url: str, [e2e-predictor] fields: _TYPE_FIELDS | None = None, [e2e-predictor] headers: typing.Mapping[str, str] | None = None, [e2e-predictor] encode_multipart: bool = True, [e2e-predictor] multipart_boundary: str | None = None, [e2e-predictor] **urlopen_kw: str, [e2e-predictor] ) -> 
BaseHTTPResponse: [e2e-predictor] """ [e2e-predictor] Make a request using :meth:`urlopen` with the ``fields`` encoded in [e2e-predictor] the body. This is useful for request methods like POST, PUT, PATCH, etc. [e2e-predictor] [e2e-predictor] When ``encode_multipart=True`` (default), then [e2e-predictor] :func:`urllib3.encode_multipart_formdata` is used to encode [e2e-predictor] the payload with the appropriate content type. Otherwise [e2e-predictor] :func:`urllib.parse.urlencode` is used with the [e2e-predictor] 'application/x-www-form-urlencoded' content type. [e2e-predictor] [e2e-predictor] Multipart encoding must be used when posting files, and it's reasonably [e2e-predictor] safe to use it in other times too. However, it may break request [e2e-predictor] signing, such as with OAuth. [e2e-predictor] [e2e-predictor] Supports an optional ``fields`` parameter of key/value strings AND [e2e-predictor] key/filetuple. A filetuple is a (filename, data, MIME type) tuple where [e2e-predictor] the MIME type is optional. For example:: [e2e-predictor] [e2e-predictor] fields = { [e2e-predictor] 'foo': 'bar', [e2e-predictor] 'fakefile': ('foofile.txt', 'contents of foofile'), [e2e-predictor] 'realfile': ('barfile.txt', open('realfile').read()), [e2e-predictor] 'typedfile': ('bazfile.bin', open('bazfile').read(), [e2e-predictor] 'image/jpeg'), [e2e-predictor] 'nonamefile': 'contents of nonamefile field', [e2e-predictor] } [e2e-predictor] [e2e-predictor] When uploading a file, providing a filename (the first parameter of the [e2e-predictor] tuple) is optional but recommended to best mimic behavior of browsers. [e2e-predictor] [e2e-predictor] Note that if ``headers`` are supplied, the 'Content-Type' header will [e2e-predictor] be overwritten because it depends on the dynamic random boundary string [e2e-predictor] which is used to compose the body of the request. The random boundary [e2e-predictor] string can be explicitly set with the ``multipart_boundary`` parameter. 
[e2e-predictor] [e2e-predictor] :param method: [e2e-predictor] HTTP request method (such as GET, POST, PUT, etc.) [e2e-predictor] [e2e-predictor] :param url: [e2e-predictor] The URL to perform the request on. [e2e-predictor] [e2e-predictor] :param fields: [e2e-predictor] Data to encode and send in the request body. [e2e-predictor] [e2e-predictor] :param headers: [e2e-predictor] Dictionary of custom headers to send, such as User-Agent, [e2e-predictor] If-None-Match, etc. If None, pool headers are used. If provided, [e2e-predictor] these headers completely replace any pool-specific headers. [e2e-predictor] [e2e-predictor] :param encode_multipart: [e2e-predictor] If True, encode the ``fields`` using the multipart/form-data MIME [e2e-predictor] format. [e2e-predictor] [e2e-predictor] :param multipart_boundary: [e2e-predictor] If not specified, then a random boundary will be generated using [e2e-predictor] :func:`urllib3.filepost.choose_boundary`. [e2e-predictor] """ [e2e-predictor] if headers is None: [e2e-predictor] headers = self.headers [e2e-predictor] [e2e-predictor] extra_kw: dict[str, typing.Any] = {"headers": HTTPHeaderDict(headers)} [e2e-predictor] body: bytes | str [e2e-predictor] [e2e-predictor] if fields: [e2e-predictor] if "body" in urlopen_kw: [e2e-predictor] raise TypeError( [e2e-predictor] "request got values for both 'fields' and 'body', can only specify one." 
[e2e-predictor] ) [e2e-predictor] [e2e-predictor] if encode_multipart: [e2e-predictor] body, content_type = encode_multipart_formdata( [e2e-predictor] fields, boundary=multipart_boundary [e2e-predictor] ) [e2e-predictor] else: [e2e-predictor] body, content_type = ( [e2e-predictor] urlencode(fields), # type: ignore[arg-type] [e2e-predictor] "application/x-www-form-urlencoded", [e2e-predictor] ) [e2e-predictor] [e2e-predictor] extra_kw["body"] = body [e2e-predictor] extra_kw["headers"].setdefault("Content-Type", content_type) [e2e-predictor] [e2e-predictor] extra_kw.update(urlopen_kw) [e2e-predictor] [e2e-predictor] > return self.urlopen(method, url, **extra_kw) [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/_request_methods.py:278: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] method = 'POST' [e2e-predictor] url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices' [e2e-predictor] redirect = True [e2e-predictor] kw = {'assert_same_host': False, 'body': '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata...', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'}), 'preload_content': True, ...} [e2e-predictor] u = Url(scheme='https', auth=None, host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', p...443, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices', query=None, fragment=None) [e2e-predictor] conn = [e2e-predictor] [e2e-predictor] def urlopen( # type: ignore[override] [e2e-predictor] self, method: str, url: str, redirect: bool = True, **kw: typing.Any [e2e-predictor] ) -> BaseHTTPResponse: [e2e-predictor] """ [e2e-predictor] Same as 
:meth:`urllib3.HTTPConnectionPool.urlopen` [e2e-predictor] with custom cross-host redirect logic and only sends the request-uri [e2e-predictor] portion of the ``url``. [e2e-predictor] [e2e-predictor] The given ``url`` parameter must be absolute, such that an appropriate [e2e-predictor] :class:`urllib3.connectionpool.ConnectionPool` can be chosen for it. [e2e-predictor] """ [e2e-predictor] u = parse_url(url) [e2e-predictor] [e2e-predictor] if u.scheme is None: [e2e-predictor] warnings.warn( [e2e-predictor] "URLs without a scheme (ie 'https://') are deprecated and will raise an error " [e2e-predictor] "in a future version of urllib3. To avoid this DeprecationWarning ensure all URLs " [e2e-predictor] "start with 'https://' or 'http://'. Read more in this issue: " [e2e-predictor] "https://github.com/urllib3/urllib3/issues/2920", [e2e-predictor] category=DeprecationWarning, [e2e-predictor] stacklevel=2, [e2e-predictor] ) [e2e-predictor] [e2e-predictor] conn = self.connection_from_host(u.host, port=u.port, scheme=u.scheme) [e2e-predictor] [e2e-predictor] kw["assert_same_host"] = False [e2e-predictor] kw["redirect"] = False [e2e-predictor] [e2e-predictor] if "headers" not in kw: [e2e-predictor] kw["headers"] = self.headers [e2e-predictor] [e2e-predictor] if self._proxy_requires_url_absolute_form(u): [e2e-predictor] response = conn.urlopen(method, url, **kw) [e2e-predictor] else: [e2e-predictor] > response = conn.urlopen(method, u.request_uri, **kw) [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/poolmanager.py:457: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] method = 'POST' [e2e-predictor] url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices' [e2e-predictor] body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"name": "isvc-primary-b95658", "..., 
"requests": {"cpu": "50m", "memory": "128Mi"}}, "storageUri": "gs://kfserving-examples/models/sklearn/1.0/model"}}}}' [e2e-predictor] headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'}) [e2e-predictor] retries = Retry(total=2, connect=None, read=None, redirect=None, status=None) [e2e-predictor] redirect = False, assert_same_host = False, timeout = None, pool_timeout = None [e2e-predictor] release_conn = True, chunked = False, body_pos = None, preload_content = True [e2e-predictor] decode_content = True, response_kw = {} [e2e-predictor] parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices', query=None, fragment=None) [e2e-predictor] destination_scheme = None, conn = None, release_this_conn = True [e2e-predictor] http_tunnel_required = False [e2e-predictor] err = NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.c...a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)") [e2e-predictor] clean_exit = False [e2e-predictor] [e2e-predictor] def urlopen( # type: ignore[override] [e2e-predictor] self, [e2e-predictor] method: str, [e2e-predictor] url: str, [e2e-predictor] body: _TYPE_BODY | None = None, [e2e-predictor] headers: typing.Mapping[str, str] | None = None, [e2e-predictor] retries: Retry | bool | int | None = None, [e2e-predictor] redirect: bool = True, [e2e-predictor] assert_same_host: bool = True, [e2e-predictor] timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, [e2e-predictor] pool_timeout: int | None = None, [e2e-predictor] release_conn: bool | None = None, [e2e-predictor] chunked: bool = False, [e2e-predictor] body_pos: _TYPE_BODY_POSITION | None = None, [e2e-predictor] preload_content: bool = True, [e2e-predictor] decode_content: bool = True, 
[e2e-predictor] **response_kw: typing.Any, [e2e-predictor] ) -> BaseHTTPResponse: [e2e-predictor] """ [e2e-predictor] Get a connection from the pool and perform an HTTP request. This is the [e2e-predictor] lowest level call for making a request, so you'll need to specify all [e2e-predictor] the raw details. [e2e-predictor] [e2e-predictor] .. note:: [e2e-predictor] [e2e-predictor] More commonly, it's appropriate to use a convenience method [e2e-predictor] such as :meth:`request`. [e2e-predictor] [e2e-predictor] .. note:: [e2e-predictor] [e2e-predictor] `release_conn` will only behave as expected if [e2e-predictor] `preload_content=False` because we want to make [e2e-predictor] `preload_content=False` the default behaviour someday soon without [e2e-predictor] breaking backwards compatibility. [e2e-predictor] [e2e-predictor] :param method: [e2e-predictor] HTTP request method (such as GET, POST, PUT, etc.) [e2e-predictor] [e2e-predictor] :param url: [e2e-predictor] The URL to perform the request on. [e2e-predictor] [e2e-predictor] :param body: [e2e-predictor] Data to send in the request body, either :class:`str`, :class:`bytes`, [e2e-predictor] an iterable of :class:`str`/:class:`bytes`, or a file-like object. [e2e-predictor] [e2e-predictor] :param headers: [e2e-predictor] Dictionary of custom headers to send, such as User-Agent, [e2e-predictor] If-None-Match, etc. If None, pool headers are used. If provided, [e2e-predictor] these headers completely replace any pool-specific headers. [e2e-predictor] [e2e-predictor] :param retries: [e2e-predictor] Configure the number of retries to allow before raising a [e2e-predictor] :class:`~urllib3.exceptions.MaxRetryError` exception. [e2e-predictor] [e2e-predictor] If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a [e2e-predictor] :class:`~urllib3.util.retry.Retry` object for fine-grained control [e2e-predictor] over different types of retries. 
[e2e-predictor] Pass an integer number to retry connection errors that many times, [e2e-predictor] but no other types of errors. Pass zero to never retry. [e2e-predictor] [e2e-predictor] If ``False``, then retries are disabled and any exception is raised [e2e-predictor] immediately. Also, instead of raising a MaxRetryError on redirects, [e2e-predictor] the redirect response will be returned. [e2e-predictor] [e2e-predictor] :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. [e2e-predictor] [e2e-predictor] :param redirect: [e2e-predictor] If True, automatically handle redirects (status codes 301, 302, [e2e-predictor] 303, 307, 308). Each redirect counts as a retry. Disabling retries [e2e-predictor] will disable redirect, too. [e2e-predictor] [e2e-predictor] :param assert_same_host: [e2e-predictor] If ``True``, will make sure that the host of the pool requests is [e2e-predictor] consistent else will raise HostChangedError. When ``False``, you can [e2e-predictor] use the pool on an HTTP proxy and request foreign hosts. [e2e-predictor] [e2e-predictor] :param timeout: [e2e-predictor] If specified, overrides the default timeout for this one [e2e-predictor] request. It may be a float (in seconds) or an instance of [e2e-predictor] :class:`urllib3.util.Timeout`. [e2e-predictor] [e2e-predictor] :param pool_timeout: [e2e-predictor] If set and the pool is set to block=True, then this method will [e2e-predictor] block for ``pool_timeout`` seconds and raise EmptyPoolError if no [e2e-predictor] connection is available within the time period. [e2e-predictor] [e2e-predictor] :param bool preload_content: [e2e-predictor] If True, the response's body will be preloaded into memory. [e2e-predictor] [e2e-predictor] :param bool decode_content: [e2e-predictor] If True, will attempt to decode the body based on the [e2e-predictor] 'content-encoding' header. 
[e2e-predictor] [e2e-predictor] :param release_conn: [e2e-predictor] If False, then the urlopen call will not release the connection [e2e-predictor] back into the pool once a response is received (but will release if [e2e-predictor] you read the entire contents of the response such as when [e2e-predictor] `preload_content=True`). This is useful if you're not preloading [e2e-predictor] the response's content immediately. You will need to call [e2e-predictor] ``r.release_conn()`` on the response ``r`` to return the connection [e2e-predictor] back into the pool. If None, it takes the value of ``preload_content`` [e2e-predictor] which defaults to ``True``. [e2e-predictor] [e2e-predictor] :param bool chunked: [e2e-predictor] If True, urllib3 will send the body using chunked transfer [e2e-predictor] encoding. Otherwise, urllib3 will send the body using the standard [e2e-predictor] content-length form. Defaults to False. [e2e-predictor] [e2e-predictor] :param int body_pos: [e2e-predictor] Position to seek to in file-like body in the event of a retry or [e2e-predictor] redirect. Typically this won't need to be set because urllib3 will [e2e-predictor] auto-populate the value when needed. 
[e2e-predictor] """ [e2e-predictor] parsed_url = parse_url(url) [e2e-predictor] destination_scheme = parsed_url.scheme [e2e-predictor] [e2e-predictor] if headers is None: [e2e-predictor] headers = self.headers [e2e-predictor] [e2e-predictor] if not isinstance(retries, Retry): [e2e-predictor] retries = Retry.from_int(retries, redirect=redirect, default=self.retries) [e2e-predictor] [e2e-predictor] if release_conn is None: [e2e-predictor] release_conn = preload_content [e2e-predictor] [e2e-predictor] # Check host [e2e-predictor] if assert_same_host and not self.is_same_host(url): [e2e-predictor] raise HostChangedError(self, url, retries) [e2e-predictor] [e2e-predictor] # Ensure that the URL we're connecting to is properly encoded [e2e-predictor] if url.startswith("/"): [e2e-predictor] url = to_str(_encode_target(url)) [e2e-predictor] else: [e2e-predictor] url = to_str(parsed_url.url) [e2e-predictor] [e2e-predictor] conn = None [e2e-predictor] [e2e-predictor] # Track whether `conn` needs to be released before [e2e-predictor] # returning/raising/recursing. Update this variable if necessary, and [e2e-predictor] # leave `release_conn` constant throughout the function. That way, if [e2e-predictor] # the function recurses, the original value of `release_conn` will be [e2e-predictor] # passed down into the recursive call, and its value will be respected. [e2e-predictor] # [e2e-predictor] # See issue #651 [1] for details. [e2e-predictor] # [e2e-predictor] # [1] [e2e-predictor] release_this_conn = release_conn [e2e-predictor] [e2e-predictor] http_tunnel_required = connection_requires_http_tunnel( [e2e-predictor] self.proxy, self.proxy_config, destination_scheme [e2e-predictor] ) [e2e-predictor] [e2e-predictor] # Merge the proxy headers. Only done when not using HTTP CONNECT. We [e2e-predictor] # have to copy the headers dict so we can safely change it without those [e2e-predictor] # changes being reflected in anyone else's copy. 
[e2e-predictor] if not http_tunnel_required: [e2e-predictor] headers = headers.copy() # type: ignore[attr-defined] [e2e-predictor] headers.update(self.proxy_headers) # type: ignore[union-attr] [e2e-predictor] [e2e-predictor] # Must keep the exception bound to a separate variable or else Python 3 [e2e-predictor] # complains about UnboundLocalError. [e2e-predictor] err = None [e2e-predictor] [e2e-predictor] # Keep track of whether we cleanly exited the except block. This [e2e-predictor] # ensures we do proper cleanup in finally. [e2e-predictor] clean_exit = False [e2e-predictor] [e2e-predictor] # Rewind body position, if needed. Record current position [e2e-predictor] # for future rewinds in the event of a redirect/retry. [e2e-predictor] body_pos = set_file_position(body, body_pos) [e2e-predictor] [e2e-predictor] try: [e2e-predictor] # Request a connection from the queue. [e2e-predictor] timeout_obj = self._get_timeout(timeout) [e2e-predictor] conn = self._get_conn(timeout=pool_timeout) [e2e-predictor] [e2e-predictor] conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] [e2e-predictor] [e2e-predictor] # Is this a closed/new connection that requires CONNECT tunnelling? [e2e-predictor] if self.proxy is not None and http_tunnel_required and conn.is_closed: [e2e-predictor] try: [e2e-predictor] self._prepare_proxy(conn) [e2e-predictor] except (BaseSSLError, OSError, SocketTimeout) as e: [e2e-predictor] self._raise_timeout( [e2e-predictor] err=e, url=self.proxy.url, timeout_value=conn.timeout [e2e-predictor] ) [e2e-predictor] raise [e2e-predictor] [e2e-predictor] # If we're going to release the connection in ``finally:``, then [e2e-predictor] # the response doesn't need to know about the connection. Otherwise [e2e-predictor] # it will also try to release it and we'll have a double-release [e2e-predictor] # mess. 
[e2e-predictor]             response_conn = conn if not release_conn else None
[e2e-predictor] 
[e2e-predictor]             # Make the request on the HTTPConnection object
[e2e-predictor]             response = self._make_request(
[e2e-predictor]                 conn,
[e2e-predictor]                 method,
[e2e-predictor]                 url,
[e2e-predictor]                 timeout=timeout_obj,
[e2e-predictor]                 body=body,
[e2e-predictor]                 headers=headers,
[e2e-predictor]                 chunked=chunked,
[e2e-predictor]                 retries=retries,
[e2e-predictor]                 response_conn=response_conn,
[e2e-predictor]                 preload_content=preload_content,
[e2e-predictor]                 decode_content=decode_content,
[e2e-predictor]                 **response_kw,
[e2e-predictor]             )
[e2e-predictor] 
[e2e-predictor]             # Everything went great!
[e2e-predictor]             clean_exit = True
[e2e-predictor] 
[e2e-predictor]         except EmptyPoolError:
[e2e-predictor]             # Didn't get a connection from the pool, no need to clean up
[e2e-predictor]             clean_exit = True
[e2e-predictor]             release_this_conn = False
[e2e-predictor]             raise
[e2e-predictor] 
[e2e-predictor]         except (
[e2e-predictor]             TimeoutError,
[e2e-predictor]             HTTPException,
[e2e-predictor]             OSError,
[e2e-predictor]             ProtocolError,
[e2e-predictor]             BaseSSLError,
[e2e-predictor]             SSLError,
[e2e-predictor]             CertificateError,
[e2e-predictor]             ProxyError,
[e2e-predictor]         ) as e:
[e2e-predictor]             # Discard the connection for these exceptions. It will be
[e2e-predictor]             # replaced during the next _get_conn() call.
[e2e-predictor]             clean_exit = False
[e2e-predictor]             new_e: Exception = e
[e2e-predictor]             if isinstance(e, (BaseSSLError, CertificateError)):
[e2e-predictor]                 new_e = SSLError(e)
[e2e-predictor]             if isinstance(
[e2e-predictor]                 new_e,
[e2e-predictor]                 (
[e2e-predictor]                     OSError,
[e2e-predictor]                     NewConnectionError,
[e2e-predictor]                     TimeoutError,
[e2e-predictor]                     SSLError,
[e2e-predictor]                     HTTPException,
[e2e-predictor]                 ),
[e2e-predictor]             ) and (conn and conn.proxy and not conn.has_connected_to_proxy):
[e2e-predictor]                 new_e = _wrap_proxy_error(new_e, conn.proxy.scheme)
[e2e-predictor]             elif isinstance(new_e, (OSError, HTTPException)):
[e2e-predictor]                 new_e = ProtocolError("Connection aborted.", new_e)
[e2e-predictor] 
[e2e-predictor]             retries = retries.increment(
[e2e-predictor]                 method, url, error=new_e, _pool=self, _stacktrace=sys.exc_info()[2]
[e2e-predictor]             )
[e2e-predictor]             retries.sleep()
[e2e-predictor] 
[e2e-predictor]             # Keep track of the error for the retry warning.
[e2e-predictor]             err = e
[e2e-predictor] 
[e2e-predictor]         finally:
[e2e-predictor]             if not clean_exit:
[e2e-predictor]                 # We hit some kind of exception, handled or otherwise. We need
[e2e-predictor]                 # to throw the connection away unless explicitly told not to.
[e2e-predictor]                 # Close the connection, set the variable to None, and make sure
[e2e-predictor]                 # we put the None back in the pool to avoid leaking it.
[e2e-predictor]                 if conn:
[e2e-predictor]                     conn.close()
[e2e-predictor]                     conn = None
[e2e-predictor]                 release_this_conn = True
[e2e-predictor] 
[e2e-predictor]             if release_this_conn:
[e2e-predictor]                 # Put the connection back to be reused. If the connection is
[e2e-predictor]                 # expired then it will be None, which will get replaced with a
[e2e-predictor]                 # fresh connection during _get_conn.
[e2e-predictor]                 self._put_conn(conn)
[e2e-predictor] 
[e2e-predictor]         if not conn:
[e2e-predictor]             # Try again
[e2e-predictor]             log.warning(
[e2e-predictor]                 "Retrying (%r) after connection broken by '%r': %s", retries, err, url
[e2e-predictor]             )
[e2e-predictor] >           return self.urlopen(
[e2e-predictor]                 method,
[e2e-predictor]                 url,
[e2e-predictor]                 body,
[e2e-predictor]                 headers,
[e2e-predictor]                 retries,
[e2e-predictor]                 redirect,
[e2e-predictor]                 assert_same_host,
[e2e-predictor]                 timeout=timeout,
[e2e-predictor]                 pool_timeout=pool_timeout,
[e2e-predictor]                 release_conn=release_conn,
[e2e-predictor]                 chunked=chunked,
[e2e-predictor]                 body_pos=body_pos,
[e2e-predictor]                 preload_content=preload_content,
[e2e-predictor]                 decode_content=decode_content,
[e2e-predictor]                 **response_kw,
[e2e-predictor]             )
[e2e-predictor] 
[e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:871:
[e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
[e2e-predictor] 
[e2e-predictor] self = 
[e2e-predictor] method = 'POST'
[e2e-predictor] url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
[e2e-predictor] body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"name": "isvc-primary-b95658", "..., "requests": {"cpu": "50m", "memory": "128Mi"}}, "storageUri": "gs://kfserving-examples/models/sklearn/1.0/model"}}}}'
[e2e-predictor] headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'})
[e2e-predictor] retries = Retry(total=1, connect=None, read=None, redirect=None, status=None)
[e2e-predictor] redirect = False, assert_same_host = False, timeout = None, pool_timeout = None
[e2e-predictor] release_conn = True, chunked = False, body_pos = None, preload_content = True
[e2e-predictor] decode_content = True, response_kw = {}
[e2e-predictor] parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices', query=None, fragment=None)
[e2e-predictor] destination_scheme = None, conn = None, release_this_conn = True
[e2e-predictor] http_tunnel_required = False
[e2e-predictor] err = NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.c...a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")
[e2e-predictor] clean_exit = False
[e2e-predictor] 
[e2e-predictor]     def urlopen(  # type: ignore[override]
[e2e-predictor]         self,
[e2e-predictor]         method: str,
[e2e-predictor]         url: str,
[e2e-predictor]         body: _TYPE_BODY | None = None,
[e2e-predictor]         headers: typing.Mapping[str, str] | None = None,
[e2e-predictor]         retries: Retry | bool | int | None = None,
[e2e-predictor]         redirect: bool = True,
[e2e-predictor]         assert_same_host: bool = True,
[e2e-predictor]         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
[e2e-predictor]         pool_timeout: int | None = None,
[e2e-predictor]         release_conn: bool | None = None,
[e2e-predictor]         chunked: bool = False,
[e2e-predictor]         body_pos: _TYPE_BODY_POSITION | None = None,
[e2e-predictor]         preload_content: bool = True,
[e2e-predictor]         decode_content: bool = True,
[e2e-predictor]         **response_kw: typing.Any,
[e2e-predictor]     ) -> BaseHTTPResponse:
[e2e-predictor]         """
[e2e-predictor]         Get a connection from the pool and perform an HTTP request. This is the
[e2e-predictor]         lowest level call for making a request, so you'll need to specify all
[e2e-predictor]         the raw details.
[e2e-predictor] 
[e2e-predictor]         .. note::
[e2e-predictor] 
[e2e-predictor]            More commonly, it's appropriate to use a convenience method
[e2e-predictor]            such as :meth:`request`.
[e2e-predictor] 
[e2e-predictor]         .. note::
[e2e-predictor] 
[e2e-predictor]            `release_conn` will only behave as expected if
[e2e-predictor]            `preload_content=False` because we want to make
[e2e-predictor]            `preload_content=False` the default behaviour someday soon without
[e2e-predictor]            breaking backwards compatibility.
[e2e-predictor] 
[e2e-predictor]         :param method:
[e2e-predictor]             HTTP request method (such as GET, POST, PUT, etc.)
[e2e-predictor] 
[e2e-predictor]         :param url:
[e2e-predictor]             The URL to perform the request on.
[e2e-predictor] 
[e2e-predictor]         :param body:
[e2e-predictor]             Data to send in the request body, either :class:`str`, :class:`bytes`,
[e2e-predictor]             an iterable of :class:`str`/:class:`bytes`, or a file-like object.
[e2e-predictor] 
[e2e-predictor]         :param headers:
[e2e-predictor]             Dictionary of custom headers to send, such as User-Agent,
[e2e-predictor]             If-None-Match, etc. If None, pool headers are used. If provided,
[e2e-predictor]             these headers completely replace any pool-specific headers.
[e2e-predictor] 
[e2e-predictor]         :param retries:
[e2e-predictor]             Configure the number of retries to allow before raising a
[e2e-predictor]             :class:`~urllib3.exceptions.MaxRetryError` exception.
[e2e-predictor] 
[e2e-predictor]             If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a
[e2e-predictor]             :class:`~urllib3.util.retry.Retry` object for fine-grained control
[e2e-predictor]             over different types of retries.
[e2e-predictor]             Pass an integer number to retry connection errors that many times,
[e2e-predictor]             but no other types of errors. Pass zero to never retry.
[e2e-predictor] 
[e2e-predictor]             If ``False``, then retries are disabled and any exception is raised
[e2e-predictor]             immediately. Also, instead of raising a MaxRetryError on redirects,
[e2e-predictor]             the redirect response will be returned.
[e2e-predictor] 
[e2e-predictor]         :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
[e2e-predictor] 
[e2e-predictor]         :param redirect:
[e2e-predictor]             If True, automatically handle redirects (status codes 301, 302,
[e2e-predictor]             303, 307, 308). Each redirect counts as a retry. Disabling retries
[e2e-predictor]             will disable redirect, too.
[e2e-predictor] 
[e2e-predictor]         :param assert_same_host:
[e2e-predictor]             If ``True``, will make sure that the host of the pool requests is
[e2e-predictor]             consistent else will raise HostChangedError. When ``False``, you can
[e2e-predictor]             use the pool on an HTTP proxy and request foreign hosts.
[e2e-predictor] 
[e2e-predictor]         :param timeout:
[e2e-predictor]             If specified, overrides the default timeout for this one
[e2e-predictor]             request. It may be a float (in seconds) or an instance of
[e2e-predictor]             :class:`urllib3.util.Timeout`.
[e2e-predictor] 
[e2e-predictor]         :param pool_timeout:
[e2e-predictor]             If set and the pool is set to block=True, then this method will
[e2e-predictor]             block for ``pool_timeout`` seconds and raise EmptyPoolError if no
[e2e-predictor]             connection is available within the time period.
[e2e-predictor] 
[e2e-predictor]         :param bool preload_content:
[e2e-predictor]             If True, the response's body will be preloaded into memory.
[e2e-predictor] 
[e2e-predictor]         :param bool decode_content:
[e2e-predictor]             If True, will attempt to decode the body based on the
[e2e-predictor]             'content-encoding' header.
[e2e-predictor] 
[e2e-predictor]         :param release_conn:
[e2e-predictor]             If False, then the urlopen call will not release the connection
[e2e-predictor]             back into the pool once a response is received (but will release if
[e2e-predictor]             you read the entire contents of the response such as when
[e2e-predictor]             `preload_content=True`). This is useful if you're not preloading
[e2e-predictor]             the response's content immediately. You will need to call
[e2e-predictor]             ``r.release_conn()`` on the response ``r`` to return the connection
[e2e-predictor]             back into the pool. If None, it takes the value of ``preload_content``
[e2e-predictor]             which defaults to ``True``.
[e2e-predictor] 
[e2e-predictor]         :param bool chunked:
[e2e-predictor]             If True, urllib3 will send the body using chunked transfer
[e2e-predictor]             encoding. Otherwise, urllib3 will send the body using the standard
[e2e-predictor]             content-length form. Defaults to False.
[e2e-predictor] 
[e2e-predictor]         :param int body_pos:
[e2e-predictor]             Position to seek to in file-like body in the event of a retry or
[e2e-predictor]             redirect. Typically this won't need to be set because urllib3 will
[e2e-predictor]             auto-populate the value when needed.
[e2e-predictor]         """
[e2e-predictor]         parsed_url = parse_url(url)
[e2e-predictor]         destination_scheme = parsed_url.scheme
[e2e-predictor] 
[e2e-predictor]         if headers is None:
[e2e-predictor]             headers = self.headers
[e2e-predictor] 
[e2e-predictor]         if not isinstance(retries, Retry):
[e2e-predictor]             retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
[e2e-predictor] 
[e2e-predictor]         if release_conn is None:
[e2e-predictor]             release_conn = preload_content
[e2e-predictor] 
[e2e-predictor]         # Check host
[e2e-predictor]         if assert_same_host and not self.is_same_host(url):
[e2e-predictor]             raise HostChangedError(self, url, retries)
[e2e-predictor] 
[e2e-predictor]         # Ensure that the URL we're connecting to is properly encoded
[e2e-predictor]         if url.startswith("/"):
[e2e-predictor]             url = to_str(_encode_target(url))
[e2e-predictor]         else:
[e2e-predictor]             url = to_str(parsed_url.url)
[e2e-predictor] 
[e2e-predictor]         conn = None
[e2e-predictor] 
[e2e-predictor]         # Track whether `conn` needs to be released before
[e2e-predictor]         # returning/raising/recursing. Update this variable if necessary, and
[e2e-predictor]         # leave `release_conn` constant throughout the function. That way, if
[e2e-predictor]         # the function recurses, the original value of `release_conn` will be
[e2e-predictor]         # passed down into the recursive call, and its value will be respected.
[e2e-predictor]         #
[e2e-predictor]         # See issue #651 [1] for details.
[e2e-predictor]         #
[e2e-predictor]         # [1] 
[e2e-predictor]         release_this_conn = release_conn
[e2e-predictor] 
[e2e-predictor]         http_tunnel_required = connection_requires_http_tunnel(
[e2e-predictor]             self.proxy, self.proxy_config, destination_scheme
[e2e-predictor]         )
[e2e-predictor] 
[e2e-predictor]         # Merge the proxy headers. Only done when not using HTTP CONNECT. We
[e2e-predictor]         # have to copy the headers dict so we can safely change it without those
[e2e-predictor]         # changes being reflected in anyone else's copy.
[e2e-predictor]         if not http_tunnel_required:
[e2e-predictor]             headers = headers.copy()  # type: ignore[attr-defined]
[e2e-predictor]             headers.update(self.proxy_headers)  # type: ignore[union-attr]
[e2e-predictor] 
[e2e-predictor]         # Must keep the exception bound to a separate variable or else Python 3
[e2e-predictor]         # complains about UnboundLocalError.
[e2e-predictor]         err = None
[e2e-predictor] 
[e2e-predictor]         # Keep track of whether we cleanly exited the except block. This
[e2e-predictor]         # ensures we do proper cleanup in finally.
[e2e-predictor]         clean_exit = False
[e2e-predictor] 
[e2e-predictor]         # Rewind body position, if needed. Record current position
[e2e-predictor]         # for future rewinds in the event of a redirect/retry.
[e2e-predictor]         body_pos = set_file_position(body, body_pos)
[e2e-predictor] 
[e2e-predictor]         try:
[e2e-predictor]             # Request a connection from the queue.
[e2e-predictor]             timeout_obj = self._get_timeout(timeout)
[e2e-predictor]             conn = self._get_conn(timeout=pool_timeout)
[e2e-predictor] 
[e2e-predictor]             conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]
[e2e-predictor] 
[e2e-predictor]             # Is this a closed/new connection that requires CONNECT tunnelling?
[e2e-predictor]             if self.proxy is not None and http_tunnel_required and conn.is_closed:
[e2e-predictor]                 try:
[e2e-predictor]                     self._prepare_proxy(conn)
[e2e-predictor]                 except (BaseSSLError, OSError, SocketTimeout) as e:
[e2e-predictor]                     self._raise_timeout(
[e2e-predictor]                         err=e, url=self.proxy.url, timeout_value=conn.timeout
[e2e-predictor]                     )
[e2e-predictor]                     raise
[e2e-predictor] 
[e2e-predictor]             # If we're going to release the connection in ``finally:``, then
[e2e-predictor]             # the response doesn't need to know about the connection. Otherwise
[e2e-predictor]             # it will also try to release it and we'll have a double-release
[e2e-predictor]             # mess.
[e2e-predictor]             response_conn = conn if not release_conn else None
[e2e-predictor] 
[e2e-predictor]             # Make the request on the HTTPConnection object
[e2e-predictor]             response = self._make_request(
[e2e-predictor]                 conn,
[e2e-predictor]                 method,
[e2e-predictor]                 url,
[e2e-predictor]                 timeout=timeout_obj,
[e2e-predictor]                 body=body,
[e2e-predictor]                 headers=headers,
[e2e-predictor]                 chunked=chunked,
[e2e-predictor]                 retries=retries,
[e2e-predictor]                 response_conn=response_conn,
[e2e-predictor]                 preload_content=preload_content,
[e2e-predictor]                 decode_content=decode_content,
[e2e-predictor]                 **response_kw,
[e2e-predictor]             )
[e2e-predictor] 
[e2e-predictor]             # Everything went great!
[e2e-predictor]             clean_exit = True
[e2e-predictor] 
[e2e-predictor]         except EmptyPoolError:
[e2e-predictor]             # Didn't get a connection from the pool, no need to clean up
[e2e-predictor]             clean_exit = True
[e2e-predictor]             release_this_conn = False
[e2e-predictor]             raise
[e2e-predictor] 
[e2e-predictor]         except (
[e2e-predictor]             TimeoutError,
[e2e-predictor]             HTTPException,
[e2e-predictor]             OSError,
[e2e-predictor]             ProtocolError,
[e2e-predictor]             BaseSSLError,
[e2e-predictor]             SSLError,
[e2e-predictor]             CertificateError,
[e2e-predictor]             ProxyError,
[e2e-predictor]         ) as e:
[e2e-predictor]             # Discard the connection for these exceptions. It will be
[e2e-predictor]             # replaced during the next _get_conn() call.
[e2e-predictor]             clean_exit = False
[e2e-predictor]             new_e: Exception = e
[e2e-predictor]             if isinstance(e, (BaseSSLError, CertificateError)):
[e2e-predictor]                 new_e = SSLError(e)
[e2e-predictor]             if isinstance(
[e2e-predictor]                 new_e,
[e2e-predictor]                 (
[e2e-predictor]                     OSError,
[e2e-predictor]                     NewConnectionError,
[e2e-predictor]                     TimeoutError,
[e2e-predictor]                     SSLError,
[e2e-predictor]                     HTTPException,
[e2e-predictor]                 ),
[e2e-predictor]             ) and (conn and conn.proxy and not conn.has_connected_to_proxy):
[e2e-predictor]                 new_e = _wrap_proxy_error(new_e, conn.proxy.scheme)
[e2e-predictor]             elif isinstance(new_e, (OSError, HTTPException)):
[e2e-predictor]                 new_e = ProtocolError("Connection aborted.", new_e)
[e2e-predictor] 
[e2e-predictor]             retries = retries.increment(
[e2e-predictor]                 method, url, error=new_e, _pool=self, _stacktrace=sys.exc_info()[2]
[e2e-predictor]             )
[e2e-predictor]             retries.sleep()
[e2e-predictor] 
[e2e-predictor]             # Keep track of the error for the retry warning.
[e2e-predictor]             err = e
[e2e-predictor] 
[e2e-predictor]         finally:
[e2e-predictor]             if not clean_exit:
[e2e-predictor]                 # We hit some kind of exception, handled or otherwise. We need
[e2e-predictor]                 # to throw the connection away unless explicitly told not to.
[e2e-predictor]                 # Close the connection, set the variable to None, and make sure
[e2e-predictor]                 # we put the None back in the pool to avoid leaking it.
[e2e-predictor]                 if conn:
[e2e-predictor]                     conn.close()
[e2e-predictor]                     conn = None
[e2e-predictor]                 release_this_conn = True
[e2e-predictor] 
[e2e-predictor]             if release_this_conn:
[e2e-predictor]                 # Put the connection back to be reused. If the connection is
[e2e-predictor]                 # expired then it will be None, which will get replaced with a
[e2e-predictor]                 # fresh connection during _get_conn.
[e2e-predictor] self._put_conn(conn) [e2e-predictor] [e2e-predictor] if not conn: [e2e-predictor] # Try again [e2e-predictor] log.warning( [e2e-predictor] "Retrying (%r) after connection broken by '%r': %s", retries, err, url [e2e-predictor] ) [e2e-predictor] > return self.urlopen( [e2e-predictor] method, [e2e-predictor] url, [e2e-predictor] body, [e2e-predictor] headers, [e2e-predictor] retries, [e2e-predictor] redirect, [e2e-predictor] assert_same_host, [e2e-predictor] timeout=timeout, [e2e-predictor] pool_timeout=pool_timeout, [e2e-predictor] release_conn=release_conn, [e2e-predictor] chunked=chunked, [e2e-predictor] body_pos=body_pos, [e2e-predictor] preload_content=preload_content, [e2e-predictor] decode_content=decode_content, [e2e-predictor] **response_kw, [e2e-predictor] ) [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:871: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] method = 'POST' [e2e-predictor] url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices' [e2e-predictor] body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"name": "isvc-primary-b95658", "..., "requests": {"cpu": "50m", "memory": "128Mi"}}, "storageUri": "gs://kfserving-examples/models/sklearn/1.0/model"}}}}' [e2e-predictor] headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'}) [e2e-predictor] retries = Retry(total=0, connect=None, read=None, redirect=None, status=None) [e2e-predictor] redirect = False, assert_same_host = False, timeout = None, pool_timeout = None [e2e-predictor] release_conn = True, chunked = False, body_pos = None, preload_content = True [e2e-predictor] decode_content = True, response_kw = {} [e2e-predictor] parsed_url = Url(scheme=None, 
auth=None, host=None, port=None, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices', query=None, fragment=None) [e2e-predictor] destination_scheme = None, conn = None, release_this_conn = True [e2e-predictor] http_tunnel_required = False [e2e-predictor] err = NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.c...a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)") [e2e-predictor] clean_exit = False [e2e-predictor] [e2e-predictor] def urlopen( # type: ignore[override] [e2e-predictor] self, [e2e-predictor] method: str, [e2e-predictor] url: str, [e2e-predictor] body: _TYPE_BODY | None = None, [e2e-predictor] headers: typing.Mapping[str, str] | None = None, [e2e-predictor] retries: Retry | bool | int | None = None, [e2e-predictor] redirect: bool = True, [e2e-predictor] assert_same_host: bool = True, [e2e-predictor] timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, [e2e-predictor] pool_timeout: int | None = None, [e2e-predictor] release_conn: bool | None = None, [e2e-predictor] chunked: bool = False, [e2e-predictor] body_pos: _TYPE_BODY_POSITION | None = None, [e2e-predictor] preload_content: bool = True, [e2e-predictor] decode_content: bool = True, [e2e-predictor] **response_kw: typing.Any, [e2e-predictor] ) -> BaseHTTPResponse: [e2e-predictor] """ [e2e-predictor] Get a connection from the pool and perform an HTTP request. This is the [e2e-predictor] lowest level call for making a request, so you'll need to specify all [e2e-predictor] the raw details. [e2e-predictor] [e2e-predictor] .. note:: [e2e-predictor] [e2e-predictor] More commonly, it's appropriate to use a convenience method [e2e-predictor] such as :meth:`request`. [e2e-predictor] [e2e-predictor] .. 
note:: [e2e-predictor] [e2e-predictor] `release_conn` will only behave as expected if [e2e-predictor] `preload_content=False` because we want to make [e2e-predictor] `preload_content=False` the default behaviour someday soon without [e2e-predictor] breaking backwards compatibility. [e2e-predictor] [e2e-predictor] :param method: [e2e-predictor] HTTP request method (such as GET, POST, PUT, etc.) [e2e-predictor] [e2e-predictor] :param url: [e2e-predictor] The URL to perform the request on. [e2e-predictor] [e2e-predictor] :param body: [e2e-predictor] Data to send in the request body, either :class:`str`, :class:`bytes`, [e2e-predictor] an iterable of :class:`str`/:class:`bytes`, or a file-like object. [e2e-predictor] [e2e-predictor] :param headers: [e2e-predictor] Dictionary of custom headers to send, such as User-Agent, [e2e-predictor] If-None-Match, etc. If None, pool headers are used. If provided, [e2e-predictor] these headers completely replace any pool-specific headers. [e2e-predictor] [e2e-predictor] :param retries: [e2e-predictor] Configure the number of retries to allow before raising a [e2e-predictor] :class:`~urllib3.exceptions.MaxRetryError` exception. [e2e-predictor] [e2e-predictor] If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a [e2e-predictor] :class:`~urllib3.util.retry.Retry` object for fine-grained control [e2e-predictor] over different types of retries. [e2e-predictor] Pass an integer number to retry connection errors that many times, [e2e-predictor] but no other types of errors. Pass zero to never retry. [e2e-predictor] [e2e-predictor] If ``False``, then retries are disabled and any exception is raised [e2e-predictor] immediately. Also, instead of raising a MaxRetryError on redirects, [e2e-predictor] the redirect response will be returned. [e2e-predictor] [e2e-predictor] :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 
[e2e-predictor] [e2e-predictor] :param redirect: [e2e-predictor] If True, automatically handle redirects (status codes 301, 302, [e2e-predictor] 303, 307, 308). Each redirect counts as a retry. Disabling retries [e2e-predictor] will disable redirect, too. [e2e-predictor] [e2e-predictor] :param assert_same_host: [e2e-predictor] If ``True``, will make sure that the host of the pool requests is [e2e-predictor] consistent else will raise HostChangedError. When ``False``, you can [e2e-predictor] use the pool on an HTTP proxy and request foreign hosts. [e2e-predictor] [e2e-predictor] :param timeout: [e2e-predictor] If specified, overrides the default timeout for this one [e2e-predictor] request. It may be a float (in seconds) or an instance of [e2e-predictor] :class:`urllib3.util.Timeout`. [e2e-predictor] [e2e-predictor] :param pool_timeout: [e2e-predictor] If set and the pool is set to block=True, then this method will [e2e-predictor] block for ``pool_timeout`` seconds and raise EmptyPoolError if no [e2e-predictor] connection is available within the time period. [e2e-predictor] [e2e-predictor] :param bool preload_content: [e2e-predictor] If True, the response's body will be preloaded into memory. [e2e-predictor] [e2e-predictor] :param bool decode_content: [e2e-predictor] If True, will attempt to decode the body based on the [e2e-predictor] 'content-encoding' header. [e2e-predictor] [e2e-predictor] :param release_conn: [e2e-predictor] If False, then the urlopen call will not release the connection [e2e-predictor] back into the pool once a response is received (but will release if [e2e-predictor] you read the entire contents of the response such as when [e2e-predictor] `preload_content=True`). This is useful if you're not preloading [e2e-predictor] the response's content immediately. You will need to call [e2e-predictor] ``r.release_conn()`` on the response ``r`` to return the connection [e2e-predictor] back into the pool. 
If None, it takes the value of ``preload_content`` [e2e-predictor] which defaults to ``True``. [e2e-predictor] [e2e-predictor] :param bool chunked: [e2e-predictor] If True, urllib3 will send the body using chunked transfer [e2e-predictor] encoding. Otherwise, urllib3 will send the body using the standard [e2e-predictor] content-length form. Defaults to False. [e2e-predictor] [e2e-predictor] :param int body_pos: [e2e-predictor] Position to seek to in file-like body in the event of a retry or [e2e-predictor] redirect. Typically this won't need to be set because urllib3 will [e2e-predictor] auto-populate the value when needed. [e2e-predictor] """ [e2e-predictor] parsed_url = parse_url(url) [e2e-predictor] destination_scheme = parsed_url.scheme [e2e-predictor] [e2e-predictor] if headers is None: [e2e-predictor] headers = self.headers [e2e-predictor] [e2e-predictor] if not isinstance(retries, Retry): [e2e-predictor] retries = Retry.from_int(retries, redirect=redirect, default=self.retries) [e2e-predictor] [e2e-predictor] if release_conn is None: [e2e-predictor] release_conn = preload_content [e2e-predictor] [e2e-predictor] # Check host [e2e-predictor] if assert_same_host and not self.is_same_host(url): [e2e-predictor] raise HostChangedError(self, url, retries) [e2e-predictor] [e2e-predictor] # Ensure that the URL we're connecting to is properly encoded [e2e-predictor] if url.startswith("/"): [e2e-predictor] url = to_str(_encode_target(url)) [e2e-predictor] else: [e2e-predictor] url = to_str(parsed_url.url) [e2e-predictor] [e2e-predictor] conn = None [e2e-predictor] [e2e-predictor] # Track whether `conn` needs to be released before [e2e-predictor] # returning/raising/recursing. Update this variable if necessary, and [e2e-predictor] # leave `release_conn` constant throughout the function. That way, if [e2e-predictor] # the function recurses, the original value of `release_conn` will be [e2e-predictor] # passed down into the recursive call, and its value will be respected. 
[e2e-predictor] # [e2e-predictor] # See issue #651 [1] for details. [e2e-predictor] # [e2e-predictor] # [1] [e2e-predictor] release_this_conn = release_conn [e2e-predictor] [e2e-predictor] http_tunnel_required = connection_requires_http_tunnel( [e2e-predictor] self.proxy, self.proxy_config, destination_scheme [e2e-predictor] ) [e2e-predictor] [e2e-predictor] # Merge the proxy headers. Only done when not using HTTP CONNECT. We [e2e-predictor] # have to copy the headers dict so we can safely change it without those [e2e-predictor] # changes being reflected in anyone else's copy. [e2e-predictor] if not http_tunnel_required: [e2e-predictor] headers = headers.copy() # type: ignore[attr-defined] [e2e-predictor] headers.update(self.proxy_headers) # type: ignore[union-attr] [e2e-predictor] [e2e-predictor] # Must keep the exception bound to a separate variable or else Python 3 [e2e-predictor] # complains about UnboundLocalError. [e2e-predictor] err = None [e2e-predictor] [e2e-predictor] # Keep track of whether we cleanly exited the except block. This [e2e-predictor] # ensures we do proper cleanup in finally. [e2e-predictor] clean_exit = False [e2e-predictor] [e2e-predictor] # Rewind body position, if needed. Record current position [e2e-predictor] # for future rewinds in the event of a redirect/retry. [e2e-predictor] body_pos = set_file_position(body, body_pos) [e2e-predictor] [e2e-predictor] try: [e2e-predictor] # Request a connection from the queue. [e2e-predictor] timeout_obj = self._get_timeout(timeout) [e2e-predictor] conn = self._get_conn(timeout=pool_timeout) [e2e-predictor] [e2e-predictor] conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] [e2e-predictor] [e2e-predictor] # Is this a closed/new connection that requires CONNECT tunnelling? 
[e2e-predictor] if self.proxy is not None and http_tunnel_required and conn.is_closed: [e2e-predictor] try: [e2e-predictor] self._prepare_proxy(conn) [e2e-predictor] except (BaseSSLError, OSError, SocketTimeout) as e: [e2e-predictor] self._raise_timeout( [e2e-predictor] err=e, url=self.proxy.url, timeout_value=conn.timeout [e2e-predictor] ) [e2e-predictor] raise [e2e-predictor] [e2e-predictor] # If we're going to release the connection in ``finally:``, then [e2e-predictor] # the response doesn't need to know about the connection. Otherwise [e2e-predictor] # it will also try to release it and we'll have a double-release [e2e-predictor] # mess. [e2e-predictor] response_conn = conn if not release_conn else None [e2e-predictor] [e2e-predictor] # Make the request on the HTTPConnection object [e2e-predictor] response = self._make_request( [e2e-predictor] conn, [e2e-predictor] method, [e2e-predictor] url, [e2e-predictor] timeout=timeout_obj, [e2e-predictor] body=body, [e2e-predictor] headers=headers, [e2e-predictor] chunked=chunked, [e2e-predictor] retries=retries, [e2e-predictor] response_conn=response_conn, [e2e-predictor] preload_content=preload_content, [e2e-predictor] decode_content=decode_content, [e2e-predictor] **response_kw, [e2e-predictor] ) [e2e-predictor] [e2e-predictor] # Everything went great! [e2e-predictor] clean_exit = True [e2e-predictor] [e2e-predictor] except EmptyPoolError: [e2e-predictor] # Didn't get a connection from the pool, no need to clean up [e2e-predictor] clean_exit = True [e2e-predictor] release_this_conn = False [e2e-predictor] raise [e2e-predictor] [e2e-predictor] except ( [e2e-predictor] TimeoutError, [e2e-predictor] HTTPException, [e2e-predictor] OSError, [e2e-predictor] ProtocolError, [e2e-predictor] BaseSSLError, [e2e-predictor] SSLError, [e2e-predictor] CertificateError, [e2e-predictor] ProxyError, [e2e-predictor] ) as e: [e2e-predictor] # Discard the connection for these exceptions. 
It will be [e2e-predictor] # replaced during the next _get_conn() call. [e2e-predictor] clean_exit = False [e2e-predictor] new_e: Exception = e [e2e-predictor] if isinstance(e, (BaseSSLError, CertificateError)): [e2e-predictor] new_e = SSLError(e) [e2e-predictor] if isinstance( [e2e-predictor] new_e, [e2e-predictor] ( [e2e-predictor] OSError, [e2e-predictor] NewConnectionError, [e2e-predictor] TimeoutError, [e2e-predictor] SSLError, [e2e-predictor] HTTPException, [e2e-predictor] ), [e2e-predictor] ) and (conn and conn.proxy and not conn.has_connected_to_proxy): [e2e-predictor] new_e = _wrap_proxy_error(new_e, conn.proxy.scheme) [e2e-predictor] elif isinstance(new_e, (OSError, HTTPException)): [e2e-predictor] new_e = ProtocolError("Connection aborted.", new_e) [e2e-predictor] [e2e-predictor] retries = retries.increment( [e2e-predictor] method, url, error=new_e, _pool=self, _stacktrace=sys.exc_info()[2] [e2e-predictor] ) [e2e-predictor] retries.sleep() [e2e-predictor] [e2e-predictor] # Keep track of the error for the retry warning. [e2e-predictor] err = e [e2e-predictor] [e2e-predictor] finally: [e2e-predictor] if not clean_exit: [e2e-predictor] # We hit some kind of exception, handled or otherwise. We need [e2e-predictor] # to throw the connection away unless explicitly told not to. [e2e-predictor] # Close the connection, set the variable to None, and make sure [e2e-predictor] # we put the None back in the pool to avoid leaking it. [e2e-predictor] if conn: [e2e-predictor] conn.close() [e2e-predictor] conn = None [e2e-predictor] release_this_conn = True [e2e-predictor] [e2e-predictor] if release_this_conn: [e2e-predictor] # Put the connection back to be reused. If the connection is [e2e-predictor] # expired then it will be None, which will get replaced with a [e2e-predictor] # fresh connection during _get_conn. 
[e2e-predictor]                 self._put_conn(conn)
[e2e-predictor]
[e2e-predictor]         if not conn:
[e2e-predictor]             # Try again
[e2e-predictor]             log.warning(
[e2e-predictor]                 "Retrying (%r) after connection broken by '%r': %s", retries, err, url
[e2e-predictor]             )
[e2e-predictor] >           return self.urlopen(
[e2e-predictor]                 method,
[e2e-predictor]                 url,
[e2e-predictor]                 body,
[e2e-predictor]                 headers,
[e2e-predictor]                 retries,
[e2e-predictor]                 redirect,
[e2e-predictor]                 assert_same_host,
[e2e-predictor]                 timeout=timeout,
[e2e-predictor]                 pool_timeout=pool_timeout,
[e2e-predictor]                 release_conn=release_conn,
[e2e-predictor]                 chunked=chunked,
[e2e-predictor]                 body_pos=body_pos,
[e2e-predictor]                 preload_content=preload_content,
[e2e-predictor]                 decode_content=decode_content,
[e2e-predictor]                 **response_kw,
[e2e-predictor]             )
[e2e-predictor]
[e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:871:
[e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
[e2e-predictor]
[e2e-predictor] self =
[e2e-predictor] method = 'POST'
[e2e-predictor] url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
[e2e-predictor] body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"name": "isvc-primary-b95658", "..., "requests": {"cpu": "50m", "memory": "128Mi"}}, "storageUri": "gs://kfserving-examples/models/sklearn/1.0/model"}}}}'
[e2e-predictor] headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'})
[e2e-predictor] retries = Retry(total=0, connect=None, read=None, redirect=None, status=None)
[e2e-predictor] redirect = False, assert_same_host = False, timeout = None, pool_timeout = None
[e2e-predictor] release_conn = True, chunked = False, body_pos = None, preload_content = True
[e2e-predictor] decode_content = True, response_kw = {}
[e2e-predictor] parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices', query=None, fragment=None)
[e2e-predictor] destination_scheme = None, conn = None, release_this_conn = True
[e2e-predictor] http_tunnel_required = False, err = None, clean_exit = False
[e2e-predictor]
[e2e-predictor]     def urlopen(  # type: ignore[override]
[e2e-predictor]         self,
[e2e-predictor]         method: str,
[e2e-predictor]         url: str,
[e2e-predictor]         body: _TYPE_BODY | None = None,
[e2e-predictor]         headers: typing.Mapping[str, str] | None = None,
[e2e-predictor]         retries: Retry | bool | int | None = None,
[e2e-predictor]         redirect: bool = True,
[e2e-predictor]         assert_same_host: bool = True,
[e2e-predictor]         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
[e2e-predictor]         pool_timeout: int | None = None,
[e2e-predictor]         release_conn: bool | None = None,
[e2e-predictor]         chunked: bool = False,
[e2e-predictor]         body_pos: _TYPE_BODY_POSITION | None = None,
[e2e-predictor]         preload_content: bool = True,
[e2e-predictor]         decode_content: bool = True,
[e2e-predictor]         **response_kw: typing.Any,
[e2e-predictor]     ) -> BaseHTTPResponse:
[e2e-predictor]         """
[e2e-predictor]         Get a connection from the pool and perform an HTTP request. This is the
[e2e-predictor]         lowest level call for making a request, so you'll need to specify all
[e2e-predictor]         the raw details.
[e2e-predictor]
[e2e-predictor]         .. note::
[e2e-predictor]
[e2e-predictor]            More commonly, it's appropriate to use a convenience method
[e2e-predictor]            such as :meth:`request`.
[e2e-predictor]
[e2e-predictor]         .. note::
[e2e-predictor]
[e2e-predictor]            `release_conn` will only behave as expected if
[e2e-predictor]            `preload_content=False` because we want to make
[e2e-predictor]            `preload_content=False` the default behaviour someday soon without
[e2e-predictor]            breaking backwards compatibility.
[e2e-predictor]
[e2e-predictor]         :param method:
[e2e-predictor]             HTTP request method (such as GET, POST, PUT, etc.)
[e2e-predictor]
[e2e-predictor]         :param url:
[e2e-predictor]             The URL to perform the request on.
[e2e-predictor]
[e2e-predictor]         :param body:
[e2e-predictor]             Data to send in the request body, either :class:`str`, :class:`bytes`,
[e2e-predictor]             an iterable of :class:`str`/:class:`bytes`, or a file-like object.
[e2e-predictor]
[e2e-predictor]         :param headers:
[e2e-predictor]             Dictionary of custom headers to send, such as User-Agent,
[e2e-predictor]             If-None-Match, etc. If None, pool headers are used. If provided,
[e2e-predictor]             these headers completely replace any pool-specific headers.
[e2e-predictor]
[e2e-predictor]         :param retries:
[e2e-predictor]             Configure the number of retries to allow before raising a
[e2e-predictor]             :class:`~urllib3.exceptions.MaxRetryError` exception.
[e2e-predictor]
[e2e-predictor]             If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a
[e2e-predictor]             :class:`~urllib3.util.retry.Retry` object for fine-grained control
[e2e-predictor]             over different types of retries.
[e2e-predictor]             Pass an integer number to retry connection errors that many times,
[e2e-predictor]             but no other types of errors. Pass zero to never retry.
[e2e-predictor]
[e2e-predictor]             If ``False``, then retries are disabled and any exception is raised
[e2e-predictor]             immediately. Also, instead of raising a MaxRetryError on redirects,
[e2e-predictor]             the redirect response will be returned.
[e2e-predictor]
[e2e-predictor]         :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
[e2e-predictor]
[e2e-predictor]         :param redirect:
[e2e-predictor]             If True, automatically handle redirects (status codes 301, 302,
[e2e-predictor]             303, 307, 308). Each redirect counts as a retry. Disabling retries
[e2e-predictor]             will disable redirect, too.
[e2e-predictor]
[e2e-predictor]         :param assert_same_host:
[e2e-predictor]             If ``True``, will make sure that the host of the pool requests is
[e2e-predictor]             consistent else will raise HostChangedError. When ``False``, you can
[e2e-predictor]             use the pool on an HTTP proxy and request foreign hosts.
[e2e-predictor]
[e2e-predictor]         :param timeout:
[e2e-predictor]             If specified, overrides the default timeout for this one
[e2e-predictor]             request. It may be a float (in seconds) or an instance of
[e2e-predictor]             :class:`urllib3.util.Timeout`.
[e2e-predictor]
[e2e-predictor]         :param pool_timeout:
[e2e-predictor]             If set and the pool is set to block=True, then this method will
[e2e-predictor]             block for ``pool_timeout`` seconds and raise EmptyPoolError if no
[e2e-predictor]             connection is available within the time period.
[e2e-predictor]
[e2e-predictor]         :param bool preload_content:
[e2e-predictor]             If True, the response's body will be preloaded into memory.
[e2e-predictor]
[e2e-predictor]         :param bool decode_content:
[e2e-predictor]             If True, will attempt to decode the body based on the
[e2e-predictor]             'content-encoding' header.
[e2e-predictor]
[e2e-predictor]         :param release_conn:
[e2e-predictor]             If False, then the urlopen call will not release the connection
[e2e-predictor]             back into the pool once a response is received (but will release if
[e2e-predictor]             you read the entire contents of the response such as when
[e2e-predictor]             `preload_content=True`). This is useful if you're not preloading
[e2e-predictor]             the response's content immediately. You will need to call
[e2e-predictor]             ``r.release_conn()`` on the response ``r`` to return the connection
[e2e-predictor]             back into the pool. If None, it takes the value of ``preload_content``
[e2e-predictor]             which defaults to ``True``.
[e2e-predictor]
[e2e-predictor]         :param bool chunked:
[e2e-predictor]             If True, urllib3 will send the body using chunked transfer
[e2e-predictor]             encoding. Otherwise, urllib3 will send the body using the standard
[e2e-predictor]             content-length form. Defaults to False.
[e2e-predictor]
[e2e-predictor]         :param int body_pos:
[e2e-predictor]             Position to seek to in file-like body in the event of a retry or
[e2e-predictor]             redirect. Typically this won't need to be set because urllib3 will
[e2e-predictor]             auto-populate the value when needed.
[e2e-predictor]         """
[e2e-predictor]         parsed_url = parse_url(url)
[e2e-predictor]         destination_scheme = parsed_url.scheme
[e2e-predictor]
[e2e-predictor]         if headers is None:
[e2e-predictor]             headers = self.headers
[e2e-predictor]
[e2e-predictor]         if not isinstance(retries, Retry):
[e2e-predictor]             retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
[e2e-predictor]
[e2e-predictor]         if release_conn is None:
[e2e-predictor]             release_conn = preload_content
[e2e-predictor]
[e2e-predictor]         # Check host
[e2e-predictor]         if assert_same_host and not self.is_same_host(url):
[e2e-predictor]             raise HostChangedError(self, url, retries)
[e2e-predictor]
[e2e-predictor]         # Ensure that the URL we're connecting to is properly encoded
[e2e-predictor]         if url.startswith("/"):
[e2e-predictor]             url = to_str(_encode_target(url))
[e2e-predictor]         else:
[e2e-predictor]             url = to_str(parsed_url.url)
[e2e-predictor]
[e2e-predictor]         conn = None
[e2e-predictor]
[e2e-predictor]         # Track whether `conn` needs to be released before
[e2e-predictor]         # returning/raising/recursing. Update this variable if necessary, and
[e2e-predictor]         # leave `release_conn` constant throughout the function. That way, if
[e2e-predictor]         # the function recurses, the original value of `release_conn` will be
[e2e-predictor]         # passed down into the recursive call, and its value will be respected.
[e2e-predictor]         #
[e2e-predictor]         # See issue #651 [1] for details.
[e2e-predictor]         #
[e2e-predictor]         # [1]
[e2e-predictor]         release_this_conn = release_conn
[e2e-predictor]
[e2e-predictor]         http_tunnel_required = connection_requires_http_tunnel(
[e2e-predictor]             self.proxy, self.proxy_config, destination_scheme
[e2e-predictor]         )
[e2e-predictor]
[e2e-predictor]         # Merge the proxy headers. Only done when not using HTTP CONNECT. We
[e2e-predictor]         # have to copy the headers dict so we can safely change it without those
[e2e-predictor]         # changes being reflected in anyone else's copy.
[e2e-predictor]         if not http_tunnel_required:
[e2e-predictor]             headers = headers.copy()  # type: ignore[attr-defined]
[e2e-predictor]             headers.update(self.proxy_headers)  # type: ignore[union-attr]
[e2e-predictor]
[e2e-predictor]         # Must keep the exception bound to a separate variable or else Python 3
[e2e-predictor]         # complains about UnboundLocalError.
[e2e-predictor]         err = None
[e2e-predictor]
[e2e-predictor]         # Keep track of whether we cleanly exited the except block. This
[e2e-predictor]         # ensures we do proper cleanup in finally.
[e2e-predictor]         clean_exit = False
[e2e-predictor]
[e2e-predictor]         # Rewind body position, if needed. Record current position
[e2e-predictor]         # for future rewinds in the event of a redirect/retry.
[e2e-predictor]         body_pos = set_file_position(body, body_pos)
[e2e-predictor]
[e2e-predictor]         try:
[e2e-predictor]             # Request a connection from the queue.
[e2e-predictor]             timeout_obj = self._get_timeout(timeout)
[e2e-predictor]             conn = self._get_conn(timeout=pool_timeout)
[e2e-predictor]
[e2e-predictor]             conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]
[e2e-predictor]
[e2e-predictor]             # Is this a closed/new connection that requires CONNECT tunnelling?
[e2e-predictor]             if self.proxy is not None and http_tunnel_required and conn.is_closed:
[e2e-predictor]                 try:
[e2e-predictor]                     self._prepare_proxy(conn)
[e2e-predictor]                 except (BaseSSLError, OSError, SocketTimeout) as e:
[e2e-predictor]                     self._raise_timeout(
[e2e-predictor]                         err=e, url=self.proxy.url, timeout_value=conn.timeout
[e2e-predictor]                     )
[e2e-predictor]                     raise
[e2e-predictor]
[e2e-predictor]             # If we're going to release the connection in ``finally:``, then
[e2e-predictor]             # the response doesn't need to know about the connection. Otherwise
[e2e-predictor]             # it will also try to release it and we'll have a double-release
[e2e-predictor]             # mess.
[e2e-predictor]             response_conn = conn if not release_conn else None
[e2e-predictor]
[e2e-predictor]             # Make the request on the HTTPConnection object
[e2e-predictor]             response = self._make_request(
[e2e-predictor]                 conn,
[e2e-predictor]                 method,
[e2e-predictor]                 url,
[e2e-predictor]                 timeout=timeout_obj,
[e2e-predictor]                 body=body,
[e2e-predictor]                 headers=headers,
[e2e-predictor]                 chunked=chunked,
[e2e-predictor]                 retries=retries,
[e2e-predictor]                 response_conn=response_conn,
[e2e-predictor]                 preload_content=preload_content,
[e2e-predictor]                 decode_content=decode_content,
[e2e-predictor]                 **response_kw,
[e2e-predictor]             )
[e2e-predictor]
[e2e-predictor]             # Everything went great!
[e2e-predictor]             clean_exit = True
[e2e-predictor]
[e2e-predictor]         except EmptyPoolError:
[e2e-predictor]             # Didn't get a connection from the pool, no need to clean up
[e2e-predictor]             clean_exit = True
[e2e-predictor]             release_this_conn = False
[e2e-predictor]             raise
[e2e-predictor]
[e2e-predictor]         except (
[e2e-predictor]             TimeoutError,
[e2e-predictor]             HTTPException,
[e2e-predictor]             OSError,
[e2e-predictor]             ProtocolError,
[e2e-predictor]             BaseSSLError,
[e2e-predictor]             SSLError,
[e2e-predictor]             CertificateError,
[e2e-predictor]             ProxyError,
[e2e-predictor]         ) as e:
[e2e-predictor]             # Discard the connection for these exceptions. It will be
[e2e-predictor]             # replaced during the next _get_conn() call.
[e2e-predictor]             clean_exit = False
[e2e-predictor]             new_e: Exception = e
[e2e-predictor]             if isinstance(e, (BaseSSLError, CertificateError)):
[e2e-predictor]                 new_e = SSLError(e)
[e2e-predictor]             if isinstance(
[e2e-predictor]                 new_e,
[e2e-predictor]                 (
[e2e-predictor]                     OSError,
[e2e-predictor]                     NewConnectionError,
[e2e-predictor]                     TimeoutError,
[e2e-predictor]                     SSLError,
[e2e-predictor]                     HTTPException,
[e2e-predictor]                 ),
[e2e-predictor]             ) and (conn and conn.proxy and not conn.has_connected_to_proxy):
[e2e-predictor]                 new_e = _wrap_proxy_error(new_e, conn.proxy.scheme)
[e2e-predictor]             elif isinstance(new_e, (OSError, HTTPException)):
[e2e-predictor]                 new_e = ProtocolError("Connection aborted.", new_e)
[e2e-predictor]
[e2e-predictor] >           retries = retries.increment(
[e2e-predictor]                 method, url, error=new_e, _pool=self, _stacktrace=sys.exc_info()[2]
[e2e-predictor]             )
[e2e-predictor]
[e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:841:
[e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
[e2e-predictor]
[e2e-predictor] self = Retry(total=0, connect=None, read=None, redirect=None, status=None)
[e2e-predictor] method = 'POST'
[e2e-predictor] url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
[e2e-predictor] response = None
[e2e-predictor] error = NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.c...a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")
[e2e-predictor] _pool =
[e2e-predictor] _stacktrace =
[e2e-predictor]
[e2e-predictor]     def increment(
[e2e-predictor]         self,
[e2e-predictor]         method: str | None = None,
[e2e-predictor]         url: str | None = None,
[e2e-predictor]         response: BaseHTTPResponse | None = None,
[e2e-predictor]         error: Exception | None = None,
[e2e-predictor]         _pool: ConnectionPool | None = None,
[e2e-predictor]         _stacktrace: TracebackType | None = None,
[e2e-predictor]     ) -> Self:
[e2e-predictor]         """Return a new Retry object with incremented retry counters.
[e2e-predictor]
[e2e-predictor]         :param response: A response object, or None, if the server did not
[e2e-predictor]             return a response.
[e2e-predictor]         :type response: :class:`~urllib3.response.BaseHTTPResponse`
[e2e-predictor]         :param Exception error: An error encountered during the request, or
[e2e-predictor]             None if the response was received successfully.
[e2e-predictor]
[e2e-predictor]         :return: A new ``Retry`` object.
[e2e-predictor]         """
[e2e-predictor]         if self.total is False and error:
[e2e-predictor]             # Disabled, indicate to re-raise the error.
[e2e-predictor]             raise reraise(type(error), error, _stacktrace)
[e2e-predictor]
[e2e-predictor]         total = self.total
[e2e-predictor]         if total is not None:
[e2e-predictor]             total -= 1
[e2e-predictor]
[e2e-predictor]         connect = self.connect
[e2e-predictor]         read = self.read
[e2e-predictor]         redirect = self.redirect
[e2e-predictor]         status_count = self.status
[e2e-predictor]         other = self.other
[e2e-predictor]         cause = "unknown"
[e2e-predictor]         status = None
[e2e-predictor]         redirect_location = None
[e2e-predictor]
[e2e-predictor]         if error and self._is_connection_error(error):
[e2e-predictor]             # Connect retry?
[e2e-predictor]             if connect is False:
[e2e-predictor]                 raise reraise(type(error), error, _stacktrace)
[e2e-predictor]             elif connect is not None:
[e2e-predictor]                 connect -= 1
[e2e-predictor]
[e2e-predictor]         elif error and self._is_read_error(error):
[e2e-predictor]             # Read retry?
[e2e-predictor]             if read is False or method is None or not self._is_method_retryable(method):
[e2e-predictor]                 raise reraise(type(error), error, _stacktrace)
[e2e-predictor]             elif read is not None:
[e2e-predictor]                 read -= 1
[e2e-predictor]
[e2e-predictor]         elif error:
[e2e-predictor]             # Other retry?
[e2e-predictor]             if other is not None:
[e2e-predictor]                 other -= 1
[e2e-predictor]
[e2e-predictor]         elif response and response.get_redirect_location():
[e2e-predictor]             # Redirect retry?
[e2e-predictor]             if redirect is not None:
[e2e-predictor]                 redirect -= 1
[e2e-predictor]             cause = "too many redirects"
[e2e-predictor]             response_redirect_location = response.get_redirect_location()
[e2e-predictor]             if response_redirect_location:
[e2e-predictor]                 redirect_location = response_redirect_location
[e2e-predictor]             status = response.status
[e2e-predictor]
[e2e-predictor]         else:
[e2e-predictor]             # Incrementing because of a server error like a 500 in
[e2e-predictor]             # status_forcelist and the given method is in the allowed_methods
[e2e-predictor]             cause = ResponseError.GENERIC_ERROR
[e2e-predictor]             if response and response.status:
[e2e-predictor]                 if status_count is not None:
[e2e-predictor]                     status_count -= 1
[e2e-predictor]                 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
[e2e-predictor]             status = response.status
[e2e-predictor]
[e2e-predictor]         history = self.history + (
[e2e-predictor]             RequestHistory(method, url, error, status, redirect_location),
[e2e-predictor]         )
[e2e-predictor]
[e2e-predictor]         new_retry = self.new(
[e2e-predictor]             total=total,
[e2e-predictor]             connect=connect,
[e2e-predictor]             read=read,
[e2e-predictor]             redirect=redirect,
[e2e-predictor]             status=status_count,
[e2e-predictor]             other=other,
[e2e-predictor]             history=history,
[e2e-predictor]         )
[e2e-predictor]
[e2e-predictor]         if new_retry.is_exhausted():
[e2e-predictor]             reason = error or ResponseError(cause)
[e2e-predictor] >           raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
[e2e-predictor] E           urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Max retries exceeded with url: /apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices (Caused by NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)"))
[e2e-predictor]
[e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError
[e2e-predictor] ------------------------------ Captured log call -------------------------------
[e2e-predictor] WARNING  urllib3.connectionpool:connectionpool.py:868 Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")': /apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices
[e2e-predictor] WARNING  urllib3.connectionpool:connectionpool.py:868 Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")': /apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices
[e2e-predictor] WARNING  urllib3.connectionpool:connectionpool.py:868 Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")': /apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices
[e2e-predictor] WARNING  urllib3.connectionpool:connectionpool.py:868 Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")': /apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices/isvc-primary-b95658
[e2e-predictor] WARNING  urllib3.connectionpool:connectionpool.py:868 Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")': /apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices/isvc-primary-b95658
[e2e-predictor] WARNING  urllib3.connectionpool:connectionpool.py:868 Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")': /apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices/isvc-primary-b95658
[e2e-predictor] WARNING  urllib3.connectionpool:connectionpool.py:868 Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")': /apis/apps/v1/namespaces/kserve-ci-e2e-test/deployments?labelSelector=serving.kserve.io%2Finferenceservice%3Disvc-primary-b95658
[e2e-predictor] WARNING  urllib3.connectionpool:connectionpool.py:868 Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")': /apis/apps/v1/namespaces/kserve-ci-e2e-test/deployments?labelSelector=serving.kserve.io%2Finferenceservice%3Disvc-primary-b95658
[e2e-predictor] WARNING  urllib3.connectionpool:connectionpool.py:868 Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")': /apis/apps/v1/namespaces/kserve-ci-e2e-test/deployments?labelSelector=serving.kserve.io%2Finferenceservice%3Disvc-primary-b95658
[e2e-predictor] WARNING  urllib3.connectionpool:connectionpool.py:868 Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")': /api/v1/namespaces/kserve-ci-e2e-test/pods?labelSelector=serving.kserve.io%2Finferenceservice%3Disvc-primary-b95658
[e2e-predictor] WARNING  urllib3.connectionpool:connectionpool.py:868 Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")': /api/v1/namespaces/kserve-ci-e2e-test/pods?labelSelector=serving.kserve.io%2Finferenceservice%3Disvc-primary-b95658
[e2e-predictor] WARNING  urllib3.connectionpool:connectionpool.py:868 Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")': /api/v1/namespaces/kserve-ci-e2e-test/pods?labelSelector=serving.kserve.io%2Finferenceservice%3Disvc-primary-b95658
[e2e-predictor] WARNING  urllib3.connectionpool:connectionpool.py:868 Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")': /api/v1/namespaces/kserve/pods?labelSelector=control-plane%3Dkserve-controller-manager
[e2e-predictor] WARNING  urllib3.connectionpool:connectionpool.py:868 Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")': /api/v1/namespaces/kserve/pods?labelSelector=control-plane%3Dkserve-controller-manager
[e2e-predictor] WARNING  urllib3.connectionpool:connectionpool.py:868 Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")': /api/v1/namespaces/kserve/pods?labelSelector=control-plane%3Dkserve-controller-manager
[e2e-predictor] INFO     kserve.trace:test_pod_watch.py:104 DEBUG DUMP kserve-ci-e2e-test/isvc-primary-b95658:
[e2e-predictor] {"isvc":{"error":"Failed to get ISVC isvc-primary-b95658: HTTPSConnectionPool(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Max retries exceeded with url: /apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices/isvc-primary-b95658 (Caused by NameResolutionError(\"HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)\"))"},"deployments":[{"error":"Failed to list deployments for ISVC isvc-primary-b95658: HTTPSConnectionPool(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Max retries exceeded with url: /apis/apps/v1/namespaces/kserve-ci-e2e-test/deployments?labelSelector=serving.kserve.io%2Finferenceservice%3Disvc-primary-b95658 (Caused by NameResolutionError(\"HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)\"))"}],"pods":[{"error":"Failed to list pods for ISVC isvc-primary-b95658: HTTPSConnectionPool(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Max retries exceeded with url: /api/v1/namespaces/kserve-ci-e2e-test/pods?labelSelector=serving.kserve.io%2Finferenceservice%3Disvc-primary-b95658 (Caused by NameResolutionError(\"HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)\"))"}],"controller_logs":[{"error":"Failed to get controller logs: HTTPSConnectionPool(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Max retries exceeded with url: /api/v1/namespaces/kserve/pods?labelSelector=control-plane%3Dkserve-controller-manager (Caused by NameResolutionError(\"HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)\"))"}]}
[e2e-predictor] WARNING  urllib3.connectionpool:connectionpool.py:868 Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")': /apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices/isvc-primary-b95658
[e2e-predictor] WARNING  urllib3.connectionpool:connectionpool.py:868 Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")': /apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices/isvc-primary-b95658
[e2e-predictor] WARNING  urllib3.connectionpool:connectionpool.py:868 Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")': /apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices/isvc-primary-b95658
[e2e-predictor] _____________ test_quick_reconciliation_on_init_container_failure ______________
[e2e-predictor] [gw0] linux -- Python 3.11.13 /workspace/source/python/kserve/.venv/bin/python
[e2e-predictor]
[e2e-predictor] self =
[e2e-predictor]
[e2e-predictor]     def _new_conn(self) -> socket.socket:
[e2e-predictor]         """Establish a socket connection and set nodelay settings on it.
[e2e-predictor]
[e2e-predictor]         :return: New socket connection.
[e2e-predictor] """ [e2e-predictor] try: [e2e-predictor] > sock = connection.create_connection( [e2e-predictor] (self._dns_host, self.port), [e2e-predictor] self.timeout, [e2e-predictor] source_address=self.source_address, [e2e-predictor] socket_options=self.socket_options, [e2e-predictor] ) [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connection.py:204: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] address = ('a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', 6443) [e2e-predictor] timeout = None, source_address = None, socket_options = [(6, 1, 1)] [e2e-predictor] [e2e-predictor] def create_connection( [e2e-predictor] address: tuple[str, int], [e2e-predictor] timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, [e2e-predictor] source_address: tuple[str, int] | None = None, [e2e-predictor] socket_options: _TYPE_SOCKET_OPTIONS | None = None, [e2e-predictor] ) -> socket.socket: [e2e-predictor] """Connect to *address* and return the socket object. [e2e-predictor] [e2e-predictor] Convenience function. Connect to *address* (a 2-tuple ``(host, [e2e-predictor] port)``) and return the socket object. Passing the optional [e2e-predictor] *timeout* parameter will set the timeout on the socket instance [e2e-predictor] before attempting to connect. If no *timeout* is supplied, the [e2e-predictor] global default timeout setting returned by :func:`socket.getdefaulttimeout` [e2e-predictor] is used. If *source_address* is set it must be a tuple of (host, port) [e2e-predictor] for the socket to bind as a source address before making the connection. [e2e-predictor] An host of '' or port 0 tells the OS to use the default. 
[e2e-predictor] """ [e2e-predictor] [e2e-predictor] host, port = address [e2e-predictor] if host.startswith("["): [e2e-predictor] host = host.strip("[]") [e2e-predictor] err = None [e2e-predictor] [e2e-predictor] # Using the value from allowed_gai_family() in the context of getaddrinfo lets [e2e-predictor] # us select whether to work with IPv4 DNS records, IPv6 records, or both. [e2e-predictor] # The original create_connection function always returns all records. [e2e-predictor] family = allowed_gai_family() [e2e-predictor] [e2e-predictor] try: [e2e-predictor] host.encode("idna") [e2e-predictor] except UnicodeError: [e2e-predictor] raise LocationParseError(f"'{host}', label empty or too long") from None [e2e-predictor] [e2e-predictor] > for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/util/connection.py:60: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] host = 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' [e2e-predictor] port = 6443, family = [e2e-predictor] type = , proto = 0, flags = 0 [e2e-predictor] [e2e-predictor] def getaddrinfo(host, port, family=0, type=0, proto=0, flags=0): [e2e-predictor] """Resolve host and port into list of address info entries. [e2e-predictor] [e2e-predictor] Translate the host/port argument into a sequence of 5-tuples that contain [e2e-predictor] all the necessary arguments for creating a socket connected to that service. [e2e-predictor] host is a domain name, a string representation of an IPv4/v6 address or [e2e-predictor] None. port is a string service name such as 'http', a numeric port number or [e2e-predictor] None. By passing None as the value of host and port, you can pass NULL to [e2e-predictor] the underlying C API. 
[e2e-predictor] [e2e-predictor] The family, type and proto arguments can be optionally specified in order to [e2e-predictor] narrow the list of addresses returned. Passing zero as a value for each of [e2e-predictor] these arguments selects the full range of results. [e2e-predictor] """ [e2e-predictor] # We override this function since we want to translate the numeric family [e2e-predictor] # and socket type values to enum constants. [e2e-predictor] addrlist = [] [e2e-predictor] > for res in _socket.getaddrinfo(host, port, family, type, proto, flags): [e2e-predictor] E socket.gaierror: [Errno -2] Name or service not known [e2e-predictor] [e2e-predictor] /usr/lib64/python3.11/socket.py:974: gaierror [e2e-predictor] [e2e-predictor] The above exception was the direct cause of the following exception: [e2e-predictor] [e2e-predictor] self = [e2e-predictor] method = 'DELETE' [e2e-predictor] url = '/api/v1/namespaces/kserve-ci-e2e-test/secrets/fail-s3-secret-586023' [e2e-predictor] body = None [e2e-predictor] headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'} [e2e-predictor] retries = Retry(total=0, connect=None, read=None, redirect=None, status=None) [e2e-predictor] redirect = False, assert_same_host = False, timeout = None, pool_timeout = None [e2e-predictor] release_conn = True, chunked = False, body_pos = None, preload_content = True [e2e-predictor] decode_content = True, response_kw = {} [e2e-predictor] parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/api/v1/namespaces/kserve-ci-e2e-test/secrets/fail-s3-secret-586023', query=None, fragment=None) [e2e-predictor] destination_scheme = None, conn = None, release_this_conn = True [e2e-predictor] http_tunnel_required = False, err = None, clean_exit = False [e2e-predictor] [e2e-predictor] def urlopen( # type: ignore[override] [e2e-predictor] self, [e2e-predictor] method: str, [e2e-predictor] url: str, [e2e-predictor] body: 
_TYPE_BODY | None = None, [e2e-predictor] headers: typing.Mapping[str, str] | None = None, [e2e-predictor] retries: Retry | bool | int | None = None, [e2e-predictor] redirect: bool = True, [e2e-predictor] assert_same_host: bool = True, [e2e-predictor] timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, [e2e-predictor] pool_timeout: int | None = None, [e2e-predictor] release_conn: bool | None = None, [e2e-predictor] chunked: bool = False, [e2e-predictor] body_pos: _TYPE_BODY_POSITION | None = None, [e2e-predictor] preload_content: bool = True, [e2e-predictor] decode_content: bool = True, [e2e-predictor] **response_kw: typing.Any, [e2e-predictor] ) -> BaseHTTPResponse: [e2e-predictor] """ [e2e-predictor] Get a connection from the pool and perform an HTTP request. This is the [e2e-predictor] lowest level call for making a request, so you'll need to specify all [e2e-predictor] the raw details. [e2e-predictor] [e2e-predictor] .. note:: [e2e-predictor] [e2e-predictor] More commonly, it's appropriate to use a convenience method [e2e-predictor] such as :meth:`request`. [e2e-predictor] [e2e-predictor] .. note:: [e2e-predictor] [e2e-predictor] `release_conn` will only behave as expected if [e2e-predictor] `preload_content=False` because we want to make [e2e-predictor] `preload_content=False` the default behaviour someday soon without [e2e-predictor] breaking backwards compatibility. [e2e-predictor] [e2e-predictor] :param method: [e2e-predictor] HTTP request method (such as GET, POST, PUT, etc.) [e2e-predictor] [e2e-predictor] :param url: [e2e-predictor] The URL to perform the request on. [e2e-predictor] [e2e-predictor] :param body: [e2e-predictor] Data to send in the request body, either :class:`str`, :class:`bytes`, [e2e-predictor] an iterable of :class:`str`/:class:`bytes`, or a file-like object. [e2e-predictor] [e2e-predictor] :param headers: [e2e-predictor] Dictionary of custom headers to send, such as User-Agent, [e2e-predictor] If-None-Match, etc. 
If None, pool headers are used. If provided, [e2e-predictor] these headers completely replace any pool-specific headers. [e2e-predictor] [e2e-predictor] :param retries: [e2e-predictor] Configure the number of retries to allow before raising a [e2e-predictor] :class:`~urllib3.exceptions.MaxRetryError` exception. [e2e-predictor] [e2e-predictor] If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a [e2e-predictor] :class:`~urllib3.util.retry.Retry` object for fine-grained control [e2e-predictor] over different types of retries. [e2e-predictor] Pass an integer number to retry connection errors that many times, [e2e-predictor] but no other types of errors. Pass zero to never retry. [e2e-predictor] [e2e-predictor] If ``False``, then retries are disabled and any exception is raised [e2e-predictor] immediately. Also, instead of raising a MaxRetryError on redirects, [e2e-predictor] the redirect response will be returned. [e2e-predictor] [e2e-predictor] :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. [e2e-predictor] [e2e-predictor] :param redirect: [e2e-predictor] If True, automatically handle redirects (status codes 301, 302, [e2e-predictor] 303, 307, 308). Each redirect counts as a retry. Disabling retries [e2e-predictor] will disable redirect, too. [e2e-predictor] [e2e-predictor] :param assert_same_host: [e2e-predictor] If ``True``, will make sure that the host of the pool requests is [e2e-predictor] consistent else will raise HostChangedError. When ``False``, you can [e2e-predictor] use the pool on an HTTP proxy and request foreign hosts. [e2e-predictor] [e2e-predictor] :param timeout: [e2e-predictor] If specified, overrides the default timeout for this one [e2e-predictor] request. It may be a float (in seconds) or an instance of [e2e-predictor] :class:`urllib3.util.Timeout`. 
[e2e-predictor] [e2e-predictor] :param pool_timeout: [e2e-predictor] If set and the pool is set to block=True, then this method will [e2e-predictor] block for ``pool_timeout`` seconds and raise EmptyPoolError if no [e2e-predictor] connection is available within the time period. [e2e-predictor] [e2e-predictor] :param bool preload_content: [e2e-predictor] If True, the response's body will be preloaded into memory. [e2e-predictor] [e2e-predictor] :param bool decode_content: [e2e-predictor] If True, will attempt to decode the body based on the [e2e-predictor] 'content-encoding' header. [e2e-predictor] [e2e-predictor] :param release_conn: [e2e-predictor] If False, then the urlopen call will not release the connection [e2e-predictor] back into the pool once a response is received (but will release if [e2e-predictor] you read the entire contents of the response such as when [e2e-predictor] `preload_content=True`). This is useful if you're not preloading [e2e-predictor] the response's content immediately. You will need to call [e2e-predictor] ``r.release_conn()`` on the response ``r`` to return the connection [e2e-predictor] back into the pool. If None, it takes the value of ``preload_content`` [e2e-predictor] which defaults to ``True``. [e2e-predictor] [e2e-predictor] :param bool chunked: [e2e-predictor] If True, urllib3 will send the body using chunked transfer [e2e-predictor] encoding. Otherwise, urllib3 will send the body using the standard [e2e-predictor] content-length form. Defaults to False. [e2e-predictor] [e2e-predictor] :param int body_pos: [e2e-predictor] Position to seek to in file-like body in the event of a retry or [e2e-predictor] redirect. Typically this won't need to be set because urllib3 will [e2e-predictor] auto-populate the value when needed. 
[e2e-predictor] """ [e2e-predictor] parsed_url = parse_url(url) [e2e-predictor] destination_scheme = parsed_url.scheme [e2e-predictor] [e2e-predictor] if headers is None: [e2e-predictor] headers = self.headers [e2e-predictor] [e2e-predictor] if not isinstance(retries, Retry): [e2e-predictor] retries = Retry.from_int(retries, redirect=redirect, default=self.retries) [e2e-predictor] [e2e-predictor] if release_conn is None: [e2e-predictor] release_conn = preload_content [e2e-predictor] [e2e-predictor] # Check host [e2e-predictor] if assert_same_host and not self.is_same_host(url): [e2e-predictor] raise HostChangedError(self, url, retries) [e2e-predictor] [e2e-predictor] # Ensure that the URL we're connecting to is properly encoded [e2e-predictor] if url.startswith("/"): [e2e-predictor] url = to_str(_encode_target(url)) [e2e-predictor] else: [e2e-predictor] url = to_str(parsed_url.url) [e2e-predictor] [e2e-predictor] conn = None [e2e-predictor] [e2e-predictor] # Track whether `conn` needs to be released before [e2e-predictor] # returning/raising/recursing. Update this variable if necessary, and [e2e-predictor] # leave `release_conn` constant throughout the function. That way, if [e2e-predictor] # the function recurses, the original value of `release_conn` will be [e2e-predictor] # passed down into the recursive call, and its value will be respected. [e2e-predictor] # [e2e-predictor] # See issue #651 [1] for details. [e2e-predictor] # [e2e-predictor] # [1] [e2e-predictor] release_this_conn = release_conn [e2e-predictor] [e2e-predictor] http_tunnel_required = connection_requires_http_tunnel( [e2e-predictor] self.proxy, self.proxy_config, destination_scheme [e2e-predictor] ) [e2e-predictor] [e2e-predictor] # Merge the proxy headers. Only done when not using HTTP CONNECT. We [e2e-predictor] # have to copy the headers dict so we can safely change it without those [e2e-predictor] # changes being reflected in anyone else's copy. 
[e2e-predictor] if not http_tunnel_required: [e2e-predictor] headers = headers.copy() # type: ignore[attr-defined] [e2e-predictor] headers.update(self.proxy_headers) # type: ignore[union-attr] [e2e-predictor] [e2e-predictor] # Must keep the exception bound to a separate variable or else Python 3 [e2e-predictor] # complains about UnboundLocalError. [e2e-predictor] err = None [e2e-predictor] [e2e-predictor] # Keep track of whether we cleanly exited the except block. This [e2e-predictor] # ensures we do proper cleanup in finally. [e2e-predictor] clean_exit = False [e2e-predictor] [e2e-predictor] # Rewind body position, if needed. Record current position [e2e-predictor] # for future rewinds in the event of a redirect/retry. [e2e-predictor] body_pos = set_file_position(body, body_pos) [e2e-predictor] [e2e-predictor] try: [e2e-predictor] # Request a connection from the queue. [e2e-predictor] timeout_obj = self._get_timeout(timeout) [e2e-predictor] conn = self._get_conn(timeout=pool_timeout) [e2e-predictor] [e2e-predictor] conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] [e2e-predictor] [e2e-predictor] # Is this a closed/new connection that requires CONNECT tunnelling? [e2e-predictor] if self.proxy is not None and http_tunnel_required and conn.is_closed: [e2e-predictor] try: [e2e-predictor] self._prepare_proxy(conn) [e2e-predictor] except (BaseSSLError, OSError, SocketTimeout) as e: [e2e-predictor] self._raise_timeout( [e2e-predictor] err=e, url=self.proxy.url, timeout_value=conn.timeout [e2e-predictor] ) [e2e-predictor] raise [e2e-predictor] [e2e-predictor] # If we're going to release the connection in ``finally:``, then [e2e-predictor] # the response doesn't need to know about the connection. Otherwise [e2e-predictor] # it will also try to release it and we'll have a double-release [e2e-predictor] # mess. 
[e2e-predictor]             response_conn = conn if not release_conn else None
[e2e-predictor]
[e2e-predictor]             # Make the request on the HTTPConnection object
[e2e-predictor] >           response = self._make_request(
[e2e-predictor]                 conn,
[e2e-predictor]                 method,
[e2e-predictor]                 url,
[e2e-predictor]                 timeout=timeout_obj,
[e2e-predictor]                 body=body,
[e2e-predictor]                 headers=headers,
[e2e-predictor]                 chunked=chunked,
[e2e-predictor]                 retries=retries,
[e2e-predictor]                 response_conn=response_conn,
[e2e-predictor]                 preload_content=preload_content,
[e2e-predictor]                 decode_content=decode_content,
[e2e-predictor]                 **response_kw,
[e2e-predictor]             )
[e2e-predictor]
[e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:787:
[e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
[e2e-predictor]
[e2e-predictor] self =
[e2e-predictor] conn =
[e2e-predictor] method = 'DELETE'
[e2e-predictor] url = '/api/v1/namespaces/kserve-ci-e2e-test/secrets/fail-s3-secret-586023'
[e2e-predictor] body = None
[e2e-predictor] headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
[e2e-predictor] retries = Retry(total=0, connect=None, read=None, redirect=None, status=None)
[e2e-predictor] timeout = Timeout(connect=None, read=None, total=None), chunked = False
[e2e-predictor] response_conn = None, preload_content = True, decode_content = True
[e2e-predictor] enforce_content_length = True
[e2e-predictor]
[e2e-predictor]     def _make_request(
[e2e-predictor]         self,
[e2e-predictor]         conn: BaseHTTPConnection,
[e2e-predictor]         method: str,
[e2e-predictor]         url: str,
[e2e-predictor]         body: _TYPE_BODY | None = None,
[e2e-predictor]         headers: typing.Mapping[str, str] | None = None,
[e2e-predictor]         retries: Retry | None = None,
[e2e-predictor]         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
[e2e-predictor]         chunked: bool = False,
[e2e-predictor]         response_conn: BaseHTTPConnection | None = None,
[e2e-predictor] preload_content: bool = True, [e2e-predictor] decode_content: bool = True, [e2e-predictor] enforce_content_length: bool = True, [e2e-predictor] ) -> BaseHTTPResponse: [e2e-predictor] """ [e2e-predictor] Perform a request on a given urllib connection object taken from our [e2e-predictor] pool. [e2e-predictor] [e2e-predictor] :param conn: [e2e-predictor] a connection from one of our connection pools [e2e-predictor] [e2e-predictor] :param method: [e2e-predictor] HTTP request method (such as GET, POST, PUT, etc.) [e2e-predictor] [e2e-predictor] :param url: [e2e-predictor] The URL to perform the request on. [e2e-predictor] [e2e-predictor] :param body: [e2e-predictor] Data to send in the request body, either :class:`str`, :class:`bytes`, [e2e-predictor] an iterable of :class:`str`/:class:`bytes`, or a file-like object. [e2e-predictor] [e2e-predictor] :param headers: [e2e-predictor] Dictionary of custom headers to send, such as User-Agent, [e2e-predictor] If-None-Match, etc. If None, pool headers are used. If provided, [e2e-predictor] these headers completely replace any pool-specific headers. [e2e-predictor] [e2e-predictor] :param retries: [e2e-predictor] Configure the number of retries to allow before raising a [e2e-predictor] :class:`~urllib3.exceptions.MaxRetryError` exception. [e2e-predictor] [e2e-predictor] Pass ``None`` to retry until you receive a response. Pass a [e2e-predictor] :class:`~urllib3.util.retry.Retry` object for fine-grained control [e2e-predictor] over different types of retries. [e2e-predictor] Pass an integer number to retry connection errors that many times, [e2e-predictor] but no other types of errors. Pass zero to never retry. [e2e-predictor] [e2e-predictor] If ``False``, then retries are disabled and any exception is raised [e2e-predictor] immediately. Also, instead of raising a MaxRetryError on redirects, [e2e-predictor] the redirect response will be returned. 
[e2e-predictor] [e2e-predictor] :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. [e2e-predictor] [e2e-predictor] :param timeout: [e2e-predictor] If specified, overrides the default timeout for this one [e2e-predictor] request. It may be a float (in seconds) or an instance of [e2e-predictor] :class:`urllib3.util.Timeout`. [e2e-predictor] [e2e-predictor] :param chunked: [e2e-predictor] If True, urllib3 will send the body using chunked transfer [e2e-predictor] encoding. Otherwise, urllib3 will send the body using the standard [e2e-predictor] content-length form. Defaults to False. [e2e-predictor] [e2e-predictor] :param response_conn: [e2e-predictor] Set this to ``None`` if you will handle releasing the connection or [e2e-predictor] set the connection to have the response release it. [e2e-predictor] [e2e-predictor] :param preload_content: [e2e-predictor] If True, the response's body will be preloaded during construction. [e2e-predictor] [e2e-predictor] :param decode_content: [e2e-predictor] If True, will attempt to decode the body based on the [e2e-predictor] 'content-encoding' header. [e2e-predictor] [e2e-predictor] :param enforce_content_length: [e2e-predictor] Enforce content length checking. Body returned by server must match [e2e-predictor] value of Content-Length header, if present. Otherwise, raise error. [e2e-predictor] """ [e2e-predictor] self.num_requests += 1 [e2e-predictor] [e2e-predictor] timeout_obj = self._get_timeout(timeout) [e2e-predictor] timeout_obj.start_connect() [e2e-predictor] conn.timeout = Timeout.resolve_default_timeout(timeout_obj.connect_timeout) [e2e-predictor] [e2e-predictor] try: [e2e-predictor] # Trigger any extra validation we need to do. 
[e2e-predictor] try: [e2e-predictor] self._validate_conn(conn) [e2e-predictor] except (SocketTimeout, BaseSSLError) as e: [e2e-predictor] self._raise_timeout(err=e, url=url, timeout_value=conn.timeout) [e2e-predictor] raise [e2e-predictor] [e2e-predictor] # _validate_conn() starts the connection to an HTTPS proxy [e2e-predictor] # so we need to wrap errors with 'ProxyError' here too. [e2e-predictor] except ( [e2e-predictor] OSError, [e2e-predictor] NewConnectionError, [e2e-predictor] TimeoutError, [e2e-predictor] BaseSSLError, [e2e-predictor] CertificateError, [e2e-predictor] SSLError, [e2e-predictor] ) as e: [e2e-predictor] new_e: Exception = e [e2e-predictor] if isinstance(e, (BaseSSLError, CertificateError)): [e2e-predictor] new_e = SSLError(e) [e2e-predictor] # If the connection didn't successfully connect to it's proxy [e2e-predictor] # then there [e2e-predictor] if isinstance( [e2e-predictor] new_e, (OSError, NewConnectionError, TimeoutError, SSLError) [e2e-predictor] ) and (conn and conn.proxy and not conn.has_connected_to_proxy): [e2e-predictor] new_e = _wrap_proxy_error(new_e, conn.proxy.scheme) [e2e-predictor] > raise new_e [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:488: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] conn = [e2e-predictor] method = 'DELETE' [e2e-predictor] url = '/api/v1/namespaces/kserve-ci-e2e-test/secrets/fail-s3-secret-586023' [e2e-predictor] body = None [e2e-predictor] headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'} [e2e-predictor] retries = Retry(total=0, connect=None, read=None, redirect=None, status=None) [e2e-predictor] timeout = Timeout(connect=None, read=None, total=None), chunked = False [e2e-predictor] response_conn = None, preload_content = True, decode_content = True 
[e2e-predictor] enforce_content_length = True [e2e-predictor] [e2e-predictor] def _make_request( [e2e-predictor] self, [e2e-predictor] conn: BaseHTTPConnection, [e2e-predictor] method: str, [e2e-predictor] url: str, [e2e-predictor] body: _TYPE_BODY | None = None, [e2e-predictor] headers: typing.Mapping[str, str] | None = None, [e2e-predictor] retries: Retry | None = None, [e2e-predictor] timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, [e2e-predictor] chunked: bool = False, [e2e-predictor] response_conn: BaseHTTPConnection | None = None, [e2e-predictor] preload_content: bool = True, [e2e-predictor] decode_content: bool = True, [e2e-predictor] enforce_content_length: bool = True, [e2e-predictor] ) -> BaseHTTPResponse: [e2e-predictor] """ [e2e-predictor] Perform a request on a given urllib connection object taken from our [e2e-predictor] pool. [e2e-predictor] [e2e-predictor] :param conn: [e2e-predictor] a connection from one of our connection pools [e2e-predictor] [e2e-predictor] :param method: [e2e-predictor] HTTP request method (such as GET, POST, PUT, etc.) [e2e-predictor] [e2e-predictor] :param url: [e2e-predictor] The URL to perform the request on. [e2e-predictor] [e2e-predictor] :param body: [e2e-predictor] Data to send in the request body, either :class:`str`, :class:`bytes`, [e2e-predictor] an iterable of :class:`str`/:class:`bytes`, or a file-like object. [e2e-predictor] [e2e-predictor] :param headers: [e2e-predictor] Dictionary of custom headers to send, such as User-Agent, [e2e-predictor] If-None-Match, etc. If None, pool headers are used. If provided, [e2e-predictor] these headers completely replace any pool-specific headers. [e2e-predictor] [e2e-predictor] :param retries: [e2e-predictor] Configure the number of retries to allow before raising a [e2e-predictor] :class:`~urllib3.exceptions.MaxRetryError` exception. [e2e-predictor] [e2e-predictor] Pass ``None`` to retry until you receive a response. 
Pass a [e2e-predictor] :class:`~urllib3.util.retry.Retry` object for fine-grained control [e2e-predictor] over different types of retries. [e2e-predictor] Pass an integer number to retry connection errors that many times, [e2e-predictor] but no other types of errors. Pass zero to never retry. [e2e-predictor] [e2e-predictor] If ``False``, then retries are disabled and any exception is raised [e2e-predictor] immediately. Also, instead of raising a MaxRetryError on redirects, [e2e-predictor] the redirect response will be returned. [e2e-predictor] [e2e-predictor] :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. [e2e-predictor] [e2e-predictor] :param timeout: [e2e-predictor] If specified, overrides the default timeout for this one [e2e-predictor] request. It may be a float (in seconds) or an instance of [e2e-predictor] :class:`urllib3.util.Timeout`. [e2e-predictor] [e2e-predictor] :param chunked: [e2e-predictor] If True, urllib3 will send the body using chunked transfer [e2e-predictor] encoding. Otherwise, urllib3 will send the body using the standard [e2e-predictor] content-length form. Defaults to False. [e2e-predictor] [e2e-predictor] :param response_conn: [e2e-predictor] Set this to ``None`` if you will handle releasing the connection or [e2e-predictor] set the connection to have the response release it. [e2e-predictor] [e2e-predictor] :param preload_content: [e2e-predictor] If True, the response's body will be preloaded during construction. [e2e-predictor] [e2e-predictor] :param decode_content: [e2e-predictor] If True, will attempt to decode the body based on the [e2e-predictor] 'content-encoding' header. [e2e-predictor] [e2e-predictor] :param enforce_content_length: [e2e-predictor] Enforce content length checking. Body returned by server must match [e2e-predictor] value of Content-Length header, if present. Otherwise, raise error. 
[e2e-predictor] """ [e2e-predictor] self.num_requests += 1 [e2e-predictor] [e2e-predictor] timeout_obj = self._get_timeout(timeout) [e2e-predictor] timeout_obj.start_connect() [e2e-predictor] conn.timeout = Timeout.resolve_default_timeout(timeout_obj.connect_timeout) [e2e-predictor] [e2e-predictor] try: [e2e-predictor] # Trigger any extra validation we need to do. [e2e-predictor] try: [e2e-predictor] > self._validate_conn(conn) [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:464: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] conn = [e2e-predictor] [e2e-predictor] def _validate_conn(self, conn: BaseHTTPConnection) -> None: [e2e-predictor] """ [e2e-predictor] Called right before a request is made, after the socket is created. [e2e-predictor] """ [e2e-predictor] super()._validate_conn(conn) [e2e-predictor] [e2e-predictor] # Force connect early to allow us to validate the connection. [e2e-predictor] if conn.is_closed: [e2e-predictor] > conn.connect() [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:1093: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] [e2e-predictor] def connect(self) -> None: [e2e-predictor] # Today we don't need to be doing this step before the /actual/ socket [e2e-predictor] # connection, however in the future we'll need to decide whether to [e2e-predictor] # create a new socket or re-use an existing "shared" socket as a part [e2e-predictor] # of the HTTP/2 handshake dance. 
[e2e-predictor] if self._tunnel_host is not None and self._tunnel_port is not None: [e2e-predictor] probe_http2_host = self._tunnel_host [e2e-predictor] probe_http2_port = self._tunnel_port [e2e-predictor] else: [e2e-predictor] probe_http2_host = self.host [e2e-predictor] probe_http2_port = self.port [e2e-predictor] [e2e-predictor] # Check if the target origin supports HTTP/2. [e2e-predictor] # If the value comes back as 'None' it means that the current thread [e2e-predictor] # is probing for HTTP/2 support. Otherwise, we're waiting for another [e2e-predictor] # probe to complete, or we get a value right away. [e2e-predictor] target_supports_http2: bool | None [e2e-predictor] if "h2" in ssl_.ALPN_PROTOCOLS: [e2e-predictor] target_supports_http2 = http2_probe.acquire_and_get( [e2e-predictor] host=probe_http2_host, port=probe_http2_port [e2e-predictor] ) [e2e-predictor] else: [e2e-predictor] # If HTTP/2 isn't going to be offered it doesn't matter if [e2e-predictor] # the target supports HTTP/2. Don't want to make a probe. [e2e-predictor] target_supports_http2 = False [e2e-predictor] [e2e-predictor] if self._connect_callback is not None: [e2e-predictor] self._connect_callback( [e2e-predictor] "before connect", [e2e-predictor] thread_id=threading.get_ident(), [e2e-predictor] target_supports_http2=target_supports_http2, [e2e-predictor] ) [e2e-predictor] [e2e-predictor] try: [e2e-predictor] sock: socket.socket | ssl.SSLSocket [e2e-predictor] > self.sock = sock = self._new_conn() [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connection.py:759: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] [e2e-predictor] def _new_conn(self) -> socket.socket: [e2e-predictor] """Establish a socket connection and set nodelay settings on it. [e2e-predictor] [e2e-predictor] :return: New socket connection. 
[e2e-predictor] """ [e2e-predictor] try: [e2e-predictor] sock = connection.create_connection( [e2e-predictor] (self._dns_host, self.port), [e2e-predictor] self.timeout, [e2e-predictor] source_address=self.source_address, [e2e-predictor] socket_options=self.socket_options, [e2e-predictor] ) [e2e-predictor] except socket.gaierror as e: [e2e-predictor] > raise NameResolutionError(self.host, self, e) from e [e2e-predictor] E urllib3.exceptions.NameResolutionError: HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known) [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connection.py:211: NameResolutionError [e2e-predictor] [e2e-predictor] The above exception was the direct cause of the following exception: [e2e-predictor] [e2e-predictor] @pytest.mark.predictor [e2e-predictor] @pytest.mark.raw [e2e-predictor] @pytest.mark.asyncio(scope="session") [e2e-predictor] async def test_quick_reconciliation_on_init_container_failure(): [e2e-predictor] """ [e2e-predictor] Test that when an init container fails (e.g., invalid storage credentials), [e2e-predictor] the owning InferenceService quickly reconciles and reflects the failure in its status. [e2e-predictor] [e2e-predictor] This test: [e2e-predictor] 1. Creates an ISVC with invalid S3 credentials [e2e-predictor] 2. Monitors the ISVC status for failure detection [e2e-predictor] 3. Validates that failure status is populated within a reasonable timeframe [e2e-predictor] 4. 
Verifies the failure message contains relevant error information [e2e-predictor] """ [e2e-predictor] suffix = str(uuid.uuid4())[:6] [e2e-predictor] isvc_name = f"isvc-init-fail-{suffix}" [e2e-predictor] invalid_sa_name = f"fail-s3-sa-{suffix}" [e2e-predictor] invalid_secret_name = f"fail-s3-secret-{suffix}" [e2e-predictor] [e2e-predictor] kserve_client = KServeClient( [e2e-predictor] config_file=os.environ.get("KUBECONFIG", "~/.kube/config") [e2e-predictor] ) [e2e-predictor] [e2e-predictor] try: [e2e-predictor] # Create invalid S3 credentials [e2e-predictor] logger.info("Creating invalid S3 secret and service account") [e2e-predictor] > create_invalid_s3_secret(KSERVE_TEST_NAMESPACE, invalid_secret_name) [e2e-predictor] [e2e-predictor] predictor/test_pod_watch.py:447: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] namespace = 'kserve-ci-e2e-test', secret_name = 'fail-s3-secret-586023' [e2e-predictor] [e2e-predictor] def create_invalid_s3_secret(namespace: str, secret_name: str): [e2e-predictor] core_api = client.CoreV1Api() [e2e-predictor] secret = client.V1Secret( [e2e-predictor] api_version="v1", [e2e-predictor] kind="Secret", [e2e-predictor] metadata=client.V1ObjectMeta( [e2e-predictor] name=secret_name, [e2e-predictor] namespace=namespace, [e2e-predictor] annotations={ [e2e-predictor] "serving.kserve.io/s3-endpoint": "s3.amazonaws.com", [e2e-predictor] "serving.kserve.io/s3-region": "us-east-1", [e2e-predictor] "serving.kserve.io/s3-usehttps": "1", [e2e-predictor] "serving.kserve.io/s3-verifyssl": "1", [e2e-predictor] }, [e2e-predictor] ), [e2e-predictor] type="Opaque", [e2e-predictor] string_data={ [e2e-predictor] "AWS_ACCESS_KEY_ID": "INVALID_ACCESS_KEY_ID_12345", [e2e-predictor] "AWS_SECRET_ACCESS_KEY": "INVALID_SECRET_ACCESS_KEY_67890", [e2e-predictor] }, [e2e-predictor] ) [e2e-predictor] [e2e-predictor] try: [e2e-predictor] > core_api.delete_namespaced_secret(secret_name, 
namespace) [e2e-predictor] [e2e-predictor] predictor/test_pod_watch.py:200: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] name = 'fail-s3-secret-586023', namespace = 'kserve-ci-e2e-test' [e2e-predictor] kwargs = {'_return_http_data_only': True} [e2e-predictor] [e2e-predictor] def delete_namespaced_secret(self, name, namespace, **kwargs): # noqa: E501 [e2e-predictor] """delete_namespaced_secret # noqa: E501 [e2e-predictor] [e2e-predictor] delete a Secret # noqa: E501 [e2e-predictor] This method makes a synchronous HTTP request by default. To make an [e2e-predictor] asynchronous HTTP request, please pass async_req=True [e2e-predictor] >>> thread = api.delete_namespaced_secret(name, namespace, async_req=True) [e2e-predictor] >>> result = thread.get() [e2e-predictor] [e2e-predictor] :param async_req bool: execute request asynchronously [e2e-predictor] :param str name: name of the Secret (required) [e2e-predictor] :param str namespace: object name and auth scope, such as for teams and projects (required) [e2e-predictor] :param str pretty: If 'true', then the output is pretty printed. Defaults to 'false' unless the user-agent indicates a browser or command-line HTTP tool (curl and wget). [e2e-predictor] :param str dry_run: When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed [e2e-predictor] :param int grace_period_seconds: The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. 
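The root cause above is not the test logic at all: DNS resolution of the cluster API endpoint fails inside the test pod ([Errno -2] Name or service not known), so even the cleanup `delete_namespaced_secret` call never reaches the server. A pre-flight resolvability check could fail fast with a clear message before pytest descends into library internals. A minimal stdlib-only sketch (the retry counts are illustrative, not from the CI config):

```python
import socket
import time

def wait_for_dns(host: str, attempts: int = 3, delay: float = 2.0) -> bool:
    """Return True once `host` resolves, retrying a few times.

    ELB hostnames can lag behind cluster provisioning, so a short retry
    loop distinguishes "not resolvable yet" from "never resolvable".
    """
    for i in range(attempts):
        try:
            socket.getaddrinfo(host, None)
            return True
        except socket.gaierror:
            if i < attempts - 1:
                time.sleep(delay)
    return False
```

Wired into a session-scoped fixture, this would turn the nested `NameResolutionError` above into a single skip or explicit failure naming the unresolvable API server host.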
        :param bool ignore_store_read_error_with_cluster_breaking_potential: if set to true, it will trigger an unsafe deletion of the resource in case the normal deletion flow fails with a corrupt object error. A resource is considered corrupt if it can not be retrieved from the underlying storage successfully because of a) its data can not be transformed e.g. decryption failure, or b) it fails to decode into an object. NOTE: unsafe deletion ignores finalizer constraints, skips precondition checks, and removes the object from the storage. WARNING: This may potentially break the cluster if the workload associated with the resource being unsafe-deleted relies on normal deletion flow. Use only if you REALLY know what you are doing. The default value is false, and the user must opt in to enable it
        :param bool orphan_dependents: Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both.
        :param str propagation_policy: Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground.
        :param V1DeleteOptions body:
        :param _preload_content: if False, the urllib3.HTTPResponse object will
                                 be returned without reading/decoding response
                                 data. Default is True.
        :param _request_timeout: timeout setting for this request. If one
                                 number provided, it will be total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
        :return: V1Status
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
>       return self.delete_namespaced_secret_with_http_info(name, namespace, **kwargs)  # noqa: E501

../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api/core_v1_api.py:13283:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
name = 'fail-s3-secret-586023', namespace = 'kserve-ci-e2e-test'
kwargs = {'_return_http_data_only': True}
local_var_params = {'_return_http_data_only': True, 'all_params': ['name', 'namespace', 'pretty', 'dry_run', 'grace_period_seconds', 'ignore_store_read_error_with_cluster_breaking_potential', ...], 'auth_settings': ['BearerToken'], 'body_params': None, ...}
all_params = ['name', 'namespace', 'pretty', 'dry_run', 'grace_period_seconds', 'ignore_store_read_error_with_cluster_breaking_potential', ...]
key = '_return_http_data_only', val = True, collection_formats = {}
path_params = {'name': 'fail-s3-secret-586023', 'namespace': 'kserve-ci-e2e-test'}
query_params = []

    def delete_namespaced_secret_with_http_info(self, name, namespace, **kwargs):  # noqa: E501
        """delete_namespaced_secret  # noqa: E501

        delete a Secret  # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.delete_namespaced_secret_with_http_info(name, namespace, async_req=True)
        >>> result = thread.get()

        :param async_req bool: execute request asynchronously
        :param str name: name of the Secret (required)
        :param str namespace: object name and auth scope, such as for teams and projects (required)
        :param str pretty: If 'true', then the output is pretty printed. Defaults to 'false' unless the user-agent indicates a browser or command-line HTTP tool (curl and wget).
        :param str dry_run: When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed
        :param int grace_period_seconds: The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately.
        :param bool ignore_store_read_error_with_cluster_breaking_potential: if set to true, it will trigger an unsafe deletion of the resource in case the normal deletion flow fails with a corrupt object error. A resource is considered corrupt if it can not be retrieved from the underlying storage successfully because of a) its data can not be transformed e.g. decryption failure, or b) it fails to decode into an object. NOTE: unsafe deletion ignores finalizer constraints, skips precondition checks, and removes the object from the storage. WARNING: This may potentially break the cluster if the workload associated with the resource being unsafe-deleted relies on normal deletion flow. Use only if you REALLY know what you are doing. The default value is false, and the user must opt in to enable it
        :param bool orphan_dependents: Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both.
        :param str propagation_policy: Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground.
        :param V1DeleteOptions body:
        :param _return_http_data_only: response data without head status code
                                       and headers
        :param _preload_content: if False, the urllib3.HTTPResponse object will
                                 be returned without reading/decoding response
                                 data. Default is True.
        :param _request_timeout: timeout setting for this request. If one
                                 number provided, it will be total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
        :return: tuple(V1Status, status_code(int), headers(HTTPHeaderDict))
                 If the method is called asynchronously,
                 returns the request thread.
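The test helper above deletes any leftover secret before creating a fresh one. That pre-delete is exactly the call failing in this trace, and it would also raise if the secret simply did not exist. A common hardening is to swallow only the 404 ("already gone") case and let every other error propagate. A stdlib-only sketch, using a stand-in class for `kubernetes.client.rest.ApiException` (an assumption here: the real exception carries the HTTP status in `.status` the same way):

```python
class ApiException(Exception):
    """Stand-in for kubernetes.client.rest.ApiException (hypothetical minimal shape)."""
    def __init__(self, status: int, reason: str = ""):
        super().__init__(f"({status}) {reason}")
        self.status = status

def delete_if_exists(delete_fn) -> bool:
    """Run a zero-arg delete callable idempotently.

    Returns True if the delete succeeded, False if the object was already
    absent (HTTP 404). Any other ApiException (403, 409, 5xx, ...) is
    re-raised so real failures still surface in the test log.
    """
    try:
        delete_fn()
        return True
    except ApiException as e:
        if e.status == 404:
            return False  # already absent: fine for a cleanup step
        raise
```

With the real client this would wrap `lambda: core_api.delete_namespaced_secret(secret_name, namespace)`; it would not help with the DNS failure seen here, but it makes the cleanup step idempotent across reruns.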
[e2e-predictor] """ [e2e-predictor] [e2e-predictor] local_var_params = locals() [e2e-predictor] [e2e-predictor] all_params = [ [e2e-predictor] 'name', [e2e-predictor] 'namespace', [e2e-predictor] 'pretty', [e2e-predictor] 'dry_run', [e2e-predictor] 'grace_period_seconds', [e2e-predictor] 'ignore_store_read_error_with_cluster_breaking_potential', [e2e-predictor] 'orphan_dependents', [e2e-predictor] 'propagation_policy', [e2e-predictor] 'body' [e2e-predictor] ] [e2e-predictor] all_params.extend( [e2e-predictor] [ [e2e-predictor] 'async_req', [e2e-predictor] '_return_http_data_only', [e2e-predictor] '_preload_content', [e2e-predictor] '_request_timeout' [e2e-predictor] ] [e2e-predictor] ) [e2e-predictor] [e2e-predictor] for key, val in six.iteritems(local_var_params['kwargs']): [e2e-predictor] if key not in all_params: [e2e-predictor] raise ApiTypeError( [e2e-predictor] "Got an unexpected keyword argument '%s'" [e2e-predictor] " to method delete_namespaced_secret" % key [e2e-predictor] ) [e2e-predictor] local_var_params[key] = val [e2e-predictor] del local_var_params['kwargs'] [e2e-predictor] # verify the required parameter 'name' is set [e2e-predictor] if self.api_client.client_side_validation and ('name' not in local_var_params or # noqa: E501 [e2e-predictor] local_var_params['name'] is None): # noqa: E501 [e2e-predictor] raise ApiValueError("Missing the required parameter `name` when calling `delete_namespaced_secret`") # noqa: E501 [e2e-predictor] # verify the required parameter 'namespace' is set [e2e-predictor] if self.api_client.client_side_validation and ('namespace' not in local_var_params or # noqa: E501 [e2e-predictor] local_var_params['namespace'] is None): # noqa: E501 [e2e-predictor] raise ApiValueError("Missing the required parameter `namespace` when calling `delete_namespaced_secret`") # noqa: E501 [e2e-predictor] [e2e-predictor] collection_formats = {} [e2e-predictor] [e2e-predictor] path_params = {} [e2e-predictor] if 'name' in local_var_params: 
[e2e-predictor] path_params['name'] = local_var_params['name'] # noqa: E501 [e2e-predictor] if 'namespace' in local_var_params: [e2e-predictor] path_params['namespace'] = local_var_params['namespace'] # noqa: E501 [e2e-predictor] [e2e-predictor] query_params = [] [e2e-predictor] if 'pretty' in local_var_params and local_var_params['pretty'] is not None: # noqa: E501 [e2e-predictor] query_params.append(('pretty', local_var_params['pretty'])) # noqa: E501 [e2e-predictor] if 'dry_run' in local_var_params and local_var_params['dry_run'] is not None: # noqa: E501 [e2e-predictor] query_params.append(('dryRun', local_var_params['dry_run'])) # noqa: E501 [e2e-predictor] if 'grace_period_seconds' in local_var_params and local_var_params['grace_period_seconds'] is not None: # noqa: E501 [e2e-predictor] query_params.append(('gracePeriodSeconds', local_var_params['grace_period_seconds'])) # noqa: E501 [e2e-predictor] if 'ignore_store_read_error_with_cluster_breaking_potential' in local_var_params and local_var_params['ignore_store_read_error_with_cluster_breaking_potential'] is not None: # noqa: E501 [e2e-predictor] query_params.append(('ignoreStoreReadErrorWithClusterBreakingPotential', local_var_params['ignore_store_read_error_with_cluster_breaking_potential'])) # noqa: E501 [e2e-predictor] if 'orphan_dependents' in local_var_params and local_var_params['orphan_dependents'] is not None: # noqa: E501 [e2e-predictor] query_params.append(('orphanDependents', local_var_params['orphan_dependents'])) # noqa: E501 [e2e-predictor] if 'propagation_policy' in local_var_params and local_var_params['propagation_policy'] is not None: # noqa: E501 [e2e-predictor] query_params.append(('propagationPolicy', local_var_params['propagation_policy'])) # noqa: E501 [e2e-predictor] [e2e-predictor] header_params = {} [e2e-predictor] [e2e-predictor] form_params = [] [e2e-predictor] local_var_files = {} [e2e-predictor] [e2e-predictor] body_params = None [e2e-predictor] if 'body' in local_var_params: 
[e2e-predictor] body_params = local_var_params['body'] [e2e-predictor] # HTTP header `Accept` [e2e-predictor] header_params['Accept'] = self.api_client.select_header_accept( [e2e-predictor] ['application/json', 'application/yaml', 'application/vnd.kubernetes.protobuf', 'application/cbor']) # noqa: E501 [e2e-predictor] [e2e-predictor] # Authentication setting [e2e-predictor] auth_settings = ['BearerToken'] # noqa: E501 [e2e-predictor] [e2e-predictor] > return self.api_client.call_api( [e2e-predictor] '/api/v1/namespaces/{namespace}/secrets/{name}', 'DELETE', [e2e-predictor] path_params, [e2e-predictor] query_params, [e2e-predictor] header_params, [e2e-predictor] body=body_params, [e2e-predictor] post_params=form_params, [e2e-predictor] files=local_var_files, [e2e-predictor] response_type='V1Status', # noqa: E501 [e2e-predictor] auth_settings=auth_settings, [e2e-predictor] async_req=local_var_params.get('async_req'), [e2e-predictor] _return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501 [e2e-predictor] _preload_content=local_var_params.get('_preload_content', True), [e2e-predictor] _request_timeout=local_var_params.get('_request_timeout'), [e2e-predictor] collection_formats=collection_formats) [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api/core_v1_api.py:13394: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] resource_path = '/api/v1/namespaces/{namespace}/secrets/{name}' [e2e-predictor] method = 'DELETE' [e2e-predictor] path_params = {'name': 'fail-s3-secret-586023', 'namespace': 'kserve-ci-e2e-test'} [e2e-predictor] query_params = [] [e2e-predictor] header_params = {'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'} [e2e-predictor] body = None, post_params = [], files = {}, response_type = 'V1Status' [e2e-predictor] auth_settings = 
['BearerToken'], async_req = None, _return_http_data_only = True [e2e-predictor] collection_formats = {}, _preload_content = True, _request_timeout = None [e2e-predictor] _host = None [e2e-predictor] [e2e-predictor] def call_api(self, resource_path, method, [e2e-predictor] path_params=None, query_params=None, header_params=None, [e2e-predictor] body=None, post_params=None, files=None, [e2e-predictor] response_type=None, auth_settings=None, async_req=None, [e2e-predictor] _return_http_data_only=None, collection_formats=None, [e2e-predictor] _preload_content=True, _request_timeout=None, _host=None): [e2e-predictor] """Makes the HTTP request (synchronous) and returns deserialized data. [e2e-predictor] [e2e-predictor] To make an async_req request, set the async_req parameter. [e2e-predictor] [e2e-predictor] :param resource_path: Path to method endpoint. [e2e-predictor] :param method: Method to call. [e2e-predictor] :param path_params: Path parameters in the url. [e2e-predictor] :param query_params: Query parameters in the url. [e2e-predictor] :param header_params: Header parameters to be [e2e-predictor] placed in the request header. [e2e-predictor] :param body: Request body. [e2e-predictor] :param post_params dict: Request post form parameters, [e2e-predictor] for `application/x-www-form-urlencoded`, `multipart/form-data`. [e2e-predictor] :param auth_settings list: Auth Settings names for the request. [e2e-predictor] :param response: Response data type. [e2e-predictor] :param files dict: key -> filename, value -> filepath, [e2e-predictor] for `multipart/form-data`. [e2e-predictor] :param async_req bool: execute request asynchronously [e2e-predictor] :param _return_http_data_only: response data without head status code [e2e-predictor] and headers [e2e-predictor] :param collection_formats: dict of collection formats for path, query, [e2e-predictor] header, and post parameters. 
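The generated `_with_http_info` method above maps each snake_case kwarg to a camelCase Kubernetes query parameter (`dry_run` → `dryRun`, `grace_period_seconds` → `gracePeriodSeconds`, `propagation_policy` → `propagationPolicy`), appending only values that are not None. That filtering is why `query_params` is `[]` in this trace: no delete options were passed. The same mapping can be sketched generically (a simplification of the generated per-parameter `if` chain, not the client's actual helper):

```python
def to_camel(name: str) -> str:
    """Convert a snake_case kwarg name to its camelCase query-param name."""
    head, *rest = name.split("_")
    return head + "".join(word.capitalize() for word in rest)

def build_query_params(**kwargs):
    """Mirror the generated client's behavior: drop None values and emit
    (camelCaseName, value) tuples in call order."""
    return [(to_camel(k), v) for k, v in kwargs.items() if v is not None]
```

For example, `build_query_params(dry_run=None, propagation_policy="Background")` yields only the `propagationPolicy` pair, matching what the generated code would append.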
        :param _preload_content: if False, the urllib3.HTTPResponse object will
                                 be returned without reading/decoding response
                                 data. Default is True.
        :param _request_timeout: timeout setting for this request. If one
                                 number provided, it will be total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
        :return:
            If async_req parameter is True,
            the request will be called asynchronously.
            The method will return the request thread.
            If parameter async_req is False or missing,
            then the method will return the response directly.
        """
        if not async_req:
>           return self.__call_api(resource_path, method,
                                   path_params, query_params, header_params,
                                   body, post_params, files,
                                   response_type, auth_settings,
                                   _return_http_data_only, collection_formats,
                                   _preload_content, _request_timeout, _host)

../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api_client.py:348:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
resource_path = '/api/v1/namespaces/kserve-ci-e2e-test/secrets/fail-s3-secret-586023'
method = 'DELETE'
path_params = [('name', 'fail-s3-secret-586023'), ('namespace', 'kserve-ci-e2e-test')]
query_params = []
header_params = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
body = None, post_params = [], files = {}, response_type = 'V1Status'
auth_settings = ['BearerToken'], _return_http_data_only = True
collection_formats = {}, _preload_content = True, _request_timeout = None
_host = None

    def __call_api(
            self, resource_path, method, path_params=None,
            query_params=None, header_params=None, body=None, post_params=None,
            files=None, response_type=None, auth_settings=None,
            _return_http_data_only=None, collection_formats=None,
            _preload_content=True, _request_timeout=None, _host=None):

        config = self.configuration

        # header parameters
        header_params = header_params or {}
        header_params.update(self.default_headers)
        if self.cookie:
            header_params['Cookie'] = self.cookie
        if header_params:
            header_params = self.sanitize_for_serialization(header_params)
            header_params = dict(self.parameters_to_tuples(header_params,
                                                           collection_formats))

        # path parameters
        if path_params:
            path_params = self.sanitize_for_serialization(path_params)
            path_params = self.parameters_to_tuples(path_params,
                                                    collection_formats)
            for k, v in path_params:
                # specified safe chars, encode everything
                resource_path = resource_path.replace(
                    '{%s}' % k,
                    quote(str(v), safe=config.safe_chars_for_path_param)
                )

        # query parameters
        if query_params:
            query_params = self.sanitize_for_serialization(query_params)
            query_params = self.parameters_to_tuples(query_params,
                                                     collection_formats)

        # post parameters
        if post_params or files:
            post_params = post_params if post_params else []
            post_params = self.sanitize_for_serialization(post_params)
            post_params = self.parameters_to_tuples(post_params,
                                                    collection_formats)
            post_params.extend(self.files_parameters(files))

        # auth setting
        self.update_params_for_auth(header_params, query_params, auth_settings)

        # body
        if body:
            body = self.sanitize_for_serialization(body)

        # request url
        if _host is None:
            url = self.configuration.host + resource_path
        else:
            # use server/host defined in path or operation instead
            url = _host + resource_path

        # perform request and return response
>       response_data = self.request(
            method, url, query_params=query_params, headers=header_params,
            post_params=post_params, body=body,
            _preload_content=_preload_content,
            _request_timeout=_request_timeout)

../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api_client.py:180:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
method = 'DELETE'
url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/api/v1/namespaces/kserve-ci-e2e-test/secrets/fail-s3-secret-586023'
query_params = []
headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
post_params = [], body = None, _preload_content = True, _request_timeout = None

    def request(self, method, url, query_params=None, headers=None,
                post_params=None, body=None, _preload_content=True,
                _request_timeout=None):
        """Makes the HTTP request using RESTClient."""
        if method == "GET":
            return self.rest_client.GET(url,
                                        query_params=query_params,
                                        _preload_content=_preload_content,
                                        _request_timeout=_request_timeout,
                                        headers=headers)
        elif method == "HEAD":
            return self.rest_client.HEAD(url,
                                         query_params=query_params,
                                         _preload_content=_preload_content,
                                         _request_timeout=_request_timeout,
                                         headers=headers)
        elif method == "OPTIONS":
            return self.rest_client.OPTIONS(url,
                                            query_params=query_params,
                                            headers=headers,
                                            _preload_content=_preload_content,
                                            _request_timeout=_request_timeout)
        elif method == "POST":
            return self.rest_client.POST(url,
                                         query_params=query_params,
                                         headers=headers,
                                         post_params=post_params,
                                         _preload_content=_preload_content,
                                         _request_timeout=_request_timeout,
                                         body=body)
        elif method == "PUT":
            return self.rest_client.PUT(url,
                                        query_params=query_params,
                                        headers=headers,
                                        post_params=post_params,
                                        _preload_content=_preload_content,
                                        _request_timeout=_request_timeout,
                                        body=body)
        elif method == "PATCH":
            return self.rest_client.PATCH(url,
                                          query_params=query_params,
                                          headers=headers,
                                          post_params=post_params,
                                          _preload_content=_preload_content,
                                          _request_timeout=_request_timeout,
                                          body=body)
        elif method == "DELETE":
>           return self.rest_client.DELETE(url,
                                           query_params=query_params,
                                           headers=headers,
                                           _preload_content=_preload_content,
                                           _request_timeout=_request_timeout,
                                           body=body)

../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api_client.py:415:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/api/v1/namespaces/kserve-ci-e2e-test/secrets/fail-s3-secret-586023'
headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
query_params = [], body = None, _preload_content = True, _request_timeout = None

    def DELETE(self, url, headers=None, query_params=None, body=None,
               _preload_content=True, _request_timeout=None):
>       return self.request("DELETE", url,
                            headers=headers,
                            query_params=query_params,
                            _preload_content=_preload_content,
                            _request_timeout=_request_timeout,
                            body=body)

../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/rest.py:270:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
method = 'DELETE'
url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/api/v1/namespaces/kserve-ci-e2e-test/secrets/fail-s3-secret-586023'
query_params = []
headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
body = None, post_params = {}, _preload_content = True, _request_timeout = None

    def request(self, method, url, query_params=None, headers=None,
                body=None, post_params=None, _preload_content=True,
                _request_timeout=None):
        """Perform requests.

        :param method: http request method
        :param url: http request url
        :param query_params: query parameters in the url
        :param headers: http request headers
        :param body: request json body, for `application/json`
        :param post_params: request post parameters,
                            `application/x-www-form-urlencoded`
                            and `multipart/form-data`
        :param _preload_content: if False, the urllib3.HTTPResponse object will
                                 be returned without reading/decoding response
                                 data. Default is True.
        :param _request_timeout: timeout setting for this request. If one
                                 number provided, it will be total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
        """
        method = method.upper()
        assert method in ['GET', 'HEAD', 'DELETE', 'POST', 'PUT',
                          'PATCH', 'OPTIONS']

        if post_params and body:
            raise ApiValueError(
                "body parameter cannot be used with post_params parameter."
            )

        post_params = post_params or {}
        headers = headers or {}

        timeout = None
        if _request_timeout:
            if isinstance(_request_timeout, (int, ) if six.PY3 else (int, long)):  # noqa: E501,F821
                timeout = urllib3.Timeout(total=_request_timeout)
            elif (isinstance(_request_timeout, tuple) and
                  len(_request_timeout) == 2):
                timeout = urllib3.Timeout(
                    connect=_request_timeout[0], read=_request_timeout[1])

        if 'Content-Type' not in headers:
            headers['Content-Type'] = 'application/json'

        try:
            # For `POST`, `PUT`, `PATCH`, `OPTIONS`, `DELETE`
            if method in ['POST', 'PUT', 'PATCH', 'OPTIONS', 'DELETE']:
                if query_params:
                    url += '?' + urlencode(query_params)
                if (re.search('json', headers['Content-Type'], re.IGNORECASE) or
                        headers['Content-Type'] == 'application/apply-patch+yaml'):
                    if headers['Content-Type'] == 'application/json-patch+json':
                        if not isinstance(body, list):
                            headers['Content-Type'] = \
                                'application/strategic-merge-patch+json'
                    request_body = None
                    if body is not None:
                        request_body = json.dumps(body)
>                   r = self.pool_manager.request(
                        method, url,
                        body=request_body,
                        preload_content=_preload_content,
                        timeout=timeout,

../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/rest.py:172:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
method = 'DELETE'
url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/api/v1/namespaces/kserve-ci-e2e-test/secrets/fail-s3-secret-586023'
body = None, fields = None
headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
json = None, urlopen_kw = {'preload_content': True, 'timeout': None}

    def request(
        self,
        method: str,
        url: str,
        body: _TYPE_BODY | None = None,
        fields: _TYPE_FIELDS | None = None,
        headers: typing.Mapping[str, str] | None = None,
        json: typing.Any | None = None,
        **urlopen_kw: typing.Any,
    ) -> BaseHTTPResponse:
        """
        Make a request using :meth:`urlopen` with the appropriate encoding of
        ``fields`` based on the ``method`` used.

        This is a convenience method that requires the least amount of manual
        effort. It can be used in most situations, while still having the
        option to drop down to more specific methods when necessary, such as
        :meth:`request_encode_url`, :meth:`request_encode_body`,
        or even the lowest level :meth:`urlopen`.

        :param method:
            HTTP request method (such as GET, POST, PUT, etc.)

        :param url:
            The URL to perform the request on.

        :param body:
            Data to send in the request body, either :class:`str`, :class:`bytes`,
            an iterable of :class:`str`/:class:`bytes`, or a file-like object.

        :param fields:
            Data to encode and send in the URL or request body, depending on ``method``.
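The `rest.py` frame above normalizes `_request_timeout` before handing it to urllib3: a bare number becomes a total timeout, a 2-tuple becomes separate connect/read timeouts. The logic can be sketched without urllib3 (a plain dict stands in for `urllib3.Timeout` here; note the original code's `isinstance(..., (int, ))` check means a float total would silently fall through):

```python
def normalize_timeout(value):
    """Mirror rest.py's handling of _request_timeout.

    None -> no timeout; a number -> a single total timeout;
    a (connect, read) pair -> two independent timeouts.
    """
    if value is None:
        return None
    if isinstance(value, (int, float)):          # slightly broader than rest.py, which checks int only
        return {"total": value}
    if isinstance(value, tuple) and len(value) == 2:
        return {"connect": value[0], "read": value[1]}
    raise ValueError("timeout must be a number or a (connect, read) pair")
```

In this trace `_request_timeout` was None throughout, so the DELETE would have waited on the default socket timeout had DNS resolution succeeded.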
        :param headers:
            Dictionary of custom headers to send, such as User-Agent,
            If-None-Match, etc. If None, pool headers are used. If provided,
            these headers completely replace any pool-specific headers.

        :param json:
            Data to encode and send as JSON with UTF-encoded in the request body.
            The ``"Content-Type"`` header will be set to ``"application/json"``
            unless specified otherwise.
        """
        method = method.upper()

        if json is not None and body is not None:
            raise TypeError(
                "request got values for both 'body' and 'json' parameters which are mutually exclusive"
            )

        if json is not None:
            if headers is None:
                headers = self.headers

            if not ("content-type" in map(str.lower, headers.keys())):
                headers = HTTPHeaderDict(headers)
                headers["Content-Type"] = "application/json"

            body = _json.dumps(json, separators=(",", ":"), ensure_ascii=False).encode(
                "utf-8"
            )

        if body is not None:
            urlopen_kw["body"] = body

        if method in self._encode_url_methods:
>           return self.request_encode_url(
                method,
                url,
                fields=fields,  # type: ignore[arg-type]
                headers=headers,
                **urlopen_kw,
            )

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/_request_methods.py:135:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
method = 'DELETE'
url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/api/v1/namespaces/kserve-ci-e2e-test/secrets/fail-s3-secret-586023'
fields = None
headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
urlopen_kw = {'preload_content': True, 'timeout': None}
extra_kw = {'headers': {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}, 'preload_content': True, 'timeout': None}

    def request_encode_url(
        self,
        method: str,
        url: str,
        fields: _TYPE_ENCODE_URL_FIELDS | None = None,
        headers: typing.Mapping[str, str] | None = None,
        **urlopen_kw: str,
    ) -> BaseHTTPResponse:
        """
        Make a request using :meth:`urlopen` with the ``fields`` encoded in
        the url. This is useful for request methods like GET, HEAD, DELETE, etc.

        :param method:
            HTTP request method (such as GET, POST, PUT, etc.)

        :param url:
            The URL to perform the request on.

        :param fields:
            Data to encode and send in the URL.

        :param headers:
            Dictionary of custom headers to send, such as User-Agent,
            If-None-Match, etc. If None, pool headers are used. If provided,
            these headers completely replace any pool-specific headers.
        """
        if headers is None:
            headers = self.headers

        extra_kw: dict[str, typing.Any] = {"headers": headers}
        extra_kw.update(urlopen_kw)

        if fields:
            url += "?" + urlencode(fields)

>       return self.urlopen(method, url, **extra_kw)

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/_request_methods.py:182:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
method = 'DELETE'
url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/api/v1/namespaces/kserve-ci-e2e-test/secrets/fail-s3-secret-586023'
redirect = True
kw = {'assert_same_host': False, 'headers': {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}, 'preload_content': True, 'redirect': False, ...}
u = Url(scheme='https', auth=None, host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443, path='/api/v1/namespaces/kserve-ci-e2e-test/secrets/fail-s3-secret-586023', query=None, fragment=None)
conn =

    def urlopen(  # type: ignore[override]
        self, method: str, url: str, redirect: bool = True, **kw: typing.Any
    ) -> BaseHTTPResponse:
        """
        Same as :meth:`urllib3.HTTPConnectionPool.urlopen`
        with custom cross-host redirect logic and only sends the request-uri
        portion of the ``url``.

        The given ``url`` parameter must be absolute, such that an appropriate
        :class:`urllib3.connectionpool.ConnectionPool` can be chosen for it.
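Annotation: the `u = Url(...)` value in the frame locals is urllib3's parse of the absolute API-server URL, from which the pool manager picks a connection pool (scheme/host/port) and passes only the request-uri (path) on. A rough stdlib sketch of the same split, using `urllib.parse.urlsplit` rather than urllib3's internal `parse_url`:

```python
from urllib.parse import urlsplit

# The same API-server URL that appears in the frame locals.
url = ("https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc"
       ".elb.us-east-1.amazonaws.com:6443"
       "/api/v1/namespaces/kserve-ci-e2e-test/secrets/fail-s3-secret-586023")

u = urlsplit(url)
print(u.scheme)  # https            -> chooses HTTPSConnectionPool
print(u.port)    # 6443             -> pool key includes the port
print(u.path)    # /api/v1/...      -> the request-uri sent on the connection
```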
        """
        u = parse_url(url)

        if u.scheme is None:
            warnings.warn(
                "URLs without a scheme (ie 'https://') are deprecated and will raise an error "
                "in a future version of urllib3. To avoid this DeprecationWarning ensure all URLs "
                "start with 'https://' or 'http://'. Read more in this issue: "
                "https://github.com/urllib3/urllib3/issues/2920",
                category=DeprecationWarning,
                stacklevel=2,
            )

        conn = self.connection_from_host(u.host, port=u.port, scheme=u.scheme)

        kw["assert_same_host"] = False
        kw["redirect"] = False

        if "headers" not in kw:
            kw["headers"] = self.headers

        if self._proxy_requires_url_absolute_form(u):
            response = conn.urlopen(method, url, **kw)
        else:
>           response = conn.urlopen(method, u.request_uri, **kw)

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/poolmanager.py:457:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
method = 'DELETE'
url = '/api/v1/namespaces/kserve-ci-e2e-test/secrets/fail-s3-secret-586023'
body = None
headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
retries = Retry(total=2, connect=None, read=None, redirect=None, status=None)
redirect = False, assert_same_host = False, timeout = None, pool_timeout = None
release_conn = True, chunked = False, body_pos = None, preload_content = True
decode_content = True, response_kw = {}
parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/api/v1/namespaces/kserve-ci-e2e-test/secrets/fail-s3-secret-586023', query=None, fragment=None)
destination_scheme = None, conn = None, release_this_conn = True
http_tunnel_required = False
err = NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.c...a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")
clean_exit = False

    def urlopen(  # type: ignore[override]
        self,
        method: str,
        url: str,
        body: _TYPE_BODY | None = None,
        headers: typing.Mapping[str, str] | None = None,
        retries: Retry | bool | int | None = None,
        redirect: bool = True,
        assert_same_host: bool = True,
        timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
        pool_timeout: int | None = None,
        release_conn: bool | None = None,
        chunked: bool = False,
        body_pos: _TYPE_BODY_POSITION | None = None,
        preload_content: bool = True,
        decode_content: bool = True,
        **response_kw: typing.Any,
    ) -> BaseHTTPResponse:
        """
        Get a connection from the pool and perform an HTTP request. This is the
        lowest level call for making a request, so you'll need to specify all
        the raw details.

        .. note::

            More commonly, it's appropriate to use a convenience method
            such as :meth:`request`.

        .. note::

            `release_conn` will only behave as expected if
            `preload_content=False` because we want to make
            `preload_content=False` the default behaviour someday soon without
            breaking backwards compatibility.

        :param method:
            HTTP request method (such as GET, POST, PUT, etc.)

        :param url:
            The URL to perform the request on.

        :param body:
            Data to send in the request body, either :class:`str`, :class:`bytes`,
            an iterable of :class:`str`/:class:`bytes`, or a file-like object.

        :param headers:
            Dictionary of custom headers to send, such as User-Agent,
            If-None-Match, etc. If None, pool headers are used. If provided,
            these headers completely replace any pool-specific headers.

        :param retries:
            Configure the number of retries to allow before raising a
            :class:`~urllib3.exceptions.MaxRetryError` exception.

            If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a
            :class:`~urllib3.util.retry.Retry` object for fine-grained control
            over different types of retries.
            Pass an integer number to retry connection errors that many times,
            but no other types of errors. Pass zero to never retry.

            If ``False``, then retries are disabled and any exception is raised
            immediately. Also, instead of raising a MaxRetryError on redirects,
            the redirect response will be returned.

        :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
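Annotation: the frame locals make the retry path visible — the outer connectionpool frame holds `Retry(total=2)` and the recursive `urlopen` call carries `Retry(total=1)`, i.e. each `NameResolutionError` consumed one unit of the retry budget via `retries.increment(...)`. A pure-Python sketch of that bookkeeping (not urllib3's actual `Retry` class; `increment` and `MaxRetryError` here are simplified stand-ins):

```python
class MaxRetryError(Exception):
    """Simplified stand-in for urllib3.exceptions.MaxRetryError."""

def increment(total, error):
    # One step of the bookkeeping seen in the locals:
    # Retry(total=2) -> Retry(total=1) -> Retry(total=0) -> raise.
    if total is None:
        return None  # None means "no total limit"
    total -= 1
    if total < 0:
        raise MaxRetryError(error)
    return total

budget = 2  # matches Retry(total=2) in the first connectionpool frame
budget = increment(budget, "NameResolutionError")
print(budget)  # 1, matching Retry(total=1) in the retried frame below
```

With `total=2`, the third consecutive failure exhausts the budget and raises, which is how this trace eventually surfaces as a test failure.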
        :param redirect:
            If True, automatically handle redirects (status codes 301, 302,
            303, 307, 308). Each redirect counts as a retry. Disabling retries
            will disable redirect, too.

        :param assert_same_host:
            If ``True``, will make sure that the host of the pool requests is
            consistent else will raise HostChangedError. When ``False``, you can
            use the pool on an HTTP proxy and request foreign hosts.

        :param timeout:
            If specified, overrides the default timeout for this one
            request. It may be a float (in seconds) or an instance of
            :class:`urllib3.util.Timeout`.

        :param pool_timeout:
            If set and the pool is set to block=True, then this method will
            block for ``pool_timeout`` seconds and raise EmptyPoolError if no
            connection is available within the time period.

        :param bool preload_content:
            If True, the response's body will be preloaded into memory.

        :param bool decode_content:
            If True, will attempt to decode the body based on the
            'content-encoding' header.

        :param release_conn:
            If False, then the urlopen call will not release the connection
            back into the pool once a response is received (but will release if
            you read the entire contents of the response such as when
            `preload_content=True`). This is useful if you're not preloading
            the response's content immediately. You will need to call
            ``r.release_conn()`` on the response ``r`` to return the connection
            back into the pool.
            If None, it takes the value of ``preload_content``
            which defaults to ``True``.

        :param bool chunked:
            If True, urllib3 will send the body using chunked transfer
            encoding. Otherwise, urllib3 will send the body using the standard
            content-length form. Defaults to False.

        :param int body_pos:
            Position to seek to in file-like body in the event of a retry or
            redirect. Typically this won't need to be set because urllib3 will
            auto-populate the value when needed.
        """
        parsed_url = parse_url(url)
        destination_scheme = parsed_url.scheme

        if headers is None:
            headers = self.headers

        if not isinstance(retries, Retry):
            retries = Retry.from_int(retries, redirect=redirect, default=self.retries)

        if release_conn is None:
            release_conn = preload_content

        # Check host
        if assert_same_host and not self.is_same_host(url):
            raise HostChangedError(self, url, retries)

        # Ensure that the URL we're connecting to is properly encoded
        if url.startswith("/"):
            url = to_str(_encode_target(url))
        else:
            url = to_str(parsed_url.url)

        conn = None

        # Track whether `conn` needs to be released before
        # returning/raising/recursing. Update this variable if necessary, and
        # leave `release_conn` constant throughout the function. That way, if
        # the function recurses, the original value of `release_conn` will be
        # passed down into the recursive call, and its value will be respected.
        #
        # See issue #651 [1] for details.
        #
        # [1]
        release_this_conn = release_conn

        http_tunnel_required = connection_requires_http_tunnel(
            self.proxy, self.proxy_config, destination_scheme
        )

        # Merge the proxy headers. Only done when not using HTTP CONNECT. We
        # have to copy the headers dict so we can safely change it without those
        # changes being reflected in anyone else's copy.
        if not http_tunnel_required:
            headers = headers.copy()  # type: ignore[attr-defined]
            headers.update(self.proxy_headers)  # type: ignore[union-attr]

        # Must keep the exception bound to a separate variable or else Python 3
        # complains about UnboundLocalError.
        err = None

        # Keep track of whether we cleanly exited the except block. This
        # ensures we do proper cleanup in finally.
        clean_exit = False

        # Rewind body position, if needed. Record current position
        # for future rewinds in the event of a redirect/retry.
        body_pos = set_file_position(body, body_pos)

        try:
            # Request a connection from the queue.
            timeout_obj = self._get_timeout(timeout)
            conn = self._get_conn(timeout=pool_timeout)

            conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]

            # Is this a closed/new connection that requires CONNECT tunnelling?
            if self.proxy is not None and http_tunnel_required and conn.is_closed:
                try:
                    self._prepare_proxy(conn)
                except (BaseSSLError, OSError, SocketTimeout) as e:
                    self._raise_timeout(
                        err=e, url=self.proxy.url, timeout_value=conn.timeout
                    )
                    raise

            # If we're going to release the connection in ``finally:``, then
            # the response doesn't need to know about the connection. Otherwise
            # it will also try to release it and we'll have a double-release
            # mess.
            response_conn = conn if not release_conn else None

            # Make the request on the HTTPConnection object
            response = self._make_request(
                conn,
                method,
                url,
                timeout=timeout_obj,
                body=body,
                headers=headers,
                chunked=chunked,
                retries=retries,
                response_conn=response_conn,
                preload_content=preload_content,
                decode_content=decode_content,
                **response_kw,
            )

            # Everything went great!
            clean_exit = True

        except EmptyPoolError:
            # Didn't get a connection from the pool, no need to clean up
            clean_exit = True
            release_this_conn = False
            raise

        except (
            TimeoutError,
            HTTPException,
            OSError,
            ProtocolError,
            BaseSSLError,
            SSLError,
            CertificateError,
            ProxyError,
        ) as e:
            # Discard the connection for these exceptions.
            # It will be replaced during the next _get_conn() call.
            clean_exit = False
            new_e: Exception = e
            if isinstance(e, (BaseSSLError, CertificateError)):
                new_e = SSLError(e)
            if isinstance(
                new_e,
                (
                    OSError,
                    NewConnectionError,
                    TimeoutError,
                    SSLError,
                    HTTPException,
                ),
            ) and (conn and conn.proxy and not conn.has_connected_to_proxy):
                new_e = _wrap_proxy_error(new_e, conn.proxy.scheme)
            elif isinstance(new_e, (OSError, HTTPException)):
                new_e = ProtocolError("Connection aborted.", new_e)

            retries = retries.increment(
                method, url, error=new_e, _pool=self, _stacktrace=sys.exc_info()[2]
            )
            retries.sleep()

            # Keep track of the error for the retry warning.
            err = e

        finally:
            if not clean_exit:
                # We hit some kind of exception, handled or otherwise. We need
                # to throw the connection away unless explicitly told not to.
                # Close the connection, set the variable to None, and make sure
                # we put the None back in the pool to avoid leaking it.
                if conn:
                    conn.close()
                    conn = None
                release_this_conn = True

            if release_this_conn:
                # Put the connection back to be reused. If the connection is
                # expired then it will be None, which will get replaced with a
                # fresh connection during _get_conn.
                self._put_conn(conn)

        if not conn:
            # Try again
            log.warning(
                "Retrying (%r) after connection broken by '%r': %s", retries, err, url
            )
>           return self.urlopen(
                method,
                url,
                body,
                headers,
                retries,
                redirect,
                assert_same_host,
                timeout=timeout,
                pool_timeout=pool_timeout,
                release_conn=release_conn,
                chunked=chunked,
                body_pos=body_pos,
                preload_content=preload_content,
                decode_content=decode_content,
                **response_kw,
            )

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:871:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
method = 'DELETE'
url = '/api/v1/namespaces/kserve-ci-e2e-test/secrets/fail-s3-secret-586023'
body = None
headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
retries = Retry(total=1, connect=None, read=None, redirect=None, status=None)
redirect = False, assert_same_host = False, timeout = None, pool_timeout = None
release_conn = True, chunked = False, body_pos = None, preload_content = True
decode_content = True, response_kw = {}
parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/api/v1/namespaces/kserve-ci-e2e-test/secrets/fail-s3-secret-586023', query=None, fragment=None)
destination_scheme = None, conn = None, release_this_conn = True
http_tunnel_required = False
err = NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.c...a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")
clean_exit = False
[e2e-predictor] self._put_conn(conn) [e2e-predictor] [e2e-predictor] if not conn: [e2e-predictor] # Try again [e2e-predictor] log.warning( [e2e-predictor] "Retrying (%r) after connection broken by '%r': %s", retries, err, url [e2e-predictor] ) [e2e-predictor] > return self.urlopen( [e2e-predictor] method, [e2e-predictor] url, [e2e-predictor] body, [e2e-predictor] headers, [e2e-predictor] retries, [e2e-predictor] redirect, [e2e-predictor] assert_same_host, [e2e-predictor] timeout=timeout, [e2e-predictor] pool_timeout=pool_timeout, [e2e-predictor] release_conn=release_conn, [e2e-predictor] chunked=chunked, [e2e-predictor] body_pos=body_pos, [e2e-predictor] preload_content=preload_content, [e2e-predictor] decode_content=decode_content, [e2e-predictor] **response_kw, [e2e-predictor] ) [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:871: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] method = 'DELETE' [e2e-predictor] url = '/api/v1/namespaces/kserve-ci-e2e-test/secrets/fail-s3-secret-586023' [e2e-predictor] body = None [e2e-predictor] headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'} [e2e-predictor] retries = Retry(total=0, connect=None, read=None, redirect=None, status=None) [e2e-predictor] redirect = False, assert_same_host = False, timeout = None, pool_timeout = None [e2e-predictor] release_conn = True, chunked = False, body_pos = None, preload_content = True [e2e-predictor] decode_content = True, response_kw = {} [e2e-predictor] parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/api/v1/namespaces/kserve-ci-e2e-test/secrets/fail-s3-secret-586023', query=None, fragment=None) [e2e-predictor] destination_scheme = None, conn = None, release_this_conn = True [e2e-predictor] http_tunnel_required = False 
err = NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.c...a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")
clean_exit = False

    def urlopen(  # type: ignore[override]
        self,
        method: str,
        url: str,
        body: _TYPE_BODY | None = None,
        headers: typing.Mapping[str, str] | None = None,
        retries: Retry | bool | int | None = None,
        redirect: bool = True,
        assert_same_host: bool = True,
        timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
        pool_timeout: int | None = None,
        release_conn: bool | None = None,
        chunked: bool = False,
        body_pos: _TYPE_BODY_POSITION | None = None,
        preload_content: bool = True,
        decode_content: bool = True,
        **response_kw: typing.Any,
    ) -> BaseHTTPResponse:
        """
        Get a connection from the pool and perform an HTTP request. This is the
        lowest level call for making a request, so you'll need to specify all
        the raw details.

        .. note::

           More commonly, it's appropriate to use a convenience method
           such as :meth:`request`.

        .. note::

           `release_conn` will only behave as expected if
           `preload_content=False` because we want to make
           `preload_content=False` the default behaviour someday soon without
           breaking backwards compatibility.

        :param method:
            HTTP request method (such as GET, POST, PUT, etc.)

        :param url:
            The URL to perform the request on.

        :param body:
            Data to send in the request body, either :class:`str`, :class:`bytes`,
            an iterable of :class:`str`/:class:`bytes`, or a file-like object.

        :param headers:
            Dictionary of custom headers to send, such as User-Agent,
            If-None-Match, etc. If None, pool headers are used. If provided,
            these headers completely replace any pool-specific headers.

        :param retries:
            Configure the number of retries to allow before raising a
            :class:`~urllib3.exceptions.MaxRetryError` exception.

            If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a
            :class:`~urllib3.util.retry.Retry` object for fine-grained control
            over different types of retries.
            Pass an integer number to retry connection errors that many times,
            but no other types of errors. Pass zero to never retry.

            If ``False``, then retries are disabled and any exception is raised
            immediately. Also, instead of raising a MaxRetryError on redirects,
            the redirect response will be returned.

        :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.

        :param redirect:
            If True, automatically handle redirects (status codes 301, 302,
            303, 307, 308). Each redirect counts as a retry. Disabling retries
            will disable redirect, too.

        :param assert_same_host:
            If ``True``, will make sure that the host of the pool requests is
            consistent else will raise HostChangedError. When ``False``, you can
            use the pool on an HTTP proxy and request foreign hosts.

        :param timeout:
            If specified, overrides the default timeout for this one
            request. It may be a float (in seconds) or an instance of
            :class:`urllib3.util.Timeout`.

        :param pool_timeout:
            If set and the pool is set to block=True, then this method will
            block for ``pool_timeout`` seconds and raise EmptyPoolError if no
            connection is available within the time period.

        :param bool preload_content:
            If True, the response's body will be preloaded into memory.

        :param bool decode_content:
            If True, will attempt to decode the body based on the
            'content-encoding' header.

        :param release_conn:
            If False, then the urlopen call will not release the connection
            back into the pool once a response is received (but will release if
            you read the entire contents of the response such as when
            `preload_content=True`). This is useful if you're not preloading
            the response's content immediately. You will need to call
            ``r.release_conn()`` on the response ``r`` to return the connection
            back into the pool. If None, it takes the value of ``preload_content``
            which defaults to ``True``.

        :param bool chunked:
            If True, urllib3 will send the body using chunked transfer
            encoding. Otherwise, urllib3 will send the body using the standard
            content-length form. Defaults to False.

        :param int body_pos:
            Position to seek to in file-like body in the event of a retry or
            redirect. Typically this won't need to be set because urllib3 will
            auto-populate the value when needed.
        """
        parsed_url = parse_url(url)
        destination_scheme = parsed_url.scheme

        if headers is None:
            headers = self.headers

        if not isinstance(retries, Retry):
            retries = Retry.from_int(retries, redirect=redirect, default=self.retries)

        if release_conn is None:
            release_conn = preload_content

        # Check host
        if assert_same_host and not self.is_same_host(url):
            raise HostChangedError(self, url, retries)

        # Ensure that the URL we're connecting to is properly encoded
        if url.startswith("/"):
            url = to_str(_encode_target(url))
        else:
            url = to_str(parsed_url.url)

        conn = None

        # Track whether `conn` needs to be released before
        # returning/raising/recursing. Update this variable if necessary, and
        # leave `release_conn` constant throughout the function. That way, if
        # the function recurses, the original value of `release_conn` will be
        # passed down into the recursive call, and its value will be respected.
        #
        # See issue #651 [1] for details.
        #
        # [1]
        release_this_conn = release_conn

        http_tunnel_required = connection_requires_http_tunnel(
            self.proxy, self.proxy_config, destination_scheme
        )

        # Merge the proxy headers. Only done when not using HTTP CONNECT. We
        # have to copy the headers dict so we can safely change it without those
        # changes being reflected in anyone else's copy.
        if not http_tunnel_required:
            headers = headers.copy()  # type: ignore[attr-defined]
            headers.update(self.proxy_headers)  # type: ignore[union-attr]

        # Must keep the exception bound to a separate variable or else Python 3
        # complains about UnboundLocalError.
        err = None

        # Keep track of whether we cleanly exited the except block. This
        # ensures we do proper cleanup in finally.
        clean_exit = False

        # Rewind body position, if needed. Record current position
        # for future rewinds in the event of a redirect/retry.
        body_pos = set_file_position(body, body_pos)

        try:
            # Request a connection from the queue.
            timeout_obj = self._get_timeout(timeout)
            conn = self._get_conn(timeout=pool_timeout)

            conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]

            # Is this a closed/new connection that requires CONNECT tunnelling?
            if self.proxy is not None and http_tunnel_required and conn.is_closed:
                try:
                    self._prepare_proxy(conn)
                except (BaseSSLError, OSError, SocketTimeout) as e:
                    self._raise_timeout(
                        err=e, url=self.proxy.url, timeout_value=conn.timeout
                    )
                    raise

            # If we're going to release the connection in ``finally:``, then
            # the response doesn't need to know about the connection. Otherwise
            # it will also try to release it and we'll have a double-release
            # mess.
            response_conn = conn if not release_conn else None

            # Make the request on the HTTPConnection object
            response = self._make_request(
                conn,
                method,
                url,
                timeout=timeout_obj,
                body=body,
                headers=headers,
                chunked=chunked,
                retries=retries,
                response_conn=response_conn,
                preload_content=preload_content,
                decode_content=decode_content,
                **response_kw,
            )

            # Everything went great!
            clean_exit = True

        except EmptyPoolError:
            # Didn't get a connection from the pool, no need to clean up
            clean_exit = True
            release_this_conn = False
            raise

        except (
            TimeoutError,
            HTTPException,
            OSError,
            ProtocolError,
            BaseSSLError,
            SSLError,
            CertificateError,
            ProxyError,
        ) as e:
            # Discard the connection for these exceptions. It will be
            # replaced during the next _get_conn() call.
            clean_exit = False
            new_e: Exception = e
            if isinstance(e, (BaseSSLError, CertificateError)):
                new_e = SSLError(e)
            if isinstance(
                new_e,
                (
                    OSError,
                    NewConnectionError,
                    TimeoutError,
                    SSLError,
                    HTTPException,
                ),
            ) and (conn and conn.proxy and not conn.has_connected_to_proxy):
                new_e = _wrap_proxy_error(new_e, conn.proxy.scheme)
            elif isinstance(new_e, (OSError, HTTPException)):
                new_e = ProtocolError("Connection aborted.", new_e)

            retries = retries.increment(
                method, url, error=new_e, _pool=self, _stacktrace=sys.exc_info()[2]
            )
            retries.sleep()

            # Keep track of the error for the retry warning.
            err = e

        finally:
            if not clean_exit:
                # We hit some kind of exception, handled or otherwise. We need
                # to throw the connection away unless explicitly told not to.
                # Close the connection, set the variable to None, and make sure
                # we put the None back in the pool to avoid leaking it.
                if conn:
                    conn.close()
                    conn = None
                release_this_conn = True

            if release_this_conn:
                # Put the connection back to be reused. If the connection is
                # expired then it will be None, which will get replaced with a
                # fresh connection during _get_conn.
                self._put_conn(conn)

        if not conn:
            # Try again
            log.warning(
                "Retrying (%r) after connection broken by '%r': %s", retries, err, url
            )
>           return self.urlopen(
                method,
                url,
                body,
                headers,
                retries,
                redirect,
                assert_same_host,
                timeout=timeout,
                pool_timeout=pool_timeout,
                release_conn=release_conn,
                chunked=chunked,
                body_pos=body_pos,
                preload_content=preload_content,
                decode_content=decode_content,
                **response_kw,
            )

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:871:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
method = 'DELETE'
url = '/api/v1/namespaces/kserve-ci-e2e-test/secrets/fail-s3-secret-586023'
body = None
headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
retries = Retry(total=0, connect=None, read=None, redirect=None, status=None)
redirect = False, assert_same_host = False, timeout = None, pool_timeout = None
release_conn = True, chunked = False, body_pos = None, preload_content = True
decode_content = True, response_kw = {}
parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/api/v1/namespaces/kserve-ci-e2e-test/secrets/fail-s3-secret-586023', query=None, fragment=None)
destination_scheme = None, conn = None, release_this_conn = True
http_tunnel_required = False, err = None, clean_exit = False

    def urlopen(  # type: ignore[override]
        self,
        method: str,
        url: str,
        body: _TYPE_BODY | None = None,
        headers: typing.Mapping[str, str] | None = None,
        retries: Retry | bool | int | None = None,
        redirect: bool = True,
        assert_same_host: bool = True,
        timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
        pool_timeout: int | None = None,
        release_conn: bool | None = None,
        chunked: bool = False,
        body_pos: _TYPE_BODY_POSITION | None = None,
        preload_content: bool = True,
        decode_content: bool = True,
        **response_kw: typing.Any,
    ) -> BaseHTTPResponse:
        """
        Get a connection from the pool and perform an HTTP request. This is the
        lowest level call for making a request, so you'll need to specify all
        the raw details.

        .. note::

           More commonly, it's appropriate to use a convenience method
           such as :meth:`request`.

        .. note::

           `release_conn` will only behave as expected if
           `preload_content=False` because we want to make
           `preload_content=False` the default behaviour someday soon without
           breaking backwards compatibility.

        :param method:
            HTTP request method (such as GET, POST, PUT, etc.)

        :param url:
            The URL to perform the request on.
        :param body:
            Data to send in the request body, either :class:`str`, :class:`bytes`,
            an iterable of :class:`str`/:class:`bytes`, or a file-like object.

        :param headers:
            Dictionary of custom headers to send, such as User-Agent,
            If-None-Match, etc. If None, pool headers are used. If provided,
            these headers completely replace any pool-specific headers.

        :param retries:
            Configure the number of retries to allow before raising a
            :class:`~urllib3.exceptions.MaxRetryError` exception.

            If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a
            :class:`~urllib3.util.retry.Retry` object for fine-grained control
            over different types of retries.
            Pass an integer number to retry connection errors that many times,
            but no other types of errors. Pass zero to never retry.

            If ``False``, then retries are disabled and any exception is raised
            immediately. Also, instead of raising a MaxRetryError on redirects,
            the redirect response will be returned.

        :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.

        :param redirect:
            If True, automatically handle redirects (status codes 301, 302,
            303, 307, 308). Each redirect counts as a retry. Disabling retries
            will disable redirect, too.

        :param assert_same_host:
            If ``True``, will make sure that the host of the pool requests is
            consistent else will raise HostChangedError. When ``False``, you can
            use the pool on an HTTP proxy and request foreign hosts.

        :param timeout:
            If specified, overrides the default timeout for this one
            request. It may be a float (in seconds) or an instance of
            :class:`urllib3.util.Timeout`.

        :param pool_timeout:
            If set and the pool is set to block=True, then this method will
            block for ``pool_timeout`` seconds and raise EmptyPoolError if no
            connection is available within the time period.

        :param bool preload_content:
            If True, the response's body will be preloaded into memory.

        :param bool decode_content:
            If True, will attempt to decode the body based on the
            'content-encoding' header.

        :param release_conn:
            If False, then the urlopen call will not release the connection
            back into the pool once a response is received (but will release if
            you read the entire contents of the response such as when
            `preload_content=True`). This is useful if you're not preloading
            the response's content immediately. You will need to call
            ``r.release_conn()`` on the response ``r`` to return the connection
            back into the pool. If None, it takes the value of ``preload_content``
            which defaults to ``True``.

        :param bool chunked:
            If True, urllib3 will send the body using chunked transfer
            encoding. Otherwise, urllib3 will send the body using the standard
            content-length form. Defaults to False.

        :param int body_pos:
            Position to seek to in file-like body in the event of a retry or
            redirect. Typically this won't need to be set because urllib3 will
            auto-populate the value when needed.
        """
        parsed_url = parse_url(url)
        destination_scheme = parsed_url.scheme

        if headers is None:
            headers = self.headers

        if not isinstance(retries, Retry):
            retries = Retry.from_int(retries, redirect=redirect, default=self.retries)

        if release_conn is None:
            release_conn = preload_content

        # Check host
        if assert_same_host and not self.is_same_host(url):
            raise HostChangedError(self, url, retries)

        # Ensure that the URL we're connecting to is properly encoded
        if url.startswith("/"):
            url = to_str(_encode_target(url))
        else:
            url = to_str(parsed_url.url)

        conn = None

        # Track whether `conn` needs to be released before
        # returning/raising/recursing. Update this variable if necessary, and
        # leave `release_conn` constant throughout the function. That way, if
        # the function recurses, the original value of `release_conn` will be
        # passed down into the recursive call, and its value will be respected.
        #
        # See issue #651 [1] for details.
        #
        # [1]
        release_this_conn = release_conn

        http_tunnel_required = connection_requires_http_tunnel(
            self.proxy, self.proxy_config, destination_scheme
        )

        # Merge the proxy headers. Only done when not using HTTP CONNECT. We
        # have to copy the headers dict so we can safely change it without those
        # changes being reflected in anyone else's copy.
        if not http_tunnel_required:
            headers = headers.copy()  # type: ignore[attr-defined]
            headers.update(self.proxy_headers)  # type: ignore[union-attr]

        # Must keep the exception bound to a separate variable or else Python 3
        # complains about UnboundLocalError.
        err = None

        # Keep track of whether we cleanly exited the except block. This
        # ensures we do proper cleanup in finally.
        clean_exit = False

        # Rewind body position, if needed. Record current position
        # for future rewinds in the event of a redirect/retry.
        body_pos = set_file_position(body, body_pos)

        try:
            # Request a connection from the queue.
            timeout_obj = self._get_timeout(timeout)
            conn = self._get_conn(timeout=pool_timeout)

            conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]

            # Is this a closed/new connection that requires CONNECT tunnelling?
            if self.proxy is not None and http_tunnel_required and conn.is_closed:
                try:
                    self._prepare_proxy(conn)
                except (BaseSSLError, OSError, SocketTimeout) as e:
                    self._raise_timeout(
                        err=e, url=self.proxy.url, timeout_value=conn.timeout
                    )
                    raise

            # If we're going to release the connection in ``finally:``, then
            # the response doesn't need to know about the connection. Otherwise
            # it will also try to release it and we'll have a double-release
            # mess.
            response_conn = conn if not release_conn else None

            # Make the request on the HTTPConnection object
            response = self._make_request(
                conn,
                method,
                url,
                timeout=timeout_obj,
                body=body,
                headers=headers,
                chunked=chunked,
                retries=retries,
                response_conn=response_conn,
                preload_content=preload_content,
                decode_content=decode_content,
                **response_kw,
            )

            # Everything went great!
            clean_exit = True

        except EmptyPoolError:
            # Didn't get a connection from the pool, no need to clean up
            clean_exit = True
            release_this_conn = False
            raise

        except (
            TimeoutError,
            HTTPException,
            OSError,
            ProtocolError,
            BaseSSLError,
            SSLError,
            CertificateError,
            ProxyError,
        ) as e:
            # Discard the connection for these exceptions. It will be
            # replaced during the next _get_conn() call.
[e2e-predictor] clean_exit = False [e2e-predictor] new_e: Exception = e [e2e-predictor] if isinstance(e, (BaseSSLError, CertificateError)): [e2e-predictor] new_e = SSLError(e) [e2e-predictor] if isinstance( [e2e-predictor] new_e, [e2e-predictor] ( [e2e-predictor] OSError, [e2e-predictor] NewConnectionError, [e2e-predictor] TimeoutError, [e2e-predictor] SSLError, [e2e-predictor] HTTPException, [e2e-predictor] ), [e2e-predictor] ) and (conn and conn.proxy and not conn.has_connected_to_proxy): [e2e-predictor] new_e = _wrap_proxy_error(new_e, conn.proxy.scheme) [e2e-predictor] elif isinstance(new_e, (OSError, HTTPException)): [e2e-predictor] new_e = ProtocolError("Connection aborted.", new_e) [e2e-predictor] [e2e-predictor] > retries = retries.increment( [e2e-predictor] method, url, error=new_e, _pool=self, _stacktrace=sys.exc_info()[2] [e2e-predictor] ) [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:841: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = Retry(total=0, connect=None, read=None, redirect=None, status=None) [e2e-predictor] method = 'DELETE' [e2e-predictor] url = '/api/v1/namespaces/kserve-ci-e2e-test/secrets/fail-s3-secret-586023' [e2e-predictor] response = None [e2e-predictor] error = NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.c...a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)") [e2e-predictor] _pool = [e2e-predictor] _stacktrace = [e2e-predictor] [e2e-predictor] def increment( [e2e-predictor] self, [e2e-predictor] method: str | None = None, [e2e-predictor] url: str | None = None, [e2e-predictor] response: BaseHTTPResponse | None = None, [e2e-predictor] error: Exception | None = None, [e2e-predictor] _pool: ConnectionPool | None = None, [e2e-predictor] 
        _stacktrace: TracebackType | None = None,
    ) -> Self:
        """Return a new Retry object with incremented retry counters.

        :param response: A response object, or None, if the server did not
            return a response.
        :type response: :class:`~urllib3.response.BaseHTTPResponse`
        :param Exception error: An error encountered during the request, or
            None if the response was received successfully.

        :return: A new ``Retry`` object.
        """
        if self.total is False and error:
            # Disabled, indicate to re-raise the error.
            raise reraise(type(error), error, _stacktrace)

        total = self.total
        if total is not None:
            total -= 1

        connect = self.connect
        read = self.read
        redirect = self.redirect
        status_count = self.status
        other = self.other
        cause = "unknown"
        status = None
        redirect_location = None

        if error and self._is_connection_error(error):
            # Connect retry?
            if connect is False:
                raise reraise(type(error), error, _stacktrace)
            elif connect is not None:
                connect -= 1

        elif error and self._is_read_error(error):
            # Read retry?
            if read is False or method is None or not self._is_method_retryable(method):
                raise reraise(type(error), error, _stacktrace)
            elif read is not None:
                read -= 1

        elif error:
            # Other retry?
            if other is not None:
                other -= 1

        elif response and response.get_redirect_location():
            # Redirect retry?
            if redirect is not None:
                redirect -= 1
            cause = "too many redirects"
            response_redirect_location = response.get_redirect_location()
            if response_redirect_location:
                redirect_location = response_redirect_location
            status = response.status

        else:
            # Incrementing because of a server error like a 500 in
            # status_forcelist and the given method is in the allowed_methods
            cause = ResponseError.GENERIC_ERROR
            if response and response.status:
                if status_count is not None:
                    status_count -= 1
                cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
                status = response.status

        history = self.history + (
            RequestHistory(method, url, error, status, redirect_location),
        )

        new_retry = self.new(
            total=total,
            connect=connect,
            read=read,
            redirect=redirect,
            status=status_count,
            other=other,
            history=history,
        )

        if new_retry.is_exhausted():
            reason = error or ResponseError(cause)
>           raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
E           urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Max retries exceeded with url: /api/v1/namespaces/kserve-ci-e2e-test/secrets/fail-s3-secret-586023 (Caused by NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)"))

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError

During handling of the above exception, another exception occurred:

self = 

    def _new_conn(self) -> socket.socket:
        """Establish a socket connection and set nodelay settings on it.

        :return: New socket connection.
        """
        try:
>           sock = connection.create_connection(
                (self._dns_host, self.port),
                self.timeout,
                source_address=self.source_address,
                socket_options=self.socket_options,
            )

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connection.py:204:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

address = ('a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', 6443)
timeout = None, source_address = None, socket_options = [(6, 1, 1)]

    def create_connection(
        address: tuple[str, int],
        timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
        source_address: tuple[str, int] | None = None,
        socket_options: _TYPE_SOCKET_OPTIONS | None = None,
    ) -> socket.socket:
        """Connect to *address* and return the socket object.

        Convenience function.
        Connect to *address* (a 2-tuple ``(host,
        port)``) and return the socket object. Passing the optional
        *timeout* parameter will set the timeout on the socket instance
        before attempting to connect. If no *timeout* is supplied, the
        global default timeout setting returned by :func:`socket.getdefaulttimeout`
        is used. If *source_address* is set it must be a tuple of (host, port)
        for the socket to bind as a source address before making the connection.
        An host of '' or port 0 tells the OS to use the default.
        """

        host, port = address
        if host.startswith("["):
            host = host.strip("[]")
        err = None

        # Using the value from allowed_gai_family() in the context of getaddrinfo lets
        # us select whether to work with IPv4 DNS records, IPv6 records, or both.
        # The original create_connection function always returns all records.
        family = allowed_gai_family()

        try:
            host.encode("idna")
        except UnicodeError:
            raise LocationParseError(f"'{host}', label empty or too long") from None

>       for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/util/connection.py:60:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

host = 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com'
port = 6443, family = 
type = , proto = 0, flags = 0

    def getaddrinfo(host, port, family=0, type=0, proto=0, flags=0):
        """Resolve host and port into list of address info entries.

        Translate the host/port argument into a sequence of 5-tuples that contain
        all the necessary arguments for creating a socket connected to that service.
        host is a domain name, a string representation of an IPv4/v6 address or
        None. port is a string service name such as 'http', a numeric port number or
        None. By passing None as the value of host and port, you can pass NULL to
        the underlying C API.

        The family, type and proto arguments can be optionally specified in order to
        narrow the list of addresses returned. Passing zero as a value for each of
        these arguments selects the full range of results.
        """
        # We override this function since we want to translate the numeric family
        # and socket type values to enum constants.
        addrlist = []
>       for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
E       socket.gaierror: [Errno -2] Name or service not known

/usr/lib64/python3.11/socket.py:974: gaierror

The above exception was the direct cause of the following exception:

self = 
method = 'DELETE'
url = '/api/v1/namespaces/kserve-ci-e2e-test/serviceaccounts/fail-s3-sa-586023'
body = None
headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
retries = Retry(total=0, connect=None, read=None, redirect=None, status=None)
redirect = False, assert_same_host = False, timeout = None, pool_timeout = None
release_conn = True, chunked = False, body_pos = None, preload_content = True
decode_content = True, response_kw = {}
parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/api/v1/namespaces/kserve-ci-e2e-test/serviceaccounts/fail-s3-sa-586023', query=None, fragment=None)
destination_scheme = None, conn = None, release_this_conn = True
http_tunnel_required = False, err = None, clean_exit = False

    def urlopen(  # type: ignore[override]
        self,
        method: str,
        url: str,
        body: _TYPE_BODY | None = None,
        headers: typing.Mapping[str, str] | None = None,
        retries: Retry | bool | int | None = None,
        redirect: bool = True,
        assert_same_host: bool = True,
        timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
        pool_timeout: int | None = None,
        release_conn: bool | None = None,
        chunked: bool = False,
        body_pos: _TYPE_BODY_POSITION | None = None,
        preload_content: bool = True,
        decode_content: bool = True,
        **response_kw: typing.Any,
    ) -> BaseHTTPResponse:
        """
        Get a connection from the pool and perform an HTTP request. This is the
        lowest level call for making a request, so you'll need to specify all
        the raw details.

        .. note::

            More commonly, it's appropriate to use a convenience method
            such as :meth:`request`.

        .. note::

            `release_conn` will only behave as expected if
            `preload_content=False` because we want to make
            `preload_content=False` the default behaviour someday soon without
            breaking backwards compatibility.

        :param method:
            HTTP request method (such as GET, POST, PUT, etc.)

        :param url:
            The URL to perform the request on.

        :param body:
            Data to send in the request body, either :class:`str`, :class:`bytes`,
            an iterable of :class:`str`/:class:`bytes`, or a file-like object.

        :param headers:
            Dictionary of custom headers to send, such as User-Agent,
            If-None-Match, etc. If None, pool headers are used. If provided,
            these headers completely replace any pool-specific headers.

        :param retries:
            Configure the number of retries to allow before raising a
            :class:`~urllib3.exceptions.MaxRetryError` exception.

            If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a
            :class:`~urllib3.util.retry.Retry` object for fine-grained control
            over different types of retries.
            Pass an integer number to retry connection errors that many times,
            but no other types of errors. Pass zero to never retry.

            If ``False``, then retries are disabled and any exception is raised
            immediately. Also, instead of raising a MaxRetryError on redirects,
            the redirect response will be returned.

        :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.

        :param redirect:
            If True, automatically handle redirects (status codes 301, 302,
            303, 307, 308). Each redirect counts as a retry. Disabling retries
            will disable redirect, too.

        :param assert_same_host:
            If ``True``, will make sure that the host of the pool requests is
            consistent else will raise HostChangedError. When ``False``, you can
            use the pool on an HTTP proxy and request foreign hosts.

        :param timeout:
            If specified, overrides the default timeout for this one
            request. It may be a float (in seconds) or an instance of
            :class:`urllib3.util.Timeout`.

        :param pool_timeout:
            If set and the pool is set to block=True, then this method will
            block for ``pool_timeout`` seconds and raise EmptyPoolError if no
            connection is available within the time period.

        :param bool preload_content:
            If True, the response's body will be preloaded into memory.

        :param bool decode_content:
            If True, will attempt to decode the body based on the
            'content-encoding' header.

        :param release_conn:
            If False, then the urlopen call will not release the connection
            back into the pool once a response is received (but will release if
            you read the entire contents of the response such as when
            `preload_content=True`). This is useful if you're not preloading
            the response's content immediately. You will need to call
            ``r.release_conn()`` on the response ``r`` to return the connection
            back into the pool. If None, it takes the value of ``preload_content``
            which defaults to ``True``.

        :param bool chunked:
            If True, urllib3 will send the body using chunked transfer
            encoding. Otherwise, urllib3 will send the body using the standard
            content-length form. Defaults to False.

        :param int body_pos:
            Position to seek to in file-like body in the event of a retry or
            redirect. Typically this won't need to be set because urllib3 will
            auto-populate the value when needed.
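The ``retries`` forms described in the docstring above (``Retry`` object, int, ``False``, or ``None``) all normalize to a ``Retry`` instance via ``Retry.from_int``. A minimal sketch of that normalization, assuming urllib3 2.x semantics as documented:

```python
from urllib3.util.retry import Retry

# None falls back to Retry.DEFAULT (3 retries), an int caps connection-error
# retries at that count, and False disables retries so errors re-raise at once.
print(Retry.from_int(None).total)   # 3 (Retry.DEFAULT)
print(Retry.from_int(5).total)      # 5
print(Retry.from_int(0).total)      # 0: never retry
print(Retry.from_int(False).total)  # False: disabled
```

This is why the locals above show ``retries = Retry(total=0, ...)``: the kubernetes client passes zero, so the first connection error is terminal.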
        """
        parsed_url = parse_url(url)
        destination_scheme = parsed_url.scheme

        if headers is None:
            headers = self.headers

        if not isinstance(retries, Retry):
            retries = Retry.from_int(retries, redirect=redirect, default=self.retries)

        if release_conn is None:
            release_conn = preload_content

        # Check host
        if assert_same_host and not self.is_same_host(url):
            raise HostChangedError(self, url, retries)

        # Ensure that the URL we're connecting to is properly encoded
        if url.startswith("/"):
            url = to_str(_encode_target(url))
        else:
            url = to_str(parsed_url.url)

        conn = None

        # Track whether `conn` needs to be released before
        # returning/raising/recursing. Update this variable if necessary, and
        # leave `release_conn` constant throughout the function. That way, if
        # the function recurses, the original value of `release_conn` will be
        # passed down into the recursive call, and its value will be respected.
        #
        # See issue #651 [1] for details.
        #
        # [1]
        release_this_conn = release_conn

        http_tunnel_required = connection_requires_http_tunnel(
            self.proxy, self.proxy_config, destination_scheme
        )

        # Merge the proxy headers. Only done when not using HTTP CONNECT. We
        # have to copy the headers dict so we can safely change it without those
        # changes being reflected in anyone else's copy.
        if not http_tunnel_required:
            headers = headers.copy()  # type: ignore[attr-defined]
            headers.update(self.proxy_headers)  # type: ignore[union-attr]

        # Must keep the exception bound to a separate variable or else Python 3
        # complains about UnboundLocalError.
        err = None

        # Keep track of whether we cleanly exited the except block. This
        # ensures we do proper cleanup in finally.
        clean_exit = False

        # Rewind body position, if needed. Record current position
        # for future rewinds in the event of a redirect/retry.
        body_pos = set_file_position(body, body_pos)

        try:
            # Request a connection from the queue.
            timeout_obj = self._get_timeout(timeout)
            conn = self._get_conn(timeout=pool_timeout)

            conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]

            # Is this a closed/new connection that requires CONNECT tunnelling?
            if self.proxy is not None and http_tunnel_required and conn.is_closed:
                try:
                    self._prepare_proxy(conn)
                except (BaseSSLError, OSError, SocketTimeout) as e:
                    self._raise_timeout(
                        err=e, url=self.proxy.url, timeout_value=conn.timeout
                    )
                    raise

            # If we're going to release the connection in ``finally:``, then
            # the response doesn't need to know about the connection. Otherwise
            # it will also try to release it and we'll have a double-release
            # mess.
            response_conn = conn if not release_conn else None

            # Make the request on the HTTPConnection object
>           response = self._make_request(
                conn,
                method,
                url,
                timeout=timeout_obj,
                body=body,
                headers=headers,
                chunked=chunked,
                retries=retries,
                response_conn=response_conn,
                preload_content=preload_content,
                decode_content=decode_content,
                **response_kw,
            )

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:787:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = 
conn = 
method = 'DELETE'
url = '/api/v1/namespaces/kserve-ci-e2e-test/serviceaccounts/fail-s3-sa-586023'
body = None
headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
retries = Retry(total=0, connect=None, read=None, redirect=None, status=None)
timeout = Timeout(connect=None, read=None, total=None), chunked = False
response_conn = None, preload_content = True, decode_content = True
enforce_content_length = True

    def _make_request(
        self,
        conn: BaseHTTPConnection,
        method: str,
        url: str,
        body: _TYPE_BODY | None = None,
        headers: typing.Mapping[str, str] | None = None,
        retries: Retry | None = None,
        timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
        chunked: bool = False,
        response_conn: BaseHTTPConnection | None = None,
        preload_content: bool = True,
        decode_content: bool = True,
        enforce_content_length: bool = True,
    ) -> BaseHTTPResponse:
        """
        Perform a request on a given urllib connection object taken from our
        pool.

        :param conn:
            a connection from one of our connection pools

        :param method:
            HTTP request method (such as GET, POST, PUT, etc.)

        :param url:
            The URL to perform the request on.

        :param body:
            Data to send in the request body, either :class:`str`, :class:`bytes`,
            an iterable of :class:`str`/:class:`bytes`, or a file-like object.

        :param headers:
            Dictionary of custom headers to send, such as User-Agent,
            If-None-Match, etc. If None, pool headers are used. If provided,
            these headers completely replace any pool-specific headers.

        :param retries:
            Configure the number of retries to allow before raising a
            :class:`~urllib3.exceptions.MaxRetryError` exception.

            Pass ``None`` to retry until you receive a response. Pass a
            :class:`~urllib3.util.retry.Retry` object for fine-grained control
            over different types of retries.
            Pass an integer number to retry connection errors that many times,
            but no other types of errors. Pass zero to never retry.

            If ``False``, then retries are disabled and any exception is raised
            immediately. Also, instead of raising a MaxRetryError on redirects,
            the redirect response will be returned.

        :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.

        :param timeout:
            If specified, overrides the default timeout for this one
            request. It may be a float (in seconds) or an instance of
            :class:`urllib3.util.Timeout`.

        :param chunked:
            If True, urllib3 will send the body using chunked transfer
            encoding. Otherwise, urllib3 will send the body using the standard
            content-length form. Defaults to False.

        :param response_conn:
            Set this to ``None`` if you will handle releasing the connection or
            set the connection to have the response release it.

        :param preload_content:
            If True, the response's body will be preloaded during construction.

        :param decode_content:
            If True, will attempt to decode the body based on the
            'content-encoding' header.

        :param enforce_content_length:
            Enforce content length checking. Body returned by server must match
            value of Content-Length header, if present. Otherwise, raise error.
        """
        self.num_requests += 1

        timeout_obj = self._get_timeout(timeout)
        timeout_obj.start_connect()
        conn.timeout = Timeout.resolve_default_timeout(timeout_obj.connect_timeout)

        try:
            # Trigger any extra validation we need to do.
            try:
                self._validate_conn(conn)
            except (SocketTimeout, BaseSSLError) as e:
                self._raise_timeout(err=e, url=url, timeout_value=conn.timeout)
                raise

        # _validate_conn() starts the connection to an HTTPS proxy
        # so we need to wrap errors with 'ProxyError' here too.
        except (
            OSError,
            NewConnectionError,
            TimeoutError,
            BaseSSLError,
            CertificateError,
            SSLError,
        ) as e:
            new_e: Exception = e
            if isinstance(e, (BaseSSLError, CertificateError)):
                new_e = SSLError(e)
            # If the connection didn't successfully connect to it's proxy
            # then there
            if isinstance(
                new_e, (OSError, NewConnectionError, TimeoutError, SSLError)
            ) and (conn and conn.proxy and not conn.has_connected_to_proxy):
                new_e = _wrap_proxy_error(new_e, conn.proxy.scheme)
>           raise new_e

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:488:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = 
conn = 
method = 'DELETE'
url = '/api/v1/namespaces/kserve-ci-e2e-test/serviceaccounts/fail-s3-sa-586023'
body = None
headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
retries = Retry(total=0, connect=None, read=None, redirect=None, status=None)
timeout = Timeout(connect=None, read=None, total=None), chunked = False
response_conn = None, preload_content = True, decode_content = True
enforce_content_length = True

    def _make_request(
        self,
        conn: BaseHTTPConnection,
        method: str,
        url: str,
        body: _TYPE_BODY | None = None,
        headers: typing.Mapping[str, str] | None = None,
        retries: Retry | None = None,
        timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
        chunked: bool = False,
        response_conn: BaseHTTPConnection | None = None,
        preload_content: bool = True,
        decode_content: bool = True,
        enforce_content_length: bool = True,
    ) -> BaseHTTPResponse:
        """
        Perform a request on a given urllib connection object taken from our
        pool.

        :param conn:
            a connection from one of our connection pools

        :param method:
            HTTP request method (such as GET, POST, PUT, etc.)

        :param url:
            The URL to perform the request on.

        :param body:
            Data to send in the request body, either :class:`str`, :class:`bytes`,
            an iterable of :class:`str`/:class:`bytes`, or a file-like object.

        :param headers:
            Dictionary of custom headers to send, such as User-Agent,
            If-None-Match, etc. If None, pool headers are used. If provided,
            these headers completely replace any pool-specific headers.

        :param retries:
            Configure the number of retries to allow before raising a
            :class:`~urllib3.exceptions.MaxRetryError` exception.

            Pass ``None`` to retry until you receive a response. Pass a
            :class:`~urllib3.util.retry.Retry` object for fine-grained control
            over different types of retries.
            Pass an integer number to retry connection errors that many times,
            but no other types of errors. Pass zero to never retry.

            If ``False``, then retries are disabled and any exception is raised
            immediately. Also, instead of raising a MaxRetryError on redirects,
            the redirect response will be returned.

        :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.

        :param timeout:
            If specified, overrides the default timeout for this one
            request. It may be a float (in seconds) or an instance of
            :class:`urllib3.util.Timeout`.

        :param chunked:
            If True, urllib3 will send the body using chunked transfer
            encoding. Otherwise, urllib3 will send the body using the standard
            content-length form. Defaults to False.

        :param response_conn:
            Set this to ``None`` if you will handle releasing the connection or
            set the connection to have the response release it.

        :param preload_content:
            If True, the response's body will be preloaded during construction.

        :param decode_content:
            If True, will attempt to decode the body based on the
            'content-encoding' header.

        :param enforce_content_length:
            Enforce content length checking. Body returned by server must match
            value of Content-Length header, if present. Otherwise, raise error.
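The frames above show a ``Retry(total=0, ...)`` object exhausting on the very first connection error. That behavior can be reproduced directly with ``Retry.increment``; a minimal sketch, where the ``OSError`` and the URL are stand-ins for the real ``NameResolutionError`` and API path in the log:

```python
from urllib3.exceptions import MaxRetryError
from urllib3.util.retry import Retry

retry = Retry(total=0, connect=None, read=None, redirect=None, status=None)
try:
    # One failed attempt drops total to -1, so is_exhausted() is True and
    # increment() raises MaxRetryError wrapping the original error as .reason.
    retry.increment(
        method="DELETE",
        url="/api/v1/namespaces/example/secrets/demo",  # hypothetical path
        error=OSError("Name or service not known"),
    )
except MaxRetryError as e:
    print(type(e.reason).__name__)  # OSError
```

With ``total=0`` there is no retry budget at all, which is why the test cleanup surfaced the DNS failure immediately rather than retrying.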
[e2e-predictor] """ [e2e-predictor] self.num_requests += 1 [e2e-predictor] [e2e-predictor] timeout_obj = self._get_timeout(timeout) [e2e-predictor] timeout_obj.start_connect() [e2e-predictor] conn.timeout = Timeout.resolve_default_timeout(timeout_obj.connect_timeout) [e2e-predictor] [e2e-predictor] try: [e2e-predictor] # Trigger any extra validation we need to do. [e2e-predictor] try: [e2e-predictor] > self._validate_conn(conn) [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:464: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] conn = [e2e-predictor] [e2e-predictor] def _validate_conn(self, conn: BaseHTTPConnection) -> None: [e2e-predictor] """ [e2e-predictor] Called right before a request is made, after the socket is created. [e2e-predictor] """ [e2e-predictor] super()._validate_conn(conn) [e2e-predictor] [e2e-predictor] # Force connect early to allow us to validate the connection. [e2e-predictor] if conn.is_closed: [e2e-predictor] > conn.connect() [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:1093: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] [e2e-predictor] def connect(self) -> None: [e2e-predictor] # Today we don't need to be doing this step before the /actual/ socket [e2e-predictor] # connection, however in the future we'll need to decide whether to [e2e-predictor] # create a new socket or re-use an existing "shared" socket as a part [e2e-predictor] # of the HTTP/2 handshake dance. 
        if self._tunnel_host is not None and self._tunnel_port is not None:
            probe_http2_host = self._tunnel_host
            probe_http2_port = self._tunnel_port
        else:
            probe_http2_host = self.host
            probe_http2_port = self.port

        # Check if the target origin supports HTTP/2.
        # If the value comes back as 'None' it means that the current thread
        # is probing for HTTP/2 support. Otherwise, we're waiting for another
        # probe to complete, or we get a value right away.
        target_supports_http2: bool | None
        if "h2" in ssl_.ALPN_PROTOCOLS:
            target_supports_http2 = http2_probe.acquire_and_get(
                host=probe_http2_host, port=probe_http2_port
            )
        else:
            # If HTTP/2 isn't going to be offered it doesn't matter if
            # the target supports HTTP/2. Don't want to make a probe.
            target_supports_http2 = False

        if self._connect_callback is not None:
            self._connect_callback(
                "before connect",
                thread_id=threading.get_ident(),
                target_supports_http2=target_supports_http2,
            )

        try:
            sock: socket.socket | ssl.SSLSocket
>           self.sock = sock = self._new_conn()

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connection.py:759:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =

    def _new_conn(self) -> socket.socket:
        """Establish a socket connection and set nodelay settings on it.

        :return: New socket connection.
        """
        try:
            sock = connection.create_connection(
                (self._dns_host, self.port),
                self.timeout,
                source_address=self.source_address,
                socket_options=self.socket_options,
            )
        except socket.gaierror as e:
>           raise NameResolutionError(self.host, self, e) from e
E           urllib3.exceptions.NameResolutionError: HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connection.py:211: NameResolutionError

The above exception was the direct cause of the following exception:

    @pytest.mark.predictor
    @pytest.mark.raw
    @pytest.mark.asyncio(scope="session")
    async def test_quick_reconciliation_on_init_container_failure():
        """
        Test that when an init container fails (e.g., invalid storage credentials),
        the owning InferenceService quickly reconciles and reflects the failure in its status.

        This test:
        1. Creates an ISVC with invalid S3 credentials
        2. Monitors the ISVC status for failure detection
        3. Validates that failure status is populated within a reasonable timeframe
        4. Verifies the failure message contains relevant error information
        """
        suffix = str(uuid.uuid4())[:6]
        isvc_name = f"isvc-init-fail-{suffix}"
        invalid_sa_name = f"fail-s3-sa-{suffix}"
        invalid_secret_name = f"fail-s3-secret-{suffix}"

        kserve_client = KServeClient(
            config_file=os.environ.get("KUBECONFIG", "~/.kube/config")
        )

        try:
            # Create invalid S3 credentials
            logger.info("Creating invalid S3 secret and service account")
            create_invalid_s3_secret(KSERVE_TEST_NAMESPACE, invalid_secret_name)
            create_service_account_with_secret(
                KSERVE_TEST_NAMESPACE, invalid_sa_name, invalid_secret_name
            )

            # Create ISVC with invalid S3 credentials
            predictor = V1beta1PredictorSpec(
                min_replicas=1,
                service_account_name=invalid_sa_name,
                sklearn=V1beta1SKLearnSpec(
                    storage_uri="s3://nonexistent-bucket-xyz123/invalid/model",
                    resources=V1ResourceRequirements(
                        requests={"cpu": "50m", "memory": "128Mi"},
                        limits={"cpu": "100m", "memory": "256Mi"},
                    ),
                ),
            )

            isvc = V1beta1InferenceService(
                api_version=constants.KSERVE_V1BETA1,
                kind=constants.KSERVE_KIND_INFERENCESERVICE,
                metadata=client.V1ObjectMeta(
                    name=isvc_name, namespace=KSERVE_TEST_NAMESPACE
                ),
                spec=V1beta1InferenceServiceSpec(predictor=predictor),
            )

            creation_time = time.time()
            with managed_isvc(kserve_client, isvc):
                # Wait for failure status to be populated
                logger.info("Created ISVC %s, waiting for failure status...", isvc_name)
                failure_status = wait_for_isvc_failure_status(
                    kserve_client, isvc_name, timeout_seconds=180, poll_interval=5.0
                )

                failure_detection_time = time.time()
                time_to_failure = failure_detection_time - creation_time

                # Validate failure was detected
                assert failure_status is not None, (
                    f"ISVC {isvc_name} did not report failure status within timeout. "
                    f"The init container failure should trigger quick reconciliation and status update."
                )

                logger.info(
                    "Failure status detected in %.2f seconds: %s",
                    time_to_failure,
                    failure_status,
                )

                # Validate failure info contains expected fields
                last_failure = failure_status.get("lastFailureInfo", {})
                assert last_failure.get("reason") is not None, (
                    "lastFailureInfo.reason should be populated"
                )

                # The transition status should indicate blocked by failed load
                transition_status = failure_status.get("transitionStatus")
                logger.info("Transition status: %s", transition_status)

                # Check conditions for failure indication
                conditions = get_isvc_conditions(kserve_client, isvc_name)
                ready_condition = next(
                    (c for c in conditions if c.get("type") == "Ready"), None
                )

                if ready_condition:
                    logger.info("Ready condition: %s", ready_condition)
                    # The service should not be ready due to init container failure
                    assert ready_condition.get("status") != "True", (
                        "ISVC should not be Ready when init container fails"
                    )

                # Validate reasonable time to failure detection
                # The pod watch should trigger reconciliation quickly when init container status changes
                assert time_to_failure < 180, (
                    f"Failure detection took too long ({time_to_failure:.2f}s). "
                    f"Pod watch should trigger quick reconciliation on init container failure."
                )

                logger.info(
                    "Quick reconciliation validated: Failure detected in %.2f seconds",
                    time_to_failure,
                )

        finally:
            # Cleanup non-ISVC resources (ISVCs are cleaned up by managed_isvc)
>           delete_service_account(KSERVE_TEST_NAMESPACE, invalid_sa_name)

predictor/test_pod_watch.py:534:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

namespace = 'kserve-ci-e2e-test', sa_name = 'fail-s3-sa-586023'

    def delete_service_account(namespace: str, sa_name: str):
        core_api = client.CoreV1Api()
        try:
>           core_api.delete_namespaced_service_account(sa_name, namespace)

predictor/test_pod_watch.py:236:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
name = 'fail-s3-sa-586023', namespace = 'kserve-ci-e2e-test'
kwargs = {'_return_http_data_only': True}

    def delete_namespaced_service_account(self, name, namespace, **kwargs):  # noqa: E501
        """delete_namespaced_service_account  # noqa: E501

        delete a ServiceAccount  # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.delete_namespaced_service_account(name, namespace, async_req=True)
        >>> result = thread.get()

        :param async_req bool: execute request asynchronously
        :param str name: name of the ServiceAccount (required)
        :param str namespace: object name and auth scope, such as for teams and projects (required)
        :param str pretty: If 'true', then the output is pretty printed. Defaults to 'false' unless the user-agent indicates a browser or command-line HTTP tool (curl and wget).
        :param str dry_run: When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed
        :param int grace_period_seconds: The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately.
        :param bool ignore_store_read_error_with_cluster_breaking_potential: if set to true, it will trigger an unsafe deletion of the resource in case the normal deletion flow fails with a corrupt object error. A resource is considered corrupt if it can not be retrieved from the underlying storage successfully because of a) its data can not be transformed e.g. decryption failure, or b) it fails to decode into an object. NOTE: unsafe deletion ignores finalizer constraints, skips precondition checks, and removes the object from the storage. WARNING: This may potentially break the cluster if the workload associated with the resource being unsafe-deleted relies on normal deletion flow. Use only if you REALLY know what you are doing. The default value is false, and the user must opt in to enable it
        :param bool orphan_dependents: Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both.
        :param str propagation_policy: Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground.
        :param V1DeleteOptions body:
        :param _preload_content: if False, the urllib3.HTTPResponse object will
                                 be returned without reading/decoding response
                                 data. Default is True.
        :param _request_timeout: timeout setting for this request. If one
                                 number provided, it will be total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
        :return: V1ServiceAccount
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
>       return self.delete_namespaced_service_account_with_http_info(name, namespace, **kwargs)  # noqa: E501

../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api/core_v1_api.py:13599:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
name = 'fail-s3-sa-586023', namespace = 'kserve-ci-e2e-test'
kwargs = {'_return_http_data_only': True}
local_var_params = {'_return_http_data_only': True, 'all_params': ['name', 'namespace', 'pretty', 'dry_run', 'grace_period_seconds', 'ignore_store_read_error_with_cluster_breaking_potential', ...], 'auth_settings': ['BearerToken'], 'body_params': None, ...}
all_params = ['name', 'namespace', 'pretty', 'dry_run', 'grace_period_seconds', 'ignore_store_read_error_with_cluster_breaking_potential', ...]
key = '_return_http_data_only', val = True, collection_formats = {}
path_params = {'name': 'fail-s3-sa-586023', 'namespace': 'kserve-ci-e2e-test'}
query_params = []

    def delete_namespaced_service_account_with_http_info(self, name, namespace, **kwargs):  # noqa: E501
        """delete_namespaced_service_account  # noqa: E501

        delete a ServiceAccount  # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.delete_namespaced_service_account_with_http_info(name, namespace, async_req=True)
        >>> result = thread.get()

        :param async_req bool: execute request asynchronously
        :param str name: name of the ServiceAccount (required)
        :param str namespace: object name and auth scope, such as for teams and projects (required)
        :param str pretty: If 'true', then the output is pretty printed. Defaults to 'false' unless the user-agent indicates a browser or command-line HTTP tool (curl and wget).
        :param str dry_run: When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed
        :param int grace_period_seconds: The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately.
        :param bool ignore_store_read_error_with_cluster_breaking_potential: if set to true, it will trigger an unsafe deletion of the resource in case the normal deletion flow fails with a corrupt object error. A resource is considered corrupt if it can not be retrieved from the underlying storage successfully because of a) its data can not be transformed e.g. decryption failure, or b) it fails to decode into an object. NOTE: unsafe deletion ignores finalizer constraints, skips precondition checks, and removes the object from the storage. WARNING: This may potentially break the cluster if the workload associated with the resource being unsafe-deleted relies on normal deletion flow. Use only if you REALLY know what you are doing. The default value is false, and the user must opt in to enable it
        :param bool orphan_dependents: Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both.
        :param str propagation_policy: Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground.
        :param V1DeleteOptions body:
        :param _return_http_data_only: response data without head status code
                                       and headers
        :param _preload_content: if False, the urllib3.HTTPResponse object will
                                 be returned without reading/decoding response
                                 data. Default is True.
        :param _request_timeout: timeout setting for this request. If one
                                 number provided, it will be total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
        :return: tuple(V1ServiceAccount, status_code(int), headers(HTTPHeaderDict))
                 If the method is called asynchronously,
                 returns the request thread.
        """

        local_var_params = locals()

        all_params = [
            'name',
            'namespace',
            'pretty',
            'dry_run',
            'grace_period_seconds',
            'ignore_store_read_error_with_cluster_breaking_potential',
            'orphan_dependents',
            'propagation_policy',
            'body'
        ]
        all_params.extend(
            [
                'async_req',
                '_return_http_data_only',
                '_preload_content',
                '_request_timeout'
            ]
        )

        for key, val in six.iteritems(local_var_params['kwargs']):
            if key not in all_params:
                raise ApiTypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method delete_namespaced_service_account" % key
                )
            local_var_params[key] = val
        del local_var_params['kwargs']
        # verify the required parameter 'name' is set
        if self.api_client.client_side_validation and ('name' not in local_var_params or  # noqa: E501
                                                       local_var_params['name'] is None):  # noqa: E501
            raise ApiValueError("Missing the required parameter `name` when calling `delete_namespaced_service_account`")  # noqa: E501
        # verify the required parameter 'namespace' is set
        if self.api_client.client_side_validation and ('namespace' not in local_var_params or  # noqa: E501
                                                       local_var_params['namespace'] is None):  # noqa: E501
            raise ApiValueError("Missing the required parameter `namespace` when calling `delete_namespaced_service_account`")  # noqa: E501

        collection_formats = {}

        path_params = {}
        if 'name' in local_var_params:
            path_params['name'] = local_var_params['name']  # noqa: E501
        if 'namespace' in local_var_params:
            path_params['namespace'] = local_var_params['namespace']  # noqa: E501

        query_params = []
        if 'pretty' in local_var_params and local_var_params['pretty'] is not None:  # noqa: E501
            query_params.append(('pretty', local_var_params['pretty']))  # noqa: E501
        if 'dry_run' in local_var_params and local_var_params['dry_run'] is not None:  # noqa: E501
            query_params.append(('dryRun', local_var_params['dry_run']))  # noqa: E501
        if 'grace_period_seconds' in local_var_params and local_var_params['grace_period_seconds'] is not None:  # noqa: E501
            query_params.append(('gracePeriodSeconds', local_var_params['grace_period_seconds']))  # noqa: E501
        if 'ignore_store_read_error_with_cluster_breaking_potential' in local_var_params and local_var_params['ignore_store_read_error_with_cluster_breaking_potential'] is not None:  # noqa: E501
            query_params.append(('ignoreStoreReadErrorWithClusterBreakingPotential', local_var_params['ignore_store_read_error_with_cluster_breaking_potential']))  # noqa: E501
        if 'orphan_dependents' in local_var_params and local_var_params['orphan_dependents'] is not None:  # noqa: E501
            query_params.append(('orphanDependents', local_var_params['orphan_dependents']))  # noqa: E501
        if 'propagation_policy' in local_var_params and local_var_params['propagation_policy'] is not None:  # noqa: E501
            query_params.append(('propagationPolicy', local_var_params['propagation_policy']))  # noqa: E501

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None
        if 'body' in local_var_params:
            body_params = local_var_params['body']
        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.select_header_accept(
            ['application/json', 'application/yaml', 'application/vnd.kubernetes.protobuf', 'application/cbor'])  # noqa: E501

        # Authentication setting
        auth_settings = ['BearerToken']  # noqa: E501

>       return self.api_client.call_api(
            '/api/v1/namespaces/{namespace}/serviceaccounts/{name}', 'DELETE',
            path_params,
            query_params,
            header_params,
            body=body_params,
            post_params=form_params,
            files=local_var_files,
            response_type='V1ServiceAccount',  # noqa: E501
            auth_settings=auth_settings,
            async_req=local_var_params.get('async_req'),
            _return_http_data_only=local_var_params.get('_return_http_data_only'),  # noqa: E501
            _preload_content=local_var_params.get('_preload_content', True),
            _request_timeout=local_var_params.get('_request_timeout'),
            collection_formats=collection_formats)

../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api/core_v1_api.py:13710:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
resource_path = '/api/v1/namespaces/{namespace}/serviceaccounts/{name}'
method = 'DELETE'
path_params = {'name': 'fail-s3-sa-586023', 'namespace': 'kserve-ci-e2e-test'}
query_params = []
header_params = {'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
body = None, post_params = [], files = {}, response_type = 'V1ServiceAccount'
auth_settings = ['BearerToken'], async_req = None, _return_http_data_only = True
collection_formats = {}, _preload_content = True, _request_timeout = None
_host = None

    def call_api(self, resource_path, method,
                 path_params=None, query_params=None, header_params=None,
                 body=None, post_params=None, files=None,
                 response_type=None, auth_settings=None, async_req=None,
                 _return_http_data_only=None, collection_formats=None,
                 _preload_content=True, _request_timeout=None, _host=None):
        """Makes the HTTP request (synchronous) and returns deserialized data.

        To make an async_req request, set the async_req parameter.

        :param resource_path: Path to method endpoint.
        :param method: Method to call.
        :param path_params: Path parameters in the url.
        :param query_params: Query parameters in the url.
        :param header_params: Header parameters to be
            placed in the request header.
        :param body: Request body.
        :param post_params dict: Request post form parameters,
            for `application/x-www-form-urlencoded`, `multipart/form-data`.
        :param auth_settings list: Auth Settings names for the request.
        :param response: Response data type.
        :param files dict: key -> filename, value -> filepath,
            for `multipart/form-data`.
        :param async_req bool: execute request asynchronously
        :param _return_http_data_only: response data without head status code
                                       and headers
        :param collection_formats: dict of collection formats for path, query,
            header, and post parameters.
        :param _preload_content: if False, the urllib3.HTTPResponse object will
                                 be returned without reading/decoding response
                                 data. Default is True.
        :param _request_timeout: timeout setting for this request. If one
                                 number provided, it will be total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
        :return:
            If async_req parameter is True,
            the request will be called asynchronously.
            The method will return the request thread.
            If parameter async_req is False or missing,
            then the method will return the response directly.
        """
        if not async_req:
>           return self.__call_api(resource_path, method,
                                   path_params, query_params, header_params,
                                   body, post_params, files,
                                   response_type, auth_settings,
                                   _return_http_data_only, collection_formats,
                                   _preload_content, _request_timeout, _host)

../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api_client.py:348:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
resource_path = '/api/v1/namespaces/kserve-ci-e2e-test/serviceaccounts/fail-s3-sa-586023'
method = 'DELETE'
path_params = [('name', 'fail-s3-sa-586023'), ('namespace', 'kserve-ci-e2e-test')]
query_params = []
header_params = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
body = None, post_params = [], files = {}, response_type = 'V1ServiceAccount'
auth_settings = ['BearerToken'], _return_http_data_only = True
collection_formats = {}, _preload_content = True, _request_timeout = None
_host = None

    def __call_api(
            self, resource_path, method, path_params=None,
            query_params=None, header_params=None, body=None, post_params=None,
            files=None, response_type=None, auth_settings=None,
            _return_http_data_only=None, collection_formats=None,
            _preload_content=True, _request_timeout=None, _host=None):

        config = self.configuration

        # header parameters
        header_params = header_params or {}
        header_params.update(self.default_headers)
        if self.cookie:
            header_params['Cookie'] = self.cookie
        if header_params:
            header_params = self.sanitize_for_serialization(header_params)
            header_params = dict(self.parameters_to_tuples(header_params,
                                                           collection_formats))

        # path parameters
        if path_params:
            path_params = self.sanitize_for_serialization(path_params)
            path_params = self.parameters_to_tuples(path_params,
                                                    collection_formats)
            for k, v in path_params:
                # specified safe chars, encode everything
                resource_path = resource_path.replace(
                    '{%s}' % k,
                    quote(str(v), safe=config.safe_chars_for_path_param)
                )

        # query parameters
        if query_params:
            query_params = self.sanitize_for_serialization(query_params)
            query_params = self.parameters_to_tuples(query_params,
                                                     collection_formats)

        # post parameters
        if post_params or files:
            post_params = post_params if post_params else []
            post_params = self.sanitize_for_serialization(post_params)
            post_params = self.parameters_to_tuples(post_params,
                                                    collection_formats)
            post_params.extend(self.files_parameters(files))

        # auth setting
        self.update_params_for_auth(header_params, query_params, auth_settings)

        # body
        if body:
            body = self.sanitize_for_serialization(body)

        # request url
        if _host is None:
            url = self.configuration.host + resource_path
        else:
            # use server/host defined in path or operation instead
            url = _host + resource_path

        # perform request and return response
>       response_data = self.request(
            method, url, query_params=query_params, headers=header_params,
            post_params=post_params, body=body,
            _preload_content=_preload_content,
            _request_timeout=_request_timeout)

../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api_client.py:180:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
method = 'DELETE'
url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/api/v1/namespaces/kserve-ci-e2e-test/serviceaccounts/fail-s3-sa-586023'
query_params = []
headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
post_params = [], body = None, _preload_content = True, _request_timeout = None

    def request(self, method, url, query_params=None, headers=None,
                post_params=None, body=None, _preload_content=True,
                _request_timeout=None):
        """Makes the HTTP request using RESTClient."""
        if method == "GET":
            return self.rest_client.GET(url,
                                        query_params=query_params,
                                        _preload_content=_preload_content,
                                        _request_timeout=_request_timeout,
                                        headers=headers)
        elif method == "HEAD":
            return self.rest_client.HEAD(url,
                                         query_params=query_params,
                                         _preload_content=_preload_content,
                                         _request_timeout=_request_timeout,
                                         headers=headers)
        elif method == "OPTIONS":
            return self.rest_client.OPTIONS(url,
                                            query_params=query_params,
                                            headers=headers,
                                            _preload_content=_preload_content,
                                            _request_timeout=_request_timeout)
        elif method == "POST":
            return self.rest_client.POST(url,
                                         query_params=query_params,
                                         headers=headers,
                                         post_params=post_params,
                                         _preload_content=_preload_content,
                                         _request_timeout=_request_timeout,
                                         body=body)
        elif method == "PUT":
            return self.rest_client.PUT(url,
                                        query_params=query_params,
                                        headers=headers,
                                        post_params=post_params,
                                        _preload_content=_preload_content,
                                        _request_timeout=_request_timeout,
                                        body=body)
        elif method == "PATCH":
            return self.rest_client.PATCH(url,
                                          query_params=query_params,
                                          headers=headers,
                                          post_params=post_params,
                                          _preload_content=_preload_content,
                                          _request_timeout=_request_timeout,
                                          body=body)
[e2e-predictor] elif method == "DELETE": [e2e-predictor] > return self.rest_client.DELETE(url, [e2e-predictor] query_params=query_params, [e2e-predictor] headers=headers, [e2e-predictor] _preload_content=_preload_content, [e2e-predictor] _request_timeout=_request_timeout, [e2e-predictor] body=body) [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api_client.py:415: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/api/v1/namespaces/kserve-ci-e2e-test/serviceaccounts/fail-s3-sa-586023' [e2e-predictor] headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'} [e2e-predictor] query_params = [], body = None, _preload_content = True, _request_timeout = None [e2e-predictor] [e2e-predictor] def DELETE(self, url, headers=None, query_params=None, body=None, [e2e-predictor] _preload_content=True, _request_timeout=None): [e2e-predictor] > return self.request("DELETE", url, [e2e-predictor] headers=headers, [e2e-predictor] query_params=query_params, [e2e-predictor] _preload_content=_preload_content, [e2e-predictor] _request_timeout=_request_timeout, [e2e-predictor] body=body) [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/rest.py:270: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] method = 'DELETE' [e2e-predictor] url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/api/v1/namespaces/kserve-ci-e2e-test/serviceaccounts/fail-s3-sa-586023' [e2e-predictor] query_params = [] [e2e-predictor] headers = {'Accept': 'application/json', 'Content-Type': 
'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'} [e2e-predictor] body = None, post_params = {}, _preload_content = True, _request_timeout = None [e2e-predictor] [e2e-predictor] def request(self, method, url, query_params=None, headers=None, [e2e-predictor] body=None, post_params=None, _preload_content=True, [e2e-predictor] _request_timeout=None): [e2e-predictor] """Perform requests. [e2e-predictor] [e2e-predictor] :param method: http request method [e2e-predictor] :param url: http request url [e2e-predictor] :param query_params: query parameters in the url [e2e-predictor] :param headers: http request headers [e2e-predictor] :param body: request json body, for `application/json` [e2e-predictor] :param post_params: request post parameters, [e2e-predictor] `application/x-www-form-urlencoded` [e2e-predictor] and `multipart/form-data` [e2e-predictor] :param _preload_content: if False, the urllib3.HTTPResponse object will [e2e-predictor] be returned without reading/decoding response [e2e-predictor] data. Default is True. [e2e-predictor] :param _request_timeout: timeout setting for this request. If one [e2e-predictor] number provided, it will be total request [e2e-predictor] timeout. It can also be a pair (tuple) of [e2e-predictor] (connection, read) timeouts. [e2e-predictor] """ [e2e-predictor] method = method.upper() [e2e-predictor] assert method in ['GET', 'HEAD', 'DELETE', 'POST', 'PUT', [e2e-predictor] 'PATCH', 'OPTIONS'] [e2e-predictor] [e2e-predictor] if post_params and body: [e2e-predictor] raise ApiValueError( [e2e-predictor] "body parameter cannot be used with post_params parameter." 
[e2e-predictor] ) [e2e-predictor] [e2e-predictor] post_params = post_params or {} [e2e-predictor] headers = headers or {} [e2e-predictor] [e2e-predictor] timeout = None [e2e-predictor] if _request_timeout: [e2e-predictor] if isinstance(_request_timeout, (int, ) if six.PY3 else (int, long)): # noqa: E501,F821 [e2e-predictor] timeout = urllib3.Timeout(total=_request_timeout) [e2e-predictor] elif (isinstance(_request_timeout, tuple) and [e2e-predictor] len(_request_timeout) == 2): [e2e-predictor] timeout = urllib3.Timeout( [e2e-predictor] connect=_request_timeout[0], read=_request_timeout[1]) [e2e-predictor] [e2e-predictor] if 'Content-Type' not in headers: [e2e-predictor] headers['Content-Type'] = 'application/json' [e2e-predictor] [e2e-predictor] try: [e2e-predictor] # For `POST`, `PUT`, `PATCH`, `OPTIONS`, `DELETE` [e2e-predictor] if method in ['POST', 'PUT', 'PATCH', 'OPTIONS', 'DELETE']: [e2e-predictor] if query_params: [e2e-predictor] url += '?' + urlencode(query_params) [e2e-predictor] if (re.search('json', headers['Content-Type'], re.IGNORECASE) or [e2e-predictor] headers['Content-Type'] == 'application/apply-patch+yaml'): [e2e-predictor] if headers['Content-Type'] == 'application/json-patch+json': [e2e-predictor] if not isinstance(body, list): [e2e-predictor] headers['Content-Type'] = \ [e2e-predictor] 'application/strategic-merge-patch+json' [e2e-predictor] request_body = None [e2e-predictor] if body is not None: [e2e-predictor] request_body = json.dumps(body) [e2e-predictor] > r = self.pool_manager.request( [e2e-predictor] method, url, [e2e-predictor] body=request_body, [e2e-predictor] preload_content=_preload_content, [e2e-predictor] timeout=timeout, [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/rest.py:172: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] method = 'DELETE' [e2e-predictor] url = 
'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/api/v1/namespaces/kserve-ci-e2e-test/serviceaccounts/fail-s3-sa-586023' [e2e-predictor] body = None, fields = None [e2e-predictor] headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'} [e2e-predictor] json = None, urlopen_kw = {'preload_content': True, 'timeout': None} [e2e-predictor] [e2e-predictor] def request( [e2e-predictor] self, [e2e-predictor] method: str, [e2e-predictor] url: str, [e2e-predictor] body: _TYPE_BODY | None = None, [e2e-predictor] fields: _TYPE_FIELDS | None = None, [e2e-predictor] headers: typing.Mapping[str, str] | None = None, [e2e-predictor] json: typing.Any | None = None, [e2e-predictor] **urlopen_kw: typing.Any, [e2e-predictor] ) -> BaseHTTPResponse: [e2e-predictor] """ [e2e-predictor] Make a request using :meth:`urlopen` with the appropriate encoding of [e2e-predictor] ``fields`` based on the ``method`` used. [e2e-predictor] [e2e-predictor] This is a convenience method that requires the least amount of manual [e2e-predictor] effort. It can be used in most situations, while still having the [e2e-predictor] option to drop down to more specific methods when necessary, such as [e2e-predictor] :meth:`request_encode_url`, :meth:`request_encode_body`, [e2e-predictor] or even the lowest level :meth:`urlopen`. [e2e-predictor] [e2e-predictor] :param method: [e2e-predictor] HTTP request method (such as GET, POST, PUT, etc.) [e2e-predictor] [e2e-predictor] :param url: [e2e-predictor] The URL to perform the request on. [e2e-predictor] [e2e-predictor] :param body: [e2e-predictor] Data to send in the request body, either :class:`str`, :class:`bytes`, [e2e-predictor] an iterable of :class:`str`/:class:`bytes`, or a file-like object. [e2e-predictor] [e2e-predictor] :param fields: [e2e-predictor] Data to encode and send in the URL or request body, depending on ``method``. 
[e2e-predictor] [e2e-predictor] :param headers: [e2e-predictor] Dictionary of custom headers to send, such as User-Agent, [e2e-predictor] If-None-Match, etc. If None, pool headers are used. If provided, [e2e-predictor] these headers completely replace any pool-specific headers. [e2e-predictor] [e2e-predictor] :param json: [e2e-predictor] Data to encode and send as JSON with UTF-encoded in the request body. [e2e-predictor] The ``"Content-Type"`` header will be set to ``"application/json"`` [e2e-predictor] unless specified otherwise. [e2e-predictor] """ [e2e-predictor] method = method.upper() [e2e-predictor] [e2e-predictor] if json is not None and body is not None: [e2e-predictor] raise TypeError( [e2e-predictor] "request got values for both 'body' and 'json' parameters which are mutually exclusive" [e2e-predictor] ) [e2e-predictor] [e2e-predictor] if json is not None: [e2e-predictor] if headers is None: [e2e-predictor] headers = self.headers [e2e-predictor] [e2e-predictor] if not ("content-type" in map(str.lower, headers.keys())): [e2e-predictor] headers = HTTPHeaderDict(headers) [e2e-predictor] headers["Content-Type"] = "application/json" [e2e-predictor] [e2e-predictor] body = _json.dumps(json, separators=(",", ":"), ensure_ascii=False).encode( [e2e-predictor] "utf-8" [e2e-predictor] ) [e2e-predictor] [e2e-predictor] if body is not None: [e2e-predictor] urlopen_kw["body"] = body [e2e-predictor] [e2e-predictor] if method in self._encode_url_methods: [e2e-predictor] > return self.request_encode_url( [e2e-predictor] method, [e2e-predictor] url, [e2e-predictor] fields=fields, # type: ignore[arg-type] [e2e-predictor] headers=headers, [e2e-predictor] **urlopen_kw, [e2e-predictor] ) [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/_request_methods.py:135: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] method = 'DELETE' 
[e2e-predictor] url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/api/v1/namespaces/kserve-ci-e2e-test/serviceaccounts/fail-s3-sa-586023' [e2e-predictor] fields = None [e2e-predictor] headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'} [e2e-predictor] urlopen_kw = {'preload_content': True, 'timeout': None} [e2e-predictor] extra_kw = {'headers': {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}, 'preload_content': True, 'timeout': None} [e2e-predictor] [e2e-predictor] def request_encode_url( [e2e-predictor] self, [e2e-predictor] method: str, [e2e-predictor] url: str, [e2e-predictor] fields: _TYPE_ENCODE_URL_FIELDS | None = None, [e2e-predictor] headers: typing.Mapping[str, str] | None = None, [e2e-predictor] **urlopen_kw: str, [e2e-predictor] ) -> BaseHTTPResponse: [e2e-predictor] """ [e2e-predictor] Make a request using :meth:`urlopen` with the ``fields`` encoded in [e2e-predictor] the url. This is useful for request methods like GET, HEAD, DELETE, etc. [e2e-predictor] [e2e-predictor] :param method: [e2e-predictor] HTTP request method (such as GET, POST, PUT, etc.) [e2e-predictor] [e2e-predictor] :param url: [e2e-predictor] The URL to perform the request on. [e2e-predictor] [e2e-predictor] :param fields: [e2e-predictor] Data to encode and send in the URL. [e2e-predictor] [e2e-predictor] :param headers: [e2e-predictor] Dictionary of custom headers to send, such as User-Agent, [e2e-predictor] If-None-Match, etc. If None, pool headers are used. If provided, [e2e-predictor] these headers completely replace any pool-specific headers. 
[e2e-predictor] """ [e2e-predictor] if headers is None: [e2e-predictor] headers = self.headers [e2e-predictor] [e2e-predictor] extra_kw: dict[str, typing.Any] = {"headers": headers} [e2e-predictor] extra_kw.update(urlopen_kw) [e2e-predictor] [e2e-predictor] if fields: [e2e-predictor] url += "?" + urlencode(fields) [e2e-predictor] [e2e-predictor] > return self.urlopen(method, url, **extra_kw) [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/_request_methods.py:182: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] method = 'DELETE' [e2e-predictor] url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/api/v1/namespaces/kserve-ci-e2e-test/serviceaccounts/fail-s3-sa-586023' [e2e-predictor] redirect = True [e2e-predictor] kw = {'assert_same_host': False, 'headers': {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}, 'preload_content': True, 'redirect': False, ...} [e2e-predictor] u = Url(scheme='https', auth=None, host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443, path='/api/v1/namespaces/kserve-ci-e2e-test/serviceaccounts/fail-s3-sa-586023', query=None, fragment=None) [e2e-predictor] conn = [e2e-predictor] [e2e-predictor] def urlopen( # type: ignore[override] [e2e-predictor] self, method: str, url: str, redirect: bool = True, **kw: typing.Any [e2e-predictor] ) -> BaseHTTPResponse: [e2e-predictor] """ [e2e-predictor] Same as :meth:`urllib3.HTTPConnectionPool.urlopen` [e2e-predictor] with custom cross-host redirect logic and only sends the request-uri [e2e-predictor] portion of the ``url``. 
[e2e-predictor] [e2e-predictor] The given ``url`` parameter must be absolute, such that an appropriate [e2e-predictor] :class:`urllib3.connectionpool.ConnectionPool` can be chosen for it. [e2e-predictor] """ [e2e-predictor] u = parse_url(url) [e2e-predictor] [e2e-predictor] if u.scheme is None: [e2e-predictor] warnings.warn( [e2e-predictor] "URLs without a scheme (ie 'https://') are deprecated and will raise an error " [e2e-predictor] "in a future version of urllib3. To avoid this DeprecationWarning ensure all URLs " [e2e-predictor] "start with 'https://' or 'http://'. Read more in this issue: " [e2e-predictor] "https://github.com/urllib3/urllib3/issues/2920", [e2e-predictor] category=DeprecationWarning, [e2e-predictor] stacklevel=2, [e2e-predictor] ) [e2e-predictor] [e2e-predictor] conn = self.connection_from_host(u.host, port=u.port, scheme=u.scheme) [e2e-predictor] [e2e-predictor] kw["assert_same_host"] = False [e2e-predictor] kw["redirect"] = False [e2e-predictor] [e2e-predictor] if "headers" not in kw: [e2e-predictor] kw["headers"] = self.headers [e2e-predictor] [e2e-predictor] if self._proxy_requires_url_absolute_form(u): [e2e-predictor] response = conn.urlopen(method, url, **kw) [e2e-predictor] else: [e2e-predictor] > response = conn.urlopen(method, u.request_uri, **kw) [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/poolmanager.py:457: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] method = 'DELETE' [e2e-predictor] url = '/api/v1/namespaces/kserve-ci-e2e-test/serviceaccounts/fail-s3-sa-586023' [e2e-predictor] body = None [e2e-predictor] headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'} [e2e-predictor] retries = Retry(total=2, connect=None, read=None, redirect=None, status=None) [e2e-predictor] redirect = False, assert_same_host = 
False, timeout = None, pool_timeout = None [e2e-predictor] release_conn = True, chunked = False, body_pos = None, preload_content = True [e2e-predictor] decode_content = True, response_kw = {} [e2e-predictor] parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/api/v1/namespaces/kserve-ci-e2e-test/serviceaccounts/fail-s3-sa-586023', query=None, fragment=None) [e2e-predictor] destination_scheme = None, conn = None, release_this_conn = True [e2e-predictor] http_tunnel_required = False [e2e-predictor] err = NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.c...a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)") [e2e-predictor] clean_exit = False [e2e-predictor] [e2e-predictor] def urlopen( # type: ignore[override] [e2e-predictor] self, [e2e-predictor] method: str, [e2e-predictor] url: str, [e2e-predictor] body: _TYPE_BODY | None = None, [e2e-predictor] headers: typing.Mapping[str, str] | None = None, [e2e-predictor] retries: Retry | bool | int | None = None, [e2e-predictor] redirect: bool = True, [e2e-predictor] assert_same_host: bool = True, [e2e-predictor] timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, [e2e-predictor] pool_timeout: int | None = None, [e2e-predictor] release_conn: bool | None = None, [e2e-predictor] chunked: bool = False, [e2e-predictor] body_pos: _TYPE_BODY_POSITION | None = None, [e2e-predictor] preload_content: bool = True, [e2e-predictor] decode_content: bool = True, [e2e-predictor] **response_kw: typing.Any, [e2e-predictor] ) -> BaseHTTPResponse: [e2e-predictor] """ [e2e-predictor] Get a connection from the pool and perform an HTTP request. This is the [e2e-predictor] lowest level call for making a request, so you'll need to specify all [e2e-predictor] the raw details. [e2e-predictor] [e2e-predictor] .. 
note:: [e2e-predictor] [e2e-predictor] More commonly, it's appropriate to use a convenience method [e2e-predictor] such as :meth:`request`. [e2e-predictor] [e2e-predictor] .. note:: [e2e-predictor] [e2e-predictor] `release_conn` will only behave as expected if [e2e-predictor] `preload_content=False` because we want to make [e2e-predictor] `preload_content=False` the default behaviour someday soon without [e2e-predictor] breaking backwards compatibility. [e2e-predictor] [e2e-predictor] :param method: [e2e-predictor] HTTP request method (such as GET, POST, PUT, etc.) [e2e-predictor] [e2e-predictor] :param url: [e2e-predictor] The URL to perform the request on. [e2e-predictor] [e2e-predictor] :param body: [e2e-predictor] Data to send in the request body, either :class:`str`, :class:`bytes`, [e2e-predictor] an iterable of :class:`str`/:class:`bytes`, or a file-like object. [e2e-predictor] [e2e-predictor] :param headers: [e2e-predictor] Dictionary of custom headers to send, such as User-Agent, [e2e-predictor] If-None-Match, etc. If None, pool headers are used. If provided, [e2e-predictor] these headers completely replace any pool-specific headers. [e2e-predictor] [e2e-predictor] :param retries: [e2e-predictor] Configure the number of retries to allow before raising a [e2e-predictor] :class:`~urllib3.exceptions.MaxRetryError` exception. [e2e-predictor] [e2e-predictor] If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a [e2e-predictor] :class:`~urllib3.util.retry.Retry` object for fine-grained control [e2e-predictor] over different types of retries. [e2e-predictor] Pass an integer number to retry connection errors that many times, [e2e-predictor] but no other types of errors. Pass zero to never retry. [e2e-predictor] [e2e-predictor] If ``False``, then retries are disabled and any exception is raised [e2e-predictor] immediately. Also, instead of raising a MaxRetryError on redirects, [e2e-predictor] the redirect response will be returned. 
[e2e-predictor] [e2e-predictor] :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. [e2e-predictor] [e2e-predictor] :param redirect: [e2e-predictor] If True, automatically handle redirects (status codes 301, 302, [e2e-predictor] 303, 307, 308). Each redirect counts as a retry. Disabling retries [e2e-predictor] will disable redirect, too. [e2e-predictor] [e2e-predictor] :param assert_same_host: [e2e-predictor] If ``True``, will make sure that the host of the pool requests is [e2e-predictor] consistent else will raise HostChangedError. When ``False``, you can [e2e-predictor] use the pool on an HTTP proxy and request foreign hosts. [e2e-predictor] [e2e-predictor] :param timeout: [e2e-predictor] If specified, overrides the default timeout for this one [e2e-predictor] request. It may be a float (in seconds) or an instance of [e2e-predictor] :class:`urllib3.util.Timeout`. [e2e-predictor] [e2e-predictor] :param pool_timeout: [e2e-predictor] If set and the pool is set to block=True, then this method will [e2e-predictor] block for ``pool_timeout`` seconds and raise EmptyPoolError if no [e2e-predictor] connection is available within the time period. [e2e-predictor] [e2e-predictor] :param bool preload_content: [e2e-predictor] If True, the response's body will be preloaded into memory. [e2e-predictor] [e2e-predictor] :param bool decode_content: [e2e-predictor] If True, will attempt to decode the body based on the [e2e-predictor] 'content-encoding' header. [e2e-predictor] [e2e-predictor] :param release_conn: [e2e-predictor] If False, then the urlopen call will not release the connection [e2e-predictor] back into the pool once a response is received (but will release if [e2e-predictor] you read the entire contents of the response such as when [e2e-predictor] `preload_content=True`). This is useful if you're not preloading [e2e-predictor] the response's content immediately. 
You will need to call [e2e-predictor] ``r.release_conn()`` on the response ``r`` to return the connection [e2e-predictor] back into the pool. If None, it takes the value of ``preload_content`` [e2e-predictor] which defaults to ``True``. [e2e-predictor] [e2e-predictor] :param bool chunked: [e2e-predictor] If True, urllib3 will send the body using chunked transfer [e2e-predictor] encoding. Otherwise, urllib3 will send the body using the standard [e2e-predictor] content-length form. Defaults to False. [e2e-predictor] [e2e-predictor] :param int body_pos: [e2e-predictor] Position to seek to in file-like body in the event of a retry or [e2e-predictor] redirect. Typically this won't need to be set because urllib3 will [e2e-predictor] auto-populate the value when needed. [e2e-predictor] """ [e2e-predictor] parsed_url = parse_url(url) [e2e-predictor] destination_scheme = parsed_url.scheme [e2e-predictor] [e2e-predictor] if headers is None: [e2e-predictor] headers = self.headers [e2e-predictor] [e2e-predictor] if not isinstance(retries, Retry): [e2e-predictor] retries = Retry.from_int(retries, redirect=redirect, default=self.retries) [e2e-predictor] [e2e-predictor] if release_conn is None: [e2e-predictor] release_conn = preload_content [e2e-predictor] [e2e-predictor] # Check host [e2e-predictor] if assert_same_host and not self.is_same_host(url): [e2e-predictor] raise HostChangedError(self, url, retries) [e2e-predictor] [e2e-predictor] # Ensure that the URL we're connecting to is properly encoded [e2e-predictor] if url.startswith("/"): [e2e-predictor] url = to_str(_encode_target(url)) [e2e-predictor] else: [e2e-predictor] url = to_str(parsed_url.url) [e2e-predictor] [e2e-predictor] conn = None [e2e-predictor] [e2e-predictor] # Track whether `conn` needs to be released before [e2e-predictor] # returning/raising/recursing. Update this variable if necessary, and [e2e-predictor] # leave `release_conn` constant throughout the function. 
That way, if [e2e-predictor] # the function recurses, the original value of `release_conn` will be [e2e-predictor] # passed down into the recursive call, and its value will be respected. [e2e-predictor] # [e2e-predictor] # See issue #651 [1] for details. [e2e-predictor] # [e2e-predictor] # [1] [e2e-predictor] release_this_conn = release_conn [e2e-predictor] [e2e-predictor] http_tunnel_required = connection_requires_http_tunnel( [e2e-predictor] self.proxy, self.proxy_config, destination_scheme [e2e-predictor] ) [e2e-predictor] [e2e-predictor] # Merge the proxy headers. Only done when not using HTTP CONNECT. We [e2e-predictor] # have to copy the headers dict so we can safely change it without those [e2e-predictor] # changes being reflected in anyone else's copy. [e2e-predictor] if not http_tunnel_required: [e2e-predictor] headers = headers.copy() # type: ignore[attr-defined] [e2e-predictor] headers.update(self.proxy_headers) # type: ignore[union-attr] [e2e-predictor] [e2e-predictor] # Must keep the exception bound to a separate variable or else Python 3 [e2e-predictor] # complains about UnboundLocalError. [e2e-predictor] err = None [e2e-predictor] [e2e-predictor] # Keep track of whether we cleanly exited the except block. This [e2e-predictor] # ensures we do proper cleanup in finally. [e2e-predictor] clean_exit = False [e2e-predictor] [e2e-predictor] # Rewind body position, if needed. Record current position [e2e-predictor] # for future rewinds in the event of a redirect/retry. [e2e-predictor] body_pos = set_file_position(body, body_pos) [e2e-predictor] [e2e-predictor] try: [e2e-predictor] # Request a connection from the queue. [e2e-predictor] timeout_obj = self._get_timeout(timeout) [e2e-predictor] conn = self._get_conn(timeout=pool_timeout) [e2e-predictor] [e2e-predictor] conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] [e2e-predictor] [e2e-predictor] # Is this a closed/new connection that requires CONNECT tunnelling? 
[e2e-predictor] if self.proxy is not None and http_tunnel_required and conn.is_closed: [e2e-predictor] try: [e2e-predictor] self._prepare_proxy(conn) [e2e-predictor] except (BaseSSLError, OSError, SocketTimeout) as e: [e2e-predictor] self._raise_timeout( [e2e-predictor] err=e, url=self.proxy.url, timeout_value=conn.timeout [e2e-predictor] ) [e2e-predictor] raise [e2e-predictor] [e2e-predictor] # If we're going to release the connection in ``finally:``, then [e2e-predictor] # the response doesn't need to know about the connection. Otherwise [e2e-predictor] # it will also try to release it and we'll have a double-release [e2e-predictor] # mess. [e2e-predictor] response_conn = conn if not release_conn else None [e2e-predictor] [e2e-predictor] # Make the request on the HTTPConnection object [e2e-predictor] response = self._make_request( [e2e-predictor] conn, [e2e-predictor] method, [e2e-predictor] url, [e2e-predictor] timeout=timeout_obj, [e2e-predictor] body=body, [e2e-predictor] headers=headers, [e2e-predictor] chunked=chunked, [e2e-predictor] retries=retries, [e2e-predictor] response_conn=response_conn, [e2e-predictor] preload_content=preload_content, [e2e-predictor] decode_content=decode_content, [e2e-predictor] **response_kw, [e2e-predictor] ) [e2e-predictor] [e2e-predictor] # Everything went great! [e2e-predictor] clean_exit = True [e2e-predictor] [e2e-predictor] except EmptyPoolError: [e2e-predictor] # Didn't get a connection from the pool, no need to clean up [e2e-predictor] clean_exit = True [e2e-predictor] release_this_conn = False [e2e-predictor] raise [e2e-predictor] [e2e-predictor] except ( [e2e-predictor] TimeoutError, [e2e-predictor] HTTPException, [e2e-predictor] OSError, [e2e-predictor] ProtocolError, [e2e-predictor] BaseSSLError, [e2e-predictor] SSLError, [e2e-predictor] CertificateError, [e2e-predictor] ProxyError, [e2e-predictor] ) as e: [e2e-predictor] # Discard the connection for these exceptions. 
It will be [e2e-predictor] # replaced during the next _get_conn() call. [e2e-predictor] clean_exit = False [e2e-predictor] new_e: Exception = e [e2e-predictor] if isinstance(e, (BaseSSLError, CertificateError)): [e2e-predictor] new_e = SSLError(e) [e2e-predictor] if isinstance( [e2e-predictor] new_e, [e2e-predictor] ( [e2e-predictor] OSError, [e2e-predictor] NewConnectionError, [e2e-predictor] TimeoutError, [e2e-predictor] SSLError, [e2e-predictor] HTTPException, [e2e-predictor] ), [e2e-predictor] ) and (conn and conn.proxy and not conn.has_connected_to_proxy): [e2e-predictor] new_e = _wrap_proxy_error(new_e, conn.proxy.scheme) [e2e-predictor] elif isinstance(new_e, (OSError, HTTPException)): [e2e-predictor] new_e = ProtocolError("Connection aborted.", new_e) [e2e-predictor] [e2e-predictor] retries = retries.increment( [e2e-predictor] method, url, error=new_e, _pool=self, _stacktrace=sys.exc_info()[2] [e2e-predictor] ) [e2e-predictor] retries.sleep() [e2e-predictor] [e2e-predictor] # Keep track of the error for the retry warning. [e2e-predictor] err = e [e2e-predictor] [e2e-predictor] finally: [e2e-predictor] if not clean_exit: [e2e-predictor] # We hit some kind of exception, handled or otherwise. We need [e2e-predictor] # to throw the connection away unless explicitly told not to. [e2e-predictor] # Close the connection, set the variable to None, and make sure [e2e-predictor] # we put the None back in the pool to avoid leaking it. [e2e-predictor] if conn: [e2e-predictor] conn.close() [e2e-predictor] conn = None [e2e-predictor] release_this_conn = True [e2e-predictor] [e2e-predictor] if release_this_conn: [e2e-predictor] # Put the connection back to be reused. If the connection is [e2e-predictor] # expired then it will be None, which will get replaced with a [e2e-predictor] # fresh connection during _get_conn. 
[e2e-predictor] self._put_conn(conn) [e2e-predictor] [e2e-predictor] if not conn: [e2e-predictor] # Try again [e2e-predictor] log.warning( [e2e-predictor] "Retrying (%r) after connection broken by '%r': %s", retries, err, url [e2e-predictor] ) [e2e-predictor] > return self.urlopen( [e2e-predictor] method, [e2e-predictor] url, [e2e-predictor] body, [e2e-predictor] headers, [e2e-predictor] retries, [e2e-predictor] redirect, [e2e-predictor] assert_same_host, [e2e-predictor] timeout=timeout, [e2e-predictor] pool_timeout=pool_timeout, [e2e-predictor] release_conn=release_conn, [e2e-predictor] chunked=chunked, [e2e-predictor] body_pos=body_pos, [e2e-predictor] preload_content=preload_content, [e2e-predictor] decode_content=decode_content, [e2e-predictor] **response_kw, [e2e-predictor] ) [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:871: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] method = 'DELETE' [e2e-predictor] url = '/api/v1/namespaces/kserve-ci-e2e-test/serviceaccounts/fail-s3-sa-586023' [e2e-predictor] body = None [e2e-predictor] headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'} [e2e-predictor] retries = Retry(total=1, connect=None, read=None, redirect=None, status=None) [e2e-predictor] redirect = False, assert_same_host = False, timeout = None, pool_timeout = None [e2e-predictor] release_conn = True, chunked = False, body_pos = None, preload_content = True [e2e-predictor] decode_content = True, response_kw = {} [e2e-predictor] parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/api/v1/namespaces/kserve-ci-e2e-test/serviceaccounts/fail-s3-sa-586023', query=None, fragment=None) [e2e-predictor] destination_scheme = None, conn = None, release_this_conn = True [e2e-predictor] http_tunnel_required = False 
[e2e-predictor] err = NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.c...a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")
[e2e-predictor] clean_exit = False
[e2e-predictor] 
[e2e-predictor]     def urlopen(  # type: ignore[override]
[e2e-predictor]         self,
[e2e-predictor]         method: str,
[e2e-predictor]         url: str,
[e2e-predictor]         body: _TYPE_BODY | None = None,
[e2e-predictor]         headers: typing.Mapping[str, str] | None = None,
[e2e-predictor]         retries: Retry | bool | int | None = None,
[e2e-predictor]         redirect: bool = True,
[e2e-predictor]         assert_same_host: bool = True,
[e2e-predictor]         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
[e2e-predictor]         pool_timeout: int | None = None,
[e2e-predictor]         release_conn: bool | None = None,
[e2e-predictor]         chunked: bool = False,
[e2e-predictor]         body_pos: _TYPE_BODY_POSITION | None = None,
[e2e-predictor]         preload_content: bool = True,
[e2e-predictor]         decode_content: bool = True,
[e2e-predictor]         **response_kw: typing.Any,
[e2e-predictor]     ) -> BaseHTTPResponse:
[e2e-predictor]         """
[e2e-predictor]         Get a connection from the pool and perform an HTTP request. This is the
[e2e-predictor]         lowest level call for making a request, so you'll need to specify all
[e2e-predictor]         the raw details.
[e2e-predictor] 
[e2e-predictor]         .. note::
[e2e-predictor] 
[e2e-predictor]            More commonly, it's appropriate to use a convenience method
[e2e-predictor]            such as :meth:`request`.
[e2e-predictor] 
[e2e-predictor]         .. note::
[e2e-predictor] 
[e2e-predictor]            `release_conn` will only behave as expected if
[e2e-predictor]            `preload_content=False` because we want to make
[e2e-predictor]            `preload_content=False` the default behaviour someday soon without
[e2e-predictor]            breaking backwards compatibility.
[e2e-predictor] 
[e2e-predictor]         :param method:
[e2e-predictor]             HTTP request method (such as GET, POST, PUT, etc.)
[e2e-predictor] 
[e2e-predictor]         :param url:
[e2e-predictor]             The URL to perform the request on.
[e2e-predictor] 
[e2e-predictor]         :param body:
[e2e-predictor]             Data to send in the request body, either :class:`str`, :class:`bytes`,
[e2e-predictor]             an iterable of :class:`str`/:class:`bytes`, or a file-like object.
[e2e-predictor] 
[e2e-predictor]         :param headers:
[e2e-predictor]             Dictionary of custom headers to send, such as User-Agent,
[e2e-predictor]             If-None-Match, etc. If None, pool headers are used. If provided,
[e2e-predictor]             these headers completely replace any pool-specific headers.
[e2e-predictor] 
[e2e-predictor]         :param retries:
[e2e-predictor]             Configure the number of retries to allow before raising a
[e2e-predictor]             :class:`~urllib3.exceptions.MaxRetryError` exception.
[e2e-predictor] 
[e2e-predictor]             If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a
[e2e-predictor]             :class:`~urllib3.util.retry.Retry` object for fine-grained control
[e2e-predictor]             over different types of retries.
[e2e-predictor]             Pass an integer number to retry connection errors that many times,
[e2e-predictor]             but no other types of errors. Pass zero to never retry.
[e2e-predictor] 
[e2e-predictor]             If ``False``, then retries are disabled and any exception is raised
[e2e-predictor]             immediately. Also, instead of raising a MaxRetryError on redirects,
[e2e-predictor]             the redirect response will be returned.
[e2e-predictor] 
[e2e-predictor]         :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
[e2e-predictor] 
[e2e-predictor]         :param redirect:
[e2e-predictor]             If True, automatically handle redirects (status codes 301, 302,
[e2e-predictor]             303, 307, 308). Each redirect counts as a retry. Disabling retries
[e2e-predictor]             will disable redirect, too.
[e2e-predictor] 
[e2e-predictor]         :param assert_same_host:
[e2e-predictor]             If ``True``, will make sure that the host of the pool requests is
[e2e-predictor]             consistent else will raise HostChangedError. When ``False``, you can
[e2e-predictor]             use the pool on an HTTP proxy and request foreign hosts.
[e2e-predictor] 
[e2e-predictor]         :param timeout:
[e2e-predictor]             If specified, overrides the default timeout for this one
[e2e-predictor]             request. It may be a float (in seconds) or an instance of
[e2e-predictor]             :class:`urllib3.util.Timeout`.
[e2e-predictor] 
[e2e-predictor]         :param pool_timeout:
[e2e-predictor]             If set and the pool is set to block=True, then this method will
[e2e-predictor]             block for ``pool_timeout`` seconds and raise EmptyPoolError if no
[e2e-predictor]             connection is available within the time period.
[e2e-predictor] 
[e2e-predictor]         :param bool preload_content:
[e2e-predictor]             If True, the response's body will be preloaded into memory.
[e2e-predictor] 
[e2e-predictor]         :param bool decode_content:
[e2e-predictor]             If True, will attempt to decode the body based on the
[e2e-predictor]             'content-encoding' header.
[e2e-predictor] 
[e2e-predictor]         :param release_conn:
[e2e-predictor]             If False, then the urlopen call will not release the connection
[e2e-predictor]             back into the pool once a response is received (but will release if
[e2e-predictor]             you read the entire contents of the response such as when
[e2e-predictor]             `preload_content=True`). This is useful if you're not preloading
[e2e-predictor]             the response's content immediately. You will need to call
[e2e-predictor]             ``r.release_conn()`` on the response ``r`` to return the connection
[e2e-predictor]             back into the pool. If None, it takes the value of ``preload_content``
[e2e-predictor]             which defaults to ``True``.
[e2e-predictor] 
[e2e-predictor]         :param bool chunked:
[e2e-predictor]             If True, urllib3 will send the body using chunked transfer
[e2e-predictor]             encoding. Otherwise, urllib3 will send the body using the standard
[e2e-predictor]             content-length form. Defaults to False.
[e2e-predictor] 
[e2e-predictor]         :param int body_pos:
[e2e-predictor]             Position to seek to in file-like body in the event of a retry or
[e2e-predictor]             redirect. Typically this won't need to be set because urllib3 will
[e2e-predictor]             auto-populate the value when needed.
[e2e-predictor]         """
[e2e-predictor]         parsed_url = parse_url(url)
[e2e-predictor]         destination_scheme = parsed_url.scheme
[e2e-predictor] 
[e2e-predictor]         if headers is None:
[e2e-predictor]             headers = self.headers
[e2e-predictor] 
[e2e-predictor]         if not isinstance(retries, Retry):
[e2e-predictor]             retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
[e2e-predictor] 
[e2e-predictor]         if release_conn is None:
[e2e-predictor]             release_conn = preload_content
[e2e-predictor] 
[e2e-predictor]         # Check host
[e2e-predictor]         if assert_same_host and not self.is_same_host(url):
[e2e-predictor]             raise HostChangedError(self, url, retries)
[e2e-predictor] 
[e2e-predictor]         # Ensure that the URL we're connecting to is properly encoded
[e2e-predictor]         if url.startswith("/"):
[e2e-predictor]             url = to_str(_encode_target(url))
[e2e-predictor]         else:
[e2e-predictor]             url = to_str(parsed_url.url)
[e2e-predictor] 
[e2e-predictor]         conn = None
[e2e-predictor] 
[e2e-predictor]         # Track whether `conn` needs to be released before
[e2e-predictor]         # returning/raising/recursing. Update this variable if necessary, and
[e2e-predictor]         # leave `release_conn` constant throughout the function. That way, if
[e2e-predictor]         # the function recurses, the original value of `release_conn` will be
[e2e-predictor]         # passed down into the recursive call, and its value will be respected.
[e2e-predictor]         #
[e2e-predictor]         # See issue #651 [1] for details.
[e2e-predictor]         #
[e2e-predictor]         # [1]
[e2e-predictor]         release_this_conn = release_conn
[e2e-predictor] 
[e2e-predictor]         http_tunnel_required = connection_requires_http_tunnel(
[e2e-predictor]             self.proxy, self.proxy_config, destination_scheme
[e2e-predictor]         )
[e2e-predictor] 
[e2e-predictor]         # Merge the proxy headers. Only done when not using HTTP CONNECT. We
[e2e-predictor]         # have to copy the headers dict so we can safely change it without those
[e2e-predictor]         # changes being reflected in anyone else's copy.
[e2e-predictor]         if not http_tunnel_required:
[e2e-predictor]             headers = headers.copy()  # type: ignore[attr-defined]
[e2e-predictor]             headers.update(self.proxy_headers)  # type: ignore[union-attr]
[e2e-predictor] 
[e2e-predictor]         # Must keep the exception bound to a separate variable or else Python 3
[e2e-predictor]         # complains about UnboundLocalError.
[e2e-predictor]         err = None
[e2e-predictor] 
[e2e-predictor]         # Keep track of whether we cleanly exited the except block. This
[e2e-predictor]         # ensures we do proper cleanup in finally.
[e2e-predictor]         clean_exit = False
[e2e-predictor] 
[e2e-predictor]         # Rewind body position, if needed. Record current position
[e2e-predictor]         # for future rewinds in the event of a redirect/retry.
[e2e-predictor]         body_pos = set_file_position(body, body_pos)
[e2e-predictor] 
[e2e-predictor]         try:
[e2e-predictor]             # Request a connection from the queue.
[e2e-predictor]             timeout_obj = self._get_timeout(timeout)
[e2e-predictor]             conn = self._get_conn(timeout=pool_timeout)
[e2e-predictor] 
[e2e-predictor]             conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]
[e2e-predictor] 
[e2e-predictor]             # Is this a closed/new connection that requires CONNECT tunnelling?
[e2e-predictor]             if self.proxy is not None and http_tunnel_required and conn.is_closed:
[e2e-predictor]                 try:
[e2e-predictor]                     self._prepare_proxy(conn)
[e2e-predictor]                 except (BaseSSLError, OSError, SocketTimeout) as e:
[e2e-predictor]                     self._raise_timeout(
[e2e-predictor]                         err=e, url=self.proxy.url, timeout_value=conn.timeout
[e2e-predictor]                     )
[e2e-predictor]                     raise
[e2e-predictor] 
[e2e-predictor]             # If we're going to release the connection in ``finally:``, then
[e2e-predictor]             # the response doesn't need to know about the connection. Otherwise
[e2e-predictor]             # it will also try to release it and we'll have a double-release
[e2e-predictor]             # mess.
[e2e-predictor]             response_conn = conn if not release_conn else None
[e2e-predictor] 
[e2e-predictor]             # Make the request on the HTTPConnection object
[e2e-predictor]             response = self._make_request(
[e2e-predictor]                 conn,
[e2e-predictor]                 method,
[e2e-predictor]                 url,
[e2e-predictor]                 timeout=timeout_obj,
[e2e-predictor]                 body=body,
[e2e-predictor]                 headers=headers,
[e2e-predictor]                 chunked=chunked,
[e2e-predictor]                 retries=retries,
[e2e-predictor]                 response_conn=response_conn,
[e2e-predictor]                 preload_content=preload_content,
[e2e-predictor]                 decode_content=decode_content,
[e2e-predictor]                 **response_kw,
[e2e-predictor]             )
[e2e-predictor] 
[e2e-predictor]             # Everything went great!
[e2e-predictor]             clean_exit = True
[e2e-predictor] 
[e2e-predictor]         except EmptyPoolError:
[e2e-predictor]             # Didn't get a connection from the pool, no need to clean up
[e2e-predictor]             clean_exit = True
[e2e-predictor]             release_this_conn = False
[e2e-predictor]             raise
[e2e-predictor] 
[e2e-predictor]         except (
[e2e-predictor]             TimeoutError,
[e2e-predictor]             HTTPException,
[e2e-predictor]             OSError,
[e2e-predictor]             ProtocolError,
[e2e-predictor]             BaseSSLError,
[e2e-predictor]             SSLError,
[e2e-predictor]             CertificateError,
[e2e-predictor]             ProxyError,
[e2e-predictor]         ) as e:
[e2e-predictor]             # Discard the connection for these exceptions. It will be
[e2e-predictor]             # replaced during the next _get_conn() call.
[e2e-predictor]             clean_exit = False
[e2e-predictor]             new_e: Exception = e
[e2e-predictor]             if isinstance(e, (BaseSSLError, CertificateError)):
[e2e-predictor]                 new_e = SSLError(e)
[e2e-predictor]             if isinstance(
[e2e-predictor]                 new_e,
[e2e-predictor]                 (
[e2e-predictor]                     OSError,
[e2e-predictor]                     NewConnectionError,
[e2e-predictor]                     TimeoutError,
[e2e-predictor]                     SSLError,
[e2e-predictor]                     HTTPException,
[e2e-predictor]                 ),
[e2e-predictor]             ) and (conn and conn.proxy and not conn.has_connected_to_proxy):
[e2e-predictor]                 new_e = _wrap_proxy_error(new_e, conn.proxy.scheme)
[e2e-predictor]             elif isinstance(new_e, (OSError, HTTPException)):
[e2e-predictor]                 new_e = ProtocolError("Connection aborted.", new_e)
[e2e-predictor] 
[e2e-predictor]             retries = retries.increment(
[e2e-predictor]                 method, url, error=new_e, _pool=self, _stacktrace=sys.exc_info()[2]
[e2e-predictor]             )
[e2e-predictor]             retries.sleep()
[e2e-predictor] 
[e2e-predictor]             # Keep track of the error for the retry warning.
[e2e-predictor]             err = e
[e2e-predictor] 
[e2e-predictor]         finally:
[e2e-predictor]             if not clean_exit:
[e2e-predictor]                 # We hit some kind of exception, handled or otherwise. We need
[e2e-predictor]                 # to throw the connection away unless explicitly told not to.
[e2e-predictor]                 # Close the connection, set the variable to None, and make sure
[e2e-predictor]                 # we put the None back in the pool to avoid leaking it.
[e2e-predictor]                 if conn:
[e2e-predictor]                     conn.close()
[e2e-predictor]                     conn = None
[e2e-predictor]                 release_this_conn = True
[e2e-predictor] 
[e2e-predictor]             if release_this_conn:
[e2e-predictor]                 # Put the connection back to be reused. If the connection is
[e2e-predictor]                 # expired then it will be None, which will get replaced with a
[e2e-predictor]                 # fresh connection during _get_conn.
[e2e-predictor]                 self._put_conn(conn)
[e2e-predictor] 
[e2e-predictor]         if not conn:
[e2e-predictor]             # Try again
[e2e-predictor]             log.warning(
[e2e-predictor]                 "Retrying (%r) after connection broken by '%r': %s", retries, err, url
[e2e-predictor]             )
[e2e-predictor] >           return self.urlopen(
[e2e-predictor]                 method,
[e2e-predictor]                 url,
[e2e-predictor]                 body,
[e2e-predictor]                 headers,
[e2e-predictor]                 retries,
[e2e-predictor]                 redirect,
[e2e-predictor]                 assert_same_host,
[e2e-predictor]                 timeout=timeout,
[e2e-predictor]                 pool_timeout=pool_timeout,
[e2e-predictor]                 release_conn=release_conn,
[e2e-predictor]                 chunked=chunked,
[e2e-predictor]                 body_pos=body_pos,
[e2e-predictor]                 preload_content=preload_content,
[e2e-predictor]                 decode_content=decode_content,
[e2e-predictor]                 **response_kw,
[e2e-predictor]             )
[e2e-predictor] 
[e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:871:
[e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
[e2e-predictor] 
[e2e-predictor] self = 
[e2e-predictor] method = 'DELETE'
[e2e-predictor] url = '/api/v1/namespaces/kserve-ci-e2e-test/serviceaccounts/fail-s3-sa-586023'
[e2e-predictor] body = None
[e2e-predictor] headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
[e2e-predictor] retries = Retry(total=0, connect=None, read=None, redirect=None, status=None)
[e2e-predictor] redirect = False, assert_same_host = False, timeout = None, pool_timeout = None
[e2e-predictor] release_conn = True, chunked = False, body_pos = None, preload_content = True
[e2e-predictor] decode_content = True, response_kw = {}
[e2e-predictor] parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/api/v1/namespaces/kserve-ci-e2e-test/serviceaccounts/fail-s3-sa-586023', query=None, fragment=None)
[e2e-predictor] destination_scheme = None, conn = None, release_this_conn = True
[e2e-predictor] http_tunnel_required = False
[e2e-predictor] err = NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.c...a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")
[e2e-predictor] clean_exit = False
[e2e-predictor]                 self._put_conn(conn)
[e2e-predictor] 
[e2e-predictor]         if not conn:
[e2e-predictor]             # Try again
[e2e-predictor]             log.warning(
[e2e-predictor]                 "Retrying (%r) after connection broken by '%r': %s", retries, err, url
[e2e-predictor]             )
[e2e-predictor] >           return self.urlopen(
[e2e-predictor]                 method,
[e2e-predictor]                 url,
[e2e-predictor]                 body,
[e2e-predictor]                 headers,
[e2e-predictor]                 retries,
[e2e-predictor]                 redirect,
[e2e-predictor]                 assert_same_host,
[e2e-predictor]                 timeout=timeout,
[e2e-predictor]                 pool_timeout=pool_timeout,
[e2e-predictor]                 release_conn=release_conn,
[e2e-predictor]                 chunked=chunked,
[e2e-predictor]                 body_pos=body_pos,
[e2e-predictor]                 preload_content=preload_content,
[e2e-predictor]                 decode_content=decode_content,
[e2e-predictor]                 **response_kw,
[e2e-predictor]             )
[e2e-predictor] 
[e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:871:
[e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
[e2e-predictor] 
[e2e-predictor] self = 
[e2e-predictor] method = 'DELETE'
[e2e-predictor] url = '/api/v1/namespaces/kserve-ci-e2e-test/serviceaccounts/fail-s3-sa-586023'
[e2e-predictor] body = None
[e2e-predictor] headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
[e2e-predictor] retries = Retry(total=0, connect=None, read=None, redirect=None, status=None)
[e2e-predictor] redirect = False, assert_same_host = False, timeout = None, pool_timeout = None
[e2e-predictor] release_conn = True, chunked = False, body_pos = None, preload_content = True
[e2e-predictor] decode_content = True, response_kw = {}
[e2e-predictor] parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/api/v1/namespaces/kserve-ci-e2e-test/serviceaccounts/fail-s3-sa-586023', query=None, fragment=None)
[e2e-predictor] destination_scheme = None, conn = None, release_this_conn = True
[e2e-predictor] http_tunnel_required = False,
err = None, clean_exit = False

    def urlopen(  # type: ignore[override]
        self,
        method: str,
        url: str,
        body: _TYPE_BODY | None = None,
        headers: typing.Mapping[str, str] | None = None,
        retries: Retry | bool | int | None = None,
        redirect: bool = True,
        assert_same_host: bool = True,
        timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
        pool_timeout: int | None = None,
        release_conn: bool | None = None,
        chunked: bool = False,
        body_pos: _TYPE_BODY_POSITION | None = None,
        preload_content: bool = True,
        decode_content: bool = True,
        **response_kw: typing.Any,
    ) -> BaseHTTPResponse:
        """
        Get a connection from the pool and perform an HTTP request. This is the
        lowest level call for making a request, so you'll need to specify all
        the raw details.

        .. note::

           More commonly, it's appropriate to use a convenience method
           such as :meth:`request`.

        .. note::

           `release_conn` will only behave as expected if
           `preload_content=False` because we want to make
           `preload_content=False` the default behaviour someday soon without
           breaking backwards compatibility.

        :param method:
            HTTP request method (such as GET, POST, PUT, etc.)

        :param url:
            The URL to perform the request on.

        :param body:
            Data to send in the request body, either :class:`str`, :class:`bytes`,
            an iterable of :class:`str`/:class:`bytes`, or a file-like object.

        :param headers:
            Dictionary of custom headers to send, such as User-Agent,
            If-None-Match, etc. If None, pool headers are used. If provided,
            these headers completely replace any pool-specific headers.

        :param retries:
            Configure the number of retries to allow before raising a
            :class:`~urllib3.exceptions.MaxRetryError` exception.

            If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a
            :class:`~urllib3.util.retry.Retry` object for fine-grained control
            over different types of retries.
            Pass an integer number to retry connection errors that many times,
            but no other types of errors. Pass zero to never retry.

            If ``False``, then retries are disabled and any exception is raised
            immediately. Also, instead of raising a MaxRetryError on redirects,
            the redirect response will be returned.

        :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.

        :param redirect:
            If True, automatically handle redirects (status codes 301, 302,
            303, 307, 308). Each redirect counts as a retry. Disabling retries
            will disable redirect, too.

        :param assert_same_host:
            If ``True``, will make sure that the host of the pool requests is
            consistent else will raise HostChangedError. When ``False``, you can
            use the pool on an HTTP proxy and request foreign hosts.

        :param timeout:
            If specified, overrides the default timeout for this one
            request. It may be a float (in seconds) or an instance of
            :class:`urllib3.util.Timeout`.

        :param pool_timeout:
            If set and the pool is set to block=True, then this method will
            block for ``pool_timeout`` seconds and raise EmptyPoolError if no
            connection is available within the time period.

        :param bool preload_content:
            If True, the response's body will be preloaded into memory.

        :param bool decode_content:
            If True, will attempt to decode the body based on the
            'content-encoding' header.

        :param release_conn:
            If False, then the urlopen call will not release the connection
            back into the pool once a response is received (but will release if
            you read the entire contents of the response such as when
            `preload_content=True`). This is useful if you're not preloading
            the response's content immediately. You will need to call
            ``r.release_conn()`` on the response ``r`` to return the connection
            back into the pool. If None, it takes the value of ``preload_content``
            which defaults to ``True``.

        :param bool chunked:
            If True, urllib3 will send the body using chunked transfer
            encoding. Otherwise, urllib3 will send the body using the standard
            content-length form. Defaults to False.

        :param int body_pos:
            Position to seek to in file-like body in the event of a retry or
            redirect.
            Typically this won't need to be set because urllib3 will
            auto-populate the value when needed.
        """
        parsed_url = parse_url(url)
        destination_scheme = parsed_url.scheme

        if headers is None:
            headers = self.headers

        if not isinstance(retries, Retry):
            retries = Retry.from_int(retries, redirect=redirect, default=self.retries)

        if release_conn is None:
            release_conn = preload_content

        # Check host
        if assert_same_host and not self.is_same_host(url):
            raise HostChangedError(self, url, retries)

        # Ensure that the URL we're connecting to is properly encoded
        if url.startswith("/"):
            url = to_str(_encode_target(url))
        else:
            url = to_str(parsed_url.url)

        conn = None

        # Track whether `conn` needs to be released before
        # returning/raising/recursing. Update this variable if necessary, and
        # leave `release_conn` constant throughout the function. That way, if
        # the function recurses, the original value of `release_conn` will be
        # passed down into the recursive call, and its value will be respected.
        #
        # See issue #651 [1] for details.
        #
        # [1]
        release_this_conn = release_conn

        http_tunnel_required = connection_requires_http_tunnel(
            self.proxy, self.proxy_config, destination_scheme
        )

        # Merge the proxy headers. Only done when not using HTTP CONNECT.
        # We have to copy the headers dict so we can safely change it without those
        # changes being reflected in anyone else's copy.
        if not http_tunnel_required:
            headers = headers.copy()  # type: ignore[attr-defined]
            headers.update(self.proxy_headers)  # type: ignore[union-attr]

        # Must keep the exception bound to a separate variable or else Python 3
        # complains about UnboundLocalError.
        err = None

        # Keep track of whether we cleanly exited the except block. This
        # ensures we do proper cleanup in finally.
        clean_exit = False

        # Rewind body position, if needed. Record current position
        # for future rewinds in the event of a redirect/retry.
        body_pos = set_file_position(body, body_pos)

        try:
            # Request a connection from the queue.
            timeout_obj = self._get_timeout(timeout)
            conn = self._get_conn(timeout=pool_timeout)

            conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]

            # Is this a closed/new connection that requires CONNECT tunnelling?
            if self.proxy is not None and http_tunnel_required and conn.is_closed:
                try:
                    self._prepare_proxy(conn)
                except (BaseSSLError, OSError, SocketTimeout) as e:
                    self._raise_timeout(
                        err=e, url=self.proxy.url, timeout_value=conn.timeout
                    )
                    raise

            # If we're going to release the connection in ``finally:``, then
            # the response doesn't need to know about the connection.
            # Otherwise it will also try to release it and we'll have a double-release
            # mess.
            response_conn = conn if not release_conn else None

            # Make the request on the HTTPConnection object
            response = self._make_request(
                conn,
                method,
                url,
                timeout=timeout_obj,
                body=body,
                headers=headers,
                chunked=chunked,
                retries=retries,
                response_conn=response_conn,
                preload_content=preload_content,
                decode_content=decode_content,
                **response_kw,
            )

            # Everything went great!
            clean_exit = True

        except EmptyPoolError:
            # Didn't get a connection from the pool, no need to clean up
            clean_exit = True
            release_this_conn = False
            raise

        except (
            TimeoutError,
            HTTPException,
            OSError,
            ProtocolError,
            BaseSSLError,
            SSLError,
            CertificateError,
            ProxyError,
        ) as e:
            # Discard the connection for these exceptions. It will be
            # replaced during the next _get_conn() call.
            clean_exit = False
            new_e: Exception = e
            if isinstance(e, (BaseSSLError, CertificateError)):
                new_e = SSLError(e)
            if isinstance(
                new_e,
                (
                    OSError,
                    NewConnectionError,
                    TimeoutError,
                    SSLError,
                    HTTPException,
                ),
            ) and (conn and conn.proxy and not conn.has_connected_to_proxy):
                new_e = _wrap_proxy_error(new_e, conn.proxy.scheme)
            elif isinstance(new_e, (OSError, HTTPException)):
                new_e = ProtocolError("Connection aborted.", new_e)

>           retries = retries.increment(
                method, url, error=new_e, _pool=self, _stacktrace=sys.exc_info()[2]
            )

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:841:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = Retry(total=0, connect=None, read=None, redirect=None, status=None)
method = 'DELETE'
url = '/api/v1/namespaces/kserve-ci-e2e-test/serviceaccounts/fail-s3-sa-586023'
response = None
error = NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.c...a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")
_pool =
_stacktrace =

    def increment(
        self,
        method: str | None = None,
        url: str | None = None,
        response: BaseHTTPResponse | None = None,
        error: Exception | None = None,
        _pool: ConnectionPool | None = None,
        _stacktrace: TracebackType | None = None,
    ) -> Self:
        """Return a new Retry object with incremented retry counters.

        :param response: A response object, or None, if the server did not
            return a response.
        :type response: :class:`~urllib3.response.BaseHTTPResponse`
        :param Exception error: An error encountered during the request, or
            None if the response was received successfully.

        :return: A new ``Retry`` object.
        """
        if self.total is False and error:
            # Disabled, indicate to re-raise the error.
            raise reraise(type(error), error, _stacktrace)

        total = self.total
        if total is not None:
            total -= 1

        connect = self.connect
        read = self.read
        redirect = self.redirect
        status_count = self.status
        other = self.other
        cause = "unknown"
        status = None
        redirect_location = None

        if error and self._is_connection_error(error):
            # Connect retry?
            if connect is False:
                raise reraise(type(error), error, _stacktrace)
            elif connect is not None:
                connect -= 1

        elif error and self._is_read_error(error):
            # Read retry?
            if read is False or method is None or not self._is_method_retryable(method):
                raise reraise(type(error), error, _stacktrace)
            elif read is not None:
                read -= 1

        elif error:
            # Other retry?
            if other is not None:
                other -= 1

        elif response and response.get_redirect_location():
            # Redirect retry?
            if redirect is not None:
                redirect -= 1
            cause = "too many redirects"
            response_redirect_location = response.get_redirect_location()
            if response_redirect_location:
                redirect_location = response_redirect_location
            status = response.status

        else:
            # Incrementing because of a server error like a 500 in
            # status_forcelist and the given method is in the allowed_methods
            cause = ResponseError.GENERIC_ERROR
            if response and response.status:
                if status_count is not None:
                    status_count -= 1
                cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
                status = response.status

        history = self.history + (
            RequestHistory(method, url, error, status, redirect_location),
        )

        new_retry = self.new(
            total=total,
            connect=connect,
            read=read,
            redirect=redirect,
            status=status_count,
            other=other,
            history=history,
        )

        if new_retry.is_exhausted():
            reason = error or ResponseError(cause)
>           raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
E           urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Max retries exceeded with url: /api/v1/namespaces/kserve-ci-e2e-test/serviceaccounts/fail-s3-sa-586023 (Caused by
NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)"))

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError
------------------------------ Captured log call -------------------------------
INFO     kserve.trace:test_pod_watch.py:446 Creating invalid S3 secret and service account
WARNING  urllib3.connectionpool:connectionpool.py:868 Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")': /api/v1/namespaces/kserve-ci-e2e-test/secrets/fail-s3-secret-586023
WARNING  urllib3.connectionpool:connectionpool.py:868 Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")': /api/v1/namespaces/kserve-ci-e2e-test/secrets/fail-s3-secret-586023
WARNING  urllib3.connectionpool:connectionpool.py:868 Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")': /api/v1/namespaces/kserve-ci-e2e-test/secrets/fail-s3-secret-586023
WARNING  urllib3.connectionpool:connectionpool.py:868 Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")': /api/v1/namespaces/kserve-ci-e2e-test/serviceaccounts/fail-s3-sa-586023
WARNING  urllib3.connectionpool:connectionpool.py:868 Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")': /api/v1/namespaces/kserve-ci-e2e-test/serviceaccounts/fail-s3-sa-586023
WARNING  urllib3.connectionpool:connectionpool.py:868 Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")': /api/v1/namespaces/kserve-ci-e2e-test/serviceaccounts/fail-s3-sa-586023
__________________________ test_predictive_sklearn_v1 __________________________
[gw0] linux -- Python 3.11.13 /workspace/source/python/kserve/.venv/bin/python

self =
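The three `Retrying (Retry(total=2/1/0, ...))` warnings per request above are urllib3's default budget of three retries being spent; on the final attempt `Retry.increment` drives `total` below zero and raises `MaxRetryError` wrapping the root-cause `NameResolutionError`. A minimal sketch of that exhaustion step, assuming urllib3 (1.26+ or 2.x) is importable; the method, URL, and error values here are illustrative, not taken from the run:

```python
from urllib3.exceptions import MaxRetryError
from urllib3.util.retry import Retry

# Mirrors the failing frame above: a Retry whose budget is already spent.
retry = Retry(total=0, connect=None, read=None, redirect=None, status=None)

try:
    # increment() decrements `total` to -1, builds the new Retry, sees it is
    # exhausted, and raises MaxRetryError wrapping the root-cause error.
    retry.increment(
        method="DELETE",
        url="/api/v1/namespaces/example/serviceaccounts/example-sa",
        error=OSError("Name or service not known"),
    )
except MaxRetryError as exc:
    print(type(exc.reason).__name__)  # the wrapped root cause: OSError
```

Note that a plain connection-level `OSError` falls through to the "other" retry branch shown in `increment` above, so only `total` is decremented, matching the `connect=None, read=None` counters in the log.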
    def _new_conn(self) -> socket.socket:
        """Establish a socket connection and set nodelay settings on it.

        :return: New socket connection.
        """
        try:
>           sock = connection.create_connection(
                (self._dns_host, self.port),
                self.timeout,
                source_address=self.source_address,
                socket_options=self.socket_options,
            )

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connection.py:204:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

address = ('a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', 6443)
timeout = None, source_address = None, socket_options = [(6, 1, 1)]

    def create_connection(
        address: tuple[str, int],
        timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
        source_address: tuple[str, int] | None = None,
        socket_options: _TYPE_SOCKET_OPTIONS | None = None,
    ) -> socket.socket:
        """Connect to *address* and return the socket object.

        Convenience function. Connect to *address* (a 2-tuple ``(host,
        port)``) and return the socket object. Passing the optional
        *timeout* parameter will set the timeout on the socket instance
        before attempting to connect. If no *timeout* is supplied, the
        global default timeout setting returned by :func:`socket.getdefaulttimeout`
        is used. If *source_address* is set it must be a tuple of (host, port)
        for the socket to bind as a source address before making the connection.
        An host of '' or port 0 tells the OS to use the default.
        """

        host, port = address
        if host.startswith("["):
            host = host.strip("[]")
        err = None

        # Using the value from allowed_gai_family() in the context of getaddrinfo lets
        # us select whether to work with IPv4 DNS records, IPv6 records, or both.
        # The original create_connection function always returns all records.
        family = allowed_gai_family()

        try:
            host.encode("idna")
        except UnicodeError:
            raise LocationParseError(f"'{host}', label empty or too long") from None

>       for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/util/connection.py:60:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

host = 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com'
port = 6443, family =
type = , proto = 0, flags = 0

    def getaddrinfo(host, port, family=0, type=0, proto=0, flags=0):
        """Resolve host and port into list of address info entries.

        Translate the host/port argument into a sequence of 5-tuples that contain
        all the necessary arguments for creating a socket connected to that service.
        host is a domain name, a string representation of an IPv4/v6 address or
        None. port is a string service name such as 'http', a numeric port number or
        None. By passing None as the value of host and port, you can pass NULL to
        the underlying C API.

        The family, type and proto arguments can be optionally specified in order to
        narrow the list of addresses returned. Passing zero as a value for each of
        these arguments selects the full range of results.
        """
        # We override this function since we want to translate the numeric family
        # and socket type values to enum constants.
        addrlist = []
>       for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
E       socket.gaierror: [Errno -2] Name or service not known

/usr/lib64/python3.11/socket.py:974: gaierror

The above exception was the direct cause of the following exception:

self =
method = 'POST'
url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io...
"128Mi"}}, "runtime": "kserve-predictiveserver", "storageUri": "gs://kfserving-examples/models/sklearn/1.0/model"}}}}'
headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'})
retries = Retry(total=0, connect=None, read=None, redirect=None, status=None)
redirect = False, assert_same_host = False, timeout = None, pool_timeout = None
release_conn = True, chunked = False, body_pos = None, preload_content = True
decode_content = True, response_kw = {}
parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices', query=None, fragment=None)
destination_scheme = None, conn = None, release_this_conn = True
http_tunnel_required = False, err = None, clean_exit = False

    def urlopen(  # type: ignore[override]
        self,
        method: str,
        url: str,
        body: _TYPE_BODY | None = None,
        headers: typing.Mapping[str, str] | None = None,
        retries: Retry | bool | int | None = None,
        redirect: bool = True,
        assert_same_host: bool = True,
        timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
        pool_timeout: int | None = None,
        release_conn: bool | None = None,
        chunked: bool = False,
        body_pos: _TYPE_BODY_POSITION | None = None,
        preload_content: bool = True,
        decode_content: bool = True,
        **response_kw: typing.Any,
    ) -> BaseHTTPResponse:
        """
        Get a connection from the pool and perform an HTTP request.
        """
        parsed_url = parse_url(url)
        destination_scheme = parsed_url.scheme

        if headers is None:
            headers = self.headers

        if not isinstance(retries, Retry):
            retries = Retry.from_int(retries, redirect=redirect, default=self.retries)

        if release_conn is None:
            release_conn = preload_content

        # Check host
        if assert_same_host and not self.is_same_host(url):
            raise HostChangedError(self, url, retries)

        # Ensure that the URL we're connecting to is properly encoded
        if url.startswith("/"):
            url = to_str(_encode_target(url))
        else:
            url = to_str(parsed_url.url)

        conn = None

        # Track whether `conn` needs to be released before
        # returning/raising/recursing. Update this variable if necessary, and
        # leave `release_conn` constant throughout the function. That way, if
        # the function recurses, the original value of `release_conn` will be
        # passed down into the recursive call, and its value will be respected.
        #
        # See issue #651 [1] for details.
        #
        # [1]
        release_this_conn = release_conn

        http_tunnel_required = connection_requires_http_tunnel(
            self.proxy, self.proxy_config, destination_scheme
        )

        # Merge the proxy headers. Only done when not using HTTP CONNECT. We
        # have to copy the headers dict so we can safely change it without those
        # changes being reflected in anyone else's copy.
        if not http_tunnel_required:
            headers = headers.copy()  # type: ignore[attr-defined]
            headers.update(self.proxy_headers)  # type: ignore[union-attr]

        # Must keep the exception bound to a separate variable or else Python 3
        # complains about UnboundLocalError.
        err = None

        # Keep track of whether we cleanly exited the except block. This
        # ensures we do proper cleanup in finally.
        clean_exit = False

        # Rewind body position, if needed. Record current position
        # for future rewinds in the event of a redirect/retry.
        body_pos = set_file_position(body, body_pos)

        try:
            # Request a connection from the queue.
            timeout_obj = self._get_timeout(timeout)
            conn = self._get_conn(timeout=pool_timeout)

            conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]

            # Is this a closed/new connection that requires CONNECT tunnelling?
            if self.proxy is not None and http_tunnel_required and conn.is_closed:
                try:
                    self._prepare_proxy(conn)
                except (BaseSSLError, OSError, SocketTimeout) as e:
                    self._raise_timeout(
                        err=e, url=self.proxy.url, timeout_value=conn.timeout
                    )
                    raise

            # If we're going to release the connection in ``finally:``, then
            # the response doesn't need to know about the connection. Otherwise
            # it will also try to release it and we'll have a double-release
            # mess.
[e2e-predictor] response_conn = conn if not release_conn else None [e2e-predictor] [e2e-predictor] # Make the request on the HTTPConnection object [e2e-predictor] > response = self._make_request( [e2e-predictor] conn, [e2e-predictor] method, [e2e-predictor] url, [e2e-predictor] timeout=timeout_obj, [e2e-predictor] body=body, [e2e-predictor] headers=headers, [e2e-predictor] chunked=chunked, [e2e-predictor] retries=retries, [e2e-predictor] response_conn=response_conn, [e2e-predictor] preload_content=preload_content, [e2e-predictor] decode_content=decode_content, [e2e-predictor] **response_kw, [e2e-predictor] ) [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:787: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] conn = [e2e-predictor] method = 'POST' [e2e-predictor] url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices' [e2e-predictor] body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io... 
"128Mi"}}, "runtime": "kserve-predictiveserver", "storageUri": "gs://kfserving-examples/models/sklearn/1.0/model"}}}}' [e2e-predictor] headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'}) [e2e-predictor] retries = Retry(total=0, connect=None, read=None, redirect=None, status=None) [e2e-predictor] timeout = Timeout(connect=None, read=None, total=None), chunked = False [e2e-predictor] response_conn = None, preload_content = True, decode_content = True [e2e-predictor] enforce_content_length = True [e2e-predictor] [e2e-predictor] def _make_request( [e2e-predictor] self, [e2e-predictor] conn: BaseHTTPConnection, [e2e-predictor] method: str, [e2e-predictor] url: str, [e2e-predictor] body: _TYPE_BODY | None = None, [e2e-predictor] headers: typing.Mapping[str, str] | None = None, [e2e-predictor] retries: Retry | None = None, [e2e-predictor] timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, [e2e-predictor] chunked: bool = False, [e2e-predictor] response_conn: BaseHTTPConnection | None = None, [e2e-predictor] preload_content: bool = True, [e2e-predictor] decode_content: bool = True, [e2e-predictor] enforce_content_length: bool = True, [e2e-predictor] ) -> BaseHTTPResponse: [e2e-predictor] """ [e2e-predictor] Perform a request on a given urllib connection object taken from our [e2e-predictor] pool. [e2e-predictor] [e2e-predictor] :param conn: [e2e-predictor] a connection from one of our connection pools [e2e-predictor] [e2e-predictor] :param method: [e2e-predictor] HTTP request method (such as GET, POST, PUT, etc.) [e2e-predictor] [e2e-predictor] :param url: [e2e-predictor] The URL to perform the request on. [e2e-predictor] [e2e-predictor] :param body: [e2e-predictor] Data to send in the request body, either :class:`str`, :class:`bytes`, [e2e-predictor] an iterable of :class:`str`/:class:`bytes`, or a file-like object. 
[e2e-predictor] [e2e-predictor] :param headers: [e2e-predictor] Dictionary of custom headers to send, such as User-Agent, [e2e-predictor] If-None-Match, etc. If None, pool headers are used. If provided, [e2e-predictor] these headers completely replace any pool-specific headers. [e2e-predictor] [e2e-predictor] :param retries: [e2e-predictor] Configure the number of retries to allow before raising a [e2e-predictor] :class:`~urllib3.exceptions.MaxRetryError` exception. [e2e-predictor] [e2e-predictor] Pass ``None`` to retry until you receive a response. Pass a [e2e-predictor] :class:`~urllib3.util.retry.Retry` object for fine-grained control [e2e-predictor] over different types of retries. [e2e-predictor] Pass an integer number to retry connection errors that many times, [e2e-predictor] but no other types of errors. Pass zero to never retry. [e2e-predictor] [e2e-predictor] If ``False``, then retries are disabled and any exception is raised [e2e-predictor] immediately. Also, instead of raising a MaxRetryError on redirects, [e2e-predictor] the redirect response will be returned. [e2e-predictor] [e2e-predictor] :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. [e2e-predictor] [e2e-predictor] :param timeout: [e2e-predictor] If specified, overrides the default timeout for this one [e2e-predictor] request. It may be a float (in seconds) or an instance of [e2e-predictor] :class:`urllib3.util.Timeout`. [e2e-predictor] [e2e-predictor] :param chunked: [e2e-predictor] If True, urllib3 will send the body using chunked transfer [e2e-predictor] encoding. Otherwise, urllib3 will send the body using the standard [e2e-predictor] content-length form. Defaults to False. [e2e-predictor] [e2e-predictor] :param response_conn: [e2e-predictor] Set this to ``None`` if you will handle releasing the connection or [e2e-predictor] set the connection to have the response release it. 
[e2e-predictor] [e2e-predictor] :param preload_content: [e2e-predictor] If True, the response's body will be preloaded during construction. [e2e-predictor] [e2e-predictor] :param decode_content: [e2e-predictor] If True, will attempt to decode the body based on the [e2e-predictor] 'content-encoding' header. [e2e-predictor] [e2e-predictor] :param enforce_content_length: [e2e-predictor] Enforce content length checking. Body returned by server must match [e2e-predictor] value of Content-Length header, if present. Otherwise, raise error. [e2e-predictor] """ [e2e-predictor] self.num_requests += 1 [e2e-predictor] [e2e-predictor] timeout_obj = self._get_timeout(timeout) [e2e-predictor] timeout_obj.start_connect() [e2e-predictor] conn.timeout = Timeout.resolve_default_timeout(timeout_obj.connect_timeout) [e2e-predictor] [e2e-predictor] try: [e2e-predictor] # Trigger any extra validation we need to do. [e2e-predictor] try: [e2e-predictor] self._validate_conn(conn) [e2e-predictor] except (SocketTimeout, BaseSSLError) as e: [e2e-predictor] self._raise_timeout(err=e, url=url, timeout_value=conn.timeout) [e2e-predictor] raise [e2e-predictor] [e2e-predictor] # _validate_conn() starts the connection to an HTTPS proxy [e2e-predictor] # so we need to wrap errors with 'ProxyError' here too. 
        except (
            OSError,
            NewConnectionError,
            TimeoutError,
            BaseSSLError,
            CertificateError,
            SSLError,
        ) as e:
            new_e: Exception = e
            if isinstance(e, (BaseSSLError, CertificateError)):
                new_e = SSLError(e)
            # If the connection didn't successfully connect to it's proxy
            # then there
            if isinstance(
                new_e, (OSError, NewConnectionError, TimeoutError, SSLError)
            ) and (conn and conn.proxy and not conn.has_connected_to_proxy):
                new_e = _wrap_proxy_error(new_e, conn.proxy.scheme)
>           raise new_e

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:488:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
conn =
method = 'POST'
url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io...
"128Mi"}}, "runtime": "kserve-predictiveserver", "storageUri": "gs://kfserving-examples/models/sklearn/1.0/model"}}}}' [e2e-predictor] headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'}) [e2e-predictor] retries = Retry(total=0, connect=None, read=None, redirect=None, status=None) [e2e-predictor] timeout = Timeout(connect=None, read=None, total=None), chunked = False [e2e-predictor] response_conn = None, preload_content = True, decode_content = True [e2e-predictor] enforce_content_length = True [e2e-predictor] [e2e-predictor] def _make_request( [e2e-predictor] self, [e2e-predictor] conn: BaseHTTPConnection, [e2e-predictor] method: str, [e2e-predictor] url: str, [e2e-predictor] body: _TYPE_BODY | None = None, [e2e-predictor] headers: typing.Mapping[str, str] | None = None, [e2e-predictor] retries: Retry | None = None, [e2e-predictor] timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, [e2e-predictor] chunked: bool = False, [e2e-predictor] response_conn: BaseHTTPConnection | None = None, [e2e-predictor] preload_content: bool = True, [e2e-predictor] decode_content: bool = True, [e2e-predictor] enforce_content_length: bool = True, [e2e-predictor] ) -> BaseHTTPResponse: [e2e-predictor] """ [e2e-predictor] Perform a request on a given urllib connection object taken from our [e2e-predictor] pool. [e2e-predictor] [e2e-predictor] :param conn: [e2e-predictor] a connection from one of our connection pools [e2e-predictor] [e2e-predictor] :param method: [e2e-predictor] HTTP request method (such as GET, POST, PUT, etc.) [e2e-predictor] [e2e-predictor] :param url: [e2e-predictor] The URL to perform the request on. [e2e-predictor] [e2e-predictor] :param body: [e2e-predictor] Data to send in the request body, either :class:`str`, :class:`bytes`, [e2e-predictor] an iterable of :class:`str`/:class:`bytes`, or a file-like object. 
[e2e-predictor] [e2e-predictor] :param headers: [e2e-predictor] Dictionary of custom headers to send, such as User-Agent, [e2e-predictor] If-None-Match, etc. If None, pool headers are used. If provided, [e2e-predictor] these headers completely replace any pool-specific headers. [e2e-predictor] [e2e-predictor] :param retries: [e2e-predictor] Configure the number of retries to allow before raising a [e2e-predictor] :class:`~urllib3.exceptions.MaxRetryError` exception. [e2e-predictor] [e2e-predictor] Pass ``None`` to retry until you receive a response. Pass a [e2e-predictor] :class:`~urllib3.util.retry.Retry` object for fine-grained control [e2e-predictor] over different types of retries. [e2e-predictor] Pass an integer number to retry connection errors that many times, [e2e-predictor] but no other types of errors. Pass zero to never retry. [e2e-predictor] [e2e-predictor] If ``False``, then retries are disabled and any exception is raised [e2e-predictor] immediately. Also, instead of raising a MaxRetryError on redirects, [e2e-predictor] the redirect response will be returned. [e2e-predictor] [e2e-predictor] :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. [e2e-predictor] [e2e-predictor] :param timeout: [e2e-predictor] If specified, overrides the default timeout for this one [e2e-predictor] request. It may be a float (in seconds) or an instance of [e2e-predictor] :class:`urllib3.util.Timeout`. [e2e-predictor] [e2e-predictor] :param chunked: [e2e-predictor] If True, urllib3 will send the body using chunked transfer [e2e-predictor] encoding. Otherwise, urllib3 will send the body using the standard [e2e-predictor] content-length form. Defaults to False. [e2e-predictor] [e2e-predictor] :param response_conn: [e2e-predictor] Set this to ``None`` if you will handle releasing the connection or [e2e-predictor] set the connection to have the response release it. 
[e2e-predictor] [e2e-predictor] :param preload_content: [e2e-predictor] If True, the response's body will be preloaded during construction. [e2e-predictor] [e2e-predictor] :param decode_content: [e2e-predictor] If True, will attempt to decode the body based on the [e2e-predictor] 'content-encoding' header. [e2e-predictor] [e2e-predictor] :param enforce_content_length: [e2e-predictor] Enforce content length checking. Body returned by server must match [e2e-predictor] value of Content-Length header, if present. Otherwise, raise error. [e2e-predictor] """ [e2e-predictor] self.num_requests += 1 [e2e-predictor] [e2e-predictor] timeout_obj = self._get_timeout(timeout) [e2e-predictor] timeout_obj.start_connect() [e2e-predictor] conn.timeout = Timeout.resolve_default_timeout(timeout_obj.connect_timeout) [e2e-predictor] [e2e-predictor] try: [e2e-predictor] # Trigger any extra validation we need to do. [e2e-predictor] try: [e2e-predictor] > self._validate_conn(conn) [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:464: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] conn = [e2e-predictor] [e2e-predictor] def _validate_conn(self, conn: BaseHTTPConnection) -> None: [e2e-predictor] """ [e2e-predictor] Called right before a request is made, after the socket is created. [e2e-predictor] """ [e2e-predictor] super()._validate_conn(conn) [e2e-predictor] [e2e-predictor] # Force connect early to allow us to validate the connection. 
        if conn.is_closed:
>           conn.connect()

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:1093:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =

    def connect(self) -> None:
        # Today we don't need to be doing this step before the /actual/ socket
        # connection, however in the future we'll need to decide whether to
        # create a new socket or re-use an existing "shared" socket as a part
        # of the HTTP/2 handshake dance.
        if self._tunnel_host is not None and self._tunnel_port is not None:
            probe_http2_host = self._tunnel_host
            probe_http2_port = self._tunnel_port
        else:
            probe_http2_host = self.host
            probe_http2_port = self.port

        # Check if the target origin supports HTTP/2.
        # If the value comes back as 'None' it means that the current thread
        # is probing for HTTP/2 support. Otherwise, we're waiting for another
        # probe to complete, or we get a value right away.
        target_supports_http2: bool | None
        if "h2" in ssl_.ALPN_PROTOCOLS:
            target_supports_http2 = http2_probe.acquire_and_get(
                host=probe_http2_host, port=probe_http2_port
            )
        else:
            # If HTTP/2 isn't going to be offered it doesn't matter if
            # the target supports HTTP/2. Don't want to make a probe.
            target_supports_http2 = False

        if self._connect_callback is not None:
            self._connect_callback(
                "before connect",
                thread_id=threading.get_ident(),
                target_supports_http2=target_supports_http2,
            )

        try:
            sock: socket.socket | ssl.SSLSocket
>           self.sock = sock = self._new_conn()

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connection.py:759:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =

    def _new_conn(self) -> socket.socket:
        """Establish a socket connection and set nodelay settings on it.

        :return: New socket connection.
        """
        try:
            sock = connection.create_connection(
                (self._dns_host, self.port),
                self.timeout,
                source_address=self.source_address,
                socket_options=self.socket_options,
            )
        except socket.gaierror as e:
>           raise NameResolutionError(self.host, self, e) from e
E           urllib3.exceptions.NameResolutionError: HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connection.py:211: NameResolutionError

The above exception was the direct cause of the following exception:

rest_v1_client =

@pytest.mark.predictor
@pytest.mark.asyncio(scope="session")
async def test_predictive_sklearn_v1(rest_v1_client):
    service_name = "isvc-predictive-sklearn"
    predictor = V1beta1PredictorSpec(
        min_replicas=1,
        model=V1beta1ModelSpec(
            model_format=V1beta1ModelFormat(name="sklearn"),
            runtime="kserve-predictiveserver",
            storage_uri="gs://kfserving-examples/models/sklearn/1.0/model",
            resources=V1ResourceRequirements(
                requests={"cpu": "50m", "memory": "128Mi"},
                limits={"cpu": "100m", "memory": "256Mi"},
            ),
        ),
    )

    isvc = V1beta1InferenceService(
        api_version=constants.KSERVE_V1BETA1,
        kind=constants.KSERVE_KIND_INFERENCESERVICE,
        metadata=client.V1ObjectMeta(
            name=service_name,
            namespace=KSERVE_TEST_NAMESPACE,
            labels={
                constants.KSERVE_LABEL_NETWORKING_VISIBILITY: constants.KSERVE_LABEL_NETWORKING_VISIBILITY_EXPOSED,
            },
        ),
        spec=V1beta1InferenceServiceSpec(predictor=predictor),
    )

    kserve_client = KServeClient(
        config_file=os.environ.get("KUBECONFIG", "~/.kube/config")
    )
>   kserve_client.create(isvc)

predictor/test_predictive.py:68:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
inferenceservice = {'api_version': 'serving.kserve.io/v1beta1',
 'kind': 'InferenceService',
 'metadata': {'annotations': None,
 ...
 'worker_spec': None,
 'xgboost': None},
 'transformer': None},
 'status': None}
namespace = 'kserve-ci-e2e-test', watch = False, timeout_seconds = 600

    def create(
        self, inferenceservice, namespace=None, watch=False, timeout_seconds=600
    ):  # pylint:disable=inconsistent-return-statements
        """
        Create the inference service
        :param inferenceservice: inference service object
        :param namespace: defaults to current or default namespace
        :param watch: True to watch the created service until timeout elapsed or status is ready
        :param timeout_seconds: timeout seconds for watch, default to 600s
        :return: created inference service
        """

        version = inferenceservice.api_version.split("/")[1]

        if namespace is None:
            namespace = utils.get_isvc_namespace(inferenceservice)

        try:
>           outputs = self.api_instance.create_namespaced_custom_object(
                constants.KSERVE_GROUP,
                version,
                namespace,
                constants.KSERVE_PLURAL_INFERENCESERVICE,
                inferenceservice,
            )

../../python/kserve/kserve/api/kserve_client.py:145:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
group = 'serving.kserve.io', version = 'v1beta1'
namespace = 'kserve-ci-e2e-test', plural = 'inferenceservices'
body = {'api_version': 'serving.kserve.io/v1beta1',
 'kind': 'InferenceService',
 'metadata': {'annotations': None,
 ...
 'worker_spec': None,
 'xgboost': None},
 'transformer': None},
 'status': None}
kwargs = {'_return_http_data_only': True}

    def create_namespaced_custom_object(self, group, version, namespace, plural, body, **kwargs):  # noqa: E501
        """create_namespaced_custom_object  # noqa: E501

        Creates a namespace scoped Custom object  # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.create_namespaced_custom_object(group, version, namespace, plural, body, async_req=True)
        >>> result = thread.get()

        :param async_req bool: execute request asynchronously
        :param str group: The custom resource's group name (required)
        :param str version: The custom resource's version (required)
        :param str namespace: The custom resource's namespace (required)
        :param str plural: The custom resource's plural name. For TPRs this would be lowercase plural kind. (required)
        :param object body: The JSON schema of the Resource to create. (required)
        :param str pretty: If 'true', then the output is pretty printed.
        :param str dry_run: When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed
        :param str field_manager: fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint.
        :param str field_validation: fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. (optional)
        :param _preload_content: if False, the urllib3.HTTPResponse object will
                                 be returned without reading/decoding response
                                 data. Default is True.
        :param _request_timeout: timeout setting for this request. If one
                                 number provided, it will be total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
        :return: object
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
>       return self.create_namespaced_custom_object_with_http_info(group, version, namespace, plural, body, **kwargs)  # noqa: E501

../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api/custom_objects_api.py:231:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
group = 'serving.kserve.io', version = 'v1beta1'
namespace = 'kserve-ci-e2e-test', plural = 'inferenceservices'
body = {'api_version': 'serving.kserve.io/v1beta1',
 'kind': 'InferenceService',
 'metadata': {'annotations': None,
 ...
 'worker_spec': None,
 'xgboost': None},
 'transformer': None},
 'status': None}
kwargs = {'_return_http_data_only': True}
local_var_params = {'_return_http_data_only': True, 'all_params': ['group', 'version', 'namespace', 'plural', 'body', 'pretty', ...], 'au...
 'worker_spec': None,
 'xgboost': None},
 'transformer': None},
 'status': None}, ...}
all_params = ['group', 'version', 'namespace', 'plural', 'body', 'pretty', ...]
key = '_return_http_data_only', val = True, collection_formats = {}
path_params = {'group': 'serving.kserve.io', 'namespace': 'kserve-ci-e2e-test', 'plural': 'inferenceservices', 'version': 'v1beta1'}
query_params = []

    def create_namespaced_custom_object_with_http_info(self, group, version, namespace, plural, body, **kwargs):  # noqa: E501
        """create_namespaced_custom_object  # noqa: E501

        Creates a namespace scoped Custom object  # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.create_namespaced_custom_object_with_http_info(group, version, namespace, plural, body, async_req=True)
        >>> result = thread.get()

        :param async_req bool: execute request asynchronously
        :param str group: The custom resource's group name (required)
        :param str version: The custom resource's version (required)
        :param str namespace: The custom resource's namespace (required)
        :param str plural: The custom resource's plural name. For TPRs this would be lowercase plural kind. (required)
        :param object body: The JSON schema of the Resource to create. (required)
        :param str pretty: If 'true', then the output is pretty printed.
        :param str dry_run: When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed
        :param str field_manager: fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint.
        :param str field_validation: fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. (optional)
        :param _return_http_data_only: response data without head status code
                                       and headers
        :param _preload_content: if False, the urllib3.HTTPResponse object will
                                 be returned without reading/decoding response
                                 data. Default is True.
        :param _request_timeout: timeout setting for this request. If one
                                 number provided, it will be total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
        :return: tuple(object, status_code(int), headers(HTTPHeaderDict))
                 If the method is called asynchronously,
                 returns the request thread.
[e2e-predictor]         """
        local_var_params = locals()

        all_params = [
            'group',
            'version',
            'namespace',
            'plural',
            'body',
            'pretty',
            'dry_run',
            'field_manager',
            'field_validation'
        ]
        all_params.extend(
            [
                'async_req',
                '_return_http_data_only',
                '_preload_content',
                '_request_timeout'
            ]
        )

        for key, val in six.iteritems(local_var_params['kwargs']):
            if key not in all_params:
                raise ApiTypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method create_namespaced_custom_object" % key
                )
            local_var_params[key] = val
        del local_var_params['kwargs']
        # verify the required parameter 'group' is set
        if self.api_client.client_side_validation and ('group' not in local_var_params or  # noqa: E501
                local_var_params['group'] is None):  # noqa: E501
            raise ApiValueError("Missing the required parameter `group` when calling `create_namespaced_custom_object`")  # noqa: E501
        # verify the required parameter 'version' is set
        if self.api_client.client_side_validation and ('version' not in local_var_params or  # noqa: E501
                local_var_params['version'] is None):  # noqa: E501
            raise ApiValueError("Missing the required parameter `version` when calling `create_namespaced_custom_object`")  # noqa: E501
        # verify the required parameter 'namespace' is set
        if self.api_client.client_side_validation and ('namespace' not in local_var_params or  # noqa: E501
                local_var_params['namespace'] is None):  # noqa: E501
            raise ApiValueError("Missing the required parameter `namespace` when calling `create_namespaced_custom_object`")  # noqa: E501
        # verify the required parameter 'plural' is set
        if self.api_client.client_side_validation and ('plural' not in local_var_params or  # noqa: E501
                local_var_params['plural'] is None):  # noqa: E501
            raise ApiValueError("Missing the required parameter `plural` when calling `create_namespaced_custom_object`")  # noqa: E501
        # verify the required parameter 'body' is set
        if self.api_client.client_side_validation and ('body' not in local_var_params or  # noqa: E501
                local_var_params['body'] is None):  # noqa: E501
            raise ApiValueError("Missing the required parameter `body` when calling `create_namespaced_custom_object`")  # noqa: E501

        collection_formats = {}

        path_params = {}
        if 'group' in local_var_params:
            path_params['group'] = local_var_params['group']  # noqa: E501
        if 'version' in local_var_params:
            path_params['version'] = local_var_params['version']  # noqa: E501
        if 'namespace' in local_var_params:
            path_params['namespace'] = local_var_params['namespace']  # noqa: E501
        if 'plural' in local_var_params:
            path_params['plural'] = local_var_params['plural']  # noqa: E501

        query_params = []
        if 'pretty' in local_var_params and local_var_params['pretty'] is not None:  # noqa: E501
            query_params.append(('pretty', local_var_params['pretty']))  # noqa: E501
        if 'dry_run' in local_var_params and local_var_params['dry_run'] is not None:  # noqa: E501
            query_params.append(('dryRun', local_var_params['dry_run']))  # noqa: E501
        if 'field_manager' in local_var_params and local_var_params['field_manager'] is not None:  # noqa: E501
            query_params.append(('fieldManager', local_var_params['field_manager']))  # noqa: E501
        if 'field_validation' in local_var_params and local_var_params['field_validation'] is not None:  # noqa: E501
            query_params.append(('fieldValidation', local_var_params['field_validation']))  # noqa: E501

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None
        if 'body' in local_var_params:
            body_params = local_var_params['body']
        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.select_header_accept(
            ['application/json'])  # noqa: E501

        # Authentication setting
        auth_settings = ['BearerToken']  # noqa: E501

>       return self.api_client.call_api(
            '/apis/{group}/{version}/namespaces/{namespace}/{plural}', 'POST',
            path_params,
            query_params,
            header_params,
            body=body_params,
            post_params=form_params,
            files=local_var_files,
            response_type='object',  # noqa: E501
            auth_settings=auth_settings,
            async_req=local_var_params.get('async_req'),
            _return_http_data_only=local_var_params.get('_return_http_data_only'),  # noqa: E501
            _preload_content=local_var_params.get('_preload_content', True),
            _request_timeout=local_var_params.get('_request_timeout'),
            collection_formats=collection_formats)
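The frame above ends at `api_client.call_api(...)` with the collection path template `/apis/{group}/{version}/namespaces/{namespace}/{plural}`. As a debugging aid, here is a minimal stdlib sketch of how that template gets expanded into the request path, using the parameter values visible later in this traceback. The `render_path` helper and the `safe=""` quoting default are illustrative assumptions; the real client uses `quote(str(v), safe=config.safe_chars_for_path_param)` inside `__call_api`.

```python
# Sketch of the path templating the generated client performs:
# each {placeholder} in the resource path is replaced with the
# URL-quoted path parameter. Values are taken from the traceback.
from urllib.parse import quote

def render_path(template: str, path_params: dict) -> str:
    for k, v in path_params.items():
        template = template.replace("{%s}" % k, quote(str(v), safe=""))
    return template

path = render_path(
    "/apis/{group}/{version}/namespaces/{namespace}/{plural}",
    {
        "group": "serving.kserve.io",
        "version": "v1beta1",
        "namespace": "kserve-ci-e2e-test",
        "plural": "inferenceservices",
    },
)
print(path)
# → /apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices
```

This rendered path matches the `resource_path` shown in the `__call_api` frame below, confirming the request itself was well-formed before it ever reached the network layer.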
../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api/custom_objects_api.py:354:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
resource_path = '/apis/{group}/{version}/namespaces/{namespace}/{plural}'
method = 'POST'
path_params = {'group': 'serving.kserve.io', 'namespace': 'kserve-ci-e2e-test', 'plural': 'inferenceservices', 'version': 'v1beta1'}
query_params = []
header_params = {'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
body = {'api_version': 'serving.kserve.io/v1beta1',
 'kind': 'InferenceService',
 'metadata': {'annotations': None,
 ... 'worker_spec': None,
     'xgboost': None},
 'transformer': None},
 'status': None}
post_params = [], files = {}, response_type = 'object'
auth_settings = ['BearerToken'], async_req = None, _return_http_data_only = True
collection_formats = {}, _preload_content = True, _request_timeout = None
_host = None

    def call_api(self, resource_path, method,
                 path_params=None, query_params=None, header_params=None,
                 body=None, post_params=None, files=None,
                 response_type=None, auth_settings=None, async_req=None,
                 _return_http_data_only=None, collection_formats=None,
                 _preload_content=True, _request_timeout=None, _host=None):
        """Makes the HTTP request (synchronous) and returns deserialized data.

        To make an async_req request, set the async_req parameter.

        :param resource_path: Path to method endpoint.
        :param method: Method to call.
        :param path_params: Path parameters in the url.
        :param query_params: Query parameters in the url.
        :param header_params: Header parameters to be
            placed in the request header.
        :param body: Request body.
        :param post_params dict: Request post form parameters,
            for `application/x-www-form-urlencoded`, `multipart/form-data`.
        :param auth_settings list: Auth Settings names for the request.
        :param response: Response data type.
        :param files dict: key -> filename, value -> filepath,
            for `multipart/form-data`.
        :param async_req bool: execute request asynchronously
        :param _return_http_data_only: response data without head status code
            and headers
        :param collection_formats: dict of collection formats for path, query,
            header, and post parameters.
        :param _preload_content: if False, the urllib3.HTTPResponse object will
            be returned without reading/decoding response
            data. Default is True.
        :param _request_timeout: timeout setting for this request. If one
            number provided, it will be total request
            timeout. It can also be a pair (tuple) of
            (connection, read) timeouts.
        :return:
            If async_req parameter is True,
            the request will be called asynchronously.
            The method will return the request thread.
            If parameter async_req is False or missing,
            then the method will return the response directly.
        """
        if not async_req:
>           return self.__call_api(resource_path, method,
                                   path_params, query_params, header_params,
                                   body, post_params, files,
                                   response_type, auth_settings,
                                   _return_http_data_only, collection_formats,
                                   _preload_content, _request_timeout, _host)

../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api_client.py:348:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
resource_path = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
method = 'POST'
path_params = [('group', 'serving.kserve.io'), ('version', 'v1beta1'), ('namespace', 'kserve-ci-e2e-test'), ('plural', 'inferenceservices')]
query_params = []
header_params = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
body = {'apiVersion': 'serving.kserve.io/v1beta1', 'kind': 'InferenceService', 'metadata': {'labels': {'networking.kserve.io/...00m', 'memory': '256Mi'}, 'requests': {'cpu': '50m', 'memory': '128Mi'}}, 'runtime': 'kserve-predictiveserver', ...}}}}
post_params = [], files = {}, response_type = 'object'
auth_settings = ['BearerToken'], _return_http_data_only = True
collection_formats = {}, _preload_content = True, _request_timeout = None
_host = None

    def __call_api(
            self, resource_path, method, path_params=None,
            query_params=None, header_params=None, body=None, post_params=None,
            files=None, response_type=None, auth_settings=None,
            _return_http_data_only=None, collection_formats=None,
            _preload_content=True, _request_timeout=None, _host=None):

        config = self.configuration

        # header parameters
        header_params = header_params or {}
        header_params.update(self.default_headers)
        if self.cookie:
            header_params['Cookie'] = self.cookie
        if header_params:
            header_params = self.sanitize_for_serialization(header_params)
            header_params = dict(self.parameters_to_tuples(header_params,
                                                           collection_formats))

        # path parameters
        if path_params:
            path_params = self.sanitize_for_serialization(path_params)
            path_params = self.parameters_to_tuples(path_params,
                                                    collection_formats)
            for k, v in path_params:
                # specified safe chars, encode everything
                resource_path = resource_path.replace(
                    '{%s}' % k,
                    quote(str(v), safe=config.safe_chars_for_path_param)
                )

        # query parameters
        if query_params:
            query_params = self.sanitize_for_serialization(query_params)
            query_params = self.parameters_to_tuples(query_params,
                                                     collection_formats)

        # post parameters
        if post_params or files:
            post_params = post_params if post_params else []
            post_params = self.sanitize_for_serialization(post_params)
            post_params = self.parameters_to_tuples(post_params,
                                                    collection_formats)
            post_params.extend(self.files_parameters(files))

        # auth setting
        self.update_params_for_auth(header_params, query_params, auth_settings)

        # body
        if body:
            body = self.sanitize_for_serialization(body)

        # request url
        if _host is None:
            url = self.configuration.host + resource_path
        else:
            # use server/host defined in path or operation instead
            url = _host + resource_path

        # perform request and return response
>       response_data = self.request(
            method, url, query_params=query_params, headers=header_params,
            post_params=post_params, body=body,
            _preload_content=_preload_content,
            _request_timeout=_request_timeout)

../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api_client.py:180:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
method = 'POST'
url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
query_params = []
headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
post_params = []
body = {'apiVersion': 'serving.kserve.io/v1beta1', 'kind': 'InferenceService', 'metadata': {'labels': {'networking.kserve.io/...00m', 'memory': '256Mi'}, 'requests': {'cpu': '50m', 'memory': '128Mi'}}, 'runtime': 'kserve-predictiveserver', ...}}}}
_preload_content = True, _request_timeout = None

    def request(self, method, url, query_params=None, headers=None,
                post_params=None, body=None, _preload_content=True,
                _request_timeout=None):
        """Makes the HTTP request using RESTClient."""
        if method == "GET":
            return self.rest_client.GET(url,
                                        query_params=query_params,
                                        _preload_content=_preload_content,
                                        _request_timeout=_request_timeout,
                                        headers=headers)
        elif method == "HEAD":
            return self.rest_client.HEAD(url,
                                         query_params=query_params,
                                         _preload_content=_preload_content,
                                         _request_timeout=_request_timeout,
                                         headers=headers)
        elif method == "OPTIONS":
            return self.rest_client.OPTIONS(url,
                                            query_params=query_params,
                                            headers=headers,
                                            _preload_content=_preload_content,
                                            _request_timeout=_request_timeout)
        elif method == "POST":
>           return self.rest_client.POST(url,
                                         query_params=query_params,
                                         headers=headers,
                                         post_params=post_params,
                                         _preload_content=_preload_content,
                                         _request_timeout=_request_timeout,

../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api_client.py:391:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
query_params = [], post_params = []
body = {'apiVersion': 'serving.kserve.io/v1beta1', 'kind': 'InferenceService', 'metadata': {'labels': {'networking.kserve.io/...00m', 'memory': '256Mi'}, 'requests': {'cpu': '50m', 'memory': '128Mi'}}, 'runtime': 'kserve-predictiveserver', ...}}}}
_preload_content = True, _request_timeout = None

    def POST(self, url, headers=None, query_params=None, post_params=None,
             body=None, _preload_content=True, _request_timeout=None):
>       return self.request("POST", url,
                            headers=headers,
                            query_params=query_params,
                            post_params=post_params,
                            _preload_content=_preload_content,
                            _request_timeout=_request_timeout,
                            body=body)

../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/rest.py:279:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
method = 'POST'
url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
query_params = []
headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
body = {'apiVersion': 'serving.kserve.io/v1beta1', 'kind': 'InferenceService', 'metadata': {'labels': {'networking.kserve.io/...00m', 'memory': '256Mi'}, 'requests': {'cpu': '50m', 'memory': '128Mi'}}, 'runtime': 'kserve-predictiveserver', ...}}}}
post_params = {}, _preload_content = True, _request_timeout = None

    def request(self, method, url, query_params=None, headers=None,
                body=None, post_params=None, _preload_content=True,
                _request_timeout=None):
        """Perform requests.

        :param method: http request method
        :param url: http request url
        :param query_params: query parameters in the url
        :param headers: http request headers
        :param body: request json body, for `application/json`
        :param post_params: request post parameters,
                            `application/x-www-form-urlencoded`
                            and `multipart/form-data`
        :param _preload_content: if False, the urllib3.HTTPResponse object will
                                 be returned without reading/decoding response
                                 data. Default is True.
        :param _request_timeout: timeout setting for this request. If one
                                 number provided, it will be total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
        """
        method = method.upper()
        assert method in ['GET', 'HEAD', 'DELETE', 'POST', 'PUT',
                          'PATCH', 'OPTIONS']

        if post_params and body:
            raise ApiValueError(
                "body parameter cannot be used with post_params parameter."
            )

        post_params = post_params or {}
        headers = headers or {}

        timeout = None
        if _request_timeout:
            if isinstance(_request_timeout, (int, ) if six.PY3 else (int, long)):  # noqa: E501,F821
                timeout = urllib3.Timeout(total=_request_timeout)
            elif (isinstance(_request_timeout, tuple) and
                  len(_request_timeout) == 2):
                timeout = urllib3.Timeout(
                    connect=_request_timeout[0], read=_request_timeout[1])

        if 'Content-Type' not in headers:
            headers['Content-Type'] = 'application/json'

        try:
            # For `POST`, `PUT`, `PATCH`, `OPTIONS`, `DELETE`
            if method in ['POST', 'PUT', 'PATCH', 'OPTIONS', 'DELETE']:
                if query_params:
                    url += '?' + urlencode(query_params)
                if (re.search('json', headers['Content-Type'], re.IGNORECASE) or
                        headers['Content-Type'] == 'application/apply-patch+yaml'):
                    if headers['Content-Type'] == 'application/json-patch+json':
                        if not isinstance(body, list):
                            headers['Content-Type'] = \
                                'application/strategic-merge-patch+json'
                    request_body = None
                    if body is not None:
                        request_body = json.dumps(body)
>                   r = self.pool_manager.request(
                        method, url,
                        body=request_body,
                        preload_content=_preload_content,
                        timeout=timeout,

../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/rest.py:172:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
method = 'POST'
url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io... "128Mi"}}, "runtime": "kserve-predictiveserver", "storageUri": "gs://kfserving-examples/models/sklearn/1.0/model"}}}}'
fields = None
headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
json = None
urlopen_kw = {'body': '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking....rver", "storageUri": "gs://kfserving-examples/models/sklearn/1.0/model"}}}}', 'preload_content': True, 'timeout': None}

    def request(
        self,
        method: str,
        url: str,
        body: _TYPE_BODY | None = None,
        fields: _TYPE_FIELDS | None = None,
        headers: typing.Mapping[str, str] | None = None,
        json: typing.Any | None = None,
        **urlopen_kw: typing.Any,
    ) -> BaseHTTPResponse:
        """
        Make a request using :meth:`urlopen` with the appropriate encoding of
        ``fields`` based on the ``method`` used.

        This is a convenience method that requires the least amount of manual
        effort. It can be used in most situations, while still having the
        option to drop down to more specific methods when necessary, such as
        :meth:`request_encode_url`, :meth:`request_encode_body`,
        or even the lowest level :meth:`urlopen`.

        :param method:
            HTTP request method (such as GET, POST, PUT, etc.)

        :param url:
            The URL to perform the request on.

        :param body:
            Data to send in the request body, either :class:`str`, :class:`bytes`,
            an iterable of :class:`str`/:class:`bytes`, or a file-like object.

        :param fields:
            Data to encode and send in the URL or request body, depending on ``method``.

        :param headers:
            Dictionary of custom headers to send, such as User-Agent,
            If-None-Match, etc. If None, pool headers are used. If provided,
            these headers completely replace any pool-specific headers.

        :param json:
            Data to encode and send as JSON with UTF-encoded in the request body.
            The ``"Content-Type"`` header will be set to ``"application/json"``
            unless specified otherwise.
        """
        method = method.upper()

        if json is not None and body is not None:
            raise TypeError(
                "request got values for both 'body' and 'json' parameters which are mutually exclusive"
            )

        if json is not None:
            if headers is None:
                headers = self.headers

            if not ("content-type" in map(str.lower, headers.keys())):
                headers = HTTPHeaderDict(headers)
                headers["Content-Type"] = "application/json"

            body = _json.dumps(json, separators=(",", ":"), ensure_ascii=False).encode(
                "utf-8"
            )

        if body is not None:
            urlopen_kw["body"] = body

        if method in self._encode_url_methods:
            return self.request_encode_url(
                method,
                url,
                fields=fields,  # type: ignore[arg-type]
                headers=headers,
                **urlopen_kw,
            )
        else:
>           return self.request_encode_body(
                method, url, fields=fields, headers=headers, **urlopen_kw
            )

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/_request_methods.py:143:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
method = 'POST'
url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
fields = None
headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
encode_multipart = True, multipart_boundary = None
urlopen_kw = {'body': '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking....rver", "storageUri": "gs://kfserving-examples/models/sklearn/1.0/model"}}}}', 'preload_content': True, 'timeout': None}
extra_kw = {'body': '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking....nt': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'}), 'preload_content': True, 'timeout': None}

    def request_encode_body(
        self,
        method: str,
        url: str,
        fields: _TYPE_FIELDS | None = None,
        headers: typing.Mapping[str, str] | None = None,
        encode_multipart: bool = True,
        multipart_boundary: str | None = None,
        **urlopen_kw: str,
    ) -> BaseHTTPResponse:
        """
        Make a request using :meth:`urlopen` with the ``fields`` encoded in
        the body. This is useful for request methods like POST, PUT, PATCH, etc.

        When ``encode_multipart=True`` (default), then
        :func:`urllib3.encode_multipart_formdata` is used to encode
        the payload with the appropriate content type. Otherwise
        :func:`urllib.parse.urlencode` is used with the
        'application/x-www-form-urlencoded' content type.

        Multipart encoding must be used when posting files, and it's reasonably
        safe to use it in other times too. However, it may break request
        signing, such as with OAuth.

        Supports an optional ``fields`` parameter of key/value strings AND
        key/filetuple. A filetuple is a (filename, data, MIME type) tuple where
        the MIME type is optional. For example::

            fields = {
                'foo': 'bar',
                'fakefile': ('foofile.txt', 'contents of foofile'),
                'realfile': ('barfile.txt', open('realfile').read()),
                'typedfile': ('bazfile.bin', open('bazfile').read(),
                              'image/jpeg'),
                'nonamefile': 'contents of nonamefile field',
            }

        When uploading a file, providing a filename (the first parameter of the
        tuple) is optional but recommended to best mimic behavior of browsers.

        Note that if ``headers`` are supplied, the 'Content-Type' header will
        be overwritten because it depends on the dynamic random boundary string
        which is used to compose the body of the request. The random boundary
        string can be explicitly set with the ``multipart_boundary`` parameter.

        :param method:
            HTTP request method (such as GET, POST, PUT, etc.)

        :param url:
            The URL to perform the request on.

        :param fields:
            Data to encode and send in the request body.

        :param headers:
            Dictionary of custom headers to send, such as User-Agent,
            If-None-Match, etc. If None, pool headers are used. If provided,
            these headers completely replace any pool-specific headers.

        :param encode_multipart:
            If True, encode the ``fields`` using the multipart/form-data MIME
            format.

        :param multipart_boundary:
            If not specified, then a random boundary will be generated using
            :func:`urllib3.filepost.choose_boundary`.
        """
        if headers is None:
            headers = self.headers

        extra_kw: dict[str, typing.Any] = {"headers": HTTPHeaderDict(headers)}
        body: bytes | str

        if fields:
            if "body" in urlopen_kw:
                raise TypeError(
                    "request got values for both 'fields' and 'body', can only specify one."
                )

            if encode_multipart:
                body, content_type = encode_multipart_formdata(
                    fields, boundary=multipart_boundary
                )
            else:
                body, content_type = (
                    urlencode(fields),  # type: ignore[arg-type]
                    "application/x-www-form-urlencoded",
                )

            extra_kw["body"] = body
            extra_kw["headers"].setdefault("Content-Type", content_type)

        extra_kw.update(urlopen_kw)

>       return self.urlopen(method, url, **extra_kw)

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/_request_methods.py:278:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
method = 'POST'
url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
redirect = True
kw = {'assert_same_host': False, 'body': '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata...', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'}), 'preload_content': True, ...}
u = Url(scheme='https', auth=None, host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', p...443, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices', query=None, fragment=None)
conn =

    def urlopen(  # type: ignore[override]
        self, method: str, url: str, redirect: bool = True, **kw: typing.Any
    ) -> BaseHTTPResponse:
        """
        Same as :meth:`urllib3.HTTPConnectionPool.urlopen`
        with custom cross-host redirect logic and only sends the request-uri
        portion of the ``url``.

        The given ``url`` parameter must be absolute, such that an appropriate
        :class:`urllib3.connectionpool.ConnectionPool` can be chosen for it.
        """
        u = parse_url(url)

        if u.scheme is None:
            warnings.warn(
                "URLs without a scheme (ie 'https://') are deprecated and will raise an error "
                "in a future version of urllib3. To avoid this DeprecationWarning ensure all URLs "
                "start with 'https://' or 'http://'. Read more in this issue: "
                "https://github.com/urllib3/urllib3/issues/2920",
                category=DeprecationWarning,
                stacklevel=2,
            )

        conn = self.connection_from_host(u.host, port=u.port, scheme=u.scheme)

        kw["assert_same_host"] = False
        kw["redirect"] = False

        if "headers" not in kw:
            kw["headers"] = self.headers

        if self._proxy_requires_url_absolute_form(u):
            response = conn.urlopen(method, url, **kw)
        else:
>           response = conn.urlopen(method, u.request_uri, **kw)

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/poolmanager.py:457:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
method = 'POST'
url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io... "128Mi"}}, "runtime": "kserve-predictiveserver", "storageUri": "gs://kfserving-examples/models/sklearn/1.0/model"}}}}'
headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'})
retries = Retry(total=2, connect=None, read=None, redirect=None, status=None)
redirect = False, assert_same_host = False, timeout = None, pool_timeout = None
release_conn = True, chunked = False, body_pos = None, preload_content = True
decode_content = True, response_kw = {}
parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices', query=None, fragment=None)
destination_scheme = None, conn = None, release_this_conn = True
http_tunnel_required = False
err = NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.c...a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")
clean_exit = False

    def urlopen(  # type: ignore[override]
        self,
        method: str,
        url: str,
        body: _TYPE_BODY | None = None,
        headers: typing.Mapping[str, str] | None = None,
        retries: Retry | bool | int | None = None,
        redirect: bool = True,
        assert_same_host: bool = True,
        timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
        pool_timeout: int | None = None,
        release_conn: bool | None = None,
        chunked: bool = False,
        body_pos: _TYPE_BODY_POSITION | None = None,
        preload_content: bool = True,
        decode_content: bool = True,
        **response_kw: typing.Any,
    ) -> BaseHTTPResponse:
        """
        Get a connection from the pool and perform an HTTP request. This is the
        lowest level call for making a request, so you'll need to specify all
        the raw details.

        .. note::

            More commonly, it's appropriate to use a convenience method
            such as :meth:`request`.

        .. note::

            `release_conn` will only behave as expected if
            `preload_content=False` because we want to make
            `preload_content=False` the default behaviour someday soon without
            breaking backwards compatibility.

        :param method:
            HTTP request method (such as GET, POST, PUT, etc.)

        :param url:
            The URL to perform the request on.

        :param body:
            Data to send in the request body, either :class:`str`, :class:`bytes`,
            an iterable of :class:`str`/:class:`bytes`, or a file-like object.

        :param headers:
            Dictionary of custom headers to send, such as User-Agent,
            If-None-Match, etc. If None, pool headers are used. If provided,
            these headers completely replace any pool-specific headers.

        :param retries:
            Configure the number of retries to allow before raising a
            :class:`~urllib3.exceptions.MaxRetryError` exception.

            If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a
            :class:`~urllib3.util.retry.Retry` object for fine-grained control
            over different types of retries.
[e2e-predictor] Pass an integer number to retry connection errors that many times, [e2e-predictor] but no other types of errors. Pass zero to never retry. [e2e-predictor] [e2e-predictor] If ``False``, then retries are disabled and any exception is raised [e2e-predictor] immediately. Also, instead of raising a MaxRetryError on redirects, [e2e-predictor] the redirect response will be returned. [e2e-predictor] [e2e-predictor] :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. [e2e-predictor] [e2e-predictor] :param redirect: [e2e-predictor] If True, automatically handle redirects (status codes 301, 302, [e2e-predictor] 303, 307, 308). Each redirect counts as a retry. Disabling retries [e2e-predictor] will disable redirect, too. [e2e-predictor] [e2e-predictor] :param assert_same_host: [e2e-predictor] If ``True``, will make sure that the host of the pool requests is [e2e-predictor] consistent else will raise HostChangedError. When ``False``, you can [e2e-predictor] use the pool on an HTTP proxy and request foreign hosts. [e2e-predictor] [e2e-predictor] :param timeout: [e2e-predictor] If specified, overrides the default timeout for this one [e2e-predictor] request. It may be a float (in seconds) or an instance of [e2e-predictor] :class:`urllib3.util.Timeout`. [e2e-predictor] [e2e-predictor] :param pool_timeout: [e2e-predictor] If set and the pool is set to block=True, then this method will [e2e-predictor] block for ``pool_timeout`` seconds and raise EmptyPoolError if no [e2e-predictor] connection is available within the time period. [e2e-predictor] [e2e-predictor] :param bool preload_content: [e2e-predictor] If True, the response's body will be preloaded into memory. [e2e-predictor] [e2e-predictor] :param bool decode_content: [e2e-predictor] If True, will attempt to decode the body based on the [e2e-predictor] 'content-encoding' header. 
[e2e-predictor] [e2e-predictor] :param release_conn: [e2e-predictor] If False, then the urlopen call will not release the connection [e2e-predictor] back into the pool once a response is received (but will release if [e2e-predictor] you read the entire contents of the response such as when [e2e-predictor] `preload_content=True`). This is useful if you're not preloading [e2e-predictor] the response's content immediately. You will need to call [e2e-predictor] ``r.release_conn()`` on the response ``r`` to return the connection [e2e-predictor] back into the pool. If None, it takes the value of ``preload_content`` [e2e-predictor] which defaults to ``True``. [e2e-predictor] [e2e-predictor] :param bool chunked: [e2e-predictor] If True, urllib3 will send the body using chunked transfer [e2e-predictor] encoding. Otherwise, urllib3 will send the body using the standard [e2e-predictor] content-length form. Defaults to False. [e2e-predictor] [e2e-predictor] :param int body_pos: [e2e-predictor] Position to seek to in file-like body in the event of a retry or [e2e-predictor] redirect. Typically this won't need to be set because urllib3 will [e2e-predictor] auto-populate the value when needed. 
[e2e-predictor] """ [e2e-predictor] parsed_url = parse_url(url) [e2e-predictor] destination_scheme = parsed_url.scheme [e2e-predictor] [e2e-predictor] if headers is None: [e2e-predictor] headers = self.headers [e2e-predictor] [e2e-predictor] if not isinstance(retries, Retry): [e2e-predictor] retries = Retry.from_int(retries, redirect=redirect, default=self.retries) [e2e-predictor] [e2e-predictor] if release_conn is None: [e2e-predictor] release_conn = preload_content [e2e-predictor] [e2e-predictor] # Check host [e2e-predictor] if assert_same_host and not self.is_same_host(url): [e2e-predictor] raise HostChangedError(self, url, retries) [e2e-predictor] [e2e-predictor] # Ensure that the URL we're connecting to is properly encoded [e2e-predictor] if url.startswith("/"): [e2e-predictor] url = to_str(_encode_target(url)) [e2e-predictor] else: [e2e-predictor] url = to_str(parsed_url.url) [e2e-predictor] [e2e-predictor] conn = None [e2e-predictor] [e2e-predictor] # Track whether `conn` needs to be released before [e2e-predictor] # returning/raising/recursing. Update this variable if necessary, and [e2e-predictor] # leave `release_conn` constant throughout the function. That way, if [e2e-predictor] # the function recurses, the original value of `release_conn` will be [e2e-predictor] # passed down into the recursive call, and its value will be respected. [e2e-predictor] # [e2e-predictor] # See issue #651 [1] for details. [e2e-predictor] # [e2e-predictor] # [1] [e2e-predictor] release_this_conn = release_conn [e2e-predictor] [e2e-predictor] http_tunnel_required = connection_requires_http_tunnel( [e2e-predictor] self.proxy, self.proxy_config, destination_scheme [e2e-predictor] ) [e2e-predictor] [e2e-predictor] # Merge the proxy headers. Only done when not using HTTP CONNECT. We [e2e-predictor] # have to copy the headers dict so we can safely change it without those [e2e-predictor] # changes being reflected in anyone else's copy. 
[e2e-predictor] if not http_tunnel_required: [e2e-predictor] headers = headers.copy() # type: ignore[attr-defined] [e2e-predictor] headers.update(self.proxy_headers) # type: ignore[union-attr] [e2e-predictor] [e2e-predictor] # Must keep the exception bound to a separate variable or else Python 3 [e2e-predictor] # complains about UnboundLocalError. [e2e-predictor] err = None [e2e-predictor] [e2e-predictor] # Keep track of whether we cleanly exited the except block. This [e2e-predictor] # ensures we do proper cleanup in finally. [e2e-predictor] clean_exit = False [e2e-predictor] [e2e-predictor] # Rewind body position, if needed. Record current position [e2e-predictor] # for future rewinds in the event of a redirect/retry. [e2e-predictor] body_pos = set_file_position(body, body_pos) [e2e-predictor] [e2e-predictor] try: [e2e-predictor] # Request a connection from the queue. [e2e-predictor] timeout_obj = self._get_timeout(timeout) [e2e-predictor] conn = self._get_conn(timeout=pool_timeout) [e2e-predictor] [e2e-predictor] conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] [e2e-predictor] [e2e-predictor] # Is this a closed/new connection that requires CONNECT tunnelling? [e2e-predictor] if self.proxy is not None and http_tunnel_required and conn.is_closed: [e2e-predictor] try: [e2e-predictor] self._prepare_proxy(conn) [e2e-predictor] except (BaseSSLError, OSError, SocketTimeout) as e: [e2e-predictor] self._raise_timeout( [e2e-predictor] err=e, url=self.proxy.url, timeout_value=conn.timeout [e2e-predictor] ) [e2e-predictor] raise [e2e-predictor] [e2e-predictor] # If we're going to release the connection in ``finally:``, then [e2e-predictor] # the response doesn't need to know about the connection. Otherwise [e2e-predictor] # it will also try to release it and we'll have a double-release [e2e-predictor] # mess. 
[e2e-predictor] response_conn = conn if not release_conn else None [e2e-predictor] [e2e-predictor] # Make the request on the HTTPConnection object [e2e-predictor] response = self._make_request( [e2e-predictor] conn, [e2e-predictor] method, [e2e-predictor] url, [e2e-predictor] timeout=timeout_obj, [e2e-predictor] body=body, [e2e-predictor] headers=headers, [e2e-predictor] chunked=chunked, [e2e-predictor] retries=retries, [e2e-predictor] response_conn=response_conn, [e2e-predictor] preload_content=preload_content, [e2e-predictor] decode_content=decode_content, [e2e-predictor] **response_kw, [e2e-predictor] ) [e2e-predictor] [e2e-predictor] # Everything went great! [e2e-predictor] clean_exit = True [e2e-predictor] [e2e-predictor] except EmptyPoolError: [e2e-predictor] # Didn't get a connection from the pool, no need to clean up [e2e-predictor] clean_exit = True [e2e-predictor] release_this_conn = False [e2e-predictor] raise [e2e-predictor] [e2e-predictor] except ( [e2e-predictor] TimeoutError, [e2e-predictor] HTTPException, [e2e-predictor] OSError, [e2e-predictor] ProtocolError, [e2e-predictor] BaseSSLError, [e2e-predictor] SSLError, [e2e-predictor] CertificateError, [e2e-predictor] ProxyError, [e2e-predictor] ) as e: [e2e-predictor] # Discard the connection for these exceptions. It will be [e2e-predictor] # replaced during the next _get_conn() call. 
[e2e-predictor] clean_exit = False [e2e-predictor] new_e: Exception = e [e2e-predictor] if isinstance(e, (BaseSSLError, CertificateError)): [e2e-predictor] new_e = SSLError(e) [e2e-predictor] if isinstance( [e2e-predictor] new_e, [e2e-predictor] ( [e2e-predictor] OSError, [e2e-predictor] NewConnectionError, [e2e-predictor] TimeoutError, [e2e-predictor] SSLError, [e2e-predictor] HTTPException, [e2e-predictor] ), [e2e-predictor] ) and (conn and conn.proxy and not conn.has_connected_to_proxy): [e2e-predictor] new_e = _wrap_proxy_error(new_e, conn.proxy.scheme) [e2e-predictor] elif isinstance(new_e, (OSError, HTTPException)): [e2e-predictor] new_e = ProtocolError("Connection aborted.", new_e) [e2e-predictor] [e2e-predictor] retries = retries.increment( [e2e-predictor] method, url, error=new_e, _pool=self, _stacktrace=sys.exc_info()[2] [e2e-predictor] ) [e2e-predictor] retries.sleep() [e2e-predictor] [e2e-predictor] # Keep track of the error for the retry warning. [e2e-predictor] err = e [e2e-predictor] [e2e-predictor] finally: [e2e-predictor] if not clean_exit: [e2e-predictor] # We hit some kind of exception, handled or otherwise. We need [e2e-predictor] # to throw the connection away unless explicitly told not to. [e2e-predictor] # Close the connection, set the variable to None, and make sure [e2e-predictor] # we put the None back in the pool to avoid leaking it. [e2e-predictor] if conn: [e2e-predictor] conn.close() [e2e-predictor] conn = None [e2e-predictor] release_this_conn = True [e2e-predictor] [e2e-predictor] if release_this_conn: [e2e-predictor] # Put the connection back to be reused. If the connection is [e2e-predictor] # expired then it will be None, which will get replaced with a [e2e-predictor] # fresh connection during _get_conn. 
[e2e-predictor] self._put_conn(conn) [e2e-predictor] [e2e-predictor] if not conn: [e2e-predictor] # Try again [e2e-predictor] log.warning( [e2e-predictor] "Retrying (%r) after connection broken by '%r': %s", retries, err, url [e2e-predictor] ) [e2e-predictor] > return self.urlopen( [e2e-predictor] method, [e2e-predictor] url, [e2e-predictor] body, [e2e-predictor] headers, [e2e-predictor] retries, [e2e-predictor] redirect, [e2e-predictor] assert_same_host, [e2e-predictor] timeout=timeout, [e2e-predictor] pool_timeout=pool_timeout, [e2e-predictor] release_conn=release_conn, [e2e-predictor] chunked=chunked, [e2e-predictor] body_pos=body_pos, [e2e-predictor] preload_content=preload_content, [e2e-predictor] decode_content=decode_content, [e2e-predictor] **response_kw, [e2e-predictor] ) [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:871: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] method = 'POST' [e2e-predictor] url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices' [e2e-predictor] body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io... 
"128Mi"}}, "runtime": "kserve-predictiveserver", "storageUri": "gs://kfserving-examples/models/sklearn/1.0/model"}}}}' [e2e-predictor] headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'}) [e2e-predictor] retries = Retry(total=1, connect=None, read=None, redirect=None, status=None) [e2e-predictor] redirect = False, assert_same_host = False, timeout = None, pool_timeout = None [e2e-predictor] release_conn = True, chunked = False, body_pos = None, preload_content = True [e2e-predictor] decode_content = True, response_kw = {} [e2e-predictor] parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices', query=None, fragment=None) [e2e-predictor] destination_scheme = None, conn = None, release_this_conn = True [e2e-predictor] http_tunnel_required = False [e2e-predictor] err = NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.c...a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)") [e2e-predictor] clean_exit = False [e2e-predictor] [e2e-predictor] def urlopen( # type: ignore[override] [e2e-predictor] self, [e2e-predictor] method: str, [e2e-predictor] url: str, [e2e-predictor] body: _TYPE_BODY | None = None, [e2e-predictor] headers: typing.Mapping[str, str] | None = None, [e2e-predictor] retries: Retry | bool | int | None = None, [e2e-predictor] redirect: bool = True, [e2e-predictor] assert_same_host: bool = True, [e2e-predictor] timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, [e2e-predictor] pool_timeout: int | None = None, [e2e-predictor] release_conn: bool | None = None, [e2e-predictor] chunked: bool = False, [e2e-predictor] body_pos: _TYPE_BODY_POSITION | None = None, [e2e-predictor] preload_content: bool = True, [e2e-predictor] decode_content: bool = True, 
[e2e-predictor] **response_kw: typing.Any, [e2e-predictor] ) -> BaseHTTPResponse: [e2e-predictor] """ [e2e-predictor] Get a connection from the pool and perform an HTTP request. This is the [e2e-predictor] lowest level call for making a request, so you'll need to specify all [e2e-predictor] the raw details. [e2e-predictor] [e2e-predictor] .. note:: [e2e-predictor] [e2e-predictor] More commonly, it's appropriate to use a convenience method [e2e-predictor] such as :meth:`request`. [e2e-predictor] [e2e-predictor] .. note:: [e2e-predictor] [e2e-predictor] `release_conn` will only behave as expected if [e2e-predictor] `preload_content=False` because we want to make [e2e-predictor] `preload_content=False` the default behaviour someday soon without [e2e-predictor] breaking backwards compatibility. [e2e-predictor] [e2e-predictor] :param method: [e2e-predictor] HTTP request method (such as GET, POST, PUT, etc.) [e2e-predictor] [e2e-predictor] :param url: [e2e-predictor] The URL to perform the request on. [e2e-predictor] [e2e-predictor] :param body: [e2e-predictor] Data to send in the request body, either :class:`str`, :class:`bytes`, [e2e-predictor] an iterable of :class:`str`/:class:`bytes`, or a file-like object. [e2e-predictor] [e2e-predictor] :param headers: [e2e-predictor] Dictionary of custom headers to send, such as User-Agent, [e2e-predictor] If-None-Match, etc. If None, pool headers are used. If provided, [e2e-predictor] these headers completely replace any pool-specific headers. [e2e-predictor] [e2e-predictor] :param retries: [e2e-predictor] Configure the number of retries to allow before raising a [e2e-predictor] :class:`~urllib3.exceptions.MaxRetryError` exception. [e2e-predictor] [e2e-predictor] If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a [e2e-predictor] :class:`~urllib3.util.retry.Retry` object for fine-grained control [e2e-predictor] over different types of retries. 
[e2e-predictor] Pass an integer number to retry connection errors that many times, [e2e-predictor] but no other types of errors. Pass zero to never retry. [e2e-predictor] [e2e-predictor] If ``False``, then retries are disabled and any exception is raised [e2e-predictor] immediately. Also, instead of raising a MaxRetryError on redirects, [e2e-predictor] the redirect response will be returned. [e2e-predictor] [e2e-predictor] :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. [e2e-predictor] [e2e-predictor] :param redirect: [e2e-predictor] If True, automatically handle redirects (status codes 301, 302, [e2e-predictor] 303, 307, 308). Each redirect counts as a retry. Disabling retries [e2e-predictor] will disable redirect, too. [e2e-predictor] [e2e-predictor] :param assert_same_host: [e2e-predictor] If ``True``, will make sure that the host of the pool requests is [e2e-predictor] consistent else will raise HostChangedError. When ``False``, you can [e2e-predictor] use the pool on an HTTP proxy and request foreign hosts. [e2e-predictor] [e2e-predictor] :param timeout: [e2e-predictor] If specified, overrides the default timeout for this one [e2e-predictor] request. It may be a float (in seconds) or an instance of [e2e-predictor] :class:`urllib3.util.Timeout`. [e2e-predictor] [e2e-predictor] :param pool_timeout: [e2e-predictor] If set and the pool is set to block=True, then this method will [e2e-predictor] block for ``pool_timeout`` seconds and raise EmptyPoolError if no [e2e-predictor] connection is available within the time period. [e2e-predictor] [e2e-predictor] :param bool preload_content: [e2e-predictor] If True, the response's body will be preloaded into memory. [e2e-predictor] [e2e-predictor] :param bool decode_content: [e2e-predictor] If True, will attempt to decode the body based on the [e2e-predictor] 'content-encoding' header. 
[e2e-predictor] [e2e-predictor] :param release_conn: [e2e-predictor] If False, then the urlopen call will not release the connection [e2e-predictor] back into the pool once a response is received (but will release if [e2e-predictor] you read the entire contents of the response such as when [e2e-predictor] `preload_content=True`). This is useful if you're not preloading [e2e-predictor] the response's content immediately. You will need to call [e2e-predictor] ``r.release_conn()`` on the response ``r`` to return the connection [e2e-predictor] back into the pool. If None, it takes the value of ``preload_content`` [e2e-predictor] which defaults to ``True``. [e2e-predictor] [e2e-predictor] :param bool chunked: [e2e-predictor] If True, urllib3 will send the body using chunked transfer [e2e-predictor] encoding. Otherwise, urllib3 will send the body using the standard [e2e-predictor] content-length form. Defaults to False. [e2e-predictor] [e2e-predictor] :param int body_pos: [e2e-predictor] Position to seek to in file-like body in the event of a retry or [e2e-predictor] redirect. Typically this won't need to be set because urllib3 will [e2e-predictor] auto-populate the value when needed. 
[e2e-predictor] """ [e2e-predictor] parsed_url = parse_url(url) [e2e-predictor] destination_scheme = parsed_url.scheme [e2e-predictor] [e2e-predictor] if headers is None: [e2e-predictor] headers = self.headers [e2e-predictor] [e2e-predictor] if not isinstance(retries, Retry): [e2e-predictor] retries = Retry.from_int(retries, redirect=redirect, default=self.retries) [e2e-predictor] [e2e-predictor] if release_conn is None: [e2e-predictor] release_conn = preload_content [e2e-predictor] [e2e-predictor] # Check host [e2e-predictor] if assert_same_host and not self.is_same_host(url): [e2e-predictor] raise HostChangedError(self, url, retries) [e2e-predictor] [e2e-predictor] # Ensure that the URL we're connecting to is properly encoded [e2e-predictor] if url.startswith("/"): [e2e-predictor] url = to_str(_encode_target(url)) [e2e-predictor] else: [e2e-predictor] url = to_str(parsed_url.url) [e2e-predictor] [e2e-predictor] conn = None [e2e-predictor] [e2e-predictor] # Track whether `conn` needs to be released before [e2e-predictor] # returning/raising/recursing. Update this variable if necessary, and [e2e-predictor] # leave `release_conn` constant throughout the function. That way, if [e2e-predictor] # the function recurses, the original value of `release_conn` will be [e2e-predictor] # passed down into the recursive call, and its value will be respected. [e2e-predictor] # [e2e-predictor] # See issue #651 [1] for details. [e2e-predictor] # [e2e-predictor] # [1] [e2e-predictor] release_this_conn = release_conn [e2e-predictor] [e2e-predictor] http_tunnel_required = connection_requires_http_tunnel( [e2e-predictor] self.proxy, self.proxy_config, destination_scheme [e2e-predictor] ) [e2e-predictor] [e2e-predictor] # Merge the proxy headers. Only done when not using HTTP CONNECT. We [e2e-predictor] # have to copy the headers dict so we can safely change it without those [e2e-predictor] # changes being reflected in anyone else's copy. 
[e2e-predictor] if not http_tunnel_required: [e2e-predictor] headers = headers.copy() # type: ignore[attr-defined] [e2e-predictor] headers.update(self.proxy_headers) # type: ignore[union-attr] [e2e-predictor] [e2e-predictor] # Must keep the exception bound to a separate variable or else Python 3 [e2e-predictor] # complains about UnboundLocalError. [e2e-predictor] err = None [e2e-predictor] [e2e-predictor] # Keep track of whether we cleanly exited the except block. This [e2e-predictor] # ensures we do proper cleanup in finally. [e2e-predictor] clean_exit = False [e2e-predictor] [e2e-predictor] # Rewind body position, if needed. Record current position [e2e-predictor] # for future rewinds in the event of a redirect/retry. [e2e-predictor] body_pos = set_file_position(body, body_pos) [e2e-predictor] [e2e-predictor] try: [e2e-predictor] # Request a connection from the queue. [e2e-predictor] timeout_obj = self._get_timeout(timeout) [e2e-predictor] conn = self._get_conn(timeout=pool_timeout) [e2e-predictor] [e2e-predictor] conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] [e2e-predictor] [e2e-predictor] # Is this a closed/new connection that requires CONNECT tunnelling? [e2e-predictor] if self.proxy is not None and http_tunnel_required and conn.is_closed: [e2e-predictor] try: [e2e-predictor] self._prepare_proxy(conn) [e2e-predictor] except (BaseSSLError, OSError, SocketTimeout) as e: [e2e-predictor] self._raise_timeout( [e2e-predictor] err=e, url=self.proxy.url, timeout_value=conn.timeout [e2e-predictor] ) [e2e-predictor] raise [e2e-predictor] [e2e-predictor] # If we're going to release the connection in ``finally:``, then [e2e-predictor] # the response doesn't need to know about the connection. Otherwise [e2e-predictor] # it will also try to release it and we'll have a double-release [e2e-predictor] # mess. 
[e2e-predictor] response_conn = conn if not release_conn else None [e2e-predictor] [e2e-predictor] # Make the request on the HTTPConnection object [e2e-predictor] response = self._make_request( [e2e-predictor] conn, [e2e-predictor] method, [e2e-predictor] url, [e2e-predictor] timeout=timeout_obj, [e2e-predictor] body=body, [e2e-predictor] headers=headers, [e2e-predictor] chunked=chunked, [e2e-predictor] retries=retries, [e2e-predictor] response_conn=response_conn, [e2e-predictor] preload_content=preload_content, [e2e-predictor] decode_content=decode_content, [e2e-predictor] **response_kw, [e2e-predictor] ) [e2e-predictor] [e2e-predictor] # Everything went great! [e2e-predictor] clean_exit = True [e2e-predictor] [e2e-predictor] except EmptyPoolError: [e2e-predictor] # Didn't get a connection from the pool, no need to clean up [e2e-predictor] clean_exit = True [e2e-predictor] release_this_conn = False [e2e-predictor] raise [e2e-predictor] [e2e-predictor] except ( [e2e-predictor] TimeoutError, [e2e-predictor] HTTPException, [e2e-predictor] OSError, [e2e-predictor] ProtocolError, [e2e-predictor] BaseSSLError, [e2e-predictor] SSLError, [e2e-predictor] CertificateError, [e2e-predictor] ProxyError, [e2e-predictor] ) as e: [e2e-predictor] # Discard the connection for these exceptions. It will be [e2e-predictor] # replaced during the next _get_conn() call. 
[e2e-predictor] clean_exit = False [e2e-predictor] new_e: Exception = e [e2e-predictor] if isinstance(e, (BaseSSLError, CertificateError)): [e2e-predictor] new_e = SSLError(e) [e2e-predictor] if isinstance( [e2e-predictor] new_e, [e2e-predictor] ( [e2e-predictor] OSError, [e2e-predictor] NewConnectionError, [e2e-predictor] TimeoutError, [e2e-predictor] SSLError, [e2e-predictor] HTTPException, [e2e-predictor] ), [e2e-predictor] ) and (conn and conn.proxy and not conn.has_connected_to_proxy): [e2e-predictor] new_e = _wrap_proxy_error(new_e, conn.proxy.scheme) [e2e-predictor] elif isinstance(new_e, (OSError, HTTPException)): [e2e-predictor] new_e = ProtocolError("Connection aborted.", new_e) [e2e-predictor] [e2e-predictor] retries = retries.increment( [e2e-predictor] method, url, error=new_e, _pool=self, _stacktrace=sys.exc_info()[2] [e2e-predictor] ) [e2e-predictor] retries.sleep() [e2e-predictor] [e2e-predictor] # Keep track of the error for the retry warning. [e2e-predictor] err = e [e2e-predictor] [e2e-predictor] finally: [e2e-predictor] if not clean_exit: [e2e-predictor] # We hit some kind of exception, handled or otherwise. We need [e2e-predictor] # to throw the connection away unless explicitly told not to. [e2e-predictor] # Close the connection, set the variable to None, and make sure [e2e-predictor] # we put the None back in the pool to avoid leaking it. [e2e-predictor] if conn: [e2e-predictor] conn.close() [e2e-predictor] conn = None [e2e-predictor] release_this_conn = True [e2e-predictor] [e2e-predictor] if release_this_conn: [e2e-predictor] # Put the connection back to be reused. If the connection is [e2e-predictor] # expired then it will be None, which will get replaced with a [e2e-predictor] # fresh connection during _get_conn. 
                self._put_conn(conn)

        if not conn:
            # Try again
            log.warning(
                "Retrying (%r) after connection broken by '%r': %s", retries, err, url
            )
>           return self.urlopen(
                method,
                url,
                body,
                headers,
                retries,
                redirect,
                assert_same_host,
                timeout=timeout,
                pool_timeout=pool_timeout,
                release_conn=release_conn,
                chunked=chunked,
                body_pos=body_pos,
                preload_content=preload_content,
                decode_content=decode_content,
                **response_kw,
            )

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:871:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
method = 'POST'
url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io..."128Mi"}}, "runtime": "kserve-predictiveserver", "storageUri": "gs://kfserving-examples/models/sklearn/1.0/model"}}}}'
headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'})
retries = Retry(total=0, connect=None, read=None, redirect=None, status=None)
redirect = False, assert_same_host = False, timeout = None, pool_timeout = None
release_conn = True, chunked = False, body_pos = None, preload_content = True
decode_content = True, response_kw = {}
parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices', query=None, fragment=None)
destination_scheme = None, conn = None, release_this_conn = True
http_tunnel_required = False
err = NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.c...a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")
clean_exit = False

    def urlopen(  # type: ignore[override]
        self,
        method: str,
        url: str,
        body: _TYPE_BODY | None = None,
        headers: typing.Mapping[str, str] | None = None,
        retries: Retry | bool | int | None = None,
        redirect: bool = True,
        assert_same_host: bool = True,
        timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
        pool_timeout: int | None = None,
        release_conn: bool | None = None,
        chunked: bool = False,
        body_pos: _TYPE_BODY_POSITION | None = None,
        preload_content: bool = True,
        decode_content: bool = True,
        **response_kw: typing.Any,
    ) -> BaseHTTPResponse:
        """
        Get a connection from the pool and perform an HTTP request. This is the
        lowest level call for making a request, so you'll need to specify all
        the raw details.

        .. note::

           More commonly, it's appropriate to use a convenience method
           such as :meth:`request`.

        .. note::

           `release_conn` will only behave as expected if
           `preload_content=False` because we want to make
           `preload_content=False` the default behaviour someday soon without
           breaking backwards compatibility.

        :param method:
            HTTP request method (such as GET, POST, PUT, etc.)

        :param url:
            The URL to perform the request on.

        :param body:
            Data to send in the request body, either :class:`str`, :class:`bytes`,
            an iterable of :class:`str`/:class:`bytes`, or a file-like object.

        :param headers:
            Dictionary of custom headers to send, such as User-Agent,
            If-None-Match, etc. If None, pool headers are used. If provided,
            these headers completely replace any pool-specific headers.

        :param retries:
            Configure the number of retries to allow before raising a
            :class:`~urllib3.exceptions.MaxRetryError` exception.

            If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a
            :class:`~urllib3.util.retry.Retry` object for fine-grained control
            over different types of retries.
            Pass an integer number to retry connection errors that many times,
            but no other types of errors. Pass zero to never retry.

            If ``False``, then retries are disabled and any exception is raised
            immediately. Also, instead of raising a MaxRetryError on redirects,
            the redirect response will be returned.

        :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.

        :param redirect:
            If True, automatically handle redirects (status codes 301, 302,
            303, 307, 308). Each redirect counts as a retry. Disabling retries
            will disable redirect, too.

        :param assert_same_host:
            If ``True``, will make sure that the host of the pool requests is
            consistent else will raise HostChangedError. When ``False``, you can
            use the pool on an HTTP proxy and request foreign hosts.

        :param timeout:
            If specified, overrides the default timeout for this one
            request. It may be a float (in seconds) or an instance of
            :class:`urllib3.util.Timeout`.

        :param pool_timeout:
            If set and the pool is set to block=True, then this method will
            block for ``pool_timeout`` seconds and raise EmptyPoolError if no
            connection is available within the time period.

        :param bool preload_content:
            If True, the response's body will be preloaded into memory.

        :param bool decode_content:
            If True, will attempt to decode the body based on the
            'content-encoding' header.

        :param release_conn:
            If False, then the urlopen call will not release the connection
            back into the pool once a response is received (but will release if
            you read the entire contents of the response such as when
            `preload_content=True`). This is useful if you're not preloading
            the response's content immediately. You will need to call
            ``r.release_conn()`` on the response ``r`` to return the connection
            back into the pool. If None, it takes the value of ``preload_content``
            which defaults to ``True``.

        :param bool chunked:
            If True, urllib3 will send the body using chunked transfer
            encoding. Otherwise, urllib3 will send the body using the standard
            content-length form. Defaults to False.

        :param int body_pos:
            Position to seek to in file-like body in the event of a retry or
            redirect. Typically this won't need to be set because urllib3 will
            auto-populate the value when needed.
        """
        parsed_url = parse_url(url)
        destination_scheme = parsed_url.scheme

        if headers is None:
            headers = self.headers

        if not isinstance(retries, Retry):
            retries = Retry.from_int(retries, redirect=redirect, default=self.retries)

        if release_conn is None:
            release_conn = preload_content

        # Check host
        if assert_same_host and not self.is_same_host(url):
            raise HostChangedError(self, url, retries)

        # Ensure that the URL we're connecting to is properly encoded
        if url.startswith("/"):
            url = to_str(_encode_target(url))
        else:
            url = to_str(parsed_url.url)

        conn = None

        # Track whether `conn` needs to be released before
        # returning/raising/recursing. Update this variable if necessary, and
        # leave `release_conn` constant throughout the function. That way, if
        # the function recurses, the original value of `release_conn` will be
        # passed down into the recursive call, and its value will be respected.
        #
        # See issue #651 [1] for details.
        #
        # [1]
        release_this_conn = release_conn

        http_tunnel_required = connection_requires_http_tunnel(
            self.proxy, self.proxy_config, destination_scheme
        )

        # Merge the proxy headers. Only done when not using HTTP CONNECT. We
        # have to copy the headers dict so we can safely change it without those
        # changes being reflected in anyone else's copy.
        if not http_tunnel_required:
            headers = headers.copy()  # type: ignore[attr-defined]
            headers.update(self.proxy_headers)  # type: ignore[union-attr]

        # Must keep the exception bound to a separate variable or else Python 3
        # complains about UnboundLocalError.
        err = None

        # Keep track of whether we cleanly exited the except block. This
        # ensures we do proper cleanup in finally.
        clean_exit = False

        # Rewind body position, if needed. Record current position
        # for future rewinds in the event of a redirect/retry.
        body_pos = set_file_position(body, body_pos)

        try:
            # Request a connection from the queue.
            timeout_obj = self._get_timeout(timeout)
            conn = self._get_conn(timeout=pool_timeout)

            conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]

            # Is this a closed/new connection that requires CONNECT tunnelling?
            if self.proxy is not None and http_tunnel_required and conn.is_closed:
                try:
                    self._prepare_proxy(conn)
                except (BaseSSLError, OSError, SocketTimeout) as e:
                    self._raise_timeout(
                        err=e, url=self.proxy.url, timeout_value=conn.timeout
                    )
                    raise

            # If we're going to release the connection in ``finally:``, then
            # the response doesn't need to know about the connection. Otherwise
            # it will also try to release it and we'll have a double-release
            # mess.
            response_conn = conn if not release_conn else None

            # Make the request on the HTTPConnection object
            response = self._make_request(
                conn,
                method,
                url,
                timeout=timeout_obj,
                body=body,
                headers=headers,
                chunked=chunked,
                retries=retries,
                response_conn=response_conn,
                preload_content=preload_content,
                decode_content=decode_content,
                **response_kw,
            )

            # Everything went great!
            clean_exit = True

        except EmptyPoolError:
            # Didn't get a connection from the pool, no need to clean up
            clean_exit = True
            release_this_conn = False
            raise

        except (
            TimeoutError,
            HTTPException,
            OSError,
            ProtocolError,
            BaseSSLError,
            SSLError,
            CertificateError,
            ProxyError,
        ) as e:
            # Discard the connection for these exceptions. It will be
            # replaced during the next _get_conn() call.
            clean_exit = False
            new_e: Exception = e
            if isinstance(e, (BaseSSLError, CertificateError)):
                new_e = SSLError(e)
            if isinstance(
                new_e,
                (
                    OSError,
                    NewConnectionError,
                    TimeoutError,
                    SSLError,
                    HTTPException,
                ),
            ) and (conn and conn.proxy and not conn.has_connected_to_proxy):
                new_e = _wrap_proxy_error(new_e, conn.proxy.scheme)
            elif isinstance(new_e, (OSError, HTTPException)):
                new_e = ProtocolError("Connection aborted.", new_e)

            retries = retries.increment(
                method, url, error=new_e, _pool=self, _stacktrace=sys.exc_info()[2]
            )
            retries.sleep()

            # Keep track of the error for the retry warning.
            err = e

        finally:
            if not clean_exit:
                # We hit some kind of exception, handled or otherwise. We need
                # to throw the connection away unless explicitly told not to.
                # Close the connection, set the variable to None, and make sure
                # we put the None back in the pool to avoid leaking it.
                if conn:
                    conn.close()
                    conn = None
                release_this_conn = True

            if release_this_conn:
                # Put the connection back to be reused. If the connection is
                # expired then it will be None, which will get replaced with a
                # fresh connection during _get_conn.
                self._put_conn(conn)

        if not conn:
            # Try again
            log.warning(
                "Retrying (%r) after connection broken by '%r': %s", retries, err, url
            )
>           return self.urlopen(
                method,
                url,
                body,
                headers,
                retries,
                redirect,
                assert_same_host,
                timeout=timeout,
                pool_timeout=pool_timeout,
                release_conn=release_conn,
                chunked=chunked,
                body_pos=body_pos,
                preload_content=preload_content,
                decode_content=decode_content,
                **response_kw,
            )

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:871:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
method = 'POST'
url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io..."128Mi"}}, "runtime": "kserve-predictiveserver", "storageUri": "gs://kfserving-examples/models/sklearn/1.0/model"}}}}'
headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'})
retries = Retry(total=0, connect=None, read=None, redirect=None, status=None)
redirect = False, assert_same_host = False, timeout = None, pool_timeout = None
release_conn = True, chunked = False, body_pos = None, preload_content = True
decode_content = True, response_kw = {}
parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices', query=None, fragment=None)
destination_scheme = None, conn = None, release_this_conn = True
http_tunnel_required = False, err = None, clean_exit = False
            clean_exit = False
            new_e: Exception = e
            if isinstance(e, (BaseSSLError, CertificateError)):
                new_e = SSLError(e)
            if isinstance(
                new_e,
                (
                    OSError,
                    NewConnectionError,
                    TimeoutError,
                    SSLError,
                    HTTPException,
                ),
            ) and (conn and conn.proxy and not conn.has_connected_to_proxy):
                new_e = _wrap_proxy_error(new_e, conn.proxy.scheme)
            elif isinstance(new_e, (OSError, HTTPException)):
                new_e = ProtocolError("Connection aborted.", new_e)

>           retries = retries.increment(
                method, url, error=new_e, _pool=self, _stacktrace=sys.exc_info()[2]
            )

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:841:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = Retry(total=0, connect=None, read=None, redirect=None, status=None)
method = 'POST'
url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
response = None
error = NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.c...a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")
_pool =
_stacktrace =

    def increment(
        self,
        method: str | None = None,
        url: str | None = None,
        response: BaseHTTPResponse | None = None,
        error: Exception | None = None,
        _pool: ConnectionPool | None = None,
        _stacktrace: TracebackType | None = None,
    ) -> Self:
        """Return a new Retry object with incremented retry counters.

        :param response: A response object, or None, if the server did not
            return a response.
        :type response: :class:`~urllib3.response.BaseHTTPResponse`
        :param Exception error: An error encountered during the request, or
            None if the response was received successfully.

        :return: A new ``Retry`` object.
        """
        if self.total is False and error:
            # Disabled, indicate to re-raise the error.
            raise reraise(type(error), error, _stacktrace)

        total = self.total
        if total is not None:
            total -= 1

        connect = self.connect
        read = self.read
        redirect = self.redirect
        status_count = self.status
        other = self.other
        cause = "unknown"
        status = None
        redirect_location = None

        if error and self._is_connection_error(error):
            # Connect retry?
            if connect is False:
                raise reraise(type(error), error, _stacktrace)
            elif connect is not None:
                connect -= 1

        elif error and self._is_read_error(error):
            # Read retry?
            if read is False or method is None or not self._is_method_retryable(method):
                raise reraise(type(error), error, _stacktrace)
            elif read is not None:
                read -= 1

        elif error:
            # Other retry?
            if other is not None:
                other -= 1

        elif response and response.get_redirect_location():
            # Redirect retry?
            if redirect is not None:
                redirect -= 1
            cause = "too many redirects"
            response_redirect_location = response.get_redirect_location()
            if response_redirect_location:
                redirect_location = response_redirect_location
            status = response.status

        else:
            # Incrementing because of a server error like a 500 in
            # status_forcelist and the given method is in the allowed_methods
            cause = ResponseError.GENERIC_ERROR
            if response and response.status:
                if status_count is not None:
                    status_count -= 1
                cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
                status = response.status

        history = self.history + (
            RequestHistory(method, url, error, status, redirect_location),
        )

        new_retry = self.new(
            total=total,
            connect=connect,
            read=read,
            redirect=redirect,
            status=status_count,
            other=other,
            history=history,
        )

        if new_retry.is_exhausted():
            reason = error or ResponseError(cause)
>           raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
E           urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Max retries exceeded with url: /apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices (Caused by NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)"))

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError
------------------------------ Captured log call -------------------------------
WARNING urllib3.connectionpool:connectionpool.py:868 Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")': /apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices
WARNING urllib3.connectionpool:connectionpool.py:868 Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")': /apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices
WARNING urllib3.connectionpool:connectionpool.py:868 Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")': /apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices
__________________________ test_predictive_xgboost_v1 __________________________
[gw0] linux -- Python 3.11.13 /workspace/source/python/kserve/.venv/bin/python

self =

    def _new_conn(self) -> socket.socket:
        """Establish a socket connection and set nodelay settings on it.

        :return: New socket connection.
        """
        try:
>           sock = connection.create_connection(
                (self._dns_host, self.port),
                self.timeout,
                source_address=self.source_address,
                socket_options=self.socket_options,
            )

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connection.py:204:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

address = ('a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', 6443)
timeout = None, source_address = None, socket_options = [(6, 1, 1)]

    def create_connection(
        address: tuple[str, int],
        timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
        source_address: tuple[str, int] | None = None,
        socket_options: _TYPE_SOCKET_OPTIONS | None = None,
    ) -> socket.socket:
        """Connect to *address* and return the socket object.

        Convenience function. Connect to *address* (a 2-tuple ``(host,
        port)``) and return the socket object. Passing the optional
        *timeout* parameter will set the timeout on the socket instance
        before attempting to connect.
If no *timeout* is supplied, the [e2e-predictor] global default timeout setting returned by :func:`socket.getdefaulttimeout` [e2e-predictor] is used. If *source_address* is set it must be a tuple of (host, port) [e2e-predictor] for the socket to bind as a source address before making the connection. [e2e-predictor] An host of '' or port 0 tells the OS to use the default. [e2e-predictor] """ [e2e-predictor] [e2e-predictor] host, port = address [e2e-predictor] if host.startswith("["): [e2e-predictor] host = host.strip("[]") [e2e-predictor] err = None [e2e-predictor] [e2e-predictor] # Using the value from allowed_gai_family() in the context of getaddrinfo lets [e2e-predictor] # us select whether to work with IPv4 DNS records, IPv6 records, or both. [e2e-predictor] # The original create_connection function always returns all records. [e2e-predictor] family = allowed_gai_family() [e2e-predictor] [e2e-predictor] try: [e2e-predictor] host.encode("idna") [e2e-predictor] except UnicodeError: [e2e-predictor] raise LocationParseError(f"'{host}', label empty or too long") from None [e2e-predictor] [e2e-predictor] > for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/util/connection.py:60: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] host = 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' [e2e-predictor] port = 6443, family = [e2e-predictor] type = , proto = 0, flags = 0 [e2e-predictor] [e2e-predictor] def getaddrinfo(host, port, family=0, type=0, proto=0, flags=0): [e2e-predictor] """Resolve host and port into list of address info entries. [e2e-predictor] [e2e-predictor] Translate the host/port argument into a sequence of 5-tuples that contain [e2e-predictor] all the necessary arguments for creating a socket connected to that service. 
[e2e-predictor] host is a domain name, a string representation of an IPv4/v6 address or [e2e-predictor] None. port is a string service name such as 'http', a numeric port number or [e2e-predictor] None. By passing None as the value of host and port, you can pass NULL to [e2e-predictor] the underlying C API. [e2e-predictor] [e2e-predictor] The family, type and proto arguments can be optionally specified in order to [e2e-predictor] narrow the list of addresses returned. Passing zero as a value for each of [e2e-predictor] these arguments selects the full range of results. [e2e-predictor] """ [e2e-predictor] # We override this function since we want to translate the numeric family [e2e-predictor] # and socket type values to enum constants. [e2e-predictor] addrlist = [] [e2e-predictor] > for res in _socket.getaddrinfo(host, port, family, type, proto, flags): [e2e-predictor] E socket.gaierror: [Errno -2] Name or service not known [e2e-predictor] [e2e-predictor] /usr/lib64/python3.11/socket.py:974: gaierror [e2e-predictor] [e2e-predictor] The above exception was the direct cause of the following exception: [e2e-predictor] [e2e-predictor] self = [e2e-predictor] method = 'POST' [e2e-predictor] url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices' [e2e-predictor] body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io... 
"128Mi"}}, "runtime": "kserve-predictiveserver", "storageUri": "gs://kfserving-examples/models/xgboost/1.5/model"}}}}' [e2e-predictor] headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'}) [e2e-predictor] retries = Retry(total=0, connect=None, read=None, redirect=None, status=None) [e2e-predictor] redirect = False, assert_same_host = False, timeout = None, pool_timeout = None [e2e-predictor] release_conn = True, chunked = False, body_pos = None, preload_content = True [e2e-predictor] decode_content = True, response_kw = {} [e2e-predictor] parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices', query=None, fragment=None) [e2e-predictor] destination_scheme = None, conn = None, release_this_conn = True [e2e-predictor] http_tunnel_required = False, err = None, clean_exit = False [e2e-predictor] [e2e-predictor] def urlopen( # type: ignore[override] [e2e-predictor] self, [e2e-predictor] method: str, [e2e-predictor] url: str, [e2e-predictor] body: _TYPE_BODY | None = None, [e2e-predictor] headers: typing.Mapping[str, str] | None = None, [e2e-predictor] retries: Retry | bool | int | None = None, [e2e-predictor] redirect: bool = True, [e2e-predictor] assert_same_host: bool = True, [e2e-predictor] timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, [e2e-predictor] pool_timeout: int | None = None, [e2e-predictor] release_conn: bool | None = None, [e2e-predictor] chunked: bool = False, [e2e-predictor] body_pos: _TYPE_BODY_POSITION | None = None, [e2e-predictor] preload_content: bool = True, [e2e-predictor] decode_content: bool = True, [e2e-predictor] **response_kw: typing.Any, [e2e-predictor] ) -> BaseHTTPResponse: [e2e-predictor] """ [e2e-predictor] Get a connection from the pool and perform an HTTP request. 
This is the [e2e-predictor] lowest level call for making a request, so you'll need to specify all [e2e-predictor] the raw details. [e2e-predictor] [e2e-predictor] .. note:: [e2e-predictor] [e2e-predictor] More commonly, it's appropriate to use a convenience method [e2e-predictor] such as :meth:`request`. [e2e-predictor] [e2e-predictor] .. note:: [e2e-predictor] [e2e-predictor] `release_conn` will only behave as expected if [e2e-predictor] `preload_content=False` because we want to make [e2e-predictor] `preload_content=False` the default behaviour someday soon without [e2e-predictor] breaking backwards compatibility. [e2e-predictor] [e2e-predictor] :param method: [e2e-predictor] HTTP request method (such as GET, POST, PUT, etc.) [e2e-predictor] [e2e-predictor] :param url: [e2e-predictor] The URL to perform the request on. [e2e-predictor] [e2e-predictor] :param body: [e2e-predictor] Data to send in the request body, either :class:`str`, :class:`bytes`, [e2e-predictor] an iterable of :class:`str`/:class:`bytes`, or a file-like object. [e2e-predictor] [e2e-predictor] :param headers: [e2e-predictor] Dictionary of custom headers to send, such as User-Agent, [e2e-predictor] If-None-Match, etc. If None, pool headers are used. If provided, [e2e-predictor] these headers completely replace any pool-specific headers. [e2e-predictor] [e2e-predictor] :param retries: [e2e-predictor] Configure the number of retries to allow before raising a [e2e-predictor] :class:`~urllib3.exceptions.MaxRetryError` exception. [e2e-predictor] [e2e-predictor] If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a [e2e-predictor] :class:`~urllib3.util.retry.Retry` object for fine-grained control [e2e-predictor] over different types of retries. [e2e-predictor] Pass an integer number to retry connection errors that many times, [e2e-predictor] but no other types of errors. Pass zero to never retry. 
[e2e-predictor] [e2e-predictor] If ``False``, then retries are disabled and any exception is raised [e2e-predictor] immediately. Also, instead of raising a MaxRetryError on redirects, [e2e-predictor] the redirect response will be returned. [e2e-predictor] [e2e-predictor] :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. [e2e-predictor] [e2e-predictor] :param redirect: [e2e-predictor] If True, automatically handle redirects (status codes 301, 302, [e2e-predictor] 303, 307, 308). Each redirect counts as a retry. Disabling retries [e2e-predictor] will disable redirect, too. [e2e-predictor] [e2e-predictor] :param assert_same_host: [e2e-predictor] If ``True``, will make sure that the host of the pool requests is [e2e-predictor] consistent else will raise HostChangedError. When ``False``, you can [e2e-predictor] use the pool on an HTTP proxy and request foreign hosts. [e2e-predictor] [e2e-predictor] :param timeout: [e2e-predictor] If specified, overrides the default timeout for this one [e2e-predictor] request. It may be a float (in seconds) or an instance of [e2e-predictor] :class:`urllib3.util.Timeout`. [e2e-predictor] [e2e-predictor] :param pool_timeout: [e2e-predictor] If set and the pool is set to block=True, then this method will [e2e-predictor] block for ``pool_timeout`` seconds and raise EmptyPoolError if no [e2e-predictor] connection is available within the time period. [e2e-predictor] [e2e-predictor] :param bool preload_content: [e2e-predictor] If True, the response's body will be preloaded into memory. [e2e-predictor] [e2e-predictor] :param bool decode_content: [e2e-predictor] If True, will attempt to decode the body based on the [e2e-predictor] 'content-encoding' header. 
[e2e-predictor] [e2e-predictor] :param release_conn: [e2e-predictor] If False, then the urlopen call will not release the connection [e2e-predictor] back into the pool once a response is received (but will release if [e2e-predictor] you read the entire contents of the response such as when [e2e-predictor] `preload_content=True`). This is useful if you're not preloading [e2e-predictor] the response's content immediately. You will need to call [e2e-predictor] ``r.release_conn()`` on the response ``r`` to return the connection [e2e-predictor] back into the pool. If None, it takes the value of ``preload_content`` [e2e-predictor] which defaults to ``True``. [e2e-predictor] [e2e-predictor] :param bool chunked: [e2e-predictor] If True, urllib3 will send the body using chunked transfer [e2e-predictor] encoding. Otherwise, urllib3 will send the body using the standard [e2e-predictor] content-length form. Defaults to False. [e2e-predictor] [e2e-predictor] :param int body_pos: [e2e-predictor] Position to seek to in file-like body in the event of a retry or [e2e-predictor] redirect. Typically this won't need to be set because urllib3 will [e2e-predictor] auto-populate the value when needed. 
[e2e-predictor] """ [e2e-predictor] parsed_url = parse_url(url) [e2e-predictor] destination_scheme = parsed_url.scheme [e2e-predictor] [e2e-predictor] if headers is None: [e2e-predictor] headers = self.headers [e2e-predictor] [e2e-predictor] if not isinstance(retries, Retry): [e2e-predictor] retries = Retry.from_int(retries, redirect=redirect, default=self.retries) [e2e-predictor] [e2e-predictor] if release_conn is None: [e2e-predictor] release_conn = preload_content [e2e-predictor] [e2e-predictor] # Check host [e2e-predictor] if assert_same_host and not self.is_same_host(url): [e2e-predictor] raise HostChangedError(self, url, retries) [e2e-predictor] [e2e-predictor] # Ensure that the URL we're connecting to is properly encoded [e2e-predictor] if url.startswith("/"): [e2e-predictor] url = to_str(_encode_target(url)) [e2e-predictor] else: [e2e-predictor] url = to_str(parsed_url.url) [e2e-predictor] [e2e-predictor] conn = None [e2e-predictor] [e2e-predictor] # Track whether `conn` needs to be released before [e2e-predictor] # returning/raising/recursing. Update this variable if necessary, and [e2e-predictor] # leave `release_conn` constant throughout the function. That way, if [e2e-predictor] # the function recurses, the original value of `release_conn` will be [e2e-predictor] # passed down into the recursive call, and its value will be respected. [e2e-predictor] # [e2e-predictor] # See issue #651 [1] for details. [e2e-predictor] # [e2e-predictor] # [1] [e2e-predictor] release_this_conn = release_conn [e2e-predictor] [e2e-predictor] http_tunnel_required = connection_requires_http_tunnel( [e2e-predictor] self.proxy, self.proxy_config, destination_scheme [e2e-predictor] ) [e2e-predictor] [e2e-predictor] # Merge the proxy headers. Only done when not using HTTP CONNECT. We [e2e-predictor] # have to copy the headers dict so we can safely change it without those [e2e-predictor] # changes being reflected in anyone else's copy. 
[e2e-predictor] if not http_tunnel_required: [e2e-predictor] headers = headers.copy() # type: ignore[attr-defined] [e2e-predictor] headers.update(self.proxy_headers) # type: ignore[union-attr] [e2e-predictor] [e2e-predictor] # Must keep the exception bound to a separate variable or else Python 3 [e2e-predictor] # complains about UnboundLocalError. [e2e-predictor] err = None [e2e-predictor] [e2e-predictor] # Keep track of whether we cleanly exited the except block. This [e2e-predictor] # ensures we do proper cleanup in finally. [e2e-predictor] clean_exit = False [e2e-predictor] [e2e-predictor] # Rewind body position, if needed. Record current position [e2e-predictor] # for future rewinds in the event of a redirect/retry. [e2e-predictor] body_pos = set_file_position(body, body_pos) [e2e-predictor] [e2e-predictor] try: [e2e-predictor] # Request a connection from the queue. [e2e-predictor] timeout_obj = self._get_timeout(timeout) [e2e-predictor] conn = self._get_conn(timeout=pool_timeout) [e2e-predictor] [e2e-predictor] conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] [e2e-predictor] [e2e-predictor] # Is this a closed/new connection that requires CONNECT tunnelling? [e2e-predictor] if self.proxy is not None and http_tunnel_required and conn.is_closed: [e2e-predictor] try: [e2e-predictor] self._prepare_proxy(conn) [e2e-predictor] except (BaseSSLError, OSError, SocketTimeout) as e: [e2e-predictor] self._raise_timeout( [e2e-predictor] err=e, url=self.proxy.url, timeout_value=conn.timeout [e2e-predictor] ) [e2e-predictor] raise [e2e-predictor] [e2e-predictor] # If we're going to release the connection in ``finally:``, then [e2e-predictor] # the response doesn't need to know about the connection. Otherwise [e2e-predictor] # it will also try to release it and we'll have a double-release [e2e-predictor] # mess. 
[e2e-predictor] response_conn = conn if not release_conn else None [e2e-predictor] [e2e-predictor] # Make the request on the HTTPConnection object [e2e-predictor] > response = self._make_request( [e2e-predictor] conn, [e2e-predictor] method, [e2e-predictor] url, [e2e-predictor] timeout=timeout_obj, [e2e-predictor] body=body, [e2e-predictor] headers=headers, [e2e-predictor] chunked=chunked, [e2e-predictor] retries=retries, [e2e-predictor] response_conn=response_conn, [e2e-predictor] preload_content=preload_content, [e2e-predictor] decode_content=decode_content, [e2e-predictor] **response_kw, [e2e-predictor] ) [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:787: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] conn = [e2e-predictor] method = 'POST' [e2e-predictor] url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices' [e2e-predictor] body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io... 
"128Mi"}}, "runtime": "kserve-predictiveserver", "storageUri": "gs://kfserving-examples/models/xgboost/1.5/model"}}}}' [e2e-predictor] headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'}) [e2e-predictor] retries = Retry(total=0, connect=None, read=None, redirect=None, status=None) [e2e-predictor] timeout = Timeout(connect=None, read=None, total=None), chunked = False [e2e-predictor] response_conn = None, preload_content = True, decode_content = True [e2e-predictor] enforce_content_length = True [e2e-predictor] [e2e-predictor] def _make_request( [e2e-predictor] self, [e2e-predictor] conn: BaseHTTPConnection, [e2e-predictor] method: str, [e2e-predictor] url: str, [e2e-predictor] body: _TYPE_BODY | None = None, [e2e-predictor] headers: typing.Mapping[str, str] | None = None, [e2e-predictor] retries: Retry | None = None, [e2e-predictor] timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, [e2e-predictor] chunked: bool = False, [e2e-predictor] response_conn: BaseHTTPConnection | None = None, [e2e-predictor] preload_content: bool = True, [e2e-predictor] decode_content: bool = True, [e2e-predictor] enforce_content_length: bool = True, [e2e-predictor] ) -> BaseHTTPResponse: [e2e-predictor] """ [e2e-predictor] Perform a request on a given urllib connection object taken from our [e2e-predictor] pool. [e2e-predictor] [e2e-predictor] :param conn: [e2e-predictor] a connection from one of our connection pools [e2e-predictor] [e2e-predictor] :param method: [e2e-predictor] HTTP request method (such as GET, POST, PUT, etc.) [e2e-predictor] [e2e-predictor] :param url: [e2e-predictor] The URL to perform the request on. [e2e-predictor] [e2e-predictor] :param body: [e2e-predictor] Data to send in the request body, either :class:`str`, :class:`bytes`, [e2e-predictor] an iterable of :class:`str`/:class:`bytes`, or a file-like object. 
[e2e-predictor] [e2e-predictor] :param headers: [e2e-predictor] Dictionary of custom headers to send, such as User-Agent, [e2e-predictor] If-None-Match, etc. If None, pool headers are used. If provided, [e2e-predictor] these headers completely replace any pool-specific headers. [e2e-predictor] [e2e-predictor] :param retries: [e2e-predictor] Configure the number of retries to allow before raising a [e2e-predictor] :class:`~urllib3.exceptions.MaxRetryError` exception. [e2e-predictor] [e2e-predictor] Pass ``None`` to retry until you receive a response. Pass a [e2e-predictor] :class:`~urllib3.util.retry.Retry` object for fine-grained control [e2e-predictor] over different types of retries. [e2e-predictor] Pass an integer number to retry connection errors that many times, [e2e-predictor] but no other types of errors. Pass zero to never retry. [e2e-predictor] [e2e-predictor] If ``False``, then retries are disabled and any exception is raised [e2e-predictor] immediately. Also, instead of raising a MaxRetryError on redirects, [e2e-predictor] the redirect response will be returned. [e2e-predictor] [e2e-predictor] :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. [e2e-predictor] [e2e-predictor] :param timeout: [e2e-predictor] If specified, overrides the default timeout for this one [e2e-predictor] request. It may be a float (in seconds) or an instance of [e2e-predictor] :class:`urllib3.util.Timeout`. [e2e-predictor] [e2e-predictor] :param chunked: [e2e-predictor] If True, urllib3 will send the body using chunked transfer [e2e-predictor] encoding. Otherwise, urllib3 will send the body using the standard [e2e-predictor] content-length form. Defaults to False. [e2e-predictor] [e2e-predictor] :param response_conn: [e2e-predictor] Set this to ``None`` if you will handle releasing the connection or [e2e-predictor] set the connection to have the response release it. 
[e2e-predictor] [e2e-predictor] :param preload_content: [e2e-predictor] If True, the response's body will be preloaded during construction. [e2e-predictor] [e2e-predictor] :param decode_content: [e2e-predictor] If True, will attempt to decode the body based on the [e2e-predictor] 'content-encoding' header. [e2e-predictor] [e2e-predictor] :param enforce_content_length: [e2e-predictor] Enforce content length checking. Body returned by server must match [e2e-predictor] value of Content-Length header, if present. Otherwise, raise error. [e2e-predictor] """ [e2e-predictor] self.num_requests += 1 [e2e-predictor] [e2e-predictor] timeout_obj = self._get_timeout(timeout) [e2e-predictor] timeout_obj.start_connect() [e2e-predictor] conn.timeout = Timeout.resolve_default_timeout(timeout_obj.connect_timeout) [e2e-predictor] [e2e-predictor] try: [e2e-predictor] # Trigger any extra validation we need to do. [e2e-predictor] try: [e2e-predictor] self._validate_conn(conn) [e2e-predictor] except (SocketTimeout, BaseSSLError) as e: [e2e-predictor] self._raise_timeout(err=e, url=url, timeout_value=conn.timeout) [e2e-predictor] raise [e2e-predictor] [e2e-predictor] # _validate_conn() starts the connection to an HTTPS proxy [e2e-predictor] # so we need to wrap errors with 'ProxyError' here too. 
[e2e-predictor] except ( [e2e-predictor] OSError, [e2e-predictor] NewConnectionError, [e2e-predictor] TimeoutError, [e2e-predictor] BaseSSLError, [e2e-predictor] CertificateError, [e2e-predictor] SSLError, [e2e-predictor] ) as e: [e2e-predictor] new_e: Exception = e [e2e-predictor] if isinstance(e, (BaseSSLError, CertificateError)): [e2e-predictor] new_e = SSLError(e) [e2e-predictor] # If the connection didn't successfully connect to it's proxy [e2e-predictor] # then there [e2e-predictor] if isinstance( [e2e-predictor] new_e, (OSError, NewConnectionError, TimeoutError, SSLError) [e2e-predictor] ) and (conn and conn.proxy and not conn.has_connected_to_proxy): [e2e-predictor] new_e = _wrap_proxy_error(new_e, conn.proxy.scheme) [e2e-predictor] > raise new_e [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:488: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] conn = [e2e-predictor] method = 'POST' [e2e-predictor] url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices' [e2e-predictor] body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io... 
"128Mi"}}, "runtime": "kserve-predictiveserver", "storageUri": "gs://kfserving-examples/models/xgboost/1.5/model"}}}}' [e2e-predictor] headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'}) [e2e-predictor] retries = Retry(total=0, connect=None, read=None, redirect=None, status=None) [e2e-predictor] timeout = Timeout(connect=None, read=None, total=None), chunked = False [e2e-predictor] response_conn = None, preload_content = True, decode_content = True [e2e-predictor] enforce_content_length = True [e2e-predictor] [e2e-predictor] def _make_request( [e2e-predictor] self, [e2e-predictor] conn: BaseHTTPConnection, [e2e-predictor] method: str, [e2e-predictor] url: str, [e2e-predictor] body: _TYPE_BODY | None = None, [e2e-predictor] headers: typing.Mapping[str, str] | None = None, [e2e-predictor] retries: Retry | None = None, [e2e-predictor] timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, [e2e-predictor] chunked: bool = False, [e2e-predictor] response_conn: BaseHTTPConnection | None = None, [e2e-predictor] preload_content: bool = True, [e2e-predictor] decode_content: bool = True, [e2e-predictor] enforce_content_length: bool = True, [e2e-predictor] ) -> BaseHTTPResponse: [e2e-predictor] """ [e2e-predictor] Perform a request on a given urllib connection object taken from our [e2e-predictor] pool. [e2e-predictor] [e2e-predictor] :param conn: [e2e-predictor] a connection from one of our connection pools [e2e-predictor] [e2e-predictor] :param method: [e2e-predictor] HTTP request method (such as GET, POST, PUT, etc.) [e2e-predictor] [e2e-predictor] :param url: [e2e-predictor] The URL to perform the request on. [e2e-predictor] [e2e-predictor] :param body: [e2e-predictor] Data to send in the request body, either :class:`str`, :class:`bytes`, [e2e-predictor] an iterable of :class:`str`/:class:`bytes`, or a file-like object. 
[e2e-predictor] [e2e-predictor] :param headers: [e2e-predictor] Dictionary of custom headers to send, such as User-Agent, [e2e-predictor] If-None-Match, etc. If None, pool headers are used. If provided, [e2e-predictor] these headers completely replace any pool-specific headers. [e2e-predictor] [e2e-predictor] :param retries: [e2e-predictor] Configure the number of retries to allow before raising a [e2e-predictor] :class:`~urllib3.exceptions.MaxRetryError` exception. [e2e-predictor] [e2e-predictor] Pass ``None`` to retry until you receive a response. Pass a [e2e-predictor] :class:`~urllib3.util.retry.Retry` object for fine-grained control [e2e-predictor] over different types of retries. [e2e-predictor] Pass an integer number to retry connection errors that many times, [e2e-predictor] but no other types of errors. Pass zero to never retry. [e2e-predictor] [e2e-predictor] If ``False``, then retries are disabled and any exception is raised [e2e-predictor] immediately. Also, instead of raising a MaxRetryError on redirects, [e2e-predictor] the redirect response will be returned. [e2e-predictor] [e2e-predictor] :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. [e2e-predictor] [e2e-predictor] :param timeout: [e2e-predictor] If specified, overrides the default timeout for this one [e2e-predictor] request. It may be a float (in seconds) or an instance of [e2e-predictor] :class:`urllib3.util.Timeout`. [e2e-predictor] [e2e-predictor] :param chunked: [e2e-predictor] If True, urllib3 will send the body using chunked transfer [e2e-predictor] encoding. Otherwise, urllib3 will send the body using the standard [e2e-predictor] content-length form. Defaults to False. [e2e-predictor] [e2e-predictor] :param response_conn: [e2e-predictor] Set this to ``None`` if you will handle releasing the connection or [e2e-predictor] set the connection to have the response release it. 
[e2e-predictor] [e2e-predictor] :param preload_content: [e2e-predictor] If True, the response's body will be preloaded during construction. [e2e-predictor] [e2e-predictor] :param decode_content: [e2e-predictor] If True, will attempt to decode the body based on the [e2e-predictor] 'content-encoding' header. [e2e-predictor] [e2e-predictor] :param enforce_content_length: [e2e-predictor] Enforce content length checking. Body returned by server must match [e2e-predictor] value of Content-Length header, if present. Otherwise, raise error. [e2e-predictor] """ [e2e-predictor] self.num_requests += 1 [e2e-predictor] [e2e-predictor] timeout_obj = self._get_timeout(timeout) [e2e-predictor] timeout_obj.start_connect() [e2e-predictor] conn.timeout = Timeout.resolve_default_timeout(timeout_obj.connect_timeout) [e2e-predictor] [e2e-predictor] try: [e2e-predictor] # Trigger any extra validation we need to do. [e2e-predictor] try: [e2e-predictor] > self._validate_conn(conn) [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:464: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] conn = [e2e-predictor] [e2e-predictor] def _validate_conn(self, conn: BaseHTTPConnection) -> None: [e2e-predictor] """ [e2e-predictor] Called right before a request is made, after the socket is created. [e2e-predictor] """ [e2e-predictor] super()._validate_conn(conn) [e2e-predictor] [e2e-predictor] # Force connect early to allow us to validate the connection. 
[e2e-predictor] if conn.is_closed: [e2e-predictor] > conn.connect() [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:1093: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] [e2e-predictor] def connect(self) -> None: [e2e-predictor] # Today we don't need to be doing this step before the /actual/ socket [e2e-predictor] # connection, however in the future we'll need to decide whether to [e2e-predictor] # create a new socket or re-use an existing "shared" socket as a part [e2e-predictor] # of the HTTP/2 handshake dance. [e2e-predictor] if self._tunnel_host is not None and self._tunnel_port is not None: [e2e-predictor] probe_http2_host = self._tunnel_host [e2e-predictor] probe_http2_port = self._tunnel_port [e2e-predictor] else: [e2e-predictor] probe_http2_host = self.host [e2e-predictor] probe_http2_port = self.port [e2e-predictor] [e2e-predictor] # Check if the target origin supports HTTP/2. [e2e-predictor] # If the value comes back as 'None' it means that the current thread [e2e-predictor] # is probing for HTTP/2 support. Otherwise, we're waiting for another [e2e-predictor] # probe to complete, or we get a value right away. [e2e-predictor] target_supports_http2: bool | None [e2e-predictor] if "h2" in ssl_.ALPN_PROTOCOLS: [e2e-predictor] target_supports_http2 = http2_probe.acquire_and_get( [e2e-predictor] host=probe_http2_host, port=probe_http2_port [e2e-predictor] ) [e2e-predictor] else: [e2e-predictor] # If HTTP/2 isn't going to be offered it doesn't matter if [e2e-predictor] # the target supports HTTP/2. Don't want to make a probe. 
[e2e-predictor] target_supports_http2 = False [e2e-predictor] [e2e-predictor] if self._connect_callback is not None: [e2e-predictor] self._connect_callback( [e2e-predictor] "before connect", [e2e-predictor] thread_id=threading.get_ident(), [e2e-predictor] target_supports_http2=target_supports_http2, [e2e-predictor] ) [e2e-predictor] [e2e-predictor] try: [e2e-predictor] sock: socket.socket | ssl.SSLSocket [e2e-predictor] > self.sock = sock = self._new_conn() [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connection.py:759: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] [e2e-predictor] def _new_conn(self) -> socket.socket: [e2e-predictor] """Establish a socket connection and set nodelay settings on it. [e2e-predictor] [e2e-predictor] :return: New socket connection. [e2e-predictor] """ [e2e-predictor] try: [e2e-predictor] sock = connection.create_connection( [e2e-predictor] (self._dns_host, self.port), [e2e-predictor] self.timeout, [e2e-predictor] source_address=self.source_address, [e2e-predictor] socket_options=self.socket_options, [e2e-predictor] ) [e2e-predictor] except socket.gaierror as e: [e2e-predictor] > raise NameResolutionError(self.host, self, e) from e [e2e-predictor] E urllib3.exceptions.NameResolutionError: HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known) [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connection.py:211: NameResolutionError [e2e-predictor] [e2e-predictor] The above exception was the direct cause of the following exception: [e2e-predictor] [e2e-predictor] rest_v1_client = [e2e-predictor] [e2e-predictor] @pytest.mark.predictor 
@pytest.mark.asyncio(scope="session")
async def test_predictive_xgboost_v1(rest_v1_client):
    service_name = "isvc-predictive-xgboost"
    predictor = V1beta1PredictorSpec(
        min_replicas=1,
        model=V1beta1ModelSpec(
            model_format=V1beta1ModelFormat(name="xgboost"),
            runtime="kserve-predictiveserver",
            storage_uri="gs://kfserving-examples/models/xgboost/1.5/model",
            resources=V1ResourceRequirements(
                requests={"cpu": "50m", "memory": "128Mi"},
                limits={"cpu": "100m", "memory": "256Mi"},
            ),
        ),
    )

    isvc = V1beta1InferenceService(
        api_version=constants.KSERVE_V1BETA1,
        kind=constants.KSERVE_KIND_INFERENCESERVICE,
        metadata=client.V1ObjectMeta(
            name=service_name,
            namespace=KSERVE_TEST_NAMESPACE,
            labels={
                constants.KSERVE_LABEL_NETWORKING_VISIBILITY: constants.KSERVE_LABEL_NETWORKING_VISIBILITY_EXPOSED,
            },
        ),
        spec=V1beta1InferenceServiceSpec(predictor=predictor),
    )

    kserve_client = KServeClient(
        config_file=os.environ.get("KUBECONFIG", "~/.kube/config")
    )
>   kserve_client.create(isvc)

predictor/test_predictive.py:108:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
inferenceservice = {'api_version': 'serving.kserve.io/v1beta1',
 'kind': 'InferenceService',
 'metadata': {'annotations': None,
 ...
 'worker_spec': None,
 'xgboost': None},
 'transformer': None},
 'status': None}
namespace = 'kserve-ci-e2e-test', watch = False, timeout_seconds = 600

    def create(
        self, inferenceservice, namespace=None, watch=False, timeout_seconds=600
    ):  # pylint:disable=inconsistent-return-statements
        """
        Create the inference service
        :param inferenceservice: inference service object
        :param namespace: defaults to current or default namespace
        :param watch: True to watch the created service until timeout elapsed or status is ready
        :param timeout_seconds: timeout seconds for watch, default to 600s
        :return: created inference service
        """

        version = inferenceservice.api_version.split("/")[1]

        if namespace is None:
            namespace = utils.get_isvc_namespace(inferenceservice)

        try:
>           outputs = self.api_instance.create_namespaced_custom_object(
                constants.KSERVE_GROUP,
                version,
                namespace,
                constants.KSERVE_PLURAL_INFERENCESERVICE,
                inferenceservice,
            )

../../python/kserve/kserve/api/kserve_client.py:145:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
group = 'serving.kserve.io', version = 'v1beta1'
namespace = 'kserve-ci-e2e-test', plural = 'inferenceservices'
body = {'api_version': 'serving.kserve.io/v1beta1',
 'kind': 'InferenceService',
 'metadata': {'annotations': None,
 ...
 'worker_spec': None,
 'xgboost': None},
 'transformer': None},
 'status': None}
kwargs = {'_return_http_data_only': True}

    def create_namespaced_custom_object(self, group, version, namespace, plural, body, **kwargs):  # noqa: E501
        """create_namespaced_custom_object  # noqa: E501

        Creates a namespace scoped Custom object  # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.create_namespaced_custom_object(group, version, namespace, plural, body, async_req=True)
        >>> result = thread.get()

        :param async_req bool: execute request asynchronously
        :param str group: The custom resource's group name (required)
        :param str version: The custom resource's version (required)
        :param str namespace: The custom resource's namespace (required)
        :param str plural: The custom resource's plural name. For TPRs this would be lowercase plural kind. (required)
        :param object body: The JSON schema of the Resource to create. (required)
        :param str pretty: If 'true', then the output is pretty printed.
        :param str dry_run: When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed
        :param str field_manager: fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint.
        :param str field_validation: fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. (optional)
        :param _preload_content: if False, the urllib3.HTTPResponse object will
                                 be returned without reading/decoding response
                                 data. Default is True.
        :param _request_timeout: timeout setting for this request. If one
                                 number provided, it will be total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
        :return: object
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
>       return self.create_namespaced_custom_object_with_http_info(group, version, namespace, plural, body, **kwargs)  # noqa: E501

../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api/custom_objects_api.py:231:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
group = 'serving.kserve.io', version = 'v1beta1'
namespace = 'kserve-ci-e2e-test', plural = 'inferenceservices'
body = {'api_version': 'serving.kserve.io/v1beta1',
 'kind': 'InferenceService',
 'metadata': {'annotations': None,
 ...
 'worker_spec': None,
 'xgboost': None},
 'transformer': None},
 'status': None}
kwargs = {'_return_http_data_only': True}
local_var_params = {'_return_http_data_only': True, 'all_params': ['group', 'version', 'namespace', 'plural', 'body', 'pretty', ...], 'au...
 'worker_spec': None,
 'xgboost': None},
 'transformer': None},
 'status': None}, ...}
all_params = ['group', 'version', 'namespace', 'plural', 'body', 'pretty', ...]
key = '_return_http_data_only', val = True, collection_formats = {}
path_params = {'group': 'serving.kserve.io', 'namespace': 'kserve-ci-e2e-test', 'plural': 'inferenceservices', 'version': 'v1beta1'}
query_params = []

    def create_namespaced_custom_object_with_http_info(self, group, version, namespace, plural, body, **kwargs):  # noqa: E501
        """create_namespaced_custom_object  # noqa: E501

        Creates a namespace scoped Custom object  # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.create_namespaced_custom_object_with_http_info(group, version, namespace, plural, body, async_req=True)
        >>> result = thread.get()

        :param async_req bool: execute request asynchronously
        :param str group: The custom resource's group name (required)
        :param str version: The custom resource's version (required)
        :param str namespace: The custom resource's namespace (required)
        :param str plural: The custom resource's plural name. For TPRs this would be lowercase plural kind. (required)
        :param object body: The JSON schema of the Resource to create. (required)
        :param str pretty: If 'true', then the output is pretty printed.
        :param str dry_run: When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed
        :param str field_manager: fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint.
        :param str field_validation: fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. (optional)
        :param _return_http_data_only: response data without head status code
                                       and headers
        :param _preload_content: if False, the urllib3.HTTPResponse object will
                                 be returned without reading/decoding response
                                 data. Default is True.
        :param _request_timeout: timeout setting for this request. If one
                                 number provided, it will be total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
        :return: tuple(object, status_code(int), headers(HTTPHeaderDict))
                 If the method is called asynchronously,
                 returns the request thread.
        """

        local_var_params = locals()

        all_params = [
            'group',
            'version',
            'namespace',
            'plural',
            'body',
            'pretty',
            'dry_run',
            'field_manager',
            'field_validation'
        ]
        all_params.extend(
            [
                'async_req',
                '_return_http_data_only',
                '_preload_content',
                '_request_timeout'
            ]
        )

        for key, val in six.iteritems(local_var_params['kwargs']):
            if key not in all_params:
                raise ApiTypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method create_namespaced_custom_object" % key
                )
            local_var_params[key] = val
        del local_var_params['kwargs']
        # verify the required parameter 'group' is set
        if self.api_client.client_side_validation and ('group' not in local_var_params or  # noqa: E501
                                                       local_var_params['group'] is None):  # noqa: E501
            raise ApiValueError("Missing the required parameter `group` when calling `create_namespaced_custom_object`")  # noqa: E501
        # verify the required parameter 'version' is set
        if self.api_client.client_side_validation and ('version' not in local_var_params or  # noqa: E501
                                                       local_var_params['version'] is None):  # noqa: E501
            raise ApiValueError("Missing the required parameter `version` when calling `create_namespaced_custom_object`")  # noqa: E501
        # verify the required parameter 'namespace' is set
        if self.api_client.client_side_validation and ('namespace' not in local_var_params or  # noqa: E501
                                                       local_var_params['namespace'] is None):  # noqa: E501
            raise ApiValueError("Missing the required parameter `namespace` when calling `create_namespaced_custom_object`")  # noqa: E501
        # verify the required parameter 'plural' is set
        if self.api_client.client_side_validation and ('plural' not in local_var_params or  # noqa: E501
                                                       local_var_params['plural'] is None):  # noqa: E501
            raise ApiValueError("Missing the required parameter `plural` when calling `create_namespaced_custom_object`")  # noqa: E501
        # verify the required parameter 'body' is set
        if self.api_client.client_side_validation and ('body' not in local_var_params or  # noqa: E501
                                                       local_var_params['body'] is None):  # noqa: E501
            raise ApiValueError("Missing the required parameter `body` when calling `create_namespaced_custom_object`")  # noqa: E501

        collection_formats = {}

        path_params = {}
        if 'group' in local_var_params:
            path_params['group'] = local_var_params['group']  # noqa: E501
        if 'version' in local_var_params:
            path_params['version'] = local_var_params['version']  # noqa: E501
        if 'namespace' in local_var_params:
            path_params['namespace'] = local_var_params['namespace']  # noqa: E501
        if 'plural' in local_var_params:
            path_params['plural'] = local_var_params['plural']  # noqa: E501

        query_params = []
        if 'pretty' in local_var_params and local_var_params['pretty'] is not None:  # noqa: E501
            query_params.append(('pretty', local_var_params['pretty']))  # noqa: E501
        if 'dry_run' in local_var_params and local_var_params['dry_run'] is not None:  # noqa: E501
            query_params.append(('dryRun', local_var_params['dry_run']))  # noqa: E501
        if 'field_manager' in local_var_params and local_var_params['field_manager'] is not None:  # noqa: E501
            query_params.append(('fieldManager', local_var_params['field_manager']))  # noqa: E501
        if 'field_validation' in local_var_params and local_var_params['field_validation'] is not None:  # noqa: E501
            query_params.append(('fieldValidation', local_var_params['field_validation']))  # noqa: E501

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None
        if 'body' in local_var_params:
            body_params = local_var_params['body']
        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.select_header_accept(
            ['application/json'])  # noqa: E501

        # Authentication setting
        auth_settings = ['BearerToken']  # noqa: E501

>       return self.api_client.call_api(
            '/apis/{group}/{version}/namespaces/{namespace}/{plural}', 'POST',
            path_params,
            query_params,
            header_params,
            body=body_params,
            post_params=form_params,
            files=local_var_files,
            response_type='object',  # noqa: E501
            auth_settings=auth_settings,
            async_req=local_var_params.get('async_req'),
            _return_http_data_only=local_var_params.get('_return_http_data_only'),  # noqa: E501
            _preload_content=local_var_params.get('_preload_content', True),
            _request_timeout=local_var_params.get('_request_timeout'),
            collection_formats=collection_formats)
../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api/custom_objects_api.py:354:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
resource_path = '/apis/{group}/{version}/namespaces/{namespace}/{plural}'
method = 'POST'
path_params = {'group': 'serving.kserve.io', 'namespace': 'kserve-ci-e2e-test', 'plural': 'inferenceservices', 'version': 'v1beta1'}
query_params = []
header_params = {'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
body = {'api_version': 'serving.kserve.io/v1beta1',
 'kind': 'InferenceService',
 'metadata': {'annotations': None,
 ...
 'worker_spec': None,
 'xgboost': None},
 'transformer': None},
 'status': None}
post_params = [], files = {}, response_type = 'object'
auth_settings = ['BearerToken'], async_req = None, _return_http_data_only = True
collection_formats = {}, _preload_content = True, _request_timeout = None
_host = None

    def call_api(self, resource_path, method,
                 path_params=None, query_params=None, header_params=None,
                 body=None, post_params=None, files=None,
                 response_type=None, auth_settings=None, async_req=None,
                 _return_http_data_only=None, collection_formats=None,
                 _preload_content=True, _request_timeout=None, _host=None):
        """Makes the HTTP request (synchronous) and returns deserialized data.

        To make an async_req request, set the async_req parameter.

        :param resource_path: Path to method endpoint.
        :param method: Method to call.
        :param path_params: Path parameters in the url.
        :param query_params: Query parameters in the url.
        :param header_params: Header parameters to be
            placed in the request header.
        :param body: Request body.
        :param post_params dict: Request post form parameters,
            for `application/x-www-form-urlencoded`, `multipart/form-data`.
        :param auth_settings list: Auth Settings names for the request.
        :param response: Response data type.
        :param files dict: key -> filename, value -> filepath,
            for `multipart/form-data`.
        :param async_req bool: execute request asynchronously
        :param _return_http_data_only: response data without head status code
                                       and headers
        :param collection_formats: dict of collection formats for path, query,
            header, and post parameters.
        :param _preload_content: if False, the urllib3.HTTPResponse object will
                                 be returned without reading/decoding response
                                 data. Default is True.
        :param _request_timeout: timeout setting for this request. If one
                                 number provided, it will be total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
        :return:
            If async_req parameter is True,
            the request will be called asynchronously.
            The method will return the request thread.
            If parameter async_req is False or missing,
            then the method will return the response directly.
        """
        if not async_req:
>           return self.__call_api(resource_path, method,
                                   path_params, query_params, header_params,
                                   body, post_params, files,
                                   response_type, auth_settings,
                                   _return_http_data_only, collection_formats,
                                   _preload_content, _request_timeout, _host)

../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api_client.py:348:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
resource_path = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
method = 'POST'
path_params = [('group', 'serving.kserve.io'), ('version', 'v1beta1'), ('namespace', 'kserve-ci-e2e-test'), ('plural', 'inferenceservices')]
query_params = []
header_params = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
body = {'apiVersion': 'serving.kserve.io/v1beta1', 'kind': 'InferenceService', 'metadata': {'labels': {'networking.kserve.io/...00m', 'memory': '256Mi'}, 'requests': {'cpu': '50m', 'memory': '128Mi'}}, 'runtime': 'kserve-predictiveserver', ...}}}}
post_params = [], files = {}, response_type = 'object'
auth_settings = ['BearerToken'], _return_http_data_only = True
collection_formats = {}, _preload_content = True, _request_timeout = None
_host = None

    def __call_api(
            self, resource_path, method, path_params=None,
            query_params=None, header_params=None, body=None, post_params=None,
            files=None, response_type=None, auth_settings=None,
            _return_http_data_only=None, collection_formats=None,
            _preload_content=True, _request_timeout=None, _host=None):

        config = self.configuration

        # header parameters
        header_params = header_params or {}
        header_params.update(self.default_headers)
        if self.cookie:
            header_params['Cookie'] = self.cookie
        if header_params:
            header_params = self.sanitize_for_serialization(header_params)
            header_params = dict(self.parameters_to_tuples(header_params,
                                                           collection_formats))

        # path parameters
        if path_params:
            path_params = self.sanitize_for_serialization(path_params)
            path_params = self.parameters_to_tuples(path_params,
                                                    collection_formats)
            for k, v in path_params:
                # specified safe chars, encode everything
                resource_path = resource_path.replace(
                    '{%s}' % k,
                    quote(str(v), safe=config.safe_chars_for_path_param)
                )

        # query parameters
        if query_params:
            query_params = self.sanitize_for_serialization(query_params)
            query_params = self.parameters_to_tuples(query_params,
                                                     collection_formats)

        # post parameters
        if post_params or files:
            post_params = post_params if post_params else []
            post_params = self.sanitize_for_serialization(post_params)
            post_params = self.parameters_to_tuples(post_params,
                                                    collection_formats)
            post_params.extend(self.files_parameters(files))

        # auth setting
        self.update_params_for_auth(header_params, query_params, auth_settings)

        # body
[e2e-predictor] if body: [e2e-predictor] body = self.sanitize_for_serialization(body) [e2e-predictor] [e2e-predictor] # request url [e2e-predictor] if _host is None: [e2e-predictor] url = self.configuration.host + resource_path [e2e-predictor] else: [e2e-predictor] # use server/host defined in path or operation instead [e2e-predictor] url = _host + resource_path [e2e-predictor] [e2e-predictor] # perform request and return response [e2e-predictor] > response_data = self.request( [e2e-predictor] method, url, query_params=query_params, headers=header_params, [e2e-predictor] post_params=post_params, body=body, [e2e-predictor] _preload_content=_preload_content, [e2e-predictor] _request_timeout=_request_timeout) [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api_client.py:180: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] method = 'POST' [e2e-predictor] url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices' [e2e-predictor] query_params = [] [e2e-predictor] headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'} [e2e-predictor] post_params = [] [e2e-predictor] body = {'apiVersion': 'serving.kserve.io/v1beta1', 'kind': 'InferenceService', 'metadata': {'labels': {'networking.kserve.io/...00m', 'memory': '256Mi'}, 'requests': {'cpu': '50m', 'memory': '128Mi'}}, 'runtime': 'kserve-predictiveserver', ...}}}} [e2e-predictor] _preload_content = True, _request_timeout = None [e2e-predictor] [e2e-predictor] def request(self, method, url, query_params=None, headers=None, [e2e-predictor] post_params=None, body=None, _preload_content=True, [e2e-predictor] _request_timeout=None): [e2e-predictor] """Makes the HTTP request using 
RESTClient.""" [e2e-predictor] if method == "GET": [e2e-predictor] return self.rest_client.GET(url, [e2e-predictor] query_params=query_params, [e2e-predictor] _preload_content=_preload_content, [e2e-predictor] _request_timeout=_request_timeout, [e2e-predictor] headers=headers) [e2e-predictor] elif method == "HEAD": [e2e-predictor] return self.rest_client.HEAD(url, [e2e-predictor] query_params=query_params, [e2e-predictor] _preload_content=_preload_content, [e2e-predictor] _request_timeout=_request_timeout, [e2e-predictor] headers=headers) [e2e-predictor] elif method == "OPTIONS": [e2e-predictor] return self.rest_client.OPTIONS(url, [e2e-predictor] query_params=query_params, [e2e-predictor] headers=headers, [e2e-predictor] _preload_content=_preload_content, [e2e-predictor] _request_timeout=_request_timeout) [e2e-predictor] elif method == "POST": [e2e-predictor] > return self.rest_client.POST(url, [e2e-predictor] query_params=query_params, [e2e-predictor] headers=headers, [e2e-predictor] post_params=post_params, [e2e-predictor] _preload_content=_preload_content, [e2e-predictor] _request_timeout=_request_timeout, [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api_client.py:391: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices' [e2e-predictor] headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'} [e2e-predictor] query_params = [], post_params = [] [e2e-predictor] body = {'apiVersion': 'serving.kserve.io/v1beta1', 'kind': 'InferenceService', 'metadata': {'labels': {'networking.kserve.io/...00m', 'memory': '256Mi'}, 'requests': {'cpu': '50m', 'memory': '128Mi'}}, 
'runtime': 'kserve-predictiveserver', ...}}}} [e2e-predictor] _preload_content = True, _request_timeout = None [e2e-predictor] [e2e-predictor] def POST(self, url, headers=None, query_params=None, post_params=None, [e2e-predictor] body=None, _preload_content=True, _request_timeout=None): [e2e-predictor] > return self.request("POST", url, [e2e-predictor] headers=headers, [e2e-predictor] query_params=query_params, [e2e-predictor] post_params=post_params, [e2e-predictor] _preload_content=_preload_content, [e2e-predictor] _request_timeout=_request_timeout, [e2e-predictor] body=body) [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/rest.py:279: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] method = 'POST' [e2e-predictor] url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices' [e2e-predictor] query_params = [] [e2e-predictor] headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'} [e2e-predictor] body = {'apiVersion': 'serving.kserve.io/v1beta1', 'kind': 'InferenceService', 'metadata': {'labels': {'networking.kserve.io/...00m', 'memory': '256Mi'}, 'requests': {'cpu': '50m', 'memory': '128Mi'}}, 'runtime': 'kserve-predictiveserver', ...}}}} [e2e-predictor] post_params = {}, _preload_content = True, _request_timeout = None [e2e-predictor] [e2e-predictor] def request(self, method, url, query_params=None, headers=None, [e2e-predictor] body=None, post_params=None, _preload_content=True, [e2e-predictor] _request_timeout=None): [e2e-predictor] """Perform requests. 
        :param method: http request method
        :param url: http request url
        :param query_params: query parameters in the url
        :param headers: http request headers
        :param body: request json body, for `application/json`
        :param post_params: request post parameters,
                            `application/x-www-form-urlencoded`
                            and `multipart/form-data`
        :param _preload_content: if False, the urllib3.HTTPResponse object will
                                 be returned without reading/decoding response
                                 data. Default is True.
        :param _request_timeout: timeout setting for this request. If one
                                 number provided, it will be total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
        """
        method = method.upper()
        assert method in ['GET', 'HEAD', 'DELETE', 'POST', 'PUT',
                          'PATCH', 'OPTIONS']

        if post_params and body:
            raise ApiValueError(
                "body parameter cannot be used with post_params parameter."
            )

        post_params = post_params or {}
        headers = headers or {}

        timeout = None
        if _request_timeout:
            if isinstance(_request_timeout, (int, ) if six.PY3 else (int, long)):  # noqa: E501,F821
                timeout = urllib3.Timeout(total=_request_timeout)
            elif (isinstance(_request_timeout, tuple) and
                  len(_request_timeout) == 2):
                timeout = urllib3.Timeout(
                    connect=_request_timeout[0], read=_request_timeout[1])

        if 'Content-Type' not in headers:
            headers['Content-Type'] = 'application/json'

        try:
            # For `POST`, `PUT`, `PATCH`, `OPTIONS`, `DELETE`
            if method in ['POST', 'PUT', 'PATCH', 'OPTIONS', 'DELETE']:
                if query_params:
                    url += '?' + urlencode(query_params)
                if (re.search('json', headers['Content-Type'], re.IGNORECASE) or
                        headers['Content-Type'] == 'application/apply-patch+yaml'):
                    if headers['Content-Type'] == 'application/json-patch+json':
                        if not isinstance(body, list):
                            headers['Content-Type'] = \
                                'application/strategic-merge-patch+json'
                    request_body = None
                    if body is not None:
                        request_body = json.dumps(body)
>                   r = self.pool_manager.request(
                        method, url,
                        body=request_body,
                        preload_content=_preload_content,
                        timeout=timeout,

../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/rest.py:172:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
method = 'POST'
url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io... "128Mi"}}, "runtime": "kserve-predictiveserver", "storageUri": "gs://kfserving-examples/models/xgboost/1.5/model"}}}}'
fields = None
headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
json = None
urlopen_kw = {'body': '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking....rver", "storageUri": "gs://kfserving-examples/models/xgboost/1.5/model"}}}}', 'preload_content': True, 'timeout': None}

    def request(
        self,
        method: str,
        url: str,
        body: _TYPE_BODY | None = None,
        fields: _TYPE_FIELDS | None = None,
        headers: typing.Mapping[str, str] | None = None,
        json: typing.Any | None = None,
        **urlopen_kw: typing.Any,
    ) -> BaseHTTPResponse:
        """
        Make a request using :meth:`urlopen` with the appropriate encoding of
        ``fields`` based on the ``method`` used.

        This is a convenience method that requires the least amount of manual
        effort. It can be used in most situations, while still having the
        option to drop down to more specific methods when necessary, such as
        :meth:`request_encode_url`, :meth:`request_encode_body`,
        or even the lowest level :meth:`urlopen`.

        :param method:
            HTTP request method (such as GET, POST, PUT, etc.)
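The `rest.py` frame above shows how the kubernetes client maps its `_request_timeout` argument onto `urllib3.Timeout`: a single number becomes a total request budget, while a 2-tuple is split into connect and read timeouts (in the failing run, `_request_timeout = None`, so `timeout=None` was passed through and urllib3's default applied). A minimal sketch of that mapping, using a hypothetical `normalize_timeout` helper that is not part of either library:

```python
# Sketch of the _request_timeout interpretation quoted from kubernetes'
# rest.py above. normalize_timeout is a hypothetical illustration helper:
# it returns a dict instead of constructing urllib3.Timeout directly.
def normalize_timeout(_request_timeout):
    if not _request_timeout:
        return None  # fall through to urllib3's default timeout
    if isinstance(_request_timeout, (int, float)):
        return {"total": _request_timeout}
    if isinstance(_request_timeout, tuple) and len(_request_timeout) == 2:
        connect, read = _request_timeout
        return {"connect": connect, "read": read}
    raise ValueError("expected a number or a (connect, read) tuple")

print(normalize_timeout(30))       # {'total': 30}
print(normalize_timeout((3, 27)))  # {'connect': 3, 'read': 27}
```

Passing a tuple is the way to fail fast on an unreachable API server: a short connect timeout bounds the connection attempt without capping a slow-but-successful read.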
        :param url:
            The URL to perform the request on.

        :param body:
            Data to send in the request body, either :class:`str`, :class:`bytes`,
            an iterable of :class:`str`/:class:`bytes`, or a file-like object.

        :param fields:
            Data to encode and send in the URL or request body, depending on ``method``.

        :param headers:
            Dictionary of custom headers to send, such as User-Agent,
            If-None-Match, etc. If None, pool headers are used. If provided,
            these headers completely replace any pool-specific headers.

        :param json:
            Data to encode and send as JSON with UTF-encoded in the request body.
            The ``"Content-Type"`` header will be set to ``"application/json"``
            unless specified otherwise.
        """
        method = method.upper()

        if json is not None and body is not None:
            raise TypeError(
                "request got values for both 'body' and 'json' parameters which are mutually exclusive"
            )

        if json is not None:
            if headers is None:
                headers = self.headers

            if not ("content-type" in map(str.lower, headers.keys())):
                headers = HTTPHeaderDict(headers)
                headers["Content-Type"] = "application/json"

            body = _json.dumps(json, separators=(",", ":"), ensure_ascii=False).encode(
                "utf-8"
            )

        if body is not None:
            urlopen_kw["body"] = body

        if method in self._encode_url_methods:
            return self.request_encode_url(
                method,
                url,
                fields=fields,  # type: ignore[arg-type]
                headers=headers,
                **urlopen_kw,
            )
        else:
>           return self.request_encode_body(
                method, url, fields=fields, headers=headers, **urlopen_kw
            )

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/_request_methods.py:143:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
method = 'POST'
url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
fields = None
headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
encode_multipart = True, multipart_boundary = None
urlopen_kw = {'body': '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking....rver", "storageUri": "gs://kfserving-examples/models/xgboost/1.5/model"}}}}', 'preload_content': True, 'timeout': None}
extra_kw = {'body': '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking....nt': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'}), 'preload_content': True, 'timeout': None}

    def request_encode_body(
        self,
        method: str,
        url: str,
        fields: _TYPE_FIELDS | None = None,
        headers: typing.Mapping[str, str] | None = None,
        encode_multipart: bool = True,
        multipart_boundary: str | None = None,
        **urlopen_kw: str,
    ) -> BaseHTTPResponse:
        """
        Make a request using :meth:`urlopen` with the ``fields`` encoded in
        the body. This is useful for request methods like POST, PUT, PATCH, etc.

        When ``encode_multipart=True`` (default), then
        :func:`urllib3.encode_multipart_formdata` is used to encode
        the payload with the appropriate content type. Otherwise
        :func:`urllib.parse.urlencode` is used with the
        'application/x-www-form-urlencoded' content type.

        Multipart encoding must be used when posting files, and it's reasonably
        safe to use it in other times too. However, it may break request
        signing, such as with OAuth.

        Supports an optional ``fields`` parameter of key/value strings AND
        key/filetuple. A filetuple is a (filename, data, MIME type) tuple where
        the MIME type is optional. For example::

            fields = {
                'foo': 'bar',
                'fakefile': ('foofile.txt', 'contents of foofile'),
                'realfile': ('barfile.txt', open('realfile').read()),
                'typedfile': ('bazfile.bin', open('bazfile').read(),
                              'image/jpeg'),
                'nonamefile': 'contents of nonamefile field',
            }

        When uploading a file, providing a filename (the first parameter of the
        tuple) is optional but recommended to best mimic behavior of browsers.

        Note that if ``headers`` are supplied, the 'Content-Type' header will
        be overwritten because it depends on the dynamic random boundary string
        which is used to compose the body of the request. The random boundary
        string can be explicitly set with the ``multipart_boundary`` parameter.
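The `request()` frame quoted in the traceback rejects passing both `body` and `json`, and when `json=` is used it only defaults `Content-Type: application/json` if no content-type header is already present (checked case-insensitively), then serializes compactly. A sketch of just that branch with stdlib `json`; `encode_json_request` is a hypothetical helper, not a urllib3 API:

```python
import json

# Sketch of the json-handling branch of urllib3's request() shown above:
# default the Content-Type only when absent (case-insensitive check), then
# serialize compactly to UTF-8 bytes. encode_json_request is hypothetical.
def encode_json_request(payload, headers=None):
    headers = dict(headers or {})
    if "content-type" not in (k.lower() for k in headers):
        headers["Content-Type"] = "application/json"
    body = json.dumps(payload, separators=(",", ":"), ensure_ascii=False).encode("utf-8")
    return headers, body

headers, body = encode_json_request({"kind": "InferenceService"})
print(headers)  # {'Content-Type': 'application/json'}
print(body)     # b'{"kind":"InferenceService"}'
```

The case-insensitive check matters for Kubernetes clients: callers sending patch content types such as `application/merge-patch+json` must not have them silently replaced.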
        :param method:
            HTTP request method (such as GET, POST, PUT, etc.)

        :param url:
            The URL to perform the request on.

        :param fields:
            Data to encode and send in the request body.

        :param headers:
            Dictionary of custom headers to send, such as User-Agent,
            If-None-Match, etc. If None, pool headers are used. If provided,
            these headers completely replace any pool-specific headers.

        :param encode_multipart:
            If True, encode the ``fields`` using the multipart/form-data MIME
            format.

        :param multipart_boundary:
            If not specified, then a random boundary will be generated using
            :func:`urllib3.filepost.choose_boundary`.
        """
        if headers is None:
            headers = self.headers

        extra_kw: dict[str, typing.Any] = {"headers": HTTPHeaderDict(headers)}
        body: bytes | str

        if fields:
            if "body" in urlopen_kw:
                raise TypeError(
                    "request got values for both 'fields' and 'body', can only specify one."
                )

            if encode_multipart:
                body, content_type = encode_multipart_formdata(
                    fields, boundary=multipart_boundary
                )
            else:
                body, content_type = (
                    urlencode(fields),  # type: ignore[arg-type]
                    "application/x-www-form-urlencoded",
                )

            extra_kw["body"] = body
            extra_kw["headers"].setdefault("Content-Type", content_type)

        extra_kw.update(urlopen_kw)

>       return self.urlopen(method, url, **extra_kw)

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/_request_methods.py:278:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
method = 'POST'
url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
redirect = True
kw = {'assert_same_host': False, 'body': '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata...', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'}), 'preload_content': True, ...}
u = Url(scheme='https', auth=None, host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', p...443, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices', query=None, fragment=None)
conn =

    def urlopen(  # type: ignore[override]
        self, method: str, url: str, redirect: bool = True, **kw: typing.Any
    ) -> BaseHTTPResponse:
        """
        Same as :meth:`urllib3.HTTPConnectionPool.urlopen`
        with custom cross-host redirect logic and only sends the request-uri
        portion of the ``url``.

        The given ``url`` parameter must be absolute, such that an appropriate
        :class:`urllib3.connectionpool.ConnectionPool` can be chosen for it.
        """
        u = parse_url(url)

        if u.scheme is None:
            warnings.warn(
                "URLs without a scheme (ie 'https://') are deprecated and will raise an error "
                "in a future version of urllib3. To avoid this DeprecationWarning ensure all URLs "
                "start with 'https://' or 'http://'. Read more in this issue: "
                "https://github.com/urllib3/urllib3/issues/2920",
                category=DeprecationWarning,
                stacklevel=2,
            )

        conn = self.connection_from_host(u.host, port=u.port, scheme=u.scheme)

        kw["assert_same_host"] = False
        kw["redirect"] = False

        if "headers" not in kw:
            kw["headers"] = self.headers

        if self._proxy_requires_url_absolute_form(u):
            response = conn.urlopen(method, url, **kw)
        else:
>           response = conn.urlopen(method, u.request_uri, **kw)

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/poolmanager.py:457:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
method = 'POST'
url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io... "128Mi"}}, "runtime": "kserve-predictiveserver", "storageUri": "gs://kfserving-examples/models/xgboost/1.5/model"}}}}'
headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'})
retries = Retry(total=2, connect=None, read=None, redirect=None, status=None)
redirect = False, assert_same_host = False, timeout = None, pool_timeout = None
release_conn = True, chunked = False, body_pos = None, preload_content = True
decode_content = True, response_kw = {}
parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices', query=None, fragment=None)
destination_scheme = None, conn = None, release_this_conn = True
http_tunnel_required = False
err = NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.c...a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")
clean_exit = False

    def urlopen(  # type: ignore[override]
        self,
        method: str,
        url: str,
        body: _TYPE_BODY | None = None,
        headers: typing.Mapping[str, str] | None = None,
        retries: Retry | bool | int | None = None,
        redirect: bool = True,
        assert_same_host: bool = True,
        timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
        pool_timeout: int | None = None,
        release_conn: bool | None = None,
        chunked: bool = False,
        body_pos: _TYPE_BODY_POSITION | None = None,
        preload_content: bool = True,
        decode_content: bool = True,
        **response_kw: typing.Any,
    ) -> BaseHTTPResponse:
        """
        Get a connection from the pool and perform an HTTP request. This is the
        lowest level call for making a request, so you'll need to specify all
        the raw details.

        .. note::

            More commonly, it's appropriate to use a convenience method
            such as :meth:`request`.

        .. note::

            `release_conn` will only behave as expected if
            `preload_content=False` because we want to make
            `preload_content=False` the default behaviour someday soon without
            breaking backwards compatibility.

        :param method:
            HTTP request method (such as GET, POST, PUT, etc.)

        :param url:
            The URL to perform the request on.

        :param body:
            Data to send in the request body, either :class:`str`, :class:`bytes`,
            an iterable of :class:`str`/:class:`bytes`, or a file-like object.

        :param headers:
            Dictionary of custom headers to send, such as User-Agent,
            If-None-Match, etc. If None, pool headers are used. If provided,
            these headers completely replace any pool-specific headers.

        :param retries:
            Configure the number of retries to allow before raising a
            :class:`~urllib3.exceptions.MaxRetryError` exception.

            If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a
            :class:`~urllib3.util.retry.Retry` object for fine-grained control
            over different types of retries.
            Pass an integer number to retry connection errors that many times,
            but no other types of errors. Pass zero to never retry.

            If ``False``, then retries are disabled and any exception is raised
            immediately. Also, instead of raising a MaxRetryError on redirects,
            the redirect response will be returned.

        :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.

        :param redirect:
            If True, automatically handle redirects (status codes 301, 302,
            303, 307, 308). Each redirect counts as a retry. Disabling retries
            will disable redirect, too.

        :param assert_same_host:
            If ``True``, will make sure that the host of the pool requests is
            consistent else will raise HostChangedError. When ``False``, you can
            use the pool on an HTTP proxy and request foreign hosts.

        :param timeout:
            If specified, overrides the default timeout for this one
            request. It may be a float (in seconds) or an instance of
            :class:`urllib3.util.Timeout`.

        :param pool_timeout:
            If set and the pool is set to block=True, then this method will
            block for ``pool_timeout`` seconds and raise EmptyPoolError if no
            connection is available within the time period.

        :param bool preload_content:
            If True, the response's body will be preloaded into memory.

        :param bool decode_content:
            If True, will attempt to decode the body based on the
            'content-encoding' header.

        :param release_conn:
            If False, then the urlopen call will not release the connection
            back into the pool once a response is received (but will release if
            you read the entire contents of the response such as when
            `preload_content=True`). This is useful if you're not preloading
            the response's content immediately. You will need to call
            ``r.release_conn()`` on the response ``r`` to return the connection
            back into the pool. If None, it takes the value of ``preload_content``
            which defaults to ``True``.

        :param bool chunked:
            If True, urllib3 will send the body using chunked transfer
            encoding. Otherwise, urllib3 will send the body using the standard
            content-length form. Defaults to False.

        :param int body_pos:
            Position to seek to in file-like body in the event of a retry or
            redirect. Typically this won't need to be set because urllib3 will
            auto-populate the value when needed.
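The locals in the frames above reveal the underlying failure: each attempt ends in `NameResolutionError` (`[Errno -2] Name or service not known`) for the cluster's ELB hostname, so DNS for the test cluster's API endpoint no longer resolves (consistent with the load balancer being gone), and the `Retry` object decrements from `total=2` toward `MaxRetryError`. A quick pre-flight resolution check separates this infrastructure failure from an application-level one; a sketch with the stdlib `socket` module, where `resolves` is a hypothetical diagnostic helper (not part of kserve or urllib3):

```python
import socket

# Pre-flight DNS check: urllib3's NameResolutionError wraps exactly this
# kind of socket.gaierror ([Errno -2] Name or service not known).
# `resolves` is a hypothetical helper for diagnosing the failure mode above.
def resolves(host, port=443):
    try:
        socket.getaddrinfo(host, port)
        return True
    except socket.gaierror:
        return False

# The ELB hostname from the traceback would return False once the cluster
# (and its AWS load balancer DNS record) has been torn down.
print(resolves("a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com"))
```

Running such a check before the e2e suite (or in the failure handler) would turn a long retry-laden traceback into a one-line "API server hostname does not resolve" diagnosis.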
        """
        parsed_url = parse_url(url)
        destination_scheme = parsed_url.scheme

        if headers is None:
            headers = self.headers

        if not isinstance(retries, Retry):
            retries = Retry.from_int(retries, redirect=redirect, default=self.retries)

        if release_conn is None:
            release_conn = preload_content

        # Check host
        if assert_same_host and not self.is_same_host(url):
            raise HostChangedError(self, url, retries)

        # Ensure that the URL we're connecting to is properly encoded
        if url.startswith("/"):
            url = to_str(_encode_target(url))
        else:
            url = to_str(parsed_url.url)

        conn = None

        # Track whether `conn` needs to be released before
        # returning/raising/recursing. Update this variable if necessary, and
        # leave `release_conn` constant throughout the function. That way, if
        # the function recurses, the original value of `release_conn` will be
        # passed down into the recursive call, and its value will be respected.
        #
        # See issue #651 [1] for details.
        #
        # [1]
        release_this_conn = release_conn

        http_tunnel_required = connection_requires_http_tunnel(
            self.proxy, self.proxy_config, destination_scheme
        )

        # Merge the proxy headers. Only done when not using HTTP CONNECT. We
        # have to copy the headers dict so we can safely change it without those
        # changes being reflected in anyone else's copy.
        if not http_tunnel_required:
            headers = headers.copy()  # type: ignore[attr-defined]
            headers.update(self.proxy_headers)  # type: ignore[union-attr]

        # Must keep the exception bound to a separate variable or else Python 3
        # complains about UnboundLocalError.
        err = None

        # Keep track of whether we cleanly exited the except block. This
        # ensures we do proper cleanup in finally.
        clean_exit = False

        # Rewind body position, if needed. Record current position
        # for future rewinds in the event of a redirect/retry.
        body_pos = set_file_position(body, body_pos)

        try:
            # Request a connection from the queue.
            timeout_obj = self._get_timeout(timeout)
            conn = self._get_conn(timeout=pool_timeout)

            conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]

            # Is this a closed/new connection that requires CONNECT tunnelling?
            if self.proxy is not None and http_tunnel_required and conn.is_closed:
                try:
                    self._prepare_proxy(conn)
                except (BaseSSLError, OSError, SocketTimeout) as e:
                    self._raise_timeout(
                        err=e, url=self.proxy.url, timeout_value=conn.timeout
                    )
                    raise

            # If we're going to release the connection in ``finally:``, then
            # the response doesn't need to know about the connection. Otherwise
            # it will also try to release it and we'll have a double-release
            # mess.
            response_conn = conn if not release_conn else None

            # Make the request on the HTTPConnection object
            response = self._make_request(
                conn,
                method,
                url,
                timeout=timeout_obj,
                body=body,
                headers=headers,
                chunked=chunked,
                retries=retries,
                response_conn=response_conn,
                preload_content=preload_content,
                decode_content=decode_content,
                **response_kw,
            )

            # Everything went great!
            clean_exit = True

        except EmptyPoolError:
            # Didn't get a connection from the pool, no need to clean up
            clean_exit = True
            release_this_conn = False
            raise

        except (
            TimeoutError,
            HTTPException,
            OSError,
            ProtocolError,
            BaseSSLError,
            SSLError,
            CertificateError,
            ProxyError,
        ) as e:
            # Discard the connection for these exceptions. It will be
            # replaced during the next _get_conn() call.
            clean_exit = False
            new_e: Exception = e
            if isinstance(e, (BaseSSLError, CertificateError)):
                new_e = SSLError(e)
            if isinstance(
                new_e,
                (
                    OSError,
                    NewConnectionError,
                    TimeoutError,
                    SSLError,
                    HTTPException,
                ),
            ) and (conn and conn.proxy and not conn.has_connected_to_proxy):
                new_e = _wrap_proxy_error(new_e, conn.proxy.scheme)
            elif isinstance(new_e, (OSError, HTTPException)):
                new_e = ProtocolError("Connection aborted.", new_e)

            retries = retries.increment(
                method, url, error=new_e, _pool=self, _stacktrace=sys.exc_info()[2]
            )
            retries.sleep()

            # Keep track of the error for the retry warning.
            err = e

        finally:
            if not clean_exit:
                # We hit some kind of exception, handled or otherwise. We need
                # to throw the connection away unless explicitly told not to.
                # Close the connection, set the variable to None, and make sure
                # we put the None back in the pool to avoid leaking it.
                if conn:
                    conn.close()
                    conn = None
                release_this_conn = True

            if release_this_conn:
                # Put the connection back to be reused. If the connection is
                # expired then it will be None, which will get replaced with a
                # fresh connection during _get_conn.
                self._put_conn(conn)

        if not conn:
            # Try again
            log.warning(
                "Retrying (%r) after connection broken by '%r': %s", retries, err, url
            )
>           return self.urlopen(
                method,
                url,
                body,
                headers,
                retries,
                redirect,
                assert_same_host,
                timeout=timeout,
                pool_timeout=pool_timeout,
                release_conn=release_conn,
                chunked=chunked,
                body_pos=body_pos,
                preload_content=preload_content,
                decode_content=decode_content,
                **response_kw,
            )

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:871:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
method = 'POST'
url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io... "128Mi"}}, "runtime": "kserve-predictiveserver", "storageUri": "gs://kfserving-examples/models/xgboost/1.5/model"}}}}'
headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'})
retries = Retry(total=1, connect=None, read=None, redirect=None, status=None)
redirect = False, assert_same_host = False, timeout = None, pool_timeout = None
release_conn = True, chunked = False, body_pos = None, preload_content = True
decode_content = True, response_kw = {}
parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices', query=None, fragment=None)
destination_scheme = None, conn = None, release_this_conn = True
http_tunnel_required = False
err = NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.c...a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")
clean_exit = False

    def urlopen(  # type: ignore[override]
        self,
        method: str,
        url: str,
        body: _TYPE_BODY | None = None,
        headers: typing.Mapping[str, str] | None = None,
        retries: Retry | bool | int | None = None,
        redirect: bool = True,
        assert_same_host: bool = True,
        timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
        pool_timeout: int | None = None,
        release_conn: bool | None = None,
        chunked: bool = False,
        body_pos: _TYPE_BODY_POSITION | None = None,
        preload_content: bool = True,
        decode_content: bool = True,
        **response_kw: typing.Any,
    ) -> BaseHTTPResponse:
        """
        Get a connection from the pool and perform an HTTP request. This is the
        lowest level call for making a request, so you'll need to specify all
        the raw details.

        .. note::

           More commonly, it's appropriate to use a convenience method
           such as :meth:`request`.

        .. note::

           `release_conn` will only behave as expected if
           `preload_content=False` because we want to make
           `preload_content=False` the default behaviour someday soon without
           breaking backwards compatibility.

        :param method:
            HTTP request method (such as GET, POST, PUT, etc.)

        :param url:
            The URL to perform the request on.

        :param body:
            Data to send in the request body, either :class:`str`, :class:`bytes`,
            an iterable of :class:`str`/:class:`bytes`, or a file-like object.

        :param headers:
            Dictionary of custom headers to send, such as User-Agent,
            If-None-Match, etc. If None, pool headers are used. If provided,
            these headers completely replace any pool-specific headers.

        :param retries:
            Configure the number of retries to allow before raising a
            :class:`~urllib3.exceptions.MaxRetryError` exception.

            If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a
            :class:`~urllib3.util.retry.Retry` object for fine-grained control
            over different types of retries.
            Pass an integer number to retry connection errors that many times,
            but no other types of errors. Pass zero to never retry.

            If ``False``, then retries are disabled and any exception is raised
            immediately. Also, instead of raising a MaxRetryError on redirects,
            the redirect response will be returned.

        :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.

        :param redirect:
            If True, automatically handle redirects (status codes 301, 302,
            303, 307, 308). Each redirect counts as a retry. Disabling retries
            will disable redirect, too.

        :param assert_same_host:
            If ``True``, will make sure that the host of the pool requests is
            consistent else will raise HostChangedError. When ``False``, you can
            use the pool on an HTTP proxy and request foreign hosts.

        :param timeout:
            If specified, overrides the default timeout for this one
            request. It may be a float (in seconds) or an instance of
            :class:`urllib3.util.Timeout`.

        :param pool_timeout:
            If set and the pool is set to block=True, then this method will
            block for ``pool_timeout`` seconds and raise EmptyPoolError if no
            connection is available within the time period.

        :param bool preload_content:
            If True, the response's body will be preloaded into memory.

        :param bool decode_content:
            If True, will attempt to decode the body based on the
            'content-encoding' header.

        :param release_conn:
            If False, then the urlopen call will not release the connection
            back into the pool once a response is received (but will release if
            you read the entire contents of the response such as when
            `preload_content=True`). This is useful if you're not preloading
            the response's content immediately. You will need to call
            ``r.release_conn()`` on the response ``r`` to return the connection
            back into the pool. If None, it takes the value of ``preload_content``
            which defaults to ``True``.

        :param bool chunked:
            If True, urllib3 will send the body using chunked transfer
            encoding. Otherwise, urllib3 will send the body using the standard
            content-length form. Defaults to False.

        :param int body_pos:
            Position to seek to in file-like body in the event of a retry or
            redirect. Typically this won't need to be set because urllib3 will
            auto-populate the value when needed.
        """
        parsed_url = parse_url(url)
        destination_scheme = parsed_url.scheme

        if headers is None:
            headers = self.headers

        if not isinstance(retries, Retry):
            retries = Retry.from_int(retries, redirect=redirect, default=self.retries)

        if release_conn is None:
            release_conn = preload_content

        # Check host
        if assert_same_host and not self.is_same_host(url):
            raise HostChangedError(self, url, retries)

        # Ensure that the URL we're connecting to is properly encoded
        if url.startswith("/"):
            url = to_str(_encode_target(url))
        else:
            url = to_str(parsed_url.url)

        conn = None

        # Track whether `conn` needs to be released before
        # returning/raising/recursing. Update this variable if necessary, and
        # leave `release_conn` constant throughout the function. That way, if
        # the function recurses, the original value of `release_conn` will be
        # passed down into the recursive call, and its value will be respected.
        #
        # See issue #651 [1] for details.
        #
        # [1]
        release_this_conn = release_conn

        http_tunnel_required = connection_requires_http_tunnel(
            self.proxy, self.proxy_config, destination_scheme
        )

        # Merge the proxy headers. Only done when not using HTTP CONNECT. We
        # have to copy the headers dict so we can safely change it without those
        # changes being reflected in anyone else's copy.
        if not http_tunnel_required:
            headers = headers.copy()  # type: ignore[attr-defined]
            headers.update(self.proxy_headers)  # type: ignore[union-attr]

        # Must keep the exception bound to a separate variable or else Python 3
        # complains about UnboundLocalError.
        err = None

        # Keep track of whether we cleanly exited the except block. This
        # ensures we do proper cleanup in finally.
        clean_exit = False

        # Rewind body position, if needed. Record current position
        # for future rewinds in the event of a redirect/retry.
        body_pos = set_file_position(body, body_pos)

        try:
            # Request a connection from the queue.
            timeout_obj = self._get_timeout(timeout)
            conn = self._get_conn(timeout=pool_timeout)

            conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]

            # Is this a closed/new connection that requires CONNECT tunnelling?
            if self.proxy is not None and http_tunnel_required and conn.is_closed:
                try:
                    self._prepare_proxy(conn)
                except (BaseSSLError, OSError, SocketTimeout) as e:
                    self._raise_timeout(
                        err=e, url=self.proxy.url, timeout_value=conn.timeout
                    )
                    raise

            # If we're going to release the connection in ``finally:``, then
            # the response doesn't need to know about the connection. Otherwise
            # it will also try to release it and we'll have a double-release
            # mess.
            response_conn = conn if not release_conn else None

            # Make the request on the HTTPConnection object
            response = self._make_request(
                conn,
                method,
                url,
                timeout=timeout_obj,
                body=body,
                headers=headers,
                chunked=chunked,
                retries=retries,
                response_conn=response_conn,
                preload_content=preload_content,
                decode_content=decode_content,
                **response_kw,
            )

            # Everything went great!
            clean_exit = True

        except EmptyPoolError:
            # Didn't get a connection from the pool, no need to clean up
            clean_exit = True
            release_this_conn = False
            raise

        except (
            TimeoutError,
            HTTPException,
            OSError,
            ProtocolError,
            BaseSSLError,
            SSLError,
            CertificateError,
            ProxyError,
        ) as e:
            # Discard the connection for these exceptions. It will be
            # replaced during the next _get_conn() call.
            clean_exit = False
            new_e: Exception = e
            if isinstance(e, (BaseSSLError, CertificateError)):
                new_e = SSLError(e)
            if isinstance(
                new_e,
                (
                    OSError,
                    NewConnectionError,
                    TimeoutError,
                    SSLError,
                    HTTPException,
                ),
            ) and (conn and conn.proxy and not conn.has_connected_to_proxy):
                new_e = _wrap_proxy_error(new_e, conn.proxy.scheme)
            elif isinstance(new_e, (OSError, HTTPException)):
                new_e = ProtocolError("Connection aborted.", new_e)

            retries = retries.increment(
                method, url, error=new_e, _pool=self, _stacktrace=sys.exc_info()[2]
            )
            retries.sleep()

            # Keep track of the error for the retry warning.
            err = e

        finally:
            if not clean_exit:
                # We hit some kind of exception, handled or otherwise. We need
                # to throw the connection away unless explicitly told not to.
                # Close the connection, set the variable to None, and make sure
                # we put the None back in the pool to avoid leaking it.
                if conn:
                    conn.close()
                    conn = None
                release_this_conn = True

            if release_this_conn:
                # Put the connection back to be reused. If the connection is
                # expired then it will be None, which will get replaced with a
                # fresh connection during _get_conn.
                self._put_conn(conn)

        if not conn:
            # Try again
            log.warning(
                "Retrying (%r) after connection broken by '%r': %s", retries, err, url
            )
>           return self.urlopen(
                method,
                url,
                body,
                headers,
                retries,
                redirect,
                assert_same_host,
                timeout=timeout,
                pool_timeout=pool_timeout,
                release_conn=release_conn,
                chunked=chunked,
                body_pos=body_pos,
                preload_content=preload_content,
                decode_content=decode_content,
                **response_kw,
            )

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:871
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
method = 'POST'
url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io... "128Mi"}}, "runtime": "kserve-predictiveserver", "storageUri": "gs://kfserving-examples/models/xgboost/1.5/model"}}}}'
headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'})
retries = Retry(total=0, connect=None, read=None, redirect=None, status=None)
redirect = False, assert_same_host = False, timeout = None, pool_timeout = None
release_conn = True, chunked = False, body_pos = None, preload_content = True
decode_content = True, response_kw = {}
parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices', query=None, fragment=None)
destination_scheme = None, conn = None, release_this_conn = True
http_tunnel_required = False
err = NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.c...a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")
clean_exit = False

    def urlopen(  # type: ignore[override]
        self,
        method: str,
        url: str,
        body: _TYPE_BODY | None = None,
        headers: typing.Mapping[str, str] | None = None,
        retries: Retry | bool | int | None = None,
        redirect: bool = True,
        assert_same_host: bool = True,
        timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
        pool_timeout: int | None = None,
        release_conn: bool | None = None,
        chunked: bool = False,
        body_pos: _TYPE_BODY_POSITION | None = None,
        preload_content: bool = True,
        decode_content: bool = True,
        [docstring and body of urlopen repeated verbatim; identical to the listing above]
                self._put_conn(conn)

        if not conn:
            # Try again
            log.warning(
                "Retrying (%r) after connection broken by '%r': %s", retries, err, url
            )
>           return self.urlopen(
                method,
                url,
                body,
                headers,
                retries,
                redirect,
                assert_same_host,
                timeout=timeout,
                pool_timeout=pool_timeout,
                release_conn=release_conn,
                chunked=chunked,
                body_pos=body_pos,
                preload_content=preload_content,
                decode_content=decode_content,
                **response_kw,
            )

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:871
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
method = 'POST'
url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io... "128Mi"}}, "runtime": "kserve-predictiveserver", "storageUri": "gs://kfserving-examples/models/xgboost/1.5/model"}}}}'
headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'})
retries = Retry(total=0, connect=None, read=None, redirect=None, status=None)
redirect = False, assert_same_host = False, timeout = None, pool_timeout = None
release_conn = True, chunked = False, body_pos = None, preload_content = True
decode_content = True, response_kw = {}
parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices', query=None, fragment=None)
destination_scheme = None, conn = None, release_this_conn = True
http_tunnel_required = False, err = None, clean_exit = False

    def urlopen(  # type: ignore[override]
        self,
        method: str,
        url: str,
        body: _TYPE_BODY | None = None,
        headers: typing.Mapping[str, str] | None = None,
        retries: Retry | bool | int | None = None,
        redirect: bool = True,
        assert_same_host: bool = True,
        timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
        pool_timeout: int | None = None,
        release_conn: bool | None = None,
        chunked: bool = False,
        body_pos: _TYPE_BODY_POSITION | None = None,
        preload_content: bool = True,
        decode_content: bool = True,
        **response_kw: typing.Any,
    ) -> BaseHTTPResponse:
        """
        Get a connection from the pool and perform an HTTP request.

        [docstring repeated verbatim; identical to the listing above]
[e2e-predictor] """ [e2e-predictor] parsed_url = parse_url(url) [e2e-predictor] destination_scheme = parsed_url.scheme [e2e-predictor] [e2e-predictor] if headers is None: [e2e-predictor] headers = self.headers [e2e-predictor] [e2e-predictor] if not isinstance(retries, Retry): [e2e-predictor] retries = Retry.from_int(retries, redirect=redirect, default=self.retries) [e2e-predictor] [e2e-predictor] if release_conn is None: [e2e-predictor] release_conn = preload_content [e2e-predictor] [e2e-predictor] # Check host [e2e-predictor] if assert_same_host and not self.is_same_host(url): [e2e-predictor] raise HostChangedError(self, url, retries) [e2e-predictor] [e2e-predictor] # Ensure that the URL we're connecting to is properly encoded [e2e-predictor] if url.startswith("/"): [e2e-predictor] url = to_str(_encode_target(url)) [e2e-predictor] else: [e2e-predictor] url = to_str(parsed_url.url) [e2e-predictor] [e2e-predictor] conn = None [e2e-predictor] [e2e-predictor] # Track whether `conn` needs to be released before [e2e-predictor] # returning/raising/recursing. Update this variable if necessary, and [e2e-predictor] # leave `release_conn` constant throughout the function. That way, if [e2e-predictor] # the function recurses, the original value of `release_conn` will be [e2e-predictor] # passed down into the recursive call, and its value will be respected. [e2e-predictor] # [e2e-predictor] # See issue #651 [1] for details. [e2e-predictor] # [e2e-predictor] # [1] [e2e-predictor] release_this_conn = release_conn [e2e-predictor] [e2e-predictor] http_tunnel_required = connection_requires_http_tunnel( [e2e-predictor] self.proxy, self.proxy_config, destination_scheme [e2e-predictor] ) [e2e-predictor] [e2e-predictor] # Merge the proxy headers. Only done when not using HTTP CONNECT. We [e2e-predictor] # have to copy the headers dict so we can safely change it without those [e2e-predictor] # changes being reflected in anyone else's copy. 
[e2e-predictor] if not http_tunnel_required: [e2e-predictor] headers = headers.copy() # type: ignore[attr-defined] [e2e-predictor] headers.update(self.proxy_headers) # type: ignore[union-attr] [e2e-predictor] [e2e-predictor] # Must keep the exception bound to a separate variable or else Python 3 [e2e-predictor] # complains about UnboundLocalError. [e2e-predictor] err = None [e2e-predictor] [e2e-predictor] # Keep track of whether we cleanly exited the except block. This [e2e-predictor] # ensures we do proper cleanup in finally. [e2e-predictor] clean_exit = False [e2e-predictor] [e2e-predictor] # Rewind body position, if needed. Record current position [e2e-predictor] # for future rewinds in the event of a redirect/retry. [e2e-predictor] body_pos = set_file_position(body, body_pos) [e2e-predictor] [e2e-predictor] try: [e2e-predictor] # Request a connection from the queue. [e2e-predictor] timeout_obj = self._get_timeout(timeout) [e2e-predictor] conn = self._get_conn(timeout=pool_timeout) [e2e-predictor] [e2e-predictor] conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] [e2e-predictor] [e2e-predictor] # Is this a closed/new connection that requires CONNECT tunnelling? [e2e-predictor] if self.proxy is not None and http_tunnel_required and conn.is_closed: [e2e-predictor] try: [e2e-predictor] self._prepare_proxy(conn) [e2e-predictor] except (BaseSSLError, OSError, SocketTimeout) as e: [e2e-predictor] self._raise_timeout( [e2e-predictor] err=e, url=self.proxy.url, timeout_value=conn.timeout [e2e-predictor] ) [e2e-predictor] raise [e2e-predictor] [e2e-predictor] # If we're going to release the connection in ``finally:``, then [e2e-predictor] # the response doesn't need to know about the connection. Otherwise [e2e-predictor] # it will also try to release it and we'll have a double-release [e2e-predictor] # mess. 
[e2e-predictor] response_conn = conn if not release_conn else None [e2e-predictor] [e2e-predictor] # Make the request on the HTTPConnection object [e2e-predictor] response = self._make_request( [e2e-predictor] conn, [e2e-predictor] method, [e2e-predictor] url, [e2e-predictor] timeout=timeout_obj, [e2e-predictor] body=body, [e2e-predictor] headers=headers, [e2e-predictor] chunked=chunked, [e2e-predictor] retries=retries, [e2e-predictor] response_conn=response_conn, [e2e-predictor] preload_content=preload_content, [e2e-predictor] decode_content=decode_content, [e2e-predictor] **response_kw, [e2e-predictor] ) [e2e-predictor] [e2e-predictor] # Everything went great! [e2e-predictor] clean_exit = True [e2e-predictor] [e2e-predictor] except EmptyPoolError: [e2e-predictor] # Didn't get a connection from the pool, no need to clean up [e2e-predictor] clean_exit = True [e2e-predictor] release_this_conn = False [e2e-predictor] raise [e2e-predictor] [e2e-predictor] except ( [e2e-predictor] TimeoutError, [e2e-predictor] HTTPException, [e2e-predictor] OSError, [e2e-predictor] ProtocolError, [e2e-predictor] BaseSSLError, [e2e-predictor] SSLError, [e2e-predictor] CertificateError, [e2e-predictor] ProxyError, [e2e-predictor] ) as e: [e2e-predictor] # Discard the connection for these exceptions. It will be [e2e-predictor] # replaced during the next _get_conn() call. 
[e2e-predictor]             clean_exit = False
[e2e-predictor]             new_e: Exception = e
[e2e-predictor]             if isinstance(e, (BaseSSLError, CertificateError)):
[e2e-predictor]                 new_e = SSLError(e)
[e2e-predictor]             if isinstance(new_e, (OSError, NewConnectionError, TimeoutError,
[e2e-predictor]                                   SSLError, HTTPException)) and (
[e2e-predictor]                 conn and conn.proxy and not conn.has_connected_to_proxy
[e2e-predictor]             ):
[e2e-predictor]                 new_e = _wrap_proxy_error(new_e, conn.proxy.scheme)
[e2e-predictor]             elif isinstance(new_e, (OSError, HTTPException)):
[e2e-predictor]                 new_e = ProtocolError("Connection aborted.", new_e)
[e2e-predictor]
[e2e-predictor] >           retries = retries.increment(
[e2e-predictor]                 method, url, error=new_e, _pool=self, _stacktrace=sys.exc_info()[2]
[e2e-predictor]             )
[e2e-predictor]
[e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:841:
[e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
[e2e-predictor]
[e2e-predictor] self = Retry(total=0, connect=None, read=None, redirect=None, status=None)
[e2e-predictor] method = 'POST'
[e2e-predictor] url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
[e2e-predictor] response = None
[e2e-predictor] error = NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.c...a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")
[e2e-predictor] _pool =
[e2e-predictor] _stacktrace =
[e2e-predictor]
[e2e-predictor]     def increment(
[e2e-predictor]         self,
[e2e-predictor]         method: str | None = None,
[e2e-predictor]         url: str | None = None,
[e2e-predictor]         response: BaseHTTPResponse | None = None,
[e2e-predictor]         error: Exception | None = None,
[e2e-predictor]         _pool: ConnectionPool | None = None,
[e2e-predictor]         _stacktrace: TracebackType | None = None,
[e2e-predictor]     ) -> Self:
[e2e-predictor]         """Return a new Retry object with incremented retry counters."""
[e2e-predictor]         [... Retry counter bookkeeping (connect/read/redirect/status/other) from traceback ...]
[e2e-predictor]         if new_retry.is_exhausted():
[e2e-predictor]             reason = error or ResponseError(cause)
[e2e-predictor] >           raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
[e2e-predictor] E           urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Max retries exceeded with url: /apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices (Caused by NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)"))
[e2e-predictor]
[e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError
[e2e-predictor] ------------------------------ Captured log call -------------------------------
[e2e-predictor] WARNING urllib3.connectionpool:connectionpool.py:868 Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")': /apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices
[e2e-predictor] WARNING urllib3.connectionpool:connectionpool.py:868 Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")': /apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices
[e2e-predictor] WARNING urllib3.connectionpool:connectionpool.py:868 Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")': /apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices
[e2e-predictor] _________________________ test_predictive_lightgbm_v1 __________________________
[e2e-predictor] [gw0] linux -- Python 3.11.13 /workspace/source/python/kserve/.venv/bin/python
[e2e-predictor]
[e2e-predictor] self =
[e2e-predictor]
[e2e-predictor]     def _new_conn(self) -> socket.socket:
[e2e-predictor]         """Establish a socket connection and set nodelay settings on it.
[e2e-predictor]
[e2e-predictor]         :return: New socket connection.
[e2e-predictor]         """
[e2e-predictor]         try:
[e2e-predictor] >           sock = connection.create_connection(
[e2e-predictor]                 (self._dns_host, self.port),
[e2e-predictor]                 self.timeout,
[e2e-predictor]                 source_address=self.source_address,
[e2e-predictor]                 socket_options=self.socket_options,
[e2e-predictor]             )
[e2e-predictor]
[e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connection.py:204:
[e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
[e2e-predictor]
[e2e-predictor] address = ('a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', 6443)
[e2e-predictor] timeout = None, source_address = None, socket_options = [(6, 1, 1)]
[e2e-predictor]
[e2e-predictor]     def create_connection(
[e2e-predictor]         address: tuple[str, int],
[e2e-predictor]         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
[e2e-predictor]         source_address: tuple[str, int] | None = None,
[e2e-predictor]         socket_options: _TYPE_SOCKET_OPTIONS | None = None,
[e2e-predictor]     ) -> socket.socket:
[e2e-predictor]         """Connect to *address* (a 2-tuple ``(host, port)``) and return the socket object."""
[e2e-predictor]         [... host parsing, IDNA encoding, and address-family selection from traceback ...]
[e2e-predictor] >       for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
[e2e-predictor]
[e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/util/connection.py:60:
[e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
[e2e-predictor]
[e2e-predictor] host = 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com'
[e2e-predictor] port = 6443, family =
[e2e-predictor] type = , proto = 0, flags = 0
[e2e-predictor]
[e2e-predictor]     def getaddrinfo(host, port, family=0, type=0, proto=0, flags=0):
[e2e-predictor]         """Resolve host and port into list of address info entries."""
[e2e-predictor]         # We override this function since we want to translate the numeric family
[e2e-predictor]         # and socket type values to enum constants.
[e2e-predictor]         addrlist = []
[e2e-predictor] >       for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
[e2e-predictor] E       socket.gaierror: [Errno -2] Name or service not known
[e2e-predictor]
[e2e-predictor] /usr/lib64/python3.11/socket.py:974: gaierror
[e2e-predictor]
[e2e-predictor] The above exception was the direct cause of the following exception:
[e2e-predictor]
[e2e-predictor] self =
[e2e-predictor] method = 'POST'
[e2e-predictor] url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
[e2e-predictor] body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io...ry": "128Mi"}}, "runtime": "kserve-predictiveserver", "storageUri": "gs://kfserving-examples/models/lightgbm/iris"}}}}'
[e2e-predictor] headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'})
[e2e-predictor] retries = Retry(total=0, connect=None, read=None, redirect=None, status=None)
[e2e-predictor] redirect = False, assert_same_host = False, timeout = None, pool_timeout = None
[e2e-predictor] release_conn = True, chunked = False, body_pos = None, preload_content = True
[e2e-predictor] decode_content = True, response_kw = {}
[e2e-predictor] parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices', query=None, fragment=None)
[e2e-predictor] destination_scheme = None, conn = None, release_this_conn = True
[e2e-predictor] http_tunnel_required = False, err = None, clean_exit = False
[e2e-predictor]
[e2e-predictor]     def urlopen(  # type: ignore[override]
[e2e-predictor]         ...
[e2e-predictor]     ) -> BaseHTTPResponse:
[e2e-predictor]         [... signature, docstring, and body identical to the urlopen frame above ...]
[e2e-predictor]             # Make the request on the HTTPConnection object
[e2e-predictor] >           response = self._make_request(
[e2e-predictor]                 conn,
[e2e-predictor]                 method,
[e2e-predictor]                 url,
[e2e-predictor]                 timeout=timeout_obj,
[e2e-predictor]                 body=body,
[e2e-predictor]                 headers=headers,
[e2e-predictor]                 chunked=chunked,
[e2e-predictor]                 retries=retries,
[e2e-predictor]                 response_conn=response_conn,
[e2e-predictor]                 preload_content=preload_content,
[e2e-predictor]                 decode_content=decode_content,
[e2e-predictor]                 **response_kw,
[e2e-predictor]             )
[e2e-predictor]
[e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:787:
[e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
[e2e-predictor]
[e2e-predictor] self =
[e2e-predictor] conn =
[e2e-predictor] method = 'POST'
[e2e-predictor] url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
[e2e-predictor] body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io...ry": "128Mi"}}, "runtime": "kserve-predictiveserver", "storageUri": "gs://kfserving-examples/models/lightgbm/iris"}}}}'
[e2e-predictor] headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'})
[e2e-predictor] retries = Retry(total=0, connect=None, read=None, redirect=None, status=None)
[e2e-predictor] timeout = Timeout(connect=None, read=None, total=None), chunked = False
[e2e-predictor] response_conn = None, preload_content = True, decode_content = True
[e2e-predictor] enforce_content_length = True
[e2e-predictor]
[e2e-predictor]     def _make_request(
[e2e-predictor]         self,
[e2e-predictor]         conn: BaseHTTPConnection,
[e2e-predictor]         method: str,
[e2e-predictor]         url: str,
[e2e-predictor]         body: _TYPE_BODY | None = None,
[e2e-predictor]         headers: typing.Mapping[str, str] | None = None,
[e2e-predictor]         retries: Retry | None = None,
[e2e-predictor]         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
[e2e-predictor]         chunked: bool = False,
[e2e-predictor]         response_conn: BaseHTTPConnection | None = None,
[e2e-predictor]         preload_content: bool = True,
[e2e-predictor]         decode_content: bool = True,
[e2e-predictor]         enforce_content_length: bool = True,
[e2e-predictor]     ) -> BaseHTTPResponse:
[e2e-predictor]         """Perform a request on a given urllib connection object taken from our pool.
[e2e-predictor]         [... parameter documentation from traceback, ending at:]
[e2e-predictor]         :param preload_content:
[e2e-predictor]             If True, the response's body will be preloaded during construction.
[e2e-predictor] [e2e-predictor] :param decode_content: [e2e-predictor] If True, will attempt to decode the body based on the [e2e-predictor] 'content-encoding' header. [e2e-predictor] [e2e-predictor] :param enforce_content_length: [e2e-predictor] Enforce content length checking. Body returned by server must match [e2e-predictor] value of Content-Length header, if present. Otherwise, raise error. [e2e-predictor] """ [e2e-predictor] self.num_requests += 1 [e2e-predictor] [e2e-predictor] timeout_obj = self._get_timeout(timeout) [e2e-predictor] timeout_obj.start_connect() [e2e-predictor] conn.timeout = Timeout.resolve_default_timeout(timeout_obj.connect_timeout) [e2e-predictor] [e2e-predictor] try: [e2e-predictor] # Trigger any extra validation we need to do. [e2e-predictor] try: [e2e-predictor] self._validate_conn(conn) [e2e-predictor] except (SocketTimeout, BaseSSLError) as e: [e2e-predictor] self._raise_timeout(err=e, url=url, timeout_value=conn.timeout) [e2e-predictor] raise [e2e-predictor] [e2e-predictor] # _validate_conn() starts the connection to an HTTPS proxy [e2e-predictor] # so we need to wrap errors with 'ProxyError' here too. 
[e2e-predictor] except ( [e2e-predictor] OSError, [e2e-predictor] NewConnectionError, [e2e-predictor] TimeoutError, [e2e-predictor] BaseSSLError, [e2e-predictor] CertificateError, [e2e-predictor] SSLError, [e2e-predictor] ) as e: [e2e-predictor] new_e: Exception = e [e2e-predictor] if isinstance(e, (BaseSSLError, CertificateError)): [e2e-predictor] new_e = SSLError(e) [e2e-predictor] # If the connection didn't successfully connect to it's proxy [e2e-predictor] # then there [e2e-predictor] if isinstance( [e2e-predictor] new_e, (OSError, NewConnectionError, TimeoutError, SSLError) [e2e-predictor] ) and (conn and conn.proxy and not conn.has_connected_to_proxy): [e2e-predictor] new_e = _wrap_proxy_error(new_e, conn.proxy.scheme) [e2e-predictor] > raise new_e [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:488: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] conn = [e2e-predictor] method = 'POST' [e2e-predictor] url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices' [e2e-predictor] body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io...ry": "128Mi"}}, "runtime": "kserve-predictiveserver", "storageUri": "gs://kfserving-examples/models/lightgbm/iris"}}}}' [e2e-predictor] headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'}) [e2e-predictor] retries = Retry(total=0, connect=None, read=None, redirect=None, status=None) [e2e-predictor] timeout = Timeout(connect=None, read=None, total=None), chunked = False [e2e-predictor] response_conn = None, preload_content = True, decode_content = True [e2e-predictor] enforce_content_length = True [e2e-predictor] [e2e-predictor] def _make_request( [e2e-predictor] self, 
[e2e-predictor] conn: BaseHTTPConnection, [e2e-predictor] method: str, [e2e-predictor] url: str, [e2e-predictor] body: _TYPE_BODY | None = None, [e2e-predictor] headers: typing.Mapping[str, str] | None = None, [e2e-predictor] retries: Retry | None = None, [e2e-predictor] timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, [e2e-predictor] chunked: bool = False, [e2e-predictor] response_conn: BaseHTTPConnection | None = None, [e2e-predictor] preload_content: bool = True, [e2e-predictor] decode_content: bool = True, [e2e-predictor] enforce_content_length: bool = True, [e2e-predictor] ) -> BaseHTTPResponse: [e2e-predictor] """ [e2e-predictor] Perform a request on a given urllib connection object taken from our [e2e-predictor] pool. [e2e-predictor] [e2e-predictor] :param conn: [e2e-predictor] a connection from one of our connection pools [e2e-predictor] [e2e-predictor] :param method: [e2e-predictor] HTTP request method (such as GET, POST, PUT, etc.) [e2e-predictor] [e2e-predictor] :param url: [e2e-predictor] The URL to perform the request on. [e2e-predictor] [e2e-predictor] :param body: [e2e-predictor] Data to send in the request body, either :class:`str`, :class:`bytes`, [e2e-predictor] an iterable of :class:`str`/:class:`bytes`, or a file-like object. [e2e-predictor] [e2e-predictor] :param headers: [e2e-predictor] Dictionary of custom headers to send, such as User-Agent, [e2e-predictor] If-None-Match, etc. If None, pool headers are used. If provided, [e2e-predictor] these headers completely replace any pool-specific headers. [e2e-predictor] [e2e-predictor] :param retries: [e2e-predictor] Configure the number of retries to allow before raising a [e2e-predictor] :class:`~urllib3.exceptions.MaxRetryError` exception. [e2e-predictor] [e2e-predictor] Pass ``None`` to retry until you receive a response. Pass a [e2e-predictor] :class:`~urllib3.util.retry.Retry` object for fine-grained control [e2e-predictor] over different types of retries. 
[e2e-predictor] Pass an integer number to retry connection errors that many times, [e2e-predictor] but no other types of errors. Pass zero to never retry. [e2e-predictor] [e2e-predictor] If ``False``, then retries are disabled and any exception is raised [e2e-predictor] immediately. Also, instead of raising a MaxRetryError on redirects, [e2e-predictor] the redirect response will be returned. [e2e-predictor] [e2e-predictor] :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. [e2e-predictor] [e2e-predictor] :param timeout: [e2e-predictor] If specified, overrides the default timeout for this one [e2e-predictor] request. It may be a float (in seconds) or an instance of [e2e-predictor] :class:`urllib3.util.Timeout`. [e2e-predictor] [e2e-predictor] :param chunked: [e2e-predictor] If True, urllib3 will send the body using chunked transfer [e2e-predictor] encoding. Otherwise, urllib3 will send the body using the standard [e2e-predictor] content-length form. Defaults to False. [e2e-predictor] [e2e-predictor] :param response_conn: [e2e-predictor] Set this to ``None`` if you will handle releasing the connection or [e2e-predictor] set the connection to have the response release it. [e2e-predictor] [e2e-predictor] :param preload_content: [e2e-predictor] If True, the response's body will be preloaded during construction. [e2e-predictor] [e2e-predictor] :param decode_content: [e2e-predictor] If True, will attempt to decode the body based on the [e2e-predictor] 'content-encoding' header. [e2e-predictor] [e2e-predictor] :param enforce_content_length: [e2e-predictor] Enforce content length checking. Body returned by server must match [e2e-predictor] value of Content-Length header, if present. Otherwise, raise error. 
[e2e-predictor] """ [e2e-predictor] self.num_requests += 1 [e2e-predictor] [e2e-predictor] timeout_obj = self._get_timeout(timeout) [e2e-predictor] timeout_obj.start_connect() [e2e-predictor] conn.timeout = Timeout.resolve_default_timeout(timeout_obj.connect_timeout) [e2e-predictor] [e2e-predictor] try: [e2e-predictor] # Trigger any extra validation we need to do. [e2e-predictor] try: [e2e-predictor] > self._validate_conn(conn) [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:464: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] conn = [e2e-predictor] [e2e-predictor] def _validate_conn(self, conn: BaseHTTPConnection) -> None: [e2e-predictor] """ [e2e-predictor] Called right before a request is made, after the socket is created. [e2e-predictor] """ [e2e-predictor] super()._validate_conn(conn) [e2e-predictor] [e2e-predictor] # Force connect early to allow us to validate the connection. [e2e-predictor] if conn.is_closed: [e2e-predictor] > conn.connect() [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:1093: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] [e2e-predictor] def connect(self) -> None: [e2e-predictor] # Today we don't need to be doing this step before the /actual/ socket [e2e-predictor] # connection, however in the future we'll need to decide whether to [e2e-predictor] # create a new socket or re-use an existing "shared" socket as a part [e2e-predictor] # of the HTTP/2 handshake dance. 
[e2e-predictor] if self._tunnel_host is not None and self._tunnel_port is not None: [e2e-predictor] probe_http2_host = self._tunnel_host [e2e-predictor] probe_http2_port = self._tunnel_port [e2e-predictor] else: [e2e-predictor] probe_http2_host = self.host [e2e-predictor] probe_http2_port = self.port [e2e-predictor] [e2e-predictor] # Check if the target origin supports HTTP/2. [e2e-predictor] # If the value comes back as 'None' it means that the current thread [e2e-predictor] # is probing for HTTP/2 support. Otherwise, we're waiting for another [e2e-predictor] # probe to complete, or we get a value right away. [e2e-predictor] target_supports_http2: bool | None [e2e-predictor] if "h2" in ssl_.ALPN_PROTOCOLS: [e2e-predictor] target_supports_http2 = http2_probe.acquire_and_get( [e2e-predictor] host=probe_http2_host, port=probe_http2_port [e2e-predictor] ) [e2e-predictor] else: [e2e-predictor] # If HTTP/2 isn't going to be offered it doesn't matter if [e2e-predictor] # the target supports HTTP/2. Don't want to make a probe. [e2e-predictor] target_supports_http2 = False [e2e-predictor] [e2e-predictor] if self._connect_callback is not None: [e2e-predictor] self._connect_callback( [e2e-predictor] "before connect", [e2e-predictor] thread_id=threading.get_ident(), [e2e-predictor] target_supports_http2=target_supports_http2, [e2e-predictor] ) [e2e-predictor] [e2e-predictor] try: [e2e-predictor] sock: socket.socket | ssl.SSLSocket [e2e-predictor] > self.sock = sock = self._new_conn() [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connection.py:759: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] [e2e-predictor] def _new_conn(self) -> socket.socket: [e2e-predictor] """Establish a socket connection and set nodelay settings on it. [e2e-predictor] [e2e-predictor] :return: New socket connection. 
[e2e-predictor] """ [e2e-predictor] try: [e2e-predictor] sock = connection.create_connection( [e2e-predictor] (self._dns_host, self.port), [e2e-predictor] self.timeout, [e2e-predictor] source_address=self.source_address, [e2e-predictor] socket_options=self.socket_options, [e2e-predictor] ) [e2e-predictor] except socket.gaierror as e: [e2e-predictor] > raise NameResolutionError(self.host, self, e) from e [e2e-predictor] E urllib3.exceptions.NameResolutionError: HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known) [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connection.py:211: NameResolutionError [e2e-predictor] [e2e-predictor] The above exception was the direct cause of the following exception: [e2e-predictor] [e2e-predictor] rest_v1_client = [e2e-predictor] [e2e-predictor] @pytest.mark.predictor [e2e-predictor] @pytest.mark.asyncio(scope="session") [e2e-predictor] async def test_predictive_lightgbm_v1(rest_v1_client): [e2e-predictor] service_name = "isvc-predictive-lightgbm" [e2e-predictor] predictor = V1beta1PredictorSpec( [e2e-predictor] min_replicas=1, [e2e-predictor] model=V1beta1ModelSpec( [e2e-predictor] model_format=V1beta1ModelFormat(name="lightgbm"), [e2e-predictor] runtime="kserve-predictiveserver", [e2e-predictor] storage_uri="gs://kfserving-examples/models/lightgbm/iris", [e2e-predictor] resources=V1ResourceRequirements( [e2e-predictor] requests={"cpu": "50m", "memory": "128Mi"}, [e2e-predictor] limits={"cpu": "100m", "memory": "256Mi"}, [e2e-predictor] ), [e2e-predictor] ), [e2e-predictor] ) [e2e-predictor] [e2e-predictor] isvc = V1beta1InferenceService( [e2e-predictor] api_version=constants.KSERVE_V1BETA1, [e2e-predictor] kind=constants.KSERVE_KIND_INFERENCESERVICE, [e2e-predictor] 
metadata=client.V1ObjectMeta( [e2e-predictor] name=service_name, [e2e-predictor] namespace=KSERVE_TEST_NAMESPACE, [e2e-predictor] labels={ [e2e-predictor] constants.KSERVE_LABEL_NETWORKING_VISIBILITY: constants.KSERVE_LABEL_NETWORKING_VISIBILITY_EXPOSED, [e2e-predictor] }, [e2e-predictor] ), [e2e-predictor] spec=V1beta1InferenceServiceSpec(predictor=predictor), [e2e-predictor] ) [e2e-predictor] [e2e-predictor] kserve_client = KServeClient( [e2e-predictor] config_file=os.environ.get("KUBECONFIG", "~/.kube/config") [e2e-predictor] ) [e2e-predictor] > kserve_client.create(isvc) [e2e-predictor] [e2e-predictor] predictor/test_predictive.py:148: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] inferenceservice = {'api_version': 'serving.kserve.io/v1beta1', [e2e-predictor] 'kind': 'InferenceService', [e2e-predictor] 'metadata': {'annotations': None, [e2e-predictor] ... 'worker_spec': None, [e2e-predictor] 'xgboost': None}, [e2e-predictor] 'transformer': None}, [e2e-predictor] 'status': None} [e2e-predictor] namespace = 'kserve-ci-e2e-test', watch = False, timeout_seconds = 600 [e2e-predictor] [e2e-predictor] def create( [e2e-predictor] self, inferenceservice, namespace=None, watch=False, timeout_seconds=600 [e2e-predictor] ): # pylint:disable=inconsistent-return-statements [e2e-predictor] """ [e2e-predictor] Create the inference service [e2e-predictor] :param inferenceservice: inference service object [e2e-predictor] :param namespace: defaults to current or default namespace [e2e-predictor] :param watch: True to watch the created service until timeout elapsed or status is ready [e2e-predictor] :param timeout_seconds: timeout seconds for watch, default to 600s [e2e-predictor] :return: created inference service [e2e-predictor] """ [e2e-predictor] [e2e-predictor] version = inferenceservice.api_version.split("/")[1] [e2e-predictor] [e2e-predictor] if namespace is None: 
[e2e-predictor] namespace = utils.get_isvc_namespace(inferenceservice) [e2e-predictor] [e2e-predictor] try: [e2e-predictor] > outputs = self.api_instance.create_namespaced_custom_object( [e2e-predictor] constants.KSERVE_GROUP, [e2e-predictor] version, [e2e-predictor] namespace, [e2e-predictor] constants.KSERVE_PLURAL_INFERENCESERVICE, [e2e-predictor] inferenceservice, [e2e-predictor] ) [e2e-predictor] [e2e-predictor] ../../python/kserve/kserve/api/kserve_client.py:145: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] group = 'serving.kserve.io', version = 'v1beta1' [e2e-predictor] namespace = 'kserve-ci-e2e-test', plural = 'inferenceservices' [e2e-predictor] body = {'api_version': 'serving.kserve.io/v1beta1', [e2e-predictor] 'kind': 'InferenceService', [e2e-predictor] 'metadata': {'annotations': None, [e2e-predictor] ... 'worker_spec': None, [e2e-predictor] 'xgboost': None}, [e2e-predictor] 'transformer': None}, [e2e-predictor] 'status': None} [e2e-predictor] kwargs = {'_return_http_data_only': True} [e2e-predictor] [e2e-predictor] def create_namespaced_custom_object(self, group, version, namespace, plural, body, **kwargs): # noqa: E501 [e2e-predictor] """create_namespaced_custom_object # noqa: E501 [e2e-predictor] [e2e-predictor] Creates a namespace scoped Custom object # noqa: E501 [e2e-predictor] This method makes a synchronous HTTP request by default. 
To make an [e2e-predictor] asynchronous HTTP request, please pass async_req=True [e2e-predictor] >>> thread = api.create_namespaced_custom_object(group, version, namespace, plural, body, async_req=True) [e2e-predictor] >>> result = thread.get() [e2e-predictor] [e2e-predictor] :param async_req bool: execute request asynchronously [e2e-predictor] :param str group: The custom resource's group name (required) [e2e-predictor] :param str version: The custom resource's version (required) [e2e-predictor] :param str namespace: The custom resource's namespace (required) [e2e-predictor] :param str plural: The custom resource's plural name. For TPRs this would be lowercase plural kind. (required) [e2e-predictor] :param object body: The JSON schema of the Resource to create. (required) [e2e-predictor] :param str pretty: If 'true', then the output is pretty printed. [e2e-predictor] :param str dry_run: When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed [e2e-predictor] :param str field_manager: fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint. [e2e-predictor] :param str field_validation: fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. 
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. (optional) [e2e-predictor] :param _preload_content: if False, the urllib3.HTTPResponse object will [e2e-predictor] be returned without reading/decoding response [e2e-predictor] data. Default is True. [e2e-predictor] :param _request_timeout: timeout setting for this request. If one [e2e-predictor] number provided, it will be total request [e2e-predictor] timeout. It can also be a pair (tuple) of [e2e-predictor] (connection, read) timeouts. [e2e-predictor] :return: object [e2e-predictor] If the method is called asynchronously, [e2e-predictor] returns the request thread. [e2e-predictor] """ [e2e-predictor] kwargs['_return_http_data_only'] = True [e2e-predictor] > return self.create_namespaced_custom_object_with_http_info(group, version, namespace, plural, body, **kwargs) # noqa: E501 [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api/custom_objects_api.py:231: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] group = 'serving.kserve.io', version = 'v1beta1' [e2e-predictor] namespace = 'kserve-ci-e2e-test', plural = 'inferenceservices' [e2e-predictor] body = {'api_version': 'serving.kserve.io/v1beta1', [e2e-predictor] 'kind': 'InferenceService', [e2e-predictor] 'metadata': {'annotations': None, [e2e-predictor] ... 
'worker_spec': None, [e2e-predictor] 'xgboost': None}, [e2e-predictor] 'transformer': None}, [e2e-predictor] 'status': None} [e2e-predictor] kwargs = {'_return_http_data_only': True} [e2e-predictor] local_var_params = {'_return_http_data_only': True, 'all_params': ['group', 'version', 'namespace', 'plural', 'body', 'pretty', ...], 'au... 'worker_spec': None, [e2e-predictor] 'xgboost': None}, [e2e-predictor] 'transformer': None}, [e2e-predictor] 'status': None}, ...} [e2e-predictor] all_params = ['group', 'version', 'namespace', 'plural', 'body', 'pretty', ...] [e2e-predictor] key = '_return_http_data_only', val = True, collection_formats = {} [e2e-predictor] path_params = {'group': 'serving.kserve.io', 'namespace': 'kserve-ci-e2e-test', 'plural': 'inferenceservices', 'version': 'v1beta1'} [e2e-predictor] query_params = [] [e2e-predictor] [e2e-predictor] def create_namespaced_custom_object_with_http_info(self, group, version, namespace, plural, body, **kwargs): # noqa: E501 [e2e-predictor] """create_namespaced_custom_object # noqa: E501 [e2e-predictor] [e2e-predictor] Creates a namespace scoped Custom object # noqa: E501 [e2e-predictor] This method makes a synchronous HTTP request by default. To make an [e2e-predictor] asynchronous HTTP request, please pass async_req=True [e2e-predictor] >>> thread = api.create_namespaced_custom_object_with_http_info(group, version, namespace, plural, body, async_req=True) [e2e-predictor] >>> result = thread.get() [e2e-predictor] [e2e-predictor] :param async_req bool: execute request asynchronously [e2e-predictor] :param str group: The custom resource's group name (required) [e2e-predictor] :param str version: The custom resource's version (required) [e2e-predictor] :param str namespace: The custom resource's namespace (required) [e2e-predictor] :param str plural: The custom resource's plural name. For TPRs this would be lowercase plural kind. (required) [e2e-predictor] :param object body: The JSON schema of the Resource to create. 
(required) [e2e-predictor] :param str pretty: If 'true', then the output is pretty printed. [e2e-predictor] :param str dry_run: When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed [e2e-predictor] :param str field_manager: fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint. [e2e-predictor] :param str field_validation: fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. (optional) [e2e-predictor] :param _return_http_data_only: response data without head status code [e2e-predictor] and headers [e2e-predictor] :param _preload_content: if False, the urllib3.HTTPResponse object will [e2e-predictor] be returned without reading/decoding response [e2e-predictor] data. Default is True. 
[e2e-predictor] :param _request_timeout: timeout setting for this request. If one [e2e-predictor] number provided, it will be total request [e2e-predictor] timeout. It can also be a pair (tuple) of [e2e-predictor] (connection, read) timeouts. [e2e-predictor] :return: tuple(object, status_code(int), headers(HTTPHeaderDict)) [e2e-predictor] If the method is called asynchronously, [e2e-predictor] returns the request thread. [e2e-predictor] """ [e2e-predictor] [e2e-predictor] local_var_params = locals() [e2e-predictor] [e2e-predictor] all_params = [ [e2e-predictor] 'group', [e2e-predictor] 'version', [e2e-predictor] 'namespace', [e2e-predictor] 'plural', [e2e-predictor] 'body', [e2e-predictor] 'pretty', [e2e-predictor] 'dry_run', [e2e-predictor] 'field_manager', [e2e-predictor] 'field_validation' [e2e-predictor] ] [e2e-predictor] all_params.extend( [e2e-predictor] [ [e2e-predictor] 'async_req', [e2e-predictor] '_return_http_data_only', [e2e-predictor] '_preload_content', [e2e-predictor] '_request_timeout' [e2e-predictor] ] [e2e-predictor] ) [e2e-predictor] [e2e-predictor] for key, val in six.iteritems(local_var_params['kwargs']): [e2e-predictor] if key not in all_params: [e2e-predictor] raise ApiTypeError( [e2e-predictor] "Got an unexpected keyword argument '%s'" [e2e-predictor] " to method create_namespaced_custom_object" % key [e2e-predictor] ) [e2e-predictor] local_var_params[key] = val [e2e-predictor] del local_var_params['kwargs'] [e2e-predictor] # verify the required parameter 'group' is set [e2e-predictor] if self.api_client.client_side_validation and ('group' not in local_var_params or # noqa: E501 [e2e-predictor] local_var_params['group'] is None): # noqa: E501 [e2e-predictor] raise ApiValueError("Missing the required parameter `group` when calling `create_namespaced_custom_object`") # noqa: E501 [e2e-predictor] # verify the required parameter 'version' is set [e2e-predictor] if self.api_client.client_side_validation and ('version' not in local_var_params or # 
noqa: E501 [e2e-predictor] local_var_params['version'] is None): # noqa: E501 [e2e-predictor] raise ApiValueError("Missing the required parameter `version` when calling `create_namespaced_custom_object`") # noqa: E501 [e2e-predictor] # verify the required parameter 'namespace' is set [e2e-predictor] if self.api_client.client_side_validation and ('namespace' not in local_var_params or # noqa: E501 [e2e-predictor] local_var_params['namespace'] is None): # noqa: E501 [e2e-predictor] raise ApiValueError("Missing the required parameter `namespace` when calling `create_namespaced_custom_object`") # noqa: E501 [e2e-predictor] # verify the required parameter 'plural' is set [e2e-predictor] if self.api_client.client_side_validation and ('plural' not in local_var_params or # noqa: E501 [e2e-predictor] local_var_params['plural'] is None): # noqa: E501 [e2e-predictor] raise ApiValueError("Missing the required parameter `plural` when calling `create_namespaced_custom_object`") # noqa: E501 [e2e-predictor] # verify the required parameter 'body' is set [e2e-predictor] if self.api_client.client_side_validation and ('body' not in local_var_params or # noqa: E501 [e2e-predictor] local_var_params['body'] is None): # noqa: E501 [e2e-predictor] raise ApiValueError("Missing the required parameter `body` when calling `create_namespaced_custom_object`") # noqa: E501 [e2e-predictor] [e2e-predictor] collection_formats = {} [e2e-predictor] [e2e-predictor] path_params = {} [e2e-predictor] if 'group' in local_var_params: [e2e-predictor] path_params['group'] = local_var_params['group'] # noqa: E501 [e2e-predictor] if 'version' in local_var_params: [e2e-predictor] path_params['version'] = local_var_params['version'] # noqa: E501 [e2e-predictor] if 'namespace' in local_var_params: [e2e-predictor] path_params['namespace'] = local_var_params['namespace'] # noqa: E501 [e2e-predictor] if 'plural' in local_var_params: [e2e-predictor] path_params['plural'] = local_var_params['plural'] # noqa: E501 
[e2e-predictor] [e2e-predictor] query_params = [] [e2e-predictor] if 'pretty' in local_var_params and local_var_params['pretty'] is not None: # noqa: E501 [e2e-predictor] query_params.append(('pretty', local_var_params['pretty'])) # noqa: E501 [e2e-predictor] if 'dry_run' in local_var_params and local_var_params['dry_run'] is not None: # noqa: E501 [e2e-predictor] query_params.append(('dryRun', local_var_params['dry_run'])) # noqa: E501 [e2e-predictor] if 'field_manager' in local_var_params and local_var_params['field_manager'] is not None: # noqa: E501 [e2e-predictor] query_params.append(('fieldManager', local_var_params['field_manager'])) # noqa: E501 [e2e-predictor] if 'field_validation' in local_var_params and local_var_params['field_validation'] is not None: # noqa: E501 [e2e-predictor] query_params.append(('fieldValidation', local_var_params['field_validation'])) # noqa: E501 [e2e-predictor] [e2e-predictor] header_params = {} [e2e-predictor] [e2e-predictor] form_params = [] [e2e-predictor] local_var_files = {} [e2e-predictor] [e2e-predictor] body_params = None [e2e-predictor] if 'body' in local_var_params: [e2e-predictor] body_params = local_var_params['body'] [e2e-predictor] # HTTP header `Accept` [e2e-predictor] header_params['Accept'] = self.api_client.select_header_accept( [e2e-predictor] ['application/json']) # noqa: E501 [e2e-predictor] [e2e-predictor] # Authentication setting [e2e-predictor] auth_settings = ['BearerToken'] # noqa: E501 [e2e-predictor] [e2e-predictor] > return self.api_client.call_api( [e2e-predictor] '/apis/{group}/{version}/namespaces/{namespace}/{plural}', 'POST', [e2e-predictor] path_params, [e2e-predictor] query_params, [e2e-predictor] header_params, [e2e-predictor] body=body_params, [e2e-predictor] post_params=form_params, [e2e-predictor] files=local_var_files, [e2e-predictor] response_type='object', # noqa: E501 [e2e-predictor] auth_settings=auth_settings, [e2e-predictor] async_req=local_var_params.get('async_req'), 
[e2e-predictor] _return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501 [e2e-predictor] _preload_content=local_var_params.get('_preload_content', True), [e2e-predictor] _request_timeout=local_var_params.get('_request_timeout'), [e2e-predictor] collection_formats=collection_formats) [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api/custom_objects_api.py:354: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] resource_path = '/apis/{group}/{version}/namespaces/{namespace}/{plural}' [e2e-predictor] method = 'POST' [e2e-predictor] path_params = {'group': 'serving.kserve.io', 'namespace': 'kserve-ci-e2e-test', 'plural': 'inferenceservices', 'version': 'v1beta1'} [e2e-predictor] query_params = [] [e2e-predictor] header_params = {'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'} [e2e-predictor] body = {'api_version': 'serving.kserve.io/v1beta1', [e2e-predictor] 'kind': 'InferenceService', [e2e-predictor] 'metadata': {'annotations': None, [e2e-predictor] ... 
'worker_spec': None, [e2e-predictor] 'xgboost': None}, [e2e-predictor] 'transformer': None}, [e2e-predictor] 'status': None} [e2e-predictor] post_params = [], files = {}, response_type = 'object' [e2e-predictor] auth_settings = ['BearerToken'], async_req = None, _return_http_data_only = True [e2e-predictor] collection_formats = {}, _preload_content = True, _request_timeout = None [e2e-predictor] _host = None [e2e-predictor] [e2e-predictor] def call_api(self, resource_path, method, [e2e-predictor] path_params=None, query_params=None, header_params=None, [e2e-predictor] body=None, post_params=None, files=None, [e2e-predictor] response_type=None, auth_settings=None, async_req=None, [e2e-predictor] _return_http_data_only=None, collection_formats=None, [e2e-predictor] _preload_content=True, _request_timeout=None, _host=None): [e2e-predictor] """Makes the HTTP request (synchronous) and returns deserialized data. [e2e-predictor] [e2e-predictor] To make an async_req request, set the async_req parameter. [e2e-predictor] [e2e-predictor] :param resource_path: Path to method endpoint. [e2e-predictor] :param method: Method to call. [e2e-predictor] :param path_params: Path parameters in the url. [e2e-predictor] :param query_params: Query parameters in the url. [e2e-predictor] :param header_params: Header parameters to be [e2e-predictor] placed in the request header. [e2e-predictor] :param body: Request body. [e2e-predictor] :param post_params dict: Request post form parameters, [e2e-predictor] for `application/x-www-form-urlencoded`, `multipart/form-data`. [e2e-predictor] :param auth_settings list: Auth Settings names for the request. [e2e-predictor] :param response: Response data type. [e2e-predictor] :param files dict: key -> filename, value -> filepath, [e2e-predictor] for `multipart/form-data`. 
[e2e-predictor] :param async_req bool: execute request asynchronously [e2e-predictor] :param _return_http_data_only: response data without head status code [e2e-predictor] and headers [e2e-predictor] :param collection_formats: dict of collection formats for path, query, [e2e-predictor] header, and post parameters. [e2e-predictor] :param _preload_content: if False, the urllib3.HTTPResponse object will [e2e-predictor] be returned without reading/decoding response [e2e-predictor] data. Default is True. [e2e-predictor] :param _request_timeout: timeout setting for this request. If one [e2e-predictor] number provided, it will be total request [e2e-predictor] timeout. It can also be a pair (tuple) of [e2e-predictor] (connection, read) timeouts. [e2e-predictor] :return: [e2e-predictor] If async_req parameter is True, [e2e-predictor] the request will be called asynchronously. [e2e-predictor] The method will return the request thread. [e2e-predictor] If parameter async_req is False or missing, [e2e-predictor] then the method will return the response directly. 
[e2e-predictor] """ [e2e-predictor] if not async_req: [e2e-predictor] > return self.__call_api(resource_path, method, [e2e-predictor] path_params, query_params, header_params, [e2e-predictor] body, post_params, files, [e2e-predictor] response_type, auth_settings, [e2e-predictor] _return_http_data_only, collection_formats, [e2e-predictor] _preload_content, _request_timeout, _host) [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api_client.py:348: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] resource_path = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices' [e2e-predictor] method = 'POST' [e2e-predictor] path_params = [('group', 'serving.kserve.io'), ('version', 'v1beta1'), ('namespace', 'kserve-ci-e2e-test'), ('plural', 'inferenceservices')] [e2e-predictor] query_params = [] [e2e-predictor] header_params = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'} [e2e-predictor] body = {'apiVersion': 'serving.kserve.io/v1beta1', 'kind': 'InferenceService', 'metadata': {'labels': {'networking.kserve.io/...00m', 'memory': '256Mi'}, 'requests': {'cpu': '50m', 'memory': '128Mi'}}, 'runtime': 'kserve-predictiveserver', ...}}}} [e2e-predictor] post_params = [], files = {}, response_type = 'object' [e2e-predictor] auth_settings = ['BearerToken'], _return_http_data_only = True [e2e-predictor] collection_formats = {}, _preload_content = True, _request_timeout = None [e2e-predictor] _host = None [e2e-predictor] [e2e-predictor] def __call_api( [e2e-predictor] self, resource_path, method, path_params=None, [e2e-predictor] query_params=None, header_params=None, body=None, post_params=None, [e2e-predictor] files=None, response_type=None, auth_settings=None, [e2e-predictor] _return_http_data_only=None, collection_formats=None, 
[e2e-predictor] _preload_content=True, _request_timeout=None, _host=None): [e2e-predictor] [e2e-predictor] config = self.configuration [e2e-predictor] [e2e-predictor] # header parameters [e2e-predictor] header_params = header_params or {} [e2e-predictor] header_params.update(self.default_headers) [e2e-predictor] if self.cookie: [e2e-predictor] header_params['Cookie'] = self.cookie [e2e-predictor] if header_params: [e2e-predictor] header_params = self.sanitize_for_serialization(header_params) [e2e-predictor] header_params = dict(self.parameters_to_tuples(header_params, [e2e-predictor] collection_formats)) [e2e-predictor] [e2e-predictor] # path parameters [e2e-predictor] if path_params: [e2e-predictor] path_params = self.sanitize_for_serialization(path_params) [e2e-predictor] path_params = self.parameters_to_tuples(path_params, [e2e-predictor] collection_formats) [e2e-predictor] for k, v in path_params: [e2e-predictor] # specified safe chars, encode everything [e2e-predictor] resource_path = resource_path.replace( [e2e-predictor] '{%s}' % k, [e2e-predictor] quote(str(v), safe=config.safe_chars_for_path_param) [e2e-predictor] ) [e2e-predictor] [e2e-predictor] # query parameters [e2e-predictor] if query_params: [e2e-predictor] query_params = self.sanitize_for_serialization(query_params) [e2e-predictor] query_params = self.parameters_to_tuples(query_params, [e2e-predictor] collection_formats) [e2e-predictor] [e2e-predictor] # post parameters [e2e-predictor] if post_params or files: [e2e-predictor] post_params = post_params if post_params else [] [e2e-predictor] post_params = self.sanitize_for_serialization(post_params) [e2e-predictor] post_params = self.parameters_to_tuples(post_params, [e2e-predictor] collection_formats) [e2e-predictor] post_params.extend(self.files_parameters(files)) [e2e-predictor] [e2e-predictor] # auth setting [e2e-predictor] self.update_params_for_auth(header_params, query_params, auth_settings) [e2e-predictor] [e2e-predictor] # body 
[e2e-predictor] if body: [e2e-predictor] body = self.sanitize_for_serialization(body) [e2e-predictor] [e2e-predictor] # request url [e2e-predictor] if _host is None: [e2e-predictor] url = self.configuration.host + resource_path [e2e-predictor] else: [e2e-predictor] # use server/host defined in path or operation instead [e2e-predictor] url = _host + resource_path [e2e-predictor] [e2e-predictor] # perform request and return response [e2e-predictor] > response_data = self.request( [e2e-predictor] method, url, query_params=query_params, headers=header_params, [e2e-predictor] post_params=post_params, body=body, [e2e-predictor] _preload_content=_preload_content, [e2e-predictor] _request_timeout=_request_timeout) [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api_client.py:180: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] method = 'POST' [e2e-predictor] url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices' [e2e-predictor] query_params = [] [e2e-predictor] headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'} [e2e-predictor] post_params = [] [e2e-predictor] body = {'apiVersion': 'serving.kserve.io/v1beta1', 'kind': 'InferenceService', 'metadata': {'labels': {'networking.kserve.io/...00m', 'memory': '256Mi'}, 'requests': {'cpu': '50m', 'memory': '128Mi'}}, 'runtime': 'kserve-predictiveserver', ...}}}} [e2e-predictor] _preload_content = True, _request_timeout = None [e2e-predictor] [e2e-predictor] def request(self, method, url, query_params=None, headers=None, [e2e-predictor] post_params=None, body=None, _preload_content=True, [e2e-predictor] _request_timeout=None): [e2e-predictor] """Makes the HTTP request using 
RESTClient.""" [e2e-predictor] if method == "GET": [e2e-predictor] return self.rest_client.GET(url, [e2e-predictor] query_params=query_params, [e2e-predictor] _preload_content=_preload_content, [e2e-predictor] _request_timeout=_request_timeout, [e2e-predictor] headers=headers) [e2e-predictor] elif method == "HEAD": [e2e-predictor] return self.rest_client.HEAD(url, [e2e-predictor] query_params=query_params, [e2e-predictor] _preload_content=_preload_content, [e2e-predictor] _request_timeout=_request_timeout, [e2e-predictor] headers=headers) [e2e-predictor] elif method == "OPTIONS": [e2e-predictor] return self.rest_client.OPTIONS(url, [e2e-predictor] query_params=query_params, [e2e-predictor] headers=headers, [e2e-predictor] _preload_content=_preload_content, [e2e-predictor] _request_timeout=_request_timeout) [e2e-predictor] elif method == "POST": [e2e-predictor] > return self.rest_client.POST(url, [e2e-predictor] query_params=query_params, [e2e-predictor] headers=headers, [e2e-predictor] post_params=post_params, [e2e-predictor] _preload_content=_preload_content, [e2e-predictor] _request_timeout=_request_timeout, [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api_client.py:391: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices' [e2e-predictor] headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'} [e2e-predictor] query_params = [], post_params = [] [e2e-predictor] body = {'apiVersion': 'serving.kserve.io/v1beta1', 'kind': 'InferenceService', 'metadata': {'labels': {'networking.kserve.io/...00m', 'memory': '256Mi'}, 'requests': {'cpu': '50m', 'memory': '128Mi'}}, 
'runtime': 'kserve-predictiveserver', ...}}}} [e2e-predictor] _preload_content = True, _request_timeout = None [e2e-predictor] [e2e-predictor] def POST(self, url, headers=None, query_params=None, post_params=None, [e2e-predictor] body=None, _preload_content=True, _request_timeout=None): [e2e-predictor] > return self.request("POST", url, [e2e-predictor] headers=headers, [e2e-predictor] query_params=query_params, [e2e-predictor] post_params=post_params, [e2e-predictor] _preload_content=_preload_content, [e2e-predictor] _request_timeout=_request_timeout, [e2e-predictor] body=body) [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/rest.py:279: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] method = 'POST' [e2e-predictor] url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices' [e2e-predictor] query_params = [] [e2e-predictor] headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'} [e2e-predictor] body = {'apiVersion': 'serving.kserve.io/v1beta1', 'kind': 'InferenceService', 'metadata': {'labels': {'networking.kserve.io/...00m', 'memory': '256Mi'}, 'requests': {'cpu': '50m', 'memory': '128Mi'}}, 'runtime': 'kserve-predictiveserver', ...}}}} [e2e-predictor] post_params = {}, _preload_content = True, _request_timeout = None [e2e-predictor] [e2e-predictor] def request(self, method, url, query_params=None, headers=None, [e2e-predictor] body=None, post_params=None, _preload_content=True, [e2e-predictor] _request_timeout=None): [e2e-predictor] """Perform requests. 
[e2e-predictor] [e2e-predictor] :param method: http request method [e2e-predictor] :param url: http request url [e2e-predictor] :param query_params: query parameters in the url [e2e-predictor] :param headers: http request headers [e2e-predictor] :param body: request json body, for `application/json` [e2e-predictor] :param post_params: request post parameters, [e2e-predictor] `application/x-www-form-urlencoded` [e2e-predictor] and `multipart/form-data` [e2e-predictor] :param _preload_content: if False, the urllib3.HTTPResponse object will [e2e-predictor] be returned without reading/decoding response [e2e-predictor] data. Default is True. [e2e-predictor] :param _request_timeout: timeout setting for this request. If one [e2e-predictor] number provided, it will be total request [e2e-predictor] timeout. It can also be a pair (tuple) of [e2e-predictor] (connection, read) timeouts. [e2e-predictor] """ [e2e-predictor] method = method.upper() [e2e-predictor] assert method in ['GET', 'HEAD', 'DELETE', 'POST', 'PUT', [e2e-predictor] 'PATCH', 'OPTIONS'] [e2e-predictor] [e2e-predictor] if post_params and body: [e2e-predictor] raise ApiValueError( [e2e-predictor] "body parameter cannot be used with post_params parameter." 
[e2e-predictor] ) [e2e-predictor] [e2e-predictor] post_params = post_params or {} [e2e-predictor] headers = headers or {} [e2e-predictor] [e2e-predictor] timeout = None [e2e-predictor] if _request_timeout: [e2e-predictor] if isinstance(_request_timeout, (int, ) if six.PY3 else (int, long)): # noqa: E501,F821 [e2e-predictor] timeout = urllib3.Timeout(total=_request_timeout) [e2e-predictor] elif (isinstance(_request_timeout, tuple) and [e2e-predictor] len(_request_timeout) == 2): [e2e-predictor] timeout = urllib3.Timeout( [e2e-predictor] connect=_request_timeout[0], read=_request_timeout[1]) [e2e-predictor] [e2e-predictor] if 'Content-Type' not in headers: [e2e-predictor] headers['Content-Type'] = 'application/json' [e2e-predictor] [e2e-predictor] try: [e2e-predictor] # For `POST`, `PUT`, `PATCH`, `OPTIONS`, `DELETE` [e2e-predictor] if method in ['POST', 'PUT', 'PATCH', 'OPTIONS', 'DELETE']: [e2e-predictor] if query_params: [e2e-predictor] url += '?' + urlencode(query_params) [e2e-predictor] if (re.search('json', headers['Content-Type'], re.IGNORECASE) or [e2e-predictor] headers['Content-Type'] == 'application/apply-patch+yaml'): [e2e-predictor] if headers['Content-Type'] == 'application/json-patch+json': [e2e-predictor] if not isinstance(body, list): [e2e-predictor] headers['Content-Type'] = \ [e2e-predictor] 'application/strategic-merge-patch+json' [e2e-predictor] request_body = None [e2e-predictor] if body is not None: [e2e-predictor] request_body = json.dumps(body) [e2e-predictor] > r = self.pool_manager.request( [e2e-predictor] method, url, [e2e-predictor] body=request_body, [e2e-predictor] preload_content=_preload_content, [e2e-predictor] timeout=timeout, [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/rest.py:172: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] method = 'POST' [e2e-predictor] url = 
'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices' [e2e-predictor] body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io...ry": "128Mi"}}, "runtime": "kserve-predictiveserver", "storageUri": "gs://kfserving-examples/models/lightgbm/iris"}}}}' [e2e-predictor] fields = None [e2e-predictor] headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'} [e2e-predictor] json = None [e2e-predictor] urlopen_kw = {'body': '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking....veserver", "storageUri": "gs://kfserving-examples/models/lightgbm/iris"}}}}', 'preload_content': True, 'timeout': None} [e2e-predictor] [e2e-predictor] def request( [e2e-predictor] self, [e2e-predictor] method: str, [e2e-predictor] url: str, [e2e-predictor] body: _TYPE_BODY | None = None, [e2e-predictor] fields: _TYPE_FIELDS | None = None, [e2e-predictor] headers: typing.Mapping[str, str] | None = None, [e2e-predictor] json: typing.Any | None = None, [e2e-predictor] **urlopen_kw: typing.Any, [e2e-predictor] ) -> BaseHTTPResponse: [e2e-predictor] """ [e2e-predictor] Make a request using :meth:`urlopen` with the appropriate encoding of [e2e-predictor] ``fields`` based on the ``method`` used. [e2e-predictor] [e2e-predictor] This is a convenience method that requires the least amount of manual [e2e-predictor] effort. It can be used in most situations, while still having the [e2e-predictor] option to drop down to more specific methods when necessary, such as [e2e-predictor] :meth:`request_encode_url`, :meth:`request_encode_body`, [e2e-predictor] or even the lowest level :meth:`urlopen`. [e2e-predictor] [e2e-predictor] :param method: [e2e-predictor] HTTP request method (such as GET, POST, PUT, etc.) 
[e2e-predictor] [e2e-predictor] :param url: [e2e-predictor] The URL to perform the request on. [e2e-predictor] [e2e-predictor] :param body: [e2e-predictor] Data to send in the request body, either :class:`str`, :class:`bytes`, [e2e-predictor] an iterable of :class:`str`/:class:`bytes`, or a file-like object. [e2e-predictor] [e2e-predictor] :param fields: [e2e-predictor] Data to encode and send in the URL or request body, depending on ``method``. [e2e-predictor] [e2e-predictor] :param headers: [e2e-predictor] Dictionary of custom headers to send, such as User-Agent, [e2e-predictor] If-None-Match, etc. If None, pool headers are used. If provided, [e2e-predictor] these headers completely replace any pool-specific headers. [e2e-predictor] [e2e-predictor] :param json: [e2e-predictor] Data to encode and send as JSON with UTF-encoded in the request body. [e2e-predictor] The ``"Content-Type"`` header will be set to ``"application/json"`` [e2e-predictor] unless specified otherwise. [e2e-predictor] """ [e2e-predictor] method = method.upper() [e2e-predictor] [e2e-predictor] if json is not None and body is not None: [e2e-predictor] raise TypeError( [e2e-predictor] "request got values for both 'body' and 'json' parameters which are mutually exclusive" [e2e-predictor] ) [e2e-predictor] [e2e-predictor] if json is not None: [e2e-predictor] if headers is None: [e2e-predictor] headers = self.headers [e2e-predictor] [e2e-predictor] if not ("content-type" in map(str.lower, headers.keys())): [e2e-predictor] headers = HTTPHeaderDict(headers) [e2e-predictor] headers["Content-Type"] = "application/json" [e2e-predictor] [e2e-predictor] body = _json.dumps(json, separators=(",", ":"), ensure_ascii=False).encode( [e2e-predictor] "utf-8" [e2e-predictor] ) [e2e-predictor] [e2e-predictor] if body is not None: [e2e-predictor] urlopen_kw["body"] = body [e2e-predictor] [e2e-predictor] if method in self._encode_url_methods: [e2e-predictor] return self.request_encode_url( [e2e-predictor] method, 
[e2e-predictor] url, [e2e-predictor] fields=fields, # type: ignore[arg-type] [e2e-predictor] headers=headers, [e2e-predictor] **urlopen_kw, [e2e-predictor] ) [e2e-predictor] else: [e2e-predictor] > return self.request_encode_body( [e2e-predictor] method, url, fields=fields, headers=headers, **urlopen_kw [e2e-predictor] ) [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/_request_methods.py:143: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] method = 'POST' [e2e-predictor] url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices' [e2e-predictor] fields = None [e2e-predictor] headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'} [e2e-predictor] encode_multipart = True, multipart_boundary = None [e2e-predictor] urlopen_kw = {'body': '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking....veserver", "storageUri": "gs://kfserving-examples/models/lightgbm/iris"}}}}', 'preload_content': True, 'timeout': None} [e2e-predictor] extra_kw = {'body': '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking....nt': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'}), 'preload_content': True, 'timeout': None} [e2e-predictor] [e2e-predictor] def request_encode_body( [e2e-predictor] self, [e2e-predictor] method: str, [e2e-predictor] url: str, [e2e-predictor] fields: _TYPE_FIELDS | None = None, [e2e-predictor] headers: typing.Mapping[str, str] | None = None, [e2e-predictor] encode_multipart: bool = True, [e2e-predictor] multipart_boundary: str | None = None, [e2e-predictor] **urlopen_kw: str, [e2e-predictor] ) -> 
BaseHTTPResponse: [e2e-predictor] """ [e2e-predictor] Make a request using :meth:`urlopen` with the ``fields`` encoded in [e2e-predictor] the body. This is useful for request methods like POST, PUT, PATCH, etc. [e2e-predictor] [e2e-predictor] When ``encode_multipart=True`` (default), then [e2e-predictor] :func:`urllib3.encode_multipart_formdata` is used to encode [e2e-predictor] the payload with the appropriate content type. Otherwise [e2e-predictor] :func:`urllib.parse.urlencode` is used with the [e2e-predictor] 'application/x-www-form-urlencoded' content type. [e2e-predictor] [e2e-predictor] Multipart encoding must be used when posting files, and it's reasonably [e2e-predictor] safe to use it in other times too. However, it may break request [e2e-predictor] signing, such as with OAuth. [e2e-predictor] [e2e-predictor] Supports an optional ``fields`` parameter of key/value strings AND [e2e-predictor] key/filetuple. A filetuple is a (filename, data, MIME type) tuple where [e2e-predictor] the MIME type is optional. For example:: [e2e-predictor] [e2e-predictor] fields = { [e2e-predictor] 'foo': 'bar', [e2e-predictor] 'fakefile': ('foofile.txt', 'contents of foofile'), [e2e-predictor] 'realfile': ('barfile.txt', open('realfile').read()), [e2e-predictor] 'typedfile': ('bazfile.bin', open('bazfile').read(), [e2e-predictor] 'image/jpeg'), [e2e-predictor] 'nonamefile': 'contents of nonamefile field', [e2e-predictor] } [e2e-predictor] [e2e-predictor] When uploading a file, providing a filename (the first parameter of the [e2e-predictor] tuple) is optional but recommended to best mimic behavior of browsers. [e2e-predictor] [e2e-predictor] Note that if ``headers`` are supplied, the 'Content-Type' header will [e2e-predictor] be overwritten because it depends on the dynamic random boundary string [e2e-predictor] which is used to compose the body of the request. The random boundary [e2e-predictor] string can be explicitly set with the ``multipart_boundary`` parameter. 
[e2e-predictor] [e2e-predictor] :param method: [e2e-predictor] HTTP request method (such as GET, POST, PUT, etc.) [e2e-predictor] [e2e-predictor] :param url: [e2e-predictor] The URL to perform the request on. [e2e-predictor] [e2e-predictor] :param fields: [e2e-predictor] Data to encode and send in the request body. [e2e-predictor] [e2e-predictor] :param headers: [e2e-predictor] Dictionary of custom headers to send, such as User-Agent, [e2e-predictor] If-None-Match, etc. If None, pool headers are used. If provided, [e2e-predictor] these headers completely replace any pool-specific headers. [e2e-predictor] [e2e-predictor] :param encode_multipart: [e2e-predictor] If True, encode the ``fields`` using the multipart/form-data MIME [e2e-predictor] format. [e2e-predictor] [e2e-predictor] :param multipart_boundary: [e2e-predictor] If not specified, then a random boundary will be generated using [e2e-predictor] :func:`urllib3.filepost.choose_boundary`. [e2e-predictor] """ [e2e-predictor] if headers is None: [e2e-predictor] headers = self.headers [e2e-predictor] [e2e-predictor] extra_kw: dict[str, typing.Any] = {"headers": HTTPHeaderDict(headers)} [e2e-predictor] body: bytes | str [e2e-predictor] [e2e-predictor] if fields: [e2e-predictor] if "body" in urlopen_kw: [e2e-predictor] raise TypeError( [e2e-predictor] "request got values for both 'fields' and 'body', can only specify one." 
[e2e-predictor] ) [e2e-predictor] [e2e-predictor] if encode_multipart: [e2e-predictor] body, content_type = encode_multipart_formdata( [e2e-predictor] fields, boundary=multipart_boundary [e2e-predictor] ) [e2e-predictor] else: [e2e-predictor] body, content_type = ( [e2e-predictor] urlencode(fields), # type: ignore[arg-type] [e2e-predictor] "application/x-www-form-urlencoded", [e2e-predictor] ) [e2e-predictor] [e2e-predictor] extra_kw["body"] = body [e2e-predictor] extra_kw["headers"].setdefault("Content-Type", content_type) [e2e-predictor] [e2e-predictor] extra_kw.update(urlopen_kw) [e2e-predictor] [e2e-predictor] > return self.urlopen(method, url, **extra_kw) [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/_request_methods.py:278: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] method = 'POST' [e2e-predictor] url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices' [e2e-predictor] redirect = True [e2e-predictor] kw = {'assert_same_host': False, 'body': '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata...', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'}), 'preload_content': True, ...} [e2e-predictor] u = Url(scheme='https', auth=None, host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', p...443, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices', query=None, fragment=None) [e2e-predictor] conn = [e2e-predictor] [e2e-predictor] def urlopen( # type: ignore[override] [e2e-predictor] self, method: str, url: str, redirect: bool = True, **kw: typing.Any [e2e-predictor] ) -> BaseHTTPResponse: [e2e-predictor] """ [e2e-predictor] Same as 
:meth:`urllib3.HTTPConnectionPool.urlopen` [e2e-predictor] with custom cross-host redirect logic and only sends the request-uri [e2e-predictor] portion of the ``url``. [e2e-predictor] [e2e-predictor] The given ``url`` parameter must be absolute, such that an appropriate [e2e-predictor] :class:`urllib3.connectionpool.ConnectionPool` can be chosen for it. [e2e-predictor] """ [e2e-predictor] u = parse_url(url) [e2e-predictor] [e2e-predictor] if u.scheme is None: [e2e-predictor] warnings.warn( [e2e-predictor] "URLs without a scheme (ie 'https://') are deprecated and will raise an error " [e2e-predictor] "in a future version of urllib3. To avoid this DeprecationWarning ensure all URLs " [e2e-predictor] "start with 'https://' or 'http://'. Read more in this issue: " [e2e-predictor] "https://github.com/urllib3/urllib3/issues/2920", [e2e-predictor] category=DeprecationWarning, [e2e-predictor] stacklevel=2, [e2e-predictor] ) [e2e-predictor] [e2e-predictor] conn = self.connection_from_host(u.host, port=u.port, scheme=u.scheme) [e2e-predictor] [e2e-predictor] kw["assert_same_host"] = False [e2e-predictor] kw["redirect"] = False [e2e-predictor] [e2e-predictor] if "headers" not in kw: [e2e-predictor] kw["headers"] = self.headers [e2e-predictor] [e2e-predictor] if self._proxy_requires_url_absolute_form(u): [e2e-predictor] response = conn.urlopen(method, url, **kw) [e2e-predictor] else: [e2e-predictor] > response = conn.urlopen(method, u.request_uri, **kw) [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/poolmanager.py:457: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] method = 'POST' [e2e-predictor] url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices' [e2e-predictor] body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": 
{"networking.kserve.io...ry": "128Mi"}}, "runtime": "kserve-predictiveserver", "storageUri": "gs://kfserving-examples/models/lightgbm/iris"}}}}' [e2e-predictor] headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'}) [e2e-predictor] retries = Retry(total=2, connect=None, read=None, redirect=None, status=None) [e2e-predictor] redirect = False, assert_same_host = False, timeout = None, pool_timeout = None [e2e-predictor] release_conn = True, chunked = False, body_pos = None, preload_content = True [e2e-predictor] decode_content = True, response_kw = {} [e2e-predictor] parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices', query=None, fragment=None) [e2e-predictor] destination_scheme = None, conn = None, release_this_conn = True [e2e-predictor] http_tunnel_required = False [e2e-predictor] err = NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.c...a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)") [e2e-predictor] clean_exit = False [e2e-predictor] [e2e-predictor] def urlopen( # type: ignore[override] [e2e-predictor] self, [e2e-predictor] method: str, [e2e-predictor] url: str, [e2e-predictor] body: _TYPE_BODY | None = None, [e2e-predictor] headers: typing.Mapping[str, str] | None = None, [e2e-predictor] retries: Retry | bool | int | None = None, [e2e-predictor] redirect: bool = True, [e2e-predictor] assert_same_host: bool = True, [e2e-predictor] timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, [e2e-predictor] pool_timeout: int | None = None, [e2e-predictor] release_conn: bool | None = None, [e2e-predictor] chunked: bool = False, [e2e-predictor] body_pos: _TYPE_BODY_POSITION | None = None, [e2e-predictor] preload_content: bool = True, [e2e-predictor] 
        decode_content: bool = True,
        **response_kw: typing.Any,
    ) -> BaseHTTPResponse:
        """
        Get a connection from the pool and perform an HTTP request. This is the
        lowest level call for making a request, so you'll need to specify all
        the raw details.

        .. note::

           More commonly, it's appropriate to use a convenience method
           such as :meth:`request`.

        .. note::

           `release_conn` will only behave as expected if
           `preload_content=False` because we want to make
           `preload_content=False` the default behaviour someday soon without
           breaking backwards compatibility.

        :param method:
            HTTP request method (such as GET, POST, PUT, etc.)

        :param url:
            The URL to perform the request on.

        :param body:
            Data to send in the request body, either :class:`str`, :class:`bytes`,
            an iterable of :class:`str`/:class:`bytes`, or a file-like object.

        :param headers:
            Dictionary of custom headers to send, such as User-Agent,
            If-None-Match, etc. If None, pool headers are used. If provided,
            these headers completely replace any pool-specific headers.

        :param retries:
            Configure the number of retries to allow before raising a
            :class:`~urllib3.exceptions.MaxRetryError` exception.

            If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a
            :class:`~urllib3.util.retry.Retry` object for fine-grained control
            over different types of retries.
            Pass an integer number to retry connection errors that many times,
            but no other types of errors. Pass zero to never retry.

            If ``False``, then retries are disabled and any exception is raised
            immediately. Also, instead of raising a MaxRetryError on redirects,
            the redirect response will be returned.

        :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.

        :param redirect:
            If True, automatically handle redirects (status codes 301, 302,
            303, 307, 308). Each redirect counts as a retry. Disabling retries
            will disable redirect, too.

        :param assert_same_host:
            If ``True``, will make sure that the host of the pool requests is
            consistent else will raise HostChangedError. When ``False``, you can
            use the pool on an HTTP proxy and request foreign hosts.

        :param timeout:
            If specified, overrides the default timeout for this one
            request. It may be a float (in seconds) or an instance of
            :class:`urllib3.util.Timeout`.

        :param pool_timeout:
            If set and the pool is set to block=True, then this method will
            block for ``pool_timeout`` seconds and raise EmptyPoolError if no
            connection is available within the time period.

        :param bool preload_content:
            If True, the response's body will be preloaded into memory.

        :param bool decode_content:
            If True, will attempt to decode the body based on the
            'content-encoding' header.

        :param release_conn:
            If False, then the urlopen call will not release the connection
            back into the pool once a response is received (but will release if
            you read the entire contents of the response such as when
            `preload_content=True`). This is useful if you're not preloading
            the response's content immediately. You will need to call
            ``r.release_conn()`` on the response ``r`` to return the connection
            back into the pool. If None, it takes the value of ``preload_content``
            which defaults to ``True``.

        :param bool chunked:
            If True, urllib3 will send the body using chunked transfer
            encoding. Otherwise, urllib3 will send the body using the standard
            content-length form. Defaults to False.

        :param int body_pos:
            Position to seek to in file-like body in the event of a retry or
            redirect. Typically this won't need to be set because urllib3 will
            auto-populate the value when needed.
        """
        parsed_url = parse_url(url)
        destination_scheme = parsed_url.scheme

        if headers is None:
            headers = self.headers

        if not isinstance(retries, Retry):
            retries = Retry.from_int(retries, redirect=redirect, default=self.retries)

        if release_conn is None:
            release_conn = preload_content

        # Check host
        if assert_same_host and not self.is_same_host(url):
            raise HostChangedError(self, url, retries)

        # Ensure that the URL we're connecting to is properly encoded
        if url.startswith("/"):
            url = to_str(_encode_target(url))
        else:
            url = to_str(parsed_url.url)

        conn = None

        # Track whether `conn` needs to be released before
        # returning/raising/recursing. Update this variable if necessary, and
        # leave `release_conn` constant throughout the function. That way, if
        # the function recurses, the original value of `release_conn` will be
        # passed down into the recursive call, and its value will be respected.
        #
        # See issue #651 [1] for details.
        #
        # [1]
        release_this_conn = release_conn

        http_tunnel_required = connection_requires_http_tunnel(
            self.proxy, self.proxy_config, destination_scheme
        )

        # Merge the proxy headers. Only done when not using HTTP CONNECT. We
        # have to copy the headers dict so we can safely change it without those
        # changes being reflected in anyone else's copy.
        if not http_tunnel_required:
            headers = headers.copy()  # type: ignore[attr-defined]
            headers.update(self.proxy_headers)  # type: ignore[union-attr]

        # Must keep the exception bound to a separate variable or else Python 3
        # complains about UnboundLocalError.
        err = None

        # Keep track of whether we cleanly exited the except block. This
        # ensures we do proper cleanup in finally.
        clean_exit = False

        # Rewind body position, if needed. Record current position
        # for future rewinds in the event of a redirect/retry.
        body_pos = set_file_position(body, body_pos)

        try:
            # Request a connection from the queue.
            timeout_obj = self._get_timeout(timeout)
            conn = self._get_conn(timeout=pool_timeout)

            conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]

            # Is this a closed/new connection that requires CONNECT tunnelling?
            if self.proxy is not None and http_tunnel_required and conn.is_closed:
                try:
                    self._prepare_proxy(conn)
                except (BaseSSLError, OSError, SocketTimeout) as e:
                    self._raise_timeout(
                        err=e, url=self.proxy.url, timeout_value=conn.timeout
                    )
                    raise

            # If we're going to release the connection in ``finally:``, then
            # the response doesn't need to know about the connection. Otherwise
            # it will also try to release it and we'll have a double-release
            # mess.
            response_conn = conn if not release_conn else None

            # Make the request on the HTTPConnection object
            response = self._make_request(
                conn,
                method,
                url,
                timeout=timeout_obj,
                body=body,
                headers=headers,
                chunked=chunked,
                retries=retries,
                response_conn=response_conn,
                preload_content=preload_content,
                decode_content=decode_content,
                **response_kw,
            )

            # Everything went great!
            clean_exit = True

        except EmptyPoolError:
            # Didn't get a connection from the pool, no need to clean up
            clean_exit = True
            release_this_conn = False
            raise

        except (
            TimeoutError,
            HTTPException,
            OSError,
            ProtocolError,
            BaseSSLError,
            SSLError,
            CertificateError,
            ProxyError,
        ) as e:
            # Discard the connection for these exceptions. It will be
            # replaced during the next _get_conn() call.
            clean_exit = False
            new_e: Exception = e
            if isinstance(e, (BaseSSLError, CertificateError)):
                new_e = SSLError(e)
            if isinstance(
                new_e,
                (
                    OSError,
                    NewConnectionError,
                    TimeoutError,
                    SSLError,
                    HTTPException,
                ),
            ) and (conn and conn.proxy and not conn.has_connected_to_proxy):
                new_e = _wrap_proxy_error(new_e, conn.proxy.scheme)
            elif isinstance(new_e, (OSError, HTTPException)):
                new_e = ProtocolError("Connection aborted.", new_e)

            retries = retries.increment(
                method, url, error=new_e, _pool=self, _stacktrace=sys.exc_info()[2]
            )
            retries.sleep()

            # Keep track of the error for the retry warning.
            err = e

        finally:
            if not clean_exit:
                # We hit some kind of exception, handled or otherwise. We need
                # to throw the connection away unless explicitly told not to.
                # Close the connection, set the variable to None, and make sure
                # we put the None back in the pool to avoid leaking it.
                if conn:
                    conn.close()
                    conn = None
                release_this_conn = True

            if release_this_conn:
                # Put the connection back to be reused. If the connection is
                # expired then it will be None, which will get replaced with a
                # fresh connection during _get_conn.
                self._put_conn(conn)

        if not conn:
            # Try again
            log.warning(
                "Retrying (%r) after connection broken by '%r': %s", retries, err, url
            )
>           return self.urlopen(
                method,
                url,
                body,
                headers,
                retries,
                redirect,
                assert_same_host,
                timeout=timeout,
                pool_timeout=pool_timeout,
                release_conn=release_conn,
                chunked=chunked,
                body_pos=body_pos,
                preload_content=preload_content,
                decode_content=decode_content,
                **response_kw,
            )

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:871:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
method = 'POST'
url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io...ry": "128Mi"}}, "runtime": "kserve-predictiveserver", "storageUri": "gs://kfserving-examples/models/lightgbm/iris"}}}}'
headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'})
retries = Retry(total=1, connect=None, read=None, redirect=None, status=None)
redirect = False, assert_same_host = False, timeout = None, pool_timeout = None
release_conn = True, chunked = False, body_pos = None, preload_content = True
decode_content = True, response_kw = {}
parsed_url = Url(scheme=None,
auth=None, host=None, port=None, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices', query=None, fragment=None)
destination_scheme = None, conn = None, release_this_conn = True
http_tunnel_required = False
err = NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.c...a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")
clean_exit = False

        if not conn:
            # Try again
            log.warning(
                "Retrying (%r) after connection broken by '%r': %s", retries, err, url
            )
>           return self.urlopen(
                method,
                url,
                body,
                headers,
                retries,
                redirect,
                assert_same_host,
                timeout=timeout,
                pool_timeout=pool_timeout,
                release_conn=release_conn,
                chunked=chunked,
                body_pos=body_pos,
                preload_content=preload_content,
                decode_content=decode_content,
                **response_kw,
            )

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:871:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
method = 'POST'
url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io...ry": "128Mi"}}, "runtime": "kserve-predictiveserver", "storageUri": "gs://kfserving-examples/models/lightgbm/iris"}}}}'
headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'})
retries = Retry(total=0, connect=None, read=None, redirect=None, status=None)
redirect = False, assert_same_host = False, timeout = None, pool_timeout = None
release_conn = True, chunked = False, body_pos = None, preload_content = True
decode_content = True, response_kw = {}
parsed_url = Url(scheme=None,
auth=None, host=None, port=None, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices', query=None, fragment=None)
destination_scheme = None, conn = None, release_this_conn = True
http_tunnel_required = False
err = NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.c...a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")
clean_exit = False
[e2e-predictor] # [e2e-predictor] # See issue #651 [1] for details. [e2e-predictor] # [e2e-predictor] # [1] <https://github.com/urllib3/urllib3/issues/651> [e2e-predictor] release_this_conn = release_conn [e2e-predictor] [e2e-predictor] http_tunnel_required = connection_requires_http_tunnel( [e2e-predictor] self.proxy, self.proxy_config, destination_scheme [e2e-predictor] ) [e2e-predictor] [e2e-predictor] # Merge the proxy headers. Only done when not using HTTP CONNECT. We [e2e-predictor] # have to copy the headers dict so we can safely change it without those [e2e-predictor] # changes being reflected in anyone else's copy. [e2e-predictor] if not http_tunnel_required: [e2e-predictor] headers = headers.copy() # type: ignore[attr-defined] [e2e-predictor] headers.update(self.proxy_headers) # type: ignore[union-attr] [e2e-predictor] [e2e-predictor] # Must keep the exception bound to a separate variable or else Python 3 [e2e-predictor] # complains about UnboundLocalError. [e2e-predictor] err = None [e2e-predictor] [e2e-predictor] # Keep track of whether we cleanly exited the except block. This [e2e-predictor] # ensures we do proper cleanup in finally. [e2e-predictor] clean_exit = False [e2e-predictor] [e2e-predictor] # Rewind body position, if needed. Record current position [e2e-predictor] # for future rewinds in the event of a redirect/retry. [e2e-predictor] body_pos = set_file_position(body, body_pos) [e2e-predictor] [e2e-predictor] try: [e2e-predictor] # Request a connection from the queue. [e2e-predictor] timeout_obj = self._get_timeout(timeout) [e2e-predictor] conn = self._get_conn(timeout=pool_timeout) [e2e-predictor] [e2e-predictor] conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] [e2e-predictor] [e2e-predictor] # Is this a closed/new connection that requires CONNECT tunnelling?
[e2e-predictor] if self.proxy is not None and http_tunnel_required and conn.is_closed: [e2e-predictor] try: [e2e-predictor] self._prepare_proxy(conn) [e2e-predictor] except (BaseSSLError, OSError, SocketTimeout) as e: [e2e-predictor] self._raise_timeout( [e2e-predictor] err=e, url=self.proxy.url, timeout_value=conn.timeout [e2e-predictor] ) [e2e-predictor] raise [e2e-predictor] [e2e-predictor] # If we're going to release the connection in ``finally:``, then [e2e-predictor] # the response doesn't need to know about the connection. Otherwise [e2e-predictor] # it will also try to release it and we'll have a double-release [e2e-predictor] # mess. [e2e-predictor] response_conn = conn if not release_conn else None [e2e-predictor] [e2e-predictor] # Make the request on the HTTPConnection object [e2e-predictor] response = self._make_request( [e2e-predictor] conn, [e2e-predictor] method, [e2e-predictor] url, [e2e-predictor] timeout=timeout_obj, [e2e-predictor] body=body, [e2e-predictor] headers=headers, [e2e-predictor] chunked=chunked, [e2e-predictor] retries=retries, [e2e-predictor] response_conn=response_conn, [e2e-predictor] preload_content=preload_content, [e2e-predictor] decode_content=decode_content, [e2e-predictor] **response_kw, [e2e-predictor] ) [e2e-predictor] [e2e-predictor] # Everything went great! [e2e-predictor] clean_exit = True [e2e-predictor] [e2e-predictor] except EmptyPoolError: [e2e-predictor] # Didn't get a connection from the pool, no need to clean up [e2e-predictor] clean_exit = True [e2e-predictor] release_this_conn = False [e2e-predictor] raise [e2e-predictor] [e2e-predictor] except ( [e2e-predictor] TimeoutError, [e2e-predictor] HTTPException, [e2e-predictor] OSError, [e2e-predictor] ProtocolError, [e2e-predictor] BaseSSLError, [e2e-predictor] SSLError, [e2e-predictor] CertificateError, [e2e-predictor] ProxyError, [e2e-predictor] ) as e: [e2e-predictor] # Discard the connection for these exceptions. 
It will be [e2e-predictor] # replaced during the next _get_conn() call. [e2e-predictor] clean_exit = False [e2e-predictor] new_e: Exception = e [e2e-predictor] if isinstance(e, (BaseSSLError, CertificateError)): [e2e-predictor] new_e = SSLError(e) [e2e-predictor] if isinstance( [e2e-predictor] new_e, [e2e-predictor] ( [e2e-predictor] OSError, [e2e-predictor] NewConnectionError, [e2e-predictor] TimeoutError, [e2e-predictor] SSLError, [e2e-predictor] HTTPException, [e2e-predictor] ), [e2e-predictor] ) and (conn and conn.proxy and not conn.has_connected_to_proxy): [e2e-predictor] new_e = _wrap_proxy_error(new_e, conn.proxy.scheme) [e2e-predictor] elif isinstance(new_e, (OSError, HTTPException)): [e2e-predictor] new_e = ProtocolError("Connection aborted.", new_e) [e2e-predictor] [e2e-predictor] retries = retries.increment( [e2e-predictor] method, url, error=new_e, _pool=self, _stacktrace=sys.exc_info()[2] [e2e-predictor] ) [e2e-predictor] retries.sleep() [e2e-predictor] [e2e-predictor] # Keep track of the error for the retry warning. [e2e-predictor] err = e [e2e-predictor] [e2e-predictor] finally: [e2e-predictor] if not clean_exit: [e2e-predictor] # We hit some kind of exception, handled or otherwise. We need [e2e-predictor] # to throw the connection away unless explicitly told not to. [e2e-predictor] # Close the connection, set the variable to None, and make sure [e2e-predictor] # we put the None back in the pool to avoid leaking it. [e2e-predictor] if conn: [e2e-predictor] conn.close() [e2e-predictor] conn = None [e2e-predictor] release_this_conn = True [e2e-predictor] [e2e-predictor] if release_this_conn: [e2e-predictor] # Put the connection back to be reused. If the connection is [e2e-predictor] # expired then it will be None, which will get replaced with a [e2e-predictor] # fresh connection during _get_conn. 
[e2e-predictor] self._put_conn(conn) [e2e-predictor] [e2e-predictor] if not conn: [e2e-predictor] # Try again [e2e-predictor] log.warning( [e2e-predictor] "Retrying (%r) after connection broken by '%r': %s", retries, err, url [e2e-predictor] ) [e2e-predictor] > return self.urlopen( [e2e-predictor] method, [e2e-predictor] url, [e2e-predictor] body, [e2e-predictor] headers, [e2e-predictor] retries, [e2e-predictor] redirect, [e2e-predictor] assert_same_host, [e2e-predictor] timeout=timeout, [e2e-predictor] pool_timeout=pool_timeout, [e2e-predictor] release_conn=release_conn, [e2e-predictor] chunked=chunked, [e2e-predictor] body_pos=body_pos, [e2e-predictor] preload_content=preload_content, [e2e-predictor] decode_content=decode_content, [e2e-predictor] **response_kw, [e2e-predictor] ) [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:871: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] method = 'POST' [e2e-predictor] url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices' [e2e-predictor] body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io...ry": "128Mi"}}, "runtime": "kserve-predictiveserver", "storageUri": "gs://kfserving-examples/models/lightgbm/iris"}}}}' [e2e-predictor] headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'}) [e2e-predictor] retries = Retry(total=0, connect=None, read=None, redirect=None, status=None) [e2e-predictor] redirect = False, assert_same_host = False, timeout = None, pool_timeout = None [e2e-predictor] release_conn = True, chunked = False, body_pos = None, preload_content = True [e2e-predictor] decode_content = True, response_kw = {} [e2e-predictor] parsed_url = Url(scheme=None, 
auth=None, host=None, port=None, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices', query=None, fragment=None) [e2e-predictor] destination_scheme = None, conn = None, release_this_conn = True [e2e-predictor] http_tunnel_required = False, err = None, clean_exit = False [e2e-predictor] [e2e-predictor] def urlopen( # type: ignore[override] [e2e-predictor] self, [e2e-predictor] method: str, [e2e-predictor] url: str, [e2e-predictor] body: _TYPE_BODY | None = None, [e2e-predictor] headers: typing.Mapping[str, str] | None = None, [e2e-predictor] retries: Retry | bool | int | None = None, [e2e-predictor] redirect: bool = True, [e2e-predictor] assert_same_host: bool = True, [e2e-predictor] timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, [e2e-predictor] pool_timeout: int | None = None, [e2e-predictor] release_conn: bool | None = None, [e2e-predictor] chunked: bool = False, [e2e-predictor] body_pos: _TYPE_BODY_POSITION | None = None, [e2e-predictor] preload_content: bool = True, [e2e-predictor] decode_content: bool = True, [e2e-predictor] **response_kw: typing.Any, [e2e-predictor] ) -> BaseHTTPResponse: [e2e-predictor] """ [e2e-predictor] Get a connection from the pool and perform an HTTP request. This is the [e2e-predictor] lowest level call for making a request, so you'll need to specify all [e2e-predictor] the raw details. [e2e-predictor] [e2e-predictor] .. note:: [e2e-predictor] [e2e-predictor] More commonly, it's appropriate to use a convenience method [e2e-predictor] such as :meth:`request`. [e2e-predictor] [e2e-predictor] .. note:: [e2e-predictor] [e2e-predictor] `release_conn` will only behave as expected if [e2e-predictor] `preload_content=False` because we want to make [e2e-predictor] `preload_content=False` the default behaviour someday soon without [e2e-predictor] breaking backwards compatibility. [e2e-predictor] [e2e-predictor] :param method: [e2e-predictor] HTTP request method (such as GET, POST, PUT, etc.) 
[e2e-predictor] [e2e-predictor] :param url: [e2e-predictor] The URL to perform the request on. [e2e-predictor] [e2e-predictor] :param body: [e2e-predictor] Data to send in the request body, either :class:`str`, :class:`bytes`, [e2e-predictor] an iterable of :class:`str`/:class:`bytes`, or a file-like object. [e2e-predictor] [e2e-predictor] :param headers: [e2e-predictor] Dictionary of custom headers to send, such as User-Agent, [e2e-predictor] If-None-Match, etc. If None, pool headers are used. If provided, [e2e-predictor] these headers completely replace any pool-specific headers. [e2e-predictor] [e2e-predictor] :param retries: [e2e-predictor] Configure the number of retries to allow before raising a [e2e-predictor] :class:`~urllib3.exceptions.MaxRetryError` exception. [e2e-predictor] [e2e-predictor] If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a [e2e-predictor] :class:`~urllib3.util.retry.Retry` object for fine-grained control [e2e-predictor] over different types of retries. [e2e-predictor] Pass an integer number to retry connection errors that many times, [e2e-predictor] but no other types of errors. Pass zero to never retry. [e2e-predictor] [e2e-predictor] If ``False``, then retries are disabled and any exception is raised [e2e-predictor] immediately. Also, instead of raising a MaxRetryError on redirects, [e2e-predictor] the redirect response will be returned. [e2e-predictor] [e2e-predictor] :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. [e2e-predictor] [e2e-predictor] :param redirect: [e2e-predictor] If True, automatically handle redirects (status codes 301, 302, [e2e-predictor] 303, 307, 308). Each redirect counts as a retry. Disabling retries [e2e-predictor] will disable redirect, too. [e2e-predictor] [e2e-predictor] :param assert_same_host: [e2e-predictor] If ``True``, will make sure that the host of the pool requests is [e2e-predictor] consistent else will raise HostChangedError. 
When ``False``, you can [e2e-predictor] use the pool on an HTTP proxy and request foreign hosts. [e2e-predictor] [e2e-predictor] :param timeout: [e2e-predictor] If specified, overrides the default timeout for this one [e2e-predictor] request. It may be a float (in seconds) or an instance of [e2e-predictor] :class:`urllib3.util.Timeout`. [e2e-predictor] [e2e-predictor] :param pool_timeout: [e2e-predictor] If set and the pool is set to block=True, then this method will [e2e-predictor] block for ``pool_timeout`` seconds and raise EmptyPoolError if no [e2e-predictor] connection is available within the time period. [e2e-predictor] [e2e-predictor] :param bool preload_content: [e2e-predictor] If True, the response's body will be preloaded into memory. [e2e-predictor] [e2e-predictor] :param bool decode_content: [e2e-predictor] If True, will attempt to decode the body based on the [e2e-predictor] 'content-encoding' header. [e2e-predictor] [e2e-predictor] :param release_conn: [e2e-predictor] If False, then the urlopen call will not release the connection [e2e-predictor] back into the pool once a response is received (but will release if [e2e-predictor] you read the entire contents of the response such as when [e2e-predictor] `preload_content=True`). This is useful if you're not preloading [e2e-predictor] the response's content immediately. You will need to call [e2e-predictor] ``r.release_conn()`` on the response ``r`` to return the connection [e2e-predictor] back into the pool. If None, it takes the value of ``preload_content`` [e2e-predictor] which defaults to ``True``. [e2e-predictor] [e2e-predictor] :param bool chunked: [e2e-predictor] If True, urllib3 will send the body using chunked transfer [e2e-predictor] encoding. Otherwise, urllib3 will send the body using the standard [e2e-predictor] content-length form. Defaults to False. 
[e2e-predictor] [e2e-predictor] :param int body_pos: [e2e-predictor] Position to seek to in file-like body in the event of a retry or [e2e-predictor] redirect. Typically this won't need to be set because urllib3 will [e2e-predictor] auto-populate the value when needed. [e2e-predictor] """ [e2e-predictor] parsed_url = parse_url(url) [e2e-predictor] destination_scheme = parsed_url.scheme [e2e-predictor] [e2e-predictor] if headers is None: [e2e-predictor] headers = self.headers [e2e-predictor] [e2e-predictor] if not isinstance(retries, Retry): [e2e-predictor] retries = Retry.from_int(retries, redirect=redirect, default=self.retries) [e2e-predictor] [e2e-predictor] if release_conn is None: [e2e-predictor] release_conn = preload_content [e2e-predictor] [e2e-predictor] # Check host [e2e-predictor] if assert_same_host and not self.is_same_host(url): [e2e-predictor] raise HostChangedError(self, url, retries) [e2e-predictor] [e2e-predictor] # Ensure that the URL we're connecting to is properly encoded [e2e-predictor] if url.startswith("/"): [e2e-predictor] url = to_str(_encode_target(url)) [e2e-predictor] else: [e2e-predictor] url = to_str(parsed_url.url) [e2e-predictor] [e2e-predictor] conn = None [e2e-predictor] [e2e-predictor] # Track whether `conn` needs to be released before [e2e-predictor] # returning/raising/recursing. Update this variable if necessary, and [e2e-predictor] # leave `release_conn` constant throughout the function. That way, if [e2e-predictor] # the function recurses, the original value of `release_conn` will be [e2e-predictor] # passed down into the recursive call, and its value will be respected. [e2e-predictor] # [e2e-predictor] # See issue #651 [1] for details. 
[e2e-predictor] # [e2e-predictor] # [1] <https://github.com/urllib3/urllib3/issues/651> [e2e-predictor] release_this_conn = release_conn [e2e-predictor] [e2e-predictor] http_tunnel_required = connection_requires_http_tunnel( [e2e-predictor] self.proxy, self.proxy_config, destination_scheme [e2e-predictor] ) [e2e-predictor] [e2e-predictor] # Merge the proxy headers. Only done when not using HTTP CONNECT. We [e2e-predictor] # have to copy the headers dict so we can safely change it without those [e2e-predictor] # changes being reflected in anyone else's copy. [e2e-predictor] if not http_tunnel_required: [e2e-predictor] headers = headers.copy() # type: ignore[attr-defined] [e2e-predictor] headers.update(self.proxy_headers) # type: ignore[union-attr] [e2e-predictor] [e2e-predictor] # Must keep the exception bound to a separate variable or else Python 3 [e2e-predictor] # complains about UnboundLocalError. [e2e-predictor] err = None [e2e-predictor] [e2e-predictor] # Keep track of whether we cleanly exited the except block. This [e2e-predictor] # ensures we do proper cleanup in finally. [e2e-predictor] clean_exit = False [e2e-predictor] [e2e-predictor] # Rewind body position, if needed. Record current position [e2e-predictor] # for future rewinds in the event of a redirect/retry. [e2e-predictor] body_pos = set_file_position(body, body_pos) [e2e-predictor] [e2e-predictor] try: [e2e-predictor] # Request a connection from the queue. [e2e-predictor] timeout_obj = self._get_timeout(timeout) [e2e-predictor] conn = self._get_conn(timeout=pool_timeout) [e2e-predictor] [e2e-predictor] conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] [e2e-predictor] [e2e-predictor] # Is this a closed/new connection that requires CONNECT tunnelling?
[e2e-predictor] if self.proxy is not None and http_tunnel_required and conn.is_closed: [e2e-predictor] try: [e2e-predictor] self._prepare_proxy(conn) [e2e-predictor] except (BaseSSLError, OSError, SocketTimeout) as e: [e2e-predictor] self._raise_timeout( [e2e-predictor] err=e, url=self.proxy.url, timeout_value=conn.timeout [e2e-predictor] ) [e2e-predictor] raise [e2e-predictor] [e2e-predictor] # If we're going to release the connection in ``finally:``, then [e2e-predictor] # the response doesn't need to know about the connection. Otherwise [e2e-predictor] # it will also try to release it and we'll have a double-release [e2e-predictor] # mess. [e2e-predictor] response_conn = conn if not release_conn else None [e2e-predictor] [e2e-predictor] # Make the request on the HTTPConnection object [e2e-predictor] response = self._make_request( [e2e-predictor] conn, [e2e-predictor] method, [e2e-predictor] url, [e2e-predictor] timeout=timeout_obj, [e2e-predictor] body=body, [e2e-predictor] headers=headers, [e2e-predictor] chunked=chunked, [e2e-predictor] retries=retries, [e2e-predictor] response_conn=response_conn, [e2e-predictor] preload_content=preload_content, [e2e-predictor] decode_content=decode_content, [e2e-predictor] **response_kw, [e2e-predictor] ) [e2e-predictor] [e2e-predictor] # Everything went great! [e2e-predictor] clean_exit = True [e2e-predictor] [e2e-predictor] except EmptyPoolError: [e2e-predictor] # Didn't get a connection from the pool, no need to clean up [e2e-predictor] clean_exit = True [e2e-predictor] release_this_conn = False [e2e-predictor] raise [e2e-predictor] [e2e-predictor] except ( [e2e-predictor] TimeoutError, [e2e-predictor] HTTPException, [e2e-predictor] OSError, [e2e-predictor] ProtocolError, [e2e-predictor] BaseSSLError, [e2e-predictor] SSLError, [e2e-predictor] CertificateError, [e2e-predictor] ProxyError, [e2e-predictor] ) as e: [e2e-predictor] # Discard the connection for these exceptions. 
It will be [e2e-predictor] # replaced during the next _get_conn() call. [e2e-predictor] clean_exit = False [e2e-predictor] new_e: Exception = e [e2e-predictor] if isinstance(e, (BaseSSLError, CertificateError)): [e2e-predictor] new_e = SSLError(e) [e2e-predictor] if isinstance( [e2e-predictor] new_e, [e2e-predictor] ( [e2e-predictor] OSError, [e2e-predictor] NewConnectionError, [e2e-predictor] TimeoutError, [e2e-predictor] SSLError, [e2e-predictor] HTTPException, [e2e-predictor] ), [e2e-predictor] ) and (conn and conn.proxy and not conn.has_connected_to_proxy): [e2e-predictor] new_e = _wrap_proxy_error(new_e, conn.proxy.scheme) [e2e-predictor] elif isinstance(new_e, (OSError, HTTPException)): [e2e-predictor] new_e = ProtocolError("Connection aborted.", new_e) [e2e-predictor] [e2e-predictor] > retries = retries.increment( [e2e-predictor] method, url, error=new_e, _pool=self, _stacktrace=sys.exc_info()[2] [e2e-predictor] ) [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:841: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = Retry(total=0, connect=None, read=None, redirect=None, status=None) [e2e-predictor] method = 'POST' [e2e-predictor] url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices' [e2e-predictor] response = None [e2e-predictor] error = NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.c...a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)") [e2e-predictor] _pool = [e2e-predictor] _stacktrace = [e2e-predictor] [e2e-predictor] def increment( [e2e-predictor] self, [e2e-predictor] method: str | None = None, [e2e-predictor] url: str | None = None, [e2e-predictor] response: BaseHTTPResponse | None = None, [e2e-predictor] error: Exception | None = 
None, [e2e-predictor] _pool: ConnectionPool | None = None, [e2e-predictor] _stacktrace: TracebackType | None = None, [e2e-predictor] ) -> Self: [e2e-predictor] """Return a new Retry object with incremented retry counters. [e2e-predictor] [e2e-predictor] :param response: A response object, or None, if the server did not [e2e-predictor] return a response. [e2e-predictor] :type response: :class:`~urllib3.response.BaseHTTPResponse` [e2e-predictor] :param Exception error: An error encountered during the request, or [e2e-predictor] None if the response was received successfully. [e2e-predictor] [e2e-predictor] :return: A new ``Retry`` object. [e2e-predictor] """ [e2e-predictor] if self.total is False and error: [e2e-predictor] # Disabled, indicate to re-raise the error. [e2e-predictor] raise reraise(type(error), error, _stacktrace) [e2e-predictor] [e2e-predictor] total = self.total [e2e-predictor] if total is not None: [e2e-predictor] total -= 1 [e2e-predictor] [e2e-predictor] connect = self.connect [e2e-predictor] read = self.read [e2e-predictor] redirect = self.redirect [e2e-predictor] status_count = self.status [e2e-predictor] other = self.other [e2e-predictor] cause = "unknown" [e2e-predictor] status = None [e2e-predictor] redirect_location = None [e2e-predictor] [e2e-predictor] if error and self._is_connection_error(error): [e2e-predictor] # Connect retry? [e2e-predictor] if connect is False: [e2e-predictor] raise reraise(type(error), error, _stacktrace) [e2e-predictor] elif connect is not None: [e2e-predictor] connect -= 1 [e2e-predictor] [e2e-predictor] elif error and self._is_read_error(error): [e2e-predictor] # Read retry? [e2e-predictor] if read is False or method is None or not self._is_method_retryable(method): [e2e-predictor] raise reraise(type(error), error, _stacktrace) [e2e-predictor] elif read is not None: [e2e-predictor] read -= 1 [e2e-predictor] [e2e-predictor] elif error: [e2e-predictor] # Other retry? 
[e2e-predictor] if other is not None: [e2e-predictor] other -= 1 [e2e-predictor] [e2e-predictor] elif response and response.get_redirect_location(): [e2e-predictor] # Redirect retry? [e2e-predictor] if redirect is not None: [e2e-predictor] redirect -= 1 [e2e-predictor] cause = "too many redirects" [e2e-predictor] response_redirect_location = response.get_redirect_location() [e2e-predictor] if response_redirect_location: [e2e-predictor] redirect_location = response_redirect_location [e2e-predictor] status = response.status [e2e-predictor] [e2e-predictor] else: [e2e-predictor] # Incrementing because of a server error like a 500 in [e2e-predictor] # status_forcelist and the given method is in the allowed_methods [e2e-predictor] cause = ResponseError.GENERIC_ERROR [e2e-predictor] if response and response.status: [e2e-predictor] if status_count is not None: [e2e-predictor] status_count -= 1 [e2e-predictor] cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) [e2e-predictor] status = response.status [e2e-predictor] [e2e-predictor] history = self.history + ( [e2e-predictor] RequestHistory(method, url, error, status, redirect_location), [e2e-predictor] ) [e2e-predictor] [e2e-predictor] new_retry = self.new( [e2e-predictor] total=total, [e2e-predictor] connect=connect, [e2e-predictor] read=read, [e2e-predictor] redirect=redirect, [e2e-predictor] status=status_count, [e2e-predictor] other=other, [e2e-predictor] history=history, [e2e-predictor] ) [e2e-predictor] [e2e-predictor] if new_retry.is_exhausted(): [e2e-predictor] reason = error or ResponseError(cause) [e2e-predictor] > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] [e2e-predictor] E urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Max retries exceeded with url: /apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices (Caused by 
NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")) [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError [e2e-predictor] ------------------------------ Captured log call ------------------------------- [e2e-predictor] WARNING urllib3.connectionpool:connectionpool.py:868 Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")': /apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices [e2e-predictor] WARNING urllib3.connectionpool:connectionpool.py:868 Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")': /apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices [e2e-predictor] WARNING urllib3.connectionpool:connectionpool.py:868 Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or 
service not known)")': /apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices [e2e-predictor] __________________________ test_predictive_sklearn_v2 __________________________ [e2e-predictor] [gw0] linux -- Python 3.11.13 /workspace/source/python/kserve/.venv/bin/python [e2e-predictor] [e2e-predictor] self = [e2e-predictor] [e2e-predictor] def _new_conn(self) -> socket.socket: [e2e-predictor] """Establish a socket connection and set nodelay settings on it. [e2e-predictor] [e2e-predictor] :return: New socket connection. [e2e-predictor] """ [e2e-predictor] try: [e2e-predictor] > sock = connection.create_connection( [e2e-predictor] (self._dns_host, self.port), [e2e-predictor] self.timeout, [e2e-predictor] source_address=self.source_address, [e2e-predictor] socket_options=self.socket_options, [e2e-predictor] ) [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connection.py:204: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] address = ('a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', 6443) [e2e-predictor] timeout = None, source_address = None, socket_options = [(6, 1, 1)] [e2e-predictor] [e2e-predictor] def create_connection( [e2e-predictor] address: tuple[str, int], [e2e-predictor] timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, [e2e-predictor] source_address: tuple[str, int] | None = None, [e2e-predictor] socket_options: _TYPE_SOCKET_OPTIONS | None = None, [e2e-predictor] ) -> socket.socket: [e2e-predictor] """Connect to *address* and return the socket object. [e2e-predictor] [e2e-predictor] Convenience function. Connect to *address* (a 2-tuple ``(host, [e2e-predictor] port)``) and return the socket object. Passing the optional [e2e-predictor] *timeout* parameter will set the timeout on the socket instance [e2e-predictor] before attempting to connect. 
        If no *timeout* is supplied, the
        global default timeout setting returned by :func:`socket.getdefaulttimeout`
        is used. If *source_address* is set it must be a tuple of (host, port)
        for the socket to bind as a source address before making the connection.
        An host of '' or port 0 tells the OS to use the default.
        """

        host, port = address
        if host.startswith("["):
            host = host.strip("[]")
        err = None

        # Using the value from allowed_gai_family() in the context of getaddrinfo lets
        # us select whether to work with IPv4 DNS records, IPv6 records, or both.
        # The original create_connection function always returns all records.
        family = allowed_gai_family()

        try:
            host.encode("idna")
        except UnicodeError:
            raise LocationParseError(f"'{host}', label empty or too long") from None

>       for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/util/connection.py:60:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

host = 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com'
port = 6443, family = 
type = , proto = 0, flags = 0

    def getaddrinfo(host, port, family=0, type=0, proto=0, flags=0):
        """Resolve host and port into list of address info entries.

        Translate the host/port argument into a sequence of 5-tuples that contain
        all the necessary arguments for creating a socket connected to that service.

        host is a domain name, a string representation of an IPv4/v6 address or
        None. port is a string service name such as 'http', a numeric port number or
        None. By passing None as the value of host and port, you can pass NULL to
        the underlying C API.

        The family, type and proto arguments can be optionally specified in order to
        narrow the list of addresses returned. Passing zero as a value for each of
        these arguments selects the full range of results.
        """
        # We override this function since we want to translate the numeric family
        # and socket type values to enum constants.
        addrlist = []
>       for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
E       socket.gaierror: [Errno -2] Name or service not known

/usr/lib64/python3.11/socket.py:974: gaierror

The above exception was the direct cause of the following exception:

self = 
method = 'POST'
url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io...
"128Mi"}}, "runtime": "kserve-predictiveserver", "storageUri": "gs://kfserving-examples/models/sklearn/1.0/model"}}}}'
headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'})
retries = Retry(total=0, connect=None, read=None, redirect=None, status=None)
redirect = False, assert_same_host = False, timeout = None, pool_timeout = None
release_conn = True, chunked = False, body_pos = None, preload_content = True
decode_content = True, response_kw = {}
parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices', query=None, fragment=None)
destination_scheme = None, conn = None, release_this_conn = True
http_tunnel_required = False, err = None, clean_exit = False

    def urlopen(  # type: ignore[override]
        self,
        method: str,
        url: str,
        body: _TYPE_BODY | None = None,
        headers: typing.Mapping[str, str] | None = None,
        retries: Retry | bool | int | None = None,
        redirect: bool = True,
        assert_same_host: bool = True,
        timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
        pool_timeout: int | None = None,
        release_conn: bool | None = None,
        chunked: bool = False,
        body_pos: _TYPE_BODY_POSITION | None = None,
        preload_content: bool = True,
        decode_content: bool = True,
        **response_kw: typing.Any,
    ) -> BaseHTTPResponse:
        """
        Get a connection from the pool and perform an HTTP request. This is the
        lowest level call for making a request, so you'll need to specify all
        the raw details.

        .. note::

           More commonly, it's appropriate to use a convenience method
           such as :meth:`request`.

        .. note::

           `release_conn` will only behave as expected if
           `preload_content=False` because we want to make
           `preload_content=False` the default behaviour someday soon without
           breaking backwards compatibility.

        :param method:
            HTTP request method (such as GET, POST, PUT, etc.)

        :param url:
            The URL to perform the request on.

        :param body:
            Data to send in the request body, either :class:`str`, :class:`bytes`,
            an iterable of :class:`str`/:class:`bytes`, or a file-like object.

        :param headers:
            Dictionary of custom headers to send, such as User-Agent,
            If-None-Match, etc. If None, pool headers are used. If provided,
            these headers completely replace any pool-specific headers.

        :param retries:
            Configure the number of retries to allow before raising a
            :class:`~urllib3.exceptions.MaxRetryError` exception.

            If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a
            :class:`~urllib3.util.retry.Retry` object for fine-grained control
            over different types of retries.
            Pass an integer number to retry connection errors that many times,
            but no other types of errors. Pass zero to never retry.
            If ``False``, then retries are disabled and any exception is raised
            immediately. Also, instead of raising a MaxRetryError on redirects,
            the redirect response will be returned.

        :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.

        :param redirect:
            If True, automatically handle redirects (status codes 301, 302,
            303, 307, 308). Each redirect counts as a retry. Disabling retries
            will disable redirect, too.

        :param assert_same_host:
            If ``True``, will make sure that the host of the pool requests is
            consistent else will raise HostChangedError. When ``False``, you can
            use the pool on an HTTP proxy and request foreign hosts.

        :param timeout:
            If specified, overrides the default timeout for this one
            request. It may be a float (in seconds) or an instance of
            :class:`urllib3.util.Timeout`.

        :param pool_timeout:
            If set and the pool is set to block=True, then this method will
            block for ``pool_timeout`` seconds and raise EmptyPoolError if no
            connection is available within the time period.

        :param bool preload_content:
            If True, the response's body will be preloaded into memory.

        :param bool decode_content:
            If True, will attempt to decode the body based on the
            'content-encoding' header.

        :param release_conn:
            If False, then the urlopen call will not release the connection
            back into the pool once a response is received (but will release if
            you read the entire contents of the response such as when
            `preload_content=True`). This is useful if you're not preloading
            the response's content immediately. You will need to call
            ``r.release_conn()`` on the response ``r`` to return the connection
            back into the pool. If None, it takes the value of ``preload_content``
            which defaults to ``True``.

        :param bool chunked:
            If True, urllib3 will send the body using chunked transfer
            encoding. Otherwise, urllib3 will send the body using the standard
            content-length form. Defaults to False.

        :param int body_pos:
            Position to seek to in file-like body in the event of a retry or
            redirect. Typically this won't need to be set because urllib3 will
            auto-populate the value when needed.
        """
        parsed_url = parse_url(url)
        destination_scheme = parsed_url.scheme

        if headers is None:
            headers = self.headers

        if not isinstance(retries, Retry):
            retries = Retry.from_int(retries, redirect=redirect, default=self.retries)

        if release_conn is None:
            release_conn = preload_content

        # Check host
        if assert_same_host and not self.is_same_host(url):
            raise HostChangedError(self, url, retries)

        # Ensure that the URL we're connecting to is properly encoded
        if url.startswith("/"):
            url = to_str(_encode_target(url))
        else:
            url = to_str(parsed_url.url)

        conn = None

        # Track whether `conn` needs to be released before
        # returning/raising/recursing. Update this variable if necessary, and
        # leave `release_conn` constant throughout the function. That way, if
        # the function recurses, the original value of `release_conn` will be
        # passed down into the recursive call, and its value will be respected.
        #
        # See issue #651 [1] for details.
        #
        # [1]
        release_this_conn = release_conn

        http_tunnel_required = connection_requires_http_tunnel(
            self.proxy, self.proxy_config, destination_scheme
        )

        # Merge the proxy headers. Only done when not using HTTP CONNECT. We
        # have to copy the headers dict so we can safely change it without those
        # changes being reflected in anyone else's copy.
        if not http_tunnel_required:
            headers = headers.copy()  # type: ignore[attr-defined]
            headers.update(self.proxy_headers)  # type: ignore[union-attr]

        # Must keep the exception bound to a separate variable or else Python 3
        # complains about UnboundLocalError.
        err = None

        # Keep track of whether we cleanly exited the except block. This
        # ensures we do proper cleanup in finally.
        clean_exit = False

        # Rewind body position, if needed. Record current position
        # for future rewinds in the event of a redirect/retry.
        body_pos = set_file_position(body, body_pos)

        try:
            # Request a connection from the queue.
            timeout_obj = self._get_timeout(timeout)
            conn = self._get_conn(timeout=pool_timeout)

            conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]

            # Is this a closed/new connection that requires CONNECT tunnelling?
            if self.proxy is not None and http_tunnel_required and conn.is_closed:
                try:
                    self._prepare_proxy(conn)
                except (BaseSSLError, OSError, SocketTimeout) as e:
                    self._raise_timeout(
                        err=e, url=self.proxy.url, timeout_value=conn.timeout
                    )
                    raise

            # If we're going to release the connection in ``finally:``, then
            # the response doesn't need to know about the connection. Otherwise
            # it will also try to release it and we'll have a double-release
            # mess.
            response_conn = conn if not release_conn else None

            # Make the request on the HTTPConnection object
>           response = self._make_request(
                conn,
                method,
                url,
                timeout=timeout_obj,
                body=body,
                headers=headers,
                chunked=chunked,
                retries=retries,
                response_conn=response_conn,
                preload_content=preload_content,
                decode_content=decode_content,
                **response_kw,
            )

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:787:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = 
conn = 
method = 'POST'
url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io...
"128Mi"}}, "runtime": "kserve-predictiveserver", "storageUri": "gs://kfserving-examples/models/sklearn/1.0/model"}}}}'
headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'})
retries = Retry(total=0, connect=None, read=None, redirect=None, status=None)
timeout = Timeout(connect=None, read=None, total=None), chunked = False
response_conn = None, preload_content = True, decode_content = True
enforce_content_length = True

    def _make_request(
        self,
        conn: BaseHTTPConnection,
        method: str,
        url: str,
        body: _TYPE_BODY | None = None,
        headers: typing.Mapping[str, str] | None = None,
        retries: Retry | None = None,
        timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
        chunked: bool = False,
        response_conn: BaseHTTPConnection | None = None,
        preload_content: bool = True,
        decode_content: bool = True,
        enforce_content_length: bool = True,
    ) -> BaseHTTPResponse:
        """
        Perform a request on a given urllib connection object taken from our
        pool.

        :param conn:
            a connection from one of our connection pools

        :param method:
            HTTP request method (such as GET, POST, PUT, etc.)

        :param url:
            The URL to perform the request on.

        :param body:
            Data to send in the request body, either :class:`str`, :class:`bytes`,
            an iterable of :class:`str`/:class:`bytes`, or a file-like object.

        :param headers:
            Dictionary of custom headers to send, such as User-Agent,
            If-None-Match, etc. If None, pool headers are used. If provided,
            these headers completely replace any pool-specific headers.

        :param retries:
            Configure the number of retries to allow before raising a
            :class:`~urllib3.exceptions.MaxRetryError` exception.

            Pass ``None`` to retry until you receive a response. Pass a
            :class:`~urllib3.util.retry.Retry` object for fine-grained control
            over different types of retries.
            Pass an integer number to retry connection errors that many times,
            but no other types of errors. Pass zero to never retry.

            If ``False``, then retries are disabled and any exception is raised
            immediately. Also, instead of raising a MaxRetryError on redirects,
            the redirect response will be returned.

        :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.

        :param timeout:
            If specified, overrides the default timeout for this one
            request. It may be a float (in seconds) or an instance of
            :class:`urllib3.util.Timeout`.

        :param chunked:
            If True, urllib3 will send the body using chunked transfer
            encoding. Otherwise, urllib3 will send the body using the standard
            content-length form. Defaults to False.

        :param response_conn:
            Set this to ``None`` if you will handle releasing the connection or
            set the connection to have the response release it.

        :param preload_content:
            If True, the response's body will be preloaded during construction.

        :param decode_content:
            If True, will attempt to decode the body based on the
            'content-encoding' header.

        :param enforce_content_length:
            Enforce content length checking. Body returned by server must match
            value of Content-Length header, if present. Otherwise, raise error.
        """
        self.num_requests += 1

        timeout_obj = self._get_timeout(timeout)
        timeout_obj.start_connect()
        conn.timeout = Timeout.resolve_default_timeout(timeout_obj.connect_timeout)

        try:
            # Trigger any extra validation we need to do.
            try:
                self._validate_conn(conn)
            except (SocketTimeout, BaseSSLError) as e:
                self._raise_timeout(err=e, url=url, timeout_value=conn.timeout)
                raise

            # _validate_conn() starts the connection to an HTTPS proxy
            # so we need to wrap errors with 'ProxyError' here too.
        except (
            OSError,
            NewConnectionError,
            TimeoutError,
            BaseSSLError,
            CertificateError,
            SSLError,
        ) as e:
            new_e: Exception = e
            if isinstance(e, (BaseSSLError, CertificateError)):
                new_e = SSLError(e)
            # If the connection didn't successfully connect to it's proxy
            # then there
            if isinstance(
                new_e, (OSError, NewConnectionError, TimeoutError, SSLError)
            ) and (conn and conn.proxy and not conn.has_connected_to_proxy):
                new_e = _wrap_proxy_error(new_e, conn.proxy.scheme)
>           raise new_e

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:488:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = 
conn = 
method = 'POST'
url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io...
"128Mi"}}, "runtime": "kserve-predictiveserver", "storageUri": "gs://kfserving-examples/models/sklearn/1.0/model"}}}}'
headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'})
retries = Retry(total=0, connect=None, read=None, redirect=None, status=None)
timeout = Timeout(connect=None, read=None, total=None), chunked = False
response_conn = None, preload_content = True, decode_content = True
enforce_content_length = True

    def _make_request(
        self,
        conn: BaseHTTPConnection,
        method: str,
        url: str,
        body: _TYPE_BODY | None = None,
        headers: typing.Mapping[str, str] | None = None,
        retries: Retry | None = None,
        timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
        chunked: bool = False,
        response_conn: BaseHTTPConnection | None = None,
        preload_content: bool = True,
        decode_content: bool = True,
        enforce_content_length: bool = True,
    ) -> BaseHTTPResponse:
        """
        Perform a request on a given urllib connection object taken from our
        pool.
        ...
        """
        self.num_requests += 1

        timeout_obj = self._get_timeout(timeout)
        timeout_obj.start_connect()
        conn.timeout = Timeout.resolve_default_timeout(timeout_obj.connect_timeout)

        try:
            # Trigger any extra validation we need to do.
            try:
>               self._validate_conn(conn)

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:464:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = 
conn = 

    def _validate_conn(self, conn: BaseHTTPConnection) -> None:
        """
        Called right before a request is made, after the socket is created.
        """
        super()._validate_conn(conn)

        # Force connect early to allow us to validate the connection.
        if conn.is_closed:
>           conn.connect()

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:1093:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = 

    def connect(self) -> None:
        # Today we don't need to be doing this step before the /actual/ socket
        # connection, however in the future we'll need to decide whether to
        # create a new socket or re-use an existing "shared" socket as a part
        # of the HTTP/2 handshake dance.
        if self._tunnel_host is not None and self._tunnel_port is not None:
            probe_http2_host = self._tunnel_host
            probe_http2_port = self._tunnel_port
        else:
            probe_http2_host = self.host
            probe_http2_port = self.port

        # Check if the target origin supports HTTP/2.
        # If the value comes back as 'None' it means that the current thread
        # is probing for HTTP/2 support. Otherwise, we're waiting for another
        # probe to complete, or we get a value right away.
        target_supports_http2: bool | None
        if "h2" in ssl_.ALPN_PROTOCOLS:
            target_supports_http2 = http2_probe.acquire_and_get(
                host=probe_http2_host, port=probe_http2_port
            )
        else:
            # If HTTP/2 isn't going to be offered it doesn't matter if
            # the target supports HTTP/2. Don't want to make a probe.
            target_supports_http2 = False

        if self._connect_callback is not None:
            self._connect_callback(
                "before connect",
                thread_id=threading.get_ident(),
                target_supports_http2=target_supports_http2,
            )

        try:
            sock: socket.socket | ssl.SSLSocket
>           self.sock = sock = self._new_conn()

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connection.py:759:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = 

    def _new_conn(self) -> socket.socket:
        """Establish a socket connection and set nodelay settings on it.

        :return: New socket connection.
        """
        try:
            sock = connection.create_connection(
                (self._dns_host, self.port),
                self.timeout,
                source_address=self.source_address,
                socket_options=self.socket_options,
            )
        except socket.gaierror as e:
>           raise NameResolutionError(self.host, self, e) from e
E           urllib3.exceptions.NameResolutionError: HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connection.py:211: NameResolutionError

The above exception was the direct cause of the following exception:

rest_v2_client = 

    @pytest.mark.predictor
    @pytest.mark.asyncio(scope="session")
    async def test_predictive_sklearn_v2(rest_v2_client):
        service_name = "isvc-predictive-sklearn-v2"
        protocol_version = "v2"
        predictor = V1beta1PredictorSpec(
            min_replicas=1,
            model=V1beta1ModelSpec(
                model_format=V1beta1ModelFormat(name="sklearn"),
                runtime="kserve-predictiveserver",
                protocol_version=protocol_version,
                storage_uri="gs://kfserving-examples/models/sklearn/1.0/model",
                resources=V1ResourceRequirements(
                    requests={"cpu": "50m", "memory": "128Mi"},
                    limits={"cpu": "100m", "memory": "512Mi"},
                ),
                readiness_probe=client.V1Probe(
                    http_get=client.V1HTTPGetAction(
                        path=f"/v2/models/{service_name}/ready", port=8080
                    ),
                    initial_delay_seconds=30,
                ),
            ),
        )

        isvc = V1beta1InferenceService(
            api_version=constants.KSERVE_V1BETA1,
            kind=constants.KSERVE_KIND_INFERENCESERVICE,
            metadata=client.V1ObjectMeta(
                name=service_name,
                namespace=KSERVE_TEST_NAMESPACE,
                labels={
                    constants.KSERVE_LABEL_NETWORKING_VISIBILITY: constants.KSERVE_LABEL_NETWORKING_VISIBILITY_EXPOSED,
                },
            ),
            spec=V1beta1InferenceServiceSpec(predictor=predictor),
        )

        kserve_client = KServeClient(
            config_file=os.environ.get("KUBECONFIG", "~/.kube/config")
        )
>       kserve_client.create(isvc)

predictor/test_predictive.py:197:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = 
inferenceservice = {'api_version': 'serving.kserve.io/v1beta1',
 'kind': 'InferenceService',
 'metadata': {'annotations': None,
 ...
 'worker_spec': None,
 'xgboost': None},
 'transformer': None},
 'status': None}
namespace = 'kserve-ci-e2e-test', watch = False, timeout_seconds = 600

    def create(
        self, inferenceservice, namespace=None, watch=False, timeout_seconds=600
    ):  # pylint:disable=inconsistent-return-statements
        """
        Create the inference service
        :param inferenceservice: inference service object
        :param namespace: defaults to current or default namespace
        :param watch: True to watch the created service until timeout elapsed or status is ready
        :param timeout_seconds: timeout seconds for watch, default to 600s
        :return: created inference service
        """

        version = inferenceservice.api_version.split("/")[1]

        if namespace is None:
            namespace = utils.get_isvc_namespace(inferenceservice)

        try:
>           outputs = self.api_instance.create_namespaced_custom_object(
                constants.KSERVE_GROUP,
                version,
                namespace,
                constants.KSERVE_PLURAL_INFERENCESERVICE,
                inferenceservice,
            )

../../python/kserve/kserve/api/kserve_client.py:145:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = 
group = 'serving.kserve.io', version = 'v1beta1'
namespace = 'kserve-ci-e2e-test', plural = 'inferenceservices'
body = {'api_version': 'serving.kserve.io/v1beta1',
 'kind': 'InferenceService',
 'metadata': {'annotations': None,
 ...
 'worker_spec': None,
 'xgboost': None},
 'transformer': None},
 'status': None}
kwargs = {'_return_http_data_only': True}

    def create_namespaced_custom_object(self, group, version, namespace, plural, body, **kwargs):  # noqa: E501
        """create_namespaced_custom_object  # noqa: E501

        Creates a namespace scoped Custom object  # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.create_namespaced_custom_object(group, version, namespace, plural, body, async_req=True)
        >>> result = thread.get()

        :param async_req bool: execute request asynchronously
        :param str group: The custom resource's group name (required)
        :param str version: The custom resource's version (required)
        :param str namespace: The custom resource's namespace (required)
        :param str plural: The custom resource's plural name. For TPRs this would be lowercase plural kind. (required)
        :param object body: The JSON schema of the Resource to create. (required)
        :param str pretty: If 'true', then the output is pretty printed.
        :param str dry_run: When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request.
Valid values are: - All: all dry run stages will be processed [e2e-predictor] :param str field_manager: fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint. [e2e-predictor] :param str field_validation: fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. (optional) [e2e-predictor] :param _preload_content: if False, the urllib3.HTTPResponse object will [e2e-predictor] be returned without reading/decoding response [e2e-predictor] data. Default is True. [e2e-predictor] :param _request_timeout: timeout setting for this request. If one [e2e-predictor] number provided, it will be total request [e2e-predictor] timeout. It can also be a pair (tuple) of [e2e-predictor] (connection, read) timeouts. [e2e-predictor] :return: object [e2e-predictor] If the method is called asynchronously, [e2e-predictor] returns the request thread. 
[e2e-predictor] """ [e2e-predictor] kwargs['_return_http_data_only'] = True [e2e-predictor] > return self.create_namespaced_custom_object_with_http_info(group, version, namespace, plural, body, **kwargs) # noqa: E501 [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api/custom_objects_api.py:231: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] group = 'serving.kserve.io', version = 'v1beta1' [e2e-predictor] namespace = 'kserve-ci-e2e-test', plural = 'inferenceservices' [e2e-predictor] body = {'api_version': 'serving.kserve.io/v1beta1', [e2e-predictor] 'kind': 'InferenceService', [e2e-predictor] 'metadata': {'annotations': None, [e2e-predictor] ... 'worker_spec': None, [e2e-predictor] 'xgboost': None}, [e2e-predictor] 'transformer': None}, [e2e-predictor] 'status': None} [e2e-predictor] kwargs = {'_return_http_data_only': True} [e2e-predictor] local_var_params = {'_return_http_data_only': True, 'all_params': ['group', 'version', 'namespace', 'plural', 'body', 'pretty', ...], 'au... 'worker_spec': None, [e2e-predictor] 'xgboost': None}, [e2e-predictor] 'transformer': None}, [e2e-predictor] 'status': None}, ...} [e2e-predictor] all_params = ['group', 'version', 'namespace', 'plural', 'body', 'pretty', ...] 
[e2e-predictor] key = '_return_http_data_only', val = True, collection_formats = {} [e2e-predictor] path_params = {'group': 'serving.kserve.io', 'namespace': 'kserve-ci-e2e-test', 'plural': 'inferenceservices', 'version': 'v1beta1'} [e2e-predictor] query_params = [] [e2e-predictor] [e2e-predictor] def create_namespaced_custom_object_with_http_info(self, group, version, namespace, plural, body, **kwargs): # noqa: E501 [e2e-predictor] """create_namespaced_custom_object # noqa: E501 [e2e-predictor] [e2e-predictor] Creates a namespace scoped Custom object # noqa: E501 [e2e-predictor] This method makes a synchronous HTTP request by default. To make an [e2e-predictor] asynchronous HTTP request, please pass async_req=True [e2e-predictor] >>> thread = api.create_namespaced_custom_object_with_http_info(group, version, namespace, plural, body, async_req=True) [e2e-predictor] >>> result = thread.get() [e2e-predictor] [e2e-predictor] :param async_req bool: execute request asynchronously [e2e-predictor] :param str group: The custom resource's group name (required) [e2e-predictor] :param str version: The custom resource's version (required) [e2e-predictor] :param str namespace: The custom resource's namespace (required) [e2e-predictor] :param str plural: The custom resource's plural name. For TPRs this would be lowercase plural kind. (required) [e2e-predictor] :param object body: The JSON schema of the Resource to create. (required) [e2e-predictor] :param str pretty: If 'true', then the output is pretty printed. [e2e-predictor] :param str dry_run: When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed [e2e-predictor] :param str field_manager: fieldManager is a name associated with the actor or entity that is making these changes. 
The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint. [e2e-predictor] :param str field_validation: fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. (optional) [e2e-predictor] :param _return_http_data_only: response data without head status code [e2e-predictor] and headers [e2e-predictor] :param _preload_content: if False, the urllib3.HTTPResponse object will [e2e-predictor] be returned without reading/decoding response [e2e-predictor] data. Default is True. [e2e-predictor] :param _request_timeout: timeout setting for this request. If one [e2e-predictor] number provided, it will be total request [e2e-predictor] timeout. It can also be a pair (tuple) of [e2e-predictor] (connection, read) timeouts. [e2e-predictor] :return: tuple(object, status_code(int), headers(HTTPHeaderDict)) [e2e-predictor] If the method is called asynchronously, [e2e-predictor] returns the request thread. 
[e2e-predictor] """ [e2e-predictor] [e2e-predictor] local_var_params = locals() [e2e-predictor] [e2e-predictor] all_params = [ [e2e-predictor] 'group', [e2e-predictor] 'version', [e2e-predictor] 'namespace', [e2e-predictor] 'plural', [e2e-predictor] 'body', [e2e-predictor] 'pretty', [e2e-predictor] 'dry_run', [e2e-predictor] 'field_manager', [e2e-predictor] 'field_validation' [e2e-predictor] ] [e2e-predictor] all_params.extend( [e2e-predictor] [ [e2e-predictor] 'async_req', [e2e-predictor] '_return_http_data_only', [e2e-predictor] '_preload_content', [e2e-predictor] '_request_timeout' [e2e-predictor] ] [e2e-predictor] ) [e2e-predictor] [e2e-predictor] for key, val in six.iteritems(local_var_params['kwargs']): [e2e-predictor] if key not in all_params: [e2e-predictor] raise ApiTypeError( [e2e-predictor] "Got an unexpected keyword argument '%s'" [e2e-predictor] " to method create_namespaced_custom_object" % key [e2e-predictor] ) [e2e-predictor] local_var_params[key] = val [e2e-predictor] del local_var_params['kwargs'] [e2e-predictor] # verify the required parameter 'group' is set [e2e-predictor] if self.api_client.client_side_validation and ('group' not in local_var_params or # noqa: E501 [e2e-predictor] local_var_params['group'] is None): # noqa: E501 [e2e-predictor] raise ApiValueError("Missing the required parameter `group` when calling `create_namespaced_custom_object`") # noqa: E501 [e2e-predictor] # verify the required parameter 'version' is set [e2e-predictor] if self.api_client.client_side_validation and ('version' not in local_var_params or # noqa: E501 [e2e-predictor] local_var_params['version'] is None): # noqa: E501 [e2e-predictor] raise ApiValueError("Missing the required parameter `version` when calling `create_namespaced_custom_object`") # noqa: E501 [e2e-predictor] # verify the required parameter 'namespace' is set [e2e-predictor] if self.api_client.client_side_validation and ('namespace' not in local_var_params or # noqa: E501 [e2e-predictor] 
local_var_params['namespace'] is None): # noqa: E501 [e2e-predictor] raise ApiValueError("Missing the required parameter `namespace` when calling `create_namespaced_custom_object`") # noqa: E501 [e2e-predictor] # verify the required parameter 'plural' is set [e2e-predictor] if self.api_client.client_side_validation and ('plural' not in local_var_params or # noqa: E501 [e2e-predictor] local_var_params['plural'] is None): # noqa: E501 [e2e-predictor] raise ApiValueError("Missing the required parameter `plural` when calling `create_namespaced_custom_object`") # noqa: E501 [e2e-predictor] # verify the required parameter 'body' is set [e2e-predictor] if self.api_client.client_side_validation and ('body' not in local_var_params or # noqa: E501 [e2e-predictor] local_var_params['body'] is None): # noqa: E501 [e2e-predictor] raise ApiValueError("Missing the required parameter `body` when calling `create_namespaced_custom_object`") # noqa: E501 [e2e-predictor] [e2e-predictor] collection_formats = {} [e2e-predictor] [e2e-predictor] path_params = {} [e2e-predictor] if 'group' in local_var_params: [e2e-predictor] path_params['group'] = local_var_params['group'] # noqa: E501 [e2e-predictor] if 'version' in local_var_params: [e2e-predictor] path_params['version'] = local_var_params['version'] # noqa: E501 [e2e-predictor] if 'namespace' in local_var_params: [e2e-predictor] path_params['namespace'] = local_var_params['namespace'] # noqa: E501 [e2e-predictor] if 'plural' in local_var_params: [e2e-predictor] path_params['plural'] = local_var_params['plural'] # noqa: E501 [e2e-predictor] [e2e-predictor] query_params = [] [e2e-predictor] if 'pretty' in local_var_params and local_var_params['pretty'] is not None: # noqa: E501 [e2e-predictor] query_params.append(('pretty', local_var_params['pretty'])) # noqa: E501 [e2e-predictor] if 'dry_run' in local_var_params and local_var_params['dry_run'] is not None: # noqa: E501 [e2e-predictor] query_params.append(('dryRun', 
local_var_params['dry_run'])) # noqa: E501 [e2e-predictor] if 'field_manager' in local_var_params and local_var_params['field_manager'] is not None: # noqa: E501 [e2e-predictor] query_params.append(('fieldManager', local_var_params['field_manager'])) # noqa: E501 [e2e-predictor] if 'field_validation' in local_var_params and local_var_params['field_validation'] is not None: # noqa: E501 [e2e-predictor] query_params.append(('fieldValidation', local_var_params['field_validation'])) # noqa: E501 [e2e-predictor] [e2e-predictor] header_params = {} [e2e-predictor] [e2e-predictor] form_params = [] [e2e-predictor] local_var_files = {} [e2e-predictor] [e2e-predictor] body_params = None [e2e-predictor] if 'body' in local_var_params: [e2e-predictor] body_params = local_var_params['body'] [e2e-predictor] # HTTP header `Accept` [e2e-predictor] header_params['Accept'] = self.api_client.select_header_accept( [e2e-predictor] ['application/json']) # noqa: E501 [e2e-predictor] [e2e-predictor] # Authentication setting [e2e-predictor] auth_settings = ['BearerToken'] # noqa: E501 [e2e-predictor] [e2e-predictor] > return self.api_client.call_api( [e2e-predictor] '/apis/{group}/{version}/namespaces/{namespace}/{plural}', 'POST', [e2e-predictor] path_params, [e2e-predictor] query_params, [e2e-predictor] header_params, [e2e-predictor] body=body_params, [e2e-predictor] post_params=form_params, [e2e-predictor] files=local_var_files, [e2e-predictor] response_type='object', # noqa: E501 [e2e-predictor] auth_settings=auth_settings, [e2e-predictor] async_req=local_var_params.get('async_req'), [e2e-predictor] _return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501 [e2e-predictor] _preload_content=local_var_params.get('_preload_content', True), [e2e-predictor] _request_timeout=local_var_params.get('_request_timeout'), [e2e-predictor] collection_formats=collection_formats) [e2e-predictor] [e2e-predictor] 
../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api/custom_objects_api.py:354: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] resource_path = '/apis/{group}/{version}/namespaces/{namespace}/{plural}' [e2e-predictor] method = 'POST' [e2e-predictor] path_params = {'group': 'serving.kserve.io', 'namespace': 'kserve-ci-e2e-test', 'plural': 'inferenceservices', 'version': 'v1beta1'} [e2e-predictor] query_params = [] [e2e-predictor] header_params = {'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'} [e2e-predictor] body = {'api_version': 'serving.kserve.io/v1beta1', [e2e-predictor] 'kind': 'InferenceService', [e2e-predictor] 'metadata': {'annotations': None, [e2e-predictor] ... 'worker_spec': None, [e2e-predictor] 'xgboost': None}, [e2e-predictor] 'transformer': None}, [e2e-predictor] 'status': None} [e2e-predictor] post_params = [], files = {}, response_type = 'object' [e2e-predictor] auth_settings = ['BearerToken'], async_req = None, _return_http_data_only = True [e2e-predictor] collection_formats = {}, _preload_content = True, _request_timeout = None [e2e-predictor] _host = None [e2e-predictor] [e2e-predictor] def call_api(self, resource_path, method, [e2e-predictor] path_params=None, query_params=None, header_params=None, [e2e-predictor] body=None, post_params=None, files=None, [e2e-predictor] response_type=None, auth_settings=None, async_req=None, [e2e-predictor] _return_http_data_only=None, collection_formats=None, [e2e-predictor] _preload_content=True, _request_timeout=None, _host=None): [e2e-predictor] """Makes the HTTP request (synchronous) and returns deserialized data. [e2e-predictor] [e2e-predictor] To make an async_req request, set the async_req parameter. [e2e-predictor] [e2e-predictor] :param resource_path: Path to method endpoint. [e2e-predictor] :param method: Method to call. 
[e2e-predictor] :param path_params: Path parameters in the url. [e2e-predictor] :param query_params: Query parameters in the url. [e2e-predictor] :param header_params: Header parameters to be [e2e-predictor] placed in the request header. [e2e-predictor] :param body: Request body. [e2e-predictor] :param post_params dict: Request post form parameters, [e2e-predictor] for `application/x-www-form-urlencoded`, `multipart/form-data`. [e2e-predictor] :param auth_settings list: Auth Settings names for the request. [e2e-predictor] :param response: Response data type. [e2e-predictor] :param files dict: key -> filename, value -> filepath, [e2e-predictor] for `multipart/form-data`. [e2e-predictor] :param async_req bool: execute request asynchronously [e2e-predictor] :param _return_http_data_only: response data without head status code [e2e-predictor] and headers [e2e-predictor] :param collection_formats: dict of collection formats for path, query, [e2e-predictor] header, and post parameters. [e2e-predictor] :param _preload_content: if False, the urllib3.HTTPResponse object will [e2e-predictor] be returned without reading/decoding response [e2e-predictor] data. Default is True. [e2e-predictor] :param _request_timeout: timeout setting for this request. If one [e2e-predictor] number provided, it will be total request [e2e-predictor] timeout. It can also be a pair (tuple) of [e2e-predictor] (connection, read) timeouts. [e2e-predictor] :return: [e2e-predictor] If async_req parameter is True, [e2e-predictor] the request will be called asynchronously. [e2e-predictor] The method will return the request thread. [e2e-predictor] If parameter async_req is False or missing, [e2e-predictor] then the method will return the response directly. 
[e2e-predictor] """ [e2e-predictor] if not async_req: [e2e-predictor] > return self.__call_api(resource_path, method, [e2e-predictor] path_params, query_params, header_params, [e2e-predictor] body, post_params, files, [e2e-predictor] response_type, auth_settings, [e2e-predictor] _return_http_data_only, collection_formats, [e2e-predictor] _preload_content, _request_timeout, _host) [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api_client.py:348: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] resource_path = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices' [e2e-predictor] method = 'POST' [e2e-predictor] path_params = [('group', 'serving.kserve.io'), ('version', 'v1beta1'), ('namespace', 'kserve-ci-e2e-test'), ('plural', 'inferenceservices')] [e2e-predictor] query_params = [] [e2e-predictor] header_params = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'} [e2e-predictor] body = {'apiVersion': 'serving.kserve.io/v1beta1', 'kind': 'InferenceService', 'metadata': {'labels': {'networking.kserve.io/... 
{'httpGet': {'path': '/v2/models/isvc-predictive-sklearn-v2/ready', 'port': 8080}, 'initialDelaySeconds': 30}, ...}}}} [e2e-predictor] post_params = [], files = {}, response_type = 'object' [e2e-predictor] auth_settings = ['BearerToken'], _return_http_data_only = True [e2e-predictor] collection_formats = {}, _preload_content = True, _request_timeout = None [e2e-predictor] _host = None [e2e-predictor] [e2e-predictor] def __call_api( [e2e-predictor] self, resource_path, method, path_params=None, [e2e-predictor] query_params=None, header_params=None, body=None, post_params=None, [e2e-predictor] files=None, response_type=None, auth_settings=None, [e2e-predictor] _return_http_data_only=None, collection_formats=None, [e2e-predictor] _preload_content=True, _request_timeout=None, _host=None): [e2e-predictor] [e2e-predictor] config = self.configuration [e2e-predictor] [e2e-predictor] # header parameters [e2e-predictor] header_params = header_params or {} [e2e-predictor] header_params.update(self.default_headers) [e2e-predictor] if self.cookie: [e2e-predictor] header_params['Cookie'] = self.cookie [e2e-predictor] if header_params: [e2e-predictor] header_params = self.sanitize_for_serialization(header_params) [e2e-predictor] header_params = dict(self.parameters_to_tuples(header_params, [e2e-predictor] collection_formats)) [e2e-predictor] [e2e-predictor] # path parameters [e2e-predictor] if path_params: [e2e-predictor] path_params = self.sanitize_for_serialization(path_params) [e2e-predictor] path_params = self.parameters_to_tuples(path_params, [e2e-predictor] collection_formats) [e2e-predictor] for k, v in path_params: [e2e-predictor] # specified safe chars, encode everything [e2e-predictor] resource_path = resource_path.replace( [e2e-predictor] '{%s}' % k, [e2e-predictor] quote(str(v), safe=config.safe_chars_for_path_param) [e2e-predictor] ) [e2e-predictor] [e2e-predictor] # query parameters [e2e-predictor] if query_params: [e2e-predictor] query_params = 
self.sanitize_for_serialization(query_params) [e2e-predictor] query_params = self.parameters_to_tuples(query_params, [e2e-predictor] collection_formats) [e2e-predictor] [e2e-predictor] # post parameters [e2e-predictor] if post_params or files: [e2e-predictor] post_params = post_params if post_params else [] [e2e-predictor] post_params = self.sanitize_for_serialization(post_params) [e2e-predictor] post_params = self.parameters_to_tuples(post_params, [e2e-predictor] collection_formats) [e2e-predictor] post_params.extend(self.files_parameters(files)) [e2e-predictor] [e2e-predictor] # auth setting [e2e-predictor] self.update_params_for_auth(header_params, query_params, auth_settings) [e2e-predictor] [e2e-predictor] # body [e2e-predictor] if body: [e2e-predictor] body = self.sanitize_for_serialization(body) [e2e-predictor] [e2e-predictor] # request url [e2e-predictor] if _host is None: [e2e-predictor] url = self.configuration.host + resource_path [e2e-predictor] else: [e2e-predictor] # use server/host defined in path or operation instead [e2e-predictor] url = _host + resource_path [e2e-predictor] [e2e-predictor] # perform request and return response [e2e-predictor] > response_data = self.request( [e2e-predictor] method, url, query_params=query_params, headers=header_params, [e2e-predictor] post_params=post_params, body=body, [e2e-predictor] _preload_content=_preload_content, [e2e-predictor] _request_timeout=_request_timeout) [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api_client.py:180: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] method = 'POST' [e2e-predictor] url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices' [e2e-predictor] query_params = [] [e2e-predictor] headers = 
{'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'} [e2e-predictor] post_params = [] [e2e-predictor] body = {'apiVersion': 'serving.kserve.io/v1beta1', 'kind': 'InferenceService', 'metadata': {'labels': {'networking.kserve.io/... {'httpGet': {'path': '/v2/models/isvc-predictive-sklearn-v2/ready', 'port': 8080}, 'initialDelaySeconds': 30}, ...}}}} [e2e-predictor] _preload_content = True, _request_timeout = None [e2e-predictor] [e2e-predictor] def request(self, method, url, query_params=None, headers=None, [e2e-predictor] post_params=None, body=None, _preload_content=True, [e2e-predictor] _request_timeout=None): [e2e-predictor] """Makes the HTTP request using RESTClient.""" [e2e-predictor] if method == "GET": [e2e-predictor] return self.rest_client.GET(url, [e2e-predictor] query_params=query_params, [e2e-predictor] _preload_content=_preload_content, [e2e-predictor] _request_timeout=_request_timeout, [e2e-predictor] headers=headers) [e2e-predictor] elif method == "HEAD": [e2e-predictor] return self.rest_client.HEAD(url, [e2e-predictor] query_params=query_params, [e2e-predictor] _preload_content=_preload_content, [e2e-predictor] _request_timeout=_request_timeout, [e2e-predictor] headers=headers) [e2e-predictor] elif method == "OPTIONS": [e2e-predictor] return self.rest_client.OPTIONS(url, [e2e-predictor] query_params=query_params, [e2e-predictor] headers=headers, [e2e-predictor] _preload_content=_preload_content, [e2e-predictor] _request_timeout=_request_timeout) [e2e-predictor] elif method == "POST": [e2e-predictor] > return self.rest_client.POST(url, [e2e-predictor] query_params=query_params, [e2e-predictor] headers=headers, [e2e-predictor] post_params=post_params, [e2e-predictor] _preload_content=_preload_content, [e2e-predictor] _request_timeout=_request_timeout, [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api_client.py:391: [e2e-predictor] 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices' [e2e-predictor] headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'} [e2e-predictor] query_params = [], post_params = [] [e2e-predictor] body = {'apiVersion': 'serving.kserve.io/v1beta1', 'kind': 'InferenceService', 'metadata': {'labels': {'networking.kserve.io/... {'httpGet': {'path': '/v2/models/isvc-predictive-sklearn-v2/ready', 'port': 8080}, 'initialDelaySeconds': 30}, ...}}}} [e2e-predictor] _preload_content = True, _request_timeout = None [e2e-predictor] [e2e-predictor] def POST(self, url, headers=None, query_params=None, post_params=None, [e2e-predictor] body=None, _preload_content=True, _request_timeout=None): [e2e-predictor] > return self.request("POST", url, [e2e-predictor] headers=headers, [e2e-predictor] query_params=query_params, [e2e-predictor] post_params=post_params, [e2e-predictor] _preload_content=_preload_content, [e2e-predictor] _request_timeout=_request_timeout, [e2e-predictor] body=body) [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/rest.py:279: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] method = 'POST' [e2e-predictor] url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices' [e2e-predictor] query_params = [] [e2e-predictor] headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'} [e2e-predictor] body = 
{'apiVersion': 'serving.kserve.io/v1beta1', 'kind': 'InferenceService', 'metadata': {'labels': {'networking.kserve.io/... {'httpGet': {'path': '/v2/models/isvc-predictive-sklearn-v2/ready', 'port': 8080}, 'initialDelaySeconds': 30}, ...}}}} [e2e-predictor] post_params = {}, _preload_content = True, _request_timeout = None [e2e-predictor] [e2e-predictor] def request(self, method, url, query_params=None, headers=None, [e2e-predictor] body=None, post_params=None, _preload_content=True, [e2e-predictor] _request_timeout=None): [e2e-predictor] """Perform requests. [e2e-predictor] [e2e-predictor] :param method: http request method [e2e-predictor] :param url: http request url [e2e-predictor] :param query_params: query parameters in the url [e2e-predictor] :param headers: http request headers [e2e-predictor] :param body: request json body, for `application/json` [e2e-predictor] :param post_params: request post parameters, [e2e-predictor] `application/x-www-form-urlencoded` [e2e-predictor] and `multipart/form-data` [e2e-predictor] :param _preload_content: if False, the urllib3.HTTPResponse object will [e2e-predictor] be returned without reading/decoding response [e2e-predictor] data. Default is True. [e2e-predictor] :param _request_timeout: timeout setting for this request. If one [e2e-predictor] number provided, it will be total request [e2e-predictor] timeout. It can also be a pair (tuple) of [e2e-predictor] (connection, read) timeouts. [e2e-predictor] """ [e2e-predictor] method = method.upper() [e2e-predictor] assert method in ['GET', 'HEAD', 'DELETE', 'POST', 'PUT', [e2e-predictor] 'PATCH', 'OPTIONS'] [e2e-predictor] [e2e-predictor] if post_params and body: [e2e-predictor] raise ApiValueError( [e2e-predictor] "body parameter cannot be used with post_params parameter." 
[e2e-predictor] ) [e2e-predictor] [e2e-predictor] post_params = post_params or {} [e2e-predictor] headers = headers or {} [e2e-predictor] [e2e-predictor] timeout = None [e2e-predictor] if _request_timeout: [e2e-predictor] if isinstance(_request_timeout, (int, ) if six.PY3 else (int, long)): # noqa: E501,F821 [e2e-predictor] timeout = urllib3.Timeout(total=_request_timeout) [e2e-predictor] elif (isinstance(_request_timeout, tuple) and [e2e-predictor] len(_request_timeout) == 2): [e2e-predictor] timeout = urllib3.Timeout( [e2e-predictor] connect=_request_timeout[0], read=_request_timeout[1]) [e2e-predictor] [e2e-predictor] if 'Content-Type' not in headers: [e2e-predictor] headers['Content-Type'] = 'application/json' [e2e-predictor] [e2e-predictor] try: [e2e-predictor] # For `POST`, `PUT`, `PATCH`, `OPTIONS`, `DELETE` [e2e-predictor] if method in ['POST', 'PUT', 'PATCH', 'OPTIONS', 'DELETE']: [e2e-predictor] if query_params: [e2e-predictor] url += '?' + urlencode(query_params) [e2e-predictor] if (re.search('json', headers['Content-Type'], re.IGNORECASE) or [e2e-predictor] headers['Content-Type'] == 'application/apply-patch+yaml'): [e2e-predictor] if headers['Content-Type'] == 'application/json-patch+json': [e2e-predictor] if not isinstance(body, list): [e2e-predictor] headers['Content-Type'] = \ [e2e-predictor] 'application/strategic-merge-patch+json' [e2e-predictor] request_body = None [e2e-predictor] if body is not None: [e2e-predictor] request_body = json.dumps(body) [e2e-predictor] > r = self.pool_manager.request( [e2e-predictor] method, url, [e2e-predictor] body=request_body, [e2e-predictor] preload_content=_preload_content, [e2e-predictor] timeout=timeout, [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/rest.py:172: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] method = 'POST' [e2e-predictor] url = 
[e2e-predictor] 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
[e2e-predictor] body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io... "128Mi"}}, "runtime": "kserve-predictiveserver", "storageUri": "gs://kfserving-examples/models/sklearn/1.0/model"}}}}'
[e2e-predictor] fields = None
[e2e-predictor] headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
[e2e-predictor] json = None
[e2e-predictor] urlopen_kw = {'body': '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking....rver", "storageUri": "gs://kfserving-examples/models/sklearn/1.0/model"}}}}', 'preload_content': True, 'timeout': None}
[e2e-predictor]
[e2e-predictor]     def request(
[e2e-predictor]         self,
[e2e-predictor]         method: str,
[e2e-predictor]         url: str,
[e2e-predictor]         body: _TYPE_BODY | None = None,
[e2e-predictor]         fields: _TYPE_FIELDS | None = None,
[e2e-predictor]         headers: typing.Mapping[str, str] | None = None,
[e2e-predictor]         json: typing.Any | None = None,
[e2e-predictor]         **urlopen_kw: typing.Any,
[e2e-predictor]     ) -> BaseHTTPResponse:
[e2e-predictor]         """
[e2e-predictor]         Make a request using :meth:`urlopen` with the appropriate encoding of
[e2e-predictor]         ``fields`` based on the ``method`` used.
[e2e-predictor]
[e2e-predictor]         This is a convenience method that requires the least amount of manual
[e2e-predictor]         effort. It can be used in most situations, while still having the
[e2e-predictor]         option to drop down to more specific methods when necessary, such as
[e2e-predictor]         :meth:`request_encode_url`, :meth:`request_encode_body`,
[e2e-predictor]         or even the lowest level :meth:`urlopen`.
[e2e-predictor]
[e2e-predictor]         :param method:
[e2e-predictor]             HTTP request method (such as GET, POST, PUT, etc.)
[e2e-predictor]
[e2e-predictor]         :param url:
[e2e-predictor]             The URL to perform the request on.
[e2e-predictor]
[e2e-predictor]         :param body:
[e2e-predictor]             Data to send in the request body, either :class:`str`, :class:`bytes`,
[e2e-predictor]             an iterable of :class:`str`/:class:`bytes`, or a file-like object.
[e2e-predictor]
[e2e-predictor]         :param fields:
[e2e-predictor]             Data to encode and send in the URL or request body, depending on ``method``.
[e2e-predictor]
[e2e-predictor]         :param headers:
[e2e-predictor]             Dictionary of custom headers to send, such as User-Agent,
[e2e-predictor]             If-None-Match, etc. If None, pool headers are used. If provided,
[e2e-predictor]             these headers completely replace any pool-specific headers.
[e2e-predictor]
[e2e-predictor]         :param json:
[e2e-predictor]             Data to encode and send as JSON with UTF-encoded in the request body.
[e2e-predictor]             The ``"Content-Type"`` header will be set to ``"application/json"``
[e2e-predictor]             unless specified otherwise.
[e2e-predictor]         """
[e2e-predictor]         method = method.upper()
[e2e-predictor]
[e2e-predictor]         if json is not None and body is not None:
[e2e-predictor]             raise TypeError(
[e2e-predictor]                 "request got values for both 'body' and 'json' parameters which are mutually exclusive"
[e2e-predictor]             )
[e2e-predictor]
[e2e-predictor]         if json is not None:
[e2e-predictor]             if headers is None:
[e2e-predictor]                 headers = self.headers
[e2e-predictor]
[e2e-predictor]             if not ("content-type" in map(str.lower, headers.keys())):
[e2e-predictor]                 headers = HTTPHeaderDict(headers)
[e2e-predictor]                 headers["Content-Type"] = "application/json"
[e2e-predictor]
[e2e-predictor]             body = _json.dumps(json, separators=(",", ":"), ensure_ascii=False).encode(
[e2e-predictor]                 "utf-8"
[e2e-predictor]             )
[e2e-predictor]
[e2e-predictor]         if body is not None:
[e2e-predictor]             urlopen_kw["body"] = body
[e2e-predictor]
[e2e-predictor]         if method in self._encode_url_methods:
[e2e-predictor]             return self.request_encode_url(
[e2e-predictor]                 method,
[e2e-predictor]                 url,
[e2e-predictor]                 fields=fields,  # type: ignore[arg-type]
[e2e-predictor]                 headers=headers,
[e2e-predictor]                 **urlopen_kw,
[e2e-predictor]             )
[e2e-predictor]         else:
[e2e-predictor] >           return self.request_encode_body(
[e2e-predictor]                 method, url, fields=fields, headers=headers, **urlopen_kw
[e2e-predictor]             )
[e2e-predictor]
[e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/_request_methods.py:143:
[e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
[e2e-predictor]
[e2e-predictor] self =
[e2e-predictor] method = 'POST'
[e2e-predictor] url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
[e2e-predictor] fields = None
[e2e-predictor] headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
[e2e-predictor] encode_multipart = True, multipart_boundary = None
[e2e-predictor] urlopen_kw = {'body': '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking....rver", "storageUri": "gs://kfserving-examples/models/sklearn/1.0/model"}}}}', 'preload_content': True, 'timeout': None}
[e2e-predictor] extra_kw = {'body': '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking....nt': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'}), 'preload_content': True, 'timeout': None}
[e2e-predictor]
[e2e-predictor]     def request_encode_body(
[e2e-predictor]         self,
[e2e-predictor]         method: str,
[e2e-predictor]         url: str,
[e2e-predictor]         fields: _TYPE_FIELDS | None = None,
[e2e-predictor]         headers: typing.Mapping[str, str] | None = None,
[e2e-predictor]         encode_multipart: bool = True,
[e2e-predictor]         multipart_boundary: str | None = None,
[e2e-predictor]         **urlopen_kw: str,
[e2e-predictor]     ) -> BaseHTTPResponse:
[e2e-predictor]         """
[e2e-predictor]         Make a request using :meth:`urlopen` with the ``fields`` encoded in
[e2e-predictor]         the body. This is useful for request methods like POST, PUT, PATCH, etc.
[e2e-predictor]
[e2e-predictor]         When ``encode_multipart=True`` (default), then
[e2e-predictor]         :func:`urllib3.encode_multipart_formdata` is used to encode
[e2e-predictor]         the payload with the appropriate content type. Otherwise
[e2e-predictor]         :func:`urllib.parse.urlencode` is used with the
[e2e-predictor]         'application/x-www-form-urlencoded' content type.
[e2e-predictor]
[e2e-predictor]         Multipart encoding must be used when posting files, and it's reasonably
[e2e-predictor]         safe to use it in other times too. However, it may break request
[e2e-predictor]         signing, such as with OAuth.
[e2e-predictor]
[e2e-predictor]         Supports an optional ``fields`` parameter of key/value strings AND
[e2e-predictor]         key/filetuple. A filetuple is a (filename, data, MIME type) tuple where
[e2e-predictor]         the MIME type is optional. For example::
[e2e-predictor]
[e2e-predictor]             fields = {
[e2e-predictor]                 'foo': 'bar',
[e2e-predictor]                 'fakefile': ('foofile.txt', 'contents of foofile'),
[e2e-predictor]                 'realfile': ('barfile.txt', open('realfile').read()),
[e2e-predictor]                 'typedfile': ('bazfile.bin', open('bazfile').read(),
[e2e-predictor]                               'image/jpeg'),
[e2e-predictor]                 'nonamefile': 'contents of nonamefile field',
[e2e-predictor]             }
[e2e-predictor]
[e2e-predictor]         When uploading a file, providing a filename (the first parameter of the
[e2e-predictor]         tuple) is optional but recommended to best mimic behavior of browsers.
[e2e-predictor]
[e2e-predictor]         Note that if ``headers`` are supplied, the 'Content-Type' header will
[e2e-predictor]         be overwritten because it depends on the dynamic random boundary string
[e2e-predictor]         which is used to compose the body of the request. The random boundary
[e2e-predictor]         string can be explicitly set with the ``multipart_boundary`` parameter.
[e2e-predictor]
[e2e-predictor]         :param method:
[e2e-predictor]             HTTP request method (such as GET, POST, PUT, etc.)
[e2e-predictor]
[e2e-predictor]         :param url:
[e2e-predictor]             The URL to perform the request on.
[e2e-predictor]
[e2e-predictor]         :param fields:
[e2e-predictor]             Data to encode and send in the request body.
[e2e-predictor]
[e2e-predictor]         :param headers:
[e2e-predictor]             Dictionary of custom headers to send, such as User-Agent,
[e2e-predictor]             If-None-Match, etc. If None, pool headers are used. If provided,
[e2e-predictor]             these headers completely replace any pool-specific headers.
[e2e-predictor]
[e2e-predictor]         :param encode_multipart:
[e2e-predictor]             If True, encode the ``fields`` using the multipart/form-data MIME
[e2e-predictor]             format.
[e2e-predictor]
[e2e-predictor]         :param multipart_boundary:
[e2e-predictor]             If not specified, then a random boundary will be generated using
[e2e-predictor]             :func:`urllib3.filepost.choose_boundary`.
[e2e-predictor]         """
[e2e-predictor]         if headers is None:
[e2e-predictor]             headers = self.headers
[e2e-predictor]
[e2e-predictor]         extra_kw: dict[str, typing.Any] = {"headers": HTTPHeaderDict(headers)}
[e2e-predictor]         body: bytes | str
[e2e-predictor]
[e2e-predictor]         if fields:
[e2e-predictor]             if "body" in urlopen_kw:
[e2e-predictor]                 raise TypeError(
[e2e-predictor]                     "request got values for both 'fields' and 'body', can only specify one."
[e2e-predictor]                 )
[e2e-predictor]
[e2e-predictor]             if encode_multipart:
[e2e-predictor]                 body, content_type = encode_multipart_formdata(
[e2e-predictor]                     fields, boundary=multipart_boundary
[e2e-predictor]                 )
[e2e-predictor]             else:
[e2e-predictor]                 body, content_type = (
[e2e-predictor]                     urlencode(fields),  # type: ignore[arg-type]
[e2e-predictor]                     "application/x-www-form-urlencoded",
[e2e-predictor]                 )
[e2e-predictor]
[e2e-predictor]             extra_kw["body"] = body
[e2e-predictor]             extra_kw["headers"].setdefault("Content-Type", content_type)
[e2e-predictor]
[e2e-predictor]         extra_kw.update(urlopen_kw)
[e2e-predictor]
[e2e-predictor] >       return self.urlopen(method, url, **extra_kw)
[e2e-predictor]
[e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/_request_methods.py:278:
[e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
[e2e-predictor]
[e2e-predictor] self =
[e2e-predictor] method = 'POST'
[e2e-predictor] url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
[e2e-predictor] redirect = True
[e2e-predictor] kw = {'assert_same_host': False, 'body': '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata...', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'}), 'preload_content': True, ...}
[e2e-predictor] u = Url(scheme='https', auth=None, host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', p...443, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices', query=None, fragment=None)
[e2e-predictor] conn =
[e2e-predictor]
[e2e-predictor]     def urlopen(  # type: ignore[override]
[e2e-predictor]         self, method: str, url: str, redirect: bool = True, **kw: typing.Any
[e2e-predictor]     ) -> BaseHTTPResponse:
[e2e-predictor]         """
[e2e-predictor]         Same as :meth:`urllib3.HTTPConnectionPool.urlopen`
[e2e-predictor]         with custom cross-host redirect logic and only sends the request-uri
[e2e-predictor]         portion of the ``url``.
[e2e-predictor]
[e2e-predictor]         The given ``url`` parameter must be absolute, such that an appropriate
[e2e-predictor]         :class:`urllib3.connectionpool.ConnectionPool` can be chosen for it.
[e2e-predictor]         """
[e2e-predictor]         u = parse_url(url)
[e2e-predictor]
[e2e-predictor]         if u.scheme is None:
[e2e-predictor]             warnings.warn(
[e2e-predictor]                 "URLs without a scheme (ie 'https://') are deprecated and will raise an error "
[e2e-predictor]                 "in a future version of urllib3. To avoid this DeprecationWarning ensure all URLs "
[e2e-predictor]                 "start with 'https://' or 'http://'. Read more in this issue: "
[e2e-predictor]                 "https://github.com/urllib3/urllib3/issues/2920",
[e2e-predictor]                 category=DeprecationWarning,
[e2e-predictor]                 stacklevel=2,
[e2e-predictor]             )
[e2e-predictor]
[e2e-predictor]         conn = self.connection_from_host(u.host, port=u.port, scheme=u.scheme)
[e2e-predictor]
[e2e-predictor]         kw["assert_same_host"] = False
[e2e-predictor]         kw["redirect"] = False
[e2e-predictor]
[e2e-predictor]         if "headers" not in kw:
[e2e-predictor]             kw["headers"] = self.headers
[e2e-predictor]
[e2e-predictor]         if self._proxy_requires_url_absolute_form(u):
[e2e-predictor]             response = conn.urlopen(method, url, **kw)
[e2e-predictor]         else:
[e2e-predictor] >           response = conn.urlopen(method, u.request_uri, **kw)
[e2e-predictor]
[e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/poolmanager.py:457:
[e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
[e2e-predictor]
[e2e-predictor] self =
[e2e-predictor] method = 'POST'
[e2e-predictor] url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
[e2e-predictor] body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io...
"128Mi"}}, "runtime": "kserve-predictiveserver", "storageUri": "gs://kfserving-examples/models/sklearn/1.0/model"}}}}'
[e2e-predictor] headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'})
[e2e-predictor] retries = Retry(total=2, connect=None, read=None, redirect=None, status=None)
[e2e-predictor] redirect = False, assert_same_host = False, timeout = None, pool_timeout = None
[e2e-predictor] release_conn = True, chunked = False, body_pos = None, preload_content = True
[e2e-predictor] decode_content = True, response_kw = {}
[e2e-predictor] parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices', query=None, fragment=None)
[e2e-predictor] destination_scheme = None, conn = None, release_this_conn = True
[e2e-predictor] http_tunnel_required = False
[e2e-predictor] err = NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.c...a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")
[e2e-predictor] clean_exit = False
[e2e-predictor]
[e2e-predictor]     def urlopen(  # type: ignore[override]
[e2e-predictor]         self,
[e2e-predictor]         method: str,
[e2e-predictor]         url: str,
[e2e-predictor]         body: _TYPE_BODY | None = None,
[e2e-predictor]         headers: typing.Mapping[str, str] | None = None,
[e2e-predictor]         retries: Retry | bool | int | None = None,
[e2e-predictor]         redirect: bool = True,
[e2e-predictor]         assert_same_host: bool = True,
[e2e-predictor]         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
[e2e-predictor]         pool_timeout: int | None = None,
[e2e-predictor]         release_conn: bool | None = None,
[e2e-predictor]         chunked: bool = False,
[e2e-predictor]         body_pos: _TYPE_BODY_POSITION | None = None,
[e2e-predictor]         preload_content: bool = True,
[e2e-predictor]         decode_content: bool = True,
[e2e-predictor]         **response_kw: typing.Any,
[e2e-predictor]     ) -> BaseHTTPResponse:
[e2e-predictor]         """
[e2e-predictor]         Get a connection from the pool and perform an HTTP request. This is the
[e2e-predictor]         lowest level call for making a request, so you'll need to specify all
[e2e-predictor]         the raw details.
[e2e-predictor]
[e2e-predictor]         .. note::
[e2e-predictor]
[e2e-predictor]            More commonly, it's appropriate to use a convenience method
[e2e-predictor]            such as :meth:`request`.
[e2e-predictor]
[e2e-predictor]         .. note::
[e2e-predictor]
[e2e-predictor]            `release_conn` will only behave as expected if
[e2e-predictor]            `preload_content=False` because we want to make
[e2e-predictor]            `preload_content=False` the default behaviour someday soon without
[e2e-predictor]            breaking backwards compatibility.
[e2e-predictor]
[e2e-predictor]         :param method:
[e2e-predictor]             HTTP request method (such as GET, POST, PUT, etc.)
[e2e-predictor]
[e2e-predictor]         :param url:
[e2e-predictor]             The URL to perform the request on.
[e2e-predictor]
[e2e-predictor]         :param body:
[e2e-predictor]             Data to send in the request body, either :class:`str`, :class:`bytes`,
[e2e-predictor]             an iterable of :class:`str`/:class:`bytes`, or a file-like object.
[e2e-predictor]
[e2e-predictor]         :param headers:
[e2e-predictor]             Dictionary of custom headers to send, such as User-Agent,
[e2e-predictor]             If-None-Match, etc. If None, pool headers are used. If provided,
[e2e-predictor]             these headers completely replace any pool-specific headers.
[e2e-predictor]
[e2e-predictor]         :param retries:
[e2e-predictor]             Configure the number of retries to allow before raising a
[e2e-predictor]             :class:`~urllib3.exceptions.MaxRetryError` exception.
[e2e-predictor]
[e2e-predictor]             If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a
[e2e-predictor]             :class:`~urllib3.util.retry.Retry` object for fine-grained control
[e2e-predictor]             over different types of retries.
[e2e-predictor]             Pass an integer number to retry connection errors that many times,
[e2e-predictor]             but no other types of errors. Pass zero to never retry.
[e2e-predictor]
[e2e-predictor]             If ``False``, then retries are disabled and any exception is raised
[e2e-predictor]             immediately. Also, instead of raising a MaxRetryError on redirects,
[e2e-predictor]             the redirect response will be returned.
[e2e-predictor]
[e2e-predictor]         :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
[e2e-predictor]
[e2e-predictor]         :param redirect:
[e2e-predictor]             If True, automatically handle redirects (status codes 301, 302,
[e2e-predictor]             303, 307, 308). Each redirect counts as a retry. Disabling retries
[e2e-predictor]             will disable redirect, too.
[e2e-predictor]
[e2e-predictor]         :param assert_same_host:
[e2e-predictor]             If ``True``, will make sure that the host of the pool requests is
[e2e-predictor]             consistent else will raise HostChangedError. When ``False``, you can
[e2e-predictor]             use the pool on an HTTP proxy and request foreign hosts.
[e2e-predictor]
[e2e-predictor]         :param timeout:
[e2e-predictor]             If specified, overrides the default timeout for this one
[e2e-predictor]             request. It may be a float (in seconds) or an instance of
[e2e-predictor]             :class:`urllib3.util.Timeout`.
[e2e-predictor]
[e2e-predictor]         :param pool_timeout:
[e2e-predictor]             If set and the pool is set to block=True, then this method will
[e2e-predictor]             block for ``pool_timeout`` seconds and raise EmptyPoolError if no
[e2e-predictor]             connection is available within the time period.
[e2e-predictor]
[e2e-predictor]         :param bool preload_content:
[e2e-predictor]             If True, the response's body will be preloaded into memory.
[e2e-predictor]
[e2e-predictor]         :param bool decode_content:
[e2e-predictor]             If True, will attempt to decode the body based on the
[e2e-predictor]             'content-encoding' header.
[e2e-predictor]
[e2e-predictor]         :param release_conn:
[e2e-predictor]             If False, then the urlopen call will not release the connection
[e2e-predictor]             back into the pool once a response is received (but will release if
[e2e-predictor]             you read the entire contents of the response such as when
[e2e-predictor]             `preload_content=True`). This is useful if you're not preloading
[e2e-predictor]             the response's content immediately. You will need to call
[e2e-predictor]             ``r.release_conn()`` on the response ``r`` to return the connection
[e2e-predictor]             back into the pool. If None, it takes the value of ``preload_content``
[e2e-predictor]             which defaults to ``True``.
[e2e-predictor]
[e2e-predictor]         :param bool chunked:
[e2e-predictor]             If True, urllib3 will send the body using chunked transfer
[e2e-predictor]             encoding. Otherwise, urllib3 will send the body using the standard
[e2e-predictor]             content-length form. Defaults to False.
[e2e-predictor]
[e2e-predictor]         :param int body_pos:
[e2e-predictor]             Position to seek to in file-like body in the event of a retry or
[e2e-predictor]             redirect. Typically this won't need to be set because urllib3 will
[e2e-predictor]             auto-populate the value when needed.
[e2e-predictor]         """
[e2e-predictor]         parsed_url = parse_url(url)
[e2e-predictor]         destination_scheme = parsed_url.scheme
[e2e-predictor]
[e2e-predictor]         if headers is None:
[e2e-predictor]             headers = self.headers
[e2e-predictor]
[e2e-predictor]         if not isinstance(retries, Retry):
[e2e-predictor]             retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
[e2e-predictor]
[e2e-predictor]         if release_conn is None:
[e2e-predictor]             release_conn = preload_content
[e2e-predictor]
[e2e-predictor]         # Check host
[e2e-predictor]         if assert_same_host and not self.is_same_host(url):
[e2e-predictor]             raise HostChangedError(self, url, retries)
[e2e-predictor]
[e2e-predictor]         # Ensure that the URL we're connecting to is properly encoded
[e2e-predictor]         if url.startswith("/"):
[e2e-predictor]             url = to_str(_encode_target(url))
[e2e-predictor]         else:
[e2e-predictor]             url = to_str(parsed_url.url)
[e2e-predictor]
[e2e-predictor]         conn = None
[e2e-predictor]
[e2e-predictor]         # Track whether `conn` needs to be released before
[e2e-predictor]         # returning/raising/recursing. Update this variable if necessary, and
[e2e-predictor]         # leave `release_conn` constant throughout the function. That way, if
[e2e-predictor]         # the function recurses, the original value of `release_conn` will be
[e2e-predictor]         # passed down into the recursive call, and its value will be respected.
[e2e-predictor]         #
[e2e-predictor]         # See issue #651 [1] for details.
[e2e-predictor]         #
[e2e-predictor]         # [1]
[e2e-predictor]         release_this_conn = release_conn
[e2e-predictor]
[e2e-predictor]         http_tunnel_required = connection_requires_http_tunnel(
[e2e-predictor]             self.proxy, self.proxy_config, destination_scheme
[e2e-predictor]         )
[e2e-predictor]
[e2e-predictor]         # Merge the proxy headers. Only done when not using HTTP CONNECT. We
[e2e-predictor]         # have to copy the headers dict so we can safely change it without those
[e2e-predictor]         # changes being reflected in anyone else's copy.
[e2e-predictor]         if not http_tunnel_required:
[e2e-predictor]             headers = headers.copy()  # type: ignore[attr-defined]
[e2e-predictor]             headers.update(self.proxy_headers)  # type: ignore[union-attr]
[e2e-predictor]
[e2e-predictor]         # Must keep the exception bound to a separate variable or else Python 3
[e2e-predictor]         # complains about UnboundLocalError.
[e2e-predictor]         err = None
[e2e-predictor]
[e2e-predictor]         # Keep track of whether we cleanly exited the except block. This
[e2e-predictor]         # ensures we do proper cleanup in finally.
[e2e-predictor]         clean_exit = False
[e2e-predictor]
[e2e-predictor]         # Rewind body position, if needed. Record current position
[e2e-predictor]         # for future rewinds in the event of a redirect/retry.
[e2e-predictor]         body_pos = set_file_position(body, body_pos)
[e2e-predictor]
[e2e-predictor]         try:
[e2e-predictor]             # Request a connection from the queue.
[e2e-predictor]             timeout_obj = self._get_timeout(timeout)
[e2e-predictor]             conn = self._get_conn(timeout=pool_timeout)
[e2e-predictor]
[e2e-predictor]             conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]
[e2e-predictor]
[e2e-predictor]             # Is this a closed/new connection that requires CONNECT tunnelling?
[e2e-predictor]             if self.proxy is not None and http_tunnel_required and conn.is_closed:
[e2e-predictor]                 try:
[e2e-predictor]                     self._prepare_proxy(conn)
[e2e-predictor]                 except (BaseSSLError, OSError, SocketTimeout) as e:
[e2e-predictor]                     self._raise_timeout(
[e2e-predictor]                         err=e, url=self.proxy.url, timeout_value=conn.timeout
[e2e-predictor]                     )
[e2e-predictor]                     raise
[e2e-predictor]
[e2e-predictor]             # If we're going to release the connection in ``finally:``, then
[e2e-predictor]             # the response doesn't need to know about the connection. Otherwise
[e2e-predictor]             # it will also try to release it and we'll have a double-release
[e2e-predictor]             # mess.
[e2e-predictor]             response_conn = conn if not release_conn else None
[e2e-predictor]
[e2e-predictor]             # Make the request on the HTTPConnection object
[e2e-predictor]             response = self._make_request(
[e2e-predictor]                 conn,
[e2e-predictor]                 method,
[e2e-predictor]                 url,
[e2e-predictor]                 timeout=timeout_obj,
[e2e-predictor]                 body=body,
[e2e-predictor]                 headers=headers,
[e2e-predictor]                 chunked=chunked,
[e2e-predictor]                 retries=retries,
[e2e-predictor]                 response_conn=response_conn,
[e2e-predictor]                 preload_content=preload_content,
[e2e-predictor]                 decode_content=decode_content,
[e2e-predictor]                 **response_kw,
[e2e-predictor]             )
[e2e-predictor]
[e2e-predictor]             # Everything went great!
[e2e-predictor]             clean_exit = True
[e2e-predictor]
[e2e-predictor]         except EmptyPoolError:
[e2e-predictor]             # Didn't get a connection from the pool, no need to clean up
[e2e-predictor]             clean_exit = True
[e2e-predictor]             release_this_conn = False
[e2e-predictor]             raise
[e2e-predictor]
[e2e-predictor]         except (
[e2e-predictor]             TimeoutError,
[e2e-predictor]             HTTPException,
[e2e-predictor]             OSError,
[e2e-predictor]             ProtocolError,
[e2e-predictor]             BaseSSLError,
[e2e-predictor]             SSLError,
[e2e-predictor]             CertificateError,
[e2e-predictor]             ProxyError,
[e2e-predictor]         ) as e:
[e2e-predictor]             # Discard the connection for these exceptions. It will be
[e2e-predictor]             # replaced during the next _get_conn() call.
[e2e-predictor]             clean_exit = False
[e2e-predictor]             new_e: Exception = e
[e2e-predictor]             if isinstance(e, (BaseSSLError, CertificateError)):
[e2e-predictor]                 new_e = SSLError(e)
[e2e-predictor]             if isinstance(
[e2e-predictor]                 new_e,
[e2e-predictor]                 (
[e2e-predictor]                     OSError,
[e2e-predictor]                     NewConnectionError,
[e2e-predictor]                     TimeoutError,
[e2e-predictor]                     SSLError,
[e2e-predictor]                     HTTPException,
[e2e-predictor]                 ),
[e2e-predictor]             ) and (conn and conn.proxy and not conn.has_connected_to_proxy):
[e2e-predictor]                 new_e = _wrap_proxy_error(new_e, conn.proxy.scheme)
[e2e-predictor]             elif isinstance(new_e, (OSError, HTTPException)):
[e2e-predictor]                 new_e = ProtocolError("Connection aborted.", new_e)
[e2e-predictor]
[e2e-predictor]             retries = retries.increment(
[e2e-predictor]                 method, url, error=new_e, _pool=self, _stacktrace=sys.exc_info()[2]
[e2e-predictor]             )
[e2e-predictor]             retries.sleep()
[e2e-predictor]
[e2e-predictor]             # Keep track of the error for the retry warning.
[e2e-predictor]             err = e
[e2e-predictor]
[e2e-predictor]         finally:
[e2e-predictor]             if not clean_exit:
[e2e-predictor]                 # We hit some kind of exception, handled or otherwise. We need
[e2e-predictor]                 # to throw the connection away unless explicitly told not to.
[e2e-predictor]                 # Close the connection, set the variable to None, and make sure
[e2e-predictor]                 # we put the None back in the pool to avoid leaking it.
[e2e-predictor]                 if conn:
[e2e-predictor]                     conn.close()
[e2e-predictor]                     conn = None
[e2e-predictor]                 release_this_conn = True
[e2e-predictor]
[e2e-predictor]             if release_this_conn:
[e2e-predictor]                 # Put the connection back to be reused. If the connection is
[e2e-predictor]                 # expired then it will be None, which will get replaced with a
[e2e-predictor]                 # fresh connection during _get_conn.
[e2e-predictor]                 self._put_conn(conn)
[e2e-predictor]
[e2e-predictor]         if not conn:
[e2e-predictor]             # Try again
[e2e-predictor]             log.warning(
[e2e-predictor]                 "Retrying (%r) after connection broken by '%r': %s", retries, err, url
[e2e-predictor]             )
[e2e-predictor] >           return self.urlopen(
[e2e-predictor]                 method,
[e2e-predictor]                 url,
[e2e-predictor]                 body,
[e2e-predictor]                 headers,
[e2e-predictor]                 retries,
[e2e-predictor]                 redirect,
[e2e-predictor]                 assert_same_host,
[e2e-predictor]                 timeout=timeout,
[e2e-predictor]                 pool_timeout=pool_timeout,
[e2e-predictor]                 release_conn=release_conn,
[e2e-predictor]                 chunked=chunked,
[e2e-predictor]                 body_pos=body_pos,
[e2e-predictor]                 preload_content=preload_content,
[e2e-predictor]                 decode_content=decode_content,
[e2e-predictor]                 **response_kw,
[e2e-predictor]             )
[e2e-predictor]
[e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:871:
[e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
[e2e-predictor]
[e2e-predictor] self =
[e2e-predictor] method = 'POST'
[e2e-predictor] url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
[e2e-predictor] body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io...
"128Mi"}}, "runtime": "kserve-predictiveserver", "storageUri": "gs://kfserving-examples/models/sklearn/1.0/model"}}}}'
[e2e-predictor] headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'})
[e2e-predictor] retries = Retry(total=1, connect=None, read=None, redirect=None, status=None)
[e2e-predictor] redirect = False, assert_same_host = False, timeout = None, pool_timeout = None
[e2e-predictor] release_conn = True, chunked = False, body_pos = None, preload_content = True
[e2e-predictor] decode_content = True, response_kw = {}
[e2e-predictor] parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices', query=None, fragment=None)
[e2e-predictor] destination_scheme = None, conn = None, release_this_conn = True
[e2e-predictor] http_tunnel_required = False
[e2e-predictor] err = NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.c...a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")
[e2e-predictor] clean_exit = False
[e2e-predictor]
[e2e-predictor]     def urlopen(  # type: ignore[override]
[e2e-predictor]         self,
[e2e-predictor]         method: str,
[e2e-predictor]         url: str,
[e2e-predictor]         body: _TYPE_BODY | None = None,
[e2e-predictor]         headers: typing.Mapping[str, str] | None = None,
[e2e-predictor]         retries: Retry | bool | int | None = None,
[e2e-predictor]         redirect: bool = True,
[e2e-predictor]         assert_same_host: bool = True,
[e2e-predictor]         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
[e2e-predictor]         pool_timeout: int | None = None,
[e2e-predictor]         release_conn: bool | None = None,
[e2e-predictor]         chunked: bool = False,
[e2e-predictor]         body_pos: _TYPE_BODY_POSITION | None = None,
[e2e-predictor]         preload_content: bool = True,
[e2e-predictor]         decode_content: bool = True,
[e2e-predictor]         **response_kw: typing.Any,
[e2e-predictor]     ) -> BaseHTTPResponse:
[e2e-predictor]         """
[e2e-predictor]         Get a connection from the pool and perform an HTTP request. This is the
[e2e-predictor]         lowest level call for making a request, so you'll need to specify all
[e2e-predictor]         the raw details.
[e2e-predictor]
[e2e-predictor]         .. note::
[e2e-predictor]
[e2e-predictor]            More commonly, it's appropriate to use a convenience method
[e2e-predictor]            such as :meth:`request`.
[e2e-predictor]
[e2e-predictor]         .. note::
[e2e-predictor]
[e2e-predictor]            `release_conn` will only behave as expected if
[e2e-predictor]            `preload_content=False` because we want to make
[e2e-predictor]            `preload_content=False` the default behaviour someday soon without
[e2e-predictor]            breaking backwards compatibility.
[e2e-predictor]
[e2e-predictor]         :param method:
[e2e-predictor]             HTTP request method (such as GET, POST, PUT, etc.)
[e2e-predictor]
[e2e-predictor]         :param url:
[e2e-predictor]             The URL to perform the request on.
[e2e-predictor]
[e2e-predictor]         :param body:
[e2e-predictor]             Data to send in the request body, either :class:`str`, :class:`bytes`,
[e2e-predictor]             an iterable of :class:`str`/:class:`bytes`, or a file-like object.
[e2e-predictor]
[e2e-predictor]         :param headers:
[e2e-predictor]             Dictionary of custom headers to send, such as User-Agent,
[e2e-predictor]             If-None-Match, etc. If None, pool headers are used. If provided,
[e2e-predictor]             these headers completely replace any pool-specific headers.
[e2e-predictor]
[e2e-predictor]         :param retries:
[e2e-predictor]             Configure the number of retries to allow before raising a
[e2e-predictor]             :class:`~urllib3.exceptions.MaxRetryError` exception.
[e2e-predictor]
[e2e-predictor]             If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a
[e2e-predictor]             :class:`~urllib3.util.retry.Retry` object for fine-grained control
[e2e-predictor]             over different types of retries.
[e2e-predictor]             Pass an integer number to retry connection errors that many times,
[e2e-predictor]             but no other types of errors. Pass zero to never retry.
[e2e-predictor]
[e2e-predictor]             If ``False``, then retries are disabled and any exception is raised
[e2e-predictor]             immediately. Also, instead of raising a MaxRetryError on redirects,
[e2e-predictor]             the redirect response will be returned.
[e2e-predictor]
[e2e-predictor]         :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
[e2e-predictor]
[e2e-predictor]         :param redirect:
[e2e-predictor]             If True, automatically handle redirects (status codes 301, 302,
[e2e-predictor]             303, 307, 308). Each redirect counts as a retry. Disabling retries
[e2e-predictor]             will disable redirect, too.
[e2e-predictor]
[e2e-predictor]         :param assert_same_host:
[e2e-predictor]             If ``True``, will make sure that the host of the pool requests is
[e2e-predictor]             consistent else will raise HostChangedError. When ``False``, you can
[e2e-predictor]             use the pool on an HTTP proxy and request foreign hosts.
[e2e-predictor]
[e2e-predictor]         :param timeout:
[e2e-predictor]             If specified, overrides the default timeout for this one
[e2e-predictor]             request. It may be a float (in seconds) or an instance of
[e2e-predictor]             :class:`urllib3.util.Timeout`.
[e2e-predictor]
[e2e-predictor]         :param pool_timeout:
[e2e-predictor]             If set and the pool is set to block=True, then this method will
[e2e-predictor]             block for ``pool_timeout`` seconds and raise EmptyPoolError if no
[e2e-predictor]             connection is available within the time period.
[e2e-predictor]
[e2e-predictor]         :param bool preload_content:
[e2e-predictor]             If True, the response's body will be preloaded into memory.
[e2e-predictor]
[e2e-predictor]         :param bool decode_content:
[e2e-predictor]             If True, will attempt to decode the body based on the
[e2e-predictor]             'content-encoding' header.
[e2e-predictor]
[e2e-predictor]         :param release_conn:
[e2e-predictor]             If False, then the urlopen call will not release the connection
[e2e-predictor]             back into the pool once a response is received (but will release if
[e2e-predictor]             you read the entire contents of the response such as when
[e2e-predictor]             `preload_content=True`). This is useful if you're not preloading
[e2e-predictor]             the response's content immediately. You will need to call
[e2e-predictor]             ``r.release_conn()`` on the response ``r`` to return the connection
[e2e-predictor]             back into the pool. If None, it takes the value of ``preload_content``
[e2e-predictor]             which defaults to ``True``.
[e2e-predictor]
[e2e-predictor]         :param bool chunked:
[e2e-predictor]             If True, urllib3 will send the body using chunked transfer
[e2e-predictor]             encoding. Otherwise, urllib3 will send the body using the standard
[e2e-predictor]             content-length form. Defaults to False.
[e2e-predictor]
[e2e-predictor]         :param int body_pos:
[e2e-predictor]             Position to seek to in file-like body in the event of a retry or
[e2e-predictor]             redirect. Typically this won't need to be set because urllib3 will
[e2e-predictor]             auto-populate the value when needed.
[e2e-predictor] """ [e2e-predictor] parsed_url = parse_url(url) [e2e-predictor] destination_scheme = parsed_url.scheme [e2e-predictor] [e2e-predictor] if headers is None: [e2e-predictor] headers = self.headers [e2e-predictor] [e2e-predictor] if not isinstance(retries, Retry): [e2e-predictor] retries = Retry.from_int(retries, redirect=redirect, default=self.retries) [e2e-predictor] [e2e-predictor] if release_conn is None: [e2e-predictor] release_conn = preload_content [e2e-predictor] [e2e-predictor] # Check host [e2e-predictor] if assert_same_host and not self.is_same_host(url): [e2e-predictor] raise HostChangedError(self, url, retries) [e2e-predictor] [e2e-predictor] # Ensure that the URL we're connecting to is properly encoded [e2e-predictor] if url.startswith("/"): [e2e-predictor] url = to_str(_encode_target(url)) [e2e-predictor] else: [e2e-predictor] url = to_str(parsed_url.url) [e2e-predictor] [e2e-predictor] conn = None [e2e-predictor] [e2e-predictor] # Track whether `conn` needs to be released before [e2e-predictor] # returning/raising/recursing. Update this variable if necessary, and [e2e-predictor] # leave `release_conn` constant throughout the function. That way, if [e2e-predictor] # the function recurses, the original value of `release_conn` will be [e2e-predictor] # passed down into the recursive call, and its value will be respected. [e2e-predictor] # [e2e-predictor] # See issue #651 [1] for details. [e2e-predictor] # [e2e-predictor] # [1] [e2e-predictor] release_this_conn = release_conn [e2e-predictor] [e2e-predictor] http_tunnel_required = connection_requires_http_tunnel( [e2e-predictor] self.proxy, self.proxy_config, destination_scheme [e2e-predictor] ) [e2e-predictor] [e2e-predictor] # Merge the proxy headers. Only done when not using HTTP CONNECT. We [e2e-predictor] # have to copy the headers dict so we can safely change it without those [e2e-predictor] # changes being reflected in anyone else's copy. 
[e2e-predictor] if not http_tunnel_required: [e2e-predictor] headers = headers.copy() # type: ignore[attr-defined] [e2e-predictor] headers.update(self.proxy_headers) # type: ignore[union-attr] [e2e-predictor] [e2e-predictor] # Must keep the exception bound to a separate variable or else Python 3 [e2e-predictor] # complains about UnboundLocalError. [e2e-predictor] err = None [e2e-predictor] [e2e-predictor] # Keep track of whether we cleanly exited the except block. This [e2e-predictor] # ensures we do proper cleanup in finally. [e2e-predictor] clean_exit = False [e2e-predictor] [e2e-predictor] # Rewind body position, if needed. Record current position [e2e-predictor] # for future rewinds in the event of a redirect/retry. [e2e-predictor] body_pos = set_file_position(body, body_pos) [e2e-predictor] [e2e-predictor] try: [e2e-predictor] # Request a connection from the queue. [e2e-predictor] timeout_obj = self._get_timeout(timeout) [e2e-predictor] conn = self._get_conn(timeout=pool_timeout) [e2e-predictor] [e2e-predictor] conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] [e2e-predictor] [e2e-predictor] # Is this a closed/new connection that requires CONNECT tunnelling? [e2e-predictor] if self.proxy is not None and http_tunnel_required and conn.is_closed: [e2e-predictor] try: [e2e-predictor] self._prepare_proxy(conn) [e2e-predictor] except (BaseSSLError, OSError, SocketTimeout) as e: [e2e-predictor] self._raise_timeout( [e2e-predictor] err=e, url=self.proxy.url, timeout_value=conn.timeout [e2e-predictor] ) [e2e-predictor] raise [e2e-predictor] [e2e-predictor] # If we're going to release the connection in ``finally:``, then [e2e-predictor] # the response doesn't need to know about the connection. Otherwise [e2e-predictor] # it will also try to release it and we'll have a double-release [e2e-predictor] # mess. 
[e2e-predictor] response_conn = conn if not release_conn else None [e2e-predictor] [e2e-predictor] # Make the request on the HTTPConnection object [e2e-predictor] response = self._make_request( [e2e-predictor] conn, [e2e-predictor] method, [e2e-predictor] url, [e2e-predictor] timeout=timeout_obj, [e2e-predictor] body=body, [e2e-predictor] headers=headers, [e2e-predictor] chunked=chunked, [e2e-predictor] retries=retries, [e2e-predictor] response_conn=response_conn, [e2e-predictor] preload_content=preload_content, [e2e-predictor] decode_content=decode_content, [e2e-predictor] **response_kw, [e2e-predictor] ) [e2e-predictor] [e2e-predictor] # Everything went great! [e2e-predictor] clean_exit = True [e2e-predictor] [e2e-predictor] except EmptyPoolError: [e2e-predictor] # Didn't get a connection from the pool, no need to clean up [e2e-predictor] clean_exit = True [e2e-predictor] release_this_conn = False [e2e-predictor] raise [e2e-predictor] [e2e-predictor] except ( [e2e-predictor] TimeoutError, [e2e-predictor] HTTPException, [e2e-predictor] OSError, [e2e-predictor] ProtocolError, [e2e-predictor] BaseSSLError, [e2e-predictor] SSLError, [e2e-predictor] CertificateError, [e2e-predictor] ProxyError, [e2e-predictor] ) as e: [e2e-predictor] # Discard the connection for these exceptions. It will be [e2e-predictor] # replaced during the next _get_conn() call. 
[e2e-predictor] clean_exit = False [e2e-predictor] new_e: Exception = e [e2e-predictor] if isinstance(e, (BaseSSLError, CertificateError)): [e2e-predictor] new_e = SSLError(e) [e2e-predictor] if isinstance( [e2e-predictor] new_e, [e2e-predictor] ( [e2e-predictor] OSError, [e2e-predictor] NewConnectionError, [e2e-predictor] TimeoutError, [e2e-predictor] SSLError, [e2e-predictor] HTTPException, [e2e-predictor] ), [e2e-predictor] ) and (conn and conn.proxy and not conn.has_connected_to_proxy): [e2e-predictor] new_e = _wrap_proxy_error(new_e, conn.proxy.scheme) [e2e-predictor] elif isinstance(new_e, (OSError, HTTPException)): [e2e-predictor] new_e = ProtocolError("Connection aborted.", new_e) [e2e-predictor] [e2e-predictor] retries = retries.increment( [e2e-predictor] method, url, error=new_e, _pool=self, _stacktrace=sys.exc_info()[2] [e2e-predictor] ) [e2e-predictor] retries.sleep() [e2e-predictor] [e2e-predictor] # Keep track of the error for the retry warning. [e2e-predictor] err = e [e2e-predictor] [e2e-predictor] finally: [e2e-predictor] if not clean_exit: [e2e-predictor] # We hit some kind of exception, handled or otherwise. We need [e2e-predictor] # to throw the connection away unless explicitly told not to. [e2e-predictor] # Close the connection, set the variable to None, and make sure [e2e-predictor] # we put the None back in the pool to avoid leaking it. [e2e-predictor] if conn: [e2e-predictor] conn.close() [e2e-predictor] conn = None [e2e-predictor] release_this_conn = True [e2e-predictor] [e2e-predictor] if release_this_conn: [e2e-predictor] # Put the connection back to be reused. If the connection is [e2e-predictor] # expired then it will be None, which will get replaced with a [e2e-predictor] # fresh connection during _get_conn. 
[e2e-predictor] self._put_conn(conn) [e2e-predictor] [e2e-predictor] if not conn: [e2e-predictor] # Try again [e2e-predictor] log.warning( [e2e-predictor] "Retrying (%r) after connection broken by '%r': %s", retries, err, url [e2e-predictor] ) [e2e-predictor] > return self.urlopen( [e2e-predictor] method, [e2e-predictor] url, [e2e-predictor] body, [e2e-predictor] headers, [e2e-predictor] retries, [e2e-predictor] redirect, [e2e-predictor] assert_same_host, [e2e-predictor] timeout=timeout, [e2e-predictor] pool_timeout=pool_timeout, [e2e-predictor] release_conn=release_conn, [e2e-predictor] chunked=chunked, [e2e-predictor] body_pos=body_pos, [e2e-predictor] preload_content=preload_content, [e2e-predictor] decode_content=decode_content, [e2e-predictor] **response_kw, [e2e-predictor] ) [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:871: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] method = 'POST' [e2e-predictor] url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices' [e2e-predictor] body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io... 
"128Mi"}}, "runtime": "kserve-predictiveserver", "storageUri": "gs://kfserving-examples/models/sklearn/1.0/model"}}}}' [e2e-predictor] headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'}) [e2e-predictor] retries = Retry(total=0, connect=None, read=None, redirect=None, status=None) [e2e-predictor] redirect = False, assert_same_host = False, timeout = None, pool_timeout = None [e2e-predictor] release_conn = True, chunked = False, body_pos = None, preload_content = True [e2e-predictor] decode_content = True, response_kw = {} [e2e-predictor] parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices', query=None, fragment=None) [e2e-predictor] destination_scheme = None, conn = None, release_this_conn = True [e2e-predictor] http_tunnel_required = False [e2e-predictor] err = NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.c...a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)") [e2e-predictor] clean_exit = False [e2e-predictor] [e2e-predictor] def urlopen( # type: ignore[override] [e2e-predictor] self, [e2e-predictor] method: str, [e2e-predictor] url: str, [e2e-predictor] body: _TYPE_BODY | None = None, [e2e-predictor] headers: typing.Mapping[str, str] | None = None, [e2e-predictor] retries: Retry | bool | int | None = None, [e2e-predictor] redirect: bool = True, [e2e-predictor] assert_same_host: bool = True, [e2e-predictor] timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, [e2e-predictor] pool_timeout: int | None = None, [e2e-predictor] release_conn: bool | None = None, [e2e-predictor] chunked: bool = False, [e2e-predictor] body_pos: _TYPE_BODY_POSITION | None = None, [e2e-predictor] preload_content: bool = True, [e2e-predictor] decode_content: bool = True, 
[e2e-predictor] **response_kw: typing.Any, [e2e-predictor] ) -> BaseHTTPResponse: [e2e-predictor] """ [e2e-predictor] Get a connection from the pool and perform an HTTP request. This is the [e2e-predictor] lowest level call for making a request, so you'll need to specify all [e2e-predictor] the raw details. [e2e-predictor] [e2e-predictor] .. note:: [e2e-predictor] [e2e-predictor] More commonly, it's appropriate to use a convenience method [e2e-predictor] such as :meth:`request`. [e2e-predictor] [e2e-predictor] .. note:: [e2e-predictor] [e2e-predictor] `release_conn` will only behave as expected if [e2e-predictor] `preload_content=False` because we want to make [e2e-predictor] `preload_content=False` the default behaviour someday soon without [e2e-predictor] breaking backwards compatibility. [e2e-predictor] [e2e-predictor] :param method: [e2e-predictor] HTTP request method (such as GET, POST, PUT, etc.) [e2e-predictor] [e2e-predictor] :param url: [e2e-predictor] The URL to perform the request on. [e2e-predictor] [e2e-predictor] :param body: [e2e-predictor] Data to send in the request body, either :class:`str`, :class:`bytes`, [e2e-predictor] an iterable of :class:`str`/:class:`bytes`, or a file-like object. [e2e-predictor] [e2e-predictor] :param headers: [e2e-predictor] Dictionary of custom headers to send, such as User-Agent, [e2e-predictor] If-None-Match, etc. If None, pool headers are used. If provided, [e2e-predictor] these headers completely replace any pool-specific headers. [e2e-predictor] [e2e-predictor] :param retries: [e2e-predictor] Configure the number of retries to allow before raising a [e2e-predictor] :class:`~urllib3.exceptions.MaxRetryError` exception. [e2e-predictor] [e2e-predictor] If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a [e2e-predictor] :class:`~urllib3.util.retry.Retry` object for fine-grained control [e2e-predictor] over different types of retries. 
[e2e-predictor] Pass an integer number to retry connection errors that many times, [e2e-predictor] but no other types of errors. Pass zero to never retry. [e2e-predictor] [e2e-predictor] If ``False``, then retries are disabled and any exception is raised [e2e-predictor] immediately. Also, instead of raising a MaxRetryError on redirects, [e2e-predictor] the redirect response will be returned. [e2e-predictor] [e2e-predictor] :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. [e2e-predictor] [e2e-predictor] :param redirect: [e2e-predictor] If True, automatically handle redirects (status codes 301, 302, [e2e-predictor] 303, 307, 308). Each redirect counts as a retry. Disabling retries [e2e-predictor] will disable redirect, too. [e2e-predictor] [e2e-predictor] :param assert_same_host: [e2e-predictor] If ``True``, will make sure that the host of the pool requests is [e2e-predictor] consistent else will raise HostChangedError. When ``False``, you can [e2e-predictor] use the pool on an HTTP proxy and request foreign hosts. [e2e-predictor] [e2e-predictor] :param timeout: [e2e-predictor] If specified, overrides the default timeout for this one [e2e-predictor] request. It may be a float (in seconds) or an instance of [e2e-predictor] :class:`urllib3.util.Timeout`. [e2e-predictor] [e2e-predictor] :param pool_timeout: [e2e-predictor] If set and the pool is set to block=True, then this method will [e2e-predictor] block for ``pool_timeout`` seconds and raise EmptyPoolError if no [e2e-predictor] connection is available within the time period. [e2e-predictor] [e2e-predictor] :param bool preload_content: [e2e-predictor] If True, the response's body will be preloaded into memory. [e2e-predictor] [e2e-predictor] :param bool decode_content: [e2e-predictor] If True, will attempt to decode the body based on the [e2e-predictor] 'content-encoding' header. 
[e2e-predictor] [e2e-predictor] :param release_conn: [e2e-predictor] If False, then the urlopen call will not release the connection [e2e-predictor] back into the pool once a response is received (but will release if [e2e-predictor] you read the entire contents of the response such as when [e2e-predictor] `preload_content=True`). This is useful if you're not preloading [e2e-predictor] the response's content immediately. You will need to call [e2e-predictor] ``r.release_conn()`` on the response ``r`` to return the connection [e2e-predictor] back into the pool. If None, it takes the value of ``preload_content`` [e2e-predictor] which defaults to ``True``. [e2e-predictor] [e2e-predictor] :param bool chunked: [e2e-predictor] If True, urllib3 will send the body using chunked transfer [e2e-predictor] encoding. Otherwise, urllib3 will send the body using the standard [e2e-predictor] content-length form. Defaults to False. [e2e-predictor] [e2e-predictor] :param int body_pos: [e2e-predictor] Position to seek to in file-like body in the event of a retry or [e2e-predictor] redirect. Typically this won't need to be set because urllib3 will [e2e-predictor] auto-populate the value when needed. 
[e2e-predictor] """ [e2e-predictor] parsed_url = parse_url(url) [e2e-predictor] destination_scheme = parsed_url.scheme [e2e-predictor] [e2e-predictor] if headers is None: [e2e-predictor] headers = self.headers [e2e-predictor] [e2e-predictor] if not isinstance(retries, Retry): [e2e-predictor] retries = Retry.from_int(retries, redirect=redirect, default=self.retries) [e2e-predictor] [e2e-predictor] if release_conn is None: [e2e-predictor] release_conn = preload_content [e2e-predictor] [e2e-predictor] # Check host [e2e-predictor] if assert_same_host and not self.is_same_host(url): [e2e-predictor] raise HostChangedError(self, url, retries) [e2e-predictor] [e2e-predictor] # Ensure that the URL we're connecting to is properly encoded [e2e-predictor] if url.startswith("/"): [e2e-predictor] url = to_str(_encode_target(url)) [e2e-predictor] else: [e2e-predictor] url = to_str(parsed_url.url) [e2e-predictor] [e2e-predictor] conn = None [e2e-predictor] [e2e-predictor] # Track whether `conn` needs to be released before [e2e-predictor] # returning/raising/recursing. Update this variable if necessary, and [e2e-predictor] # leave `release_conn` constant throughout the function. That way, if [e2e-predictor] # the function recurses, the original value of `release_conn` will be [e2e-predictor] # passed down into the recursive call, and its value will be respected. [e2e-predictor] # [e2e-predictor] # See issue #651 [1] for details. [e2e-predictor] # [e2e-predictor] # [1] [e2e-predictor] release_this_conn = release_conn [e2e-predictor] [e2e-predictor] http_tunnel_required = connection_requires_http_tunnel( [e2e-predictor] self.proxy, self.proxy_config, destination_scheme [e2e-predictor] ) [e2e-predictor] [e2e-predictor] # Merge the proxy headers. Only done when not using HTTP CONNECT. We [e2e-predictor] # have to copy the headers dict so we can safely change it without those [e2e-predictor] # changes being reflected in anyone else's copy. 
[e2e-predictor] if not http_tunnel_required: [e2e-predictor] headers = headers.copy() # type: ignore[attr-defined] [e2e-predictor] headers.update(self.proxy_headers) # type: ignore[union-attr] [e2e-predictor] [e2e-predictor] # Must keep the exception bound to a separate variable or else Python 3 [e2e-predictor] # complains about UnboundLocalError. [e2e-predictor] err = None [e2e-predictor] [e2e-predictor] # Keep track of whether we cleanly exited the except block. This [e2e-predictor] # ensures we do proper cleanup in finally. [e2e-predictor] clean_exit = False [e2e-predictor] [e2e-predictor] # Rewind body position, if needed. Record current position [e2e-predictor] # for future rewinds in the event of a redirect/retry. [e2e-predictor] body_pos = set_file_position(body, body_pos) [e2e-predictor] [e2e-predictor] try: [e2e-predictor] # Request a connection from the queue. [e2e-predictor] timeout_obj = self._get_timeout(timeout) [e2e-predictor] conn = self._get_conn(timeout=pool_timeout) [e2e-predictor] [e2e-predictor] conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] [e2e-predictor] [e2e-predictor] # Is this a closed/new connection that requires CONNECT tunnelling? [e2e-predictor] if self.proxy is not None and http_tunnel_required and conn.is_closed: [e2e-predictor] try: [e2e-predictor] self._prepare_proxy(conn) [e2e-predictor] except (BaseSSLError, OSError, SocketTimeout) as e: [e2e-predictor] self._raise_timeout( [e2e-predictor] err=e, url=self.proxy.url, timeout_value=conn.timeout [e2e-predictor] ) [e2e-predictor] raise [e2e-predictor] [e2e-predictor] # If we're going to release the connection in ``finally:``, then [e2e-predictor] # the response doesn't need to know about the connection. Otherwise [e2e-predictor] # it will also try to release it and we'll have a double-release [e2e-predictor] # mess. 
[e2e-predictor] response_conn = conn if not release_conn else None [e2e-predictor] [e2e-predictor] # Make the request on the HTTPConnection object [e2e-predictor] response = self._make_request( [e2e-predictor] conn, [e2e-predictor] method, [e2e-predictor] url, [e2e-predictor] timeout=timeout_obj, [e2e-predictor] body=body, [e2e-predictor] headers=headers, [e2e-predictor] chunked=chunked, [e2e-predictor] retries=retries, [e2e-predictor] response_conn=response_conn, [e2e-predictor] preload_content=preload_content, [e2e-predictor] decode_content=decode_content, [e2e-predictor] **response_kw, [e2e-predictor] ) [e2e-predictor] [e2e-predictor] # Everything went great! [e2e-predictor] clean_exit = True [e2e-predictor] [e2e-predictor] except EmptyPoolError: [e2e-predictor] # Didn't get a connection from the pool, no need to clean up [e2e-predictor] clean_exit = True [e2e-predictor] release_this_conn = False [e2e-predictor] raise [e2e-predictor] [e2e-predictor] except ( [e2e-predictor] TimeoutError, [e2e-predictor] HTTPException, [e2e-predictor] OSError, [e2e-predictor] ProtocolError, [e2e-predictor] BaseSSLError, [e2e-predictor] SSLError, [e2e-predictor] CertificateError, [e2e-predictor] ProxyError, [e2e-predictor] ) as e: [e2e-predictor] # Discard the connection for these exceptions. It will be [e2e-predictor] # replaced during the next _get_conn() call. 
[e2e-predictor] clean_exit = False [e2e-predictor] new_e: Exception = e [e2e-predictor] if isinstance(e, (BaseSSLError, CertificateError)): [e2e-predictor] new_e = SSLError(e) [e2e-predictor] if isinstance( [e2e-predictor] new_e, [e2e-predictor] ( [e2e-predictor] OSError, [e2e-predictor] NewConnectionError, [e2e-predictor] TimeoutError, [e2e-predictor] SSLError, [e2e-predictor] HTTPException, [e2e-predictor] ), [e2e-predictor] ) and (conn and conn.proxy and not conn.has_connected_to_proxy): [e2e-predictor] new_e = _wrap_proxy_error(new_e, conn.proxy.scheme) [e2e-predictor] elif isinstance(new_e, (OSError, HTTPException)): [e2e-predictor] new_e = ProtocolError("Connection aborted.", new_e) [e2e-predictor] [e2e-predictor] retries = retries.increment( [e2e-predictor] method, url, error=new_e, _pool=self, _stacktrace=sys.exc_info()[2] [e2e-predictor] ) [e2e-predictor] retries.sleep() [e2e-predictor] [e2e-predictor] # Keep track of the error for the retry warning. [e2e-predictor] err = e [e2e-predictor] [e2e-predictor] finally: [e2e-predictor] if not clean_exit: [e2e-predictor] # We hit some kind of exception, handled or otherwise. We need [e2e-predictor] # to throw the connection away unless explicitly told not to. [e2e-predictor] # Close the connection, set the variable to None, and make sure [e2e-predictor] # we put the None back in the pool to avoid leaking it. [e2e-predictor] if conn: [e2e-predictor] conn.close() [e2e-predictor] conn = None [e2e-predictor] release_this_conn = True [e2e-predictor] [e2e-predictor] if release_this_conn: [e2e-predictor] # Put the connection back to be reused. If the connection is [e2e-predictor] # expired then it will be None, which will get replaced with a [e2e-predictor] # fresh connection during _get_conn. 
[e2e-predictor] self._put_conn(conn) [e2e-predictor] [e2e-predictor] if not conn: [e2e-predictor] # Try again [e2e-predictor] log.warning( [e2e-predictor] "Retrying (%r) after connection broken by '%r': %s", retries, err, url [e2e-predictor] ) [e2e-predictor] > return self.urlopen( [e2e-predictor] method, [e2e-predictor] url, [e2e-predictor] body, [e2e-predictor] headers, [e2e-predictor] retries, [e2e-predictor] redirect, [e2e-predictor] assert_same_host, [e2e-predictor] timeout=timeout, [e2e-predictor] pool_timeout=pool_timeout, [e2e-predictor] release_conn=release_conn, [e2e-predictor] chunked=chunked, [e2e-predictor] body_pos=body_pos, [e2e-predictor] preload_content=preload_content, [e2e-predictor] decode_content=decode_content, [e2e-predictor] **response_kw, [e2e-predictor] ) [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:871: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] method = 'POST' [e2e-predictor] url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices' [e2e-predictor] body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io... 
"128Mi"}}, "runtime": "kserve-predictiveserver", "storageUri": "gs://kfserving-examples/models/sklearn/1.0/model"}}}}' [e2e-predictor] headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'}) [e2e-predictor] retries = Retry(total=0, connect=None, read=None, redirect=None, status=None) [e2e-predictor] redirect = False, assert_same_host = False, timeout = None, pool_timeout = None [e2e-predictor] release_conn = True, chunked = False, body_pos = None, preload_content = True [e2e-predictor] decode_content = True, response_kw = {} [e2e-predictor] parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices', query=None, fragment=None) [e2e-predictor] destination_scheme = None, conn = None, release_this_conn = True [e2e-predictor] http_tunnel_required = False, err = None, clean_exit = False [e2e-predictor] [e2e-predictor] def urlopen( # type: ignore[override] [e2e-predictor] self, [e2e-predictor] method: str, [e2e-predictor] url: str, [e2e-predictor] body: _TYPE_BODY | None = None, [e2e-predictor] headers: typing.Mapping[str, str] | None = None, [e2e-predictor] retries: Retry | bool | int | None = None, [e2e-predictor] redirect: bool = True, [e2e-predictor] assert_same_host: bool = True, [e2e-predictor] timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, [e2e-predictor] pool_timeout: int | None = None, [e2e-predictor] release_conn: bool | None = None, [e2e-predictor] chunked: bool = False, [e2e-predictor] body_pos: _TYPE_BODY_POSITION | None = None, [e2e-predictor] preload_content: bool = True, [e2e-predictor] decode_content: bool = True, [e2e-predictor] **response_kw: typing.Any, [e2e-predictor] ) -> BaseHTTPResponse: [e2e-predictor] """ [e2e-predictor] Get a connection from the pool and perform an HTTP request. 
        This is the lowest level call for making a request, so you'll need to
        specify all the raw details.

        .. note::

           More commonly, it's appropriate to use a convenience method
           such as :meth:`request`.

        .. note::

           `release_conn` will only behave as expected if
           `preload_content=False` because we want to make
           `preload_content=False` the default behaviour someday soon without
           breaking backwards compatibility.

        :param method:
            HTTP request method (such as GET, POST, PUT, etc.)

        :param url:
            The URL to perform the request on.

        :param body:
            Data to send in the request body, either :class:`str`, :class:`bytes`,
            an iterable of :class:`str`/:class:`bytes`, or a file-like object.

        :param headers:
            Dictionary of custom headers to send, such as User-Agent,
            If-None-Match, etc. If None, pool headers are used. If provided,
            these headers completely replace any pool-specific headers.

        :param retries:
            Configure the number of retries to allow before raising a
            :class:`~urllib3.exceptions.MaxRetryError` exception.

            If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a
            :class:`~urllib3.util.retry.Retry` object for fine-grained control
            over different types of retries.
            Pass an integer number to retry connection errors that many times,
            but no other types of errors. Pass zero to never retry.

            If ``False``, then retries are disabled and any exception is raised
            immediately. Also, instead of raising a MaxRetryError on redirects,
            the redirect response will be returned.

        :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.

        :param redirect:
            If True, automatically handle redirects (status codes 301, 302,
            303, 307, 308). Each redirect counts as a retry. Disabling retries
            will disable redirect, too.

        :param assert_same_host:
            If ``True``, will make sure that the host of the pool requests is
            consistent else will raise HostChangedError. When ``False``, you can
            use the pool on an HTTP proxy and request foreign hosts.

        :param timeout:
            If specified, overrides the default timeout for this one
            request. It may be a float (in seconds) or an instance of
            :class:`urllib3.util.Timeout`.

        :param pool_timeout:
            If set and the pool is set to block=True, then this method will
            block for ``pool_timeout`` seconds and raise EmptyPoolError if no
            connection is available within the time period.

        :param bool preload_content:
            If True, the response's body will be preloaded into memory.

        :param bool decode_content:
            If True, will attempt to decode the body based on the
            'content-encoding' header.

        :param release_conn:
            If False, then the urlopen call will not release the connection
            back into the pool once a response is received (but will release if
            you read the entire contents of the response such as when
            `preload_content=True`). This is useful if you're not preloading
            the response's content immediately. You will need to call
            ``r.release_conn()`` on the response ``r`` to return the connection
            back into the pool. If None, it takes the value of ``preload_content``
            which defaults to ``True``.

        :param bool chunked:
            If True, urllib3 will send the body using chunked transfer
            encoding. Otherwise, urllib3 will send the body using the standard
            content-length form. Defaults to False.

        :param int body_pos:
            Position to seek to in file-like body in the event of a retry or
            redirect. Typically this won't need to be set because urllib3 will
            auto-populate the value when needed.
        """
        parsed_url = parse_url(url)
        destination_scheme = parsed_url.scheme

        if headers is None:
            headers = self.headers

        if not isinstance(retries, Retry):
            retries = Retry.from_int(retries, redirect=redirect, default=self.retries)

        if release_conn is None:
            release_conn = preload_content

        # Check host
        if assert_same_host and not self.is_same_host(url):
            raise HostChangedError(self, url, retries)

        # Ensure that the URL we're connecting to is properly encoded
        if url.startswith("/"):
            url = to_str(_encode_target(url))
        else:
            url = to_str(parsed_url.url)

        conn = None

        # Track whether `conn` needs to be released before
        # returning/raising/recursing. Update this variable if necessary, and
        # leave `release_conn` constant throughout the function. That way, if
        # the function recurses, the original value of `release_conn` will be
        # passed down into the recursive call, and its value will be respected.
        #
        # See issue #651 [1] for details.
        #
        # [1]
        release_this_conn = release_conn

        http_tunnel_required = connection_requires_http_tunnel(
            self.proxy, self.proxy_config, destination_scheme
        )

        # Merge the proxy headers. Only done when not using HTTP CONNECT. We
        # have to copy the headers dict so we can safely change it without those
        # changes being reflected in anyone else's copy.
        if not http_tunnel_required:
            headers = headers.copy()  # type: ignore[attr-defined]
            headers.update(self.proxy_headers)  # type: ignore[union-attr]

        # Must keep the exception bound to a separate variable or else Python 3
        # complains about UnboundLocalError.
        err = None

        # Keep track of whether we cleanly exited the except block. This
        # ensures we do proper cleanup in finally.
        clean_exit = False

        # Rewind body position, if needed. Record current position
        # for future rewinds in the event of a redirect/retry.
        body_pos = set_file_position(body, body_pos)

        try:
            # Request a connection from the queue.
            timeout_obj = self._get_timeout(timeout)
            conn = self._get_conn(timeout=pool_timeout)

            conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]

            # Is this a closed/new connection that requires CONNECT tunnelling?
            if self.proxy is not None and http_tunnel_required and conn.is_closed:
                try:
                    self._prepare_proxy(conn)
                except (BaseSSLError, OSError, SocketTimeout) as e:
                    self._raise_timeout(
                        err=e, url=self.proxy.url, timeout_value=conn.timeout
                    )
                    raise

            # If we're going to release the connection in ``finally:``, then
            # the response doesn't need to know about the connection. Otherwise
            # it will also try to release it and we'll have a double-release
            # mess.
            response_conn = conn if not release_conn else None

            # Make the request on the HTTPConnection object
            response = self._make_request(
                conn,
                method,
                url,
                timeout=timeout_obj,
                body=body,
                headers=headers,
                chunked=chunked,
                retries=retries,
                response_conn=response_conn,
                preload_content=preload_content,
                decode_content=decode_content,
                **response_kw,
            )

            # Everything went great!
            clean_exit = True

        except EmptyPoolError:
            # Didn't get a connection from the pool, no need to clean up
            clean_exit = True
            release_this_conn = False
            raise

        except (
            TimeoutError,
            HTTPException,
            OSError,
            ProtocolError,
            BaseSSLError,
            SSLError,
            CertificateError,
            ProxyError,
        ) as e:
            # Discard the connection for these exceptions. It will be
            # replaced during the next _get_conn() call.
            clean_exit = False
            new_e: Exception = e
            if isinstance(e, (BaseSSLError, CertificateError)):
                new_e = SSLError(e)
            if isinstance(
                new_e,
                (
                    OSError,
                    NewConnectionError,
                    TimeoutError,
                    SSLError,
                    HTTPException,
                ),
            ) and (conn and conn.proxy and not conn.has_connected_to_proxy):
                new_e = _wrap_proxy_error(new_e, conn.proxy.scheme)
            elif isinstance(new_e, (OSError, HTTPException)):
                new_e = ProtocolError("Connection aborted.", new_e)

>           retries = retries.increment(
                method, url, error=new_e, _pool=self, _stacktrace=sys.exc_info()[2]
            )

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:841:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = Retry(total=0, connect=None, read=None, redirect=None, status=None)
method = 'POST'
url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
response = None
error = NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.c...a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")
_pool = 
_stacktrace = 

    def increment(
        self,
        method: str | None = None,
        url: str | None = None,
        response: BaseHTTPResponse | None = None,
        error: Exception | None = None,
        _pool: ConnectionPool | None = None,
        _stacktrace: TracebackType | None = None,
    ) -> Self:
        """Return a new Retry object with incremented retry counters.

        :param response: A response object, or None, if the server did not
            return a response.
        :type response: :class:`~urllib3.response.BaseHTTPResponse`
        :param Exception error: An error encountered during the request, or
            None if the response was received successfully.

        :return: A new ``Retry`` object.
        """
        if self.total is False and error:
            # Disabled, indicate to re-raise the error.
            raise reraise(type(error), error, _stacktrace)

        total = self.total
        if total is not None:
            total -= 1

        connect = self.connect
        read = self.read
        redirect = self.redirect
        status_count = self.status
        other = self.other
        cause = "unknown"
        status = None
        redirect_location = None

        if error and self._is_connection_error(error):
            # Connect retry?
            if connect is False:
                raise reraise(type(error), error, _stacktrace)
            elif connect is not None:
                connect -= 1

        elif error and self._is_read_error(error):
            # Read retry?
            if read is False or method is None or not self._is_method_retryable(method):
                raise reraise(type(error), error, _stacktrace)
            elif read is not None:
                read -= 1

        elif error:
            # Other retry?
            if other is not None:
                other -= 1

        elif response and response.get_redirect_location():
            # Redirect retry?
            if redirect is not None:
                redirect -= 1
            cause = "too many redirects"
            response_redirect_location = response.get_redirect_location()
            if response_redirect_location:
                redirect_location = response_redirect_location
            status = response.status

        else:
            # Incrementing because of a server error like a 500 in
            # status_forcelist and the given method is in the allowed_methods
            cause = ResponseError.GENERIC_ERROR
            if response and response.status:
                if status_count is not None:
                    status_count -= 1
                cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
                status = response.status

        history = self.history + (
            RequestHistory(method, url, error, status, redirect_location),
        )

        new_retry = self.new(
            total=total,
            connect=connect,
            read=read,
            redirect=redirect,
            status=status_count,
            other=other,
            history=history,
        )

        if new_retry.is_exhausted():
            reason = error or ResponseError(cause)
>           raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
E           urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Max retries exceeded with url: /apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices (Caused by NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)"))

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError
------------------------------ Captured log call -------------------------------
WARNING urllib3.connectionpool:connectionpool.py:868 Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")': /apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices
WARNING urllib3.connectionpool:connectionpool.py:868 Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")': /apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices
WARNING urllib3.connectionpool:connectionpool.py:868 Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")': /apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices
__________________________ test_predictive_xgboost_v2 __________________________
[gw0] linux -- Python 3.11.13 /workspace/source/python/kserve/.venv/bin/python

self = 

    def _new_conn(self) -> socket.socket:
        """Establish a socket connection and set nodelay settings on it.

        :return: New socket connection.
        """
        try:
>           sock = connection.create_connection(
                (self._dns_host, self.port),
                self.timeout,
                source_address=self.source_address,
                socket_options=self.socket_options,
            )

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connection.py:204:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

address = ('a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', 6443)
timeout = None, source_address = None, socket_options = [(6, 1, 1)]

    def create_connection(
        address: tuple[str, int],
        timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
        source_address: tuple[str, int] | None = None,
        socket_options: _TYPE_SOCKET_OPTIONS | None = None,
    ) -> socket.socket:
        """Connect to *address* and return the socket object.

        Convenience function. Connect to *address* (a 2-tuple ``(host,
        port)``) and return the socket object. Passing the optional
        *timeout* parameter will set the timeout on the socket instance
        before attempting to connect.
        If no *timeout* is supplied, the
        global default timeout setting returned by :func:`socket.getdefaulttimeout`
        is used. If *source_address* is set it must be a tuple of (host, port)
        for the socket to bind as a source address before making the connection.
        An host of '' or port 0 tells the OS to use the default.
        """

        host, port = address
        if host.startswith("["):
            host = host.strip("[]")
        err = None

        # Using the value from allowed_gai_family() in the context of getaddrinfo lets
        # us select whether to work with IPv4 DNS records, IPv6 records, or both.
        # The original create_connection function always returns all records.
        family = allowed_gai_family()

        try:
            host.encode("idna")
        except UnicodeError:
            raise LocationParseError(f"'{host}', label empty or too long") from None

>       for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/util/connection.py:60:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

host = 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com'
port = 6443, family = 
type = , proto = 0, flags = 0

    def getaddrinfo(host, port, family=0, type=0, proto=0, flags=0):
        """Resolve host and port into list of address info entries.

        Translate the host/port argument into a sequence of 5-tuples that contain
        all the necessary arguments for creating a socket connected to that service.
        host is a domain name, a string representation of an IPv4/v6 address or
        None. port is a string service name such as 'http', a numeric port number or
        None. By passing None as the value of host and port, you can pass NULL to
        the underlying C API.

        The family, type and proto arguments can be optionally specified in order to
        narrow the list of addresses returned. Passing zero as a value for each of
        these arguments selects the full range of results.
        """
        # We override this function since we want to translate the numeric family
        # and socket type values to enum constants.
        addrlist = []
>       for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
E       socket.gaierror: [Errno -2] Name or service not known

/usr/lib64/python3.11/socket.py:974: gaierror

The above exception was the direct cause of the following exception:

self = 
method = 'POST'
url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io... "128Mi"}}, "runtime": "kserve-predictiveserver", "storageUri": "gs://kfserving-examples/models/xgboost/1.5/model"}}}}'
headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'})
retries = Retry(total=0, connect=None, read=None, redirect=None, status=None)
redirect = False, assert_same_host = False, timeout = None, pool_timeout = None
release_conn = True, chunked = False, body_pos = None, preload_content = True
decode_content = True, response_kw = {}
parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices', query=None, fragment=None)
destination_scheme = None, conn = None, release_this_conn = True
http_tunnel_required = False, err = None, clean_exit = False

    def urlopen(  # type: ignore[override]
        self,
        method: str,
        url: str,
        body: _TYPE_BODY | None = None,
        headers: typing.Mapping[str, str] | None = None,
        retries: Retry | bool | int | None = None,
        redirect: bool = True,
        assert_same_host: bool = True,
        timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
        pool_timeout: int | None = None,
        release_conn: bool | None = None,
        chunked: bool = False,
        body_pos: _TYPE_BODY_POSITION | None = None,
        preload_content: bool = True,
        decode_content: bool = True,
        **response_kw: typing.Any,
    ) -> BaseHTTPResponse:
        """
        Get a connection from the pool and perform an HTTP request.
        This is the lowest level call for making a request, so you'll need to
        specify all the raw details.

        .. note::

           More commonly, it's appropriate to use a convenience method
           such as :meth:`request`.

        .. note::

           `release_conn` will only behave as expected if
           `preload_content=False` because we want to make
           `preload_content=False` the default behaviour someday soon without
           breaking backwards compatibility.

        :param method:
            HTTP request method (such as GET, POST, PUT, etc.)

        :param url:
            The URL to perform the request on.

        :param body:
            Data to send in the request body, either :class:`str`, :class:`bytes`,
            an iterable of :class:`str`/:class:`bytes`, or a file-like object.

        :param headers:
            Dictionary of custom headers to send, such as User-Agent,
            If-None-Match, etc. If None, pool headers are used. If provided,
            these headers completely replace any pool-specific headers.

        :param retries:
            Configure the number of retries to allow before raising a
            :class:`~urllib3.exceptions.MaxRetryError` exception.

            If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a
            :class:`~urllib3.util.retry.Retry` object for fine-grained control
            over different types of retries.
            Pass an integer number to retry connection errors that many times,
            but no other types of errors. Pass zero to never retry.

            If ``False``, then retries are disabled and any exception is raised
            immediately. Also, instead of raising a MaxRetryError on redirects,
            the redirect response will be returned.

        :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.

        :param redirect:
            If True, automatically handle redirects (status codes 301, 302,
            303, 307, 308). Each redirect counts as a retry. Disabling retries
            will disable redirect, too.

        :param assert_same_host:
            If ``True``, will make sure that the host of the pool requests is
            consistent else will raise HostChangedError. When ``False``, you can
            use the pool on an HTTP proxy and request foreign hosts.

        :param timeout:
            If specified, overrides the default timeout for this one
            request. It may be a float (in seconds) or an instance of
            :class:`urllib3.util.Timeout`.

        :param pool_timeout:
            If set and the pool is set to block=True, then this method will
            block for ``pool_timeout`` seconds and raise EmptyPoolError if no
            connection is available within the time period.

        :param bool preload_content:
            If True, the response's body will be preloaded into memory.

        :param bool decode_content:
            If True, will attempt to decode the body based on the
            'content-encoding' header.

        :param release_conn:
            If False, then the urlopen call will not release the connection
            back into the pool once a response is received (but will release if
            you read the entire contents of the response such as when
            `preload_content=True`). This is useful if you're not preloading
            the response's content immediately. You will need to call
            ``r.release_conn()`` on the response ``r`` to return the connection
            back into the pool. If None, it takes the value of ``preload_content``
            which defaults to ``True``.

        :param bool chunked:
            If True, urllib3 will send the body using chunked transfer
            encoding. Otherwise, urllib3 will send the body using the standard
            content-length form. Defaults to False.

        :param int body_pos:
            Position to seek to in file-like body in the event of a retry or
            redirect. Typically this won't need to be set because urllib3 will
            auto-populate the value when needed.
        """
        parsed_url = parse_url(url)
        destination_scheme = parsed_url.scheme

        if headers is None:
            headers = self.headers

        if not isinstance(retries, Retry):
            retries = Retry.from_int(retries, redirect=redirect, default=self.retries)

        if release_conn is None:
            release_conn = preload_content

        # Check host
        if assert_same_host and not self.is_same_host(url):
            raise HostChangedError(self, url, retries)

        # Ensure that the URL we're connecting to is properly encoded
        if url.startswith("/"):
            url = to_str(_encode_target(url))
        else:
            url = to_str(parsed_url.url)

        conn = None

        # Track whether `conn` needs to be released before
        # returning/raising/recursing. Update this variable if necessary, and
        # leave `release_conn` constant throughout the function. That way, if
        # the function recurses, the original value of `release_conn` will be
        # passed down into the recursive call, and its value will be respected.
        #
        # See issue #651 [1] for details.
        #
        # [1]
        release_this_conn = release_conn

        http_tunnel_required = connection_requires_http_tunnel(
            self.proxy, self.proxy_config, destination_scheme
        )

        # Merge the proxy headers. Only done when not using HTTP CONNECT. We
        # have to copy the headers dict so we can safely change it without those
        # changes being reflected in anyone else's copy.
        if not http_tunnel_required:
            headers = headers.copy()  # type: ignore[attr-defined]
            headers.update(self.proxy_headers)  # type: ignore[union-attr]

        # Must keep the exception bound to a separate variable or else Python 3
        # complains about UnboundLocalError.
        err = None

        # Keep track of whether we cleanly exited the except block. This
        # ensures we do proper cleanup in finally.
        clean_exit = False

        # Rewind body position, if needed. Record current position
        # for future rewinds in the event of a redirect/retry.
        body_pos = set_file_position(body, body_pos)

        try:
            # Request a connection from the queue.
            timeout_obj = self._get_timeout(timeout)
            conn = self._get_conn(timeout=pool_timeout)

            conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]

            # Is this a closed/new connection that requires CONNECT tunnelling?
            if self.proxy is not None and http_tunnel_required and conn.is_closed:
                try:
                    self._prepare_proxy(conn)
                except (BaseSSLError, OSError, SocketTimeout) as e:
                    self._raise_timeout(
                        err=e, url=self.proxy.url, timeout_value=conn.timeout
                    )
                    raise

            # If we're going to release the connection in ``finally:``, then
            # the response doesn't need to know about the connection. Otherwise
            # it will also try to release it and we'll have a double-release
            # mess.
            response_conn = conn if not release_conn else None

            # Make the request on the HTTPConnection object
>           response = self._make_request(
                conn,
                method,
                url,
                timeout=timeout_obj,
                body=body,
                headers=headers,
                chunked=chunked,
                retries=retries,
                response_conn=response_conn,
                preload_content=preload_content,
                decode_content=decode_content,
                **response_kw,
            )

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:787:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = 
conn = 
method = 'POST'
url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io... "128Mi"}}, "runtime": "kserve-predictiveserver", "storageUri": "gs://kfserving-examples/models/xgboost/1.5/model"}}}}'
headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'})
retries = Retry(total=0, connect=None, read=None, redirect=None, status=None)
timeout = Timeout(connect=None, read=None, total=None), chunked = False
response_conn = None, preload_content = True, decode_content = True
enforce_content_length = True

    def _make_request(
        self,
        conn: BaseHTTPConnection,
        method: str,
        url: str,
        body: _TYPE_BODY | None = None,
        headers: typing.Mapping[str, str] | None = None,
        retries: Retry | None = None,
        timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
        chunked: bool = False,
        response_conn: BaseHTTPConnection | None = None,
        preload_content: bool = True,
        decode_content: bool = True,
        enforce_content_length: bool = True,
    ) -> BaseHTTPResponse:
        """
        Perform a request on a given urllib connection object taken from our
        pool.

        :param conn:
            a connection from one of our connection pools

        :param method:
            HTTP request method (such as GET, POST, PUT, etc.)

        :param url:
            The URL to perform the request on.

        :param body:
            Data to send in the request body, either :class:`str`, :class:`bytes`,
            an iterable of :class:`str`/:class:`bytes`, or a file-like object.

        :param headers:
            Dictionary of custom headers to send, such as User-Agent,
            If-None-Match, etc. If None, pool headers are used. If provided,
            these headers completely replace any pool-specific headers.

        :param retries:
            Configure the number of retries to allow before raising a
            :class:`~urllib3.exceptions.MaxRetryError` exception.

            Pass ``None`` to retry until you receive a response. Pass a
            :class:`~urllib3.util.retry.Retry` object for fine-grained control
            over different types of retries.
            Pass an integer number to retry connection errors that many times,
            but no other types of errors. Pass zero to never retry.

            If ``False``, then retries are disabled and any exception is raised
            immediately. Also, instead of raising a MaxRetryError on redirects,
            the redirect response will be returned.

        :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.

        :param timeout:
            If specified, overrides the default timeout for this one
            request. It may be a float (in seconds) or an instance of
            :class:`urllib3.util.Timeout`.

        :param chunked:
            If True, urllib3 will send the body using chunked transfer
            encoding. Otherwise, urllib3 will send the body using the standard
            content-length form. Defaults to False.

        :param response_conn:
            Set this to ``None`` if you will handle releasing the connection or
            set the connection to have the response release it.

        :param preload_content:
            If True, the response's body will be preloaded during construction.

        :param decode_content:
            If True, will attempt to decode the body based on the
            'content-encoding' header.

        :param enforce_content_length:
            Enforce content length checking. Body returned by server must match
            value of Content-Length header, if present. Otherwise, raise error.
        """
        self.num_requests += 1

        timeout_obj = self._get_timeout(timeout)
        timeout_obj.start_connect()
        conn.timeout = Timeout.resolve_default_timeout(timeout_obj.connect_timeout)

        try:
            # Trigger any extra validation we need to do.
            try:
                self._validate_conn(conn)
            except (SocketTimeout, BaseSSLError) as e:
                self._raise_timeout(err=e, url=url, timeout_value=conn.timeout)
                raise

            # _validate_conn() starts the connection to an HTTPS proxy
            # so we need to wrap errors with 'ProxyError' here too.
        except (
            OSError,
            NewConnectionError,
            TimeoutError,
            BaseSSLError,
            CertificateError,
            SSLError,
        ) as e:
            new_e: Exception = e
            if isinstance(e, (BaseSSLError, CertificateError)):
                new_e = SSLError(e)
            # If the connection didn't successfully connect to it's proxy
            # then there
            if isinstance(
                new_e, (OSError, NewConnectionError, TimeoutError, SSLError)
            ) and (conn and conn.proxy and not conn.has_connected_to_proxy):
                new_e = _wrap_proxy_error(new_e, conn.proxy.scheme)
>           raise new_e

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:488:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
conn =
method = 'POST'
url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io..."128Mi"}}, "runtime": "kserve-predictiveserver", "storageUri": "gs://kfserving-examples/models/xgboost/1.5/model"}}}}'
headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'})
retries = Retry(total=0, connect=None, read=None, redirect=None, status=None)
timeout = Timeout(connect=None, read=None, total=None), chunked = False
response_conn = None, preload_content = True, decode_content = True
enforce_content_length = True

    def _make_request(
        self,
        conn: BaseHTTPConnection,
        method: str,
        url: str,
        body: _TYPE_BODY | None = None,
        headers: typing.Mapping[str, str] | None = None,
        retries: Retry | None = None,
        timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
        chunked: bool = False,
        response_conn: BaseHTTPConnection | None = None,
        preload_content: bool = True,
        decode_content: bool = True,
        enforce_content_length: bool = True,
    ) -> BaseHTTPResponse:
        """
        Perform a request on a given urllib connection object taken from our
        pool.

        :param conn:
            a connection from one of our connection pools

        :param method:
            HTTP request method (such as GET, POST, PUT, etc.)

        :param url:
            The URL to perform the request on.

        :param body:
            Data to send in the request body, either :class:`str`, :class:`bytes`,
            an iterable of :class:`str`/:class:`bytes`, or a file-like object.

        :param headers:
            Dictionary of custom headers to send, such as User-Agent,
            If-None-Match, etc. If None, pool headers are used. If provided,
            these headers completely replace any pool-specific headers.

        :param retries:
            Configure the number of retries to allow before raising a
            :class:`~urllib3.exceptions.MaxRetryError` exception.

            Pass ``None`` to retry until you receive a response. Pass a
            :class:`~urllib3.util.retry.Retry` object for fine-grained control
            over different types of retries.
            Pass an integer number to retry connection errors that many times,
            but no other types of errors. Pass zero to never retry.

            If ``False``, then retries are disabled and any exception is raised
            immediately. Also, instead of raising a MaxRetryError on redirects,
            the redirect response will be returned.

        :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.

        :param timeout:
            If specified, overrides the default timeout for this one
            request. It may be a float (in seconds) or an instance of
            :class:`urllib3.util.Timeout`.

        :param chunked:
            If True, urllib3 will send the body using chunked transfer
            encoding. Otherwise, urllib3 will send the body using the standard
            content-length form. Defaults to False.

        :param response_conn:
            Set this to ``None`` if you will handle releasing the connection or
            set the connection to have the response release it.

        :param preload_content:
            If True, the response's body will be preloaded during construction.

        :param decode_content:
            If True, will attempt to decode the body based on the
            'content-encoding' header.

        :param enforce_content_length:
            Enforce content length checking. Body returned by server must match
            value of Content-Length header, if present. Otherwise, raise error.
        """
        self.num_requests += 1

        timeout_obj = self._get_timeout(timeout)
        timeout_obj.start_connect()
        conn.timeout = Timeout.resolve_default_timeout(timeout_obj.connect_timeout)

        try:
            # Trigger any extra validation we need to do.
            try:
>               self._validate_conn(conn)

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:464:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
conn =

    def _validate_conn(self, conn: BaseHTTPConnection) -> None:
        """
        Called right before a request is made, after the socket is created.
        """
        super()._validate_conn(conn)

        # Force connect early to allow us to validate the connection.
        if conn.is_closed:
>           conn.connect()

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:1093:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =

    def connect(self) -> None:
        # Today we don't need to be doing this step before the /actual/ socket
        # connection, however in the future we'll need to decide whether to
        # create a new socket or re-use an existing "shared" socket as a part
        # of the HTTP/2 handshake dance.
        if self._tunnel_host is not None and self._tunnel_port is not None:
            probe_http2_host = self._tunnel_host
            probe_http2_port = self._tunnel_port
        else:
            probe_http2_host = self.host
            probe_http2_port = self.port

        # Check if the target origin supports HTTP/2.
        # If the value comes back as 'None' it means that the current thread
        # is probing for HTTP/2 support. Otherwise, we're waiting for another
        # probe to complete, or we get a value right away.
        target_supports_http2: bool | None
        if "h2" in ssl_.ALPN_PROTOCOLS:
            target_supports_http2 = http2_probe.acquire_and_get(
                host=probe_http2_host, port=probe_http2_port
            )
        else:
            # If HTTP/2 isn't going to be offered it doesn't matter if
            # the target supports HTTP/2. Don't want to make a probe.
            target_supports_http2 = False

        if self._connect_callback is not None:
            self._connect_callback(
                "before connect",
                thread_id=threading.get_ident(),
                target_supports_http2=target_supports_http2,
            )

        try:
            sock: socket.socket | ssl.SSLSocket
>           self.sock = sock = self._new_conn()

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connection.py:759:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =

    def _new_conn(self) -> socket.socket:
        """Establish a socket connection and set nodelay settings on it.

        :return: New socket connection.
        """
        try:
            sock = connection.create_connection(
                (self._dns_host, self.port),
                self.timeout,
                source_address=self.source_address,
                socket_options=self.socket_options,
            )
        except socket.gaierror as e:
>           raise NameResolutionError(self.host, self, e) from e
E           urllib3.exceptions.NameResolutionError: HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connection.py:211: NameResolutionError

The above exception was the direct cause of the following exception:

rest_v2_client =

    @pytest.mark.predictor
    @pytest.mark.asyncio(scope="session")
    async def test_predictive_xgboost_v2(rest_v2_client):
        service_name = "isvc-predictive-xgboost-v2"
        protocol_version = "v2"
        predictor = V1beta1PredictorSpec(
            min_replicas=1,
            model=V1beta1ModelSpec(
                model_format=V1beta1ModelFormat(name="xgboost"),
                runtime="kserve-predictiveserver",
                protocol_version=protocol_version,
                storage_uri="gs://kfserving-examples/models/xgboost/1.5/model",
                resources=V1ResourceRequirements(
                    requests={"cpu": "50m", "memory": "128Mi"},
                    limits={"cpu": "100m", "memory": "512Mi"},
                ),
                readiness_probe=client.V1Probe(
                    http_get=client.V1HTTPGetAction(
                        path=f"/v2/models/{service_name}/ready", port=8080
                    ),
                    initial_delay_seconds=30,
                ),
            ),
        )

        isvc = V1beta1InferenceService(
            api_version=constants.KSERVE_V1BETA1,
            kind=constants.KSERVE_KIND_INFERENCESERVICE,
            metadata=client.V1ObjectMeta(
                name=service_name,
                namespace=KSERVE_TEST_NAMESPACE,
                labels={
                    constants.KSERVE_LABEL_NETWORKING_VISIBILITY: constants.KSERVE_LABEL_NETWORKING_VISIBILITY_EXPOSED,
                },
            ),
            spec=V1beta1InferenceServiceSpec(predictor=predictor),
        )

        kserve_client = KServeClient(
            config_file=os.environ.get("KUBECONFIG", "~/.kube/config")
        )
>       kserve_client.create(isvc)

predictor/test_predictive.py:251:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
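The failure above is not a KServe or test-logic error: the test pod could not resolve the cluster API server's ELB hostname (`[Errno -2] Name or service not known`), so the very first `POST` to create the InferenceService died in DNS. A minimal pre-flight sketch of the check that would distinguish this from a real API error (the `can_resolve` helper is illustrative, not part of the test suite):

```python
import socket


def can_resolve(host: str) -> bool:
    """Return True if the host resolves to at least one address."""
    try:
        socket.getaddrinfo(host, None)
        return True
    except socket.gaierror:
        # [Errno -2] Name or service not known lands here, the same
        # condition urllib3 wraps in NameResolutionError above.
        return False


# Probe the API server host from the log before starting the suite.
api_host = "a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com"
print(can_resolve(api_host))
```

Running this from the same pod would show whether the ELB record had expired (e.g. the ephemeral cluster was torn down) or the pod's resolver was misconfigured, before blaming the InferenceService creation itself.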
self =
inferenceservice = {'api_version': 'serving.kserve.io/v1beta1',
 'kind': 'InferenceService',
 'metadata': {'annotations': None,
 ...
 'worker_spec': None,
 'xgboost': None},
 'transformer': None},
 'status': None}
namespace = 'kserve-ci-e2e-test', watch = False, timeout_seconds = 600

    def create(
        self, inferenceservice, namespace=None, watch=False, timeout_seconds=600
    ):  # pylint:disable=inconsistent-return-statements
        """
        Create the inference service
        :param inferenceservice: inference service object
        :param namespace: defaults to current or default namespace
        :param watch: True to watch the created service until timeout elapsed or status is ready
        :param timeout_seconds: timeout seconds for watch, default to 600s
        :return: created inference service
        """

        version = inferenceservice.api_version.split("/")[1]

        if namespace is None:
            namespace = utils.get_isvc_namespace(inferenceservice)

        try:
>           outputs = self.api_instance.create_namespaced_custom_object(
                constants.KSERVE_GROUP,
                version,
                namespace,
                constants.KSERVE_PLURAL_INFERENCESERVICE,
                inferenceservice,
            )

../../python/kserve/kserve/api/kserve_client.py:145:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
group = 'serving.kserve.io', version = 'v1beta1'
namespace = 'kserve-ci-e2e-test', plural = 'inferenceservices'
body = {'api_version': 'serving.kserve.io/v1beta1',
 'kind': 'InferenceService',
 'metadata': {'annotations': None,
 ...
 'worker_spec': None,
 'xgboost': None},
 'transformer': None},
 'status': None}
kwargs = {'_return_http_data_only': True}

    def create_namespaced_custom_object(self, group, version, namespace, plural, body, **kwargs):  # noqa: E501
        """create_namespaced_custom_object  # noqa: E501

        Creates a namespace scoped Custom object  # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.create_namespaced_custom_object(group, version, namespace, plural, body, async_req=True)
        >>> result = thread.get()

        :param async_req bool: execute request asynchronously
        :param str group: The custom resource's group name (required)
        :param str version: The custom resource's version (required)
        :param str namespace: The custom resource's namespace (required)
        :param str plural: The custom resource's plural name. For TPRs this would be lowercase plural kind. (required)
        :param object body: The JSON schema of the Resource to create. (required)
        :param str pretty: If 'true', then the output is pretty printed.
        :param str dry_run: When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed
        :param str field_manager: fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint.
        :param str field_validation: fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. (optional)
        :param _preload_content: if False, the urllib3.HTTPResponse object will
                                 be returned without reading/decoding response
                                 data. Default is True.
        :param _request_timeout: timeout setting for this request. If one
                                 number provided, it will be total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
        :return: object
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
>       return self.create_namespaced_custom_object_with_http_info(group, version, namespace, plural, body, **kwargs)  # noqa: E501

../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api/custom_objects_api.py:231:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
group = 'serving.kserve.io', version = 'v1beta1'
namespace = 'kserve-ci-e2e-test', plural = 'inferenceservices'
body = {'api_version': 'serving.kserve.io/v1beta1',
 'kind': 'InferenceService',
 'metadata': {'annotations': None,
 ...
 'worker_spec': None,
 'xgboost': None},
 'transformer': None},
 'status': None}
kwargs = {'_return_http_data_only': True}
local_var_params = {'_return_http_data_only': True, 'all_params': ['group', 'version', 'namespace', 'plural', 'body', 'pretty', ...], 'au...
 'worker_spec': None,
 'xgboost': None},
 'transformer': None},
 'status': None}, ...}
all_params = ['group', 'version', 'namespace', 'plural', 'body', 'pretty', ...]
key = '_return_http_data_only', val = True, collection_formats = {}
path_params = {'group': 'serving.kserve.io', 'namespace': 'kserve-ci-e2e-test', 'plural': 'inferenceservices', 'version': 'v1beta1'}
query_params = []

    def create_namespaced_custom_object_with_http_info(self, group, version, namespace, plural, body, **kwargs):  # noqa: E501
        """create_namespaced_custom_object  # noqa: E501

        Creates a namespace scoped Custom object  # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.create_namespaced_custom_object_with_http_info(group, version, namespace, plural, body, async_req=True)
        >>> result = thread.get()

        :param async_req bool: execute request asynchronously
        :param str group: The custom resource's group name (required)
        :param str version: The custom resource's version (required)
        :param str namespace: The custom resource's namespace (required)
        :param str plural: The custom resource's plural name. For TPRs this would be lowercase plural kind. (required)
        :param object body: The JSON schema of the Resource to create. (required)
        :param str pretty: If 'true', then the output is pretty printed.
        :param str dry_run: When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed
        :param str field_manager: fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint.
        :param str field_validation: fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. (optional)
        :param _return_http_data_only: response data without head status code
                                       and headers
        :param _preload_content: if False, the urllib3.HTTPResponse object will
                                 be returned without reading/decoding response
                                 data. Default is True.
        :param _request_timeout: timeout setting for this request. If one
                                 number provided, it will be total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
        :return: tuple(object, status_code(int), headers(HTTPHeaderDict))
                 If the method is called asynchronously,
                 returns the request thread.
        """

        local_var_params = locals()

        all_params = [
            'group',
            'version',
            'namespace',
            'plural',
            'body',
            'pretty',
            'dry_run',
            'field_manager',
            'field_validation'
        ]
        all_params.extend(
            [
                'async_req',
                '_return_http_data_only',
                '_preload_content',
                '_request_timeout'
            ]
        )

        for key, val in six.iteritems(local_var_params['kwargs']):
            if key not in all_params:
                raise ApiTypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method create_namespaced_custom_object" % key
                )
            local_var_params[key] = val
        del local_var_params['kwargs']
        # verify the required parameter 'group' is set
        if self.api_client.client_side_validation and ('group' not in local_var_params or  # noqa: E501
                                                       local_var_params['group'] is None):  # noqa: E501
            raise ApiValueError("Missing the required parameter `group` when calling `create_namespaced_custom_object`")  # noqa: E501
        # verify the required parameter 'version' is set
        if self.api_client.client_side_validation and ('version' not in local_var_params or  # noqa: E501
                                                       local_var_params['version'] is None):  # noqa: E501
            raise ApiValueError("Missing the required parameter `version` when calling `create_namespaced_custom_object`")  # noqa: E501
        # verify the required parameter 'namespace' is set
        if self.api_client.client_side_validation and ('namespace' not in local_var_params or  # noqa: E501
                                                       local_var_params['namespace'] is None):  # noqa: E501
            raise ApiValueError("Missing the required parameter `namespace` when calling `create_namespaced_custom_object`")  # noqa: E501
        # verify the required parameter 'plural' is set
        if self.api_client.client_side_validation and ('plural' not in local_var_params or  # noqa: E501
                                                       local_var_params['plural'] is None):  # noqa: E501
            raise ApiValueError("Missing the required parameter `plural` when calling `create_namespaced_custom_object`")  # noqa: E501
        # verify the required parameter 'body' is set
        if self.api_client.client_side_validation and ('body' not in local_var_params or  # noqa: E501
                                                       local_var_params['body'] is None):  # noqa: E501
            raise ApiValueError("Missing the required parameter `body` when calling `create_namespaced_custom_object`")  # noqa: E501

        collection_formats = {}

        path_params = {}
        if 'group' in local_var_params:
            path_params['group'] = local_var_params['group']  # noqa: E501
        if 'version' in local_var_params:
            path_params['version'] = local_var_params['version']  # noqa: E501
        if 'namespace' in local_var_params:
            path_params['namespace'] = local_var_params['namespace']  # noqa: E501
        if 'plural' in local_var_params:
            path_params['plural'] = local_var_params['plural']  # noqa: E501

        query_params = []
        if 'pretty' in local_var_params and local_var_params['pretty'] is not None:  # noqa: E501
            query_params.append(('pretty', local_var_params['pretty']))  # noqa: E501
        if 'dry_run' in local_var_params and local_var_params['dry_run'] is not None:  # noqa: E501
            query_params.append(('dryRun', local_var_params['dry_run']))  # noqa: E501
        if 'field_manager' in local_var_params and local_var_params['field_manager'] is not None:  # noqa: E501
            query_params.append(('fieldManager', local_var_params['field_manager']))  # noqa: E501
        if 'field_validation' in local_var_params and local_var_params['field_validation'] is not None:  # noqa: E501
            query_params.append(('fieldValidation', local_var_params['field_validation']))  # noqa: E501

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None
        if 'body' in local_var_params:
            body_params = local_var_params['body']
        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.select_header_accept(
            ['application/json'])  # noqa: E501

        # Authentication setting
        auth_settings = ['BearerToken']  # noqa: E501

>       return self.api_client.call_api(
            '/apis/{group}/{version}/namespaces/{namespace}/{plural}', 'POST',
            path_params,
            query_params,
            header_params,
            body=body_params,
            post_params=form_params,
            files=local_var_files,
            response_type='object',  # noqa: E501
            auth_settings=auth_settings,
            async_req=local_var_params.get('async_req'),
            _return_http_data_only=local_var_params.get('_return_http_data_only'),  # noqa: E501
            _preload_content=local_var_params.get('_preload_content', True),
            _request_timeout=local_var_params.get('_request_timeout'),
            collection_formats=collection_formats)
../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api/custom_objects_api.py:354:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
resource_path = '/apis/{group}/{version}/namespaces/{namespace}/{plural}'
method = 'POST'
path_params = {'group': 'serving.kserve.io', 'namespace': 'kserve-ci-e2e-test', 'plural': 'inferenceservices', 'version': 'v1beta1'}
query_params = []
header_params = {'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
body = {'api_version': 'serving.kserve.io/v1beta1',
 'kind': 'InferenceService',
 'metadata': {'annotations': None,
 ... 'worker_spec': None,
 'xgboost': None},
 'transformer': None},
 'status': None}
post_params = [], files = {}, response_type = 'object'
auth_settings = ['BearerToken'], async_req = None, _return_http_data_only = True
collection_formats = {}, _preload_content = True, _request_timeout = None
_host = None

    def call_api(self, resource_path, method,
                 path_params=None, query_params=None, header_params=None,
                 body=None, post_params=None, files=None,
                 response_type=None, auth_settings=None, async_req=None,
                 _return_http_data_only=None, collection_formats=None,
                 _preload_content=True, _request_timeout=None, _host=None):
        """Makes the HTTP request (synchronous) and returns deserialized data.

        To make an async_req request, set the async_req parameter.

        :param resource_path: Path to method endpoint.
        :param method: Method to call.
        :param path_params: Path parameters in the url.
        :param query_params: Query parameters in the url.
        :param header_params: Header parameters to be
            placed in the request header.
        :param body: Request body.
        :param post_params dict: Request post form parameters,
            for `application/x-www-form-urlencoded`, `multipart/form-data`.
        :param auth_settings list: Auth Settings names for the request.
        :param response: Response data type.
        :param files dict: key -> filename, value -> filepath,
            for `multipart/form-data`.
        :param async_req bool: execute request asynchronously
        :param _return_http_data_only: response data without head status code
                                       and headers
        :param collection_formats: dict of collection formats for path, query,
            header, and post parameters.
        :param _preload_content: if False, the urllib3.HTTPResponse object will
                                 be returned without reading/decoding response
                                 data. Default is True.
        :param _request_timeout: timeout setting for this request. If one
                                 number provided, it will be total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
        :return:
            If async_req parameter is True,
            the request will be called asynchronously.
            The method will return the request thread.
            If parameter async_req is False or missing,
            then the method will return the response directly.
[e2e-predictor] """ [e2e-predictor] if not async_req: [e2e-predictor] > return self.__call_api(resource_path, method, [e2e-predictor] path_params, query_params, header_params, [e2e-predictor] body, post_params, files, [e2e-predictor] response_type, auth_settings, [e2e-predictor] _return_http_data_only, collection_formats, [e2e-predictor] _preload_content, _request_timeout, _host) [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api_client.py:348: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] resource_path = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices' [e2e-predictor] method = 'POST' [e2e-predictor] path_params = [('group', 'serving.kserve.io'), ('version', 'v1beta1'), ('namespace', 'kserve-ci-e2e-test'), ('plural', 'inferenceservices')] [e2e-predictor] query_params = [] [e2e-predictor] header_params = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'} [e2e-predictor] body = {'apiVersion': 'serving.kserve.io/v1beta1', 'kind': 'InferenceService', 'metadata': {'labels': {'networking.kserve.io/... 
{'httpGet': {'path': '/v2/models/isvc-predictive-xgboost-v2/ready', 'port': 8080}, 'initialDelaySeconds': 30}, ...}}}} [e2e-predictor] post_params = [], files = {}, response_type = 'object' [e2e-predictor] auth_settings = ['BearerToken'], _return_http_data_only = True [e2e-predictor] collection_formats = {}, _preload_content = True, _request_timeout = None [e2e-predictor] _host = None [e2e-predictor] [e2e-predictor] def __call_api( [e2e-predictor] self, resource_path, method, path_params=None, [e2e-predictor] query_params=None, header_params=None, body=None, post_params=None, [e2e-predictor] files=None, response_type=None, auth_settings=None, [e2e-predictor] _return_http_data_only=None, collection_formats=None, [e2e-predictor] _preload_content=True, _request_timeout=None, _host=None): [e2e-predictor] [e2e-predictor] config = self.configuration [e2e-predictor] [e2e-predictor] # header parameters [e2e-predictor] header_params = header_params or {} [e2e-predictor] header_params.update(self.default_headers) [e2e-predictor] if self.cookie: [e2e-predictor] header_params['Cookie'] = self.cookie [e2e-predictor] if header_params: [e2e-predictor] header_params = self.sanitize_for_serialization(header_params) [e2e-predictor] header_params = dict(self.parameters_to_tuples(header_params, [e2e-predictor] collection_formats)) [e2e-predictor] [e2e-predictor] # path parameters [e2e-predictor] if path_params: [e2e-predictor] path_params = self.sanitize_for_serialization(path_params) [e2e-predictor] path_params = self.parameters_to_tuples(path_params, [e2e-predictor] collection_formats) [e2e-predictor] for k, v in path_params: [e2e-predictor] # specified safe chars, encode everything [e2e-predictor] resource_path = resource_path.replace( [e2e-predictor] '{%s}' % k, [e2e-predictor] quote(str(v), safe=config.safe_chars_for_path_param) [e2e-predictor] ) [e2e-predictor] [e2e-predictor] # query parameters [e2e-predictor] if query_params: [e2e-predictor] query_params = 
self.sanitize_for_serialization(query_params) [e2e-predictor] query_params = self.parameters_to_tuples(query_params, [e2e-predictor] collection_formats) [e2e-predictor] [e2e-predictor] # post parameters [e2e-predictor] if post_params or files: [e2e-predictor] post_params = post_params if post_params else [] [e2e-predictor] post_params = self.sanitize_for_serialization(post_params) [e2e-predictor] post_params = self.parameters_to_tuples(post_params, [e2e-predictor] collection_formats) [e2e-predictor] post_params.extend(self.files_parameters(files)) [e2e-predictor] [e2e-predictor] # auth setting [e2e-predictor] self.update_params_for_auth(header_params, query_params, auth_settings) [e2e-predictor] [e2e-predictor] # body [e2e-predictor] if body: [e2e-predictor] body = self.sanitize_for_serialization(body) [e2e-predictor] [e2e-predictor] # request url [e2e-predictor] if _host is None: [e2e-predictor] url = self.configuration.host + resource_path [e2e-predictor] else: [e2e-predictor] # use server/host defined in path or operation instead [e2e-predictor] url = _host + resource_path [e2e-predictor] [e2e-predictor] # perform request and return response [e2e-predictor] > response_data = self.request( [e2e-predictor] method, url, query_params=query_params, headers=header_params, [e2e-predictor] post_params=post_params, body=body, [e2e-predictor] _preload_content=_preload_content, [e2e-predictor] _request_timeout=_request_timeout) [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api_client.py:180: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] method = 'POST' [e2e-predictor] url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices' [e2e-predictor] query_params = [] [e2e-predictor] headers = 
{'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'} [e2e-predictor] post_params = [] [e2e-predictor] body = {'apiVersion': 'serving.kserve.io/v1beta1', 'kind': 'InferenceService', 'metadata': {'labels': {'networking.kserve.io/... {'httpGet': {'path': '/v2/models/isvc-predictive-xgboost-v2/ready', 'port': 8080}, 'initialDelaySeconds': 30}, ...}}}} [e2e-predictor] _preload_content = True, _request_timeout = None [e2e-predictor] [e2e-predictor] def request(self, method, url, query_params=None, headers=None, [e2e-predictor] post_params=None, body=None, _preload_content=True, [e2e-predictor] _request_timeout=None): [e2e-predictor] """Makes the HTTP request using RESTClient.""" [e2e-predictor] if method == "GET": [e2e-predictor] return self.rest_client.GET(url, [e2e-predictor] query_params=query_params, [e2e-predictor] _preload_content=_preload_content, [e2e-predictor] _request_timeout=_request_timeout, [e2e-predictor] headers=headers) [e2e-predictor] elif method == "HEAD": [e2e-predictor] return self.rest_client.HEAD(url, [e2e-predictor] query_params=query_params, [e2e-predictor] _preload_content=_preload_content, [e2e-predictor] _request_timeout=_request_timeout, [e2e-predictor] headers=headers) [e2e-predictor] elif method == "OPTIONS": [e2e-predictor] return self.rest_client.OPTIONS(url, [e2e-predictor] query_params=query_params, [e2e-predictor] headers=headers, [e2e-predictor] _preload_content=_preload_content, [e2e-predictor] _request_timeout=_request_timeout) [e2e-predictor] elif method == "POST": [e2e-predictor] > return self.rest_client.POST(url, [e2e-predictor] query_params=query_params, [e2e-predictor] headers=headers, [e2e-predictor] post_params=post_params, [e2e-predictor] _preload_content=_preload_content, [e2e-predictor] _request_timeout=_request_timeout, [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api_client.py:391: [e2e-predictor] 
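Comparing the frame locals above: the body entered `call_api` with snake_case model attributes (`api_version`) and reaches this frame in camelCase wire form (`apiVersion`) with `None` fields dropped; that is the effect of `sanitize_for_serialization`, which applies each model's attribute map. A minimal sketch of that idea under stated assumptions (the map below is illustrative, not the real InferenceService schema, and `sanitize` is a hypothetical helper):

```python
# Illustrative attribute map: Python attribute name -> JSON field name.
ATTRIBUTE_MAP = {"api_version": "apiVersion", "kind": "kind", "metadata": "metadata"}

def sanitize(obj: dict, attribute_map: dict) -> dict:
    """Rename snake_case keys to their JSON names and drop None-valued fields."""
    return {
        attribute_map.get(key, key): value
        for key, value in obj.items()
        if value is not None
    }

model = {"api_version": "serving.kserve.io/v1beta1",
         "kind": "InferenceService",
         "metadata": None}
print(sanitize(model, ATTRIBUTE_MAP))
# {'apiVersion': 'serving.kserve.io/v1beta1', 'kind': 'InferenceService'}
```

The real implementation recurses through nested models, lists, and dicts, but the key-renaming step is the part visible in this traceback.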
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
query_params = [], post_params = []
body = {'apiVersion': 'serving.kserve.io/v1beta1', 'kind': 'InferenceService', 'metadata': {'labels': {'networking.kserve.io/... {'httpGet': {'path': '/v2/models/isvc-predictive-xgboost-v2/ready', 'port': 8080}, 'initialDelaySeconds': 30}, ...}}}}
_preload_content = True, _request_timeout = None

    def POST(self, url, headers=None, query_params=None, post_params=None,
             body=None, _preload_content=True, _request_timeout=None):
>       return self.request("POST", url,
                            headers=headers,
                            query_params=query_params,
                            post_params=post_params,
                            _preload_content=_preload_content,
                            _request_timeout=_request_timeout,
                            body=body)

../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/rest.py:279:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
method = 'POST'
url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
query_params = []
headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
body = {'apiVersion': 'serving.kserve.io/v1beta1', 'kind': 'InferenceService', 'metadata': {'labels': {'networking.kserve.io/... {'httpGet': {'path': '/v2/models/isvc-predictive-xgboost-v2/ready', 'port': 8080}, 'initialDelaySeconds': 30}, ...}}}}
post_params = {}, _preload_content = True, _request_timeout = None

    def request(self, method, url, query_params=None, headers=None,
                body=None, post_params=None, _preload_content=True,
                _request_timeout=None):
        """Perform requests.

        :param method: http request method
        :param url: http request url
        :param query_params: query parameters in the url
        :param headers: http request headers
        :param body: request json body, for `application/json`
        :param post_params: request post parameters,
                            `application/x-www-form-urlencoded`
                            and `multipart/form-data`
        :param _preload_content: if False, the urllib3.HTTPResponse object will
                                 be returned without reading/decoding response
                                 data. Default is True.
        :param _request_timeout: timeout setting for this request. If one
                                 number provided, it will be total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
        """
        method = method.upper()
        assert method in ['GET', 'HEAD', 'DELETE', 'POST', 'PUT',
                          'PATCH', 'OPTIONS']

        if post_params and body:
            raise ApiValueError(
                "body parameter cannot be used with post_params parameter."
            )

        post_params = post_params or {}
        headers = headers or {}

        timeout = None
        if _request_timeout:
            if isinstance(_request_timeout, (int, ) if six.PY3 else (int, long)):  # noqa: E501,F821
                timeout = urllib3.Timeout(total=_request_timeout)
            elif (isinstance(_request_timeout, tuple) and
                  len(_request_timeout) == 2):
                timeout = urllib3.Timeout(
                    connect=_request_timeout[0], read=_request_timeout[1])

        if 'Content-Type' not in headers:
            headers['Content-Type'] = 'application/json'

        try:
            # For `POST`, `PUT`, `PATCH`, `OPTIONS`, `DELETE`
            if method in ['POST', 'PUT', 'PATCH', 'OPTIONS', 'DELETE']:
                if query_params:
                    url += '?' + urlencode(query_params)
                if (re.search('json', headers['Content-Type'], re.IGNORECASE) or
                        headers['Content-Type'] == 'application/apply-patch+yaml'):
                    if headers['Content-Type'] == 'application/json-patch+json':
                        if not isinstance(body, list):
                            headers['Content-Type'] = \
                                'application/strategic-merge-patch+json'
                    request_body = None
                    if body is not None:
                        request_body = json.dumps(body)
>                   r = self.pool_manager.request(
                        method, url,
                        body=request_body,
                        preload_content=_preload_content,
                        timeout=timeout,

../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/rest.py:172:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
method = 'POST'
url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io... "128Mi"}}, "runtime": "kserve-predictiveserver", "storageUri": "gs://kfserving-examples/models/xgboost/1.5/model"}}}}'
fields = None
headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
json = None
urlopen_kw = {'body': '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking....rver", "storageUri": "gs://kfserving-examples/models/xgboost/1.5/model"}}}}', 'preload_content': True, 'timeout': None}

    def request(
        self,
        method: str,
        url: str,
        body: _TYPE_BODY | None = None,
        fields: _TYPE_FIELDS | None = None,
        headers: typing.Mapping[str, str] | None = None,
        json: typing.Any | None = None,
        **urlopen_kw: typing.Any,
    ) -> BaseHTTPResponse:
        """
        Make a request using :meth:`urlopen` with the appropriate encoding of
        ``fields`` based on the ``method`` used.

        This is a convenience method that requires the least amount of manual
        effort. It can be used in most situations, while still having the
        option to drop down to more specific methods when necessary, such as
        :meth:`request_encode_url`, :meth:`request_encode_body`,
        or even the lowest level :meth:`urlopen`.

        :param method:
            HTTP request method (such as GET, POST, PUT, etc.)
        :param url:
            The URL to perform the request on.

        :param body:
            Data to send in the request body, either :class:`str`, :class:`bytes`,
            an iterable of :class:`str`/:class:`bytes`, or a file-like object.

        :param fields:
            Data to encode and send in the URL or request body, depending on ``method``.

        :param headers:
            Dictionary of custom headers to send, such as User-Agent,
            If-None-Match, etc. If None, pool headers are used. If provided,
            these headers completely replace any pool-specific headers.

        :param json:
            Data to encode and send as JSON with UTF-encoded in the request body.
            The ``"Content-Type"`` header will be set to ``"application/json"``
            unless specified otherwise.
        """
        method = method.upper()

        if json is not None and body is not None:
            raise TypeError(
                "request got values for both 'body' and 'json' parameters which are mutually exclusive"
            )

        if json is not None:
            if headers is None:
                headers = self.headers

            if not ("content-type" in map(str.lower, headers.keys())):
                headers = HTTPHeaderDict(headers)
                headers["Content-Type"] = "application/json"

            body = _json.dumps(json, separators=(",", ":"), ensure_ascii=False).encode(
                "utf-8"
            )

        if body is not None:
            urlopen_kw["body"] = body

        if method in self._encode_url_methods:
            return self.request_encode_url(
                method,
                url,
                fields=fields,  # type: ignore[arg-type]
                headers=headers,
                **urlopen_kw,
            )
        else:
>           return self.request_encode_body(
                method, url, fields=fields, headers=headers, **urlopen_kw
            )

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/_request_methods.py:143:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
method = 'POST'
url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
fields = None
headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
encode_multipart = True, multipart_boundary = None
urlopen_kw = {'body': '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking....rver", "storageUri": "gs://kfserving-examples/models/xgboost/1.5/model"}}}}', 'preload_content': True, 'timeout': None}
extra_kw = {'body': '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking....nt': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'}), 'preload_content': True, 'timeout': None}

    def request_encode_body(
        self,
        method: str,
        url: str,
        fields: _TYPE_FIELDS | None = None,
        headers: typing.Mapping[str, str] | None = None,
        encode_multipart: bool = True,
        multipart_boundary: str | None = None,
        **urlopen_kw: str,
    ) -> BaseHTTPResponse:
        """
        Make a request using :meth:`urlopen` with the ``fields`` encoded in
        the body. This is useful for request methods like POST, PUT, PATCH, etc.

        When ``encode_multipart=True`` (default), then
        :func:`urllib3.encode_multipart_formdata` is used to encode
        the payload with the appropriate content type. Otherwise
        :func:`urllib.parse.urlencode` is used with the
        'application/x-www-form-urlencoded' content type.

        Multipart encoding must be used when posting files, and it's reasonably
        safe to use it in other times too. However, it may break request
        signing, such as with OAuth.

        Supports an optional ``fields`` parameter of key/value strings AND
        key/filetuple. A filetuple is a (filename, data, MIME type) tuple where
        the MIME type is optional. For example::

            fields = {
                'foo': 'bar',
                'fakefile': ('foofile.txt', 'contents of foofile'),
                'realfile': ('barfile.txt', open('realfile').read()),
                'typedfile': ('bazfile.bin', open('bazfile').read(),
                              'image/jpeg'),
                'nonamefile': 'contents of nonamefile field',
            }

        When uploading a file, providing a filename (the first parameter of the
        tuple) is optional but recommended to best mimic behavior of browsers.

        Note that if ``headers`` are supplied, the 'Content-Type' header will
        be overwritten because it depends on the dynamic random boundary string
        which is used to compose the body of the request. The random boundary
        string can be explicitly set with the ``multipart_boundary`` parameter.
[e2e-predictor] [e2e-predictor] :param method: [e2e-predictor] HTTP request method (such as GET, POST, PUT, etc.) [e2e-predictor] [e2e-predictor] :param url: [e2e-predictor] The URL to perform the request on. [e2e-predictor] [e2e-predictor] :param fields: [e2e-predictor] Data to encode and send in the request body. [e2e-predictor] [e2e-predictor] :param headers: [e2e-predictor] Dictionary of custom headers to send, such as User-Agent, [e2e-predictor] If-None-Match, etc. If None, pool headers are used. If provided, [e2e-predictor] these headers completely replace any pool-specific headers. [e2e-predictor] [e2e-predictor] :param encode_multipart: [e2e-predictor] If True, encode the ``fields`` using the multipart/form-data MIME [e2e-predictor] format. [e2e-predictor] [e2e-predictor] :param multipart_boundary: [e2e-predictor] If not specified, then a random boundary will be generated using [e2e-predictor] :func:`urllib3.filepost.choose_boundary`. [e2e-predictor] """ [e2e-predictor] if headers is None: [e2e-predictor] headers = self.headers [e2e-predictor] [e2e-predictor] extra_kw: dict[str, typing.Any] = {"headers": HTTPHeaderDict(headers)} [e2e-predictor] body: bytes | str [e2e-predictor] [e2e-predictor] if fields: [e2e-predictor] if "body" in urlopen_kw: [e2e-predictor] raise TypeError( [e2e-predictor] "request got values for both 'fields' and 'body', can only specify one." 
[e2e-predictor] ) [e2e-predictor] [e2e-predictor] if encode_multipart: [e2e-predictor] body, content_type = encode_multipart_formdata( [e2e-predictor] fields, boundary=multipart_boundary [e2e-predictor] ) [e2e-predictor] else: [e2e-predictor] body, content_type = ( [e2e-predictor] urlencode(fields), # type: ignore[arg-type] [e2e-predictor] "application/x-www-form-urlencoded", [e2e-predictor] ) [e2e-predictor] [e2e-predictor] extra_kw["body"] = body [e2e-predictor] extra_kw["headers"].setdefault("Content-Type", content_type) [e2e-predictor] [e2e-predictor] extra_kw.update(urlopen_kw) [e2e-predictor] [e2e-predictor] > return self.urlopen(method, url, **extra_kw) [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/_request_methods.py:278: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] method = 'POST' [e2e-predictor] url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices' [e2e-predictor] redirect = True [e2e-predictor] kw = {'assert_same_host': False, 'body': '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata...', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'}), 'preload_content': True, ...} [e2e-predictor] u = Url(scheme='https', auth=None, host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', p...443, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices', query=None, fragment=None) [e2e-predictor] conn = [e2e-predictor] [e2e-predictor] def urlopen( # type: ignore[override] [e2e-predictor] self, method: str, url: str, redirect: bool = True, **kw: typing.Any [e2e-predictor] ) -> BaseHTTPResponse: [e2e-predictor] """ [e2e-predictor] Same as 
:meth:`urllib3.HTTPConnectionPool.urlopen` [e2e-predictor] with custom cross-host redirect logic and only sends the request-uri [e2e-predictor] portion of the ``url``. [e2e-predictor] [e2e-predictor] The given ``url`` parameter must be absolute, such that an appropriate [e2e-predictor] :class:`urllib3.connectionpool.ConnectionPool` can be chosen for it. [e2e-predictor] """ [e2e-predictor] u = parse_url(url) [e2e-predictor] [e2e-predictor] if u.scheme is None: [e2e-predictor] warnings.warn( [e2e-predictor] "URLs without a scheme (ie 'https://') are deprecated and will raise an error " [e2e-predictor] "in a future version of urllib3. To avoid this DeprecationWarning ensure all URLs " [e2e-predictor] "start with 'https://' or 'http://'. Read more in this issue: " [e2e-predictor] "https://github.com/urllib3/urllib3/issues/2920", [e2e-predictor] category=DeprecationWarning, [e2e-predictor] stacklevel=2, [e2e-predictor] ) [e2e-predictor] [e2e-predictor] conn = self.connection_from_host(u.host, port=u.port, scheme=u.scheme) [e2e-predictor] [e2e-predictor] kw["assert_same_host"] = False [e2e-predictor] kw["redirect"] = False [e2e-predictor] [e2e-predictor] if "headers" not in kw: [e2e-predictor] kw["headers"] = self.headers [e2e-predictor] [e2e-predictor] if self._proxy_requires_url_absolute_form(u): [e2e-predictor] response = conn.urlopen(method, url, **kw) [e2e-predictor] else: [e2e-predictor] > response = conn.urlopen(method, u.request_uri, **kw) [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/poolmanager.py:457: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] method = 'POST' [e2e-predictor] url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices' [e2e-predictor] body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io... 
"128Mi"}}, "runtime": "kserve-predictiveserver", "storageUri": "gs://kfserving-examples/models/xgboost/1.5/model"}}}}' [e2e-predictor] headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'}) [e2e-predictor] retries = Retry(total=2, connect=None, read=None, redirect=None, status=None) [e2e-predictor] redirect = False, assert_same_host = False, timeout = None, pool_timeout = None [e2e-predictor] release_conn = True, chunked = False, body_pos = None, preload_content = True [e2e-predictor] decode_content = True, response_kw = {} [e2e-predictor] parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices', query=None, fragment=None) [e2e-predictor] destination_scheme = None, conn = None, release_this_conn = True [e2e-predictor] http_tunnel_required = False [e2e-predictor] err = NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.c...a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)") [e2e-predictor] clean_exit = False [e2e-predictor] [e2e-predictor] def urlopen( # type: ignore[override] [e2e-predictor] self, [e2e-predictor] method: str, [e2e-predictor] url: str, [e2e-predictor] body: _TYPE_BODY | None = None, [e2e-predictor] headers: typing.Mapping[str, str] | None = None, [e2e-predictor] retries: Retry | bool | int | None = None, [e2e-predictor] redirect: bool = True, [e2e-predictor] assert_same_host: bool = True, [e2e-predictor] timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, [e2e-predictor] pool_timeout: int | None = None, [e2e-predictor] release_conn: bool | None = None, [e2e-predictor] chunked: bool = False, [e2e-predictor] body_pos: _TYPE_BODY_POSITION | None = None, [e2e-predictor] preload_content: bool = True, [e2e-predictor] decode_content: bool = True, 
        **response_kw: typing.Any,
    ) -> BaseHTTPResponse:
        """
        Get a connection from the pool and perform an HTTP request. This is the
        lowest level call for making a request, so you'll need to specify all
        the raw details.

        .. note::

           More commonly, it's appropriate to use a convenience method
           such as :meth:`request`.

        .. note::

           `release_conn` will only behave as expected if
           `preload_content=False` because we want to make
           `preload_content=False` the default behaviour someday soon without
           breaking backwards compatibility.

        :param method:
            HTTP request method (such as GET, POST, PUT, etc.)

        :param url:
            The URL to perform the request on.

        :param body:
            Data to send in the request body, either :class:`str`, :class:`bytes`,
            an iterable of :class:`str`/:class:`bytes`, or a file-like object.

        :param headers:
            Dictionary of custom headers to send, such as User-Agent,
            If-None-Match, etc. If None, pool headers are used. If provided,
            these headers completely replace any pool-specific headers.

        :param retries:
            Configure the number of retries to allow before raising a
            :class:`~urllib3.exceptions.MaxRetryError` exception.

            If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a
            :class:`~urllib3.util.retry.Retry` object for fine-grained control
            over different types of retries.
            Pass an integer number to retry connection errors that many times,
            but no other types of errors. Pass zero to never retry.

            If ``False``, then retries are disabled and any exception is raised
            immediately. Also, instead of raising a MaxRetryError on redirects,
            the redirect response will be returned.

        :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.

        :param redirect:
            If True, automatically handle redirects (status codes 301, 302,
            303, 307, 308). Each redirect counts as a retry. Disabling retries
            will disable redirect, too.

        :param assert_same_host:
            If ``True``, will make sure that the host of the pool requests is
            consistent else will raise HostChangedError. When ``False``, you can
            use the pool on an HTTP proxy and request foreign hosts.

        :param timeout:
            If specified, overrides the default timeout for this one
            request. It may be a float (in seconds) or an instance of
            :class:`urllib3.util.Timeout`.

        :param pool_timeout:
            If set and the pool is set to block=True, then this method will
            block for ``pool_timeout`` seconds and raise EmptyPoolError if no
            connection is available within the time period.

        :param bool preload_content:
            If True, the response's body will be preloaded into memory.

        :param bool decode_content:
            If True, will attempt to decode the body based on the
            'content-encoding' header.

        :param release_conn:
            If False, then the urlopen call will not release the connection
            back into the pool once a response is received (but will release if
            you read the entire contents of the response such as when
            `preload_content=True`). This is useful if you're not preloading
            the response's content immediately. You will need to call
            ``r.release_conn()`` on the response ``r`` to return the connection
            back into the pool. If None, it takes the value of ``preload_content``
            which defaults to ``True``.

        :param bool chunked:
            If True, urllib3 will send the body using chunked transfer
            encoding. Otherwise, urllib3 will send the body using the standard
            content-length form. Defaults to False.

        :param int body_pos:
            Position to seek to in file-like body in the event of a retry or
            redirect. Typically this won't need to be set because urllib3 will
            auto-populate the value when needed.
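The ``retries`` parameter described above accepts four shapes: ``None``, ``False``, an ``int``, or a ``Retry`` object. The following is a minimal pure-stdlib sketch of that documented normalization; the names ``SimpleRetry`` and ``from_int_sketch`` are hypothetical stand-ins and this is not urllib3's actual ``Retry.from_int`` implementation:

```python
# Illustrative sketch only: mirrors the documented `retries` semantics
# (None -> default of 3 retries; False -> retries disabled, with redirect
# responses returned instead of raised; int -> that many connection retries).
from dataclasses import dataclass

@dataclass
class SimpleRetry:
    total: int                      # remaining retries before giving up
    raise_on_redirect: bool = True  # False => hand back the redirect response

def from_int_sketch(retries, default_total=3):
    if retries is None:
        return SimpleRetry(total=default_total)           # "will retry 3 times"
    if retries is False:
        return SimpleRetry(total=0, raise_on_redirect=False)
    if isinstance(retries, SimpleRetry):
        return retries                                    # already fine-grained
    return SimpleRetry(total=int(retries))                # plain integer count
```

In this traceback the caller passed an integer (``total=2``), so the first frame shows the already-normalized ``Retry(total=2, ...)`` in its locals.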
        """
        parsed_url = parse_url(url)
        destination_scheme = parsed_url.scheme

        if headers is None:
            headers = self.headers

        if not isinstance(retries, Retry):
            retries = Retry.from_int(retries, redirect=redirect, default=self.retries)

        if release_conn is None:
            release_conn = preload_content

        # Check host
        if assert_same_host and not self.is_same_host(url):
            raise HostChangedError(self, url, retries)

        # Ensure that the URL we're connecting to is properly encoded
        if url.startswith("/"):
            url = to_str(_encode_target(url))
        else:
            url = to_str(parsed_url.url)

        conn = None

        # Track whether `conn` needs to be released before
        # returning/raising/recursing. Update this variable if necessary, and
        # leave `release_conn` constant throughout the function. That way, if
        # the function recurses, the original value of `release_conn` will be
        # passed down into the recursive call, and its value will be respected.
        #
        # See issue #651 [1] for details.
        #
        # [1] <https://github.com/urllib3/urllib3/issues/651>
        release_this_conn = release_conn

        http_tunnel_required = connection_requires_http_tunnel(
            self.proxy, self.proxy_config, destination_scheme
        )

        # Merge the proxy headers. Only done when not using HTTP CONNECT. We
        # have to copy the headers dict so we can safely change it without those
        # changes being reflected in anyone else's copy.
        if not http_tunnel_required:
            headers = headers.copy()  # type: ignore[attr-defined]
            headers.update(self.proxy_headers)  # type: ignore[union-attr]

        # Must keep the exception bound to a separate variable or else Python 3
        # complains about UnboundLocalError.
        err = None

        # Keep track of whether we cleanly exited the except block. This
        # ensures we do proper cleanup in finally.
        clean_exit = False

        # Rewind body position, if needed. Record current position
        # for future rewinds in the event of a redirect/retry.
        body_pos = set_file_position(body, body_pos)

        try:
            # Request a connection from the queue.
            timeout_obj = self._get_timeout(timeout)
            conn = self._get_conn(timeout=pool_timeout)

            conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]

            # Is this a closed/new connection that requires CONNECT tunnelling?
            if self.proxy is not None and http_tunnel_required and conn.is_closed:
                try:
                    self._prepare_proxy(conn)
                except (BaseSSLError, OSError, SocketTimeout) as e:
                    self._raise_timeout(
                        err=e, url=self.proxy.url, timeout_value=conn.timeout
                    )
                    raise

            # If we're going to release the connection in ``finally:``, then
            # the response doesn't need to know about the connection. Otherwise
            # it will also try to release it and we'll have a double-release
            # mess.
            response_conn = conn if not release_conn else None

            # Make the request on the HTTPConnection object
            response = self._make_request(
                conn,
                method,
                url,
                timeout=timeout_obj,
                body=body,
                headers=headers,
                chunked=chunked,
                retries=retries,
                response_conn=response_conn,
                preload_content=preload_content,
                decode_content=decode_content,
                **response_kw,
            )

            # Everything went great!
            clean_exit = True

        except EmptyPoolError:
            # Didn't get a connection from the pool, no need to clean up
            clean_exit = True
            release_this_conn = False
            raise

        except (
            TimeoutError,
            HTTPException,
            OSError,
            ProtocolError,
            BaseSSLError,
            SSLError,
            CertificateError,
            ProxyError,
        ) as e:
            # Discard the connection for these exceptions. It will be
            # replaced during the next _get_conn() call.
            clean_exit = False
            new_e: Exception = e
            if isinstance(e, (BaseSSLError, CertificateError)):
                new_e = SSLError(e)
            if isinstance(
                new_e,
                (
                    OSError,
                    NewConnectionError,
                    TimeoutError,
                    SSLError,
                    HTTPException,
                ),
            ) and (conn and conn.proxy and not conn.has_connected_to_proxy):
                new_e = _wrap_proxy_error(new_e, conn.proxy.scheme)
            elif isinstance(new_e, (OSError, HTTPException)):
                new_e = ProtocolError("Connection aborted.", new_e)

            retries = retries.increment(
                method, url, error=new_e, _pool=self, _stacktrace=sys.exc_info()[2]
            )
            retries.sleep()

            # Keep track of the error for the retry warning.
            err = e

        finally:
            if not clean_exit:
                # We hit some kind of exception, handled or otherwise. We need
                # to throw the connection away unless explicitly told not to.
                # Close the connection, set the variable to None, and make sure
                # we put the None back in the pool to avoid leaking it.
                if conn:
                    conn.close()
                    conn = None
                release_this_conn = True

            if release_this_conn:
                # Put the connection back to be reused. If the connection is
                # expired then it will be None, which will get replaced with a
                # fresh connection during _get_conn.
                self._put_conn(conn)

        if not conn:
            # Try again
            log.warning(
                "Retrying (%r) after connection broken by '%r': %s", retries, err, url
            )
>           return self.urlopen(
                method,
                url,
                body,
                headers,
                retries,
                redirect,
                assert_same_host,
                timeout=timeout,
                pool_timeout=pool_timeout,
                release_conn=release_conn,
                chunked=chunked,
                body_pos=body_pos,
                preload_content=preload_content,
                decode_content=decode_content,
                **response_kw,
            )

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:871:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = 
method = 'POST'
url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io...
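Across the repeated frames in this traceback, the locals show ``retries`` stepping from ``Retry(total=2)`` to ``Retry(total=1)`` to ``Retry(total=0)``: each failed attempt passes through ``retries.increment(...)`` and then ``urlopen`` calls itself. The countdown can be sketched as follows (a loop instead of urllib3's recursion; the names ``urlopen_sketch`` and ``MaxRetryErrorSketch`` are illustrative, not urllib3's):

```python
# Hedged sketch of the retry countdown visible in this traceback: each failed
# attempt consumes one unit of the retry budget; once the budget is exhausted,
# the last error is wrapped and raised (urllib3 raises MaxRetryError wrapping
# the underlying NameResolutionError).
class MaxRetryErrorSketch(Exception):
    def __init__(self, url, reason):
        super().__init__(f"Max retries exceeded with url: {url} (caused by {reason!r})")
        self.reason = reason

def urlopen_sketch(url, total, attempt):
    """Call attempt() until it succeeds or the retry budget runs out."""
    while True:
        try:
            return attempt()
        except OSError as exc:
            if total <= 0:              # budget exhausted: give up
                raise MaxRetryErrorSketch(url, exc) from exc
            total -= 1                  # mirrors retries.increment()
```

With ``total=2`` and an attempt that always fails, this makes three calls (the initial try plus two retries) before raising, matching the three frames here.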
"128Mi"}}, "runtime": "kserve-predictiveserver", "storageUri": "gs://kfserving-examples/models/xgboost/1.5/model"}}}}'
headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'})
retries = Retry(total=1, connect=None, read=None, redirect=None, status=None)
redirect = False, assert_same_host = False, timeout = None, pool_timeout = None
release_conn = True, chunked = False, body_pos = None, preload_content = True
decode_content = True, response_kw = {}
parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices', query=None, fragment=None)
destination_scheme = None, conn = None, release_this_conn = True
http_tunnel_required = False
err = NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.c...a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")
clean_exit = False

    [... HTTPConnectionPool.urlopen signature, docstring, and body repeated verbatim; identical to the frame above ...]

>           return self.urlopen(
                method, url, body, headers, retries, redirect, assert_same_host,
                timeout=timeout, pool_timeout=pool_timeout, release_conn=release_conn,
                chunked=chunked, body_pos=body_pos, preload_content=preload_content,
                decode_content=decode_content, **response_kw,
            )

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:871:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = 
method = 'POST'
url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io...
"128Mi"}}, "runtime": "kserve-predictiveserver", "storageUri": "gs://kfserving-examples/models/xgboost/1.5/model"}}}}'
headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'})
retries = Retry(total=0, connect=None, read=None, redirect=None, status=None)
redirect = False, assert_same_host = False, timeout = None, pool_timeout = None
release_conn = True, chunked = False, body_pos = None, preload_content = True
decode_content = True, response_kw = {}
parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices', query=None, fragment=None)
destination_scheme = None, conn = None, release_this_conn = True
http_tunnel_required = False
err = NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.c...a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")
clean_exit = False

    [... HTTPConnectionPool.urlopen source repeated verbatim; identical to the frames above ...]
[e2e-predictor] """ [e2e-predictor] parsed_url = parse_url(url) [e2e-predictor] destination_scheme = parsed_url.scheme [e2e-predictor] [e2e-predictor] if headers is None: [e2e-predictor] headers = self.headers [e2e-predictor] [e2e-predictor] if not isinstance(retries, Retry): [e2e-predictor] retries = Retry.from_int(retries, redirect=redirect, default=self.retries) [e2e-predictor] [e2e-predictor] if release_conn is None: [e2e-predictor] release_conn = preload_content [e2e-predictor] [e2e-predictor] # Check host [e2e-predictor] if assert_same_host and not self.is_same_host(url): [e2e-predictor] raise HostChangedError(self, url, retries) [e2e-predictor] [e2e-predictor] # Ensure that the URL we're connecting to is properly encoded [e2e-predictor] if url.startswith("/"): [e2e-predictor] url = to_str(_encode_target(url)) [e2e-predictor] else: [e2e-predictor] url = to_str(parsed_url.url) [e2e-predictor] [e2e-predictor] conn = None [e2e-predictor] [e2e-predictor] # Track whether `conn` needs to be released before [e2e-predictor] # returning/raising/recursing. Update this variable if necessary, and [e2e-predictor] # leave `release_conn` constant throughout the function. That way, if [e2e-predictor] # the function recurses, the original value of `release_conn` will be [e2e-predictor] # passed down into the recursive call, and its value will be respected. [e2e-predictor] # [e2e-predictor] # See issue #651 [1] for details. [e2e-predictor] # [e2e-predictor] # [1] [e2e-predictor] release_this_conn = release_conn [e2e-predictor] [e2e-predictor] http_tunnel_required = connection_requires_http_tunnel( [e2e-predictor] self.proxy, self.proxy_config, destination_scheme [e2e-predictor] ) [e2e-predictor] [e2e-predictor] # Merge the proxy headers. Only done when not using HTTP CONNECT. We [e2e-predictor] # have to copy the headers dict so we can safely change it without those [e2e-predictor] # changes being reflected in anyone else's copy. 
[e2e-predictor] if not http_tunnel_required: [e2e-predictor] headers = headers.copy() # type: ignore[attr-defined] [e2e-predictor] headers.update(self.proxy_headers) # type: ignore[union-attr] [e2e-predictor] [e2e-predictor] # Must keep the exception bound to a separate variable or else Python 3 [e2e-predictor] # complains about UnboundLocalError. [e2e-predictor] err = None [e2e-predictor] [e2e-predictor] # Keep track of whether we cleanly exited the except block. This [e2e-predictor] # ensures we do proper cleanup in finally. [e2e-predictor] clean_exit = False [e2e-predictor] [e2e-predictor] # Rewind body position, if needed. Record current position [e2e-predictor] # for future rewinds in the event of a redirect/retry. [e2e-predictor] body_pos = set_file_position(body, body_pos) [e2e-predictor] [e2e-predictor] try: [e2e-predictor] # Request a connection from the queue. [e2e-predictor] timeout_obj = self._get_timeout(timeout) [e2e-predictor] conn = self._get_conn(timeout=pool_timeout) [e2e-predictor] [e2e-predictor] conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] [e2e-predictor] [e2e-predictor] # Is this a closed/new connection that requires CONNECT tunnelling? [e2e-predictor] if self.proxy is not None and http_tunnel_required and conn.is_closed: [e2e-predictor] try: [e2e-predictor] self._prepare_proxy(conn) [e2e-predictor] except (BaseSSLError, OSError, SocketTimeout) as e: [e2e-predictor] self._raise_timeout( [e2e-predictor] err=e, url=self.proxy.url, timeout_value=conn.timeout [e2e-predictor] ) [e2e-predictor] raise [e2e-predictor] [e2e-predictor] # If we're going to release the connection in ``finally:``, then [e2e-predictor] # the response doesn't need to know about the connection. Otherwise [e2e-predictor] # it will also try to release it and we'll have a double-release [e2e-predictor] # mess. 
[e2e-predictor] response_conn = conn if not release_conn else None [e2e-predictor] [e2e-predictor] # Make the request on the HTTPConnection object [e2e-predictor] response = self._make_request( [e2e-predictor] conn, [e2e-predictor] method, [e2e-predictor] url, [e2e-predictor] timeout=timeout_obj, [e2e-predictor] body=body, [e2e-predictor] headers=headers, [e2e-predictor] chunked=chunked, [e2e-predictor] retries=retries, [e2e-predictor] response_conn=response_conn, [e2e-predictor] preload_content=preload_content, [e2e-predictor] decode_content=decode_content, [e2e-predictor] **response_kw, [e2e-predictor] ) [e2e-predictor] [e2e-predictor] # Everything went great! [e2e-predictor] clean_exit = True [e2e-predictor] [e2e-predictor] except EmptyPoolError: [e2e-predictor] # Didn't get a connection from the pool, no need to clean up [e2e-predictor] clean_exit = True [e2e-predictor] release_this_conn = False [e2e-predictor] raise [e2e-predictor] [e2e-predictor] except ( [e2e-predictor] TimeoutError, [e2e-predictor] HTTPException, [e2e-predictor] OSError, [e2e-predictor] ProtocolError, [e2e-predictor] BaseSSLError, [e2e-predictor] SSLError, [e2e-predictor] CertificateError, [e2e-predictor] ProxyError, [e2e-predictor] ) as e: [e2e-predictor] # Discard the connection for these exceptions. It will be [e2e-predictor] # replaced during the next _get_conn() call. 
[e2e-predictor] clean_exit = False [e2e-predictor] new_e: Exception = e [e2e-predictor] if isinstance(e, (BaseSSLError, CertificateError)): [e2e-predictor] new_e = SSLError(e) [e2e-predictor] if isinstance( [e2e-predictor] new_e, [e2e-predictor] ( [e2e-predictor] OSError, [e2e-predictor] NewConnectionError, [e2e-predictor] TimeoutError, [e2e-predictor] SSLError, [e2e-predictor] HTTPException, [e2e-predictor] ), [e2e-predictor] ) and (conn and conn.proxy and not conn.has_connected_to_proxy): [e2e-predictor] new_e = _wrap_proxy_error(new_e, conn.proxy.scheme) [e2e-predictor] elif isinstance(new_e, (OSError, HTTPException)): [e2e-predictor] new_e = ProtocolError("Connection aborted.", new_e) [e2e-predictor] [e2e-predictor] retries = retries.increment( [e2e-predictor] method, url, error=new_e, _pool=self, _stacktrace=sys.exc_info()[2] [e2e-predictor] ) [e2e-predictor] retries.sleep() [e2e-predictor] [e2e-predictor] # Keep track of the error for the retry warning. [e2e-predictor] err = e [e2e-predictor] [e2e-predictor] finally: [e2e-predictor] if not clean_exit: [e2e-predictor] # We hit some kind of exception, handled or otherwise. We need [e2e-predictor] # to throw the connection away unless explicitly told not to. [e2e-predictor] # Close the connection, set the variable to None, and make sure [e2e-predictor] # we put the None back in the pool to avoid leaking it. [e2e-predictor] if conn: [e2e-predictor] conn.close() [e2e-predictor] conn = None [e2e-predictor] release_this_conn = True [e2e-predictor] [e2e-predictor] if release_this_conn: [e2e-predictor] # Put the connection back to be reused. If the connection is [e2e-predictor] # expired then it will be None, which will get replaced with a [e2e-predictor] # fresh connection during _get_conn. 
[e2e-predictor] self._put_conn(conn) [e2e-predictor] [e2e-predictor] if not conn: [e2e-predictor] # Try again [e2e-predictor] log.warning( [e2e-predictor] "Retrying (%r) after connection broken by '%r': %s", retries, err, url [e2e-predictor] ) [e2e-predictor] > return self.urlopen( [e2e-predictor] method, [e2e-predictor] url, [e2e-predictor] body, [e2e-predictor] headers, [e2e-predictor] retries, [e2e-predictor] redirect, [e2e-predictor] assert_same_host, [e2e-predictor] timeout=timeout, [e2e-predictor] pool_timeout=pool_timeout, [e2e-predictor] release_conn=release_conn, [e2e-predictor] chunked=chunked, [e2e-predictor] body_pos=body_pos, [e2e-predictor] preload_content=preload_content, [e2e-predictor] decode_content=decode_content, [e2e-predictor] **response_kw, [e2e-predictor] ) [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:871: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] method = 'POST' [e2e-predictor] url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices' [e2e-predictor] body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io... 
"128Mi"}}, "runtime": "kserve-predictiveserver", "storageUri": "gs://kfserving-examples/models/xgboost/1.5/model"}}}}' [e2e-predictor] headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'}) [e2e-predictor] retries = Retry(total=0, connect=None, read=None, redirect=None, status=None) [e2e-predictor] redirect = False, assert_same_host = False, timeout = None, pool_timeout = None [e2e-predictor] release_conn = True, chunked = False, body_pos = None, preload_content = True [e2e-predictor] decode_content = True, response_kw = {} [e2e-predictor] parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices', query=None, fragment=None) [e2e-predictor] destination_scheme = None, conn = None, release_this_conn = True [e2e-predictor] http_tunnel_required = False, err = None, clean_exit = False [e2e-predictor] [e2e-predictor] def urlopen( # type: ignore[override] [e2e-predictor] self, [e2e-predictor] method: str, [e2e-predictor] url: str, [e2e-predictor] body: _TYPE_BODY | None = None, [e2e-predictor] headers: typing.Mapping[str, str] | None = None, [e2e-predictor] retries: Retry | bool | int | None = None, [e2e-predictor] redirect: bool = True, [e2e-predictor] assert_same_host: bool = True, [e2e-predictor] timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, [e2e-predictor] pool_timeout: int | None = None, [e2e-predictor] release_conn: bool | None = None, [e2e-predictor] chunked: bool = False, [e2e-predictor] body_pos: _TYPE_BODY_POSITION | None = None, [e2e-predictor] preload_content: bool = True, [e2e-predictor] decode_content: bool = True, [e2e-predictor] **response_kw: typing.Any, [e2e-predictor] ) -> BaseHTTPResponse: [e2e-predictor] """ [e2e-predictor] Get a connection from the pool and perform an HTTP request. 
[e2e-predictor]             clean_exit = False
[e2e-predictor]             new_e: Exception = e
[e2e-predictor]             if isinstance(e, (BaseSSLError, CertificateError)):
[e2e-predictor]                 new_e = SSLError(e)
[e2e-predictor]             if isinstance(
[e2e-predictor]                 new_e,
[e2e-predictor]                 (
[e2e-predictor]                     OSError,
[e2e-predictor]                     NewConnectionError,
[e2e-predictor]                     TimeoutError,
[e2e-predictor]                     SSLError,
[e2e-predictor]                     HTTPException,
[e2e-predictor]                 ),
[e2e-predictor]             ) and (conn and conn.proxy and not conn.has_connected_to_proxy):
[e2e-predictor]                 new_e = _wrap_proxy_error(new_e, conn.proxy.scheme)
[e2e-predictor]             elif isinstance(new_e, (OSError, HTTPException)):
[e2e-predictor]                 new_e = ProtocolError("Connection aborted.", new_e)
[e2e-predictor]
[e2e-predictor] >           retries = retries.increment(
[e2e-predictor]                 method, url, error=new_e, _pool=self, _stacktrace=sys.exc_info()[2]
[e2e-predictor]             )
[e2e-predictor]
[e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:841:
[e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
[e2e-predictor]
[e2e-predictor] self = Retry(total=0, connect=None, read=None, redirect=None, status=None)
[e2e-predictor] method = 'POST'
[e2e-predictor] url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
[e2e-predictor] response = None
[e2e-predictor] error = NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.c...a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")
[e2e-predictor] _pool =
[e2e-predictor] _stacktrace =
[e2e-predictor]
[e2e-predictor] def increment(
[e2e-predictor]     self,
[e2e-predictor]     method: str | None = None,
[e2e-predictor]     url: str | None = None,
[e2e-predictor]     response: BaseHTTPResponse | None = None,
[e2e-predictor]     error: Exception | None = None,
[e2e-predictor]     _pool: ConnectionPool | None = None,
[e2e-predictor]     _stacktrace: TracebackType | None = None,
[e2e-predictor] ) -> Self:
[e2e-predictor]     """Return a new Retry object with incremented retry counters.
[e2e-predictor]
[e2e-predictor]     :param response: A response object, or None, if the server did not
[e2e-predictor]         return a response.
[e2e-predictor]     :type response: :class:`~urllib3.response.BaseHTTPResponse`
[e2e-predictor]     :param Exception error: An error encountered during the request, or
[e2e-predictor]         None if the response was received successfully.
[e2e-predictor]
[e2e-predictor]     :return: A new ``Retry`` object.
[e2e-predictor]     """
[e2e-predictor]     if self.total is False and error:
[e2e-predictor]         # Disabled, indicate to re-raise the error.
[e2e-predictor]         raise reraise(type(error), error, _stacktrace)
[e2e-predictor]
[e2e-predictor]     total = self.total
[e2e-predictor]     if total is not None:
[e2e-predictor]         total -= 1
[e2e-predictor]
[e2e-predictor]     connect = self.connect
[e2e-predictor]     read = self.read
[e2e-predictor]     redirect = self.redirect
[e2e-predictor]     status_count = self.status
[e2e-predictor]     other = self.other
[e2e-predictor]     cause = "unknown"
[e2e-predictor]     status = None
[e2e-predictor]     redirect_location = None
[e2e-predictor]
[e2e-predictor]     if error and self._is_connection_error(error):
[e2e-predictor]         # Connect retry?
[e2e-predictor]         if connect is False:
[e2e-predictor]             raise reraise(type(error), error, _stacktrace)
[e2e-predictor]         elif connect is not None:
[e2e-predictor]             connect -= 1
[e2e-predictor]
[e2e-predictor]     elif error and self._is_read_error(error):
[e2e-predictor]         # Read retry?
[e2e-predictor]         if read is False or method is None or not self._is_method_retryable(method):
[e2e-predictor]             raise reraise(type(error), error, _stacktrace)
[e2e-predictor]         elif read is not None:
[e2e-predictor]             read -= 1
[e2e-predictor]
[e2e-predictor]     elif error:
[e2e-predictor]         # Other retry?
[e2e-predictor]         if other is not None:
[e2e-predictor]             other -= 1
[e2e-predictor]
[e2e-predictor]     elif response and response.get_redirect_location():
[e2e-predictor]         # Redirect retry?
[e2e-predictor]         if redirect is not None:
[e2e-predictor]             redirect -= 1
[e2e-predictor]         cause = "too many redirects"
[e2e-predictor]         response_redirect_location = response.get_redirect_location()
[e2e-predictor]         if response_redirect_location:
[e2e-predictor]             redirect_location = response_redirect_location
[e2e-predictor]         status = response.status
[e2e-predictor]
[e2e-predictor]     else:
[e2e-predictor]         # Incrementing because of a server error like a 500 in
[e2e-predictor]         # status_forcelist and the given method is in the allowed_methods
[e2e-predictor]         cause = ResponseError.GENERIC_ERROR
[e2e-predictor]         if response and response.status:
[e2e-predictor]             if status_count is not None:
[e2e-predictor]                 status_count -= 1
[e2e-predictor]             cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
[e2e-predictor]             status = response.status
[e2e-predictor]
[e2e-predictor]     history = self.history + (
[e2e-predictor]         RequestHistory(method, url, error, status, redirect_location),
[e2e-predictor]     )
[e2e-predictor]
[e2e-predictor]     new_retry = self.new(
[e2e-predictor]         total=total,
[e2e-predictor]         connect=connect,
[e2e-predictor]         read=read,
[e2e-predictor]         redirect=redirect,
[e2e-predictor]         status=status_count,
[e2e-predictor]         other=other,
[e2e-predictor]         history=history,
[e2e-predictor]     )
[e2e-predictor]
[e2e-predictor]     if new_retry.is_exhausted():
[e2e-predictor]         reason = error or ResponseError(cause)
[e2e-predictor] >       raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
[e2e-predictor] E       urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Max retries exceeded with url: /apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices (Caused by NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)"))
[e2e-predictor]
[e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError
[e2e-predictor] ------------------------------ Captured log call -------------------------------
[e2e-predictor] WARNING urllib3.connectionpool:connectionpool.py:868 Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")': /apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices
[e2e-predictor] WARNING urllib3.connectionpool:connectionpool.py:868 Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")': /apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices
[e2e-predictor] WARNING urllib3.connectionpool:connectionpool.py:868 Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or
service not known)")': /apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices
[e2e-predictor] _________________________ test_predictive_lightgbm_v2 __________________________
[e2e-predictor] [gw0] linux -- Python 3.11.13 /workspace/source/python/kserve/.venv/bin/python
[e2e-predictor]
[e2e-predictor] self =
[e2e-predictor]
[e2e-predictor] def _new_conn(self) -> socket.socket:
[e2e-predictor]     """Establish a socket connection and set nodelay settings on it.
[e2e-predictor]
[e2e-predictor]     :return: New socket connection.
[e2e-predictor]     """
[e2e-predictor]     try:
[e2e-predictor] >       sock = connection.create_connection(
[e2e-predictor]             (self._dns_host, self.port),
[e2e-predictor]             self.timeout,
[e2e-predictor]             source_address=self.source_address,
[e2e-predictor]             socket_options=self.socket_options,
[e2e-predictor]         )
[e2e-predictor]
[e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connection.py:204:
[e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
[e2e-predictor]
[e2e-predictor] address = ('a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', 6443)
[e2e-predictor] timeout = None, source_address = None, socket_options = [(6, 1, 1)]
[e2e-predictor]
[e2e-predictor] def create_connection(
[e2e-predictor]     address: tuple[str, int],
[e2e-predictor]     timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
[e2e-predictor]     source_address: tuple[str, int] | None = None,
[e2e-predictor]     socket_options: _TYPE_SOCKET_OPTIONS | None = None,
[e2e-predictor] ) -> socket.socket:
[e2e-predictor]     """Connect to *address* and return the socket object.
[e2e-predictor]
[e2e-predictor]     Convenience function. Connect to *address* (a 2-tuple ``(host,
[e2e-predictor]     port)``) and return the socket object. Passing the optional
[e2e-predictor]     *timeout* parameter will set the timeout on the socket instance
[e2e-predictor]     before attempting to connect. If no *timeout* is supplied, the
[e2e-predictor]     global default timeout setting returned by :func:`socket.getdefaulttimeout`
[e2e-predictor]     is used. If *source_address* is set it must be a tuple of (host, port)
[e2e-predictor]     for the socket to bind as a source address before making the connection.
[e2e-predictor]     An host of '' or port 0 tells the OS to use the default.
[e2e-predictor]     """
[e2e-predictor]
[e2e-predictor]     host, port = address
[e2e-predictor]     if host.startswith("["):
[e2e-predictor]         host = host.strip("[]")
[e2e-predictor]     err = None
[e2e-predictor]
[e2e-predictor]     # Using the value from allowed_gai_family() in the context of getaddrinfo lets
[e2e-predictor]     # us select whether to work with IPv4 DNS records, IPv6 records, or both.
[e2e-predictor]     # The original create_connection function always returns all records.
[e2e-predictor]     family = allowed_gai_family()
[e2e-predictor]
[e2e-predictor]     try:
[e2e-predictor]         host.encode("idna")
[e2e-predictor]     except UnicodeError:
[e2e-predictor]         raise LocationParseError(f"'{host}', label empty or too long") from None
[e2e-predictor]
[e2e-predictor] >   for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
[e2e-predictor]
[e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/util/connection.py:60:
[e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
[e2e-predictor]
[e2e-predictor] host = 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com'
[e2e-predictor] port = 6443, family =
[e2e-predictor] type = , proto = 0, flags = 0
[e2e-predictor]
[e2e-predictor] def getaddrinfo(host, port, family=0, type=0, proto=0, flags=0):
[e2e-predictor]     """Resolve host and port into list of address info entries.
[e2e-predictor]
[e2e-predictor]     Translate the host/port argument into a sequence of 5-tuples that contain
[e2e-predictor]     all the necessary arguments for creating a socket connected to that service.
[e2e-predictor]
[e2e-predictor]     host is a domain name, a string representation of an IPv4/v6 address or
[e2e-predictor]     None. port is a string service name such as 'http', a numeric port number or
[e2e-predictor]     None. By passing None as the value of host and port, you can pass NULL to
[e2e-predictor]     the underlying C API.
[e2e-predictor]
[e2e-predictor]     The family, type and proto arguments can be optionally specified in order to
[e2e-predictor]     narrow the list of addresses returned. Passing zero as a value for each of
[e2e-predictor]     these arguments selects the full range of results.
[e2e-predictor]     """
[e2e-predictor]     # We override this function since we want to translate the numeric family
[e2e-predictor]     # and socket type values to enum constants.
[e2e-predictor]     addrlist = []
[e2e-predictor] >   for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
[e2e-predictor] E   socket.gaierror: [Errno -2] Name or service not known
[e2e-predictor]
[e2e-predictor] /usr/lib64/python3.11/socket.py:974: gaierror
[e2e-predictor]
[e2e-predictor] The above exception was the direct cause of the following exception:
[e2e-predictor]
[e2e-predictor] self =
[e2e-predictor] method = 'POST'
[e2e-predictor] url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
[e2e-predictor] body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io...ry": "128Mi"}}, "runtime": "kserve-predictiveserver", "storageUri": "gs://kfserving-examples/models/lightgbm/iris"}}}}'
[e2e-predictor] headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'})
[e2e-predictor] retries = Retry(total=0, connect=None, read=None, redirect=None, status=None)
[e2e-predictor] redirect = False, assert_same_host = False, timeout = None, pool_timeout = None
[e2e-predictor] release_conn = True, chunked = False, body_pos = None, preload_content = True
[e2e-predictor] decode_content = True, response_kw = {} [e2e-predictor] parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices', query=None, fragment=None) [e2e-predictor] destination_scheme = None, conn = None, release_this_conn = True [e2e-predictor] http_tunnel_required = False, err = None, clean_exit = False [e2e-predictor] [e2e-predictor] def urlopen( # type: ignore[override] [e2e-predictor] self, [e2e-predictor] method: str, [e2e-predictor] url: str, [e2e-predictor] body: _TYPE_BODY | None = None, [e2e-predictor] headers: typing.Mapping[str, str] | None = None, [e2e-predictor] retries: Retry | bool | int | None = None, [e2e-predictor] redirect: bool = True, [e2e-predictor] assert_same_host: bool = True, [e2e-predictor] timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, [e2e-predictor] pool_timeout: int | None = None, [e2e-predictor] release_conn: bool | None = None, [e2e-predictor] chunked: bool = False, [e2e-predictor] body_pos: _TYPE_BODY_POSITION | None = None, [e2e-predictor] preload_content: bool = True, [e2e-predictor] decode_content: bool = True, [e2e-predictor] **response_kw: typing.Any, [e2e-predictor] ) -> BaseHTTPResponse: [e2e-predictor] """ [e2e-predictor] Get a connection from the pool and perform an HTTP request. This is the [e2e-predictor] lowest level call for making a request, so you'll need to specify all [e2e-predictor] the raw details. [e2e-predictor] [e2e-predictor] .. note:: [e2e-predictor] [e2e-predictor] More commonly, it's appropriate to use a convenience method [e2e-predictor] such as :meth:`request`. [e2e-predictor] [e2e-predictor] .. note:: [e2e-predictor] [e2e-predictor] `release_conn` will only behave as expected if [e2e-predictor] `preload_content=False` because we want to make [e2e-predictor] `preload_content=False` the default behaviour someday soon without [e2e-predictor] breaking backwards compatibility. 
[e2e-predictor] [e2e-predictor] :param method: [e2e-predictor] HTTP request method (such as GET, POST, PUT, etc.) [e2e-predictor] [e2e-predictor] :param url: [e2e-predictor] The URL to perform the request on. [e2e-predictor] [e2e-predictor] :param body: [e2e-predictor] Data to send in the request body, either :class:`str`, :class:`bytes`, [e2e-predictor] an iterable of :class:`str`/:class:`bytes`, or a file-like object. [e2e-predictor] [e2e-predictor] :param headers: [e2e-predictor] Dictionary of custom headers to send, such as User-Agent, [e2e-predictor] If-None-Match, etc. If None, pool headers are used. If provided, [e2e-predictor] these headers completely replace any pool-specific headers. [e2e-predictor] [e2e-predictor] :param retries: [e2e-predictor] Configure the number of retries to allow before raising a [e2e-predictor] :class:`~urllib3.exceptions.MaxRetryError` exception. [e2e-predictor] [e2e-predictor] If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a [e2e-predictor] :class:`~urllib3.util.retry.Retry` object for fine-grained control [e2e-predictor] over different types of retries. [e2e-predictor] Pass an integer number to retry connection errors that many times, [e2e-predictor] but no other types of errors. Pass zero to never retry. [e2e-predictor] [e2e-predictor] If ``False``, then retries are disabled and any exception is raised [e2e-predictor] immediately. Also, instead of raising a MaxRetryError on redirects, [e2e-predictor] the redirect response will be returned. [e2e-predictor] [e2e-predictor] :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. [e2e-predictor] [e2e-predictor] :param redirect: [e2e-predictor] If True, automatically handle redirects (status codes 301, 302, [e2e-predictor] 303, 307, 308). Each redirect counts as a retry. Disabling retries [e2e-predictor] will disable redirect, too. 
[e2e-predictor] [e2e-predictor] :param assert_same_host: [e2e-predictor] If ``True``, will make sure that the host of the pool requests is [e2e-predictor] consistent else will raise HostChangedError. When ``False``, you can [e2e-predictor] use the pool on an HTTP proxy and request foreign hosts. [e2e-predictor] [e2e-predictor] :param timeout: [e2e-predictor] If specified, overrides the default timeout for this one [e2e-predictor] request. It may be a float (in seconds) or an instance of [e2e-predictor] :class:`urllib3.util.Timeout`. [e2e-predictor] [e2e-predictor] :param pool_timeout: [e2e-predictor] If set and the pool is set to block=True, then this method will [e2e-predictor] block for ``pool_timeout`` seconds and raise EmptyPoolError if no [e2e-predictor] connection is available within the time period. [e2e-predictor] [e2e-predictor] :param bool preload_content: [e2e-predictor] If True, the response's body will be preloaded into memory. [e2e-predictor] [e2e-predictor] :param bool decode_content: [e2e-predictor] If True, will attempt to decode the body based on the [e2e-predictor] 'content-encoding' header. [e2e-predictor] [e2e-predictor] :param release_conn: [e2e-predictor] If False, then the urlopen call will not release the connection [e2e-predictor] back into the pool once a response is received (but will release if [e2e-predictor] you read the entire contents of the response such as when [e2e-predictor] `preload_content=True`). This is useful if you're not preloading [e2e-predictor] the response's content immediately. You will need to call [e2e-predictor] ``r.release_conn()`` on the response ``r`` to return the connection [e2e-predictor] back into the pool. If None, it takes the value of ``preload_content`` [e2e-predictor] which defaults to ``True``. [e2e-predictor] [e2e-predictor] :param bool chunked: [e2e-predictor] If True, urllib3 will send the body using chunked transfer [e2e-predictor] encoding. 
Otherwise, urllib3 will send the body using the standard [e2e-predictor] content-length form. Defaults to False. [e2e-predictor] [e2e-predictor] :param int body_pos: [e2e-predictor] Position to seek to in file-like body in the event of a retry or [e2e-predictor] redirect. Typically this won't need to be set because urllib3 will [e2e-predictor] auto-populate the value when needed. [e2e-predictor] """ [e2e-predictor] parsed_url = parse_url(url) [e2e-predictor] destination_scheme = parsed_url.scheme [e2e-predictor] [e2e-predictor] if headers is None: [e2e-predictor] headers = self.headers [e2e-predictor] [e2e-predictor] if not isinstance(retries, Retry): [e2e-predictor] retries = Retry.from_int(retries, redirect=redirect, default=self.retries) [e2e-predictor] [e2e-predictor] if release_conn is None: [e2e-predictor] release_conn = preload_content [e2e-predictor] [e2e-predictor] # Check host [e2e-predictor] if assert_same_host and not self.is_same_host(url): [e2e-predictor] raise HostChangedError(self, url, retries) [e2e-predictor] [e2e-predictor] # Ensure that the URL we're connecting to is properly encoded [e2e-predictor] if url.startswith("/"): [e2e-predictor] url = to_str(_encode_target(url)) [e2e-predictor] else: [e2e-predictor] url = to_str(parsed_url.url) [e2e-predictor] [e2e-predictor] conn = None [e2e-predictor] [e2e-predictor] # Track whether `conn` needs to be released before [e2e-predictor] # returning/raising/recursing. Update this variable if necessary, and [e2e-predictor] # leave `release_conn` constant throughout the function. That way, if [e2e-predictor] # the function recurses, the original value of `release_conn` will be [e2e-predictor] # passed down into the recursive call, and its value will be respected. [e2e-predictor] # [e2e-predictor] # See issue #651 [1] for details. 
[e2e-predictor] # [e2e-predictor] # [1] [e2e-predictor] release_this_conn = release_conn [e2e-predictor] [e2e-predictor] http_tunnel_required = connection_requires_http_tunnel( [e2e-predictor] self.proxy, self.proxy_config, destination_scheme [e2e-predictor] ) [e2e-predictor] [e2e-predictor] # Merge the proxy headers. Only done when not using HTTP CONNECT. We [e2e-predictor] # have to copy the headers dict so we can safely change it without those [e2e-predictor] # changes being reflected in anyone else's copy. [e2e-predictor] if not http_tunnel_required: [e2e-predictor] headers = headers.copy() # type: ignore[attr-defined] [e2e-predictor] headers.update(self.proxy_headers) # type: ignore[union-attr] [e2e-predictor] [e2e-predictor] # Must keep the exception bound to a separate variable or else Python 3 [e2e-predictor] # complains about UnboundLocalError. [e2e-predictor] err = None [e2e-predictor] [e2e-predictor] # Keep track of whether we cleanly exited the except block. This [e2e-predictor] # ensures we do proper cleanup in finally. [e2e-predictor] clean_exit = False [e2e-predictor] [e2e-predictor] # Rewind body position, if needed. Record current position [e2e-predictor] # for future rewinds in the event of a redirect/retry. [e2e-predictor] body_pos = set_file_position(body, body_pos) [e2e-predictor] [e2e-predictor] try: [e2e-predictor] # Request a connection from the queue. [e2e-predictor] timeout_obj = self._get_timeout(timeout) [e2e-predictor] conn = self._get_conn(timeout=pool_timeout) [e2e-predictor] [e2e-predictor] conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] [e2e-predictor] [e2e-predictor] # Is this a closed/new connection that requires CONNECT tunnelling? 
[e2e-predictor]             if self.proxy is not None and http_tunnel_required and conn.is_closed:
[e2e-predictor]                 try:
[e2e-predictor]                     self._prepare_proxy(conn)
[e2e-predictor]                 except (BaseSSLError, OSError, SocketTimeout) as e:
[e2e-predictor]                     self._raise_timeout(
[e2e-predictor]                         err=e, url=self.proxy.url, timeout_value=conn.timeout
[e2e-predictor]                     )
[e2e-predictor]                     raise
[e2e-predictor]
[e2e-predictor]             # If we're going to release the connection in ``finally:``, then
[e2e-predictor]             # the response doesn't need to know about the connection. Otherwise
[e2e-predictor]             # it will also try to release it and we'll have a double-release
[e2e-predictor]             # mess.
[e2e-predictor]             response_conn = conn if not release_conn else None
[e2e-predictor]
[e2e-predictor]             # Make the request on the HTTPConnection object
[e2e-predictor] >           response = self._make_request(
[e2e-predictor]                 conn,
[e2e-predictor]                 method,
[e2e-predictor]                 url,
[e2e-predictor]                 timeout=timeout_obj,
[e2e-predictor]                 body=body,
[e2e-predictor]                 headers=headers,
[e2e-predictor]                 chunked=chunked,
[e2e-predictor]                 retries=retries,
[e2e-predictor]                 response_conn=response_conn,
[e2e-predictor]                 preload_content=preload_content,
[e2e-predictor]                 decode_content=decode_content,
[e2e-predictor]                 **response_kw,
[e2e-predictor]             )
[e2e-predictor]
[e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:787:
[e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
[e2e-predictor]
[e2e-predictor] self =
[e2e-predictor] conn =
[e2e-predictor] method = 'POST'
[e2e-predictor] url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
[e2e-predictor] body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io...ry": "128Mi"}}, "runtime": "kserve-predictiveserver", "storageUri": "gs://kfserving-examples/models/lightgbm/iris"}}}}'
[e2e-predictor] headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'})
[e2e-predictor] retries = Retry(total=0, connect=None, read=None, redirect=None, status=None)
[e2e-predictor] timeout = Timeout(connect=None, read=None, total=None), chunked = False
[e2e-predictor] response_conn = None, preload_content = True, decode_content = True
[e2e-predictor] enforce_content_length = True
[e2e-predictor]
[e2e-predictor]     def _make_request(
[e2e-predictor]         self,
[e2e-predictor]         conn: BaseHTTPConnection,
[e2e-predictor]         method: str,
[e2e-predictor]         url: str,
[e2e-predictor]         body: _TYPE_BODY | None = None,
[e2e-predictor]         headers: typing.Mapping[str, str] | None = None,
[e2e-predictor]         retries: Retry | None = None,
[e2e-predictor]         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
[e2e-predictor]         chunked: bool = False,
[e2e-predictor]         response_conn: BaseHTTPConnection | None = None,
[e2e-predictor]         preload_content: bool = True,
[e2e-predictor]         decode_content: bool = True,
[e2e-predictor]         enforce_content_length: bool = True,
[e2e-predictor]     ) -> BaseHTTPResponse:
[e2e-predictor]         """
[e2e-predictor]         Perform a request on a given urllib connection object taken from our
[e2e-predictor]         pool.
[e2e-predictor]
[e2e-predictor]         :param conn:
[e2e-predictor]             a connection from one of our connection pools
[e2e-predictor]
[e2e-predictor]         :param method:
[e2e-predictor]             HTTP request method (such as GET, POST, PUT, etc.)
[e2e-predictor]
[e2e-predictor]         :param url:
[e2e-predictor]             The URL to perform the request on.
[e2e-predictor]
[e2e-predictor]         :param body:
[e2e-predictor]             Data to send in the request body, either :class:`str`, :class:`bytes`,
[e2e-predictor]             an iterable of :class:`str`/:class:`bytes`, or a file-like object.
[e2e-predictor]
[e2e-predictor]         :param headers:
[e2e-predictor]             Dictionary of custom headers to send, such as User-Agent,
[e2e-predictor]             If-None-Match, etc. If None, pool headers are used.
If provided, [e2e-predictor] these headers completely replace any pool-specific headers. [e2e-predictor] [e2e-predictor] :param retries: [e2e-predictor] Configure the number of retries to allow before raising a [e2e-predictor] :class:`~urllib3.exceptions.MaxRetryError` exception. [e2e-predictor] [e2e-predictor] Pass ``None`` to retry until you receive a response. Pass a [e2e-predictor] :class:`~urllib3.util.retry.Retry` object for fine-grained control [e2e-predictor] over different types of retries. [e2e-predictor] Pass an integer number to retry connection errors that many times, [e2e-predictor] but no other types of errors. Pass zero to never retry. [e2e-predictor] [e2e-predictor] If ``False``, then retries are disabled and any exception is raised [e2e-predictor] immediately. Also, instead of raising a MaxRetryError on redirects, [e2e-predictor] the redirect response will be returned. [e2e-predictor] [e2e-predictor] :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. [e2e-predictor] [e2e-predictor] :param timeout: [e2e-predictor] If specified, overrides the default timeout for this one [e2e-predictor] request. It may be a float (in seconds) or an instance of [e2e-predictor] :class:`urllib3.util.Timeout`. [e2e-predictor] [e2e-predictor] :param chunked: [e2e-predictor] If True, urllib3 will send the body using chunked transfer [e2e-predictor] encoding. Otherwise, urllib3 will send the body using the standard [e2e-predictor] content-length form. Defaults to False. [e2e-predictor] [e2e-predictor] :param response_conn: [e2e-predictor] Set this to ``None`` if you will handle releasing the connection or [e2e-predictor] set the connection to have the response release it. [e2e-predictor] [e2e-predictor] :param preload_content: [e2e-predictor] If True, the response's body will be preloaded during construction. 
[e2e-predictor] [e2e-predictor] :param decode_content: [e2e-predictor] If True, will attempt to decode the body based on the [e2e-predictor] 'content-encoding' header. [e2e-predictor] [e2e-predictor] :param enforce_content_length: [e2e-predictor] Enforce content length checking. Body returned by server must match [e2e-predictor] value of Content-Length header, if present. Otherwise, raise error. [e2e-predictor] """ [e2e-predictor] self.num_requests += 1 [e2e-predictor] [e2e-predictor] timeout_obj = self._get_timeout(timeout) [e2e-predictor] timeout_obj.start_connect() [e2e-predictor] conn.timeout = Timeout.resolve_default_timeout(timeout_obj.connect_timeout) [e2e-predictor] [e2e-predictor] try: [e2e-predictor] # Trigger any extra validation we need to do. [e2e-predictor] try: [e2e-predictor] self._validate_conn(conn) [e2e-predictor] except (SocketTimeout, BaseSSLError) as e: [e2e-predictor] self._raise_timeout(err=e, url=url, timeout_value=conn.timeout) [e2e-predictor] raise [e2e-predictor] [e2e-predictor] # _validate_conn() starts the connection to an HTTPS proxy [e2e-predictor] # so we need to wrap errors with 'ProxyError' here too. 
[e2e-predictor]         except (
[e2e-predictor]             OSError,
[e2e-predictor]             NewConnectionError,
[e2e-predictor]             TimeoutError,
[e2e-predictor]             BaseSSLError,
[e2e-predictor]             CertificateError,
[e2e-predictor]             SSLError,
[e2e-predictor]         ) as e:
[e2e-predictor]             new_e: Exception = e
[e2e-predictor]             if isinstance(e, (BaseSSLError, CertificateError)):
[e2e-predictor]                 new_e = SSLError(e)
[e2e-predictor]             # If the connection didn't successfully connect to it's proxy
[e2e-predictor]             # then there
[e2e-predictor]             if isinstance(
[e2e-predictor]                 new_e, (OSError, NewConnectionError, TimeoutError, SSLError)
[e2e-predictor]             ) and (conn and conn.proxy and not conn.has_connected_to_proxy):
[e2e-predictor]                 new_e = _wrap_proxy_error(new_e, conn.proxy.scheme)
[e2e-predictor] >           raise new_e
[e2e-predictor]
[e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:488:
[e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
[e2e-predictor]
[e2e-predictor] self =
[e2e-predictor] conn =
[e2e-predictor] method = 'POST'
[e2e-predictor] url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
[e2e-predictor] body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io...ry": "128Mi"}}, "runtime": "kserve-predictiveserver", "storageUri": "gs://kfserving-examples/models/lightgbm/iris"}}}}'
[e2e-predictor] headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'})
[e2e-predictor] retries = Retry(total=0, connect=None, read=None, redirect=None, status=None)
[e2e-predictor] timeout = Timeout(connect=None, read=None, total=None), chunked = False
[e2e-predictor] response_conn = None, preload_content = True, decode_content = True
[e2e-predictor] enforce_content_length = True
[e2e-predictor]
[e2e-predictor]     def _make_request(
[e2e-predictor]         self,
[e2e-predictor]         conn: BaseHTTPConnection,
[e2e-predictor]         method: str,
[e2e-predictor]         url: str,
[e2e-predictor]         body: _TYPE_BODY | None = None,
[e2e-predictor]         headers: typing.Mapping[str, str] | None = None,
[e2e-predictor]         retries: Retry | None = None,
[e2e-predictor]         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
[e2e-predictor]         chunked: bool = False,
[e2e-predictor]         response_conn: BaseHTTPConnection | None = None,
[e2e-predictor]         preload_content: bool = True,
[e2e-predictor]         decode_content: bool = True,
[e2e-predictor]         enforce_content_length: bool = True,
[e2e-predictor]     ) -> BaseHTTPResponse:
[e2e-predictor]         """
[e2e-predictor]         Perform a request on a given urllib connection object taken from our
[e2e-predictor]         pool.
[e2e-predictor]
[e2e-predictor]         :param conn:
[e2e-predictor]             a connection from one of our connection pools
[e2e-predictor]
[e2e-predictor]         :param method:
[e2e-predictor]             HTTP request method (such as GET, POST, PUT, etc.)
[e2e-predictor]
[e2e-predictor]         :param url:
[e2e-predictor]             The URL to perform the request on.
[e2e-predictor]
[e2e-predictor]         :param body:
[e2e-predictor]             Data to send in the request body, either :class:`str`, :class:`bytes`,
[e2e-predictor]             an iterable of :class:`str`/:class:`bytes`, or a file-like object.
[e2e-predictor]
[e2e-predictor]         :param headers:
[e2e-predictor]             Dictionary of custom headers to send, such as User-Agent,
[e2e-predictor]             If-None-Match, etc. If None, pool headers are used. If provided,
[e2e-predictor]             these headers completely replace any pool-specific headers.
[e2e-predictor]
[e2e-predictor]         :param retries:
[e2e-predictor]             Configure the number of retries to allow before raising a
[e2e-predictor]             :class:`~urllib3.exceptions.MaxRetryError` exception.
[e2e-predictor]
[e2e-predictor]             Pass ``None`` to retry until you receive a response. Pass a
[e2e-predictor]             :class:`~urllib3.util.retry.Retry` object for fine-grained control
[e2e-predictor]             over different types of retries.
[e2e-predictor] Pass an integer number to retry connection errors that many times, [e2e-predictor] but no other types of errors. Pass zero to never retry. [e2e-predictor] [e2e-predictor] If ``False``, then retries are disabled and any exception is raised [e2e-predictor] immediately. Also, instead of raising a MaxRetryError on redirects, [e2e-predictor] the redirect response will be returned. [e2e-predictor] [e2e-predictor] :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. [e2e-predictor] [e2e-predictor] :param timeout: [e2e-predictor] If specified, overrides the default timeout for this one [e2e-predictor] request. It may be a float (in seconds) or an instance of [e2e-predictor] :class:`urllib3.util.Timeout`. [e2e-predictor] [e2e-predictor] :param chunked: [e2e-predictor] If True, urllib3 will send the body using chunked transfer [e2e-predictor] encoding. Otherwise, urllib3 will send the body using the standard [e2e-predictor] content-length form. Defaults to False. [e2e-predictor] [e2e-predictor] :param response_conn: [e2e-predictor] Set this to ``None`` if you will handle releasing the connection or [e2e-predictor] set the connection to have the response release it. [e2e-predictor] [e2e-predictor] :param preload_content: [e2e-predictor] If True, the response's body will be preloaded during construction. [e2e-predictor] [e2e-predictor] :param decode_content: [e2e-predictor] If True, will attempt to decode the body based on the [e2e-predictor] 'content-encoding' header. [e2e-predictor] [e2e-predictor] :param enforce_content_length: [e2e-predictor] Enforce content length checking. Body returned by server must match [e2e-predictor] value of Content-Length header, if present. Otherwise, raise error. 
[e2e-predictor] """ [e2e-predictor] self.num_requests += 1 [e2e-predictor] [e2e-predictor] timeout_obj = self._get_timeout(timeout) [e2e-predictor] timeout_obj.start_connect() [e2e-predictor] conn.timeout = Timeout.resolve_default_timeout(timeout_obj.connect_timeout) [e2e-predictor] [e2e-predictor] try: [e2e-predictor] # Trigger any extra validation we need to do. [e2e-predictor] try: [e2e-predictor] > self._validate_conn(conn) [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:464: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] conn = [e2e-predictor] [e2e-predictor] def _validate_conn(self, conn: BaseHTTPConnection) -> None: [e2e-predictor] """ [e2e-predictor] Called right before a request is made, after the socket is created. [e2e-predictor] """ [e2e-predictor] super()._validate_conn(conn) [e2e-predictor] [e2e-predictor] # Force connect early to allow us to validate the connection. [e2e-predictor] if conn.is_closed: [e2e-predictor] > conn.connect() [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:1093: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] [e2e-predictor] def connect(self) -> None: [e2e-predictor] # Today we don't need to be doing this step before the /actual/ socket [e2e-predictor] # connection, however in the future we'll need to decide whether to [e2e-predictor] # create a new socket or re-use an existing "shared" socket as a part [e2e-predictor] # of the HTTP/2 handshake dance. 
[e2e-predictor] if self._tunnel_host is not None and self._tunnel_port is not None: [e2e-predictor] probe_http2_host = self._tunnel_host [e2e-predictor] probe_http2_port = self._tunnel_port [e2e-predictor] else: [e2e-predictor] probe_http2_host = self.host [e2e-predictor] probe_http2_port = self.port [e2e-predictor] [e2e-predictor] # Check if the target origin supports HTTP/2. [e2e-predictor] # If the value comes back as 'None' it means that the current thread [e2e-predictor] # is probing for HTTP/2 support. Otherwise, we're waiting for another [e2e-predictor] # probe to complete, or we get a value right away. [e2e-predictor] target_supports_http2: bool | None [e2e-predictor] if "h2" in ssl_.ALPN_PROTOCOLS: [e2e-predictor] target_supports_http2 = http2_probe.acquire_and_get( [e2e-predictor] host=probe_http2_host, port=probe_http2_port [e2e-predictor] ) [e2e-predictor] else: [e2e-predictor] # If HTTP/2 isn't going to be offered it doesn't matter if [e2e-predictor] # the target supports HTTP/2. Don't want to make a probe. [e2e-predictor] target_supports_http2 = False [e2e-predictor] [e2e-predictor] if self._connect_callback is not None: [e2e-predictor] self._connect_callback( [e2e-predictor] "before connect", [e2e-predictor] thread_id=threading.get_ident(), [e2e-predictor] target_supports_http2=target_supports_http2, [e2e-predictor] ) [e2e-predictor] [e2e-predictor] try: [e2e-predictor] sock: socket.socket | ssl.SSLSocket [e2e-predictor] > self.sock = sock = self._new_conn() [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connection.py:759: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] [e2e-predictor] def _new_conn(self) -> socket.socket: [e2e-predictor] """Establish a socket connection and set nodelay settings on it. [e2e-predictor] [e2e-predictor] :return: New socket connection. 
[e2e-predictor] """ [e2e-predictor] try: [e2e-predictor] sock = connection.create_connection( [e2e-predictor] (self._dns_host, self.port), [e2e-predictor] self.timeout, [e2e-predictor] source_address=self.source_address, [e2e-predictor] socket_options=self.socket_options, [e2e-predictor] ) [e2e-predictor] except socket.gaierror as e: [e2e-predictor] > raise NameResolutionError(self.host, self, e) from e [e2e-predictor] E urllib3.exceptions.NameResolutionError: HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known) [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connection.py:211: NameResolutionError [e2e-predictor] [e2e-predictor] The above exception was the direct cause of the following exception: [e2e-predictor] [e2e-predictor] rest_v2_client = [e2e-predictor] [e2e-predictor] @pytest.mark.predictor [e2e-predictor] @pytest.mark.asyncio(scope="session") [e2e-predictor] async def test_predictive_lightgbm_v2(rest_v2_client): [e2e-predictor] service_name = "isvc-predictive-lightgbm-v2" [e2e-predictor] protocol_version = "v2" [e2e-predictor] predictor = V1beta1PredictorSpec( [e2e-predictor] min_replicas=1, [e2e-predictor] model=V1beta1ModelSpec( [e2e-predictor] model_format=V1beta1ModelFormat(name="lightgbm"), [e2e-predictor] runtime="kserve-predictiveserver", [e2e-predictor] protocol_version=protocol_version, [e2e-predictor] storage_uri="gs://kfserving-examples/models/lightgbm/iris", [e2e-predictor] resources=V1ResourceRequirements( [e2e-predictor] requests={"cpu": "50m", "memory": "128Mi"}, [e2e-predictor] limits={"cpu": "100m", "memory": "512Mi"}, [e2e-predictor] ), [e2e-predictor] readiness_probe=client.V1Probe( [e2e-predictor] http_get=client.V1HTTPGetAction( [e2e-predictor] path=f"/v2/models/{service_name}/ready", 
port=8080 [e2e-predictor] ), [e2e-predictor] initial_delay_seconds=30, [e2e-predictor] ), [e2e-predictor] ), [e2e-predictor] ) [e2e-predictor] [e2e-predictor] isvc = V1beta1InferenceService( [e2e-predictor] api_version=constants.KSERVE_V1BETA1, [e2e-predictor] kind=constants.KSERVE_KIND_INFERENCESERVICE, [e2e-predictor] metadata=client.V1ObjectMeta( [e2e-predictor] name=service_name, [e2e-predictor] namespace=KSERVE_TEST_NAMESPACE, [e2e-predictor] labels={ [e2e-predictor] constants.KSERVE_LABEL_NETWORKING_VISIBILITY: constants.KSERVE_LABEL_NETWORKING_VISIBILITY_EXPOSED, [e2e-predictor] }, [e2e-predictor] ), [e2e-predictor] spec=V1beta1InferenceServiceSpec(predictor=predictor), [e2e-predictor] ) [e2e-predictor] [e2e-predictor] kserve_client = KServeClient( [e2e-predictor] config_file=os.environ.get("KUBECONFIG", "~/.kube/config") [e2e-predictor] ) [e2e-predictor] > kserve_client.create(isvc) [e2e-predictor] [e2e-predictor] predictor/test_predictive.py:305: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] inferenceservice = {'api_version': 'serving.kserve.io/v1beta1', [e2e-predictor] 'kind': 'InferenceService', [e2e-predictor] 'metadata': {'annotations': None, [e2e-predictor] ... 
 'worker_spec': None,
 'xgboost': None},
 'transformer': None},
 'status': None}
namespace = 'kserve-ci-e2e-test', watch = False, timeout_seconds = 600

    def create(
        self, inferenceservice, namespace=None, watch=False, timeout_seconds=600
    ):  # pylint:disable=inconsistent-return-statements
        """
        Create the inference service
        :param inferenceservice: inference service object
        :param namespace: defaults to current or default namespace
        :param watch: True to watch the created service until timeout elapsed or status is ready
        :param timeout_seconds: timeout seconds for watch, default to 600s
        :return: created inference service
        """

        version = inferenceservice.api_version.split("/")[1]

        if namespace is None:
            namespace = utils.get_isvc_namespace(inferenceservice)

        try:
>           outputs = self.api_instance.create_namespaced_custom_object(
                constants.KSERVE_GROUP,
                version,
                namespace,
                constants.KSERVE_PLURAL_INFERENCESERVICE,
                inferenceservice,
            )

../../python/kserve/kserve/api/kserve_client.py:145:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
group = 'serving.kserve.io', version = 'v1beta1'
namespace = 'kserve-ci-e2e-test', plural = 'inferenceservices'
body = {'api_version': 'serving.kserve.io/v1beta1',
 'kind': 'InferenceService',
 'metadata': {'annotations': None,
 ...
 'worker_spec': None,
 'xgboost': None},
 'transformer': None},
 'status': None}
kwargs = {'_return_http_data_only': True}

    def create_namespaced_custom_object(self, group, version, namespace, plural, body, **kwargs):  # noqa: E501
        """create_namespaced_custom_object  # noqa: E501

        Creates a namespace scoped Custom object  # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.create_namespaced_custom_object(group, version, namespace, plural, body, async_req=True)
        >>> result = thread.get()

        :param async_req bool: execute request asynchronously
        :param str group: The custom resource's group name (required)
        :param str version: The custom resource's version (required)
        :param str namespace: The custom resource's namespace (required)
        :param str plural: The custom resource's plural name. For TPRs this would be lowercase plural kind. (required)
        :param object body: The JSON schema of the Resource to create. (required)
        :param str pretty: If 'true', then the output is pretty printed.
        :param str dry_run: When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed
        :param str field_manager: fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint.
        :param str field_validation: fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. (optional)
        :param _preload_content: if False, the urllib3.HTTPResponse object will
                                 be returned without reading/decoding response
                                 data. Default is True.
        :param _request_timeout: timeout setting for this request. If one
                                 number provided, it will be total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
        :return: object
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
>       return self.create_namespaced_custom_object_with_http_info(group, version, namespace, plural, body, **kwargs)  # noqa: E501

../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api/custom_objects_api.py:231:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
group = 'serving.kserve.io', version = 'v1beta1'
namespace = 'kserve-ci-e2e-test', plural = 'inferenceservices'
body = {'api_version': 'serving.kserve.io/v1beta1',
 'kind': 'InferenceService',
 'metadata': {'annotations': None,
 ...
 'worker_spec': None,
 'xgboost': None},
 'transformer': None},
 'status': None}
kwargs = {'_return_http_data_only': True}
local_var_params = {'_return_http_data_only': True, 'all_params': ['group', 'version', 'namespace', 'plural', 'body', 'pretty', ...], 'au...
 'worker_spec': None,
 'xgboost': None},
 'transformer': None},
 'status': None}, ...}
all_params = ['group', 'version', 'namespace', 'plural', 'body', 'pretty', ...]
key = '_return_http_data_only', val = True, collection_formats = {}
path_params = {'group': 'serving.kserve.io', 'namespace': 'kserve-ci-e2e-test', 'plural': 'inferenceservices', 'version': 'v1beta1'}
query_params = []

    def create_namespaced_custom_object_with_http_info(self, group, version, namespace, plural, body, **kwargs):  # noqa: E501
        """create_namespaced_custom_object  # noqa: E501

        Creates a namespace scoped Custom object  # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.create_namespaced_custom_object_with_http_info(group, version, namespace, plural, body, async_req=True)
        >>> result = thread.get()

        :param async_req bool: execute request asynchronously
        :param str group: The custom resource's group name (required)
        :param str version: The custom resource's version (required)
        :param str namespace: The custom resource's namespace (required)
        :param str plural: The custom resource's plural name. For TPRs this would be lowercase plural kind. (required)
        :param object body: The JSON schema of the Resource to create. (required)
        :param str pretty: If 'true', then the output is pretty printed.
        :param str dry_run: When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed
        :param str field_manager: fieldManager is a name associated with the actor or entity that is making these changes.
            The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint.
        :param str field_validation: fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. (optional)
        :param _return_http_data_only: response data without head status code
                                       and headers
        :param _preload_content: if False, the urllib3.HTTPResponse object will
                                 be returned without reading/decoding response
                                 data. Default is True.
        :param _request_timeout: timeout setting for this request. If one
                                 number provided, it will be total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
        :return: tuple(object, status_code(int), headers(HTTPHeaderDict))
                 If the method is called asynchronously,
                 returns the request thread.
        """

        local_var_params = locals()

        all_params = [
            'group',
            'version',
            'namespace',
            'plural',
            'body',
            'pretty',
            'dry_run',
            'field_manager',
            'field_validation'
        ]
        all_params.extend(
            [
                'async_req',
                '_return_http_data_only',
                '_preload_content',
                '_request_timeout'
            ]
        )

        for key, val in six.iteritems(local_var_params['kwargs']):
            if key not in all_params:
                raise ApiTypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method create_namespaced_custom_object" % key
                )
            local_var_params[key] = val
        del local_var_params['kwargs']
        # verify the required parameter 'group' is set
        if self.api_client.client_side_validation and ('group' not in local_var_params or  # noqa: E501
                                                       local_var_params['group'] is None):  # noqa: E501
            raise ApiValueError("Missing the required parameter `group` when calling `create_namespaced_custom_object`")  # noqa: E501
        # verify the required parameter 'version' is set
        if self.api_client.client_side_validation and ('version' not in local_var_params or  # noqa: E501
                                                       local_var_params['version'] is None):  # noqa: E501
            raise ApiValueError("Missing the required parameter `version` when calling `create_namespaced_custom_object`")  # noqa: E501
        # verify the required parameter 'namespace' is set
        if self.api_client.client_side_validation and ('namespace' not in local_var_params or  # noqa: E501
                                                       local_var_params['namespace'] is None):  # noqa: E501
            raise ApiValueError("Missing the required parameter `namespace` when calling `create_namespaced_custom_object`")  # noqa: E501
        # verify the required parameter 'plural' is set
        if self.api_client.client_side_validation and ('plural' not in local_var_params or  # noqa: E501
                                                       local_var_params['plural'] is None):  # noqa: E501
            raise ApiValueError("Missing the required parameter `plural` when calling `create_namespaced_custom_object`")  # noqa: E501
        # verify the required parameter 'body' is set
        if self.api_client.client_side_validation and ('body' not in local_var_params or  # noqa: E501
                                                       local_var_params['body'] is None):  # noqa: E501
            raise ApiValueError("Missing the required parameter `body` when calling `create_namespaced_custom_object`")  # noqa: E501

        collection_formats = {}

        path_params = {}
        if 'group' in local_var_params:
            path_params['group'] = local_var_params['group']  # noqa: E501
        if 'version' in local_var_params:
            path_params['version'] = local_var_params['version']  # noqa: E501
        if 'namespace' in local_var_params:
            path_params['namespace'] = local_var_params['namespace']  # noqa: E501
        if 'plural' in local_var_params:
            path_params['plural'] = local_var_params['plural']  # noqa: E501

        query_params = []
        if 'pretty' in local_var_params and local_var_params['pretty'] is not None:  # noqa: E501
            query_params.append(('pretty', local_var_params['pretty']))  # noqa: E501
        if 'dry_run' in local_var_params and local_var_params['dry_run'] is not None:  # noqa: E501
            query_params.append(('dryRun', local_var_params['dry_run']))  # noqa: E501
        if 'field_manager' in local_var_params and local_var_params['field_manager'] is not None:  # noqa: E501
            query_params.append(('fieldManager', local_var_params['field_manager']))  # noqa: E501
        if 'field_validation' in local_var_params and local_var_params['field_validation'] is not None:  # noqa: E501
            query_params.append(('fieldValidation', local_var_params['field_validation']))  # noqa: E501

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None
        if 'body' in local_var_params:
            body_params = local_var_params['body']
        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.select_header_accept(
            ['application/json'])  # noqa: E501

        # Authentication setting
        auth_settings = ['BearerToken']  # noqa: E501

>       return self.api_client.call_api(
            '/apis/{group}/{version}/namespaces/{namespace}/{plural}', 'POST',
            path_params,
            query_params,
            header_params,
            body=body_params,
            post_params=form_params,
            files=local_var_files,
            response_type='object',  # noqa: E501
            auth_settings=auth_settings,
            async_req=local_var_params.get('async_req'),
            _return_http_data_only=local_var_params.get('_return_http_data_only'),  # noqa: E501
            _preload_content=local_var_params.get('_preload_content', True),
            _request_timeout=local_var_params.get('_request_timeout'),
            collection_formats=collection_formats)
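For orientation, the frame above builds the request URL by substituting the path parameters from this run into the generic CustomObjectsApi template before handing it to `call_api`. A minimal standalone sketch of that substitution, using only the standard library (the real client additionally honors `configuration.safe_chars_for_path_param` when quoting):

```python
# Sketch only: mirrors the path-template expansion the generated
# kubernetes client performs in __call_api (seen later in this traceback).
from urllib.parse import quote

template = "/apis/{group}/{version}/namespaces/{namespace}/{plural}"
path_params = {
    "group": "serving.kserve.io",
    "version": "v1beta1",
    "namespace": "kserve-ci-e2e-test",
    "plural": "inferenceservices",
}

resource_path = template
for k, v in path_params.items():
    # percent-encode each value and splice it into the template
    resource_path = resource_path.replace("{%s}" % k, quote(str(v), safe=""))

print(resource_path)
# /apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices
```

The expanded path matches the `resource_path` and `url` values shown in the deeper frames below.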
../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api/custom_objects_api.py:354:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
resource_path = '/apis/{group}/{version}/namespaces/{namespace}/{plural}'
method = 'POST'
path_params = {'group': 'serving.kserve.io', 'namespace': 'kserve-ci-e2e-test', 'plural': 'inferenceservices', 'version': 'v1beta1'}
query_params = []
header_params = {'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
body = {'api_version': 'serving.kserve.io/v1beta1',
 'kind': 'InferenceService',
 'metadata': {'annotations': None,
 ...
 'worker_spec': None,
 'xgboost': None},
 'transformer': None},
 'status': None}
post_params = [], files = {}, response_type = 'object'
auth_settings = ['BearerToken'], async_req = None, _return_http_data_only = True
collection_formats = {}, _preload_content = True, _request_timeout = None
_host = None

    def call_api(self, resource_path, method,
                 path_params=None, query_params=None, header_params=None,
                 body=None, post_params=None, files=None,
                 response_type=None, auth_settings=None, async_req=None,
                 _return_http_data_only=None, collection_formats=None,
                 _preload_content=True, _request_timeout=None, _host=None):
        """Makes the HTTP request (synchronous) and returns deserialized data.

        To make an async_req request, set the async_req parameter.

        :param resource_path: Path to method endpoint.
        :param method: Method to call.
        :param path_params: Path parameters in the url.
        :param query_params: Query parameters in the url.
        :param header_params: Header parameters to be
            placed in the request header.
        :param body: Request body.
        :param post_params dict: Request post form parameters,
            for `application/x-www-form-urlencoded`, `multipart/form-data`.
        :param auth_settings list: Auth Settings names for the request.
        :param response: Response data type.
        :param files dict: key -> filename, value -> filepath,
            for `multipart/form-data`.
        :param async_req bool: execute request asynchronously
        :param _return_http_data_only: response data without head status code
                                       and headers
        :param collection_formats: dict of collection formats for path, query,
            header, and post parameters.
        :param _preload_content: if False, the urllib3.HTTPResponse object will
            be returned without reading/decoding response
            data. Default is True.
        :param _request_timeout: timeout setting for this request. If one
            number provided, it will be total request
            timeout. It can also be a pair (tuple) of
            (connection, read) timeouts.
        :return:
            If async_req parameter is True,
            the request will be called asynchronously.
            The method will return the request thread.
            If parameter async_req is False or missing,
            then the method will return the response directly.
        """
        if not async_req:
>           return self.__call_api(resource_path, method,
                                   path_params, query_params, header_params,
                                   body, post_params, files,
                                   response_type, auth_settings,
                                   _return_http_data_only, collection_formats,
                                   _preload_content, _request_timeout, _host)

../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api_client.py:348:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
resource_path = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
method = 'POST'
path_params = [('group', 'serving.kserve.io'), ('version', 'v1beta1'), ('namespace', 'kserve-ci-e2e-test'), ('plural', 'inferenceservices')]
query_params = []
header_params = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
body = {'apiVersion': 'serving.kserve.io/v1beta1', 'kind': 'InferenceService', 'metadata': {'labels': {'networking.kserve.io/...{'httpGet': {'path': '/v2/models/isvc-predictive-lightgbm-v2/ready', 'port': 8080}, 'initialDelaySeconds': 30}, ...}}}}
post_params = [], files = {}, response_type = 'object'
auth_settings = ['BearerToken'], _return_http_data_only = True
collection_formats = {}, _preload_content = True, _request_timeout = None
_host = None

    def __call_api(
            self, resource_path, method, path_params=None,
            query_params=None, header_params=None, body=None, post_params=None,
            files=None, response_type=None, auth_settings=None,
            _return_http_data_only=None, collection_formats=None,
            _preload_content=True, _request_timeout=None, _host=None):

        config = self.configuration

        # header parameters
        header_params = header_params or {}
        header_params.update(self.default_headers)
        if self.cookie:
            header_params['Cookie'] = self.cookie
        if header_params:
            header_params = self.sanitize_for_serialization(header_params)
            header_params = dict(self.parameters_to_tuples(header_params,
                                                           collection_formats))

        # path parameters
        if path_params:
            path_params = self.sanitize_for_serialization(path_params)
            path_params = self.parameters_to_tuples(path_params,
                                                    collection_formats)
            for k, v in path_params:
                # specified safe chars, encode everything
                resource_path = resource_path.replace(
                    '{%s}' % k,
                    quote(str(v), safe=config.safe_chars_for_path_param)
                )

        # query parameters
        if query_params:
            query_params = self.sanitize_for_serialization(query_params)
            query_params = self.parameters_to_tuples(query_params,
                                                     collection_formats)

        # post parameters
        if post_params or files:
            post_params = post_params if post_params else []
            post_params = self.sanitize_for_serialization(post_params)
            post_params = self.parameters_to_tuples(post_params,
                                                    collection_formats)
            post_params.extend(self.files_parameters(files))

        # auth setting
        self.update_params_for_auth(header_params, query_params, auth_settings)

        # body
        if body:
            body = self.sanitize_for_serialization(body)

        # request url
        if _host is None:
            url = self.configuration.host + resource_path
        else:
            # use server/host defined in path or operation instead
            url = _host + resource_path

        # perform request and return response
>       response_data = self.request(
            method, url, query_params=query_params, headers=header_params,
            post_params=post_params, body=body,
            _preload_content=_preload_content,
            _request_timeout=_request_timeout)

../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api_client.py:180:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
method = 'POST'
url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
query_params = []
headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
post_params = []
body = {'apiVersion': 'serving.kserve.io/v1beta1', 'kind': 'InferenceService', 'metadata': {'labels': {'networking.kserve.io/...{'httpGet': {'path': '/v2/models/isvc-predictive-lightgbm-v2/ready', 'port': 8080}, 'initialDelaySeconds': 30}, ...}}}}
_preload_content = True, _request_timeout = None

    def request(self, method, url, query_params=None, headers=None,
                post_params=None, body=None, _preload_content=True,
                _request_timeout=None):
        """Makes the HTTP request using
           RESTClient."""
        if method == "GET":
            return self.rest_client.GET(url,
                                        query_params=query_params,
                                        _preload_content=_preload_content,
                                        _request_timeout=_request_timeout,
                                        headers=headers)
        elif method == "HEAD":
            return self.rest_client.HEAD(url,
                                         query_params=query_params,
                                         _preload_content=_preload_content,
                                         _request_timeout=_request_timeout,
                                         headers=headers)
        elif method == "OPTIONS":
            return self.rest_client.OPTIONS(url,
                                            query_params=query_params,
                                            headers=headers,
                                            _preload_content=_preload_content,
                                            _request_timeout=_request_timeout)
        elif method == "POST":
>           return self.rest_client.POST(url,
                                         query_params=query_params,
                                         headers=headers,
                                         post_params=post_params,
                                         _preload_content=_preload_content,
                                         _request_timeout=_request_timeout,

../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api_client.py:391:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
query_params = [], post_params = []
body = {'apiVersion': 'serving.kserve.io/v1beta1', 'kind': 'InferenceService', 'metadata': {'labels': {'networking.kserve.io/...{'httpGet': {'path': '/v2/models/isvc-predictive-lightgbm-v2/ready', 'port':
8080}, 'initialDelaySeconds': 30}, ...}}}}
_preload_content = True, _request_timeout = None

    def POST(self, url, headers=None, query_params=None, post_params=None,
             body=None, _preload_content=True, _request_timeout=None):
>       return self.request("POST", url,
                            headers=headers,
                            query_params=query_params,
                            post_params=post_params,
                            _preload_content=_preload_content,
                            _request_timeout=_request_timeout,
                            body=body)

../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/rest.py:279:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
method = 'POST'
url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
query_params = []
headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
body = {'apiVersion': 'serving.kserve.io/v1beta1', 'kind': 'InferenceService', 'metadata': {'labels': {'networking.kserve.io/...{'httpGet': {'path': '/v2/models/isvc-predictive-lightgbm-v2/ready', 'port': 8080}, 'initialDelaySeconds': 30}, ...}}}}
post_params = {}, _preload_content = True, _request_timeout = None

    def request(self, method, url, query_params=None, headers=None,
                body=None, post_params=None, _preload_content=True,
                _request_timeout=None):
        """Perform requests.

        :param method: http request method
        :param url: http request url
        :param query_params: query parameters in the url
        :param headers: http request headers
        :param body: request json body, for `application/json`
        :param post_params: request post parameters,
                            `application/x-www-form-urlencoded`
                            and `multipart/form-data`
        :param _preload_content: if False, the urllib3.HTTPResponse object will
                                 be returned without reading/decoding response
                                 data. Default is True.
        :param _request_timeout: timeout setting for this request. If one
                                 number provided, it will be total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
        """
        method = method.upper()
        assert method in ['GET', 'HEAD', 'DELETE', 'POST', 'PUT',
                          'PATCH', 'OPTIONS']

        if post_params and body:
            raise ApiValueError(
                "body parameter cannot be used with post_params parameter."
            )

        post_params = post_params or {}
        headers = headers or {}

        timeout = None
        if _request_timeout:
            if isinstance(_request_timeout, (int, ) if six.PY3 else (int, long)):  # noqa: E501,F821
                timeout = urllib3.Timeout(total=_request_timeout)
            elif (isinstance(_request_timeout, tuple) and
                  len(_request_timeout) == 2):
                timeout = urllib3.Timeout(
                    connect=_request_timeout[0], read=_request_timeout[1])

        if 'Content-Type' not in headers:
            headers['Content-Type'] = 'application/json'

        try:
            # For `POST`, `PUT`, `PATCH`, `OPTIONS`, `DELETE`
            if method in ['POST', 'PUT', 'PATCH', 'OPTIONS', 'DELETE']:
                if query_params:
                    url += '?' + urlencode(query_params)
                if (re.search('json', headers['Content-Type'], re.IGNORECASE) or
                        headers['Content-Type'] == 'application/apply-patch+yaml'):
                    if headers['Content-Type'] == 'application/json-patch+json':
                        if not isinstance(body, list):
                            headers['Content-Type'] = \
                                'application/strategic-merge-patch+json'
                    request_body = None
                    if body is not None:
                        request_body = json.dumps(body)
>                   r = self.pool_manager.request(
                        method, url,
                        body=request_body,
                        preload_content=_preload_content,
                        timeout=timeout,

../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/rest.py:172:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
method = 'POST'
url =
'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices' [e2e-predictor] body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io...ry": "128Mi"}}, "runtime": "kserve-predictiveserver", "storageUri": "gs://kfserving-examples/models/lightgbm/iris"}}}}' [e2e-predictor] fields = None [e2e-predictor] headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'} [e2e-predictor] json = None [e2e-predictor] urlopen_kw = {'body': '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking....veserver", "storageUri": "gs://kfserving-examples/models/lightgbm/iris"}}}}', 'preload_content': True, 'timeout': None} [e2e-predictor] [e2e-predictor] def request( [e2e-predictor] self, [e2e-predictor] method: str, [e2e-predictor] url: str, [e2e-predictor] body: _TYPE_BODY | None = None, [e2e-predictor] fields: _TYPE_FIELDS | None = None, [e2e-predictor] headers: typing.Mapping[str, str] | None = None, [e2e-predictor] json: typing.Any | None = None, [e2e-predictor] **urlopen_kw: typing.Any, [e2e-predictor] ) -> BaseHTTPResponse: [e2e-predictor] """ [e2e-predictor] Make a request using :meth:`urlopen` with the appropriate encoding of [e2e-predictor] ``fields`` based on the ``method`` used. [e2e-predictor] [e2e-predictor] This is a convenience method that requires the least amount of manual [e2e-predictor] effort. It can be used in most situations, while still having the [e2e-predictor] option to drop down to more specific methods when necessary, such as [e2e-predictor] :meth:`request_encode_url`, :meth:`request_encode_body`, [e2e-predictor] or even the lowest level :meth:`urlopen`. [e2e-predictor] [e2e-predictor] :param method: [e2e-predictor] HTTP request method (such as GET, POST, PUT, etc.) 
        :param url:
            The URL to perform the request on.

        :param body:
            Data to send in the request body, either :class:`str`, :class:`bytes`,
            an iterable of :class:`str`/:class:`bytes`, or a file-like object.

        :param fields:
            Data to encode and send in the URL or request body, depending on ``method``.

        :param headers:
            Dictionary of custom headers to send, such as User-Agent,
            If-None-Match, etc. If None, pool headers are used. If provided,
            these headers completely replace any pool-specific headers.

        :param json:
            Data to encode and send as JSON with UTF-encoded in the request body.
            The ``"Content-Type"`` header will be set to ``"application/json"``
            unless specified otherwise.
        """
        method = method.upper()

        if json is not None and body is not None:
            raise TypeError(
                "request got values for both 'body' and 'json' parameters which are mutually exclusive"
            )

        if json is not None:
            if headers is None:
                headers = self.headers

            if not ("content-type" in map(str.lower, headers.keys())):
                headers = HTTPHeaderDict(headers)
                headers["Content-Type"] = "application/json"

            body = _json.dumps(json, separators=(",", ":"), ensure_ascii=False).encode(
                "utf-8"
            )

        if body is not None:
            urlopen_kw["body"] = body

        if method in self._encode_url_methods:
            return self.request_encode_url(
                method,
                url,
                fields=fields,  # type: ignore[arg-type]
                headers=headers,
                **urlopen_kw,
            )
        else:
>           return self.request_encode_body(
                method, url, fields=fields, headers=headers, **urlopen_kw
            )

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/_request_methods.py:143:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
method = 'POST'
url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
fields = None
headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
encode_multipart = True, multipart_boundary = None
urlopen_kw = {'body': '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking....veserver", "storageUri": "gs://kfserving-examples/models/lightgbm/iris"}}}}', 'preload_content': True, 'timeout': None}
extra_kw = {'body': '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking....nt': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'}), 'preload_content': True, 'timeout': None}

    def request_encode_body(
        self,
        method: str,
        url: str,
        fields: _TYPE_FIELDS | None = None,
        headers: typing.Mapping[str, str] | None = None,
        encode_multipart: bool = True,
        multipart_boundary: str | None = None,
        **urlopen_kw: str,
    ) -> BaseHTTPResponse:
        """
        Make a request using :meth:`urlopen` with the ``fields`` encoded in
        the body. This is useful for request methods like POST, PUT, PATCH, etc.

        When ``encode_multipart=True`` (default), then
        :func:`urllib3.encode_multipart_formdata` is used to encode
        the payload with the appropriate content type. Otherwise
        :func:`urllib.parse.urlencode` is used with the
        'application/x-www-form-urlencoded' content type.

        Multipart encoding must be used when posting files, and it's reasonably
        safe to use it in other times too. However, it may break request
        signing, such as with OAuth.

        Supports an optional ``fields`` parameter of key/value strings AND
        key/filetuple. A filetuple is a (filename, data, MIME type) tuple where
        the MIME type is optional. For example::

            fields = {
                'foo': 'bar',
                'fakefile': ('foofile.txt', 'contents of foofile'),
                'realfile': ('barfile.txt', open('realfile').read()),
                'typedfile': ('bazfile.bin', open('bazfile').read(),
                              'image/jpeg'),
                'nonamefile': 'contents of nonamefile field',
            }

        When uploading a file, providing a filename (the first parameter of the
        tuple) is optional but recommended to best mimic behavior of browsers.

        Note that if ``headers`` are supplied, the 'Content-Type' header will
        be overwritten because it depends on the dynamic random boundary string
        which is used to compose the body of the request. The random boundary
        string can be explicitly set with the ``multipart_boundary`` parameter.
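The urlencoded branch of the encoding choice described above can be sketched with the standard library alone. `encode_form_body` is a hypothetical helper written for illustration, not urllib3's actual implementation; the multipart branch (which urllib3 handles via `encode_multipart_formdata` and a generated boundary) is deliberately stubbed out here.

```python
from urllib.parse import urlencode


def encode_form_body(fields, encode_multipart=False):
    """Hypothetical sketch of the body/content-type choice described above.

    Returns a (body, content_type) pair, mirroring what request_encode_body
    passes down to urlopen for the non-multipart case.
    """
    if encode_multipart:
        # urllib3 delegates this case to encode_multipart_formdata(), which
        # also generates the random boundary string mentioned in the docstring.
        raise NotImplementedError("multipart path omitted in this sketch")
    # Otherwise the fields are urlencoded, per the docstring above.
    return urlencode(fields), "application/x-www-form-urlencoded"


body, content_type = encode_form_body({"foo": "bar", "baz": "1"})
```

Note this only applies when `fields` is given; in the traceback above `fields` is `None` and the pre-serialized JSON `body` from the kubernetes client is passed through unchanged.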
        :param method:
            HTTP request method (such as GET, POST, PUT, etc.)

        :param url:
            The URL to perform the request on.

        :param fields:
            Data to encode and send in the request body.

        :param headers:
            Dictionary of custom headers to send, such as User-Agent,
            If-None-Match, etc. If None, pool headers are used. If provided,
            these headers completely replace any pool-specific headers.

        :param encode_multipart:
            If True, encode the ``fields`` using the multipart/form-data MIME
            format.

        :param multipart_boundary:
            If not specified, then a random boundary will be generated using
            :func:`urllib3.filepost.choose_boundary`.
        """
        if headers is None:
            headers = self.headers

        extra_kw: dict[str, typing.Any] = {"headers": HTTPHeaderDict(headers)}
        body: bytes | str

        if fields:
            if "body" in urlopen_kw:
                raise TypeError(
                    "request got values for both 'fields' and 'body', can only specify one."
                )

            if encode_multipart:
                body, content_type = encode_multipart_formdata(
                    fields, boundary=multipart_boundary
                )
            else:
                body, content_type = (
                    urlencode(fields),  # type: ignore[arg-type]
                    "application/x-www-form-urlencoded",
                )

            extra_kw["body"] = body
            extra_kw["headers"].setdefault("Content-Type", content_type)

        extra_kw.update(urlopen_kw)

>       return self.urlopen(method, url, **extra_kw)

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/_request_methods.py:278:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
method = 'POST'
url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
redirect = True
kw = {'assert_same_host': False, 'body': '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata...', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'}), 'preload_content': True, ...}
u = Url(scheme='https', auth=None, host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', p...443, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices', query=None, fragment=None)
conn =

    def urlopen(  # type: ignore[override]
        self, method: str, url: str, redirect: bool = True, **kw: typing.Any
    ) -> BaseHTTPResponse:
        """
        Same as :meth:`urllib3.HTTPConnectionPool.urlopen`
        with custom cross-host redirect logic and only sends the request-uri
        portion of the ``url``.

        The given ``url`` parameter must be absolute, such that an appropriate
        :class:`urllib3.connectionpool.ConnectionPool` can be chosen for it.
        """
        u = parse_url(url)

        if u.scheme is None:
            warnings.warn(
                "URLs without a scheme (ie 'https://') are deprecated and will raise an error "
                "in a future version of urllib3. To avoid this DeprecationWarning ensure all URLs "
                "start with 'https://' or 'http://'. Read more in this issue: "
                "https://github.com/urllib3/urllib3/issues/2920",
                category=DeprecationWarning,
                stacklevel=2,
            )

        conn = self.connection_from_host(u.host, port=u.port, scheme=u.scheme)

        kw["assert_same_host"] = False
        kw["redirect"] = False

        if "headers" not in kw:
            kw["headers"] = self.headers

        if self._proxy_requires_url_absolute_form(u):
            response = conn.urlopen(method, url, **kw)
        else:
>           response = conn.urlopen(method, u.request_uri, **kw)

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/poolmanager.py:457:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
method = 'POST'
url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io...ry": "128Mi"}}, "runtime": "kserve-predictiveserver", "storageUri": "gs://kfserving-examples/models/lightgbm/iris"}}}}'
headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'})
retries = Retry(total=2, connect=None, read=None, redirect=None, status=None)
redirect = False, assert_same_host = False, timeout = None, pool_timeout = None
release_conn = True, chunked = False, body_pos = None, preload_content = True
decode_content = True, response_kw = {}
parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices', query=None, fragment=None)
destination_scheme = None, conn = None, release_this_conn = True
http_tunnel_required = False
err = NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.c...a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")
clean_exit = False

    def urlopen(  # type: ignore[override]
        self,
        method: str,
        url: str,
        body: _TYPE_BODY | None = None,
        headers: typing.Mapping[str, str] | None = None,
        retries: Retry | bool | int | None = None,
        redirect: bool = True,
        assert_same_host: bool = True,
        timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
        pool_timeout: int | None = None,
        release_conn: bool | None = None,
        chunked: bool = False,
        body_pos: _TYPE_BODY_POSITION | None = None,
        preload_content: bool = True,
        decode_content: bool = True,
        **response_kw: typing.Any,
    ) -> BaseHTTPResponse:
        """
        Get a connection from the pool and perform an HTTP request. This is the
        lowest level call for making a request, so you'll need to specify all
        the raw details.

        .. note::

           More commonly, it's appropriate to use a convenience method
           such as :meth:`request`.

        .. note::

           `release_conn` will only behave as expected if
           `preload_content=False` because we want to make
           `preload_content=False` the default behaviour someday soon without
           breaking backwards compatibility.

        :param method:
            HTTP request method (such as GET, POST, PUT, etc.)

        :param url:
            The URL to perform the request on.

        :param body:
            Data to send in the request body, either :class:`str`, :class:`bytes`,
            an iterable of :class:`str`/:class:`bytes`, or a file-like object.

        :param headers:
            Dictionary of custom headers to send, such as User-Agent,
            If-None-Match, etc. If None, pool headers are used. If provided,
            these headers completely replace any pool-specific headers.

        :param retries:
            Configure the number of retries to allow before raising a
            :class:`~urllib3.exceptions.MaxRetryError` exception.

            If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a
            :class:`~urllib3.util.retry.Retry` object for fine-grained control
            over different types of retries.
            Pass an integer number to retry connection errors that many times,
            but no other types of errors. Pass zero to never retry.

            If ``False``, then retries are disabled and any exception is raised
            immediately. Also, instead of raising a MaxRetryError on redirects,
            the redirect response will be returned.

        :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.

        :param redirect:
            If True, automatically handle redirects (status codes 301, 302,
            303, 307, 308). Each redirect counts as a retry. Disabling retries
            will disable redirect, too.

        :param assert_same_host:
            If ``True``, will make sure that the host of the pool requests is
            consistent else will raise HostChangedError. When ``False``, you can
            use the pool on an HTTP proxy and request foreign hosts.

        :param timeout:
            If specified, overrides the default timeout for this one
            request. It may be a float (in seconds) or an instance of
            :class:`urllib3.util.Timeout`.

        :param pool_timeout:
            If set and the pool is set to block=True, then this method will
            block for ``pool_timeout`` seconds and raise EmptyPoolError if no
            connection is available within the time period.

        :param bool preload_content:
            If True, the response's body will be preloaded into memory.

        :param bool decode_content:
            If True, will attempt to decode the body based on the
            'content-encoding' header.
        :param release_conn:
            If False, then the urlopen call will not release the connection
            back into the pool once a response is received (but will release if
            you read the entire contents of the response such as when
            `preload_content=True`). This is useful if you're not preloading
            the response's content immediately. You will need to call
            ``r.release_conn()`` on the response ``r`` to return the connection
            back into the pool. If None, it takes the value of ``preload_content``
            which defaults to ``True``.

        :param bool chunked:
            If True, urllib3 will send the body using chunked transfer
            encoding. Otherwise, urllib3 will send the body using the standard
            content-length form. Defaults to False.

        :param int body_pos:
            Position to seek to in file-like body in the event of a retry or
            redirect. Typically this won't need to be set because urllib3 will
            auto-populate the value when needed.
        """
        parsed_url = parse_url(url)
        destination_scheme = parsed_url.scheme

        if headers is None:
            headers = self.headers

        if not isinstance(retries, Retry):
            retries = Retry.from_int(retries, redirect=redirect, default=self.retries)

        if release_conn is None:
            release_conn = preload_content

        # Check host
        if assert_same_host and not self.is_same_host(url):
            raise HostChangedError(self, url, retries)

        # Ensure that the URL we're connecting to is properly encoded
        if url.startswith("/"):
            url = to_str(_encode_target(url))
        else:
            url = to_str(parsed_url.url)

        conn = None

        # Track whether `conn` needs to be released before
        # returning/raising/recursing. Update this variable if necessary, and
        # leave `release_conn` constant throughout the function. That way, if
        # the function recurses, the original value of `release_conn` will be
        # passed down into the recursive call, and its value will be respected.
        #
        # See issue #651 [1] for details.
        #
        # [1]
        release_this_conn = release_conn

        http_tunnel_required = connection_requires_http_tunnel(
            self.proxy, self.proxy_config, destination_scheme
        )

        # Merge the proxy headers. Only done when not using HTTP CONNECT. We
        # have to copy the headers dict so we can safely change it without those
        # changes being reflected in anyone else's copy.
        if not http_tunnel_required:
            headers = headers.copy()  # type: ignore[attr-defined]
            headers.update(self.proxy_headers)  # type: ignore[union-attr]

        # Must keep the exception bound to a separate variable or else Python 3
        # complains about UnboundLocalError.
        err = None

        # Keep track of whether we cleanly exited the except block. This
        # ensures we do proper cleanup in finally.
        clean_exit = False

        # Rewind body position, if needed. Record current position
        # for future rewinds in the event of a redirect/retry.
        body_pos = set_file_position(body, body_pos)

        try:
            # Request a connection from the queue.
            timeout_obj = self._get_timeout(timeout)
            conn = self._get_conn(timeout=pool_timeout)

            conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]

            # Is this a closed/new connection that requires CONNECT tunnelling?
            if self.proxy is not None and http_tunnel_required and conn.is_closed:
                try:
                    self._prepare_proxy(conn)
                except (BaseSSLError, OSError, SocketTimeout) as e:
                    self._raise_timeout(
                        err=e, url=self.proxy.url, timeout_value=conn.timeout
                    )
                    raise

            # If we're going to release the connection in ``finally:``, then
            # the response doesn't need to know about the connection. Otherwise
            # it will also try to release it and we'll have a double-release
            # mess.
            response_conn = conn if not release_conn else None

            # Make the request on the HTTPConnection object
            response = self._make_request(
                conn,
                method,
                url,
                timeout=timeout_obj,
                body=body,
                headers=headers,
                chunked=chunked,
                retries=retries,
                response_conn=response_conn,
                preload_content=preload_content,
                decode_content=decode_content,
                **response_kw,
            )

            # Everything went great!
            clean_exit = True

        except EmptyPoolError:
            # Didn't get a connection from the pool, no need to clean up
            clean_exit = True
            release_this_conn = False
            raise

        except (
            TimeoutError,
            HTTPException,
            OSError,
            ProtocolError,
            BaseSSLError,
            SSLError,
            CertificateError,
            ProxyError,
        ) as e:
            # Discard the connection for these exceptions. It will be
            # replaced during the next _get_conn() call.
            clean_exit = False
            new_e: Exception = e
            if isinstance(e, (BaseSSLError, CertificateError)):
                new_e = SSLError(e)
            if isinstance(
                new_e,
                (
                    OSError,
                    NewConnectionError,
                    TimeoutError,
                    SSLError,
                    HTTPException,
                ),
            ) and (conn and conn.proxy and not conn.has_connected_to_proxy):
                new_e = _wrap_proxy_error(new_e, conn.proxy.scheme)
            elif isinstance(new_e, (OSError, HTTPException)):
                new_e = ProtocolError("Connection aborted.", new_e)

            retries = retries.increment(
                method, url, error=new_e, _pool=self, _stacktrace=sys.exc_info()[2]
            )
            retries.sleep()

            # Keep track of the error for the retry warning.
            err = e

        finally:
            if not clean_exit:
                # We hit some kind of exception, handled or otherwise. We need
                # to throw the connection away unless explicitly told not to.
                # Close the connection, set the variable to None, and make sure
                # we put the None back in the pool to avoid leaking it.
                if conn:
                    conn.close()
                    conn = None
                release_this_conn = True

            if release_this_conn:
                # Put the connection back to be reused. If the connection is
                # expired then it will be None, which will get replaced with a
                # fresh connection during _get_conn.
                self._put_conn(conn)

        if not conn:
            # Try again
            log.warning(
                "Retrying (%r) after connection broken by '%r': %s", retries, err, url
            )
>           return self.urlopen(
                method,
                url,
                body,
                headers,
                retries,
                redirect,
                assert_same_host,
                timeout=timeout,
                pool_timeout=pool_timeout,
                release_conn=release_conn,
                chunked=chunked,
                body_pos=body_pos,
                preload_content=preload_content,
                decode_content=decode_content,
                **response_kw,
            )

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:871:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
method = 'POST'
url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io...ry": "128Mi"}}, "runtime": "kserve-predictiveserver", "storageUri": "gs://kfserving-examples/models/lightgbm/iris"}}}}'
headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'})
retries = Retry(total=1, connect=None, read=None, redirect=None, status=None)
redirect = False, assert_same_host = False, timeout = None, pool_timeout = None
release_conn = True, chunked = False, body_pos = None, preload_content = True
decode_content = True, response_kw = {}
parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices', query=None, fragment=None)
destination_scheme = None, conn = None, release_this_conn = True
http_tunnel_required = False
err = NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.c...a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")
clean_exit = False

    def urlopen(  # type: ignore[override]
        self,
        method: str,
        url: str,
        body: _TYPE_BODY | None = None,
        headers: typing.Mapping[str, str] | None = None,
        retries: Retry | bool | int | None = None,
        redirect: bool = True,
        assert_same_host: bool = True,
        timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
        pool_timeout: int | None = None,
        release_conn: bool | None = None,
        chunked: bool = False,
        body_pos: _TYPE_BODY_POSITION | None = None,
        preload_content: bool = True,
        decode_content: bool = True,
        **response_kw: typing.Any,
    ) -> BaseHTTPResponse:
        """
        Get a connection from the pool and perform an HTTP request. This is the
        lowest level call for making a request, so you'll need to specify all
        the raw details.

        .. note::

           More commonly, it's appropriate to use a convenience method
           such as :meth:`request`.

        .. note::

           `release_conn` will only behave as expected if
           `preload_content=False` because we want to make
           `preload_content=False` the default behaviour someday soon without
           breaking backwards compatibility.

        :param method:
            HTTP request method (such as GET, POST, PUT, etc.)

        :param url:
            The URL to perform the request on.

        :param body:
            Data to send in the request body, either :class:`str`, :class:`bytes`,
            an iterable of :class:`str`/:class:`bytes`, or a file-like object.

        :param headers:
            Dictionary of custom headers to send, such as User-Agent,
            If-None-Match, etc. If None, pool headers are used. If provided,
            these headers completely replace any pool-specific headers.

        :param retries:
            Configure the number of retries to allow before raising a
            :class:`~urllib3.exceptions.MaxRetryError` exception.

            If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a
            :class:`~urllib3.util.retry.Retry` object for fine-grained control
            over different types of retries.
            Pass an integer number to retry connection errors that many times,
            but no other types of errors. Pass zero to never retry.

            If ``False``, then retries are disabled and any exception is raised
            immediately. Also, instead of raising a MaxRetryError on redirects,
            the redirect response will be returned.

        :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.

        :param redirect:
            If True, automatically handle redirects (status codes 301, 302,
            303, 307, 308). Each redirect counts as a retry. Disabling retries
            will disable redirect, too.

        :param assert_same_host:
            If ``True``, will make sure that the host of the pool requests is
            consistent else will raise HostChangedError. When ``False``, you can
            use the pool on an HTTP proxy and request foreign hosts.

        :param timeout:
            If specified, overrides the default timeout for this one
            request. It may be a float (in seconds) or an instance of
            :class:`urllib3.util.Timeout`.

        :param pool_timeout:
            If set and the pool is set to block=True, then this method will
            block for ``pool_timeout`` seconds and raise EmptyPoolError if no
            connection is available within the time period.

        :param bool preload_content:
            If True, the response's body will be preloaded into memory.

        :param bool decode_content:
            If True, will attempt to decode the body based on the
            'content-encoding' header.

        :param release_conn:
            If False, then the urlopen call will not release the connection
            back into the pool once a response is received (but will release if
            you read the entire contents of the response such as when
            `preload_content=True`). This is useful if you're not preloading
            the response's content immediately. You will need to call
            ``r.release_conn()`` on the response ``r`` to return the connection
            back into the pool. If None, it takes the value of ``preload_content``
            which defaults to ``True``.

        :param bool chunked:
            If True, urllib3 will send the body using chunked transfer
            encoding. Otherwise, urllib3 will send the body using the standard
            content-length form. Defaults to False.

        :param int body_pos:
            Position to seek to in file-like body in the event of a retry or
            redirect. Typically this won't need to be set because urllib3 will
            auto-populate the value when needed.
        """
        parsed_url = parse_url(url)
        destination_scheme = parsed_url.scheme

        if headers is None:
            headers = self.headers

        if not isinstance(retries, Retry):
            retries = Retry.from_int(retries, redirect=redirect, default=self.retries)

        if release_conn is None:
            release_conn = preload_content

        # Check host
        if assert_same_host and not self.is_same_host(url):
            raise HostChangedError(self, url, retries)

        # Ensure that the URL we're connecting to is properly encoded
        if url.startswith("/"):
            url = to_str(_encode_target(url))
        else:
            url = to_str(parsed_url.url)

        conn = None

        # Track whether `conn` needs to be released before
        # returning/raising/recursing. Update this variable if necessary, and
        # leave `release_conn` constant throughout the function. That way, if
        # the function recurses, the original value of `release_conn` will be
        # passed down into the recursive call, and its value will be respected.
[e2e-predictor] # [e2e-predictor] # See issue #651 [1] for details. [e2e-predictor] # [e2e-predictor] # [1] [e2e-predictor] release_this_conn = release_conn [e2e-predictor] [e2e-predictor] http_tunnel_required = connection_requires_http_tunnel( [e2e-predictor] self.proxy, self.proxy_config, destination_scheme [e2e-predictor] ) [e2e-predictor] [e2e-predictor] # Merge the proxy headers. Only done when not using HTTP CONNECT. We [e2e-predictor] # have to copy the headers dict so we can safely change it without those [e2e-predictor] # changes being reflected in anyone else's copy. [e2e-predictor] if not http_tunnel_required: [e2e-predictor] headers = headers.copy() # type: ignore[attr-defined] [e2e-predictor] headers.update(self.proxy_headers) # type: ignore[union-attr] [e2e-predictor] [e2e-predictor] # Must keep the exception bound to a separate variable or else Python 3 [e2e-predictor] # complains about UnboundLocalError. [e2e-predictor] err = None [e2e-predictor] [e2e-predictor] # Keep track of whether we cleanly exited the except block. This [e2e-predictor] # ensures we do proper cleanup in finally. [e2e-predictor] clean_exit = False [e2e-predictor] [e2e-predictor] # Rewind body position, if needed. Record current position [e2e-predictor] # for future rewinds in the event of a redirect/retry. [e2e-predictor] body_pos = set_file_position(body, body_pos) [e2e-predictor] [e2e-predictor] try: [e2e-predictor] # Request a connection from the queue. [e2e-predictor] timeout_obj = self._get_timeout(timeout) [e2e-predictor] conn = self._get_conn(timeout=pool_timeout) [e2e-predictor] [e2e-predictor] conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] [e2e-predictor] [e2e-predictor] # Is this a closed/new connection that requires CONNECT tunnelling? 
[e2e-predictor] if self.proxy is not None and http_tunnel_required and conn.is_closed: [e2e-predictor] try: [e2e-predictor] self._prepare_proxy(conn) [e2e-predictor] except (BaseSSLError, OSError, SocketTimeout) as e: [e2e-predictor] self._raise_timeout( [e2e-predictor] err=e, url=self.proxy.url, timeout_value=conn.timeout [e2e-predictor] ) [e2e-predictor] raise [e2e-predictor] [e2e-predictor] # If we're going to release the connection in ``finally:``, then [e2e-predictor] # the response doesn't need to know about the connection. Otherwise [e2e-predictor] # it will also try to release it and we'll have a double-release [e2e-predictor] # mess. [e2e-predictor] response_conn = conn if not release_conn else None [e2e-predictor] [e2e-predictor] # Make the request on the HTTPConnection object [e2e-predictor] response = self._make_request( [e2e-predictor] conn, [e2e-predictor] method, [e2e-predictor] url, [e2e-predictor] timeout=timeout_obj, [e2e-predictor] body=body, [e2e-predictor] headers=headers, [e2e-predictor] chunked=chunked, [e2e-predictor] retries=retries, [e2e-predictor] response_conn=response_conn, [e2e-predictor] preload_content=preload_content, [e2e-predictor] decode_content=decode_content, [e2e-predictor] **response_kw, [e2e-predictor] ) [e2e-predictor] [e2e-predictor] # Everything went great! [e2e-predictor] clean_exit = True [e2e-predictor] [e2e-predictor] except EmptyPoolError: [e2e-predictor] # Didn't get a connection from the pool, no need to clean up [e2e-predictor] clean_exit = True [e2e-predictor] release_this_conn = False [e2e-predictor] raise [e2e-predictor] [e2e-predictor] except ( [e2e-predictor] TimeoutError, [e2e-predictor] HTTPException, [e2e-predictor] OSError, [e2e-predictor] ProtocolError, [e2e-predictor] BaseSSLError, [e2e-predictor] SSLError, [e2e-predictor] CertificateError, [e2e-predictor] ProxyError, [e2e-predictor] ) as e: [e2e-predictor] # Discard the connection for these exceptions. 
It will be [e2e-predictor] # replaced during the next _get_conn() call. [e2e-predictor] clean_exit = False [e2e-predictor] new_e: Exception = e [e2e-predictor] if isinstance(e, (BaseSSLError, CertificateError)): [e2e-predictor] new_e = SSLError(e) [e2e-predictor] if isinstance( [e2e-predictor] new_e, [e2e-predictor] ( [e2e-predictor] OSError, [e2e-predictor] NewConnectionError, [e2e-predictor] TimeoutError, [e2e-predictor] SSLError, [e2e-predictor] HTTPException, [e2e-predictor] ), [e2e-predictor] ) and (conn and conn.proxy and not conn.has_connected_to_proxy): [e2e-predictor] new_e = _wrap_proxy_error(new_e, conn.proxy.scheme) [e2e-predictor] elif isinstance(new_e, (OSError, HTTPException)): [e2e-predictor] new_e = ProtocolError("Connection aborted.", new_e) [e2e-predictor] [e2e-predictor] retries = retries.increment( [e2e-predictor] method, url, error=new_e, _pool=self, _stacktrace=sys.exc_info()[2] [e2e-predictor] ) [e2e-predictor] retries.sleep() [e2e-predictor] [e2e-predictor] # Keep track of the error for the retry warning. [e2e-predictor] err = e [e2e-predictor] [e2e-predictor] finally: [e2e-predictor] if not clean_exit: [e2e-predictor] # We hit some kind of exception, handled or otherwise. We need [e2e-predictor] # to throw the connection away unless explicitly told not to. [e2e-predictor] # Close the connection, set the variable to None, and make sure [e2e-predictor] # we put the None back in the pool to avoid leaking it. [e2e-predictor] if conn: [e2e-predictor] conn.close() [e2e-predictor] conn = None [e2e-predictor] release_this_conn = True [e2e-predictor] [e2e-predictor] if release_this_conn: [e2e-predictor] # Put the connection back to be reused. If the connection is [e2e-predictor] # expired then it will be None, which will get replaced with a [e2e-predictor] # fresh connection during _get_conn. 
[e2e-predictor] self._put_conn(conn) [e2e-predictor] [e2e-predictor] if not conn: [e2e-predictor] # Try again [e2e-predictor] log.warning( [e2e-predictor] "Retrying (%r) after connection broken by '%r': %s", retries, err, url [e2e-predictor] ) [e2e-predictor] > return self.urlopen( [e2e-predictor] method, [e2e-predictor] url, [e2e-predictor] body, [e2e-predictor] headers, [e2e-predictor] retries, [e2e-predictor] redirect, [e2e-predictor] assert_same_host, [e2e-predictor] timeout=timeout, [e2e-predictor] pool_timeout=pool_timeout, [e2e-predictor] release_conn=release_conn, [e2e-predictor] chunked=chunked, [e2e-predictor] body_pos=body_pos, [e2e-predictor] preload_content=preload_content, [e2e-predictor] decode_content=decode_content, [e2e-predictor] **response_kw, [e2e-predictor] ) [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:871: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] method = 'POST' [e2e-predictor] url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices' [e2e-predictor] body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io...ry": "128Mi"}}, "runtime": "kserve-predictiveserver", "storageUri": "gs://kfserving-examples/models/lightgbm/iris"}}}}' [e2e-predictor] headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'}) [e2e-predictor] retries = Retry(total=0, connect=None, read=None, redirect=None, status=None) [e2e-predictor] redirect = False, assert_same_host = False, timeout = None, pool_timeout = None [e2e-predictor] release_conn = True, chunked = False, body_pos = None, preload_content = True [e2e-predictor] decode_content = True, response_kw = {} [e2e-predictor] parsed_url = Url(scheme=None, 
auth=None, host=None, port=None, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices', query=None, fragment=None) [e2e-predictor] destination_scheme = None, conn = None, release_this_conn = True [e2e-predictor] http_tunnel_required = False, err = None, clean_exit = False [e2e-predictor] [e2e-predictor] def urlopen( # type: ignore[override] [e2e-predictor] self, [e2e-predictor] method: str, [e2e-predictor] url: str, [e2e-predictor] body: _TYPE_BODY | None = None, [e2e-predictor] headers: typing.Mapping[str, str] | None = None, [e2e-predictor] retries: Retry | bool | int | None = None, [e2e-predictor] redirect: bool = True, [e2e-predictor] assert_same_host: bool = True, [e2e-predictor] timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, [e2e-predictor] pool_timeout: int | None = None, [e2e-predictor] release_conn: bool | None = None, [e2e-predictor] chunked: bool = False, [e2e-predictor] body_pos: _TYPE_BODY_POSITION | None = None, [e2e-predictor] preload_content: bool = True, [e2e-predictor] decode_content: bool = True, [e2e-predictor] **response_kw: typing.Any, [e2e-predictor] ) -> BaseHTTPResponse: [e2e-predictor] """ [e2e-predictor] Get a connection from the pool and perform an HTTP request. This is the [e2e-predictor] lowest level call for making a request, so you'll need to specify all [e2e-predictor] the raw details. [e2e-predictor] [e2e-predictor] .. note:: [e2e-predictor] [e2e-predictor] More commonly, it's appropriate to use a convenience method [e2e-predictor] such as :meth:`request`. [e2e-predictor] [e2e-predictor] .. note:: [e2e-predictor] [e2e-predictor] `release_conn` will only behave as expected if [e2e-predictor] `preload_content=False` because we want to make [e2e-predictor] `preload_content=False` the default behaviour someday soon without [e2e-predictor] breaking backwards compatibility. [e2e-predictor] [e2e-predictor] :param method: [e2e-predictor] HTTP request method (such as GET, POST, PUT, etc.) 
[e2e-predictor] [e2e-predictor] :param url: [e2e-predictor] The URL to perform the request on. [e2e-predictor] [e2e-predictor] :param body: [e2e-predictor] Data to send in the request body, either :class:`str`, :class:`bytes`, [e2e-predictor] an iterable of :class:`str`/:class:`bytes`, or a file-like object. [e2e-predictor] [e2e-predictor] :param headers: [e2e-predictor] Dictionary of custom headers to send, such as User-Agent, [e2e-predictor] If-None-Match, etc. If None, pool headers are used. If provided, [e2e-predictor] these headers completely replace any pool-specific headers. [e2e-predictor] [e2e-predictor] :param retries: [e2e-predictor] Configure the number of retries to allow before raising a [e2e-predictor] :class:`~urllib3.exceptions.MaxRetryError` exception. [e2e-predictor] [e2e-predictor] If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a [e2e-predictor] :class:`~urllib3.util.retry.Retry` object for fine-grained control [e2e-predictor] over different types of retries. [e2e-predictor] Pass an integer number to retry connection errors that many times, [e2e-predictor] but no other types of errors. Pass zero to never retry. [e2e-predictor] [e2e-predictor] If ``False``, then retries are disabled and any exception is raised [e2e-predictor] immediately. Also, instead of raising a MaxRetryError on redirects, [e2e-predictor] the redirect response will be returned. [e2e-predictor] [e2e-predictor] :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. [e2e-predictor] [e2e-predictor] :param redirect: [e2e-predictor] If True, automatically handle redirects (status codes 301, 302, [e2e-predictor] 303, 307, 308). Each redirect counts as a retry. Disabling retries [e2e-predictor] will disable redirect, too. [e2e-predictor] [e2e-predictor] :param assert_same_host: [e2e-predictor] If ``True``, will make sure that the host of the pool requests is [e2e-predictor] consistent else will raise HostChangedError. 
When ``False``, you can [e2e-predictor] use the pool on an HTTP proxy and request foreign hosts. [e2e-predictor] [e2e-predictor] :param timeout: [e2e-predictor] If specified, overrides the default timeout for this one [e2e-predictor] request. It may be a float (in seconds) or an instance of [e2e-predictor] :class:`urllib3.util.Timeout`. [e2e-predictor] [e2e-predictor] :param pool_timeout: [e2e-predictor] If set and the pool is set to block=True, then this method will [e2e-predictor] block for ``pool_timeout`` seconds and raise EmptyPoolError if no [e2e-predictor] connection is available within the time period. [e2e-predictor] [e2e-predictor] :param bool preload_content: [e2e-predictor] If True, the response's body will be preloaded into memory. [e2e-predictor] [e2e-predictor] :param bool decode_content: [e2e-predictor] If True, will attempt to decode the body based on the [e2e-predictor] 'content-encoding' header. [e2e-predictor] [e2e-predictor] :param release_conn: [e2e-predictor] If False, then the urlopen call will not release the connection [e2e-predictor] back into the pool once a response is received (but will release if [e2e-predictor] you read the entire contents of the response such as when [e2e-predictor] `preload_content=True`). This is useful if you're not preloading [e2e-predictor] the response's content immediately. You will need to call [e2e-predictor] ``r.release_conn()`` on the response ``r`` to return the connection [e2e-predictor] back into the pool. If None, it takes the value of ``preload_content`` [e2e-predictor] which defaults to ``True``. [e2e-predictor] [e2e-predictor] :param bool chunked: [e2e-predictor] If True, urllib3 will send the body using chunked transfer [e2e-predictor] encoding. Otherwise, urllib3 will send the body using the standard [e2e-predictor] content-length form. Defaults to False. 
        :param int body_pos:
            Position to seek to in file-like body in the event of a retry or
            redirect. Typically this won't need to be set because urllib3 will
            auto-populate the value when needed.
        """
        parsed_url = parse_url(url)
        destination_scheme = parsed_url.scheme

        if headers is None:
            headers = self.headers

        if not isinstance(retries, Retry):
            retries = Retry.from_int(retries, redirect=redirect, default=self.retries)

        if release_conn is None:
            release_conn = preload_content

        # Check host
        if assert_same_host and not self.is_same_host(url):
            raise HostChangedError(self, url, retries)

        # Ensure that the URL we're connecting to is properly encoded
        if url.startswith("/"):
            url = to_str(_encode_target(url))
        else:
            url = to_str(parsed_url.url)

        conn = None

        # Track whether `conn` needs to be released before
        # returning/raising/recursing. Update this variable if necessary, and
        # leave `release_conn` constant throughout the function. That way, if
        # the function recurses, the original value of `release_conn` will be
        # passed down into the recursive call, and its value will be respected.
        #
        # See issue #651 [1] for details.
        #
        # [1]
        release_this_conn = release_conn

        http_tunnel_required = connection_requires_http_tunnel(
            self.proxy, self.proxy_config, destination_scheme
        )

        # Merge the proxy headers. Only done when not using HTTP CONNECT. We
        # have to copy the headers dict so we can safely change it without those
        # changes being reflected in anyone else's copy.
        if not http_tunnel_required:
            headers = headers.copy()  # type: ignore[attr-defined]
            headers.update(self.proxy_headers)  # type: ignore[union-attr]

        # Must keep the exception bound to a separate variable or else Python 3
        # complains about UnboundLocalError.
        err = None

        # Keep track of whether we cleanly exited the except block. This
        # ensures we do proper cleanup in finally.
        clean_exit = False

        # Rewind body position, if needed. Record current position
        # for future rewinds in the event of a redirect/retry.
        body_pos = set_file_position(body, body_pos)

        try:
            # Request a connection from the queue.
            timeout_obj = self._get_timeout(timeout)
            conn = self._get_conn(timeout=pool_timeout)

            conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]

            # Is this a closed/new connection that requires CONNECT tunnelling?
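The connection-ownership rules in the listing above (``release_conn`` defaults to ``preload_content``, and a connection the pool will release must not also be owned by the response) can be sketched as follows. ``resolve_release`` is a hypothetical condensation, not part of urllib3:

```python
# Hypothetical condensation of the ownership rules in urlopen():
# release_conn defaults to preload_content, and when the pool will
# release the connection itself the response must not own it, or the
# connection would be double-released.
def resolve_release(preload_content=True, release_conn=None):
    if release_conn is None:
        release_conn = preload_content
    response_owns_conn = not release_conn
    return release_conn, response_owns_conn

print(resolve_release())       # pool releases; response does not own conn
print(resolve_release(False))  # caller must call r.release_conn() later
```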
            if self.proxy is not None and http_tunnel_required and conn.is_closed:
                try:
                    self._prepare_proxy(conn)
                except (BaseSSLError, OSError, SocketTimeout) as e:
                    self._raise_timeout(
                        err=e, url=self.proxy.url, timeout_value=conn.timeout
                    )
                    raise

            # If we're going to release the connection in ``finally:``, then
            # the response doesn't need to know about the connection. Otherwise
            # it will also try to release it and we'll have a double-release
            # mess.
            response_conn = conn if not release_conn else None

            # Make the request on the HTTPConnection object
            response = self._make_request(
                conn,
                method,
                url,
                timeout=timeout_obj,
                body=body,
                headers=headers,
                chunked=chunked,
                retries=retries,
                response_conn=response_conn,
                preload_content=preload_content,
                decode_content=decode_content,
                **response_kw,
            )

            # Everything went great!
            clean_exit = True

        except EmptyPoolError:
            # Didn't get a connection from the pool, no need to clean up
            clean_exit = True
            release_this_conn = False
            raise

        except (
            TimeoutError,
            HTTPException,
            OSError,
            ProtocolError,
            BaseSSLError,
            SSLError,
            CertificateError,
            ProxyError,
        ) as e:
            # Discard the connection for these exceptions. It will be
            # replaced during the next _get_conn() call.
            clean_exit = False
            new_e: Exception = e
            if isinstance(e, (BaseSSLError, CertificateError)):
                new_e = SSLError(e)
            if isinstance(
                new_e,
                (
                    OSError,
                    NewConnectionError,
                    TimeoutError,
                    SSLError,
                    HTTPException,
                ),
            ) and (conn and conn.proxy and not conn.has_connected_to_proxy):
                new_e = _wrap_proxy_error(new_e, conn.proxy.scheme)
            elif isinstance(new_e, (OSError, HTTPException)):
                new_e = ProtocolError("Connection aborted.", new_e)

>           retries = retries.increment(
                method, url, error=new_e, _pool=self, _stacktrace=sys.exc_info()[2]
            )

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:841:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = Retry(total=0, connect=None, read=None, redirect=None, status=None)
method = 'POST'
url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
response = None
error = NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.c...a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")
_pool =
_stacktrace =

    def increment(
        self,
        method: str | None = None,
        url: str | None = None,
        response: BaseHTTPResponse | None = None,
        error: Exception | None = None,
        _pool: ConnectionPool | None = None,
        _stacktrace: TracebackType | None = None,
    ) -> Self:
        """Return a new Retry object with incremented retry counters.

        :param response: A response object, or None, if the server did not
            return a response.
        :type response: :class:`~urllib3.response.BaseHTTPResponse`
        :param Exception error: An error encountered during the request, or
            None if the response was received successfully.

        :return: A new ``Retry`` object.
        """
        if self.total is False and error:
            # Disabled, indicate to re-raise the error.
            raise reraise(type(error), error, _stacktrace)

        total = self.total
        if total is not None:
            total -= 1

        connect = self.connect
        read = self.read
        redirect = self.redirect
        status_count = self.status
        other = self.other
        cause = "unknown"
        status = None
        redirect_location = None

        if error and self._is_connection_error(error):
            # Connect retry?
            if connect is False:
                raise reraise(type(error), error, _stacktrace)
            elif connect is not None:
                connect -= 1

        elif error and self._is_read_error(error):
            # Read retry?
            if read is False or method is None or not self._is_method_retryable(method):
                raise reraise(type(error), error, _stacktrace)
            elif read is not None:
                read -= 1

        elif error:
            # Other retry?
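The counter bookkeeping in `increment()` explains the progression in the captured log from `Retry(total=2)` down to `total=0` and the final `MaxRetryError`. A stripped-down sketch (`RetryState` is hypothetical; the real method also tracks read/redirect/status/other counters and request history):

```python
# Hypothetical, stripped-down model of Retry.increment(): each failed
# attempt returns a new state with decremented counters, and raises
# once any tracked counter drops below zero.
class RetryState:
    def __init__(self, total, connect=None):
        self.total = total
        self.connect = connect

    def is_exhausted(self):
        counts = [c for c in (self.total, self.connect) if c is not None]
        return bool(counts) and min(counts) < 0

    def increment(self, error=None):
        total = self.total - 1 if self.total is not None else None
        connect = self.connect
        if error is not None and connect is not None:
            connect -= 1  # connection errors also consume the connect budget
        new = RetryState(total, connect)
        if new.is_exhausted():
            raise RuntimeError("Max retries exceeded")  # stand-in for MaxRetryError
        return new

# A total=3 budget survives three connection errors, then gives up on
# the fourth -- matching the total=2, total=1, total=0 warnings below.
state = RetryState(total=3)
for _ in range(3):
    state = state.increment(error=OSError("name resolution failed"))
print(state.total)  # 0
```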
            if other is not None:
                other -= 1

        elif response and response.get_redirect_location():
            # Redirect retry?
            if redirect is not None:
                redirect -= 1
            cause = "too many redirects"
            response_redirect_location = response.get_redirect_location()
            if response_redirect_location:
                redirect_location = response_redirect_location
            status = response.status

        else:
            # Incrementing because of a server error like a 500 in
            # status_forcelist and the given method is in the allowed_methods
            cause = ResponseError.GENERIC_ERROR
            if response and response.status:
                if status_count is not None:
                    status_count -= 1
                cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
                status = response.status

        history = self.history + (
            RequestHistory(method, url, error, status, redirect_location),
        )

        new_retry = self.new(
            total=total,
            connect=connect,
            read=read,
            redirect=redirect,
            status=status_count,
            other=other,
            history=history,
        )

        if new_retry.is_exhausted():
            reason = error or ResponseError(cause)
>           raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
E           urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Max retries exceeded with url: /apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices (Caused by NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)"))

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError
------------------------------ Captured log call -------------------------------
WARNING  urllib3.connectionpool:connectionpool.py:868 Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")': /apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices
WARNING  urllib3.connectionpool:connectionpool.py:868 Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")': /apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices
WARNING  urllib3.connectionpool:connectionpool.py:868 Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")': /apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices
_____________________________ test_scheduler_name ______________________________
[gw0] linux -- Python 3.11.13 /workspace/source/python/kserve/.venv/bin/python

self =

    def _new_conn(self) -> socket.socket:
        """Establish a socket connection and set nodelay settings on it.

        :return: New socket connection.
        """
        try:
>           sock = connection.create_connection(
                (self._dns_host, self.port),
                self.timeout,
                source_address=self.source_address,
                socket_options=self.socket_options,
            )

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connection.py:204:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

address = ('a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', 6443)
timeout = None, source_address = None, socket_options = [(6, 1, 1)]

    def create_connection(
        address: tuple[str, int],
        timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
        source_address: tuple[str, int] | None = None,
        socket_options: _TYPE_SOCKET_OPTIONS | None = None,
    ) -> socket.socket:
        """Connect to *address* and return the socket object.

        Convenience function. Connect to *address* (a 2-tuple ``(host,
        port)``) and return the socket object. Passing the optional
        *timeout* parameter will set the timeout on the socket instance
        before attempting to connect. If no *timeout* is supplied, the
        global default timeout setting returned by :func:`socket.getdefaulttimeout`
        is used. If *source_address* is set it must be a tuple of (host, port)
        for the socket to bind as a source address before making the connection.
        An host of '' or port 0 tells the OS to use the default.
        """

        host, port = address
        if host.startswith("["):
            host = host.strip("[]")
        err = None

        # Using the value from allowed_gai_family() in the context of getaddrinfo lets
        # us select whether to work with IPv4 DNS records, IPv6 records, or both.
        # The original create_connection function always returns all records.
        family = allowed_gai_family()

        try:
            host.encode("idna")
        except UnicodeError:
            raise LocationParseError(f"'{host}', label empty or too long") from None

>       for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/util/connection.py:60:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

host = 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com'
port = 6443, family =
type = , proto = 0, flags = 0

    def getaddrinfo(host, port, family=0, type=0, proto=0, flags=0):
        """Resolve host and port into list of address info entries.

        Translate the host/port argument into a sequence of 5-tuples that contain
        all the necessary arguments for creating a socket connected to that service.
        host is a domain name, a string representation of an IPv4/v6 address or
        None. port is a string service name such as 'http', a numeric port number or
        None. By passing None as the value of host and port, you can pass NULL to
        the underlying C API.

        The family, type and proto arguments can be optionally specified in order to
        narrow the list of addresses returned. Passing zero as a value for each of
        these arguments selects the full range of results.
        """
        # We override this function since we want to translate the numeric family
        # and socket type values to enum constants.
        addrlist = []
>       for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
E       socket.gaierror: [Errno -2] Name or service not known

/usr/lib64/python3.11/socket.py:974: gaierror

The above exception was the direct cause of the following exception:

self =
method = 'POST'
url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"annotations": {"serving.kserve...., "requests": {"cpu": "50m", "memory": "128Mi"}}, "storageUri": "gs://kfserving-examples/models/sklearn/1.0/model"}}}}'
headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'})
retries = Retry(total=0, connect=None, read=None, redirect=None, status=None)
redirect = False, assert_same_host = False, timeout = None, pool_timeout = None
release_conn = True, chunked = False, body_pos = None, preload_content = True
decode_content = True, response_kw = {}
parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices', query=None, fragment=None)
destination_scheme = None, conn = None, release_this_conn = True
http_tunnel_required = False, err = None, clean_exit = False

    def urlopen(  # type: ignore[override]
        self,
        method: str,
        url: str,
        body: _TYPE_BODY | None = None,
        headers: typing.Mapping[str, str] | None = None,
        retries: Retry | bool | int | None = None,
        redirect: bool = True,
        assert_same_host: bool = True,
        timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
        pool_timeout: int | None = None,
        release_conn: bool | None = None,
        chunked: bool = False,
        body_pos: _TYPE_BODY_POSITION | None = None,
        preload_content: bool = True,
        decode_content: bool = True,
        **response_kw: typing.Any,
    ) -> BaseHTTPResponse:
        """
        Get a connection from the pool and perform an HTTP request. This is the
        lowest level call for making a request, so you'll need to specify all
        the raw details.

        .. note::

           More commonly, it's appropriate to use a convenience method
           such as :meth:`request`.

        .. note::

           `release_conn` will only behave as expected if
           `preload_content=False` because we want to make
           `preload_content=False` the default behaviour someday soon without
           breaking backwards compatibility.
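Both failing tests die in `socket.getaddrinfo` on the cluster's ELB hostname, so the root cause is name resolution from the test pod (a deleted or expired load-balancer DNS record, or a broken resolver), not the request logic itself. The failing check can be reproduced outside pytest with an ad-hoc probe; `can_resolve` is a hypothetical helper, not part of the test suite:

```python
import socket

# Ad-hoc DNS probe matching the failing call inside urllib3's
# create_connection(): getaddrinfo either returns address records or
# raises socket.gaierror ([Errno -2] Name or service not known above).
def can_resolve(host, port=6443):
    try:
        socket.getaddrinfo(host, port, type=socket.SOCK_STREAM)
        return True
    except socket.gaierror:
        return False

# "localhost" resolves via /etc/hosts even without external DNS; the
# ELB hostname from the log would return False once the LB is gone.
print(can_resolve("localhost"))
```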
            if self.proxy is not None and http_tunnel_required and conn.is_closed:
                try:
                    self._prepare_proxy(conn)
                except (BaseSSLError, OSError, SocketTimeout) as e:
                    self._raise_timeout(
                        err=e, url=self.proxy.url, timeout_value=conn.timeout
                    )
                    raise

            # If we're going to release the connection in ``finally:``, then
            # the response doesn't need to know about the connection. Otherwise
            # it will also try to release it and we'll have a double-release
            # mess.
            response_conn = conn if not release_conn else None

            # Make the request on the HTTPConnection object
>           response = self._make_request(
                conn,
                method,
                url,
                timeout=timeout_obj,
                body=body,
                headers=headers,
                chunked=chunked,
                retries=retries,
                response_conn=response_conn,
                preload_content=preload_content,
                decode_content=decode_content,
                **response_kw,
            )

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:787:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
conn =
method = 'POST'
url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"annotations": {"serving.kserve...., "requests": {"cpu": "50m", "memory": "128Mi"}}, "storageUri": "gs://kfserving-examples/models/sklearn/1.0/model"}}}}'
headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'})
retries = Retry(total=0, connect=None, read=None, redirect=None, status=None)
timeout = Timeout(connect=None, read=None, total=None), chunked = False
response_conn = None, preload_content = True, decode_content = True
enforce_content_length = True

    def _make_request(
        self,
        conn: BaseHTTPConnection,
        method: str,
        url: str,
        body: _TYPE_BODY | None = None,
        headers: typing.Mapping[str, str] | None = None,
        retries: Retry | None = None,
        timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
        chunked: bool = False,
        response_conn: BaseHTTPConnection | None = None,
        preload_content: bool = True,
        decode_content: bool = True,
        enforce_content_length: bool = True,
    ) -> BaseHTTPResponse:
        """
        Perform a request on a given urllib connection object taken from our
        pool.

        :param conn:
            a connection from one of our connection pools

        :param method:
            HTTP request method (such as GET, POST, PUT, etc.)

        :param url:
            The URL to perform the request on.

        :param body:
            Data to send in the request body, either :class:`str`, :class:`bytes`,
            an iterable of :class:`str`/:class:`bytes`, or a file-like object.

        :param headers:
            Dictionary of custom headers to send, such as User-Agent,
            If-None-Match, etc. If None, pool headers are used. If provided,
            these headers completely replace any pool-specific headers.

        :param retries:
            Configure the number of retries to allow before raising a
            :class:`~urllib3.exceptions.MaxRetryError` exception.

            Pass ``None`` to retry until you receive a response. Pass a
            :class:`~urllib3.util.retry.Retry` object for fine-grained control
            over different types of retries.
            Pass an integer number to retry connection errors that many times,
            but no other types of errors. Pass zero to never retry.

            If ``False``, then retries are disabled and any exception is raised
            immediately. Also, instead of raising a MaxRetryError on redirects,
            the redirect response will be returned.

        :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.

        :param timeout:
            If specified, overrides the default timeout for this one
            request. It may be a float (in seconds) or an instance of
            :class:`urllib3.util.Timeout`.

        :param chunked:
            If True, urllib3 will send the body using chunked transfer
            encoding. Otherwise, urllib3 will send the body using the standard
            content-length form. Defaults to False.

        :param response_conn:
            Set this to ``None`` if you will handle releasing the connection or
            set the connection to have the response release it.

        :param preload_content:
            If True, the response's body will be preloaded during construction.
[e2e-predictor] [e2e-predictor] :param decode_content: [e2e-predictor] If True, will attempt to decode the body based on the [e2e-predictor] 'content-encoding' header. [e2e-predictor] [e2e-predictor] :param enforce_content_length: [e2e-predictor] Enforce content length checking. Body returned by server must match [e2e-predictor] value of Content-Length header, if present. Otherwise, raise error. [e2e-predictor] """ [e2e-predictor] self.num_requests += 1 [e2e-predictor] [e2e-predictor] timeout_obj = self._get_timeout(timeout) [e2e-predictor] timeout_obj.start_connect() [e2e-predictor] conn.timeout = Timeout.resolve_default_timeout(timeout_obj.connect_timeout) [e2e-predictor] [e2e-predictor] try: [e2e-predictor] # Trigger any extra validation we need to do. [e2e-predictor] try: [e2e-predictor] self._validate_conn(conn) [e2e-predictor] except (SocketTimeout, BaseSSLError) as e: [e2e-predictor] self._raise_timeout(err=e, url=url, timeout_value=conn.timeout) [e2e-predictor] raise [e2e-predictor] [e2e-predictor] # _validate_conn() starts the connection to an HTTPS proxy [e2e-predictor] # so we need to wrap errors with 'ProxyError' here too. 
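The `retries` and `timeout` reprs in the frame above are worth decoding: as a minimal sketch (assuming urllib3 v2, the version this venv uses), `Retry(total=0, ...)` means the kubernetes client makes exactly one attempt and re-raises any failure immediately, and `Timeout(connect=None, ...)` falls back to the socket default:

```python
# Sketch: reconstructing the Retry/Timeout objects shown in the frame above.
# Retry(total=0) leaves zero retries after the first attempt, so a single
# failed DNS lookup surfaces straight away instead of being retried.
from urllib3.util.retry import Retry
from urllib3.util.timeout import Timeout

retries = Retry(total=0, connect=None, read=None, redirect=None, status=None)
timeout = Timeout(connect=None, read=None, total=None)

print(retries.total)            # 0
print(timeout.connect_timeout)  # None (socket default applies)
```

This explains why the error below is not retried even though it is a transient-looking connection failure.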
        except (
            OSError,
            NewConnectionError,
            TimeoutError,
            BaseSSLError,
            CertificateError,
            SSLError,
        ) as e:
            new_e: Exception = e
            if isinstance(e, (BaseSSLError, CertificateError)):
                new_e = SSLError(e)
            # If the connection didn't successfully connect to it's proxy
            # then there
            if isinstance(
                new_e, (OSError, NewConnectionError, TimeoutError, SSLError)
            ) and (conn and conn.proxy and not conn.has_connected_to_proxy):
                new_e = _wrap_proxy_error(new_e, conn.proxy.scheme)
>           raise new_e

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:488:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
conn =
method = 'POST'
url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"annotations": {"serving.kserve...., "requests": {"cpu": "50m", "memory": "128Mi"}}, "storageUri": "gs://kfserving-examples/models/sklearn/1.0/model"}}}}'
headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'})
retries = Retry(total=0, connect=None, read=None, redirect=None, status=None)
timeout = Timeout(connect=None, read=None, total=None), chunked = False
response_conn = None, preload_content = True, decode_content = True
enforce_content_length = True

    def _make_request(
        self,
        conn: BaseHTTPConnection,
        method: str,
        url: str,
        body: _TYPE_BODY | None = None,
        headers: typing.Mapping[str, str] | None = None,
        retries: Retry | None = None,
        timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
        chunked: bool = False,
        response_conn: BaseHTTPConnection | None = None,
        preload_content: bool = True,
        decode_content: bool = True,
        enforce_content_length: bool = True,
    ) -> BaseHTTPResponse:
        """
        Perform a request on a given urllib connection object taken from our
        pool.

        :param conn:
            a connection from one of our connection pools

        :param method:
            HTTP request method (such as GET, POST, PUT, etc.)

        :param url:
            The URL to perform the request on.

        :param body:
            Data to send in the request body, either :class:`str`, :class:`bytes`,
            an iterable of :class:`str`/:class:`bytes`, or a file-like object.

        :param headers:
            Dictionary of custom headers to send, such as User-Agent,
            If-None-Match, etc. If None, pool headers are used. If provided,
            these headers completely replace any pool-specific headers.

        :param retries:
            Configure the number of retries to allow before raising a
            :class:`~urllib3.exceptions.MaxRetryError` exception.

            Pass ``None`` to retry until you receive a response. Pass a
            :class:`~urllib3.util.retry.Retry` object for fine-grained control
            over different types of retries.
            Pass an integer number to retry connection errors that many times,
            but no other types of errors. Pass zero to never retry.

            If ``False``, then retries are disabled and any exception is raised
            immediately. Also, instead of raising a MaxRetryError on redirects,
            the redirect response will be returned.

        :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.

        :param timeout:
            If specified, overrides the default timeout for this one
            request. It may be a float (in seconds) or an instance of
            :class:`urllib3.util.Timeout`.

        :param chunked:
            If True, urllib3 will send the body using chunked transfer
            encoding. Otherwise, urllib3 will send the body using the standard
            content-length form. Defaults to False.

        :param response_conn:
            Set this to ``None`` if you will handle releasing the connection or
            set the connection to have the response release it.

        :param preload_content:
            If True, the response's body will be preloaded during construction.

        :param decode_content:
            If True, will attempt to decode the body based on the
            'content-encoding' header.

        :param enforce_content_length:
            Enforce content length checking. Body returned by server must match
            value of Content-Length header, if present. Otherwise, raise error.
        """
        self.num_requests += 1

        timeout_obj = self._get_timeout(timeout)
        timeout_obj.start_connect()
        conn.timeout = Timeout.resolve_default_timeout(timeout_obj.connect_timeout)

        try:
            # Trigger any extra validation we need to do.
            try:
>               self._validate_conn(conn)

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:464:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
conn =

    def _validate_conn(self, conn: BaseHTTPConnection) -> None:
        """
        Called right before a request is made, after the socket is created.
        """
        super()._validate_conn(conn)

        # Force connect early to allow us to validate the connection.
        if conn.is_closed:
>           conn.connect()

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:1093:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =

    def connect(self) -> None:
        # Today we don't need to be doing this step before the /actual/ socket
        # connection, however in the future we'll need to decide whether to
        # create a new socket or re-use an existing "shared" socket as a part
        # of the HTTP/2 handshake dance.
        if self._tunnel_host is not None and self._tunnel_port is not None:
            probe_http2_host = self._tunnel_host
            probe_http2_port = self._tunnel_port
        else:
            probe_http2_host = self.host
            probe_http2_port = self.port

        # Check if the target origin supports HTTP/2.
        # If the value comes back as 'None' it means that the current thread
        # is probing for HTTP/2 support. Otherwise, we're waiting for another
        # probe to complete, or we get a value right away.
        target_supports_http2: bool | None
        if "h2" in ssl_.ALPN_PROTOCOLS:
            target_supports_http2 = http2_probe.acquire_and_get(
                host=probe_http2_host, port=probe_http2_port
            )
        else:
            # If HTTP/2 isn't going to be offered it doesn't matter if
            # the target supports HTTP/2. Don't want to make a probe.
            target_supports_http2 = False

        if self._connect_callback is not None:
            self._connect_callback(
                "before connect",
                thread_id=threading.get_ident(),
                target_supports_http2=target_supports_http2,
            )

        try:
            sock: socket.socket | ssl.SSLSocket
>           self.sock = sock = self._new_conn()

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connection.py:759:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =

    def _new_conn(self) -> socket.socket:
        """Establish a socket connection and set nodelay settings on it.

        :return: New socket connection.
        """
        try:
            sock = connection.create_connection(
                (self._dns_host, self.port),
                self.timeout,
                source_address=self.source_address,
                socket_options=self.socket_options,
            )
        except socket.gaierror as e:
>           raise NameResolutionError(self.host, self, e) from e
E           urllib3.exceptions.NameResolutionError: HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connection.py:211: NameResolutionError

The above exception was the direct cause of the following exception:

rest_v1_client =

    @pytest.mark.kserve_on_openshift
    @pytest.mark.asyncio(scope="session")
    async def test_scheduler_name(rest_v1_client):
        scheduler_name = "kserve-scheduler"
        service_name = "isvc-sklearn-scheduler"
        logger.info("Creating InferenceService %s", service_name)

        predictor = V1beta1PredictorSpec(
            scheduler_name=scheduler_name,  # This scheduler doesn't exist, but pods should still be created
            min_replicas=1,
            sklearn=V1beta1SKLearnSpec(
                storage_uri="gs://kfserving-examples/models/sklearn/1.0/model",
                resources=V1ResourceRequirements(
                    requests={"cpu": "50m", "memory": "128Mi"},
                    limits={"cpu": "100m", "memory": "256Mi"},
                ),
            ),
        )

        isvc = V1beta1InferenceService(
            api_version=constants.KSERVE_V1BETA1,
            kind=constants.KSERVE_KIND_INFERENCESERVICE,
            metadata=client.V1ObjectMeta(
                name=service_name,
                namespace=KSERVE_TEST_NAMESPACE,
                annotations={
                    "serving.kserve.io/autoscalerClass": "none"  # Adding autoscaler annotation
                },
            ),
            spec=V1beta1InferenceServiceSpec(predictor=predictor),
        )

>       kserve_client.create(isvc)

predictor/test_scheduler_name.py:79:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
inferenceservice = {'api_version': 'serving.kserve.io/v1beta1',
 'kind': 'InferenceService',
 'metadata': {'annotations': {'serving.kserv... 'worker_spec': None,
             'xgboost': None},
 'transformer': None},
 'status': None}
namespace = 'kserve-ci-e2e-test', watch = False, timeout_seconds = 600

    def create(
        self, inferenceservice, namespace=None, watch=False, timeout_seconds=600
    ):  # pylint:disable=inconsistent-return-statements
        """
        Create the inference service
        :param inferenceservice: inference service object
        :param namespace: defaults to current or default namespace
        :param watch: True to watch the created service until timeout elapsed or status is ready
        :param timeout_seconds: timeout seconds for watch, default to 600s
        :return: created inference service
        """

        version = inferenceservice.api_version.split("/")[1]

        if namespace is None:
            namespace = utils.get_isvc_namespace(inferenceservice)

        try:
>           outputs = self.api_instance.create_namespaced_custom_object(
                constants.KSERVE_GROUP,
                version,
                namespace,
                constants.KSERVE_PLURAL_INFERENCESERVICE,
                inferenceservice,
            )

../../python/kserve/kserve/api/kserve_client.py:145:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
group = 'serving.kserve.io', version = 'v1beta1'
namespace = 'kserve-ci-e2e-test', plural = 'inferenceservices'
body = {'api_version': 'serving.kserve.io/v1beta1',
 'kind': 'InferenceService',
 'metadata': {'annotations': {'serving.kserv... 'worker_spec': None,
             'xgboost': None},
 'transformer': None},
 'status': None}
kwargs = {'_return_http_data_only': True}

    def create_namespaced_custom_object(self, group, version, namespace, plural, body, **kwargs):  # noqa: E501
        """create_namespaced_custom_object  # noqa: E501

        Creates a namespace scoped Custom object  # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.create_namespaced_custom_object(group, version, namespace, plural, body, async_req=True)
        >>> result = thread.get()

        :param async_req bool: execute request asynchronously
        :param str group: The custom resource's group name (required)
        :param str version: The custom resource's version (required)
        :param str namespace: The custom resource's namespace (required)
        :param str plural: The custom resource's plural name. For TPRs this would be lowercase plural kind. (required)
        :param object body: The JSON schema of the Resource to create. (required)
        :param str pretty: If 'true', then the output is pretty printed.
        :param str dry_run: When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed
        :param str field_manager: fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint.
        :param str field_validation: fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. (optional)
        :param _preload_content: if False, the urllib3.HTTPResponse object will
                                 be returned without reading/decoding response
                                 data. Default is True.
        :param _request_timeout: timeout setting for this request. If one
                                 number provided, it will be total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
        :return: object
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
>       return self.create_namespaced_custom_object_with_http_info(group, version, namespace, plural, body, **kwargs)  # noqa: E501

../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api/custom_objects_api.py:231:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
group = 'serving.kserve.io', version = 'v1beta1'
namespace = 'kserve-ci-e2e-test', plural = 'inferenceservices'
body = {'api_version': 'serving.kserve.io/v1beta1',
 'kind': 'InferenceService',
 'metadata': {'annotations': {'serving.kserv... 'worker_spec': None,
             'xgboost': None},
 'transformer': None},
 'status': None}
kwargs = {'_return_http_data_only': True}
local_var_params = {'_return_http_data_only': True, 'all_params': ['group', 'version', 'namespace', 'plural', 'body', 'pretty', ...], 'au... 'worker_spec': None,
 'xgboost': None},
 'transformer': None},
 'status': None}, ...}
all_params = ['group', 'version', 'namespace', 'plural', 'body', 'pretty', ...]
key = '_return_http_data_only', val = True, collection_formats = {}
path_params = {'group': 'serving.kserve.io', 'namespace': 'kserve-ci-e2e-test', 'plural': 'inferenceservices', 'version': 'v1beta1'}
query_params = []

    def create_namespaced_custom_object_with_http_info(self, group, version, namespace, plural, body, **kwargs):  # noqa: E501
        """create_namespaced_custom_object  # noqa: E501

        Creates a namespace scoped Custom object  # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.create_namespaced_custom_object_with_http_info(group, version, namespace, plural, body, async_req=True)
        >>> result = thread.get()

        :param async_req bool: execute request asynchronously
        :param str group: The custom resource's group name (required)
        :param str version: The custom resource's version (required)
        :param str namespace: The custom resource's namespace (required)
        :param str plural: The custom resource's plural name. For TPRs this would be lowercase plural kind. (required)
        :param object body: The JSON schema of the Resource to create. (required)
        :param str pretty: If 'true', then the output is pretty printed.
        :param str dry_run: When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed
        :param str field_manager: fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint.
        :param str field_validation: fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. (optional)
        :param _return_http_data_only: response data without head status code
                                       and headers
        :param _preload_content: if False, the urllib3.HTTPResponse object will
                                 be returned without reading/decoding response
                                 data. Default is True.
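The root-cause `NameResolutionError` above is just a failed `getaddrinfo()` lookup, wrapped by urllib3. As a hedged stdlib-only sketch (the `resolves_ok` helper below is hypothetical, not part of kserve or urllib3; the hostname is the API server host from this log), one can check from inside the test pod whether the name resolves at all:

```python
# Sketch: reproduce the failing step in isolation. urllib3's _new_conn()
# raises NameResolutionError when socket.getaddrinfo() raises gaierror,
# which is exactly what happened for the cluster's ELB hostname here.
import socket

def resolves_ok(host: str, port: int = 443) -> bool:
    """Return True if `host` resolves to at least one address."""
    try:
        socket.getaddrinfo(host, port)
        return True
    except socket.gaierror:
        # The same error class urllib3 wraps into NameResolutionError.
        return False

# resolves_ok("a26447624add44c5ea85e8f759399a3a-...elb.us-east-1.amazonaws.com", 6443)
# returned the equivalent of False during this run; a name that the resolver
# knows (e.g. "localhost") returns True.
print(resolves_ok("localhost"))
```

Since the client runs with `Retry(total=0)`, this single lookup failure aborts the test immediately; a deleted or not-yet-propagated ELB DNS record for the ephemeral cluster would produce exactly this signature.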
        :param _request_timeout: timeout setting for this request. If one
                                 number provided, it will be total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
        :return: tuple(object, status_code(int), headers(HTTPHeaderDict))
                 If the method is called asynchronously,
                 returns the request thread.
        """

        local_var_params = locals()

        all_params = [
            'group',
            'version',
            'namespace',
            'plural',
            'body',
            'pretty',
            'dry_run',
            'field_manager',
            'field_validation'
        ]
        all_params.extend(
            [
                'async_req',
                '_return_http_data_only',
                '_preload_content',
                '_request_timeout'
            ]
        )

        for key, val in six.iteritems(local_var_params['kwargs']):
            if key not in all_params:
                raise ApiTypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method create_namespaced_custom_object" % key
                )
            local_var_params[key] = val
        del local_var_params['kwargs']
        # verify the required parameter 'group' is set
        if self.api_client.client_side_validation and ('group' not in local_var_params or  # noqa: E501
                                                       local_var_params['group'] is None):  # noqa: E501
            raise ApiValueError("Missing the required parameter `group` when calling `create_namespaced_custom_object`")  # noqa: E501
        # verify the required parameter 'version' is set
        if self.api_client.client_side_validation and ('version' not in local_var_params or  # noqa: E501
                                                       local_var_params['version'] is None):  # noqa: E501
            raise ApiValueError("Missing the required parameter `version` when calling `create_namespaced_custom_object`")  # noqa: E501
        # verify the required parameter 'namespace' is set
        if self.api_client.client_side_validation and ('namespace' not in local_var_params or  # noqa: E501
                                                       local_var_params['namespace'] is None):  # noqa: E501
            raise ApiValueError("Missing the required parameter `namespace` when calling `create_namespaced_custom_object`")  # noqa: E501
        # verify the required parameter 'plural' is set
        if self.api_client.client_side_validation and ('plural' not in local_var_params or  # noqa: E501
                                                       local_var_params['plural'] is None):  # noqa: E501
            raise ApiValueError("Missing the required parameter `plural` when calling `create_namespaced_custom_object`")  # noqa: E501
        # verify the required parameter 'body' is set
        if self.api_client.client_side_validation and ('body' not in local_var_params or  # noqa: E501
                                                       local_var_params['body'] is None):  # noqa: E501
            raise ApiValueError("Missing the required parameter `body` when calling `create_namespaced_custom_object`")  # noqa: E501

        collection_formats = {}

        path_params = {}
        if 'group' in local_var_params:
            path_params['group'] = local_var_params['group']  # noqa: E501
        if 'version' in local_var_params:
            path_params['version'] = local_var_params['version']  # noqa: E501
        if 'namespace' in local_var_params:
            path_params['namespace'] = local_var_params['namespace']  # noqa: E501
        if 'plural' in local_var_params:
            path_params['plural'] = local_var_params['plural']  # noqa: E501
        query_params = []
        if 'pretty' in local_var_params and local_var_params['pretty'] is not None:  # noqa: E501
            query_params.append(('pretty', local_var_params['pretty']))  # noqa: E501
        if 'dry_run' in local_var_params and local_var_params['dry_run'] is not None:  # noqa: E501
            query_params.append(('dryRun', local_var_params['dry_run']))  # noqa: E501
        if 'field_manager' in local_var_params and local_var_params['field_manager'] is not None:  # noqa: E501
            query_params.append(('fieldManager', local_var_params['field_manager']))  # noqa: E501
        if 'field_validation' in local_var_params and local_var_params['field_validation'] is not None:  # noqa: E501
            query_params.append(('fieldValidation', local_var_params['field_validation']))  # noqa: E501

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None
        if 'body' in local_var_params:
            body_params = local_var_params['body']
        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.select_header_accept(
            ['application/json'])  # noqa: E501

        # Authentication setting
        auth_settings = ['BearerToken']  # noqa: E501

>       return self.api_client.call_api(
            '/apis/{group}/{version}/namespaces/{namespace}/{plural}', 'POST',
            path_params,
            query_params,
            header_params,
            body=body_params,
            post_params=form_params,
            files=local_var_files,
            response_type='object',  # noqa: E501
            auth_settings=auth_settings,
            async_req=local_var_params.get('async_req'),
            _return_http_data_only=local_var_params.get('_return_http_data_only'),  # noqa: E501
            _preload_content=local_var_params.get('_preload_content', True),
            _request_timeout=local_var_params.get('_request_timeout'),
            collection_formats=collection_formats)

../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api/custom_objects_api.py:354:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
resource_path = '/apis/{group}/{version}/namespaces/{namespace}/{plural}'
method = 'POST'
path_params = {'group': 'serving.kserve.io', 'namespace': 'kserve-ci-e2e-test', 'plural': 'inferenceservices', 'version': 'v1beta1'}
query_params = []
header_params = {'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
body = {'api_version': 'serving.kserve.io/v1beta1',
 'kind': 'InferenceService',
 'metadata': {'annotations': {'serving.kserv... 'worker_spec': None,
             'xgboost': None},
 'transformer': None},
 'status': None}
post_params = [], files = {}, response_type = 'object'
auth_settings = ['BearerToken'], async_req = None, _return_http_data_only = True
collection_formats = {}, _preload_content = True, _request_timeout = None
_host = None

    def call_api(self, resource_path, method,
                 path_params=None, query_params=None, header_params=None,
                 body=None, post_params=None, files=None,
                 response_type=None, auth_settings=None, async_req=None,
                 _return_http_data_only=None, collection_formats=None,
                 _preload_content=True, _request_timeout=None, _host=None):
        """Makes the HTTP request (synchronous) and returns deserialized data.

        To make an async_req request, set the async_req parameter.

        :param resource_path: Path to method endpoint.
        :param method: Method to call.
        :param path_params: Path parameters in the url.
        :param query_params: Query parameters in the url.
        :param header_params: Header parameters to be
            placed in the request header.
        :param body: Request body.
        :param post_params dict: Request post form parameters,
            for `application/x-www-form-urlencoded`, `multipart/form-data`.
        :param auth_settings list: Auth Settings names for the request.
        :param response: Response data type.
        :param files dict: key -> filename, value -> filepath,
            for `multipart/form-data`.
[e2e-predictor] :param async_req bool: execute request asynchronously [e2e-predictor] :param _return_http_data_only: response data without head status code [e2e-predictor] and headers [e2e-predictor] :param collection_formats: dict of collection formats for path, query, [e2e-predictor] header, and post parameters. [e2e-predictor] :param _preload_content: if False, the urllib3.HTTPResponse object will [e2e-predictor] be returned without reading/decoding response [e2e-predictor] data. Default is True. [e2e-predictor] :param _request_timeout: timeout setting for this request. If one [e2e-predictor] number provided, it will be total request [e2e-predictor] timeout. It can also be a pair (tuple) of [e2e-predictor] (connection, read) timeouts. [e2e-predictor] :return: [e2e-predictor] If async_req parameter is True, [e2e-predictor] the request will be called asynchronously. [e2e-predictor] The method will return the request thread. [e2e-predictor] If parameter async_req is False or missing, [e2e-predictor] then the method will return the response directly. 
[e2e-predictor]         """
[e2e-predictor]         if not async_req:
[e2e-predictor] >           return self.__call_api(resource_path, method,
[e2e-predictor]                                    path_params, query_params, header_params,
[e2e-predictor]                                    body, post_params, files,
[e2e-predictor]                                    response_type, auth_settings,
[e2e-predictor]                                    _return_http_data_only, collection_formats,
[e2e-predictor]                                    _preload_content, _request_timeout, _host)
[e2e-predictor]
[e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api_client.py:348:
[e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
[e2e-predictor]
[e2e-predictor] self =
[e2e-predictor] resource_path = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
[e2e-predictor] method = 'POST'
[e2e-predictor] path_params = [('group', 'serving.kserve.io'), ('version', 'v1beta1'), ('namespace', 'kserve-ci-e2e-test'), ('plural', 'inferenceservices')]
[e2e-predictor] query_params = []
[e2e-predictor] header_params = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
[e2e-predictor] body = {'apiVersion': 'serving.kserve.io/v1beta1', 'kind': 'InferenceService', 'metadata': {'annotations': {'serving.kserve.i...}, 'requests': {'cpu': '50m', 'memory': '128Mi'}}, 'storageUri': 'gs://kfserving-examples/models/sklearn/1.0/model'}}}}
[e2e-predictor] post_params = [], files = {}, response_type = 'object'
[e2e-predictor] auth_settings = ['BearerToken'], _return_http_data_only = True
[e2e-predictor] collection_formats = {}, _preload_content = True, _request_timeout = None
[e2e-predictor] _host = None
[e2e-predictor]
[e2e-predictor]     def __call_api(
[e2e-predictor]             self, resource_path, method, path_params=None,
[e2e-predictor]             query_params=None, header_params=None, body=None, post_params=None,
[e2e-predictor]             files=None, response_type=None, auth_settings=None,
[e2e-predictor]             _return_http_data_only=None, collection_formats=None,
[e2e-predictor] _preload_content=True, _request_timeout=None, _host=None): [e2e-predictor] [e2e-predictor] config = self.configuration [e2e-predictor] [e2e-predictor] # header parameters [e2e-predictor] header_params = header_params or {} [e2e-predictor] header_params.update(self.default_headers) [e2e-predictor] if self.cookie: [e2e-predictor] header_params['Cookie'] = self.cookie [e2e-predictor] if header_params: [e2e-predictor] header_params = self.sanitize_for_serialization(header_params) [e2e-predictor] header_params = dict(self.parameters_to_tuples(header_params, [e2e-predictor] collection_formats)) [e2e-predictor] [e2e-predictor] # path parameters [e2e-predictor] if path_params: [e2e-predictor] path_params = self.sanitize_for_serialization(path_params) [e2e-predictor] path_params = self.parameters_to_tuples(path_params, [e2e-predictor] collection_formats) [e2e-predictor] for k, v in path_params: [e2e-predictor] # specified safe chars, encode everything [e2e-predictor] resource_path = resource_path.replace( [e2e-predictor] '{%s}' % k, [e2e-predictor] quote(str(v), safe=config.safe_chars_for_path_param) [e2e-predictor] ) [e2e-predictor] [e2e-predictor] # query parameters [e2e-predictor] if query_params: [e2e-predictor] query_params = self.sanitize_for_serialization(query_params) [e2e-predictor] query_params = self.parameters_to_tuples(query_params, [e2e-predictor] collection_formats) [e2e-predictor] [e2e-predictor] # post parameters [e2e-predictor] if post_params or files: [e2e-predictor] post_params = post_params if post_params else [] [e2e-predictor] post_params = self.sanitize_for_serialization(post_params) [e2e-predictor] post_params = self.parameters_to_tuples(post_params, [e2e-predictor] collection_formats) [e2e-predictor] post_params.extend(self.files_parameters(files)) [e2e-predictor] [e2e-predictor] # auth setting [e2e-predictor] self.update_params_for_auth(header_params, query_params, auth_settings) [e2e-predictor] [e2e-predictor] # body 
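In the `__call_api` frame above, the path template is expanded by replacing each `{placeholder}` with a percent-encoded value, which is how `'/apis/{group}/{version}/namespaces/{namespace}/{plural}'` becomes the concrete `/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices` seen in the next frame. A minimal stdlib sketch of that loop (the `expand_path` helper name is hypothetical):

```python
from urllib.parse import quote

def expand_path(template, path_params, safe=""):
    """Substitute {placeholders}; everything outside `safe` is percent-encoded,
    mirroring the path-parameter loop in the __call_api frame above."""
    for k, v in path_params:
        template = template.replace("{%s}" % k, quote(str(v), safe=safe))
    return template
```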
[e2e-predictor] if body: [e2e-predictor] body = self.sanitize_for_serialization(body) [e2e-predictor] [e2e-predictor] # request url [e2e-predictor] if _host is None: [e2e-predictor] url = self.configuration.host + resource_path [e2e-predictor] else: [e2e-predictor] # use server/host defined in path or operation instead [e2e-predictor] url = _host + resource_path [e2e-predictor] [e2e-predictor] # perform request and return response [e2e-predictor] > response_data = self.request( [e2e-predictor] method, url, query_params=query_params, headers=header_params, [e2e-predictor] post_params=post_params, body=body, [e2e-predictor] _preload_content=_preload_content, [e2e-predictor] _request_timeout=_request_timeout) [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api_client.py:180: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] method = 'POST' [e2e-predictor] url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices' [e2e-predictor] query_params = [] [e2e-predictor] headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'} [e2e-predictor] post_params = [] [e2e-predictor] body = {'apiVersion': 'serving.kserve.io/v1beta1', 'kind': 'InferenceService', 'metadata': {'annotations': {'serving.kserve.i...}, 'requests': {'cpu': '50m', 'memory': '128Mi'}}, 'storageUri': 'gs://kfserving-examples/models/sklearn/1.0/model'}}}} [e2e-predictor] _preload_content = True, _request_timeout = None [e2e-predictor] [e2e-predictor] def request(self, method, url, query_params=None, headers=None, [e2e-predictor] post_params=None, body=None, _preload_content=True, [e2e-predictor] _request_timeout=None): [e2e-predictor] """Makes the HTTP request using 
RESTClient.""" [e2e-predictor] if method == "GET": [e2e-predictor] return self.rest_client.GET(url, [e2e-predictor] query_params=query_params, [e2e-predictor] _preload_content=_preload_content, [e2e-predictor] _request_timeout=_request_timeout, [e2e-predictor] headers=headers) [e2e-predictor] elif method == "HEAD": [e2e-predictor] return self.rest_client.HEAD(url, [e2e-predictor] query_params=query_params, [e2e-predictor] _preload_content=_preload_content, [e2e-predictor] _request_timeout=_request_timeout, [e2e-predictor] headers=headers) [e2e-predictor] elif method == "OPTIONS": [e2e-predictor] return self.rest_client.OPTIONS(url, [e2e-predictor] query_params=query_params, [e2e-predictor] headers=headers, [e2e-predictor] _preload_content=_preload_content, [e2e-predictor] _request_timeout=_request_timeout) [e2e-predictor] elif method == "POST": [e2e-predictor] > return self.rest_client.POST(url, [e2e-predictor] query_params=query_params, [e2e-predictor] headers=headers, [e2e-predictor] post_params=post_params, [e2e-predictor] _preload_content=_preload_content, [e2e-predictor] _request_timeout=_request_timeout, [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api_client.py:391: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices' [e2e-predictor] headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'} [e2e-predictor] query_params = [], post_params = [] [e2e-predictor] body = {'apiVersion': 'serving.kserve.io/v1beta1', 'kind': 'InferenceService', 'metadata': {'annotations': {'serving.kserve.i...}, 'requests': {'cpu': '50m', 'memory': '128Mi'}}, 'storageUri': 
'gs://kfserving-examples/models/sklearn/1.0/model'}}}} [e2e-predictor] _preload_content = True, _request_timeout = None [e2e-predictor] [e2e-predictor] def POST(self, url, headers=None, query_params=None, post_params=None, [e2e-predictor] body=None, _preload_content=True, _request_timeout=None): [e2e-predictor] > return self.request("POST", url, [e2e-predictor] headers=headers, [e2e-predictor] query_params=query_params, [e2e-predictor] post_params=post_params, [e2e-predictor] _preload_content=_preload_content, [e2e-predictor] _request_timeout=_request_timeout, [e2e-predictor] body=body) [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/rest.py:279: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] method = 'POST' [e2e-predictor] url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices' [e2e-predictor] query_params = [] [e2e-predictor] headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'} [e2e-predictor] body = {'apiVersion': 'serving.kserve.io/v1beta1', 'kind': 'InferenceService', 'metadata': {'annotations': {'serving.kserve.i...}, 'requests': {'cpu': '50m', 'memory': '128Mi'}}, 'storageUri': 'gs://kfserving-examples/models/sklearn/1.0/model'}}}} [e2e-predictor] post_params = {}, _preload_content = True, _request_timeout = None [e2e-predictor] [e2e-predictor] def request(self, method, url, query_params=None, headers=None, [e2e-predictor] body=None, post_params=None, _preload_content=True, [e2e-predictor] _request_timeout=None): [e2e-predictor] """Perform requests. 
[e2e-predictor]
[e2e-predictor]         :param method: http request method
[e2e-predictor]         :param url: http request url
[e2e-predictor]         :param query_params: query parameters in the url
[e2e-predictor]         :param headers: http request headers
[e2e-predictor]         :param body: request json body, for `application/json`
[e2e-predictor]         :param post_params: request post parameters,
[e2e-predictor]                             `application/x-www-form-urlencoded`
[e2e-predictor]                             and `multipart/form-data`
[e2e-predictor]         :param _preload_content: if False, the urllib3.HTTPResponse object will
[e2e-predictor]                                  be returned without reading/decoding response
[e2e-predictor]                                  data. Default is True.
[e2e-predictor]         :param _request_timeout: timeout setting for this request. If one
[e2e-predictor]                                  number provided, it will be total request
[e2e-predictor]                                  timeout. It can also be a pair (tuple) of
[e2e-predictor]                                  (connection, read) timeouts.
[e2e-predictor]         """
[e2e-predictor]         method = method.upper()
[e2e-predictor]         assert method in ['GET', 'HEAD', 'DELETE', 'POST', 'PUT',
[e2e-predictor]                           'PATCH', 'OPTIONS']
[e2e-predictor]
[e2e-predictor]         if post_params and body:
[e2e-predictor]             raise ApiValueError(
[e2e-predictor]                 "body parameter cannot be used with post_params parameter."
[e2e-predictor]             )
[e2e-predictor]
[e2e-predictor]         post_params = post_params or {}
[e2e-predictor]         headers = headers or {}
[e2e-predictor]
[e2e-predictor]         timeout = None
[e2e-predictor]         if _request_timeout:
[e2e-predictor]             if isinstance(_request_timeout, (int, ) if six.PY3 else (int, long)):  # noqa: E501,F821
[e2e-predictor]                 timeout = urllib3.Timeout(total=_request_timeout)
[e2e-predictor]             elif (isinstance(_request_timeout, tuple) and
[e2e-predictor]                   len(_request_timeout) == 2):
[e2e-predictor]                 timeout = urllib3.Timeout(
[e2e-predictor]                     connect=_request_timeout[0], read=_request_timeout[1])
[e2e-predictor]
[e2e-predictor]         if 'Content-Type' not in headers:
[e2e-predictor]             headers['Content-Type'] = 'application/json'
[e2e-predictor]
[e2e-predictor]         try:
[e2e-predictor]             # For `POST`, `PUT`, `PATCH`, `OPTIONS`, `DELETE`
[e2e-predictor]             if method in ['POST', 'PUT', 'PATCH', 'OPTIONS', 'DELETE']:
[e2e-predictor]                 if query_params:
[e2e-predictor]                     url += '?' + urlencode(query_params)
[e2e-predictor]                 if (re.search('json', headers['Content-Type'], re.IGNORECASE) or
[e2e-predictor]                         headers['Content-Type'] == 'application/apply-patch+yaml'):
[e2e-predictor]                     if headers['Content-Type'] == 'application/json-patch+json':
[e2e-predictor]                         if not isinstance(body, list):
[e2e-predictor]                             headers['Content-Type'] = \
[e2e-predictor]                                 'application/strategic-merge-patch+json'
[e2e-predictor]                     request_body = None
[e2e-predictor]                     if body is not None:
[e2e-predictor]                         request_body = json.dumps(body)
[e2e-predictor] >                   r = self.pool_manager.request(
[e2e-predictor]                         method, url,
[e2e-predictor]                         body=request_body,
[e2e-predictor]                         preload_content=_preload_content,
[e2e-predictor]                         timeout=timeout,
[e2e-predictor]
[e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/rest.py:172:
[e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
[e2e-predictor]
[e2e-predictor] self =
[e2e-predictor] method = 'POST'
[e2e-predictor] url =
'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices' [e2e-predictor] body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"annotations": {"serving.kserve...., "requests": {"cpu": "50m", "memory": "128Mi"}}, "storageUri": "gs://kfserving-examples/models/sklearn/1.0/model"}}}}' [e2e-predictor] fields = None [e2e-predictor] headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'} [e2e-predictor] json = None [e2e-predictor] urlopen_kw = {'body': '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"annotations": {"servin...Mi"}}, "storageUri": "gs://kfserving-examples/models/sklearn/1.0/model"}}}}', 'preload_content': True, 'timeout': None} [e2e-predictor] [e2e-predictor] def request( [e2e-predictor] self, [e2e-predictor] method: str, [e2e-predictor] url: str, [e2e-predictor] body: _TYPE_BODY | None = None, [e2e-predictor] fields: _TYPE_FIELDS | None = None, [e2e-predictor] headers: typing.Mapping[str, str] | None = None, [e2e-predictor] json: typing.Any | None = None, [e2e-predictor] **urlopen_kw: typing.Any, [e2e-predictor] ) -> BaseHTTPResponse: [e2e-predictor] """ [e2e-predictor] Make a request using :meth:`urlopen` with the appropriate encoding of [e2e-predictor] ``fields`` based on the ``method`` used. [e2e-predictor] [e2e-predictor] This is a convenience method that requires the least amount of manual [e2e-predictor] effort. It can be used in most situations, while still having the [e2e-predictor] option to drop down to more specific methods when necessary, such as [e2e-predictor] :meth:`request_encode_url`, :meth:`request_encode_body`, [e2e-predictor] or even the lowest level :meth:`urlopen`. [e2e-predictor] [e2e-predictor] :param method: [e2e-predictor] HTTP request method (such as GET, POST, PUT, etc.) 
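The `_request_timeout` branching in the `rest.py` frame above accepts either a single number (total timeout) or a `(connect, read)` pair. A dependency-free sketch of that normalization, returning a plain dict instead of `urllib3.Timeout` so it stands alone (the `normalize_timeout` helper name is mine):

```python
def normalize_timeout(_request_timeout):
    """Mirror the rest.py branching above: a bare number means a total
    timeout, a 2-tuple means separate (connect, read) timeouts."""
    if _request_timeout is None:
        return None
    if isinstance(_request_timeout, (int, float)):
        return {"total": _request_timeout}
    if isinstance(_request_timeout, tuple) and len(_request_timeout) == 2:
        return {"connect": _request_timeout[0], "read": _request_timeout[1]}
    raise ValueError("unsupported timeout spec")
```

The trace shows `_request_timeout = None`, so the failing request ran with no client-side timeout at all.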
[e2e-predictor] [e2e-predictor] :param url: [e2e-predictor] The URL to perform the request on. [e2e-predictor] [e2e-predictor] :param body: [e2e-predictor] Data to send in the request body, either :class:`str`, :class:`bytes`, [e2e-predictor] an iterable of :class:`str`/:class:`bytes`, or a file-like object. [e2e-predictor] [e2e-predictor] :param fields: [e2e-predictor] Data to encode and send in the URL or request body, depending on ``method``. [e2e-predictor] [e2e-predictor] :param headers: [e2e-predictor] Dictionary of custom headers to send, such as User-Agent, [e2e-predictor] If-None-Match, etc. If None, pool headers are used. If provided, [e2e-predictor] these headers completely replace any pool-specific headers. [e2e-predictor] [e2e-predictor] :param json: [e2e-predictor] Data to encode and send as JSON with UTF-encoded in the request body. [e2e-predictor] The ``"Content-Type"`` header will be set to ``"application/json"`` [e2e-predictor] unless specified otherwise. [e2e-predictor] """ [e2e-predictor] method = method.upper() [e2e-predictor] [e2e-predictor] if json is not None and body is not None: [e2e-predictor] raise TypeError( [e2e-predictor] "request got values for both 'body' and 'json' parameters which are mutually exclusive" [e2e-predictor] ) [e2e-predictor] [e2e-predictor] if json is not None: [e2e-predictor] if headers is None: [e2e-predictor] headers = self.headers [e2e-predictor] [e2e-predictor] if not ("content-type" in map(str.lower, headers.keys())): [e2e-predictor] headers = HTTPHeaderDict(headers) [e2e-predictor] headers["Content-Type"] = "application/json" [e2e-predictor] [e2e-predictor] body = _json.dumps(json, separators=(",", ":"), ensure_ascii=False).encode( [e2e-predictor] "utf-8" [e2e-predictor] ) [e2e-predictor] [e2e-predictor] if body is not None: [e2e-predictor] urlopen_kw["body"] = body [e2e-predictor] [e2e-predictor] if method in self._encode_url_methods: [e2e-predictor] return self.request_encode_url( [e2e-predictor] method, 
[e2e-predictor] url, [e2e-predictor] fields=fields, # type: ignore[arg-type] [e2e-predictor] headers=headers, [e2e-predictor] **urlopen_kw, [e2e-predictor] ) [e2e-predictor] else: [e2e-predictor] > return self.request_encode_body( [e2e-predictor] method, url, fields=fields, headers=headers, **urlopen_kw [e2e-predictor] ) [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/_request_methods.py:143: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] method = 'POST' [e2e-predictor] url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices' [e2e-predictor] fields = None [e2e-predictor] headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'} [e2e-predictor] encode_multipart = True, multipart_boundary = None [e2e-predictor] urlopen_kw = {'body': '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"annotations": {"servin...Mi"}}, "storageUri": "gs://kfserving-examples/models/sklearn/1.0/model"}}}}', 'preload_content': True, 'timeout': None} [e2e-predictor] extra_kw = {'body': '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"annotations": {"servin...nt': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'}), 'preload_content': True, 'timeout': None} [e2e-predictor] [e2e-predictor] def request_encode_body( [e2e-predictor] self, [e2e-predictor] method: str, [e2e-predictor] url: str, [e2e-predictor] fields: _TYPE_FIELDS | None = None, [e2e-predictor] headers: typing.Mapping[str, str] | None = None, [e2e-predictor] encode_multipart: bool = True, [e2e-predictor] multipart_boundary: str | None = None, [e2e-predictor] **urlopen_kw: str, [e2e-predictor] ) -> 
BaseHTTPResponse: [e2e-predictor] """ [e2e-predictor] Make a request using :meth:`urlopen` with the ``fields`` encoded in [e2e-predictor] the body. This is useful for request methods like POST, PUT, PATCH, etc. [e2e-predictor] [e2e-predictor] When ``encode_multipart=True`` (default), then [e2e-predictor] :func:`urllib3.encode_multipart_formdata` is used to encode [e2e-predictor] the payload with the appropriate content type. Otherwise [e2e-predictor] :func:`urllib.parse.urlencode` is used with the [e2e-predictor] 'application/x-www-form-urlencoded' content type. [e2e-predictor] [e2e-predictor] Multipart encoding must be used when posting files, and it's reasonably [e2e-predictor] safe to use it in other times too. However, it may break request [e2e-predictor] signing, such as with OAuth. [e2e-predictor] [e2e-predictor] Supports an optional ``fields`` parameter of key/value strings AND [e2e-predictor] key/filetuple. A filetuple is a (filename, data, MIME type) tuple where [e2e-predictor] the MIME type is optional. For example:: [e2e-predictor] [e2e-predictor] fields = { [e2e-predictor] 'foo': 'bar', [e2e-predictor] 'fakefile': ('foofile.txt', 'contents of foofile'), [e2e-predictor] 'realfile': ('barfile.txt', open('realfile').read()), [e2e-predictor] 'typedfile': ('bazfile.bin', open('bazfile').read(), [e2e-predictor] 'image/jpeg'), [e2e-predictor] 'nonamefile': 'contents of nonamefile field', [e2e-predictor] } [e2e-predictor] [e2e-predictor] When uploading a file, providing a filename (the first parameter of the [e2e-predictor] tuple) is optional but recommended to best mimic behavior of browsers. [e2e-predictor] [e2e-predictor] Note that if ``headers`` are supplied, the 'Content-Type' header will [e2e-predictor] be overwritten because it depends on the dynamic random boundary string [e2e-predictor] which is used to compose the body of the request. The random boundary [e2e-predictor] string can be explicitly set with the ``multipart_boundary`` parameter. 
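urllib3's `request()` shown above enforces that `body` and `json` are mutually exclusive, and when `json` is used it serializes compactly and defaults the `Content-Type` header to `application/json`. A small stand-alone sketch of that rule (the `prepare_body` helper is hypothetical, not urllib3 API):

```python
import json as _json

def prepare_body(body=None, json=None, headers=None):
    """Mirror urllib3's PoolManager.request() rule shown above:
    'body' and 'json' are mutually exclusive; 'json' implies a JSON
    Content-Type unless one is already present."""
    headers = dict(headers or {})
    if json is not None and body is not None:
        raise TypeError("got values for both 'body' and 'json'")
    if json is not None:
        if not any(k.lower() == "content-type" for k in headers):
            headers["Content-Type"] = "application/json"
        body = _json.dumps(json, separators=(",", ":"),
                           ensure_ascii=False).encode("utf-8")
    return body, headers
```

In this traceback the kubernetes client takes the `body` path: it has already done `json.dumps(body)` in `rest.py` and set `Content-Type` itself.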
[e2e-predictor] [e2e-predictor] :param method: [e2e-predictor] HTTP request method (such as GET, POST, PUT, etc.) [e2e-predictor] [e2e-predictor] :param url: [e2e-predictor] The URL to perform the request on. [e2e-predictor] [e2e-predictor] :param fields: [e2e-predictor] Data to encode and send in the request body. [e2e-predictor] [e2e-predictor] :param headers: [e2e-predictor] Dictionary of custom headers to send, such as User-Agent, [e2e-predictor] If-None-Match, etc. If None, pool headers are used. If provided, [e2e-predictor] these headers completely replace any pool-specific headers. [e2e-predictor] [e2e-predictor] :param encode_multipart: [e2e-predictor] If True, encode the ``fields`` using the multipart/form-data MIME [e2e-predictor] format. [e2e-predictor] [e2e-predictor] :param multipart_boundary: [e2e-predictor] If not specified, then a random boundary will be generated using [e2e-predictor] :func:`urllib3.filepost.choose_boundary`. [e2e-predictor] """ [e2e-predictor] if headers is None: [e2e-predictor] headers = self.headers [e2e-predictor] [e2e-predictor] extra_kw: dict[str, typing.Any] = {"headers": HTTPHeaderDict(headers)} [e2e-predictor] body: bytes | str [e2e-predictor] [e2e-predictor] if fields: [e2e-predictor] if "body" in urlopen_kw: [e2e-predictor] raise TypeError( [e2e-predictor] "request got values for both 'fields' and 'body', can only specify one." 
[e2e-predictor] ) [e2e-predictor] [e2e-predictor] if encode_multipart: [e2e-predictor] body, content_type = encode_multipart_formdata( [e2e-predictor] fields, boundary=multipart_boundary [e2e-predictor] ) [e2e-predictor] else: [e2e-predictor] body, content_type = ( [e2e-predictor] urlencode(fields), # type: ignore[arg-type] [e2e-predictor] "application/x-www-form-urlencoded", [e2e-predictor] ) [e2e-predictor] [e2e-predictor] extra_kw["body"] = body [e2e-predictor] extra_kw["headers"].setdefault("Content-Type", content_type) [e2e-predictor] [e2e-predictor] extra_kw.update(urlopen_kw) [e2e-predictor] [e2e-predictor] > return self.urlopen(method, url, **extra_kw) [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/_request_methods.py:278: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] method = 'POST' [e2e-predictor] url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices' [e2e-predictor] redirect = True [e2e-predictor] kw = {'assert_same_host': False, 'body': '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata...', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'}), 'preload_content': True, ...} [e2e-predictor] u = Url(scheme='https', auth=None, host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', p...443, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices', query=None, fragment=None) [e2e-predictor] conn = [e2e-predictor] [e2e-predictor] def urlopen( # type: ignore[override] [e2e-predictor] self, method: str, url: str, redirect: bool = True, **kw: typing.Any [e2e-predictor] ) -> BaseHTTPResponse: [e2e-predictor] """ [e2e-predictor] Same as 
:meth:`urllib3.HTTPConnectionPool.urlopen` [e2e-predictor] with custom cross-host redirect logic and only sends the request-uri [e2e-predictor] portion of the ``url``. [e2e-predictor] [e2e-predictor] The given ``url`` parameter must be absolute, such that an appropriate [e2e-predictor] :class:`urllib3.connectionpool.ConnectionPool` can be chosen for it. [e2e-predictor] """ [e2e-predictor] u = parse_url(url) [e2e-predictor] [e2e-predictor] if u.scheme is None: [e2e-predictor] warnings.warn( [e2e-predictor] "URLs without a scheme (ie 'https://') are deprecated and will raise an error " [e2e-predictor] "in a future version of urllib3. To avoid this DeprecationWarning ensure all URLs " [e2e-predictor] "start with 'https://' or 'http://'. Read more in this issue: " [e2e-predictor] "https://github.com/urllib3/urllib3/issues/2920", [e2e-predictor] category=DeprecationWarning, [e2e-predictor] stacklevel=2, [e2e-predictor] ) [e2e-predictor] [e2e-predictor] conn = self.connection_from_host(u.host, port=u.port, scheme=u.scheme) [e2e-predictor] [e2e-predictor] kw["assert_same_host"] = False [e2e-predictor] kw["redirect"] = False [e2e-predictor] [e2e-predictor] if "headers" not in kw: [e2e-predictor] kw["headers"] = self.headers [e2e-predictor] [e2e-predictor] if self._proxy_requires_url_absolute_form(u): [e2e-predictor] response = conn.urlopen(method, url, **kw) [e2e-predictor] else: [e2e-predictor] > response = conn.urlopen(method, u.request_uri, **kw) [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/poolmanager.py:457: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] method = 'POST' [e2e-predictor] url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices' [e2e-predictor] body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"annotations": {"serving.kserve...., 
"requests": {"cpu": "50m", "memory": "128Mi"}}, "storageUri": "gs://kfserving-examples/models/sklearn/1.0/model"}}}}'
[e2e-predictor] headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'})
[e2e-predictor] retries = Retry(total=2, connect=None, read=None, redirect=None, status=None)
[e2e-predictor] redirect = False, assert_same_host = False, timeout = None, pool_timeout = None
[e2e-predictor] release_conn = True, chunked = False, body_pos = None, preload_content = True
[e2e-predictor] decode_content = True, response_kw = {}
[e2e-predictor] parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices', query=None, fragment=None)
[e2e-predictor] destination_scheme = None, conn = None, release_this_conn = True
[e2e-predictor] http_tunnel_required = False
[e2e-predictor] err = NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.c...a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")
[e2e-predictor] clean_exit = False
[e2e-predictor]
[e2e-predictor]     def urlopen(  # type: ignore[override]
[e2e-predictor]         self,
[e2e-predictor]         method: str,
[e2e-predictor]         url: str,
[e2e-predictor]         body: _TYPE_BODY | None = None,
[e2e-predictor]         headers: typing.Mapping[str, str] | None = None,
[e2e-predictor]         retries: Retry | bool | int | None = None,
[e2e-predictor]         redirect: bool = True,
[e2e-predictor]         assert_same_host: bool = True,
[e2e-predictor]         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
[e2e-predictor]         pool_timeout: int | None = None,
[e2e-predictor]         release_conn: bool | None = None,
[e2e-predictor]         chunked: bool = False,
[e2e-predictor]         body_pos: _TYPE_BODY_POSITION | None = None,
[e2e-predictor]         preload_content: bool = True,
[e2e-predictor]         decode_content: bool = True,
[e2e-predictor] **response_kw: typing.Any, [e2e-predictor] ) -> BaseHTTPResponse: [e2e-predictor] """ [e2e-predictor] Get a connection from the pool and perform an HTTP request. This is the [e2e-predictor] lowest level call for making a request, so you'll need to specify all [e2e-predictor] the raw details. [e2e-predictor] [e2e-predictor] .. note:: [e2e-predictor] [e2e-predictor] More commonly, it's appropriate to use a convenience method [e2e-predictor] such as :meth:`request`. [e2e-predictor] [e2e-predictor] .. note:: [e2e-predictor] [e2e-predictor] `release_conn` will only behave as expected if [e2e-predictor] `preload_content=False` because we want to make [e2e-predictor] `preload_content=False` the default behaviour someday soon without [e2e-predictor] breaking backwards compatibility. [e2e-predictor] [e2e-predictor] :param method: [e2e-predictor] HTTP request method (such as GET, POST, PUT, etc.) [e2e-predictor] [e2e-predictor] :param url: [e2e-predictor] The URL to perform the request on. [e2e-predictor] [e2e-predictor] :param body: [e2e-predictor] Data to send in the request body, either :class:`str`, :class:`bytes`, [e2e-predictor] an iterable of :class:`str`/:class:`bytes`, or a file-like object. [e2e-predictor] [e2e-predictor] :param headers: [e2e-predictor] Dictionary of custom headers to send, such as User-Agent, [e2e-predictor] If-None-Match, etc. If None, pool headers are used. If provided, [e2e-predictor] these headers completely replace any pool-specific headers. [e2e-predictor] [e2e-predictor] :param retries: [e2e-predictor] Configure the number of retries to allow before raising a [e2e-predictor] :class:`~urllib3.exceptions.MaxRetryError` exception. [e2e-predictor] [e2e-predictor] If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a [e2e-predictor] :class:`~urllib3.util.retry.Retry` object for fine-grained control [e2e-predictor] over different types of retries. 
[e2e-predictor] Pass an integer number to retry connection errors that many times, [e2e-predictor] but no other types of errors. Pass zero to never retry. [e2e-predictor] [e2e-predictor] If ``False``, then retries are disabled and any exception is raised [e2e-predictor] immediately. Also, instead of raising a MaxRetryError on redirects, [e2e-predictor] the redirect response will be returned. [e2e-predictor] [e2e-predictor] :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. [e2e-predictor] [e2e-predictor] :param redirect: [e2e-predictor] If True, automatically handle redirects (status codes 301, 302, [e2e-predictor] 303, 307, 308). Each redirect counts as a retry. Disabling retries [e2e-predictor] will disable redirect, too. [e2e-predictor] [e2e-predictor] :param assert_same_host: [e2e-predictor] If ``True``, will make sure that the host of the pool requests is [e2e-predictor] consistent else will raise HostChangedError. When ``False``, you can [e2e-predictor] use the pool on an HTTP proxy and request foreign hosts. [e2e-predictor] [e2e-predictor] :param timeout: [e2e-predictor] If specified, overrides the default timeout for this one [e2e-predictor] request. It may be a float (in seconds) or an instance of [e2e-predictor] :class:`urllib3.util.Timeout`. [e2e-predictor] [e2e-predictor] :param pool_timeout: [e2e-predictor] If set and the pool is set to block=True, then this method will [e2e-predictor] block for ``pool_timeout`` seconds and raise EmptyPoolError if no [e2e-predictor] connection is available within the time period. [e2e-predictor] [e2e-predictor] :param bool preload_content: [e2e-predictor] If True, the response's body will be preloaded into memory. [e2e-predictor] [e2e-predictor] :param bool decode_content: [e2e-predictor] If True, will attempt to decode the body based on the [e2e-predictor] 'content-encoding' header. 
[e2e-predictor] [e2e-predictor] :param release_conn: [e2e-predictor] If False, then the urlopen call will not release the connection [e2e-predictor] back into the pool once a response is received (but will release if [e2e-predictor] you read the entire contents of the response such as when [e2e-predictor] `preload_content=True`). This is useful if you're not preloading [e2e-predictor] the response's content immediately. You will need to call [e2e-predictor] ``r.release_conn()`` on the response ``r`` to return the connection [e2e-predictor] back into the pool. If None, it takes the value of ``preload_content`` [e2e-predictor] which defaults to ``True``. [e2e-predictor] [e2e-predictor] :param bool chunked: [e2e-predictor] If True, urllib3 will send the body using chunked transfer [e2e-predictor] encoding. Otherwise, urllib3 will send the body using the standard [e2e-predictor] content-length form. Defaults to False. [e2e-predictor] [e2e-predictor] :param int body_pos: [e2e-predictor] Position to seek to in file-like body in the event of a retry or [e2e-predictor] redirect. Typically this won't need to be set because urllib3 will [e2e-predictor] auto-populate the value when needed. 
[e2e-predictor] """ [e2e-predictor] parsed_url = parse_url(url) [e2e-predictor] destination_scheme = parsed_url.scheme [e2e-predictor] [e2e-predictor] if headers is None: [e2e-predictor] headers = self.headers [e2e-predictor] [e2e-predictor] if not isinstance(retries, Retry): [e2e-predictor] retries = Retry.from_int(retries, redirect=redirect, default=self.retries) [e2e-predictor] [e2e-predictor] if release_conn is None: [e2e-predictor] release_conn = preload_content [e2e-predictor] [e2e-predictor] # Check host [e2e-predictor] if assert_same_host and not self.is_same_host(url): [e2e-predictor] raise HostChangedError(self, url, retries) [e2e-predictor] [e2e-predictor] # Ensure that the URL we're connecting to is properly encoded [e2e-predictor] if url.startswith("/"): [e2e-predictor] url = to_str(_encode_target(url)) [e2e-predictor] else: [e2e-predictor] url = to_str(parsed_url.url) [e2e-predictor] [e2e-predictor] conn = None [e2e-predictor] [e2e-predictor] # Track whether `conn` needs to be released before [e2e-predictor] # returning/raising/recursing. Update this variable if necessary, and [e2e-predictor] # leave `release_conn` constant throughout the function. That way, if [e2e-predictor] # the function recurses, the original value of `release_conn` will be [e2e-predictor] # passed down into the recursive call, and its value will be respected. [e2e-predictor] # [e2e-predictor] # See issue #651 [1] for details. [e2e-predictor] # [e2e-predictor] # [1] [e2e-predictor] release_this_conn = release_conn [e2e-predictor] [e2e-predictor] http_tunnel_required = connection_requires_http_tunnel( [e2e-predictor] self.proxy, self.proxy_config, destination_scheme [e2e-predictor] ) [e2e-predictor] [e2e-predictor] # Merge the proxy headers. Only done when not using HTTP CONNECT. We [e2e-predictor] # have to copy the headers dict so we can safely change it without those [e2e-predictor] # changes being reflected in anyone else's copy. 
[e2e-predictor] if not http_tunnel_required: [e2e-predictor] headers = headers.copy() # type: ignore[attr-defined] [e2e-predictor] headers.update(self.proxy_headers) # type: ignore[union-attr] [e2e-predictor] [e2e-predictor] # Must keep the exception bound to a separate variable or else Python 3 [e2e-predictor] # complains about UnboundLocalError. [e2e-predictor] err = None [e2e-predictor] [e2e-predictor] # Keep track of whether we cleanly exited the except block. This [e2e-predictor] # ensures we do proper cleanup in finally. [e2e-predictor] clean_exit = False [e2e-predictor] [e2e-predictor] # Rewind body position, if needed. Record current position [e2e-predictor] # for future rewinds in the event of a redirect/retry. [e2e-predictor] body_pos = set_file_position(body, body_pos) [e2e-predictor] [e2e-predictor] try: [e2e-predictor] # Request a connection from the queue. [e2e-predictor] timeout_obj = self._get_timeout(timeout) [e2e-predictor] conn = self._get_conn(timeout=pool_timeout) [e2e-predictor] [e2e-predictor] conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] [e2e-predictor] [e2e-predictor] # Is this a closed/new connection that requires CONNECT tunnelling? [e2e-predictor] if self.proxy is not None and http_tunnel_required and conn.is_closed: [e2e-predictor] try: [e2e-predictor] self._prepare_proxy(conn) [e2e-predictor] except (BaseSSLError, OSError, SocketTimeout) as e: [e2e-predictor] self._raise_timeout( [e2e-predictor] err=e, url=self.proxy.url, timeout_value=conn.timeout [e2e-predictor] ) [e2e-predictor] raise [e2e-predictor] [e2e-predictor] # If we're going to release the connection in ``finally:``, then [e2e-predictor] # the response doesn't need to know about the connection. Otherwise [e2e-predictor] # it will also try to release it and we'll have a double-release [e2e-predictor] # mess. 
[e2e-predictor] response_conn = conn if not release_conn else None [e2e-predictor] [e2e-predictor] # Make the request on the HTTPConnection object [e2e-predictor] response = self._make_request( [e2e-predictor] conn, [e2e-predictor] method, [e2e-predictor] url, [e2e-predictor] timeout=timeout_obj, [e2e-predictor] body=body, [e2e-predictor] headers=headers, [e2e-predictor] chunked=chunked, [e2e-predictor] retries=retries, [e2e-predictor] response_conn=response_conn, [e2e-predictor] preload_content=preload_content, [e2e-predictor] decode_content=decode_content, [e2e-predictor] **response_kw, [e2e-predictor] ) [e2e-predictor] [e2e-predictor] # Everything went great! [e2e-predictor] clean_exit = True [e2e-predictor] [e2e-predictor] except EmptyPoolError: [e2e-predictor] # Didn't get a connection from the pool, no need to clean up [e2e-predictor] clean_exit = True [e2e-predictor] release_this_conn = False [e2e-predictor] raise [e2e-predictor] [e2e-predictor] except ( [e2e-predictor] TimeoutError, [e2e-predictor] HTTPException, [e2e-predictor] OSError, [e2e-predictor] ProtocolError, [e2e-predictor] BaseSSLError, [e2e-predictor] SSLError, [e2e-predictor] CertificateError, [e2e-predictor] ProxyError, [e2e-predictor] ) as e: [e2e-predictor] # Discard the connection for these exceptions. It will be [e2e-predictor] # replaced during the next _get_conn() call. 
[e2e-predictor] clean_exit = False [e2e-predictor] new_e: Exception = e [e2e-predictor] if isinstance(e, (BaseSSLError, CertificateError)): [e2e-predictor] new_e = SSLError(e) [e2e-predictor] if isinstance( [e2e-predictor] new_e, [e2e-predictor] ( [e2e-predictor] OSError, [e2e-predictor] NewConnectionError, [e2e-predictor] TimeoutError, [e2e-predictor] SSLError, [e2e-predictor] HTTPException, [e2e-predictor] ), [e2e-predictor] ) and (conn and conn.proxy and not conn.has_connected_to_proxy): [e2e-predictor] new_e = _wrap_proxy_error(new_e, conn.proxy.scheme) [e2e-predictor] elif isinstance(new_e, (OSError, HTTPException)): [e2e-predictor] new_e = ProtocolError("Connection aborted.", new_e) [e2e-predictor] [e2e-predictor] retries = retries.increment( [e2e-predictor] method, url, error=new_e, _pool=self, _stacktrace=sys.exc_info()[2] [e2e-predictor] ) [e2e-predictor] retries.sleep() [e2e-predictor] [e2e-predictor] # Keep track of the error for the retry warning. [e2e-predictor] err = e [e2e-predictor] [e2e-predictor] finally: [e2e-predictor] if not clean_exit: [e2e-predictor] # We hit some kind of exception, handled or otherwise. We need [e2e-predictor] # to throw the connection away unless explicitly told not to. [e2e-predictor] # Close the connection, set the variable to None, and make sure [e2e-predictor] # we put the None back in the pool to avoid leaking it. [e2e-predictor] if conn: [e2e-predictor] conn.close() [e2e-predictor] conn = None [e2e-predictor] release_this_conn = True [e2e-predictor] [e2e-predictor] if release_this_conn: [e2e-predictor] # Put the connection back to be reused. If the connection is [e2e-predictor] # expired then it will be None, which will get replaced with a [e2e-predictor] # fresh connection during _get_conn. 
[e2e-predictor]                 self._put_conn(conn)
[e2e-predictor]
[e2e-predictor]         if not conn:
[e2e-predictor]             # Try again
[e2e-predictor]             log.warning(
[e2e-predictor]                 "Retrying (%r) after connection broken by '%r': %s", retries, err, url
[e2e-predictor]             )
[e2e-predictor] >           return self.urlopen(
[e2e-predictor]                 method,
[e2e-predictor]                 url,
[e2e-predictor]                 body,
[e2e-predictor]                 headers,
[e2e-predictor]                 retries,
[e2e-predictor]                 redirect,
[e2e-predictor]                 assert_same_host,
[e2e-predictor]                 timeout=timeout,
[e2e-predictor]                 pool_timeout=pool_timeout,
[e2e-predictor]                 release_conn=release_conn,
[e2e-predictor]                 chunked=chunked,
[e2e-predictor]                 body_pos=body_pos,
[e2e-predictor]                 preload_content=preload_content,
[e2e-predictor]                 decode_content=decode_content,
[e2e-predictor]                 **response_kw,
[e2e-predictor]             )
[e2e-predictor]
[e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:871:
[e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
[e2e-predictor]
[e2e-predictor] self =
[e2e-predictor] method = 'POST'
[e2e-predictor] url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
[e2e-predictor] body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"annotations": {"serving.kserve...., "requests": {"cpu": "50m", "memory": "128Mi"}}, "storageUri": "gs://kfserving-examples/models/sklearn/1.0/model"}}}}'
[e2e-predictor] headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'})
[e2e-predictor] retries = Retry(total=1, connect=None, read=None, redirect=None, status=None)
[e2e-predictor] redirect = False, assert_same_host = False, timeout = None, pool_timeout = None
[e2e-predictor] release_conn = True, chunked = False, body_pos = None, preload_content = True
[e2e-predictor] decode_content = True, response_kw = {}
[e2e-predictor] parsed_url = Url(scheme=None,
auth=None, host=None, port=None, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices', query=None, fragment=None)
[e2e-predictor] destination_scheme = None, conn = None, release_this_conn = True
[e2e-predictor] http_tunnel_required = False
[e2e-predictor] err = NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.c...a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")
[e2e-predictor] clean_exit = False
[e2e-predictor]
[e2e-predictor] >           return self.urlopen(
[e2e-predictor]                 method,
[e2e-predictor]                 url,
[e2e-predictor]                 body,
[e2e-predictor]                 headers,
[e2e-predictor]                 retries,
[e2e-predictor]                 redirect,
[e2e-predictor]                 assert_same_host,
[e2e-predictor]                 timeout=timeout,
[e2e-predictor]                 pool_timeout=pool_timeout,
[e2e-predictor]                 release_conn=release_conn,
[e2e-predictor]                 chunked=chunked,
[e2e-predictor]                 body_pos=body_pos,
[e2e-predictor]                 preload_content=preload_content,
[e2e-predictor]                 decode_content=decode_content,
[e2e-predictor]                 **response_kw,
[e2e-predictor]             )
[e2e-predictor]
[e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:871:
[e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
[e2e-predictor]
[e2e-predictor] self =
[e2e-predictor] method = 'POST'
[e2e-predictor] url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
[e2e-predictor] body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"annotations": {"serving.kserve...., "requests": {"cpu": "50m", "memory": "128Mi"}}, "storageUri": "gs://kfserving-examples/models/sklearn/1.0/model"}}}}'
[e2e-predictor] headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'})
[e2e-predictor] retries = Retry(total=0, connect=None, read=None, redirect=None, status=None)
[e2e-predictor] redirect = False, assert_same_host = False, timeout = None, pool_timeout = None
[e2e-predictor] release_conn = True, chunked = False, body_pos = None, preload_content = True
[e2e-predictor] decode_content = True, response_kw = {}
[e2e-predictor] parsed_url = Url(scheme=None,
auth=None, host=None, port=None, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices', query=None, fragment=None) [e2e-predictor] destination_scheme = None, conn = None, release_this_conn = True [e2e-predictor] http_tunnel_required = False [e2e-predictor] err = NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.c...a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)") [e2e-predictor] clean_exit = False [e2e-predictor] [e2e-predictor] def urlopen( # type: ignore[override] [e2e-predictor] self, [e2e-predictor] method: str, [e2e-predictor] url: str, [e2e-predictor] body: _TYPE_BODY | None = None, [e2e-predictor] headers: typing.Mapping[str, str] | None = None, [e2e-predictor] retries: Retry | bool | int | None = None, [e2e-predictor] redirect: bool = True, [e2e-predictor] assert_same_host: bool = True, [e2e-predictor] timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, [e2e-predictor] pool_timeout: int | None = None, [e2e-predictor] release_conn: bool | None = None, [e2e-predictor] chunked: bool = False, [e2e-predictor] body_pos: _TYPE_BODY_POSITION | None = None, [e2e-predictor] preload_content: bool = True, [e2e-predictor] decode_content: bool = True, [e2e-predictor] **response_kw: typing.Any, [e2e-predictor] ) -> BaseHTTPResponse: [e2e-predictor] """ [e2e-predictor] Get a connection from the pool and perform an HTTP request. This is the [e2e-predictor] lowest level call for making a request, so you'll need to specify all [e2e-predictor] the raw details. [e2e-predictor] [e2e-predictor] .. note:: [e2e-predictor] [e2e-predictor] More commonly, it's appropriate to use a convenience method [e2e-predictor] such as :meth:`request`. [e2e-predictor] [e2e-predictor] .. 
note:: [e2e-predictor] [e2e-predictor] `release_conn` will only behave as expected if [e2e-predictor] `preload_content=False` because we want to make [e2e-predictor] `preload_content=False` the default behaviour someday soon without [e2e-predictor] breaking backwards compatibility. [e2e-predictor] [e2e-predictor] :param method: [e2e-predictor] HTTP request method (such as GET, POST, PUT, etc.) [e2e-predictor] [e2e-predictor] :param url: [e2e-predictor] The URL to perform the request on. [e2e-predictor] [e2e-predictor] :param body: [e2e-predictor] Data to send in the request body, either :class:`str`, :class:`bytes`, [e2e-predictor] an iterable of :class:`str`/:class:`bytes`, or a file-like object. [e2e-predictor] [e2e-predictor] :param headers: [e2e-predictor] Dictionary of custom headers to send, such as User-Agent, [e2e-predictor] If-None-Match, etc. If None, pool headers are used. If provided, [e2e-predictor] these headers completely replace any pool-specific headers. [e2e-predictor] [e2e-predictor] :param retries: [e2e-predictor] Configure the number of retries to allow before raising a [e2e-predictor] :class:`~urllib3.exceptions.MaxRetryError` exception. [e2e-predictor] [e2e-predictor] If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a [e2e-predictor] :class:`~urllib3.util.retry.Retry` object for fine-grained control [e2e-predictor] over different types of retries. [e2e-predictor] Pass an integer number to retry connection errors that many times, [e2e-predictor] but no other types of errors. Pass zero to never retry. [e2e-predictor] [e2e-predictor] If ``False``, then retries are disabled and any exception is raised [e2e-predictor] immediately. Also, instead of raising a MaxRetryError on redirects, [e2e-predictor] the redirect response will be returned. [e2e-predictor] [e2e-predictor] :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 
[e2e-predictor] [e2e-predictor] :param redirect: [e2e-predictor] If True, automatically handle redirects (status codes 301, 302, [e2e-predictor] 303, 307, 308). Each redirect counts as a retry. Disabling retries [e2e-predictor] will disable redirect, too. [e2e-predictor] [e2e-predictor] :param assert_same_host: [e2e-predictor] If ``True``, will make sure that the host of the pool requests is [e2e-predictor] consistent else will raise HostChangedError. When ``False``, you can [e2e-predictor] use the pool on an HTTP proxy and request foreign hosts. [e2e-predictor] [e2e-predictor] :param timeout: [e2e-predictor] If specified, overrides the default timeout for this one [e2e-predictor] request. It may be a float (in seconds) or an instance of [e2e-predictor] :class:`urllib3.util.Timeout`. [e2e-predictor] [e2e-predictor] :param pool_timeout: [e2e-predictor] If set and the pool is set to block=True, then this method will [e2e-predictor] block for ``pool_timeout`` seconds and raise EmptyPoolError if no [e2e-predictor] connection is available within the time period. [e2e-predictor] [e2e-predictor] :param bool preload_content: [e2e-predictor] If True, the response's body will be preloaded into memory. [e2e-predictor] [e2e-predictor] :param bool decode_content: [e2e-predictor] If True, will attempt to decode the body based on the [e2e-predictor] 'content-encoding' header. [e2e-predictor] [e2e-predictor] :param release_conn: [e2e-predictor] If False, then the urlopen call will not release the connection [e2e-predictor] back into the pool once a response is received (but will release if [e2e-predictor] you read the entire contents of the response such as when [e2e-predictor] `preload_content=True`). This is useful if you're not preloading [e2e-predictor] the response's content immediately. You will need to call [e2e-predictor] ``r.release_conn()`` on the response ``r`` to return the connection [e2e-predictor] back into the pool. 
        If None, it takes the value of ``preload_content``
        which defaults to ``True``.

    :param bool chunked:
        If True, urllib3 will send the body using chunked transfer
        encoding. Otherwise, urllib3 will send the body using the standard
        content-length form. Defaults to False.

    :param int body_pos:
        Position to seek to in file-like body in the event of a retry or
        redirect. Typically this won't need to be set because urllib3 will
        auto-populate the value when needed.
    """
    parsed_url = parse_url(url)
    destination_scheme = parsed_url.scheme

    if headers is None:
        headers = self.headers

    if not isinstance(retries, Retry):
        retries = Retry.from_int(retries, redirect=redirect, default=self.retries)

    if release_conn is None:
        release_conn = preload_content

    # Check host
    if assert_same_host and not self.is_same_host(url):
        raise HostChangedError(self, url, retries)

    # Ensure that the URL we're connecting to is properly encoded
    if url.startswith("/"):
        url = to_str(_encode_target(url))
    else:
        url = to_str(parsed_url.url)

    conn = None

    # Track whether `conn` needs to be released before
    # returning/raising/recursing. Update this variable if necessary, and
    # leave `release_conn` constant throughout the function. That way, if
    # the function recurses, the original value of `release_conn` will be
    # passed down into the recursive call, and its value will be respected.
    #
    # See issue #651 [1] for details.
    #
    # [1]
    release_this_conn = release_conn

    http_tunnel_required = connection_requires_http_tunnel(
        self.proxy, self.proxy_config, destination_scheme
    )

    # Merge the proxy headers. Only done when not using HTTP CONNECT. We
    # have to copy the headers dict so we can safely change it without those
    # changes being reflected in anyone else's copy.
    if not http_tunnel_required:
        headers = headers.copy()  # type: ignore[attr-defined]
        headers.update(self.proxy_headers)  # type: ignore[union-attr]

    # Must keep the exception bound to a separate variable or else Python 3
    # complains about UnboundLocalError.
    err = None

    # Keep track of whether we cleanly exited the except block. This
    # ensures we do proper cleanup in finally.
    clean_exit = False

    # Rewind body position, if needed. Record current position
    # for future rewinds in the event of a redirect/retry.
    body_pos = set_file_position(body, body_pos)

    try:
        # Request a connection from the queue.
        timeout_obj = self._get_timeout(timeout)
        conn = self._get_conn(timeout=pool_timeout)

        conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]

        # Is this a closed/new connection that requires CONNECT tunnelling?
        if self.proxy is not None and http_tunnel_required and conn.is_closed:
            try:
                self._prepare_proxy(conn)
            except (BaseSSLError, OSError, SocketTimeout) as e:
                self._raise_timeout(
                    err=e, url=self.proxy.url, timeout_value=conn.timeout
                )
                raise

        # If we're going to release the connection in ``finally:``, then
        # the response doesn't need to know about the connection. Otherwise
        # it will also try to release it and we'll have a double-release
        # mess.
        response_conn = conn if not release_conn else None

        # Make the request on the HTTPConnection object
        response = self._make_request(
            conn,
            method,
            url,
            timeout=timeout_obj,
            body=body,
            headers=headers,
            chunked=chunked,
            retries=retries,
            response_conn=response_conn,
            preload_content=preload_content,
            decode_content=decode_content,
            **response_kw,
        )

        # Everything went great!
        clean_exit = True

    except EmptyPoolError:
        # Didn't get a connection from the pool, no need to clean up
        clean_exit = True
        release_this_conn = False
        raise

    except (
        TimeoutError,
        HTTPException,
        OSError,
        ProtocolError,
        BaseSSLError,
        SSLError,
        CertificateError,
        ProxyError,
    ) as e:
        # Discard the connection for these exceptions.
        # It will be
        # replaced during the next _get_conn() call.
        clean_exit = False
        new_e: Exception = e
        if isinstance(e, (BaseSSLError, CertificateError)):
            new_e = SSLError(e)
        if isinstance(
            new_e,
            (
                OSError,
                NewConnectionError,
                TimeoutError,
                SSLError,
                HTTPException,
            ),
        ) and (conn and conn.proxy and not conn.has_connected_to_proxy):
            new_e = _wrap_proxy_error(new_e, conn.proxy.scheme)
        elif isinstance(new_e, (OSError, HTTPException)):
            new_e = ProtocolError("Connection aborted.", new_e)

        retries = retries.increment(
            method, url, error=new_e, _pool=self, _stacktrace=sys.exc_info()[2]
        )
        retries.sleep()

        # Keep track of the error for the retry warning.
        err = e

    finally:
        if not clean_exit:
            # We hit some kind of exception, handled or otherwise. We need
            # to throw the connection away unless explicitly told not to.
            # Close the connection, set the variable to None, and make sure
            # we put the None back in the pool to avoid leaking it.
            if conn:
                conn.close()
                conn = None
            release_this_conn = True

        if release_this_conn:
            # Put the connection back to be reused. If the connection is
            # expired then it will be None, which will get replaced with a
            # fresh connection during _get_conn.
            self._put_conn(conn)

    if not conn:
        # Try again
        log.warning(
            "Retrying (%r) after connection broken by '%r': %s", retries, err, url
        )
>       return self.urlopen(
            method,
            url,
            body,
            headers,
            retries,
            redirect,
            assert_same_host,
            timeout=timeout,
            pool_timeout=pool_timeout,
            release_conn=release_conn,
            chunked=chunked,
            body_pos=body_pos,
            preload_content=preload_content,
            decode_content=decode_content,
            **response_kw,
        )

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:871:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
method = 'POST'
url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"annotations": {"serving.kserve...., "requests": {"cpu": "50m", "memory": "128Mi"}}, "storageUri": "gs://kfserving-examples/models/sklearn/1.0/model"}}}}'
headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'})
retries = Retry(total=0, connect=None, read=None, redirect=None, status=None)
redirect = False, assert_same_host = False, timeout = None, pool_timeout = None
release_conn = True, chunked = False, body_pos = None, preload_content = True
decode_content = True, response_kw = {}
parsed_url = Url(scheme=None,
auth=None, host=None, port=None, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices', query=None, fragment=None)
destination_scheme = None, conn = None, release_this_conn = True
http_tunnel_required = False, err = None, clean_exit = False

def urlopen(  # type: ignore[override]
    self,
    method: str,
    url: str,
    body: _TYPE_BODY | None = None,
    headers: typing.Mapping[str, str] | None = None,
    retries: Retry | bool | int | None = None,
    redirect: bool = True,
    assert_same_host: bool = True,
    timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
    pool_timeout: int | None = None,
    release_conn: bool | None = None,
    chunked: bool = False,
    body_pos: _TYPE_BODY_POSITION | None = None,
    preload_content: bool = True,
    decode_content: bool = True,
    **response_kw: typing.Any,
) -> BaseHTTPResponse:
    """
    Get a connection from the pool and perform an HTTP request. This is the
    lowest level call for making a request, so you'll need to specify all
    the raw details.

    .. note::

        More commonly, it's appropriate to use a convenience method
        such as :meth:`request`.

    .. note::

        `release_conn` will only behave as expected if
        `preload_content=False` because we want to make
        `preload_content=False` the default behaviour someday soon without
        breaking backwards compatibility.

    :param method:
        HTTP request method (such as GET, POST, PUT, etc.)
    :param url:
        The URL to perform the request on.

    :param body:
        Data to send in the request body, either :class:`str`, :class:`bytes`,
        an iterable of :class:`str`/:class:`bytes`, or a file-like object.

    :param headers:
        Dictionary of custom headers to send, such as User-Agent,
        If-None-Match, etc. If None, pool headers are used. If provided,
        these headers completely replace any pool-specific headers.

    :param retries:
        Configure the number of retries to allow before raising a
        :class:`~urllib3.exceptions.MaxRetryError` exception.

        If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a
        :class:`~urllib3.util.retry.Retry` object for fine-grained control
        over different types of retries.
        Pass an integer number to retry connection errors that many times,
        but no other types of errors. Pass zero to never retry.

        If ``False``, then retries are disabled and any exception is raised
        immediately. Also, instead of raising a MaxRetryError on redirects,
        the redirect response will be returned.

    :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.

    :param redirect:
        If True, automatically handle redirects (status codes 301, 302,
        303, 307, 308). Each redirect counts as a retry. Disabling retries
        will disable redirect, too.

    :param assert_same_host:
        If ``True``, will make sure that the host of the pool requests is
        consistent else will raise HostChangedError.
        When ``False``, you can
        use the pool on an HTTP proxy and request foreign hosts.

    :param timeout:
        If specified, overrides the default timeout for this one
        request. It may be a float (in seconds) or an instance of
        :class:`urllib3.util.Timeout`.

    :param pool_timeout:
        If set and the pool is set to block=True, then this method will
        block for ``pool_timeout`` seconds and raise EmptyPoolError if no
        connection is available within the time period.

    :param bool preload_content:
        If True, the response's body will be preloaded into memory.

    :param bool decode_content:
        If True, will attempt to decode the body based on the
        'content-encoding' header.

    :param release_conn:
        If False, then the urlopen call will not release the connection
        back into the pool once a response is received (but will release if
        you read the entire contents of the response such as when
        `preload_content=True`). This is useful if you're not preloading
        the response's content immediately. You will need to call
        ``r.release_conn()`` on the response ``r`` to return the connection
        back into the pool. If None, it takes the value of ``preload_content``
        which defaults to ``True``.

    :param bool chunked:
        If True, urllib3 will send the body using chunked transfer
        encoding. Otherwise, urllib3 will send the body using the standard
        content-length form. Defaults to False.
    :param int body_pos:
        Position to seek to in file-like body in the event of a retry or
        redirect. Typically this won't need to be set because urllib3 will
        auto-populate the value when needed.
    """
    parsed_url = parse_url(url)
    destination_scheme = parsed_url.scheme

    if headers is None:
        headers = self.headers

    if not isinstance(retries, Retry):
        retries = Retry.from_int(retries, redirect=redirect, default=self.retries)

    if release_conn is None:
        release_conn = preload_content

    # Check host
    if assert_same_host and not self.is_same_host(url):
        raise HostChangedError(self, url, retries)

    # Ensure that the URL we're connecting to is properly encoded
    if url.startswith("/"):
        url = to_str(_encode_target(url))
    else:
        url = to_str(parsed_url.url)

    conn = None

    # Track whether `conn` needs to be released before
    # returning/raising/recursing. Update this variable if necessary, and
    # leave `release_conn` constant throughout the function. That way, if
    # the function recurses, the original value of `release_conn` will be
    # passed down into the recursive call, and its value will be respected.
    #
    # See issue #651 [1] for details.
    #
    # [1]
    release_this_conn = release_conn

    http_tunnel_required = connection_requires_http_tunnel(
        self.proxy, self.proxy_config, destination_scheme
    )

    # Merge the proxy headers. Only done when not using HTTP CONNECT. We
    # have to copy the headers dict so we can safely change it without those
    # changes being reflected in anyone else's copy.
    if not http_tunnel_required:
        headers = headers.copy()  # type: ignore[attr-defined]
        headers.update(self.proxy_headers)  # type: ignore[union-attr]

    # Must keep the exception bound to a separate variable or else Python 3
    # complains about UnboundLocalError.
    err = None

    # Keep track of whether we cleanly exited the except block. This
    # ensures we do proper cleanup in finally.
    clean_exit = False

    # Rewind body position, if needed. Record current position
    # for future rewinds in the event of a redirect/retry.
    body_pos = set_file_position(body, body_pos)

    try:
        # Request a connection from the queue.
        timeout_obj = self._get_timeout(timeout)
        conn = self._get_conn(timeout=pool_timeout)

        conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]

        # Is this a closed/new connection that requires CONNECT tunnelling?
        if self.proxy is not None and http_tunnel_required and conn.is_closed:
            try:
                self._prepare_proxy(conn)
            except (BaseSSLError, OSError, SocketTimeout) as e:
                self._raise_timeout(
                    err=e, url=self.proxy.url, timeout_value=conn.timeout
                )
                raise

        # If we're going to release the connection in ``finally:``, then
        # the response doesn't need to know about the connection. Otherwise
        # it will also try to release it and we'll have a double-release
        # mess.
        response_conn = conn if not release_conn else None

        # Make the request on the HTTPConnection object
        response = self._make_request(
            conn,
            method,
            url,
            timeout=timeout_obj,
            body=body,
            headers=headers,
            chunked=chunked,
            retries=retries,
            response_conn=response_conn,
            preload_content=preload_content,
            decode_content=decode_content,
            **response_kw,
        )

        # Everything went great!
        clean_exit = True

    except EmptyPoolError:
        # Didn't get a connection from the pool, no need to clean up
        clean_exit = True
        release_this_conn = False
        raise

    except (
        TimeoutError,
        HTTPException,
        OSError,
        ProtocolError,
        BaseSSLError,
        SSLError,
        CertificateError,
        ProxyError,
    ) as e:
        # Discard the connection for these exceptions.
        # It will be
        # replaced during the next _get_conn() call.
        clean_exit = False
        new_e: Exception = e
        if isinstance(e, (BaseSSLError, CertificateError)):
            new_e = SSLError(e)
        if isinstance(
            new_e,
            (
                OSError,
                NewConnectionError,
                TimeoutError,
                SSLError,
                HTTPException,
            ),
        ) and (conn and conn.proxy and not conn.has_connected_to_proxy):
            new_e = _wrap_proxy_error(new_e, conn.proxy.scheme)
        elif isinstance(new_e, (OSError, HTTPException)):
            new_e = ProtocolError("Connection aborted.", new_e)

>       retries = retries.increment(
            method, url, error=new_e, _pool=self, _stacktrace=sys.exc_info()[2]
        )

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:841:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = Retry(total=0, connect=None, read=None, redirect=None, status=None)
method = 'POST'
url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
response = None
error = NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.c...a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")
_pool =
_stacktrace =

def increment(
    self,
    method: str | None = None,
    url: str | None = None,
    response: BaseHTTPResponse | None = None,
    error: Exception | None =
None,
    _pool: ConnectionPool | None = None,
    _stacktrace: TracebackType | None = None,
) -> Self:
    """Return a new Retry object with incremented retry counters.

    :param response: A response object, or None, if the server did not
        return a response.
    :type response: :class:`~urllib3.response.BaseHTTPResponse`
    :param Exception error: An error encountered during the request, or
        None if the response was received successfully.

    :return: A new ``Retry`` object.
    """
    if self.total is False and error:
        # Disabled, indicate to re-raise the error.
        raise reraise(type(error), error, _stacktrace)

    total = self.total
    if total is not None:
        total -= 1

    connect = self.connect
    read = self.read
    redirect = self.redirect
    status_count = self.status
    other = self.other
    cause = "unknown"
    status = None
    redirect_location = None

    if error and self._is_connection_error(error):
        # Connect retry?
        if connect is False:
            raise reraise(type(error), error, _stacktrace)
        elif connect is not None:
            connect -= 1

    elif error and self._is_read_error(error):
        # Read retry?
        if read is False or method is None or not self._is_method_retryable(method):
            raise reraise(type(error), error, _stacktrace)
        elif read is not None:
            read -= 1

    elif error:
        # Other retry?
        if other is not None:
            other -= 1

    elif response and response.get_redirect_location():
        # Redirect retry?
        if redirect is not None:
            redirect -= 1
        cause = "too many redirects"
        response_redirect_location = response.get_redirect_location()
        if response_redirect_location:
            redirect_location = response_redirect_location
        status = response.status

    else:
        # Incrementing because of a server error like a 500 in
        # status_forcelist and the given method is in the allowed_methods
        cause = ResponseError.GENERIC_ERROR
        if response and response.status:
            if status_count is not None:
                status_count -= 1
            cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
            status = response.status

    history = self.history + (
        RequestHistory(method, url, error, status, redirect_location),
    )

    new_retry = self.new(
        total=total,
        connect=connect,
        read=read,
        redirect=redirect,
        status=status_count,
        other=other,
        history=history,
    )

    if new_retry.is_exhausted():
        reason = error or ResponseError(cause)
>       raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
E       urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Max retries exceeded with url: /apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices (Caused by
NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)"))

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError
------------------------------ Captured log call -------------------------------
INFO     e2e.predictor.test_scheduler_name:test_scheduler_name.py:52 Creating InferenceService isvc-sklearn-scheduler
WARNING  urllib3.connectionpool:connectionpool.py:868 Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")': /apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices
WARNING  urllib3.connectionpool:connectionpool.py:868 Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")': /apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices
WARNING  urllib3.connectionpool:connectionpool.py:868 Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com',
port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")': /apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices
_____________________________ test_sklearn_kserve ______________________________
[gw0] linux -- Python 3.11.13 /workspace/source/python/kserve/.venv/bin/python

self =

def _new_conn(self) -> socket.socket:
    """Establish a socket connection and set nodelay settings on it.

    :return: New socket connection.
    """
    try:
>       sock = connection.create_connection(
            (self._dns_host, self.port),
            self.timeout,
            source_address=self.source_address,
            socket_options=self.socket_options,
        )

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connection.py:204:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

address = ('a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', 6443)
timeout = None, source_address = None, socket_options = [(6, 1, 1)]

def create_connection(
    address: tuple[str, int],
    timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
    source_address: tuple[str, int] | None = None,
    socket_options: _TYPE_SOCKET_OPTIONS | None = None,
) -> socket.socket:
    """Connect to *address* and return the socket object.

    Convenience function. Connect to *address* (a 2-tuple ``(host,
    port)``) and return the socket object.
    Passing the optional
    *timeout* parameter will set the timeout on the socket instance
    before attempting to connect. If no *timeout* is supplied, the
    global default timeout setting returned by :func:`socket.getdefaulttimeout`
    is used. If *source_address* is set it must be a tuple of (host, port)
    for the socket to bind as a source address before making the connection.
    An host of '' or port 0 tells the OS to use the default.
    """

    host, port = address
    if host.startswith("["):
        host = host.strip("[]")
    err = None

    # Using the value from allowed_gai_family() in the context of getaddrinfo lets
    # us select whether to work with IPv4 DNS records, IPv6 records, or both.
    # The original create_connection function always returns all records.
    family = allowed_gai_family()

    try:
        host.encode("idna")
    except UnicodeError:
        raise LocationParseError(f"'{host}', label empty or too long") from None

>   for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/util/connection.py:60:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

host = 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com'
port = 6443, family =
type = , proto = 0, flags = 0

def getaddrinfo(host, port, family=0, type=0, proto=0, flags=0):
    """Resolve host and port into list of address info entries.
[e2e-predictor]
[e2e-predictor]     Translate the host/port argument into a sequence of 5-tuples that contain
[e2e-predictor]     all the necessary arguments for creating a socket connected to that service.
[e2e-predictor]     host is a domain name, a string representation of an IPv4/v6 address or
[e2e-predictor]     None. port is a string service name such as 'http', a numeric port number or
[e2e-predictor]     None. By passing None as the value of host and port, you can pass NULL to
[e2e-predictor]     the underlying C API.
[e2e-predictor]
[e2e-predictor]     The family, type and proto arguments can be optionally specified in order to
[e2e-predictor]     narrow the list of addresses returned. Passing zero as a value for each of
[e2e-predictor]     these arguments selects the full range of results.
[e2e-predictor]     """
[e2e-predictor]     # We override this function since we want to translate the numeric family
[e2e-predictor]     # and socket type values to enum constants.
[e2e-predictor]     addrlist = []
[e2e-predictor] >   for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
[e2e-predictor] E   socket.gaierror: [Errno -2] Name or service not known
[e2e-predictor]
[e2e-predictor] /usr/lib64/python3.11/socket.py:974: gaierror
[e2e-predictor]
[e2e-predictor] The above exception was the direct cause of the following exception:
[e2e-predictor]
[e2e-predictor] self =
[e2e-predictor] method = 'POST'
[e2e-predictor] url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
[e2e-predictor] body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io..., "requests": {"cpu": "50m", "memory": "128Mi"}}, "storageUri": "gs://kfserving-examples/models/sklearn/1.0/model"}}}}'
[e2e-predictor] headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'})
[e2e-predictor] retries = Retry(total=0, connect=None, read=None, redirect=None, status=None)
[e2e-predictor] redirect = False, assert_same_host = False, timeout = None, pool_timeout = None
[e2e-predictor] release_conn = True, chunked = False, body_pos = None, preload_content = True
[e2e-predictor] decode_content = True, response_kw = {}
[e2e-predictor] parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices', query=None, fragment=None)
[e2e-predictor] destination_scheme = None, conn = None, release_this_conn = True
[e2e-predictor] http_tunnel_required = False, err = None, clean_exit = False
[e2e-predictor]
[e2e-predictor]     def urlopen(  # type: ignore[override]
[e2e-predictor]         self,
[e2e-predictor]         method: str,
[e2e-predictor]         url: str,
[e2e-predictor]         body: _TYPE_BODY | None = None,
[e2e-predictor]         headers: typing.Mapping[str, str] | None = None,
[e2e-predictor]         retries: Retry | bool | int | None = None,
[e2e-predictor]         redirect: bool = True,
[e2e-predictor]         assert_same_host: bool = True,
[e2e-predictor]         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
[e2e-predictor]         pool_timeout: int | None = None,
[e2e-predictor]         release_conn: bool | None = None,
[e2e-predictor]         chunked: bool = False,
[e2e-predictor]         body_pos: _TYPE_BODY_POSITION | None = None,
[e2e-predictor]         preload_content: bool = True,
[e2e-predictor]         decode_content: bool = True,
[e2e-predictor]         **response_kw: typing.Any,
[e2e-predictor]     ) -> BaseHTTPResponse:
[e2e-predictor]         """
[e2e-predictor]         Get a connection from the pool and perform an HTTP request. This is the
[e2e-predictor]         lowest level call for making a request, so you'll need to specify all
[e2e-predictor]         the raw details.
[e2e-predictor]
[e2e-predictor]         .. note::
[e2e-predictor]
[e2e-predictor]             More commonly, it's appropriate to use a convenience method
[e2e-predictor]             such as :meth:`request`.
[e2e-predictor]
[e2e-predictor]         .. note::
[e2e-predictor]
[e2e-predictor]             `release_conn` will only behave as expected if
[e2e-predictor]             `preload_content=False` because we want to make
[e2e-predictor]             `preload_content=False` the default behaviour someday soon without
[e2e-predictor]             breaking backwards compatibility.
[e2e-predictor]
[e2e-predictor]         :param method:
[e2e-predictor]             HTTP request method (such as GET, POST, PUT, etc.)
[e2e-predictor]
[e2e-predictor]         :param url:
[e2e-predictor]             The URL to perform the request on.
[e2e-predictor]
[e2e-predictor]         :param body:
[e2e-predictor]             Data to send in the request body, either :class:`str`, :class:`bytes`,
[e2e-predictor]             an iterable of :class:`str`/:class:`bytes`, or a file-like object.
[e2e-predictor]
[e2e-predictor]         :param headers:
[e2e-predictor]             Dictionary of custom headers to send, such as User-Agent,
[e2e-predictor]             If-None-Match, etc. If None, pool headers are used. If provided,
[e2e-predictor]             these headers completely replace any pool-specific headers.
[e2e-predictor]
[e2e-predictor]         :param retries:
[e2e-predictor]             Configure the number of retries to allow before raising a
[e2e-predictor]             :class:`~urllib3.exceptions.MaxRetryError` exception.
[e2e-predictor]
[e2e-predictor]             If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a
[e2e-predictor]             :class:`~urllib3.util.retry.Retry` object for fine-grained control
[e2e-predictor]             over different types of retries.
[e2e-predictor]             Pass an integer number to retry connection errors that many times,
[e2e-predictor]             but no other types of errors. Pass zero to never retry.
[e2e-predictor]
[e2e-predictor]             If ``False``, then retries are disabled and any exception is raised
[e2e-predictor]             immediately. Also, instead of raising a MaxRetryError on redirects,
[e2e-predictor]             the redirect response will be returned.
[e2e-predictor]
[e2e-predictor]         :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
[e2e-predictor]
[e2e-predictor]         :param redirect:
[e2e-predictor]             If True, automatically handle redirects (status codes 301, 302,
[e2e-predictor]             303, 307, 308). Each redirect counts as a retry. Disabling retries
[e2e-predictor]             will disable redirect, too.
[e2e-predictor]
[e2e-predictor]         :param assert_same_host:
[e2e-predictor]             If ``True``, will make sure that the host of the pool requests is
[e2e-predictor]             consistent else will raise HostChangedError. When ``False``, you can
[e2e-predictor]             use the pool on an HTTP proxy and request foreign hosts.
[e2e-predictor]
[e2e-predictor]         :param timeout:
[e2e-predictor]             If specified, overrides the default timeout for this one
[e2e-predictor]             request. It may be a float (in seconds) or an instance of
[e2e-predictor]             :class:`urllib3.util.Timeout`.
[e2e-predictor]
[e2e-predictor]         :param pool_timeout:
[e2e-predictor]             If set and the pool is set to block=True, then this method will
[e2e-predictor]             block for ``pool_timeout`` seconds and raise EmptyPoolError if no
[e2e-predictor]             connection is available within the time period.
[e2e-predictor]
[e2e-predictor]         :param bool preload_content:
[e2e-predictor]             If True, the response's body will be preloaded into memory.
[e2e-predictor]
[e2e-predictor]         :param bool decode_content:
[e2e-predictor]             If True, will attempt to decode the body based on the
[e2e-predictor]             'content-encoding' header.
[e2e-predictor]
[e2e-predictor]         :param release_conn:
[e2e-predictor]             If False, then the urlopen call will not release the connection
[e2e-predictor]             back into the pool once a response is received (but will release if
[e2e-predictor]             you read the entire contents of the response such as when
[e2e-predictor]             `preload_content=True`). This is useful if you're not preloading
[e2e-predictor]             the response's content immediately. You will need to call
[e2e-predictor]             ``r.release_conn()`` on the response ``r`` to return the connection
[e2e-predictor]             back into the pool. If None, it takes the value of ``preload_content``
[e2e-predictor]             which defaults to ``True``.
[e2e-predictor]
[e2e-predictor]         :param bool chunked:
[e2e-predictor]             If True, urllib3 will send the body using chunked transfer
[e2e-predictor]             encoding. Otherwise, urllib3 will send the body using the standard
[e2e-predictor]             content-length form. Defaults to False.
[e2e-predictor]
[e2e-predictor]         :param int body_pos:
[e2e-predictor]             Position to seek to in file-like body in the event of a retry or
[e2e-predictor]             redirect. Typically this won't need to be set because urllib3 will
[e2e-predictor]             auto-populate the value when needed.
[e2e-predictor]         """
[e2e-predictor]         parsed_url = parse_url(url)
[e2e-predictor]         destination_scheme = parsed_url.scheme
[e2e-predictor]
[e2e-predictor]         if headers is None:
[e2e-predictor]             headers = self.headers
[e2e-predictor]
[e2e-predictor]         if not isinstance(retries, Retry):
[e2e-predictor]             retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
[e2e-predictor]
[e2e-predictor]         if release_conn is None:
[e2e-predictor]             release_conn = preload_content
[e2e-predictor]
[e2e-predictor]         # Check host
[e2e-predictor]         if assert_same_host and not self.is_same_host(url):
[e2e-predictor]             raise HostChangedError(self, url, retries)
[e2e-predictor]
[e2e-predictor]         # Ensure that the URL we're connecting to is properly encoded
[e2e-predictor]         if url.startswith("/"):
[e2e-predictor]             url = to_str(_encode_target(url))
[e2e-predictor]         else:
[e2e-predictor]             url = to_str(parsed_url.url)
[e2e-predictor]
[e2e-predictor]         conn = None
[e2e-predictor]
[e2e-predictor]         # Track whether `conn` needs to be released before
[e2e-predictor]         # returning/raising/recursing. Update this variable if necessary, and
[e2e-predictor]         # leave `release_conn` constant throughout the function. That way, if
[e2e-predictor]         # the function recurses, the original value of `release_conn` will be
[e2e-predictor]         # passed down into the recursive call, and its value will be respected.
[e2e-predictor]         #
[e2e-predictor]         # See issue #651 [1] for details.
[e2e-predictor]         #
[e2e-predictor]         # [1]
[e2e-predictor]         release_this_conn = release_conn
[e2e-predictor]
[e2e-predictor]         http_tunnel_required = connection_requires_http_tunnel(
[e2e-predictor]             self.proxy, self.proxy_config, destination_scheme
[e2e-predictor]         )
[e2e-predictor]
[e2e-predictor]         # Merge the proxy headers. Only done when not using HTTP CONNECT. We
[e2e-predictor]         # have to copy the headers dict so we can safely change it without those
[e2e-predictor]         # changes being reflected in anyone else's copy.
[e2e-predictor]         if not http_tunnel_required:
[e2e-predictor]             headers = headers.copy()  # type: ignore[attr-defined]
[e2e-predictor]             headers.update(self.proxy_headers)  # type: ignore[union-attr]
[e2e-predictor]
[e2e-predictor]         # Must keep the exception bound to a separate variable or else Python 3
[e2e-predictor]         # complains about UnboundLocalError.
[e2e-predictor]         err = None
[e2e-predictor]
[e2e-predictor]         # Keep track of whether we cleanly exited the except block. This
[e2e-predictor]         # ensures we do proper cleanup in finally.
[e2e-predictor]         clean_exit = False
[e2e-predictor]
[e2e-predictor]         # Rewind body position, if needed. Record current position
[e2e-predictor]         # for future rewinds in the event of a redirect/retry.
[e2e-predictor]         body_pos = set_file_position(body, body_pos)
[e2e-predictor]
[e2e-predictor]         try:
[e2e-predictor]             # Request a connection from the queue.
[e2e-predictor]             timeout_obj = self._get_timeout(timeout)
[e2e-predictor]             conn = self._get_conn(timeout=pool_timeout)
[e2e-predictor]
[e2e-predictor]             conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]
[e2e-predictor]
[e2e-predictor]             # Is this a closed/new connection that requires CONNECT tunnelling?
[e2e-predictor]             if self.proxy is not None and http_tunnel_required and conn.is_closed:
[e2e-predictor]                 try:
[e2e-predictor]                     self._prepare_proxy(conn)
[e2e-predictor]                 except (BaseSSLError, OSError, SocketTimeout) as e:
[e2e-predictor]                     self._raise_timeout(
[e2e-predictor]                         err=e, url=self.proxy.url, timeout_value=conn.timeout
[e2e-predictor]                     )
[e2e-predictor]                     raise
[e2e-predictor]
[e2e-predictor]             # If we're going to release the connection in ``finally:``, then
[e2e-predictor]             # the response doesn't need to know about the connection. Otherwise
[e2e-predictor]             # it will also try to release it and we'll have a double-release
[e2e-predictor]             # mess.
[e2e-predictor]             response_conn = conn if not release_conn else None
[e2e-predictor]
[e2e-predictor]             # Make the request on the HTTPConnection object
[e2e-predictor] >           response = self._make_request(
[e2e-predictor]                 conn,
[e2e-predictor]                 method,
[e2e-predictor]                 url,
[e2e-predictor]                 timeout=timeout_obj,
[e2e-predictor]                 body=body,
[e2e-predictor]                 headers=headers,
[e2e-predictor]                 chunked=chunked,
[e2e-predictor]                 retries=retries,
[e2e-predictor]                 response_conn=response_conn,
[e2e-predictor]                 preload_content=preload_content,
[e2e-predictor]                 decode_content=decode_content,
[e2e-predictor]                 **response_kw,
[e2e-predictor]             )
[e2e-predictor]
[e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:787:
[e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
[e2e-predictor]
[e2e-predictor] self =
[e2e-predictor] conn =
[e2e-predictor] method = 'POST'
[e2e-predictor] url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
[e2e-predictor] body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io..., "requests": {"cpu": "50m", "memory": "128Mi"}}, "storageUri": "gs://kfserving-examples/models/sklearn/1.0/model"}}}}'
[e2e-predictor] headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'})
[e2e-predictor] retries = Retry(total=0, connect=None, read=None, redirect=None, status=None)
[e2e-predictor] timeout = Timeout(connect=None, read=None, total=None), chunked = False
[e2e-predictor] response_conn = None, preload_content = True, decode_content = True
[e2e-predictor] enforce_content_length = True
[e2e-predictor]
[e2e-predictor]     def _make_request(
[e2e-predictor]         self,
[e2e-predictor]         conn: BaseHTTPConnection,
[e2e-predictor]         method: str,
[e2e-predictor]         url: str,
[e2e-predictor]         body: _TYPE_BODY | None = None,
[e2e-predictor]         headers: typing.Mapping[str, str] | None = None,
[e2e-predictor]         retries: Retry | None = None,
[e2e-predictor]         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
[e2e-predictor]         chunked: bool = False,
[e2e-predictor]         response_conn: BaseHTTPConnection | None = None,
[e2e-predictor]         preload_content: bool = True,
[e2e-predictor]         decode_content: bool = True,
[e2e-predictor]         enforce_content_length: bool = True,
[e2e-predictor]     ) -> BaseHTTPResponse:
[e2e-predictor]         """
[e2e-predictor]         Perform a request on a given urllib connection object taken from our
[e2e-predictor]         pool.
[e2e-predictor]
[e2e-predictor]         :param conn:
[e2e-predictor]             a connection from one of our connection pools
[e2e-predictor]
[e2e-predictor]         :param method:
[e2e-predictor]             HTTP request method (such as GET, POST, PUT, etc.)
[e2e-predictor]
[e2e-predictor]         :param url:
[e2e-predictor]             The URL to perform the request on.
[e2e-predictor]
[e2e-predictor]         :param body:
[e2e-predictor]             Data to send in the request body, either :class:`str`, :class:`bytes`,
[e2e-predictor]             an iterable of :class:`str`/:class:`bytes`, or a file-like object.
[e2e-predictor]
[e2e-predictor]         :param headers:
[e2e-predictor]             Dictionary of custom headers to send, such as User-Agent,
[e2e-predictor]             If-None-Match, etc. If None, pool headers are used. If provided,
[e2e-predictor]             these headers completely replace any pool-specific headers.
[e2e-predictor]
[e2e-predictor]         :param retries:
[e2e-predictor]             Configure the number of retries to allow before raising a
[e2e-predictor]             :class:`~urllib3.exceptions.MaxRetryError` exception.
[e2e-predictor]
[e2e-predictor]             Pass ``None`` to retry until you receive a response. Pass a
[e2e-predictor]             :class:`~urllib3.util.retry.Retry` object for fine-grained control
[e2e-predictor]             over different types of retries.
[e2e-predictor]             Pass an integer number to retry connection errors that many times,
[e2e-predictor]             but no other types of errors. Pass zero to never retry.
[e2e-predictor]
[e2e-predictor]             If ``False``, then retries are disabled and any exception is raised
[e2e-predictor]             immediately. Also, instead of raising a MaxRetryError on redirects,
[e2e-predictor]             the redirect response will be returned.
[e2e-predictor]
[e2e-predictor]         :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
[e2e-predictor]
[e2e-predictor]         :param timeout:
[e2e-predictor]             If specified, overrides the default timeout for this one
[e2e-predictor]             request. It may be a float (in seconds) or an instance of
[e2e-predictor]             :class:`urllib3.util.Timeout`.
[e2e-predictor]
[e2e-predictor]         :param chunked:
[e2e-predictor]             If True, urllib3 will send the body using chunked transfer
[e2e-predictor]             encoding. Otherwise, urllib3 will send the body using the standard
[e2e-predictor]             content-length form. Defaults to False.
[e2e-predictor]
[e2e-predictor]         :param response_conn:
[e2e-predictor]             Set this to ``None`` if you will handle releasing the connection or
[e2e-predictor]             set the connection to have the response release it.
[e2e-predictor]
[e2e-predictor]         :param preload_content:
[e2e-predictor]             If True, the response's body will be preloaded during construction.
[e2e-predictor]
[e2e-predictor]         :param decode_content:
[e2e-predictor]             If True, will attempt to decode the body based on the
[e2e-predictor]             'content-encoding' header.
[e2e-predictor]
[e2e-predictor]         :param enforce_content_length:
[e2e-predictor]             Enforce content length checking. Body returned by server must match
[e2e-predictor]             value of Content-Length header, if present. Otherwise, raise error.
[e2e-predictor]         """
[e2e-predictor]         self.num_requests += 1
[e2e-predictor]
[e2e-predictor]         timeout_obj = self._get_timeout(timeout)
[e2e-predictor]         timeout_obj.start_connect()
[e2e-predictor]         conn.timeout = Timeout.resolve_default_timeout(timeout_obj.connect_timeout)
[e2e-predictor]
[e2e-predictor]         try:
[e2e-predictor]             # Trigger any extra validation we need to do.
[e2e-predictor]             try:
[e2e-predictor]                 self._validate_conn(conn)
[e2e-predictor]             except (SocketTimeout, BaseSSLError) as e:
[e2e-predictor]                 self._raise_timeout(err=e, url=url, timeout_value=conn.timeout)
[e2e-predictor]                 raise
[e2e-predictor]
[e2e-predictor]             # _validate_conn() starts the connection to an HTTPS proxy
[e2e-predictor]             # so we need to wrap errors with 'ProxyError' here too.
[e2e-predictor]         except (
[e2e-predictor]             OSError,
[e2e-predictor]             NewConnectionError,
[e2e-predictor]             TimeoutError,
[e2e-predictor]             BaseSSLError,
[e2e-predictor]             CertificateError,
[e2e-predictor]             SSLError,
[e2e-predictor]         ) as e:
[e2e-predictor]             new_e: Exception = e
[e2e-predictor]             if isinstance(e, (BaseSSLError, CertificateError)):
[e2e-predictor]                 new_e = SSLError(e)
[e2e-predictor]             # If the connection didn't successfully connect to it's proxy
[e2e-predictor]             # then there
[e2e-predictor]             if isinstance(
[e2e-predictor]                 new_e, (OSError, NewConnectionError, TimeoutError, SSLError)
[e2e-predictor]             ) and (conn and conn.proxy and not conn.has_connected_to_proxy):
[e2e-predictor]                 new_e = _wrap_proxy_error(new_e, conn.proxy.scheme)
[e2e-predictor] >           raise new_e
[e2e-predictor]
[e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:488:
[e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
[e2e-predictor]
[e2e-predictor] self =
[e2e-predictor] conn =
[e2e-predictor] method = 'POST'
[e2e-predictor] url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
[e2e-predictor] body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io..., "requests": {"cpu": "50m", "memory": "128Mi"}}, "storageUri": "gs://kfserving-examples/models/sklearn/1.0/model"}}}}'
[e2e-predictor] headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'})
[e2e-predictor] retries = Retry(total=0, connect=None, read=None, redirect=None, status=None)
[e2e-predictor] timeout = Timeout(connect=None, read=None, total=None), chunked = False
[e2e-predictor] response_conn = None, preload_content = True, decode_content = True
[e2e-predictor] enforce_content_length = True
[e2e-predictor]
[e2e-predictor]     def _make_request(
[e2e-predictor]         self,
[e2e-predictor]         conn: BaseHTTPConnection,
[e2e-predictor]         method: str,
[e2e-predictor]         url: str,
[e2e-predictor]         body: _TYPE_BODY | None = None,
[e2e-predictor]         headers: typing.Mapping[str, str] | None = None,
[e2e-predictor]         retries: Retry | None = None,
[e2e-predictor]         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
[e2e-predictor]         chunked: bool = False,
[e2e-predictor]         response_conn: BaseHTTPConnection | None = None,
[e2e-predictor]         preload_content: bool = True,
[e2e-predictor]         decode_content: bool = True,
[e2e-predictor]         enforce_content_length: bool = True,
[e2e-predictor]     ) -> BaseHTTPResponse:
[e2e-predictor]         """
[e2e-predictor]         Perform a request on a given urllib connection object taken from our
[e2e-predictor]         pool.
[e2e-predictor]
[e2e-predictor]         :param conn:
[e2e-predictor]             a connection from one of our connection pools
[e2e-predictor]
[e2e-predictor]         :param method:
[e2e-predictor]             HTTP request method (such as GET, POST, PUT, etc.)
[e2e-predictor]
[e2e-predictor]         :param url:
[e2e-predictor]             The URL to perform the request on.
[e2e-predictor]
[e2e-predictor]         :param body:
[e2e-predictor]             Data to send in the request body, either :class:`str`, :class:`bytes`,
[e2e-predictor]             an iterable of :class:`str`/:class:`bytes`, or a file-like object.
[e2e-predictor]
[e2e-predictor]         :param headers:
[e2e-predictor]             Dictionary of custom headers to send, such as User-Agent,
[e2e-predictor]             If-None-Match, etc. If None, pool headers are used. If provided,
[e2e-predictor]             these headers completely replace any pool-specific headers.
[e2e-predictor]
[e2e-predictor]         :param retries:
[e2e-predictor]             Configure the number of retries to allow before raising a
[e2e-predictor]             :class:`~urllib3.exceptions.MaxRetryError` exception.
[e2e-predictor]
[e2e-predictor]             Pass ``None`` to retry until you receive a response. Pass a
[e2e-predictor]             :class:`~urllib3.util.retry.Retry` object for fine-grained control
[e2e-predictor]             over different types of retries.
[e2e-predictor]             Pass an integer number to retry connection errors that many times,
[e2e-predictor]             but no other types of errors. Pass zero to never retry.
[e2e-predictor]
[e2e-predictor]             If ``False``, then retries are disabled and any exception is raised
[e2e-predictor]             immediately. Also, instead of raising a MaxRetryError on redirects,
[e2e-predictor]             the redirect response will be returned.
[e2e-predictor]
[e2e-predictor]         :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
[e2e-predictor]
[e2e-predictor]         :param timeout:
[e2e-predictor]             If specified, overrides the default timeout for this one
[e2e-predictor]             request. It may be a float (in seconds) or an instance of
[e2e-predictor]             :class:`urllib3.util.Timeout`.
[e2e-predictor]
[e2e-predictor]         :param chunked:
[e2e-predictor]             If True, urllib3 will send the body using chunked transfer
[e2e-predictor]             encoding. Otherwise, urllib3 will send the body using the standard
[e2e-predictor]             content-length form. Defaults to False.
[e2e-predictor]
[e2e-predictor]         :param response_conn:
[e2e-predictor]             Set this to ``None`` if you will handle releasing the connection or
[e2e-predictor]             set the connection to have the response release it.
[e2e-predictor]
[e2e-predictor]         :param preload_content:
[e2e-predictor]             If True, the response's body will be preloaded during construction.
[e2e-predictor]
[e2e-predictor]         :param decode_content:
[e2e-predictor]             If True, will attempt to decode the body based on the
[e2e-predictor]             'content-encoding' header.
[e2e-predictor]
[e2e-predictor]         :param enforce_content_length:
[e2e-predictor]             Enforce content length checking. Body returned by server must match
[e2e-predictor]             value of Content-Length header, if present. Otherwise, raise error.
[e2e-predictor]         """
[e2e-predictor]         self.num_requests += 1
[e2e-predictor]
[e2e-predictor]         timeout_obj = self._get_timeout(timeout)
[e2e-predictor]         timeout_obj.start_connect()
[e2e-predictor]         conn.timeout = Timeout.resolve_default_timeout(timeout_obj.connect_timeout)
[e2e-predictor]
[e2e-predictor]         try:
[e2e-predictor]             # Trigger any extra validation we need to do.
[e2e-predictor]             try:
[e2e-predictor] >               self._validate_conn(conn)
[e2e-predictor]
[e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:464:
[e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
[e2e-predictor]
[e2e-predictor] self =
[e2e-predictor] conn =
[e2e-predictor]
[e2e-predictor]     def _validate_conn(self, conn: BaseHTTPConnection) -> None:
[e2e-predictor]         """
[e2e-predictor]         Called right before a request is made, after the socket is created.
[e2e-predictor]         """
[e2e-predictor]         super()._validate_conn(conn)
[e2e-predictor]
[e2e-predictor]         # Force connect early to allow us to validate the connection.
[e2e-predictor]         if conn.is_closed:
[e2e-predictor] >           conn.connect()
[e2e-predictor]
[e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:1093:
[e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
[e2e-predictor]
[e2e-predictor] self =
[e2e-predictor]
[e2e-predictor]     def connect(self) -> None:
[e2e-predictor]         # Today we don't need to be doing this step before the /actual/ socket
[e2e-predictor]         # connection, however in the future we'll need to decide whether to
[e2e-predictor]         # create a new socket or re-use an existing "shared" socket as a part
[e2e-predictor]         # of the HTTP/2 handshake dance.
[e2e-predictor]         if self._tunnel_host is not None and self._tunnel_port is not None:
[e2e-predictor]             probe_http2_host = self._tunnel_host
[e2e-predictor]             probe_http2_port = self._tunnel_port
[e2e-predictor]         else:
[e2e-predictor]             probe_http2_host = self.host
[e2e-predictor]             probe_http2_port = self.port
[e2e-predictor]
[e2e-predictor]         # Check if the target origin supports HTTP/2.
[e2e-predictor]         # If the value comes back as 'None' it means that the current thread
[e2e-predictor]         # is probing for HTTP/2 support. Otherwise, we're waiting for another
[e2e-predictor]         # probe to complete, or we get a value right away.
[e2e-predictor]         target_supports_http2: bool | None
[e2e-predictor]         if "h2" in ssl_.ALPN_PROTOCOLS:
[e2e-predictor]             target_supports_http2 = http2_probe.acquire_and_get(
[e2e-predictor]                 host=probe_http2_host, port=probe_http2_port
[e2e-predictor]             )
[e2e-predictor]         else:
[e2e-predictor]             # If HTTP/2 isn't going to be offered it doesn't matter if
[e2e-predictor]             # the target supports HTTP/2. Don't want to make a probe.
[e2e-predictor]             target_supports_http2 = False
[e2e-predictor]
[e2e-predictor]         if self._connect_callback is not None:
[e2e-predictor]             self._connect_callback(
[e2e-predictor]                 "before connect",
[e2e-predictor]                 thread_id=threading.get_ident(),
[e2e-predictor]                 target_supports_http2=target_supports_http2,
[e2e-predictor]             )
[e2e-predictor]
[e2e-predictor]         try:
[e2e-predictor]             sock: socket.socket | ssl.SSLSocket
[e2e-predictor] >           self.sock = sock = self._new_conn()
[e2e-predictor]
[e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connection.py:759:
[e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
[e2e-predictor]
[e2e-predictor] self =
[e2e-predictor]
[e2e-predictor]     def _new_conn(self) -> socket.socket:
[e2e-predictor]         """Establish a socket connection and set nodelay settings on it.
[e2e-predictor]
[e2e-predictor]         :return: New socket connection.
[e2e-predictor]         """
[e2e-predictor]         try:
[e2e-predictor]             sock = connection.create_connection(
[e2e-predictor]                 (self._dns_host, self.port),
[e2e-predictor]                 self.timeout,
[e2e-predictor]                 source_address=self.source_address,
[e2e-predictor]                 socket_options=self.socket_options,
[e2e-predictor]             )
[e2e-predictor]         except socket.gaierror as e:
[e2e-predictor] >           raise NameResolutionError(self.host, self, e) from e
[e2e-predictor] E           urllib3.exceptions.NameResolutionError: HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)
[e2e-predictor]
[e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connection.py:211: NameResolutionError
[e2e-predictor]
[e2e-predictor] The above exception was the direct cause of the following exception:
[e2e-predictor]
[e2e-predictor] rest_v1_client =
[e2e-predictor]
[e2e-predictor] @pytest.mark.predictor
[e2e-predictor] @pytest.mark.asyncio(scope="session")
[e2e-predictor] async def test_sklearn_kserve(rest_v1_client):
[e2e-predictor]     service_name = "isvc-sklearn"
[e2e-predictor]     predictor = V1beta1PredictorSpec(
[e2e-predictor]         min_replicas=1,
[e2e-predictor]         sklearn=V1beta1SKLearnSpec(
[e2e-predictor]             storage_uri="gs://kfserving-examples/models/sklearn/1.0/model",
[e2e-predictor]             resources=V1ResourceRequirements(
[e2e-predictor]                 requests={"cpu": "50m", "memory": "128Mi"},
[e2e-predictor]                 limits={"cpu": "100m", "memory": "256Mi"},
[e2e-predictor]             ),
[e2e-predictor]         ),
[e2e-predictor]     )
[e2e-predictor]
[e2e-predictor]     isvc = V1beta1InferenceService(
[e2e-predictor]         api_version=constants.KSERVE_V1BETA1,
[e2e-predictor]         kind=constants.KSERVE_KIND_INFERENCESERVICE,
[e2e-predictor]         metadata=client.V1ObjectMeta(
[e2e-predictor]             name=service_name,
[e2e-predictor]             namespace=KSERVE_TEST_NAMESPACE,
[e2e-predictor]             labels={
            constants.KSERVE_LABEL_NETWORKING_VISIBILITY: constants.KSERVE_LABEL_NETWORKING_VISIBILITY_EXPOSED,
            },
        ),
        spec=V1beta1InferenceServiceSpec(predictor=predictor),
    )

    kserve_client = KServeClient(
        config_file=os.environ.get("KUBECONFIG", "~/.kube/config")
    )
>   kserve_client.create(isvc)

predictor/test_sklearn.py:74:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
inferenceservice = {'api_version': 'serving.kserve.io/v1beta1',
 'kind': 'InferenceService',
 'metadata': {'annotations': None,
 ... 'worker_spec': None,
 'xgboost': None},
 'transformer': None},
 'status': None}
namespace = 'kserve-ci-e2e-test', watch = False, timeout_seconds = 600

    def create(
        self, inferenceservice, namespace=None, watch=False, timeout_seconds=600
    ):  # pylint:disable=inconsistent-return-statements
        """
        Create the inference service
        :param inferenceservice: inference service object
        :param namespace: defaults to current or default namespace
        :param watch: True to watch the created service until timeout elapsed or status is ready
        :param timeout_seconds: timeout seconds for watch, default to 600s
        :return: created inference service
        """

        version = inferenceservice.api_version.split("/")[1]

        if namespace is None:
            namespace = utils.get_isvc_namespace(inferenceservice)

        try:
>           outputs = self.api_instance.create_namespaced_custom_object(
                constants.KSERVE_GROUP,
                version,
                namespace,
                constants.KSERVE_PLURAL_INFERENCESERVICE,
                inferenceservice,
            )

../../python/kserve/kserve/api/kserve_client.py:145:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
group = 'serving.kserve.io', version = 'v1beta1'
namespace = 'kserve-ci-e2e-test', plural = 'inferenceservices'
body = {'api_version': 'serving.kserve.io/v1beta1',
 'kind': 'InferenceService',
 'metadata': {'annotations': None,
 ... 'worker_spec': None,
 'xgboost': None},
 'transformer': None},
 'status': None}
kwargs = {'_return_http_data_only': True}

    def create_namespaced_custom_object(self, group, version, namespace, plural, body, **kwargs):  # noqa: E501
        """create_namespaced_custom_object  # noqa: E501

        Creates a namespace scoped Custom object  # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.create_namespaced_custom_object(group, version, namespace, plural, body, async_req=True)
        >>> result = thread.get()

        :param async_req bool: execute request asynchronously
        :param str group: The custom resource's group name (required)
        :param str version: The custom resource's version (required)
        :param str namespace: The custom resource's namespace (required)
        :param str plural: The custom resource's plural name. For TPRs this would be lowercase plural kind. (required)
        :param object body: The JSON schema of the Resource to create. (required)
        :param str pretty: If 'true', then the output is pretty printed.
        :param str dry_run: When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed
        :param str field_manager: fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint.
        :param str field_validation: fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. (optional)
        :param _preload_content: if False, the urllib3.HTTPResponse object will
                                 be returned without reading/decoding response
                                 data. Default is True.
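The optional parameters documented above (`pretty`, `dry_run`, `field_manager`, `field_validation`) do not go into the request body; the generated client turns them into query-string parameters on the request URL. A minimal stdlib sketch of that encoding, assuming the Kubernetes parameter-name mapping shown later in this traceback (`dry_run` → `dryRun`, etc.); the helper name is illustrative, not part of the client:

```python
from urllib.parse import urlencode

def build_query(pretty=None, dry_run=None, field_manager=None, field_validation=None):
    """Collect the optional create_namespaced_custom_object arguments into
    the query string that would be appended to the request URL."""
    params = []
    if pretty is not None:
        params.append(("pretty", pretty))
    if dry_run is not None:
        params.append(("dryRun", dry_run))
    if field_manager is not None:
        params.append(("fieldManager", field_manager))
    if field_validation is not None:
        params.append(("fieldValidation", field_validation))
    return urlencode(params)

print(build_query(dry_run="All", field_validation="Strict"))
# dryRun=All&fieldValidation=Strict
```

A server-side dry run with `fieldValidation=Strict` is a cheap way to check whether a manifest like the one in this test would be rejected for unknown fields before persisting anything.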
        :param _request_timeout: timeout setting for this request. If one
                                 number provided, it will be total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
        :return: object
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
>       return self.create_namespaced_custom_object_with_http_info(group, version, namespace, plural, body, **kwargs)  # noqa: E501

../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api/custom_objects_api.py:231:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
group = 'serving.kserve.io', version = 'v1beta1'
namespace = 'kserve-ci-e2e-test', plural = 'inferenceservices'
body = {'api_version': 'serving.kserve.io/v1beta1',
 'kind': 'InferenceService',
 'metadata': {'annotations': None,
 ... 'worker_spec': None,
 'xgboost': None},
 'transformer': None},
 'status': None}
kwargs = {'_return_http_data_only': True}
local_var_params = {'_return_http_data_only': True, 'all_params': ['group', 'version', 'namespace', 'plural', 'body', 'pretty', ...], 'au... 'worker_spec': None,
 'xgboost': None},
 'transformer': None},
 'status': None}, ...}
all_params = ['group', 'version', 'namespace', 'plural', 'body', 'pretty', ...]
key = '_return_http_data_only', val = True, collection_formats = {}
path_params = {'group': 'serving.kserve.io', 'namespace': 'kserve-ci-e2e-test', 'plural': 'inferenceservices', 'version': 'v1beta1'}
query_params = []

    def create_namespaced_custom_object_with_http_info(self, group, version, namespace, plural, body, **kwargs):  # noqa: E501
        """create_namespaced_custom_object  # noqa: E501

        Creates a namespace scoped Custom object  # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.create_namespaced_custom_object_with_http_info(group, version, namespace, plural, body, async_req=True)
        >>> result = thread.get()

        :param async_req bool: execute request asynchronously
        :param str group: The custom resource's group name (required)
        :param str version: The custom resource's version (required)
        :param str namespace: The custom resource's namespace (required)
        :param str plural: The custom resource's plural name. For TPRs this would be lowercase plural kind. (required)
        :param object body: The JSON schema of the Resource to create. (required)
        :param str pretty: If 'true', then the output is pretty printed.
        :param str dry_run: When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed
        :param str field_manager: fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint.
        :param str field_validation: fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. (optional)
        :param _return_http_data_only: response data without head status code
                                       and headers
        :param _preload_content: if False, the urllib3.HTTPResponse object will
                                 be returned without reading/decoding response
                                 data. Default is True.
        :param _request_timeout: timeout setting for this request. If one
                                 number provided, it will be total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
        :return: tuple(object, status_code(int), headers(HTTPHeaderDict))
                 If the method is called asynchronously,
                 returns the request thread.
        """

        local_var_params = locals()

        all_params = [
            'group',
            'version',
            'namespace',
            'plural',
            'body',
            'pretty',
            'dry_run',
            'field_manager',
            'field_validation'
        ]
        all_params.extend(
            [
                'async_req',
                '_return_http_data_only',
                '_preload_content',
                '_request_timeout'
            ]
        )

        for key, val in six.iteritems(local_var_params['kwargs']):
            if key not in all_params:
                raise ApiTypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method create_namespaced_custom_object" % key
                )
            local_var_params[key] = val
        del local_var_params['kwargs']
        # verify the required parameter 'group' is set
        if self.api_client.client_side_validation and ('group' not in local_var_params or  # noqa: E501
                                                       local_var_params['group'] is None):  # noqa: E501
            raise ApiValueError("Missing the required parameter `group` when calling `create_namespaced_custom_object`")  # noqa: E501
        # verify the required parameter 'version' is set
        if self.api_client.client_side_validation and ('version' not in local_var_params or  # noqa: E501
                                                       local_var_params['version'] is None):  # noqa: E501
            raise ApiValueError("Missing the required parameter `version` when calling `create_namespaced_custom_object`")  # noqa: E501
        # verify the required parameter 'namespace' is set
        if self.api_client.client_side_validation and ('namespace' not in local_var_params or  # noqa: E501
                                                       local_var_params['namespace'] is None):  # noqa: E501
            raise ApiValueError("Missing the required parameter `namespace` when calling `create_namespaced_custom_object`")  # noqa: E501
        # verify the required parameter 'plural' is set
        if self.api_client.client_side_validation and ('plural' not in local_var_params or  # noqa: E501
                                                       local_var_params['plural'] is None):  # noqa: E501
            raise ApiValueError("Missing the required parameter `plural` when calling `create_namespaced_custom_object`")  # noqa: E501
        # verify the required parameter 'body' is set
        if self.api_client.client_side_validation and ('body' not in local_var_params or  # noqa: E501
                                                       local_var_params['body'] is None):  # noqa: E501
            raise ApiValueError("Missing the required parameter `body` when calling `create_namespaced_custom_object`")  # noqa: E501

        collection_formats = {}

        path_params = {}
        if 'group' in local_var_params:
            path_params['group'] = local_var_params['group']  # noqa: E501
        if 'version' in local_var_params:
            path_params['version'] = local_var_params['version']  # noqa: E501
        if 'namespace' in local_var_params:
            path_params['namespace'] = local_var_params['namespace']  # noqa: E501
        if 'plural' in local_var_params:
            path_params['plural'] = local_var_params['plural']  # noqa: E501

        query_params = []
        if 'pretty' in local_var_params and local_var_params['pretty'] is not None:  # noqa: E501
            query_params.append(('pretty', local_var_params['pretty']))  # noqa: E501
        if 'dry_run' in local_var_params and local_var_params['dry_run'] is not None:  # noqa: E501
            query_params.append(('dryRun', local_var_params['dry_run']))  # noqa: E501
        if 'field_manager' in local_var_params and local_var_params['field_manager'] is not None:  # noqa: E501
            query_params.append(('fieldManager', local_var_params['field_manager']))  # noqa: E501
        if 'field_validation' in local_var_params and local_var_params['field_validation'] is not None:  # noqa: E501
            query_params.append(('fieldValidation', local_var_params['field_validation']))  # noqa: E501

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None
        if 'body' in local_var_params:
            body_params = local_var_params['body']
        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.select_header_accept(
            ['application/json'])  # noqa: E501

        # Authentication setting
        auth_settings = ['BearerToken']  # noqa: E501

>       return self.api_client.call_api(
            '/apis/{group}/{version}/namespaces/{namespace}/{plural}', 'POST',
            path_params,
            query_params,
            header_params,
            body=body_params,
            post_params=form_params,
            files=local_var_files,
            response_type='object',  # noqa: E501
            auth_settings=auth_settings,
            async_req=local_var_params.get('async_req'),
            _return_http_data_only=local_var_params.get('_return_http_data_only'),  # noqa: E501
            _preload_content=local_var_params.get('_preload_content', True),
            _request_timeout=local_var_params.get('_request_timeout'),
            collection_formats=collection_formats)
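The `call_api` invocation above hands over a templated resource path, `/apis/{group}/{version}/namespaces/{namespace}/{plural}`, plus the `path_params` dict. A small sketch of how that template is expanded into the concrete URL path seen further down in this traceback; the helper name is hypothetical, and the real client percent-encodes with `configuration.safe_chars_for_path_param` rather than the empty safe set used here:

```python
from urllib.parse import quote

RESOURCE_PATH = "/apis/{group}/{version}/namespaces/{namespace}/{plural}"

def expand_path(template, path_params):
    # Substitute each {name} placeholder with its percent-encoded value,
    # mirroring the string replacement the generated client performs.
    for k, v in path_params.items():
        template = template.replace("{%s}" % k, quote(str(v), safe=""))
    return template

url_path = expand_path(RESOURCE_PATH, {
    "group": "serving.kserve.io",
    "version": "v1beta1",
    "namespace": "kserve-ci-e2e-test",
    "plural": "inferenceservices",
})
print(url_path)
# /apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices
```

The expanded path matches the `resource_path` shown in the `__call_api` frame below, which is then prefixed with the API server host to form the final request URL.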
../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api/custom_objects_api.py:354:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
resource_path = '/apis/{group}/{version}/namespaces/{namespace}/{plural}'
method = 'POST'
path_params = {'group': 'serving.kserve.io', 'namespace': 'kserve-ci-e2e-test', 'plural': 'inferenceservices', 'version': 'v1beta1'}
query_params = []
header_params = {'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
body = {'api_version': 'serving.kserve.io/v1beta1',
 'kind': 'InferenceService',
 'metadata': {'annotations': None,
 ... 'worker_spec': None,
 'xgboost': None},
 'transformer': None},
 'status': None}
post_params = [], files = {}, response_type = 'object'
auth_settings = ['BearerToken'], async_req = None, _return_http_data_only = True
collection_formats = {}, _preload_content = True, _request_timeout = None
_host = None

    def call_api(self, resource_path, method,
                 path_params=None, query_params=None, header_params=None,
                 body=None, post_params=None, files=None,
                 response_type=None, auth_settings=None, async_req=None,
                 _return_http_data_only=None, collection_formats=None,
                 _preload_content=True, _request_timeout=None, _host=None):
        """Makes the HTTP request (synchronous) and returns deserialized data.

        To make an async_req request, set the async_req parameter.

        :param resource_path: Path to method endpoint.
        :param method: Method to call.
        :param path_params: Path parameters in the url.
        :param query_params: Query parameters in the url.
        :param header_params: Header parameters to be
            placed in the request header.
        :param body: Request body.
        :param post_params dict: Request post form parameters,
            for `application/x-www-form-urlencoded`, `multipart/form-data`.
        :param auth_settings list: Auth Settings names for the request.
        :param response: Response data type.
        :param files dict: key -> filename, value -> filepath,
            for `multipart/form-data`.
        :param async_req bool: execute request asynchronously
        :param _return_http_data_only: response data without head status code
                                       and headers
        :param collection_formats: dict of collection formats for path, query,
            header, and post parameters.
        :param _preload_content: if False, the urllib3.HTTPResponse object will
                                 be returned without reading/decoding response
                                 data. Default is True.
        :param _request_timeout: timeout setting for this request. If one
                                 number provided, it will be total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
        :return:
            If async_req parameter is True,
            the request will be called asynchronously.
            The method will return the request thread.
            If parameter async_req is False or missing,
            then the method will return the response directly.
        """
        if not async_req:
>           return self.__call_api(resource_path, method,
                                   path_params, query_params, header_params,
                                   body, post_params, files,
                                   response_type, auth_settings,
                                   _return_http_data_only, collection_formats,
                                   _preload_content, _request_timeout, _host)

../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api_client.py:348:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
resource_path = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
method = 'POST'
path_params = [('group', 'serving.kserve.io'), ('version', 'v1beta1'), ('namespace', 'kserve-ci-e2e-test'), ('plural', 'inferenceservices')]
query_params = []
header_params = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
body = {'apiVersion': 'serving.kserve.io/v1beta1', 'kind': 'InferenceService', 'metadata': {'labels': {'networking.kserve.io/...}, 'requests': {'cpu': '50m', 'memory': '128Mi'}}, 'storageUri': 'gs://kfserving-examples/models/sklearn/1.0/model'}}}}
post_params = [], files = {}, response_type = 'object'
auth_settings = ['BearerToken'], _return_http_data_only = True
collection_formats = {}, _preload_content = True, _request_timeout = None
_host = None

    def __call_api(
            self, resource_path, method, path_params=None,
            query_params=None, header_params=None, body=None, post_params=None,
            files=None, response_type=None, auth_settings=None,
            _return_http_data_only=None, collection_formats=None,
            _preload_content=True, _request_timeout=None, _host=None):

        config = self.configuration

        # header parameters
        header_params = header_params or {}
        header_params.update(self.default_headers)
        if self.cookie:
            header_params['Cookie'] = self.cookie
        if header_params:
            header_params = self.sanitize_for_serialization(header_params)
            header_params = dict(self.parameters_to_tuples(header_params,
                                                           collection_formats))

        # path parameters
        if path_params:
            path_params = self.sanitize_for_serialization(path_params)
            path_params = self.parameters_to_tuples(path_params,
                                                    collection_formats)
            for k, v in path_params:
                # specified safe chars, encode everything
                resource_path = resource_path.replace(
                    '{%s}' % k,
                    quote(str(v), safe=config.safe_chars_for_path_param)
                )

        # query parameters
        if query_params:
            query_params = self.sanitize_for_serialization(query_params)
            query_params = self.parameters_to_tuples(query_params,
                                                     collection_formats)

        # post parameters
        if post_params or files:
            post_params = post_params if post_params else []
            post_params = self.sanitize_for_serialization(post_params)
            post_params = self.parameters_to_tuples(post_params,
                                                    collection_formats)
            post_params.extend(self.files_parameters(files))

        # auth setting
        self.update_params_for_auth(header_params, query_params, auth_settings)

        # body
        if body:
            body = self.sanitize_for_serialization(body)

        # request url
        if _host is None:
            url = self.configuration.host + resource_path
        else:
            # use server/host defined in path or operation instead
            url = _host + resource_path

        # perform request and return response
>       response_data = self.request(
            method, url, query_params=query_params, headers=header_params,
            post_params=post_params, body=body,
            _preload_content=_preload_content,
            _request_timeout=_request_timeout)

../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api_client.py:180:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
method = 'POST'
url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
query_params = []
headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
post_params = []
body = {'apiVersion': 'serving.kserve.io/v1beta1', 'kind': 'InferenceService', 'metadata': {'labels': {'networking.kserve.io/...}, 'requests': {'cpu': '50m', 'memory': '128Mi'}}, 'storageUri': 'gs://kfserving-examples/models/sklearn/1.0/model'}}}}
_preload_content = True, _request_timeout = None

    def request(self, method, url, query_params=None, headers=None,
                post_params=None, body=None, _preload_content=True,
                _request_timeout=None):
        """Makes the HTTP request using RESTClient."""
        if method == "GET":
            return self.rest_client.GET(url,
                                        query_params=query_params,
                                        _preload_content=_preload_content,
                                        _request_timeout=_request_timeout,
                                        headers=headers)
        elif method == "HEAD":
            return self.rest_client.HEAD(url,
                                         query_params=query_params,
                                         _preload_content=_preload_content,
                                         _request_timeout=_request_timeout,
                                         headers=headers)
        elif method == "OPTIONS":
            return self.rest_client.OPTIONS(url,
                                            query_params=query_params,
                                            headers=headers,
                                            _preload_content=_preload_content,
                                            _request_timeout=_request_timeout)
        elif method == "POST":
>           return self.rest_client.POST(url,
                                         query_params=query_params,
                                         headers=headers,
                                         post_params=post_params,
                                         _preload_content=_preload_content,
                                         _request_timeout=_request_timeout,

../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api_client.py:391:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
query_params = [], post_params = []
body = {'apiVersion': 'serving.kserve.io/v1beta1', 'kind': 'InferenceService', 'metadata': {'labels': {'networking.kserve.io/...}, 'requests': {'cpu': '50m', 'memory': '128Mi'}}, 'storageUri': 'gs://kfserving-examples/models/sklearn/1.0/model'}}}}
_preload_content = True, _request_timeout = None

    def POST(self, url, headers=None, query_params=None, post_params=None,
             body=None, _preload_content=True, _request_timeout=None):
>       return self.request("POST", url,
                            headers=headers,
                            query_params=query_params,
                            post_params=post_params,
                            _preload_content=_preload_content,
                            _request_timeout=_request_timeout,
                            body=body)

../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/rest.py:279:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
method = 'POST'
url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
query_params = []
headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
body = {'apiVersion': 'serving.kserve.io/v1beta1', 'kind': 'InferenceService', 'metadata': {'labels': {'networking.kserve.io/...}, 'requests': {'cpu': '50m', 'memory': '128Mi'}}, 'storageUri': 'gs://kfserving-examples/models/sklearn/1.0/model'}}}}
post_params = {}, _preload_content = True, _request_timeout = None

    def request(self, method, url, query_params=None, headers=None,
                body=None, post_params=None, _preload_content=True,
                _request_timeout=None):
        """Perform requests.

        :param method: http request method
        :param url: http request url
        :param query_params: query parameters in the url
        :param headers: http request headers
        :param body: request json body, for `application/json`
        :param post_params: request post parameters,
                            `application/x-www-form-urlencoded`
                            and `multipart/form-data`
        :param _preload_content: if False, the urllib3.HTTPResponse object will
                                 be returned without reading/decoding response
                                 data. Default is True.
        :param _request_timeout: timeout setting for this request. If one
                                 number provided, it will be total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
        """
        method = method.upper()
        assert method in ['GET', 'HEAD', 'DELETE', 'POST', 'PUT',
                          'PATCH', 'OPTIONS']

        if post_params and body:
            raise ApiValueError(
                "body parameter cannot be used with post_params parameter."
            )

        post_params = post_params or {}
        headers = headers or {}

        timeout = None
        if _request_timeout:
            if isinstance(_request_timeout, (int, ) if six.PY3 else (int, long)):  # noqa: E501,F821
                timeout = urllib3.Timeout(total=_request_timeout)
            elif (isinstance(_request_timeout, tuple) and
                  len(_request_timeout) == 2):
                timeout = urllib3.Timeout(
                    connect=_request_timeout[0], read=_request_timeout[1])

        if 'Content-Type' not in headers:
            headers['Content-Type'] = 'application/json'

        try:
            # For `POST`, `PUT`, `PATCH`, `OPTIONS`, `DELETE`
            if method in ['POST', 'PUT', 'PATCH', 'OPTIONS', 'DELETE']:
                if query_params:
                    url += '?' + urlencode(query_params)
                if (re.search('json', headers['Content-Type'], re.IGNORECASE) or
                        headers['Content-Type'] == 'application/apply-patch+yaml'):
                    if headers['Content-Type'] == 'application/json-patch+json':
                        if not isinstance(body, list):
                            headers['Content-Type'] = \
                                'application/strategic-merge-patch+json'
                    request_body = None
                    if body is not None:
                        request_body = json.dumps(body)
>                   r = self.pool_manager.request(
                        method, url,
                        body=request_body,
                        preload_content=_preload_content,
                        timeout=timeout,

../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/rest.py:172:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
method = 'POST'
url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io..., "requests": {"cpu": "50m", "memory": "128Mi"}}, "storageUri": "gs://kfserving-examples/models/sklearn/1.0/model"}}}}'
fields = None
headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
json = None
urlopen_kw = {'body': '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking....Mi"}}, "storageUri": "gs://kfserving-examples/models/sklearn/1.0/model"}}}}', 'preload_content': True, 'timeout': None}

    def request(
        self,
        method: str,
        url: str,
        body: _TYPE_BODY | None = None,
        fields: _TYPE_FIELDS | None = None,
        headers: typing.Mapping[str, str] | None = None,
        json: typing.Any | None = None,
        **urlopen_kw: typing.Any,
    ) -> BaseHTTPResponse:
        """
        Make a request using :meth:`urlopen` with the appropriate encoding of
        ``fields`` based on the ``method`` used.

        This is a convenience method that requires the least amount of manual
        effort. It can be used in most situations, while still having the
        option to drop down to more specific methods when necessary, such as
        :meth:`request_encode_url`, :meth:`request_encode_body`,
        or even the lowest level :meth:`urlopen`.

        :param method:
            HTTP request method (such as GET, POST, PUT, etc.)

        :param url:
            The URL to perform the request on.

        :param body:
            Data to send in the request body, either :class:`str`, :class:`bytes`,
            an iterable of :class:`str`/:class:`bytes`, or a file-like object.

        :param fields:
            Data to encode and send in the URL or request body, depending on ``method``.

        :param headers:
            Dictionary of custom headers to send, such as User-Agent,
            If-None-Match, etc. If None, pool headers are used. If provided,
            these headers completely replace any pool-specific headers.

        :param json:
            Data to encode and send as JSON with UTF-encoded in the request body.
            The ``"Content-Type"`` header will be set to ``"application/json"``
            unless specified otherwise.
        """
        method = method.upper()

        if json is not None and body is not None:
            raise TypeError(
                "request got values for both 'body' and 'json' parameters which are mutually exclusive"
            )

        if json is not None:
            if headers is None:
                headers = self.headers

            if not ("content-type" in map(str.lower, headers.keys())):
                headers = HTTPHeaderDict(headers)
                headers["Content-Type"] = "application/json"

            body = _json.dumps(json, separators=(",", ":"), ensure_ascii=False).encode(
                "utf-8"
            )

        if body is not None:
            urlopen_kw["body"] = body

        if method in self._encode_url_methods:
            return self.request_encode_url(
                method,
                url,
                fields=fields,  # type: ignore[arg-type]
                headers=headers,
                **urlopen_kw,
            )
        else:
>           return self.request_encode_body(
                method, url, fields=fields, headers=headers, **urlopen_kw
            )

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/_request_methods.py:143:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
method = 'POST'
url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
fields = None
headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
encode_multipart = True, multipart_boundary = None
urlopen_kw = {'body': '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking....Mi"}}, "storageUri": "gs://kfserving-examples/models/sklearn/1.0/model"}}}}', 'preload_content': True, 'timeout': None}
extra_kw = {'body': '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking....nt': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'}), 'preload_content': True, 'timeout': None}

    def request_encode_body(
        self,
        method: str,
        url: str,
        fields: _TYPE_FIELDS | None = None,
        headers: typing.Mapping[str, str] | None = None,
        encode_multipart: bool = True,
        multipart_boundary: str | None = None,
        **urlopen_kw: str,
    ) ->
BaseHTTPResponse: [e2e-predictor] """ [e2e-predictor] Make a request using :meth:`urlopen` with the ``fields`` encoded in [e2e-predictor] the body. This is useful for request methods like POST, PUT, PATCH, etc. [e2e-predictor] [e2e-predictor] When ``encode_multipart=True`` (default), then [e2e-predictor] :func:`urllib3.encode_multipart_formdata` is used to encode [e2e-predictor] the payload with the appropriate content type. Otherwise [e2e-predictor] :func:`urllib.parse.urlencode` is used with the [e2e-predictor] 'application/x-www-form-urlencoded' content type. [e2e-predictor] [e2e-predictor] Multipart encoding must be used when posting files, and it's reasonably [e2e-predictor] safe to use it in other times too. However, it may break request [e2e-predictor] signing, such as with OAuth. [e2e-predictor] [e2e-predictor] Supports an optional ``fields`` parameter of key/value strings AND [e2e-predictor] key/filetuple. A filetuple is a (filename, data, MIME type) tuple where [e2e-predictor] the MIME type is optional. For example:: [e2e-predictor] [e2e-predictor] fields = { [e2e-predictor] 'foo': 'bar', [e2e-predictor] 'fakefile': ('foofile.txt', 'contents of foofile'), [e2e-predictor] 'realfile': ('barfile.txt', open('realfile').read()), [e2e-predictor] 'typedfile': ('bazfile.bin', open('bazfile').read(), [e2e-predictor] 'image/jpeg'), [e2e-predictor] 'nonamefile': 'contents of nonamefile field', [e2e-predictor] } [e2e-predictor] [e2e-predictor] When uploading a file, providing a filename (the first parameter of the [e2e-predictor] tuple) is optional but recommended to best mimic behavior of browsers. [e2e-predictor] [e2e-predictor] Note that if ``headers`` are supplied, the 'Content-Type' header will [e2e-predictor] be overwritten because it depends on the dynamic random boundary string [e2e-predictor] which is used to compose the body of the request. The random boundary [e2e-predictor] string can be explicitly set with the ``multipart_boundary`` parameter. 

    :param method:
        HTTP request method (such as GET, POST, PUT, etc.)

    :param url:
        The URL to perform the request on.

    :param fields:
        Data to encode and send in the request body.

    :param headers:
        Dictionary of custom headers to send, such as User-Agent,
        If-None-Match, etc. If None, pool headers are used. If provided,
        these headers completely replace any pool-specific headers.

    :param encode_multipart:
        If True, encode the ``fields`` using the multipart/form-data MIME
        format.

    :param multipart_boundary:
        If not specified, then a random boundary will be generated using
        :func:`urllib3.filepost.choose_boundary`.
    """
    if headers is None:
        headers = self.headers

    extra_kw: dict[str, typing.Any] = {"headers": HTTPHeaderDict(headers)}
    body: bytes | str

    if fields:
        if "body" in urlopen_kw:
            raise TypeError(
                "request got values for both 'fields' and 'body', can only specify one."
            )

        if encode_multipart:
            body, content_type = encode_multipart_formdata(
                fields, boundary=multipart_boundary
            )
        else:
            body, content_type = (
                urlencode(fields),  # type: ignore[arg-type]
                "application/x-www-form-urlencoded",
            )

        extra_kw["body"] = body
        extra_kw["headers"].setdefault("Content-Type", content_type)

    extra_kw.update(urlopen_kw)

>   return self.urlopen(method, url, **extra_kw)

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/_request_methods.py:278:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
method = 'POST'
url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
redirect = True
kw = {'assert_same_host': False, 'body': '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata...', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'}), 'preload_content': True, ...}
u = Url(scheme='https', auth=None, host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', p...443, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices', query=None, fragment=None)
conn =

def urlopen(  # type: ignore[override]
    self, method: str, url: str, redirect: bool = True, **kw: typing.Any
) -> BaseHTTPResponse:
    """
    Same as :meth:`urllib3.HTTPConnectionPool.urlopen`
    with custom cross-host redirect logic and only sends the request-uri
    portion of the ``url``.

    The given ``url`` parameter must be absolute, such that an appropriate
    :class:`urllib3.connectionpool.ConnectionPool` can be chosen for it.
    """
    u = parse_url(url)

    if u.scheme is None:
        warnings.warn(
            "URLs without a scheme (ie 'https://') are deprecated and will raise an error "
            "in a future version of urllib3. To avoid this DeprecationWarning ensure all URLs "
            "start with 'https://' or 'http://'. Read more in this issue: "
            "https://github.com/urllib3/urllib3/issues/2920",
            category=DeprecationWarning,
            stacklevel=2,
        )

    conn = self.connection_from_host(u.host, port=u.port, scheme=u.scheme)

    kw["assert_same_host"] = False
    kw["redirect"] = False

    if "headers" not in kw:
        kw["headers"] = self.headers

    if self._proxy_requires_url_absolute_form(u):
        response = conn.urlopen(method, url, **kw)
    else:
>       response = conn.urlopen(method, u.request_uri, **kw)

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/poolmanager.py:457:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
method = 'POST'
url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io..., "requests": {"cpu": "50m", "memory": "128Mi"}}, "storageUri": "gs://kfserving-examples/models/sklearn/1.0/model"}}}}'
headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'})
retries = Retry(total=2, connect=None, read=None, redirect=None, status=None)
redirect = False, assert_same_host = False, timeout = None, pool_timeout = None
release_conn = True, chunked = False, body_pos = None, preload_content = True
decode_content = True, response_kw = {}
parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices', query=None, fragment=None)
destination_scheme = None, conn = None, release_this_conn = True
http_tunnel_required = False
err = NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.c...a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")
clean_exit = False

def urlopen(  # type: ignore[override]
    self,
    method: str,
    url: str,
    body: _TYPE_BODY | None = None,
    headers: typing.Mapping[str, str] | None = None,
    retries: Retry | bool | int | None = None,
    redirect: bool = True,
    assert_same_host: bool = True,
    timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
    pool_timeout: int | None = None,
    release_conn: bool | None = None,
    chunked: bool = False,
    body_pos: _TYPE_BODY_POSITION | None = None,
    preload_content: bool = True,
    decode_content: bool = True,
    **response_kw: typing.Any,
) -> BaseHTTPResponse:
    """
    Get a connection from the pool and perform an HTTP request. This is the
    lowest level call for making a request, so you'll need to specify all
    the raw details.

    .. note::

        More commonly, it's appropriate to use a convenience method
        such as :meth:`request`.

    .. note::

        `release_conn` will only behave as expected if
        `preload_content=False` because we want to make
        `preload_content=False` the default behaviour someday soon without
        breaking backwards compatibility.

    :param method:
        HTTP request method (such as GET, POST, PUT, etc.)

    :param url:
        The URL to perform the request on.

    :param body:
        Data to send in the request body, either :class:`str`, :class:`bytes`,
        an iterable of :class:`str`/:class:`bytes`, or a file-like object.

    :param headers:
        Dictionary of custom headers to send, such as User-Agent,
        If-None-Match, etc. If None, pool headers are used. If provided,
        these headers completely replace any pool-specific headers.

    :param retries:
        Configure the number of retries to allow before raising a
        :class:`~urllib3.exceptions.MaxRetryError` exception.

        If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a
        :class:`~urllib3.util.retry.Retry` object for fine-grained control
        over different types of retries.
        Pass an integer number to retry connection errors that many times,
        but no other types of errors. Pass zero to never retry.

        If ``False``, then retries are disabled and any exception is raised
        immediately. Also, instead of raising a MaxRetryError on redirects,
        the redirect response will be returned.

    :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.

    :param redirect:
        If True, automatically handle redirects (status codes 301, 302,
        303, 307, 308). Each redirect counts as a retry. Disabling retries
        will disable redirect, too.

    :param assert_same_host:
        If ``True``, will make sure that the host of the pool requests is
        consistent else will raise HostChangedError. When ``False``, you can
        use the pool on an HTTP proxy and request foreign hosts.

    :param timeout:
        If specified, overrides the default timeout for this one
        request. It may be a float (in seconds) or an instance of
        :class:`urllib3.util.Timeout`.

    :param pool_timeout:
        If set and the pool is set to block=True, then this method will
        block for ``pool_timeout`` seconds and raise EmptyPoolError if no
        connection is available within the time period.

    :param bool preload_content:
        If True, the response's body will be preloaded into memory.

    :param bool decode_content:
        If True, will attempt to decode the body based on the
        'content-encoding' header.

    :param release_conn:
        If False, then the urlopen call will not release the connection
        back into the pool once a response is received (but will release if
        you read the entire contents of the response such as when
        `preload_content=True`). This is useful if you're not preloading
        the response's content immediately. You will need to call
        ``r.release_conn()`` on the response ``r`` to return the connection
        back into the pool. If None, it takes the value of ``preload_content``
        which defaults to ``True``.

    :param bool chunked:
        If True, urllib3 will send the body using chunked transfer
        encoding. Otherwise, urllib3 will send the body using the standard
        content-length form. Defaults to False.

    :param int body_pos:
        Position to seek to in file-like body in the event of a retry or
        redirect. Typically this won't need to be set because urllib3 will
        auto-populate the value when needed.
    """
    parsed_url = parse_url(url)
    destination_scheme = parsed_url.scheme

    if headers is None:
        headers = self.headers

    if not isinstance(retries, Retry):
        retries = Retry.from_int(retries, redirect=redirect, default=self.retries)

    if release_conn is None:
        release_conn = preload_content

    # Check host
    if assert_same_host and not self.is_same_host(url):
        raise HostChangedError(self, url, retries)

    # Ensure that the URL we're connecting to is properly encoded
    if url.startswith("/"):
        url = to_str(_encode_target(url))
    else:
        url = to_str(parsed_url.url)

    conn = None

    # Track whether `conn` needs to be released before
    # returning/raising/recursing. Update this variable if necessary, and
    # leave `release_conn` constant throughout the function. That way, if
    # the function recurses, the original value of `release_conn` will be
    # passed down into the recursive call, and its value will be respected.
    #
    # See issue #651 [1] for details.
    #
    # [1]
    release_this_conn = release_conn

    http_tunnel_required = connection_requires_http_tunnel(
        self.proxy, self.proxy_config, destination_scheme
    )

    # Merge the proxy headers. Only done when not using HTTP CONNECT. We
    # have to copy the headers dict so we can safely change it without those
    # changes being reflected in anyone else's copy.
    if not http_tunnel_required:
        headers = headers.copy()  # type: ignore[attr-defined]
        headers.update(self.proxy_headers)  # type: ignore[union-attr]

    # Must keep the exception bound to a separate variable or else Python 3
    # complains about UnboundLocalError.
    err = None

    # Keep track of whether we cleanly exited the except block. This
    # ensures we do proper cleanup in finally.
    clean_exit = False

    # Rewind body position, if needed. Record current position
    # for future rewinds in the event of a redirect/retry.
    body_pos = set_file_position(body, body_pos)

    try:
        # Request a connection from the queue.
        timeout_obj = self._get_timeout(timeout)
        conn = self._get_conn(timeout=pool_timeout)

        conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]

        # Is this a closed/new connection that requires CONNECT tunnelling?
        if self.proxy is not None and http_tunnel_required and conn.is_closed:
            try:
                self._prepare_proxy(conn)
            except (BaseSSLError, OSError, SocketTimeout) as e:
                self._raise_timeout(
                    err=e, url=self.proxy.url, timeout_value=conn.timeout
                )
                raise

        # If we're going to release the connection in ``finally:``, then
        # the response doesn't need to know about the connection. Otherwise
        # it will also try to release it and we'll have a double-release
        # mess.
        response_conn = conn if not release_conn else None

        # Make the request on the HTTPConnection object
        response = self._make_request(
            conn,
            method,
            url,
            timeout=timeout_obj,
            body=body,
            headers=headers,
            chunked=chunked,
            retries=retries,
            response_conn=response_conn,
            preload_content=preload_content,
            decode_content=decode_content,
            **response_kw,
        )

        # Everything went great!
        clean_exit = True

    except EmptyPoolError:
        # Didn't get a connection from the pool, no need to clean up
        clean_exit = True
        release_this_conn = False
        raise

    except (
        TimeoutError,
        HTTPException,
        OSError,
        ProtocolError,
        BaseSSLError,
        SSLError,
        CertificateError,
        ProxyError,
    ) as e:
        # Discard the connection for these exceptions. It will be
        # replaced during the next _get_conn() call.
        clean_exit = False
        new_e: Exception = e
        if isinstance(e, (BaseSSLError, CertificateError)):
            new_e = SSLError(e)
        if isinstance(
            new_e,
            (
                OSError,
                NewConnectionError,
                TimeoutError,
                SSLError,
                HTTPException,
            ),
        ) and (conn and conn.proxy and not conn.has_connected_to_proxy):
            new_e = _wrap_proxy_error(new_e, conn.proxy.scheme)
        elif isinstance(new_e, (OSError, HTTPException)):
            new_e = ProtocolError("Connection aborted.", new_e)

        retries = retries.increment(
            method, url, error=new_e, _pool=self, _stacktrace=sys.exc_info()[2]
        )
        retries.sleep()

        # Keep track of the error for the retry warning.
        err = e

    finally:
        if not clean_exit:
            # We hit some kind of exception, handled or otherwise. We need
            # to throw the connection away unless explicitly told not to.
            # Close the connection, set the variable to None, and make sure
            # we put the None back in the pool to avoid leaking it.
            if conn:
                conn.close()
                conn = None
            release_this_conn = True

        if release_this_conn:
            # Put the connection back to be reused. If the connection is
            # expired then it will be None, which will get replaced with a
            # fresh connection during _get_conn.
            self._put_conn(conn)

    if not conn:
        # Try again
        log.warning(
            "Retrying (%r) after connection broken by '%r': %s", retries, err, url
        )
>       return self.urlopen(
            method,
            url,
            body,
            headers,
            retries,
            redirect,
            assert_same_host,
            timeout=timeout,
            pool_timeout=pool_timeout,
            release_conn=release_conn,
            chunked=chunked,
            body_pos=body_pos,
            preload_content=preload_content,
            decode_content=decode_content,
            **response_kw,
        )

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:871:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
method = 'POST'
url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io..., "requests": {"cpu": "50m", "memory": "128Mi"}}, "storageUri": "gs://kfserving-examples/models/sklearn/1.0/model"}}}}'
headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'})
retries = Retry(total=1, connect=None, read=None, redirect=None, status=None)
redirect = False, assert_same_host = False, timeout = None, pool_timeout = None
release_conn = True, chunked = False, body_pos = None, preload_content = True
decode_content = True, response_kw = {}
parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices', query=None, fragment=None)
destination_scheme = None, conn = None, release_this_conn = True
http_tunnel_required = False
err = NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.c...a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")
clean_exit = False

def urlopen(  # type: ignore[override]
    self,
    method: str,
    url: str,
    body: _TYPE_BODY | None = None,
    headers: typing.Mapping[str, str] | None = None,
    retries: Retry | bool | int | None = None,
    redirect: bool = True,
    assert_same_host: bool = True,
    timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
    pool_timeout: int | None = None,
    release_conn: bool | None = None,
    chunked: bool = False,
    body_pos: _TYPE_BODY_POSITION | None = None,
    preload_content: bool = True,
    decode_content: bool = True,
    **response_kw: typing.Any,
) -> BaseHTTPResponse:
    """
    Get a connection from the pool and perform an HTTP request. This is the
    lowest level call for making a request, so you'll need to specify all
    the raw details.

    .. note::

        More commonly, it's appropriate to use a convenience method
        such as :meth:`request`.

    .. note::

        `release_conn` will only behave as expected if
        `preload_content=False` because we want to make
        `preload_content=False` the default behaviour someday soon without
        breaking backwards compatibility.

    :param method:
        HTTP request method (such as GET, POST, PUT, etc.)

    :param url:
        The URL to perform the request on.

    :param body:
        Data to send in the request body, either :class:`str`, :class:`bytes`,
        an iterable of :class:`str`/:class:`bytes`, or a file-like object.

    :param headers:
        Dictionary of custom headers to send, such as User-Agent,
        If-None-Match, etc. If None, pool headers are used. If provided,
        these headers completely replace any pool-specific headers.

    :param retries:
        Configure the number of retries to allow before raising a
        :class:`~urllib3.exceptions.MaxRetryError` exception.

        If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a
        :class:`~urllib3.util.retry.Retry` object for fine-grained control
        over different types of retries.
        Pass an integer number to retry connection errors that many times,
        but no other types of errors. Pass zero to never retry.

        If ``False``, then retries are disabled and any exception is raised
        immediately. Also, instead of raising a MaxRetryError on redirects,
        the redirect response will be returned.

    :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.

    :param redirect:
        If True, automatically handle redirects (status codes 301, 302,
        303, 307, 308). Each redirect counts as a retry. Disabling retries
        will disable redirect, too.

    :param assert_same_host:
        If ``True``, will make sure that the host of the pool requests is
        consistent else will raise HostChangedError. When ``False``, you can
        use the pool on an HTTP proxy and request foreign hosts.

    :param timeout:
        If specified, overrides the default timeout for this one
        request. It may be a float (in seconds) or an instance of
        :class:`urllib3.util.Timeout`.

    :param pool_timeout:
        If set and the pool is set to block=True, then this method will
        block for ``pool_timeout`` seconds and raise EmptyPoolError if no
        connection is available within the time period.

    :param bool preload_content:
        If True, the response's body will be preloaded into memory.

    :param bool decode_content:
        If True, will attempt to decode the body based on the
        'content-encoding' header.

    :param release_conn:
        If False, then the urlopen call will not release the connection
        back into the pool once a response is received (but will release if
        you read the entire contents of the response such as when
        `preload_content=True`). This is useful if you're not preloading
        the response's content immediately. You will need to call
        ``r.release_conn()`` on the response ``r`` to return the connection
        back into the pool. If None, it takes the value of ``preload_content``
        which defaults to ``True``.

    :param bool chunked:
        If True, urllib3 will send the body using chunked transfer
        encoding. Otherwise, urllib3 will send the body using the standard
        content-length form. Defaults to False.

    :param int body_pos:
        Position to seek to in file-like body in the event of a retry or
        redirect. Typically this won't need to be set because urllib3 will
        auto-populate the value when needed.
    """
    parsed_url = parse_url(url)
    destination_scheme = parsed_url.scheme

    if headers is None:
        headers = self.headers

    if not isinstance(retries, Retry):
        retries = Retry.from_int(retries, redirect=redirect, default=self.retries)

    if release_conn is None:
        release_conn = preload_content

    # Check host
    if assert_same_host and not self.is_same_host(url):
        raise HostChangedError(self, url, retries)

    # Ensure that the URL we're connecting to is properly encoded
    if url.startswith("/"):
        url = to_str(_encode_target(url))
    else:
        url = to_str(parsed_url.url)

    conn = None

    # Track whether `conn` needs to be released before
    # returning/raising/recursing. Update this variable if necessary, and
    # leave `release_conn` constant throughout the function. That way, if
    # the function recurses, the original value of `release_conn` will be
    # passed down into the recursive call, and its value will be respected.
[e2e-predictor] # [e2e-predictor] # See issue #651 [1] for details. [e2e-predictor] # [e2e-predictor] # [1] [e2e-predictor] release_this_conn = release_conn [e2e-predictor] [e2e-predictor] http_tunnel_required = connection_requires_http_tunnel( [e2e-predictor] self.proxy, self.proxy_config, destination_scheme [e2e-predictor] ) [e2e-predictor] [e2e-predictor] # Merge the proxy headers. Only done when not using HTTP CONNECT. We [e2e-predictor] # have to copy the headers dict so we can safely change it without those [e2e-predictor] # changes being reflected in anyone else's copy. [e2e-predictor] if not http_tunnel_required: [e2e-predictor] headers = headers.copy() # type: ignore[attr-defined] [e2e-predictor] headers.update(self.proxy_headers) # type: ignore[union-attr] [e2e-predictor] [e2e-predictor] # Must keep the exception bound to a separate variable or else Python 3 [e2e-predictor] # complains about UnboundLocalError. [e2e-predictor] err = None [e2e-predictor] [e2e-predictor] # Keep track of whether we cleanly exited the except block. This [e2e-predictor] # ensures we do proper cleanup in finally. [e2e-predictor] clean_exit = False [e2e-predictor] [e2e-predictor] # Rewind body position, if needed. Record current position [e2e-predictor] # for future rewinds in the event of a redirect/retry. [e2e-predictor] body_pos = set_file_position(body, body_pos) [e2e-predictor] [e2e-predictor] try: [e2e-predictor] # Request a connection from the queue. [e2e-predictor] timeout_obj = self._get_timeout(timeout) [e2e-predictor] conn = self._get_conn(timeout=pool_timeout) [e2e-predictor] [e2e-predictor] conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] [e2e-predictor] [e2e-predictor] # Is this a closed/new connection that requires CONNECT tunnelling? 
[e2e-predictor] if self.proxy is not None and http_tunnel_required and conn.is_closed: [e2e-predictor] try: [e2e-predictor] self._prepare_proxy(conn) [e2e-predictor] except (BaseSSLError, OSError, SocketTimeout) as e: [e2e-predictor] self._raise_timeout( [e2e-predictor] err=e, url=self.proxy.url, timeout_value=conn.timeout [e2e-predictor] ) [e2e-predictor] raise [e2e-predictor] [e2e-predictor] # If we're going to release the connection in ``finally:``, then [e2e-predictor] # the response doesn't need to know about the connection. Otherwise [e2e-predictor] # it will also try to release it and we'll have a double-release [e2e-predictor] # mess. [e2e-predictor] response_conn = conn if not release_conn else None [e2e-predictor] [e2e-predictor] # Make the request on the HTTPConnection object [e2e-predictor] response = self._make_request( [e2e-predictor] conn, [e2e-predictor] method, [e2e-predictor] url, [e2e-predictor] timeout=timeout_obj, [e2e-predictor] body=body, [e2e-predictor] headers=headers, [e2e-predictor] chunked=chunked, [e2e-predictor] retries=retries, [e2e-predictor] response_conn=response_conn, [e2e-predictor] preload_content=preload_content, [e2e-predictor] decode_content=decode_content, [e2e-predictor] **response_kw, [e2e-predictor] ) [e2e-predictor] [e2e-predictor] # Everything went great! [e2e-predictor] clean_exit = True [e2e-predictor] [e2e-predictor] except EmptyPoolError: [e2e-predictor] # Didn't get a connection from the pool, no need to clean up [e2e-predictor] clean_exit = True [e2e-predictor] release_this_conn = False [e2e-predictor] raise [e2e-predictor] [e2e-predictor] except ( [e2e-predictor] TimeoutError, [e2e-predictor] HTTPException, [e2e-predictor] OSError, [e2e-predictor] ProtocolError, [e2e-predictor] BaseSSLError, [e2e-predictor] SSLError, [e2e-predictor] CertificateError, [e2e-predictor] ProxyError, [e2e-predictor] ) as e: [e2e-predictor] # Discard the connection for these exceptions. 
It will be [e2e-predictor] # replaced during the next _get_conn() call. [e2e-predictor] clean_exit = False [e2e-predictor] new_e: Exception = e [e2e-predictor] if isinstance(e, (BaseSSLError, CertificateError)): [e2e-predictor] new_e = SSLError(e) [e2e-predictor] if isinstance( [e2e-predictor] new_e, [e2e-predictor] ( [e2e-predictor] OSError, [e2e-predictor] NewConnectionError, [e2e-predictor] TimeoutError, [e2e-predictor] SSLError, [e2e-predictor] HTTPException, [e2e-predictor] ), [e2e-predictor] ) and (conn and conn.proxy and not conn.has_connected_to_proxy): [e2e-predictor] new_e = _wrap_proxy_error(new_e, conn.proxy.scheme) [e2e-predictor] elif isinstance(new_e, (OSError, HTTPException)): [e2e-predictor] new_e = ProtocolError("Connection aborted.", new_e) [e2e-predictor] [e2e-predictor] retries = retries.increment( [e2e-predictor] method, url, error=new_e, _pool=self, _stacktrace=sys.exc_info()[2] [e2e-predictor] ) [e2e-predictor] retries.sleep() [e2e-predictor] [e2e-predictor] # Keep track of the error for the retry warning. [e2e-predictor] err = e [e2e-predictor] [e2e-predictor] finally: [e2e-predictor] if not clean_exit: [e2e-predictor] # We hit some kind of exception, handled or otherwise. We need [e2e-predictor] # to throw the connection away unless explicitly told not to. [e2e-predictor] # Close the connection, set the variable to None, and make sure [e2e-predictor] # we put the None back in the pool to avoid leaking it. [e2e-predictor] if conn: [e2e-predictor] conn.close() [e2e-predictor] conn = None [e2e-predictor] release_this_conn = True [e2e-predictor] [e2e-predictor] if release_this_conn: [e2e-predictor] # Put the connection back to be reused. If the connection is [e2e-predictor] # expired then it will be None, which will get replaced with a [e2e-predictor] # fresh connection during _get_conn. 
[e2e-predictor] self._put_conn(conn) [e2e-predictor] [e2e-predictor] if not conn: [e2e-predictor] # Try again [e2e-predictor] log.warning( [e2e-predictor] "Retrying (%r) after connection broken by '%r': %s", retries, err, url [e2e-predictor] ) [e2e-predictor] > return self.urlopen( [e2e-predictor] method, [e2e-predictor] url, [e2e-predictor] body, [e2e-predictor] headers, [e2e-predictor] retries, [e2e-predictor] redirect, [e2e-predictor] assert_same_host, [e2e-predictor] timeout=timeout, [e2e-predictor] pool_timeout=pool_timeout, [e2e-predictor] release_conn=release_conn, [e2e-predictor] chunked=chunked, [e2e-predictor] body_pos=body_pos, [e2e-predictor] preload_content=preload_content, [e2e-predictor] decode_content=decode_content, [e2e-predictor] **response_kw, [e2e-predictor] ) [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:871: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] method = 'POST' [e2e-predictor] url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices' [e2e-predictor] body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io..., "requests": {"cpu": "50m", "memory": "128Mi"}}, "storageUri": "gs://kfserving-examples/models/sklearn/1.0/model"}}}}' [e2e-predictor] headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'}) [e2e-predictor] retries = Retry(total=0, connect=None, read=None, redirect=None, status=None) [e2e-predictor] redirect = False, assert_same_host = False, timeout = None, pool_timeout = None [e2e-predictor] release_conn = True, chunked = False, body_pos = None, preload_content = True [e2e-predictor] decode_content = True, response_kw = {} [e2e-predictor] parsed_url = Url(scheme=None, 
auth=None, host=None, port=None, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices', query=None, fragment=None) [e2e-predictor] destination_scheme = None, conn = None, release_this_conn = True [e2e-predictor] http_tunnel_required = False [e2e-predictor] err = NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.c...a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)") [e2e-predictor] clean_exit = False [e2e-predictor] [e2e-predictor] def urlopen( # type: ignore[override] [e2e-predictor] self, [e2e-predictor] method: str, [e2e-predictor] url: str, [e2e-predictor] body: _TYPE_BODY | None = None, [e2e-predictor] headers: typing.Mapping[str, str] | None = None, [e2e-predictor] retries: Retry | bool | int | None = None, [e2e-predictor] redirect: bool = True, [e2e-predictor] assert_same_host: bool = True, [e2e-predictor] timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, [e2e-predictor] pool_timeout: int | None = None, [e2e-predictor] release_conn: bool | None = None, [e2e-predictor] chunked: bool = False, [e2e-predictor] body_pos: _TYPE_BODY_POSITION | None = None, [e2e-predictor] preload_content: bool = True, [e2e-predictor] decode_content: bool = True, [e2e-predictor] **response_kw: typing.Any, [e2e-predictor] ) -> BaseHTTPResponse: [e2e-predictor] """ [e2e-predictor] Get a connection from the pool and perform an HTTP request. This is the [e2e-predictor] lowest level call for making a request, so you'll need to specify all [e2e-predictor] the raw details. [e2e-predictor] [e2e-predictor] .. note:: [e2e-predictor] [e2e-predictor] More commonly, it's appropriate to use a convenience method [e2e-predictor] such as :meth:`request`. [e2e-predictor] [e2e-predictor] .. 
note:: [e2e-predictor] [e2e-predictor] `release_conn` will only behave as expected if [e2e-predictor] `preload_content=False` because we want to make [e2e-predictor] `preload_content=False` the default behaviour someday soon without [e2e-predictor] breaking backwards compatibility. [e2e-predictor] [e2e-predictor] :param method: [e2e-predictor] HTTP request method (such as GET, POST, PUT, etc.) [e2e-predictor] [e2e-predictor] :param url: [e2e-predictor] The URL to perform the request on. [e2e-predictor] [e2e-predictor] :param body: [e2e-predictor] Data to send in the request body, either :class:`str`, :class:`bytes`, [e2e-predictor] an iterable of :class:`str`/:class:`bytes`, or a file-like object. [e2e-predictor] [e2e-predictor] :param headers: [e2e-predictor] Dictionary of custom headers to send, such as User-Agent, [e2e-predictor] If-None-Match, etc. If None, pool headers are used. If provided, [e2e-predictor] these headers completely replace any pool-specific headers. [e2e-predictor] [e2e-predictor] :param retries: [e2e-predictor] Configure the number of retries to allow before raising a [e2e-predictor] :class:`~urllib3.exceptions.MaxRetryError` exception. [e2e-predictor] [e2e-predictor] If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a [e2e-predictor] :class:`~urllib3.util.retry.Retry` object for fine-grained control [e2e-predictor] over different types of retries. [e2e-predictor] Pass an integer number to retry connection errors that many times, [e2e-predictor] but no other types of errors. Pass zero to never retry. [e2e-predictor] [e2e-predictor] If ``False``, then retries are disabled and any exception is raised [e2e-predictor] immediately. Also, instead of raising a MaxRetryError on redirects, [e2e-predictor] the redirect response will be returned. [e2e-predictor] [e2e-predictor] :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 
[e2e-predictor] [e2e-predictor] :param redirect: [e2e-predictor] If True, automatically handle redirects (status codes 301, 302, [e2e-predictor] 303, 307, 308). Each redirect counts as a retry. Disabling retries [e2e-predictor] will disable redirect, too. [e2e-predictor] [e2e-predictor] :param assert_same_host: [e2e-predictor] If ``True``, will make sure that the host of the pool requests is [e2e-predictor] consistent else will raise HostChangedError. When ``False``, you can [e2e-predictor] use the pool on an HTTP proxy and request foreign hosts. [e2e-predictor] [e2e-predictor] :param timeout: [e2e-predictor] If specified, overrides the default timeout for this one [e2e-predictor] request. It may be a float (in seconds) or an instance of [e2e-predictor] :class:`urllib3.util.Timeout`. [e2e-predictor] [e2e-predictor] :param pool_timeout: [e2e-predictor] If set and the pool is set to block=True, then this method will [e2e-predictor] block for ``pool_timeout`` seconds and raise EmptyPoolError if no [e2e-predictor] connection is available within the time period. [e2e-predictor] [e2e-predictor] :param bool preload_content: [e2e-predictor] If True, the response's body will be preloaded into memory. [e2e-predictor] [e2e-predictor] :param bool decode_content: [e2e-predictor] If True, will attempt to decode the body based on the [e2e-predictor] 'content-encoding' header. [e2e-predictor] [e2e-predictor] :param release_conn: [e2e-predictor] If False, then the urlopen call will not release the connection [e2e-predictor] back into the pool once a response is received (but will release if [e2e-predictor] you read the entire contents of the response such as when [e2e-predictor] `preload_content=True`). This is useful if you're not preloading [e2e-predictor] the response's content immediately. You will need to call [e2e-predictor] ``r.release_conn()`` on the response ``r`` to return the connection [e2e-predictor] back into the pool. 
If None, it takes the value of ``preload_content`` [e2e-predictor] which defaults to ``True``. [e2e-predictor] [e2e-predictor] :param bool chunked: [e2e-predictor] If True, urllib3 will send the body using chunked transfer [e2e-predictor] encoding. Otherwise, urllib3 will send the body using the standard [e2e-predictor] content-length form. Defaults to False. [e2e-predictor] [e2e-predictor] :param int body_pos: [e2e-predictor] Position to seek to in file-like body in the event of a retry or [e2e-predictor] redirect. Typically this won't need to be set because urllib3 will [e2e-predictor] auto-populate the value when needed. [e2e-predictor] """ [e2e-predictor] parsed_url = parse_url(url) [e2e-predictor] destination_scheme = parsed_url.scheme [e2e-predictor] [e2e-predictor] if headers is None: [e2e-predictor] headers = self.headers [e2e-predictor] [e2e-predictor] if not isinstance(retries, Retry): [e2e-predictor] retries = Retry.from_int(retries, redirect=redirect, default=self.retries) [e2e-predictor] [e2e-predictor] if release_conn is None: [e2e-predictor] release_conn = preload_content [e2e-predictor] [e2e-predictor] # Check host [e2e-predictor] if assert_same_host and not self.is_same_host(url): [e2e-predictor] raise HostChangedError(self, url, retries) [e2e-predictor] [e2e-predictor] # Ensure that the URL we're connecting to is properly encoded [e2e-predictor] if url.startswith("/"): [e2e-predictor] url = to_str(_encode_target(url)) [e2e-predictor] else: [e2e-predictor] url = to_str(parsed_url.url) [e2e-predictor] [e2e-predictor] conn = None [e2e-predictor] [e2e-predictor] # Track whether `conn` needs to be released before [e2e-predictor] # returning/raising/recursing. Update this variable if necessary, and [e2e-predictor] # leave `release_conn` constant throughout the function. That way, if [e2e-predictor] # the function recurses, the original value of `release_conn` will be [e2e-predictor] # passed down into the recursive call, and its value will be respected. 
[e2e-predictor] # [e2e-predictor] # See issue #651 [1] for details. [e2e-predictor] # [e2e-predictor] # [1] [e2e-predictor] release_this_conn = release_conn [e2e-predictor] [e2e-predictor] http_tunnel_required = connection_requires_http_tunnel( [e2e-predictor] self.proxy, self.proxy_config, destination_scheme [e2e-predictor] ) [e2e-predictor] [e2e-predictor] # Merge the proxy headers. Only done when not using HTTP CONNECT. We [e2e-predictor] # have to copy the headers dict so we can safely change it without those [e2e-predictor] # changes being reflected in anyone else's copy. [e2e-predictor] if not http_tunnel_required: [e2e-predictor] headers = headers.copy() # type: ignore[attr-defined] [e2e-predictor] headers.update(self.proxy_headers) # type: ignore[union-attr] [e2e-predictor] [e2e-predictor] # Must keep the exception bound to a separate variable or else Python 3 [e2e-predictor] # complains about UnboundLocalError. [e2e-predictor] err = None [e2e-predictor] [e2e-predictor] # Keep track of whether we cleanly exited the except block. This [e2e-predictor] # ensures we do proper cleanup in finally. [e2e-predictor] clean_exit = False [e2e-predictor] [e2e-predictor] # Rewind body position, if needed. Record current position [e2e-predictor] # for future rewinds in the event of a redirect/retry. [e2e-predictor] body_pos = set_file_position(body, body_pos) [e2e-predictor] [e2e-predictor] try: [e2e-predictor] # Request a connection from the queue. [e2e-predictor] timeout_obj = self._get_timeout(timeout) [e2e-predictor] conn = self._get_conn(timeout=pool_timeout) [e2e-predictor] [e2e-predictor] conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] [e2e-predictor] [e2e-predictor] # Is this a closed/new connection that requires CONNECT tunnelling? 
[e2e-predictor] if self.proxy is not None and http_tunnel_required and conn.is_closed: [e2e-predictor] try: [e2e-predictor] self._prepare_proxy(conn) [e2e-predictor] except (BaseSSLError, OSError, SocketTimeout) as e: [e2e-predictor] self._raise_timeout( [e2e-predictor] err=e, url=self.proxy.url, timeout_value=conn.timeout [e2e-predictor] ) [e2e-predictor] raise [e2e-predictor] [e2e-predictor] # If we're going to release the connection in ``finally:``, then [e2e-predictor] # the response doesn't need to know about the connection. Otherwise [e2e-predictor] # it will also try to release it and we'll have a double-release [e2e-predictor] # mess. [e2e-predictor] response_conn = conn if not release_conn else None [e2e-predictor] [e2e-predictor] # Make the request on the HTTPConnection object [e2e-predictor] response = self._make_request( [e2e-predictor] conn, [e2e-predictor] method, [e2e-predictor] url, [e2e-predictor] timeout=timeout_obj, [e2e-predictor] body=body, [e2e-predictor] headers=headers, [e2e-predictor] chunked=chunked, [e2e-predictor] retries=retries, [e2e-predictor] response_conn=response_conn, [e2e-predictor] preload_content=preload_content, [e2e-predictor] decode_content=decode_content, [e2e-predictor] **response_kw, [e2e-predictor] ) [e2e-predictor] [e2e-predictor] # Everything went great! [e2e-predictor] clean_exit = True [e2e-predictor] [e2e-predictor] except EmptyPoolError: [e2e-predictor] # Didn't get a connection from the pool, no need to clean up [e2e-predictor] clean_exit = True [e2e-predictor] release_this_conn = False [e2e-predictor] raise [e2e-predictor] [e2e-predictor] except ( [e2e-predictor] TimeoutError, [e2e-predictor] HTTPException, [e2e-predictor] OSError, [e2e-predictor] ProtocolError, [e2e-predictor] BaseSSLError, [e2e-predictor] SSLError, [e2e-predictor] CertificateError, [e2e-predictor] ProxyError, [e2e-predictor] ) as e: [e2e-predictor] # Discard the connection for these exceptions. 
It will be [e2e-predictor] # replaced during the next _get_conn() call. [e2e-predictor] clean_exit = False [e2e-predictor] new_e: Exception = e [e2e-predictor] if isinstance(e, (BaseSSLError, CertificateError)): [e2e-predictor] new_e = SSLError(e) [e2e-predictor] if isinstance( [e2e-predictor] new_e, [e2e-predictor] ( [e2e-predictor] OSError, [e2e-predictor] NewConnectionError, [e2e-predictor] TimeoutError, [e2e-predictor] SSLError, [e2e-predictor] HTTPException, [e2e-predictor] ), [e2e-predictor] ) and (conn and conn.proxy and not conn.has_connected_to_proxy): [e2e-predictor] new_e = _wrap_proxy_error(new_e, conn.proxy.scheme) [e2e-predictor] elif isinstance(new_e, (OSError, HTTPException)): [e2e-predictor] new_e = ProtocolError("Connection aborted.", new_e) [e2e-predictor] [e2e-predictor] retries = retries.increment( [e2e-predictor] method, url, error=new_e, _pool=self, _stacktrace=sys.exc_info()[2] [e2e-predictor] ) [e2e-predictor] retries.sleep() [e2e-predictor] [e2e-predictor] # Keep track of the error for the retry warning. [e2e-predictor] err = e [e2e-predictor] [e2e-predictor] finally: [e2e-predictor] if not clean_exit: [e2e-predictor] # We hit some kind of exception, handled or otherwise. We need [e2e-predictor] # to throw the connection away unless explicitly told not to. [e2e-predictor] # Close the connection, set the variable to None, and make sure [e2e-predictor] # we put the None back in the pool to avoid leaking it. [e2e-predictor] if conn: [e2e-predictor] conn.close() [e2e-predictor] conn = None [e2e-predictor] release_this_conn = True [e2e-predictor] [e2e-predictor] if release_this_conn: [e2e-predictor] # Put the connection back to be reused. If the connection is [e2e-predictor] # expired then it will be None, which will get replaced with a [e2e-predictor] # fresh connection during _get_conn. 
[e2e-predictor] self._put_conn(conn) [e2e-predictor] [e2e-predictor] if not conn: [e2e-predictor] # Try again [e2e-predictor] log.warning( [e2e-predictor] "Retrying (%r) after connection broken by '%r': %s", retries, err, url [e2e-predictor] ) [e2e-predictor] > return self.urlopen( [e2e-predictor] method, [e2e-predictor] url, [e2e-predictor] body, [e2e-predictor] headers, [e2e-predictor] retries, [e2e-predictor] redirect, [e2e-predictor] assert_same_host, [e2e-predictor] timeout=timeout, [e2e-predictor] pool_timeout=pool_timeout, [e2e-predictor] release_conn=release_conn, [e2e-predictor] chunked=chunked, [e2e-predictor] body_pos=body_pos, [e2e-predictor] preload_content=preload_content, [e2e-predictor] decode_content=decode_content, [e2e-predictor] **response_kw, [e2e-predictor] ) [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:871: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] method = 'POST' [e2e-predictor] url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices' [e2e-predictor] body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io..., "requests": {"cpu": "50m", "memory": "128Mi"}}, "storageUri": "gs://kfserving-examples/models/sklearn/1.0/model"}}}}' [e2e-predictor] headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'}) [e2e-predictor] retries = Retry(total=0, connect=None, read=None, redirect=None, status=None) [e2e-predictor] redirect = False, assert_same_host = False, timeout = None, pool_timeout = None [e2e-predictor] release_conn = True, chunked = False, body_pos = None, preload_content = True [e2e-predictor] decode_content = True, response_kw = {} [e2e-predictor] parsed_url = Url(scheme=None, 
auth=None, host=None, port=None, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices', query=None, fragment=None) [e2e-predictor] destination_scheme = None, conn = None, release_this_conn = True [e2e-predictor] http_tunnel_required = False, err = None, clean_exit = False [e2e-predictor] [e2e-predictor] def urlopen( # type: ignore[override] [e2e-predictor] self, [e2e-predictor] method: str, [e2e-predictor] url: str, [e2e-predictor] body: _TYPE_BODY | None = None, [e2e-predictor] headers: typing.Mapping[str, str] | None = None, [e2e-predictor] retries: Retry | bool | int | None = None, [e2e-predictor] redirect: bool = True, [e2e-predictor] assert_same_host: bool = True, [e2e-predictor] timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, [e2e-predictor] pool_timeout: int | None = None, [e2e-predictor] release_conn: bool | None = None, [e2e-predictor] chunked: bool = False, [e2e-predictor] body_pos: _TYPE_BODY_POSITION | None = None, [e2e-predictor] preload_content: bool = True, [e2e-predictor] decode_content: bool = True, [e2e-predictor] **response_kw: typing.Any, [e2e-predictor] ) -> BaseHTTPResponse: [e2e-predictor] """ [e2e-predictor] Get a connection from the pool and perform an HTTP request. This is the [e2e-predictor] lowest level call for making a request, so you'll need to specify all [e2e-predictor] the raw details. [e2e-predictor] [e2e-predictor] .. note:: [e2e-predictor] [e2e-predictor] More commonly, it's appropriate to use a convenience method [e2e-predictor] such as :meth:`request`. [e2e-predictor] [e2e-predictor] .. note:: [e2e-predictor] [e2e-predictor] `release_conn` will only behave as expected if [e2e-predictor] `preload_content=False` because we want to make [e2e-predictor] `preload_content=False` the default behaviour someday soon without [e2e-predictor] breaking backwards compatibility. [e2e-predictor] [e2e-predictor] :param method: [e2e-predictor] HTTP request method (such as GET, POST, PUT, etc.) 
[e2e-predictor] [e2e-predictor] :param url: [e2e-predictor] The URL to perform the request on. [e2e-predictor] [e2e-predictor] :param body: [e2e-predictor] Data to send in the request body, either :class:`str`, :class:`bytes`, [e2e-predictor] an iterable of :class:`str`/:class:`bytes`, or a file-like object. [e2e-predictor] [e2e-predictor] :param headers: [e2e-predictor] Dictionary of custom headers to send, such as User-Agent, [e2e-predictor] If-None-Match, etc. If None, pool headers are used. If provided, [e2e-predictor] these headers completely replace any pool-specific headers. [e2e-predictor] [e2e-predictor] :param retries: [e2e-predictor] Configure the number of retries to allow before raising a [e2e-predictor] :class:`~urllib3.exceptions.MaxRetryError` exception. [e2e-predictor] [e2e-predictor] If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a [e2e-predictor] :class:`~urllib3.util.retry.Retry` object for fine-grained control [e2e-predictor] over different types of retries. [e2e-predictor] Pass an integer number to retry connection errors that many times, [e2e-predictor] but no other types of errors. Pass zero to never retry. [e2e-predictor] [e2e-predictor] If ``False``, then retries are disabled and any exception is raised [e2e-predictor] immediately. Also, instead of raising a MaxRetryError on redirects, [e2e-predictor] the redirect response will be returned. [e2e-predictor] [e2e-predictor] :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. [e2e-predictor] [e2e-predictor] :param redirect: [e2e-predictor] If True, automatically handle redirects (status codes 301, 302, [e2e-predictor] 303, 307, 308). Each redirect counts as a retry. Disabling retries [e2e-predictor] will disable redirect, too. [e2e-predictor] [e2e-predictor] :param assert_same_host: [e2e-predictor] If ``True``, will make sure that the host of the pool requests is [e2e-predictor] consistent else will raise HostChangedError. 
When ``False``, you can [e2e-predictor] use the pool on an HTTP proxy and request foreign hosts. [e2e-predictor] [e2e-predictor] :param timeout: [e2e-predictor] If specified, overrides the default timeout for this one [e2e-predictor] request. It may be a float (in seconds) or an instance of [e2e-predictor] :class:`urllib3.util.Timeout`. [e2e-predictor] [e2e-predictor] :param pool_timeout: [e2e-predictor] If set and the pool is set to block=True, then this method will [e2e-predictor] block for ``pool_timeout`` seconds and raise EmptyPoolError if no [e2e-predictor] connection is available within the time period. [e2e-predictor] [e2e-predictor] :param bool preload_content: [e2e-predictor] If True, the response's body will be preloaded into memory. [e2e-predictor] [e2e-predictor] :param bool decode_content: [e2e-predictor] If True, will attempt to decode the body based on the [e2e-predictor] 'content-encoding' header. [e2e-predictor] [e2e-predictor] :param release_conn: [e2e-predictor] If False, then the urlopen call will not release the connection [e2e-predictor] back into the pool once a response is received (but will release if [e2e-predictor] you read the entire contents of the response such as when [e2e-predictor] `preload_content=True`). This is useful if you're not preloading [e2e-predictor] the response's content immediately. You will need to call [e2e-predictor] ``r.release_conn()`` on the response ``r`` to return the connection [e2e-predictor] back into the pool. If None, it takes the value of ``preload_content`` [e2e-predictor] which defaults to ``True``. [e2e-predictor] [e2e-predictor] :param bool chunked: [e2e-predictor] If True, urllib3 will send the body using chunked transfer [e2e-predictor] encoding. Otherwise, urllib3 will send the body using the standard [e2e-predictor] content-length form. Defaults to False. 
[e2e-predictor] [e2e-predictor] :param int body_pos: [e2e-predictor] Position to seek to in file-like body in the event of a retry or [e2e-predictor] redirect. Typically this won't need to be set because urllib3 will [e2e-predictor] auto-populate the value when needed. [e2e-predictor] """ [e2e-predictor] parsed_url = parse_url(url) [e2e-predictor] destination_scheme = parsed_url.scheme [e2e-predictor] [e2e-predictor] if headers is None: [e2e-predictor] headers = self.headers [e2e-predictor] [e2e-predictor] if not isinstance(retries, Retry): [e2e-predictor] retries = Retry.from_int(retries, redirect=redirect, default=self.retries) [e2e-predictor] [e2e-predictor] if release_conn is None: [e2e-predictor] release_conn = preload_content [e2e-predictor] [e2e-predictor] # Check host [e2e-predictor] if assert_same_host and not self.is_same_host(url): [e2e-predictor] raise HostChangedError(self, url, retries) [e2e-predictor] [e2e-predictor] # Ensure that the URL we're connecting to is properly encoded [e2e-predictor] if url.startswith("/"): [e2e-predictor] url = to_str(_encode_target(url)) [e2e-predictor] else: [e2e-predictor] url = to_str(parsed_url.url) [e2e-predictor] [e2e-predictor] conn = None [e2e-predictor] [e2e-predictor] # Track whether `conn` needs to be released before [e2e-predictor] # returning/raising/recursing. Update this variable if necessary, and [e2e-predictor] # leave `release_conn` constant throughout the function. That way, if [e2e-predictor] # the function recurses, the original value of `release_conn` will be [e2e-predictor] # passed down into the recursive call, and its value will be respected. [e2e-predictor] # [e2e-predictor] # See issue #651 [1] for details. 
    #
    # [1]
    release_this_conn = release_conn

    http_tunnel_required = connection_requires_http_tunnel(
        self.proxy, self.proxy_config, destination_scheme
    )

    # Merge the proxy headers. Only done when not using HTTP CONNECT. We
    # have to copy the headers dict so we can safely change it without those
    # changes being reflected in anyone else's copy.
    if not http_tunnel_required:
        headers = headers.copy()  # type: ignore[attr-defined]
        headers.update(self.proxy_headers)  # type: ignore[union-attr]

    # Must keep the exception bound to a separate variable or else Python 3
    # complains about UnboundLocalError.
    err = None

    # Keep track of whether we cleanly exited the except block. This
    # ensures we do proper cleanup in finally.
    clean_exit = False

    # Rewind body position, if needed. Record current position
    # for future rewinds in the event of a redirect/retry.
    body_pos = set_file_position(body, body_pos)

    try:
        # Request a connection from the queue.
        timeout_obj = self._get_timeout(timeout)
        conn = self._get_conn(timeout=pool_timeout)

        conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]

        # Is this a closed/new connection that requires CONNECT tunnelling?
        if self.proxy is not None and http_tunnel_required and conn.is_closed:
            try:
                self._prepare_proxy(conn)
            except (BaseSSLError, OSError, SocketTimeout) as e:
                self._raise_timeout(
                    err=e, url=self.proxy.url, timeout_value=conn.timeout
                )
                raise

        # If we're going to release the connection in ``finally:``, then
        # the response doesn't need to know about the connection. Otherwise
        # it will also try to release it and we'll have a double-release
        # mess.
        response_conn = conn if not release_conn else None

        # Make the request on the HTTPConnection object
        response = self._make_request(
            conn,
            method,
            url,
            timeout=timeout_obj,
            body=body,
            headers=headers,
            chunked=chunked,
            retries=retries,
            response_conn=response_conn,
            preload_content=preload_content,
            decode_content=decode_content,
            **response_kw,
        )

        # Everything went great!
        clean_exit = True

    except EmptyPoolError:
        # Didn't get a connection from the pool, no need to clean up
        clean_exit = True
        release_this_conn = False
        raise

    except (
        TimeoutError,
        HTTPException,
        OSError,
        ProtocolError,
        BaseSSLError,
        SSLError,
        CertificateError,
        ProxyError,
    ) as e:
        # Discard the connection for these exceptions.
        # It will be
        # replaced during the next _get_conn() call.
        clean_exit = False
        new_e: Exception = e
        if isinstance(e, (BaseSSLError, CertificateError)):
            new_e = SSLError(e)
        if isinstance(
            new_e,
            (
                OSError,
                NewConnectionError,
                TimeoutError,
                SSLError,
                HTTPException,
            ),
        ) and (conn and conn.proxy and not conn.has_connected_to_proxy):
            new_e = _wrap_proxy_error(new_e, conn.proxy.scheme)
        elif isinstance(new_e, (OSError, HTTPException)):
            new_e = ProtocolError("Connection aborted.", new_e)

>       retries = retries.increment(
            method, url, error=new_e, _pool=self, _stacktrace=sys.exc_info()[2]
        )

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:841:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = Retry(total=0, connect=None, read=None, redirect=None, status=None)
method = 'POST'
url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
response = None
error = NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.c...a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")
_pool =
_stacktrace =

def increment(
    self,
    method: str | None = None,
    url: str | None = None,
    response: BaseHTTPResponse | None = None,
    error: Exception | None = None,
    _pool: ConnectionPool | None = None,
    _stacktrace: TracebackType | None = None,
) -> Self:
    """Return a new Retry object with incremented retry counters.

    :param response: A response object, or None, if the server did not
        return a response.
    :type response: :class:`~urllib3.response.BaseHTTPResponse`
    :param Exception error: An error encountered during the request, or
        None if the response was received successfully.

    :return: A new ``Retry`` object.
    """
    if self.total is False and error:
        # Disabled, indicate to re-raise the error.
        raise reraise(type(error), error, _stacktrace)

    total = self.total
    if total is not None:
        total -= 1

    connect = self.connect
    read = self.read
    redirect = self.redirect
    status_count = self.status
    other = self.other
    cause = "unknown"
    status = None
    redirect_location = None

    if error and self._is_connection_error(error):
        # Connect retry?
        if connect is False:
            raise reraise(type(error), error, _stacktrace)
        elif connect is not None:
            connect -= 1

    elif error and self._is_read_error(error):
        # Read retry?
        if read is False or method is None or not self._is_method_retryable(method):
            raise reraise(type(error), error, _stacktrace)
        elif read is not None:
            read -= 1

    elif error:
        # Other retry?
        if other is not None:
            other -= 1

    elif response and response.get_redirect_location():
        # Redirect retry?
        if redirect is not None:
            redirect -= 1
        cause = "too many redirects"
        response_redirect_location = response.get_redirect_location()
        if response_redirect_location:
            redirect_location = response_redirect_location
        status = response.status

    else:
        # Incrementing because of a server error like a 500 in
        # status_forcelist and the given method is in the allowed_methods
        cause = ResponseError.GENERIC_ERROR
        if response and response.status:
            if status_count is not None:
                status_count -= 1
            cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
            status = response.status

    history = self.history + (
        RequestHistory(method, url, error, status, redirect_location),
    )

    new_retry = self.new(
        total=total,
        connect=connect,
        read=read,
        redirect=redirect,
        status=status_count,
        other=other,
        history=history,
    )

    if new_retry.is_exhausted():
        reason = error or ResponseError(cause)
>       raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
E       urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Max retries exceeded with url: /apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices (Caused by NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)"))

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError
------------------------------ Captured log call -------------------------------
WARNING  urllib3.connectionpool:connectionpool.py:868 Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")': /apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices
WARNING  urllib3.connectionpool:connectionpool.py:868 Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")': /apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices
WARNING  urllib3.connectionpool:connectionpool.py:868 Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")': /apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices
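The frame locals above (`Retry(total=0, ...)`) together with the three "Retrying (...)" warnings show the retry budget being decremented once per `NameResolutionError` until `is_exhausted()` trips and `MaxRetryError` is raised. The counting can be sketched with a simplified model (this is an illustration, not urllib3's actual `Retry` class; the `MaxRetryError` and `increment` names here are local stand-ins):

```python
class MaxRetryError(Exception):
    """Stand-in for urllib3.exceptions.MaxRetryError."""


def increment(total: int) -> int:
    """Spend one unit of the retry budget; raise once it goes negative."""
    total -= 1
    if total < 0:  # mirrors the Retry.is_exhausted() check
        raise MaxRetryError("Max retries exceeded")
    return total


total = 3  # default budget, as with Retry.DEFAULT
attempts = 0
while True:
    attempts += 1
    try:
        # Every attempt fails the same way: the API server host never resolves.
        raise OSError("[Errno -2] Name or service not known")
    except OSError:
        try:
            total = increment(total)
        except MaxRetryError:
            break

print(attempts)  # 4: the original request plus three retries
```

This matches the captured log: three warnings at `total=2`, `total=1`, `total=0`, then the fourth failure exhausts the budget and surfaces as `MaxRetryError`.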
___________________________ test_sklearn_v2_mlserver ___________________________
[gw0] linux -- Python 3.11.13 /workspace/source/python/kserve/.venv/bin/python

self =

def _new_conn(self) -> socket.socket:
    """Establish a socket connection and set nodelay settings on it.

    :return: New socket connection.
    """
    try:
>       sock = connection.create_connection(
            (self._dns_host, self.port),
            self.timeout,
            source_address=self.source_address,
            socket_options=self.socket_options,
        )

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connection.py:204:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

address = ('a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', 6443)
timeout = None, source_address = None, socket_options = [(6, 1, 1)]

def create_connection(
    address: tuple[str, int],
    timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
    source_address: tuple[str, int] | None = None,
    socket_options: _TYPE_SOCKET_OPTIONS | None = None,
) -> socket.socket:
    """Connect to *address* and return the socket object.

    Convenience function. Connect to *address* (a 2-tuple ``(host,
    port)``) and return the socket object. Passing the optional
    *timeout* parameter will set the timeout on the socket instance
    before attempting to connect. If no *timeout* is supplied, the
    global default timeout setting returned by :func:`socket.getdefaulttimeout`
    is used. If *source_address* is set it must be a tuple of (host, port)
    for the socket to bind as a source address before making the connection.
    An host of '' or port 0 tells the OS to use the default.
    """

    host, port = address
    if host.startswith("["):
        host = host.strip("[]")
    err = None

    # Using the value from allowed_gai_family() in the context of getaddrinfo lets
    # us select whether to work with IPv4 DNS records, IPv6 records, or both.
    # The original create_connection function always returns all records.
    family = allowed_gai_family()

    try:
        host.encode("idna")
    except UnicodeError:
        raise LocationParseError(f"'{host}', label empty or too long") from None

>   for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/util/connection.py:60:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

host = 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com'
port = 6443, family =
type = , proto = 0, flags = 0

def getaddrinfo(host, port, family=0, type=0, proto=0, flags=0):
    """Resolve host and port into list of address info entries.

    Translate the host/port argument into a sequence of 5-tuples that contain
    all the necessary arguments for creating a socket connected to that service.

    host is a domain name, a string representation of an IPv4/v6 address or
    None. port is a string service name such as 'http', a numeric port number or
    None. By passing None as the value of host and port, you can pass NULL to
    the underlying C API.

    The family, type and proto arguments can be optionally specified in order to
    narrow the list of addresses returned. Passing zero as a value for each of
    these arguments selects the full range of results.
    """
    # We override this function since we want to translate the numeric family
    # and socket type values to enum constants.
    addrlist = []
>   for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
E   socket.gaierror: [Errno -2] Name or service not known

/usr/lib64/python3.11/socket.py:974: gaierror

The above exception was the direct cause of the following exception:

self =
method = 'POST'
url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io..., "requests": {"cpu": "50m", "memory": "128Mi"}}, "storageUri": "gs://kfserving-examples/models/sklearn/1.0/model"}}}}'
headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'})
retries = Retry(total=0, connect=None, read=None, redirect=None, status=None)
redirect = False, assert_same_host = False, timeout = None, pool_timeout = None
release_conn = True, chunked = False, body_pos = None, preload_content = True
decode_content = True, response_kw = {}
parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices', query=None, fragment=None)
destination_scheme = None, conn = None, release_this_conn = True
http_tunnel_required = False, err = None, clean_exit = False

def urlopen(  # type: ignore[override]
    self,
    method: str,
    url: str,
    body: _TYPE_BODY | None = None,
    headers: typing.Mapping[str, str] | None = None,
    retries: Retry | bool | int | None = None,
    redirect: bool = True,
    assert_same_host: bool = True,
    timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
    pool_timeout: int | None = None,
    release_conn: bool | None = None,
    chunked: bool = False,
    body_pos: _TYPE_BODY_POSITION | None = None,
    preload_content: bool = True,
    decode_content: bool = True,
    **response_kw: typing.Any,
) -> BaseHTTPResponse:
    """
    Get a connection from the pool and perform an HTTP request. This is the
    lowest level call for making a request, so you'll need to specify all
    the raw details.

    .. note::

       More commonly, it's appropriate to use a convenience method
       such as :meth:`request`.

    .. note::

       `release_conn` will only behave as expected if
       `preload_content=False` because we want to make
       `preload_content=False` the default behaviour someday soon without
       breaking backwards compatibility.

    :param method:
        HTTP request method (such as GET, POST, PUT, etc.)

    :param url:
        The URL to perform the request on.

    :param body:
        Data to send in the request body, either :class:`str`, :class:`bytes`,
        an iterable of :class:`str`/:class:`bytes`, or a file-like object.

    :param headers:
        Dictionary of custom headers to send, such as User-Agent,
        If-None-Match, etc. If None, pool headers are used. If provided,
        these headers completely replace any pool-specific headers.

    :param retries:
        Configure the number of retries to allow before raising a
        :class:`~urllib3.exceptions.MaxRetryError` exception.

        If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a
        :class:`~urllib3.util.retry.Retry` object for fine-grained control
        over different types of retries.
        Pass an integer number to retry connection errors that many times,
        but no other types of errors. Pass zero to never retry.

        If ``False``, then retries are disabled and any exception is raised
        immediately. Also, instead of raising a MaxRetryError on redirects,
        the redirect response will be returned.

    :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.

    :param redirect:
        If True, automatically handle redirects (status codes 301, 302,
        303, 307, 308). Each redirect counts as a retry. Disabling retries
        will disable redirect, too.
    :param assert_same_host:
        If ``True``, will make sure that the host of the pool requests is
        consistent else will raise HostChangedError. When ``False``, you can
        use the pool on an HTTP proxy and request foreign hosts.

    :param timeout:
        If specified, overrides the default timeout for this one
        request. It may be a float (in seconds) or an instance of
        :class:`urllib3.util.Timeout`.

    :param pool_timeout:
        If set and the pool is set to block=True, then this method will
        block for ``pool_timeout`` seconds and raise EmptyPoolError if no
        connection is available within the time period.

    :param bool preload_content:
        If True, the response's body will be preloaded into memory.

    :param bool decode_content:
        If True, will attempt to decode the body based on the
        'content-encoding' header.

    :param release_conn:
        If False, then the urlopen call will not release the connection
        back into the pool once a response is received (but will release if
        you read the entire contents of the response such as when
        `preload_content=True`). This is useful if you're not preloading
        the response's content immediately. You will need to call
        ``r.release_conn()`` on the response ``r`` to return the connection
        back into the pool. If None, it takes the value of ``preload_content``
        which defaults to ``True``.

    :param bool chunked:
        If True, urllib3 will send the body using chunked transfer
        encoding. Otherwise, urllib3 will send the body using the standard
        content-length form. Defaults to False.

    :param int body_pos:
        Position to seek to in file-like body in the event of a retry or
        redirect. Typically this won't need to be set because urllib3 will
        auto-populate the value when needed.
    """
    parsed_url = parse_url(url)
    destination_scheme = parsed_url.scheme

    if headers is None:
        headers = self.headers

    if not isinstance(retries, Retry):
        retries = Retry.from_int(retries, redirect=redirect, default=self.retries)

    if release_conn is None:
        release_conn = preload_content

    # Check host
    if assert_same_host and not self.is_same_host(url):
        raise HostChangedError(self, url, retries)

    # Ensure that the URL we're connecting to is properly encoded
    if url.startswith("/"):
        url = to_str(_encode_target(url))
    else:
        url = to_str(parsed_url.url)

    conn = None

    # Track whether `conn` needs to be released before
    # returning/raising/recursing. Update this variable if necessary, and
    # leave `release_conn` constant throughout the function. That way, if
    # the function recurses, the original value of `release_conn` will be
    # passed down into the recursive call, and its value will be respected.
    #
    # See issue #651 [1] for details.
    #
    # [1]
    release_this_conn = release_conn

    http_tunnel_required = connection_requires_http_tunnel(
        self.proxy, self.proxy_config, destination_scheme
    )

    # Merge the proxy headers. Only done when not using HTTP CONNECT. We
    # have to copy the headers dict so we can safely change it without those
    # changes being reflected in anyone else's copy.
    if not http_tunnel_required:
        headers = headers.copy()  # type: ignore[attr-defined]
        headers.update(self.proxy_headers)  # type: ignore[union-attr]

    # Must keep the exception bound to a separate variable or else Python 3
    # complains about UnboundLocalError.
    err = None

    # Keep track of whether we cleanly exited the except block. This
    # ensures we do proper cleanup in finally.
    clean_exit = False

    # Rewind body position, if needed. Record current position
    # for future rewinds in the event of a redirect/retry.
    body_pos = set_file_position(body, body_pos)

    try:
        # Request a connection from the queue.
        timeout_obj = self._get_timeout(timeout)
        conn = self._get_conn(timeout=pool_timeout)

        conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]

        # Is this a closed/new connection that requires CONNECT tunnelling?
        if self.proxy is not None and http_tunnel_required and conn.is_closed:
            try:
                self._prepare_proxy(conn)
            except (BaseSSLError, OSError, SocketTimeout) as e:
                self._raise_timeout(
                    err=e, url=self.proxy.url, timeout_value=conn.timeout
                )
                raise

        # If we're going to release the connection in ``finally:``, then
        # the response doesn't need to know about the connection. Otherwise
        # it will also try to release it and we'll have a double-release
        # mess.
        response_conn = conn if not release_conn else None

        # Make the request on the HTTPConnection object
>       response = self._make_request(
            conn,
            method,
            url,
            timeout=timeout_obj,
            body=body,
            headers=headers,
            chunked=chunked,
            retries=retries,
            response_conn=response_conn,
            preload_content=preload_content,
            decode_content=decode_content,
            **response_kw,
        )

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:787:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
conn =
method = 'POST'
url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io..., "requests": {"cpu": "50m", "memory": "128Mi"}}, "storageUri": "gs://kfserving-examples/models/sklearn/1.0/model"}}}}'
headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'})
retries = Retry(total=0, connect=None, read=None, redirect=None, status=None)
timeout = Timeout(connect=None, read=None, total=None), chunked = False
response_conn = None, preload_content = True, decode_content = True
enforce_content_length = True

def _make_request(
    self,
    conn: BaseHTTPConnection,
    method: str,
    url: str,
    body: _TYPE_BODY | None = None,
    headers: typing.Mapping[str, str] | None = None,
    retries: Retry | None = None,
    timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
    chunked: bool = False,
    response_conn: BaseHTTPConnection | None = None,
    preload_content: bool = True,
    decode_content: bool = True,
    enforce_content_length: bool = True,
) -> BaseHTTPResponse:
    """
    Perform a request on a given urllib connection object taken from our
    pool.

    :param conn:
        a connection from one of our connection pools

    :param method:
        HTTP request method (such as GET, POST, PUT, etc.)

    :param url:
        The URL to perform the request on.

    :param body:
        Data to send in the request body, either :class:`str`, :class:`bytes`,
        an iterable of :class:`str`/:class:`bytes`, or a file-like object.

    :param headers:
        Dictionary of custom headers to send, such as User-Agent,
        If-None-Match, etc. If None, pool headers are used.
        If provided,
        these headers completely replace any pool-specific headers.

    :param retries:
        Configure the number of retries to allow before raising a
        :class:`~urllib3.exceptions.MaxRetryError` exception.

        Pass ``None`` to retry until you receive a response. Pass a
        :class:`~urllib3.util.retry.Retry` object for fine-grained control
        over different types of retries.
        Pass an integer number to retry connection errors that many times,
        but no other types of errors. Pass zero to never retry.

        If ``False``, then retries are disabled and any exception is raised
        immediately. Also, instead of raising a MaxRetryError on redirects,
        the redirect response will be returned.

    :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.

    :param timeout:
        If specified, overrides the default timeout for this one
        request. It may be a float (in seconds) or an instance of
        :class:`urllib3.util.Timeout`.

    :param chunked:
        If True, urllib3 will send the body using chunked transfer
        encoding. Otherwise, urllib3 will send the body using the standard
        content-length form. Defaults to False.

    :param response_conn:
        Set this to ``None`` if you will handle releasing the connection or
        set the connection to have the response release it.

    :param preload_content:
        If True, the response's body will be preloaded during construction.

    :param decode_content:
        If True, will attempt to decode the body based on the
        'content-encoding' header.

    :param enforce_content_length:
        Enforce content length checking. Body returned by server must match
        value of Content-Length header, if present. Otherwise, raise error.
    """
    self.num_requests += 1

    timeout_obj = self._get_timeout(timeout)
    timeout_obj.start_connect()
    conn.timeout = Timeout.resolve_default_timeout(timeout_obj.connect_timeout)

    try:
        # Trigger any extra validation we need to do.
        try:
            self._validate_conn(conn)
        except (SocketTimeout, BaseSSLError) as e:
            self._raise_timeout(err=e, url=url, timeout_value=conn.timeout)
            raise

        # _validate_conn() starts the connection to an HTTPS proxy
        # so we need to wrap errors with 'ProxyError' here too.
[e2e-predictor] except ( [e2e-predictor] OSError, [e2e-predictor] NewConnectionError, [e2e-predictor] TimeoutError, [e2e-predictor] BaseSSLError, [e2e-predictor] CertificateError, [e2e-predictor] SSLError, [e2e-predictor] ) as e: [e2e-predictor] new_e: Exception = e [e2e-predictor] if isinstance(e, (BaseSSLError, CertificateError)): [e2e-predictor] new_e = SSLError(e) [e2e-predictor] # If the connection didn't successfully connect to it's proxy [e2e-predictor] # then there [e2e-predictor] if isinstance( [e2e-predictor] new_e, (OSError, NewConnectionError, TimeoutError, SSLError) [e2e-predictor] ) and (conn and conn.proxy and not conn.has_connected_to_proxy): [e2e-predictor] new_e = _wrap_proxy_error(new_e, conn.proxy.scheme) [e2e-predictor] > raise new_e [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:488: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] conn = [e2e-predictor] method = 'POST' [e2e-predictor] url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices' [e2e-predictor] body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io..., "requests": {"cpu": "50m", "memory": "128Mi"}}, "storageUri": "gs://kfserving-examples/models/sklearn/1.0/model"}}}}' [e2e-predictor] headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'}) [e2e-predictor] retries = Retry(total=0, connect=None, read=None, redirect=None, status=None) [e2e-predictor] timeout = Timeout(connect=None, read=None, total=None), chunked = False [e2e-predictor] response_conn = None, preload_content = True, decode_content = True [e2e-predictor] enforce_content_length = True [e2e-predictor] [e2e-predictor] def _make_request( [e2e-predictor] self, 
[e2e-predictor] conn: BaseHTTPConnection, [e2e-predictor] method: str, [e2e-predictor] url: str, [e2e-predictor] body: _TYPE_BODY | None = None, [e2e-predictor] headers: typing.Mapping[str, str] | None = None, [e2e-predictor] retries: Retry | None = None, [e2e-predictor] timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, [e2e-predictor] chunked: bool = False, [e2e-predictor] response_conn: BaseHTTPConnection | None = None, [e2e-predictor] preload_content: bool = True, [e2e-predictor] decode_content: bool = True, [e2e-predictor] enforce_content_length: bool = True, [e2e-predictor] ) -> BaseHTTPResponse: [e2e-predictor] """ [e2e-predictor] Perform a request on a given urllib connection object taken from our [e2e-predictor] pool. [e2e-predictor] [e2e-predictor] :param conn: [e2e-predictor] a connection from one of our connection pools [e2e-predictor] [e2e-predictor] :param method: [e2e-predictor] HTTP request method (such as GET, POST, PUT, etc.) [e2e-predictor] [e2e-predictor] :param url: [e2e-predictor] The URL to perform the request on. [e2e-predictor] [e2e-predictor] :param body: [e2e-predictor] Data to send in the request body, either :class:`str`, :class:`bytes`, [e2e-predictor] an iterable of :class:`str`/:class:`bytes`, or a file-like object. [e2e-predictor] [e2e-predictor] :param headers: [e2e-predictor] Dictionary of custom headers to send, such as User-Agent, [e2e-predictor] If-None-Match, etc. If None, pool headers are used. If provided, [e2e-predictor] these headers completely replace any pool-specific headers. [e2e-predictor] [e2e-predictor] :param retries: [e2e-predictor] Configure the number of retries to allow before raising a [e2e-predictor] :class:`~urllib3.exceptions.MaxRetryError` exception. [e2e-predictor] [e2e-predictor] Pass ``None`` to retry until you receive a response. Pass a [e2e-predictor] :class:`~urllib3.util.retry.Retry` object for fine-grained control [e2e-predictor] over different types of retries. 
[e2e-predictor] Pass an integer number to retry connection errors that many times, [e2e-predictor] but no other types of errors. Pass zero to never retry. [e2e-predictor] [e2e-predictor] If ``False``, then retries are disabled and any exception is raised [e2e-predictor] immediately. Also, instead of raising a MaxRetryError on redirects, [e2e-predictor] the redirect response will be returned. [e2e-predictor] [e2e-predictor] :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. [e2e-predictor] [e2e-predictor] :param timeout: [e2e-predictor] If specified, overrides the default timeout for this one [e2e-predictor] request. It may be a float (in seconds) or an instance of [e2e-predictor] :class:`urllib3.util.Timeout`. [e2e-predictor] [e2e-predictor] :param chunked: [e2e-predictor] If True, urllib3 will send the body using chunked transfer [e2e-predictor] encoding. Otherwise, urllib3 will send the body using the standard [e2e-predictor] content-length form. Defaults to False. [e2e-predictor] [e2e-predictor] :param response_conn: [e2e-predictor] Set this to ``None`` if you will handle releasing the connection or [e2e-predictor] set the connection to have the response release it. [e2e-predictor] [e2e-predictor] :param preload_content: [e2e-predictor] If True, the response's body will be preloaded during construction. [e2e-predictor] [e2e-predictor] :param decode_content: [e2e-predictor] If True, will attempt to decode the body based on the [e2e-predictor] 'content-encoding' header. [e2e-predictor] [e2e-predictor] :param enforce_content_length: [e2e-predictor] Enforce content length checking. Body returned by server must match [e2e-predictor] value of Content-Length header, if present. Otherwise, raise error. 
[e2e-predictor] """ [e2e-predictor] self.num_requests += 1 [e2e-predictor] [e2e-predictor] timeout_obj = self._get_timeout(timeout) [e2e-predictor] timeout_obj.start_connect() [e2e-predictor] conn.timeout = Timeout.resolve_default_timeout(timeout_obj.connect_timeout) [e2e-predictor] [e2e-predictor] try: [e2e-predictor] # Trigger any extra validation we need to do. [e2e-predictor] try: [e2e-predictor] > self._validate_conn(conn) [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:464: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] conn = [e2e-predictor] [e2e-predictor] def _validate_conn(self, conn: BaseHTTPConnection) -> None: [e2e-predictor] """ [e2e-predictor] Called right before a request is made, after the socket is created. [e2e-predictor] """ [e2e-predictor] super()._validate_conn(conn) [e2e-predictor] [e2e-predictor] # Force connect early to allow us to validate the connection. [e2e-predictor] if conn.is_closed: [e2e-predictor] > conn.connect() [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:1093: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] [e2e-predictor] def connect(self) -> None: [e2e-predictor] # Today we don't need to be doing this step before the /actual/ socket [e2e-predictor] # connection, however in the future we'll need to decide whether to [e2e-predictor] # create a new socket or re-use an existing "shared" socket as a part [e2e-predictor] # of the HTTP/2 handshake dance. 
[e2e-predictor] if self._tunnel_host is not None and self._tunnel_port is not None: [e2e-predictor] probe_http2_host = self._tunnel_host [e2e-predictor] probe_http2_port = self._tunnel_port [e2e-predictor] else: [e2e-predictor] probe_http2_host = self.host [e2e-predictor] probe_http2_port = self.port [e2e-predictor] [e2e-predictor] # Check if the target origin supports HTTP/2. [e2e-predictor] # If the value comes back as 'None' it means that the current thread [e2e-predictor] # is probing for HTTP/2 support. Otherwise, we're waiting for another [e2e-predictor] # probe to complete, or we get a value right away. [e2e-predictor] target_supports_http2: bool | None [e2e-predictor] if "h2" in ssl_.ALPN_PROTOCOLS: [e2e-predictor] target_supports_http2 = http2_probe.acquire_and_get( [e2e-predictor] host=probe_http2_host, port=probe_http2_port [e2e-predictor] ) [e2e-predictor] else: [e2e-predictor] # If HTTP/2 isn't going to be offered it doesn't matter if [e2e-predictor] # the target supports HTTP/2. Don't want to make a probe. [e2e-predictor] target_supports_http2 = False [e2e-predictor] [e2e-predictor] if self._connect_callback is not None: [e2e-predictor] self._connect_callback( [e2e-predictor] "before connect", [e2e-predictor] thread_id=threading.get_ident(), [e2e-predictor] target_supports_http2=target_supports_http2, [e2e-predictor] ) [e2e-predictor] [e2e-predictor] try: [e2e-predictor] sock: socket.socket | ssl.SSLSocket [e2e-predictor] > self.sock = sock = self._new_conn() [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connection.py:759: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] [e2e-predictor] def _new_conn(self) -> socket.socket: [e2e-predictor] """Establish a socket connection and set nodelay settings on it. [e2e-predictor] [e2e-predictor] :return: New socket connection. 
[e2e-predictor] """ [e2e-predictor] try: [e2e-predictor] sock = connection.create_connection( [e2e-predictor] (self._dns_host, self.port), [e2e-predictor] self.timeout, [e2e-predictor] source_address=self.source_address, [e2e-predictor] socket_options=self.socket_options, [e2e-predictor] ) [e2e-predictor] except socket.gaierror as e: [e2e-predictor] > raise NameResolutionError(self.host, self, e) from e [e2e-predictor] E urllib3.exceptions.NameResolutionError: HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known) [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connection.py:211: NameResolutionError [e2e-predictor] [e2e-predictor] The above exception was the direct cause of the following exception: [e2e-predictor] [e2e-predictor] rest_v2_client = [e2e-predictor] [e2e-predictor] @pytest.mark.predictor [e2e-predictor] @pytest.mark.asyncio(scope="session") [e2e-predictor] async def test_sklearn_v2_mlserver(rest_v2_client): [e2e-predictor] service_name = "sklearn-v2-mlserver" [e2e-predictor] protocol_version = "v2" [e2e-predictor] predictor = V1beta1PredictorSpec( [e2e-predictor] min_replicas=1, [e2e-predictor] sklearn=V1beta1SKLearnSpec( [e2e-predictor] storage_uri="gs://kfserving-examples/models/sklearn/1.0/model", [e2e-predictor] protocol_version=protocol_version, [e2e-predictor] resources=V1ResourceRequirements( [e2e-predictor] requests={"cpu": "50m", "memory": "128Mi"}, [e2e-predictor] limits={"cpu": "100m", "memory": "512Mi"}, [e2e-predictor] ), [e2e-predictor] readiness_probe=client.V1Probe( [e2e-predictor] http_get=client.V1HTTPGetAction( [e2e-predictor] path=f"/v2/models/{service_name}/ready", port=8080 [e2e-predictor] ), [e2e-predictor] initial_delay_seconds=30, [e2e-predictor] ), [e2e-predictor] ), [e2e-predictor] ) 

        isvc = V1beta1InferenceService(
            api_version=constants.KSERVE_V1BETA1,
            kind=constants.KSERVE_KIND_INFERENCESERVICE,
            metadata=client.V1ObjectMeta(
                name=service_name,
                namespace=KSERVE_TEST_NAMESPACE,
                labels={
                    constants.KSERVE_LABEL_NETWORKING_VISIBILITY: constants.KSERVE_LABEL_NETWORKING_VISIBILITY_EXPOSED,
                },
            ),
            spec=V1beta1InferenceServiceSpec(predictor=predictor),
        )

        kserve_client = KServeClient(
            config_file=os.environ.get("KUBECONFIG", "~/.kube/config")
        )
>       kserve_client.create(isvc)

predictor/test_sklearn.py:120:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
inferenceservice = {'api_version': 'serving.kserve.io/v1beta1',
 'kind': 'InferenceService',
 'metadata': {'annotations': None,
 ...
 'worker_spec': None,
 'xgboost': None},
 'transformer': None},
 'status': None}
namespace = 'kserve-ci-e2e-test', watch = False, timeout_seconds = 600

    def create(
        self, inferenceservice, namespace=None, watch=False, timeout_seconds=600
    ):  # pylint:disable=inconsistent-return-statements
        """
        Create the inference service
        :param inferenceservice: inference service object
        :param namespace: defaults to current or default namespace
        :param watch: True to watch the created service until timeout elapsed or status is ready
        :param timeout_seconds: timeout seconds for watch, default to 600s
        :return: created inference service
        """

        version = inferenceservice.api_version.split("/")[1]

        if namespace is None:
            namespace = utils.get_isvc_namespace(inferenceservice)

        try:
>           outputs = self.api_instance.create_namespaced_custom_object(
                constants.KSERVE_GROUP,
                version,
                namespace,
                constants.KSERVE_PLURAL_INFERENCESERVICE,
                inferenceservice,
            )

../../python/kserve/kserve/api/kserve_client.py:145:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
group = 'serving.kserve.io', version = 'v1beta1'
namespace = 'kserve-ci-e2e-test', plural = 'inferenceservices'
body = {'api_version': 'serving.kserve.io/v1beta1',
 'kind': 'InferenceService',
 'metadata': {'annotations': None,
 ...
 'worker_spec': None,
 'xgboost': None},
 'transformer': None},
 'status': None}
kwargs = {'_return_http_data_only': True}

    def create_namespaced_custom_object(self, group, version, namespace, plural, body, **kwargs):  # noqa: E501
        """create_namespaced_custom_object  # noqa: E501

        Creates a namespace scoped Custom object  # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.create_namespaced_custom_object(group, version, namespace, plural, body, async_req=True)
        >>> result = thread.get()

        :param async_req bool: execute request asynchronously
        :param str group: The custom resource's group name (required)
        :param str version: The custom resource's version (required)
        :param str namespace: The custom resource's namespace (required)
        :param str plural: The custom resource's plural name. For TPRs this would be lowercase plural kind. (required)
        :param object body: The JSON schema of the Resource to create. (required)
        :param str pretty: If 'true', then the output is pretty printed.
        :param str dry_run: When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed
        :param str field_manager: fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint.
        :param str field_validation: fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. (optional)
        :param _preload_content: if False, the urllib3.HTTPResponse object will
                                 be returned without reading/decoding response
                                 data. Default is True.
        :param _request_timeout: timeout setting for this request. If one
                                 number provided, it will be total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
        :return: object
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
>       return self.create_namespaced_custom_object_with_http_info(group, version, namespace, plural, body, **kwargs)  # noqa: E501

../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api/custom_objects_api.py:231:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
group = 'serving.kserve.io', version = 'v1beta1'
namespace = 'kserve-ci-e2e-test', plural = 'inferenceservices'
body = {'api_version': 'serving.kserve.io/v1beta1',
 'kind': 'InferenceService',
 'metadata': {'annotations': None,
 ... 'worker_spec': None,
 'xgboost': None},
 'transformer': None},
 'status': None}
kwargs = {'_return_http_data_only': True}
local_var_params = {'_return_http_data_only': True, 'all_params': ['group', 'version', 'namespace', 'plural', 'body', 'pretty', ...], 'au... 'worker_spec': None,
 'xgboost': None},
 'transformer': None},
 'status': None}, ...}
all_params = ['group', 'version', 'namespace', 'plural', 'body', 'pretty', ...]
key = '_return_http_data_only', val = True, collection_formats = {}
path_params = {'group': 'serving.kserve.io', 'namespace': 'kserve-ci-e2e-test', 'plural': 'inferenceservices', 'version': 'v1beta1'}
query_params = []

    def create_namespaced_custom_object_with_http_info(self, group, version, namespace, plural, body, **kwargs):  # noqa: E501
        """create_namespaced_custom_object  # noqa: E501

        Creates a namespace scoped Custom object  # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.create_namespaced_custom_object_with_http_info(group, version, namespace, plural, body, async_req=True)
        >>> result = thread.get()

        :param async_req bool: execute request asynchronously
        :param str group: The custom resource's group name (required)
        :param str version: The custom resource's version (required)
        :param str namespace: The custom resource's namespace (required)
        :param str plural: The custom resource's plural name. For TPRs this would be lowercase plural kind. (required)
        :param object body: The JSON schema of the Resource to create. (required)
        :param str pretty: If 'true', then the output is pretty printed.
        :param str dry_run: When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed
        :param str field_manager: fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint.
        :param str field_validation: fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. (optional)
        :param _return_http_data_only: response data without head status code
                                       and headers
        :param _preload_content: if False, the urllib3.HTTPResponse object will
                                 be returned without reading/decoding response
                                 data. Default is True.
        :param _request_timeout: timeout setting for this request. If one
                                 number provided, it will be total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
        :return: tuple(object, status_code(int), headers(HTTPHeaderDict))
                 If the method is called asynchronously,
                 returns the request thread.
        """

        local_var_params = locals()

        all_params = [
            'group',
            'version',
            'namespace',
            'plural',
            'body',
            'pretty',
            'dry_run',
            'field_manager',
            'field_validation'
        ]
        all_params.extend(
            [
                'async_req',
                '_return_http_data_only',
                '_preload_content',
                '_request_timeout'
            ]
        )

        for key, val in six.iteritems(local_var_params['kwargs']):
            if key not in all_params:
                raise ApiTypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method create_namespaced_custom_object" % key
                )
            local_var_params[key] = val
        del local_var_params['kwargs']
        # verify the required parameter 'group' is set
        if self.api_client.client_side_validation and ('group' not in local_var_params or  # noqa: E501
                                                       local_var_params['group'] is None):  # noqa: E501
            raise ApiValueError("Missing the required parameter `group` when calling `create_namespaced_custom_object`")  # noqa: E501
        # verify the required parameter 'version' is set
        if self.api_client.client_side_validation and ('version' not in local_var_params or  # noqa: E501
                                                       local_var_params['version'] is None):  # noqa: E501
            raise ApiValueError("Missing the required parameter `version` when calling `create_namespaced_custom_object`")  # noqa: E501
        # verify the required parameter 'namespace' is set
        if self.api_client.client_side_validation and ('namespace' not in local_var_params or  # noqa: E501
                                                       local_var_params['namespace'] is None):  # noqa: E501
            raise ApiValueError("Missing the required parameter `namespace` when calling `create_namespaced_custom_object`")  # noqa: E501
        # verify the required parameter 'plural' is set
        if self.api_client.client_side_validation and ('plural' not in local_var_params or  # noqa: E501
                                                       local_var_params['plural'] is None):  # noqa: E501
            raise ApiValueError("Missing the required parameter `plural` when calling `create_namespaced_custom_object`")  # noqa: E501
        # verify the required parameter 'body' is set
        if self.api_client.client_side_validation and ('body' not in local_var_params or  # noqa: E501
                                                       local_var_params['body'] is None):  # noqa: E501
            raise ApiValueError("Missing the required parameter `body` when calling `create_namespaced_custom_object`")  # noqa: E501

        collection_formats = {}

        path_params = {}
        if 'group' in local_var_params:
            path_params['group'] = local_var_params['group']  # noqa: E501
        if 'version' in local_var_params:
            path_params['version'] = local_var_params['version']  # noqa: E501
        if 'namespace' in local_var_params:
            path_params['namespace'] = local_var_params['namespace']  # noqa: E501
        if 'plural' in local_var_params:
            path_params['plural'] = local_var_params['plural']  # noqa: E501

        query_params = []
        if 'pretty' in local_var_params and local_var_params['pretty'] is not None:  # noqa: E501
            query_params.append(('pretty', local_var_params['pretty']))  # noqa: E501
        if 'dry_run' in local_var_params and local_var_params['dry_run'] is not None:  # noqa: E501
            query_params.append(('dryRun', local_var_params['dry_run']))  # noqa: E501
        if 'field_manager' in local_var_params and local_var_params['field_manager'] is not None:  # noqa: E501
            query_params.append(('fieldManager', local_var_params['field_manager']))  # noqa: E501
        if 'field_validation' in local_var_params and local_var_params['field_validation'] is not None:  # noqa: E501
            query_params.append(('fieldValidation', local_var_params['field_validation']))  # noqa: E501

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None
        if 'body' in local_var_params:
            body_params = local_var_params['body']
        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.select_header_accept(
            ['application/json'])  # noqa: E501

        # Authentication setting
        auth_settings = ['BearerToken']  # noqa: E501

>       return self.api_client.call_api(
            '/apis/{group}/{version}/namespaces/{namespace}/{plural}', 'POST',
            path_params,
            query_params,
            header_params,
            body=body_params,
            post_params=form_params,
            files=local_var_files,
            response_type='object',  # noqa: E501
            auth_settings=auth_settings,
            async_req=local_var_params.get('async_req'),
            _return_http_data_only=local_var_params.get('_return_http_data_only'),  # noqa: E501
            _preload_content=local_var_params.get('_preload_content', True),
            _request_timeout=local_var_params.get('_request_timeout'),
            collection_formats=collection_formats)

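The generated `create_namespaced_custom_object_with_http_info` frame above follows the standard OpenAPI-generator pattern: snapshot `locals()`, reject unknown keyword arguments, null-check every required parameter, then assemble path and query parameters. A minimal standalone sketch of that pattern (function and names are illustrative, not the kubernetes client's actual API):

```python
def build_request(**kwargs):
    """Validate arguments the way an OpenAPI-generated client method does,
    then assemble the request path and query parameters."""
    required = ["group", "version", "namespace", "plural", "body"]
    optional = ["pretty", "dry_run", "field_manager", "field_validation"]

    # Reject keyword arguments the method does not know about.
    for key in kwargs:
        if key not in required + optional:
            raise TypeError(
                f"Got an unexpected keyword argument '{key}' to method build_request"
            )

    # Every required parameter must be present and non-None.
    for name in required:
        if kwargs.get(name) is None:
            raise ValueError(f"Missing the required parameter `{name}`")

    # Only non-None optional parameters become query parameters
    # (the real client also maps snake_case names to camelCase here).
    query_params = [(n, kwargs[n]) for n in optional if kwargs.get(n) is not None]
    path = "/apis/{group}/{version}/namespaces/{namespace}/{plural}".format(
        **{n: kwargs[n] for n in ["group", "version", "namespace", "plural"]}
    )
    return path, query_params
```

Note that this validation all happens client-side and passed in the failing run; the request only died later, at the socket layer.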
../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api/custom_objects_api.py:354:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
resource_path = '/apis/{group}/{version}/namespaces/{namespace}/{plural}'
method = 'POST'
path_params = {'group': 'serving.kserve.io', 'namespace': 'kserve-ci-e2e-test', 'plural': 'inferenceservices', 'version': 'v1beta1'}
query_params = []
header_params = {'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
body = {'api_version': 'serving.kserve.io/v1beta1',
 'kind': 'InferenceService',
 'metadata': {'annotations': None,
 ... 'worker_spec': None,
 'xgboost': None},
 'transformer': None},
 'status': None}
post_params = [], files = {}, response_type = 'object'
auth_settings = ['BearerToken'], async_req = None, _return_http_data_only = True
collection_formats = {}, _preload_content = True, _request_timeout = None
_host = None

    def call_api(self, resource_path, method,
                 path_params=None, query_params=None, header_params=None,
                 body=None, post_params=None, files=None,
                 response_type=None, auth_settings=None, async_req=None,
                 _return_http_data_only=None, collection_formats=None,
                 _preload_content=True, _request_timeout=None, _host=None):
        """Makes the HTTP request (synchronous) and returns deserialized data.

        To make an async_req request, set the async_req parameter.

        :param resource_path: Path to method endpoint.
        :param method: Method to call.
        :param path_params: Path parameters in the url.
        :param query_params: Query parameters in the url.
        :param header_params: Header parameters to be
            placed in the request header.
        :param body: Request body.
        :param post_params dict: Request post form parameters,
            for `application/x-www-form-urlencoded`, `multipart/form-data`.
        :param auth_settings list: Auth Settings names for the request.
        :param response: Response data type.
        :param files dict: key -> filename, value -> filepath,
            for `multipart/form-data`.
        :param async_req bool: execute request asynchronously
        :param _return_http_data_only: response data without head status code
                                       and headers
        :param collection_formats: dict of collection formats for path, query,
            header, and post parameters.
        :param _preload_content: if False, the urllib3.HTTPResponse object will
                                 be returned without reading/decoding response
                                 data. Default is True.
        :param _request_timeout: timeout setting for this request. If one
                                 number provided, it will be total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
        :return:
            If async_req parameter is True,
            the request will be called asynchronously.
            The method will return the request thread.
            If parameter async_req is False or missing,
            then the method will return the response directly.
        """
        if not async_req:
>           return self.__call_api(resource_path, method,
                                   path_params, query_params, header_params,
                                   body, post_params, files,
                                   response_type, auth_settings,
                                   _return_http_data_only, collection_formats,
                                   _preload_content, _request_timeout, _host)

../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api_client.py:348:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
resource_path = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
method = 'POST'
path_params = [('group', 'serving.kserve.io'), ('version', 'v1beta1'), ('namespace', 'kserve-ci-e2e-test'), ('plural', 'inferenceservices')]
query_params = []
header_params = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
body = {'apiVersion': 'serving.kserve.io/v1beta1', 'kind': 'InferenceService', 'metadata': {'labels': {'networking.kserve.io/...0}, 'resources': {'limits': {'cpu': '100m', 'memory': '512Mi'}, 'requests': {'cpu': '50m', 'memory': '128Mi'}}, ...}}}}
post_params = [], files = {}, response_type = 'object'
auth_settings = ['BearerToken'], _return_http_data_only = True
collection_formats = {}, _preload_content = True, _request_timeout = None
_host = None

    def __call_api(
            self, resource_path, method, path_params=None,
            query_params=None, header_params=None, body=None, post_params=None,
            files=None, response_type=None, auth_settings=None,
            _return_http_data_only=None, collection_formats=None,
[e2e-predictor] _preload_content=True, _request_timeout=None, _host=None): [e2e-predictor] [e2e-predictor] config = self.configuration [e2e-predictor] [e2e-predictor] # header parameters [e2e-predictor] header_params = header_params or {} [e2e-predictor] header_params.update(self.default_headers) [e2e-predictor] if self.cookie: [e2e-predictor] header_params['Cookie'] = self.cookie [e2e-predictor] if header_params: [e2e-predictor] header_params = self.sanitize_for_serialization(header_params) [e2e-predictor] header_params = dict(self.parameters_to_tuples(header_params, [e2e-predictor] collection_formats)) [e2e-predictor] [e2e-predictor] # path parameters [e2e-predictor] if path_params: [e2e-predictor] path_params = self.sanitize_for_serialization(path_params) [e2e-predictor] path_params = self.parameters_to_tuples(path_params, [e2e-predictor] collection_formats) [e2e-predictor] for k, v in path_params: [e2e-predictor] # specified safe chars, encode everything [e2e-predictor] resource_path = resource_path.replace( [e2e-predictor] '{%s}' % k, [e2e-predictor] quote(str(v), safe=config.safe_chars_for_path_param) [e2e-predictor] ) [e2e-predictor] [e2e-predictor] # query parameters [e2e-predictor] if query_params: [e2e-predictor] query_params = self.sanitize_for_serialization(query_params) [e2e-predictor] query_params = self.parameters_to_tuples(query_params, [e2e-predictor] collection_formats) [e2e-predictor] [e2e-predictor] # post parameters [e2e-predictor] if post_params or files: [e2e-predictor] post_params = post_params if post_params else [] [e2e-predictor] post_params = self.sanitize_for_serialization(post_params) [e2e-predictor] post_params = self.parameters_to_tuples(post_params, [e2e-predictor] collection_formats) [e2e-predictor] post_params.extend(self.files_parameters(files)) [e2e-predictor] [e2e-predictor] # auth setting [e2e-predictor] self.update_params_for_auth(header_params, query_params, auth_settings) [e2e-predictor] [e2e-predictor] # body 
[e2e-predictor] if body: [e2e-predictor] body = self.sanitize_for_serialization(body) [e2e-predictor] [e2e-predictor] # request url [e2e-predictor] if _host is None: [e2e-predictor] url = self.configuration.host + resource_path [e2e-predictor] else: [e2e-predictor] # use server/host defined in path or operation instead [e2e-predictor] url = _host + resource_path [e2e-predictor] [e2e-predictor] # perform request and return response [e2e-predictor] > response_data = self.request( [e2e-predictor] method, url, query_params=query_params, headers=header_params, [e2e-predictor] post_params=post_params, body=body, [e2e-predictor] _preload_content=_preload_content, [e2e-predictor] _request_timeout=_request_timeout) [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api_client.py:180: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] method = 'POST' [e2e-predictor] url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices' [e2e-predictor] query_params = [] [e2e-predictor] headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'} [e2e-predictor] post_params = [] [e2e-predictor] body = {'apiVersion': 'serving.kserve.io/v1beta1', 'kind': 'InferenceService', 'metadata': {'labels': {'networking.kserve.io/...0}, 'resources': {'limits': {'cpu': '100m', 'memory': '512Mi'}, 'requests': {'cpu': '50m', 'memory': '128Mi'}}, ...}}}} [e2e-predictor] _preload_content = True, _request_timeout = None [e2e-predictor] [e2e-predictor] def request(self, method, url, query_params=None, headers=None, [e2e-predictor] post_params=None, body=None, _preload_content=True, [e2e-predictor] _request_timeout=None): [e2e-predictor] """Makes the HTTP request using 
RESTClient.""" [e2e-predictor] if method == "GET": [e2e-predictor] return self.rest_client.GET(url, [e2e-predictor] query_params=query_params, [e2e-predictor] _preload_content=_preload_content, [e2e-predictor] _request_timeout=_request_timeout, [e2e-predictor] headers=headers) [e2e-predictor] elif method == "HEAD": [e2e-predictor] return self.rest_client.HEAD(url, [e2e-predictor] query_params=query_params, [e2e-predictor] _preload_content=_preload_content, [e2e-predictor] _request_timeout=_request_timeout, [e2e-predictor] headers=headers) [e2e-predictor] elif method == "OPTIONS": [e2e-predictor] return self.rest_client.OPTIONS(url, [e2e-predictor] query_params=query_params, [e2e-predictor] headers=headers, [e2e-predictor] _preload_content=_preload_content, [e2e-predictor] _request_timeout=_request_timeout) [e2e-predictor] elif method == "POST": [e2e-predictor] > return self.rest_client.POST(url, [e2e-predictor] query_params=query_params, [e2e-predictor] headers=headers, [e2e-predictor] post_params=post_params, [e2e-predictor] _preload_content=_preload_content, [e2e-predictor] _request_timeout=_request_timeout, [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api_client.py:391: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices' [e2e-predictor] headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'} [e2e-predictor] query_params = [], post_params = [] [e2e-predictor] body = {'apiVersion': 'serving.kserve.io/v1beta1', 'kind': 'InferenceService', 'metadata': {'labels': {'networking.kserve.io/...0}, 'resources': {'limits': {'cpu': '100m', 'memory': '512Mi'}, 'requests': 
{'cpu': '50m', 'memory': '128Mi'}}, ...}}}} [e2e-predictor] _preload_content = True, _request_timeout = None [e2e-predictor] [e2e-predictor] def POST(self, url, headers=None, query_params=None, post_params=None, [e2e-predictor] body=None, _preload_content=True, _request_timeout=None): [e2e-predictor] > return self.request("POST", url, [e2e-predictor] headers=headers, [e2e-predictor] query_params=query_params, [e2e-predictor] post_params=post_params, [e2e-predictor] _preload_content=_preload_content, [e2e-predictor] _request_timeout=_request_timeout, [e2e-predictor] body=body) [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/rest.py:279: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] method = 'POST' [e2e-predictor] url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices' [e2e-predictor] query_params = [] [e2e-predictor] headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'} [e2e-predictor] body = {'apiVersion': 'serving.kserve.io/v1beta1', 'kind': 'InferenceService', 'metadata': {'labels': {'networking.kserve.io/...0}, 'resources': {'limits': {'cpu': '100m', 'memory': '512Mi'}, 'requests': {'cpu': '50m', 'memory': '128Mi'}}, ...}}}} [e2e-predictor] post_params = {}, _preload_content = True, _request_timeout = None [e2e-predictor] [e2e-predictor] def request(self, method, url, query_params=None, headers=None, [e2e-predictor] body=None, post_params=None, _preload_content=True, [e2e-predictor] _request_timeout=None): [e2e-predictor] """Perform requests. 
[e2e-predictor] [e2e-predictor] :param method: http request method [e2e-predictor] :param url: http request url [e2e-predictor] :param query_params: query parameters in the url [e2e-predictor] :param headers: http request headers [e2e-predictor] :param body: request json body, for `application/json` [e2e-predictor] :param post_params: request post parameters, [e2e-predictor] `application/x-www-form-urlencoded` [e2e-predictor] and `multipart/form-data` [e2e-predictor] :param _preload_content: if False, the urllib3.HTTPResponse object will [e2e-predictor] be returned without reading/decoding response [e2e-predictor] data. Default is True. [e2e-predictor] :param _request_timeout: timeout setting for this request. If one [e2e-predictor] number provided, it will be total request [e2e-predictor] timeout. It can also be a pair (tuple) of [e2e-predictor] (connection, read) timeouts. [e2e-predictor] """ [e2e-predictor] method = method.upper() [e2e-predictor] assert method in ['GET', 'HEAD', 'DELETE', 'POST', 'PUT', [e2e-predictor] 'PATCH', 'OPTIONS'] [e2e-predictor] [e2e-predictor] if post_params and body: [e2e-predictor] raise ApiValueError( [e2e-predictor] "body parameter cannot be used with post_params parameter." 
[e2e-predictor] ) [e2e-predictor] [e2e-predictor] post_params = post_params or {} [e2e-predictor] headers = headers or {} [e2e-predictor] [e2e-predictor] timeout = None [e2e-predictor] if _request_timeout: [e2e-predictor] if isinstance(_request_timeout, (int, ) if six.PY3 else (int, long)): # noqa: E501,F821 [e2e-predictor] timeout = urllib3.Timeout(total=_request_timeout) [e2e-predictor] elif (isinstance(_request_timeout, tuple) and [e2e-predictor] len(_request_timeout) == 2): [e2e-predictor] timeout = urllib3.Timeout( [e2e-predictor] connect=_request_timeout[0], read=_request_timeout[1]) [e2e-predictor] [e2e-predictor] if 'Content-Type' not in headers: [e2e-predictor] headers['Content-Type'] = 'application/json' [e2e-predictor] [e2e-predictor] try: [e2e-predictor] # For `POST`, `PUT`, `PATCH`, `OPTIONS`, `DELETE` [e2e-predictor] if method in ['POST', 'PUT', 'PATCH', 'OPTIONS', 'DELETE']: [e2e-predictor] if query_params: [e2e-predictor] url += '?' + urlencode(query_params) [e2e-predictor] if (re.search('json', headers['Content-Type'], re.IGNORECASE) or [e2e-predictor] headers['Content-Type'] == 'application/apply-patch+yaml'): [e2e-predictor] if headers['Content-Type'] == 'application/json-patch+json': [e2e-predictor] if not isinstance(body, list): [e2e-predictor] headers['Content-Type'] = \ [e2e-predictor] 'application/strategic-merge-patch+json' [e2e-predictor] request_body = None [e2e-predictor] if body is not None: [e2e-predictor] request_body = json.dumps(body) [e2e-predictor] > r = self.pool_manager.request( [e2e-predictor] method, url, [e2e-predictor] body=request_body, [e2e-predictor] preload_content=_preload_content, [e2e-predictor] timeout=timeout, [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/rest.py:172: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] method = 'POST' [e2e-predictor] url = 
'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices' [e2e-predictor] body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io..., "requests": {"cpu": "50m", "memory": "128Mi"}}, "storageUri": "gs://kfserving-examples/models/sklearn/1.0/model"}}}}' [e2e-predictor] fields = None [e2e-predictor] headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'} [e2e-predictor] json = None [e2e-predictor] urlopen_kw = {'body': '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking....Mi"}}, "storageUri": "gs://kfserving-examples/models/sklearn/1.0/model"}}}}', 'preload_content': True, 'timeout': None} [e2e-predictor] [e2e-predictor] def request( [e2e-predictor] self, [e2e-predictor] method: str, [e2e-predictor] url: str, [e2e-predictor] body: _TYPE_BODY | None = None, [e2e-predictor] fields: _TYPE_FIELDS | None = None, [e2e-predictor] headers: typing.Mapping[str, str] | None = None, [e2e-predictor] json: typing.Any | None = None, [e2e-predictor] **urlopen_kw: typing.Any, [e2e-predictor] ) -> BaseHTTPResponse: [e2e-predictor] """ [e2e-predictor] Make a request using :meth:`urlopen` with the appropriate encoding of [e2e-predictor] ``fields`` based on the ``method`` used. [e2e-predictor] [e2e-predictor] This is a convenience method that requires the least amount of manual [e2e-predictor] effort. It can be used in most situations, while still having the [e2e-predictor] option to drop down to more specific methods when necessary, such as [e2e-predictor] :meth:`request_encode_url`, :meth:`request_encode_body`, [e2e-predictor] or even the lowest level :meth:`urlopen`. [e2e-predictor] [e2e-predictor] :param method: [e2e-predictor] HTTP request method (such as GET, POST, PUT, etc.) 
[e2e-predictor] [e2e-predictor] :param url: [e2e-predictor] The URL to perform the request on. [e2e-predictor] [e2e-predictor] :param body: [e2e-predictor] Data to send in the request body, either :class:`str`, :class:`bytes`, [e2e-predictor] an iterable of :class:`str`/:class:`bytes`, or a file-like object. [e2e-predictor] [e2e-predictor] :param fields: [e2e-predictor] Data to encode and send in the URL or request body, depending on ``method``. [e2e-predictor] [e2e-predictor] :param headers: [e2e-predictor] Dictionary of custom headers to send, such as User-Agent, [e2e-predictor] If-None-Match, etc. If None, pool headers are used. If provided, [e2e-predictor] these headers completely replace any pool-specific headers. [e2e-predictor] [e2e-predictor] :param json: [e2e-predictor] Data to encode and send as JSON with UTF-encoded in the request body. [e2e-predictor] The ``"Content-Type"`` header will be set to ``"application/json"`` [e2e-predictor] unless specified otherwise. [e2e-predictor] """ [e2e-predictor] method = method.upper() [e2e-predictor] [e2e-predictor] if json is not None and body is not None: [e2e-predictor] raise TypeError( [e2e-predictor] "request got values for both 'body' and 'json' parameters which are mutually exclusive" [e2e-predictor] ) [e2e-predictor] [e2e-predictor] if json is not None: [e2e-predictor] if headers is None: [e2e-predictor] headers = self.headers [e2e-predictor] [e2e-predictor] if not ("content-type" in map(str.lower, headers.keys())): [e2e-predictor] headers = HTTPHeaderDict(headers) [e2e-predictor] headers["Content-Type"] = "application/json" [e2e-predictor] [e2e-predictor] body = _json.dumps(json, separators=(",", ":"), ensure_ascii=False).encode( [e2e-predictor] "utf-8" [e2e-predictor] ) [e2e-predictor] [e2e-predictor] if body is not None: [e2e-predictor] urlopen_kw["body"] = body [e2e-predictor] [e2e-predictor] if method in self._encode_url_methods: [e2e-predictor] return self.request_encode_url( [e2e-predictor] method, 
[e2e-predictor] url, [e2e-predictor] fields=fields, # type: ignore[arg-type] [e2e-predictor] headers=headers, [e2e-predictor] **urlopen_kw, [e2e-predictor] ) [e2e-predictor] else: [e2e-predictor] > return self.request_encode_body( [e2e-predictor] method, url, fields=fields, headers=headers, **urlopen_kw [e2e-predictor] ) [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/_request_methods.py:143: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] method = 'POST' [e2e-predictor] url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices' [e2e-predictor] fields = None [e2e-predictor] headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'} [e2e-predictor] encode_multipart = True, multipart_boundary = None [e2e-predictor] urlopen_kw = {'body': '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking....Mi"}}, "storageUri": "gs://kfserving-examples/models/sklearn/1.0/model"}}}}', 'preload_content': True, 'timeout': None} [e2e-predictor] extra_kw = {'body': '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking....nt': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'}), 'preload_content': True, 'timeout': None} [e2e-predictor] [e2e-predictor] def request_encode_body( [e2e-predictor] self, [e2e-predictor] method: str, [e2e-predictor] url: str, [e2e-predictor] fields: _TYPE_FIELDS | None = None, [e2e-predictor] headers: typing.Mapping[str, str] | None = None, [e2e-predictor] encode_multipart: bool = True, [e2e-predictor] multipart_boundary: str | None = None, [e2e-predictor] **urlopen_kw: str, [e2e-predictor] ) -> 
BaseHTTPResponse: [e2e-predictor] """ [e2e-predictor] Make a request using :meth:`urlopen` with the ``fields`` encoded in [e2e-predictor] the body. This is useful for request methods like POST, PUT, PATCH, etc. [e2e-predictor] [e2e-predictor] When ``encode_multipart=True`` (default), then [e2e-predictor] :func:`urllib3.encode_multipart_formdata` is used to encode [e2e-predictor] the payload with the appropriate content type. Otherwise [e2e-predictor] :func:`urllib.parse.urlencode` is used with the [e2e-predictor] 'application/x-www-form-urlencoded' content type. [e2e-predictor] [e2e-predictor] Multipart encoding must be used when posting files, and it's reasonably [e2e-predictor] safe to use it in other times too. However, it may break request [e2e-predictor] signing, such as with OAuth. [e2e-predictor] [e2e-predictor] Supports an optional ``fields`` parameter of key/value strings AND [e2e-predictor] key/filetuple. A filetuple is a (filename, data, MIME type) tuple where [e2e-predictor] the MIME type is optional. For example:: [e2e-predictor] [e2e-predictor] fields = { [e2e-predictor] 'foo': 'bar', [e2e-predictor] 'fakefile': ('foofile.txt', 'contents of foofile'), [e2e-predictor] 'realfile': ('barfile.txt', open('realfile').read()), [e2e-predictor] 'typedfile': ('bazfile.bin', open('bazfile').read(), [e2e-predictor] 'image/jpeg'), [e2e-predictor] 'nonamefile': 'contents of nonamefile field', [e2e-predictor] } [e2e-predictor] [e2e-predictor] When uploading a file, providing a filename (the first parameter of the [e2e-predictor] tuple) is optional but recommended to best mimic behavior of browsers. [e2e-predictor] [e2e-predictor] Note that if ``headers`` are supplied, the 'Content-Type' header will [e2e-predictor] be overwritten because it depends on the dynamic random boundary string [e2e-predictor] which is used to compose the body of the request. The random boundary [e2e-predictor] string can be explicitly set with the ``multipart_boundary`` parameter. 
[e2e-predictor] [e2e-predictor] :param method: [e2e-predictor] HTTP request method (such as GET, POST, PUT, etc.) [e2e-predictor] [e2e-predictor] :param url: [e2e-predictor] The URL to perform the request on. [e2e-predictor] [e2e-predictor] :param fields: [e2e-predictor] Data to encode and send in the request body. [e2e-predictor] [e2e-predictor] :param headers: [e2e-predictor] Dictionary of custom headers to send, such as User-Agent, [e2e-predictor] If-None-Match, etc. If None, pool headers are used. If provided, [e2e-predictor] these headers completely replace any pool-specific headers. [e2e-predictor] [e2e-predictor] :param encode_multipart: [e2e-predictor] If True, encode the ``fields`` using the multipart/form-data MIME [e2e-predictor] format. [e2e-predictor] [e2e-predictor] :param multipart_boundary: [e2e-predictor] If not specified, then a random boundary will be generated using [e2e-predictor] :func:`urllib3.filepost.choose_boundary`. [e2e-predictor] """ [e2e-predictor] if headers is None: [e2e-predictor] headers = self.headers [e2e-predictor] [e2e-predictor] extra_kw: dict[str, typing.Any] = {"headers": HTTPHeaderDict(headers)} [e2e-predictor] body: bytes | str [e2e-predictor] [e2e-predictor] if fields: [e2e-predictor] if "body" in urlopen_kw: [e2e-predictor] raise TypeError( [e2e-predictor] "request got values for both 'fields' and 'body', can only specify one." 
[e2e-predictor] ) [e2e-predictor] [e2e-predictor] if encode_multipart: [e2e-predictor] body, content_type = encode_multipart_formdata( [e2e-predictor] fields, boundary=multipart_boundary [e2e-predictor] ) [e2e-predictor] else: [e2e-predictor] body, content_type = ( [e2e-predictor] urlencode(fields), # type: ignore[arg-type] [e2e-predictor] "application/x-www-form-urlencoded", [e2e-predictor] ) [e2e-predictor] [e2e-predictor] extra_kw["body"] = body [e2e-predictor] extra_kw["headers"].setdefault("Content-Type", content_type) [e2e-predictor] [e2e-predictor] extra_kw.update(urlopen_kw) [e2e-predictor] [e2e-predictor] > return self.urlopen(method, url, **extra_kw) [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/_request_methods.py:278: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] method = 'POST' [e2e-predictor] url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices' [e2e-predictor] redirect = True [e2e-predictor] kw = {'assert_same_host': False, 'body': '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata...', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'}), 'preload_content': True, ...} [e2e-predictor] u = Url(scheme='https', auth=None, host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', p...443, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices', query=None, fragment=None) [e2e-predictor] conn = [e2e-predictor] [e2e-predictor] def urlopen( # type: ignore[override] [e2e-predictor] self, method: str, url: str, redirect: bool = True, **kw: typing.Any [e2e-predictor] ) -> BaseHTTPResponse: [e2e-predictor] """ [e2e-predictor] Same as 
:meth:`urllib3.HTTPConnectionPool.urlopen` [e2e-predictor] with custom cross-host redirect logic and only sends the request-uri [e2e-predictor] portion of the ``url``. [e2e-predictor] [e2e-predictor] The given ``url`` parameter must be absolute, such that an appropriate [e2e-predictor] :class:`urllib3.connectionpool.ConnectionPool` can be chosen for it. [e2e-predictor] """ [e2e-predictor] u = parse_url(url) [e2e-predictor] [e2e-predictor] if u.scheme is None: [e2e-predictor] warnings.warn( [e2e-predictor] "URLs without a scheme (ie 'https://') are deprecated and will raise an error " [e2e-predictor] "in a future version of urllib3. To avoid this DeprecationWarning ensure all URLs " [e2e-predictor] "start with 'https://' or 'http://'. Read more in this issue: " [e2e-predictor] "https://github.com/urllib3/urllib3/issues/2920", [e2e-predictor] category=DeprecationWarning, [e2e-predictor] stacklevel=2, [e2e-predictor] ) [e2e-predictor] [e2e-predictor] conn = self.connection_from_host(u.host, port=u.port, scheme=u.scheme) [e2e-predictor] [e2e-predictor] kw["assert_same_host"] = False [e2e-predictor] kw["redirect"] = False [e2e-predictor] [e2e-predictor] if "headers" not in kw: [e2e-predictor] kw["headers"] = self.headers [e2e-predictor] [e2e-predictor] if self._proxy_requires_url_absolute_form(u): [e2e-predictor] response = conn.urlopen(method, url, **kw) [e2e-predictor] else: [e2e-predictor] > response = conn.urlopen(method, u.request_uri, **kw) [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/poolmanager.py:457: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] method = 'POST' [e2e-predictor] url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices' [e2e-predictor] body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io..., 
"requests": {"cpu": "50m", "memory": "128Mi"}}, "storageUri": "gs://kfserving-examples/models/sklearn/1.0/model"}}}}' [e2e-predictor] headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'}) [e2e-predictor] retries = Retry(total=2, connect=None, read=None, redirect=None, status=None) [e2e-predictor] redirect = False, assert_same_host = False, timeout = None, pool_timeout = None [e2e-predictor] release_conn = True, chunked = False, body_pos = None, preload_content = True [e2e-predictor] decode_content = True, response_kw = {} [e2e-predictor] parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices', query=None, fragment=None) [e2e-predictor] destination_scheme = None, conn = None, release_this_conn = True [e2e-predictor] http_tunnel_required = False [e2e-predictor] err = NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.c...a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)") [e2e-predictor] clean_exit = False [e2e-predictor] [e2e-predictor] def urlopen( # type: ignore[override] [e2e-predictor] self, [e2e-predictor] method: str, [e2e-predictor] url: str, [e2e-predictor] body: _TYPE_BODY | None = None, [e2e-predictor] headers: typing.Mapping[str, str] | None = None, [e2e-predictor] retries: Retry | bool | int | None = None, [e2e-predictor] redirect: bool = True, [e2e-predictor] assert_same_host: bool = True, [e2e-predictor] timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, [e2e-predictor] pool_timeout: int | None = None, [e2e-predictor] release_conn: bool | None = None, [e2e-predictor] chunked: bool = False, [e2e-predictor] body_pos: _TYPE_BODY_POSITION | None = None, [e2e-predictor] preload_content: bool = True, [e2e-predictor] decode_content: bool = True, 
[e2e-predictor] **response_kw: typing.Any, [e2e-predictor] ) -> BaseHTTPResponse: [e2e-predictor] """ [e2e-predictor] Get a connection from the pool and perform an HTTP request. This is the [e2e-predictor] lowest level call for making a request, so you'll need to specify all [e2e-predictor] the raw details. [e2e-predictor] [e2e-predictor] .. note:: [e2e-predictor] [e2e-predictor] More commonly, it's appropriate to use a convenience method [e2e-predictor] such as :meth:`request`. [e2e-predictor] [e2e-predictor] .. note:: [e2e-predictor] [e2e-predictor] `release_conn` will only behave as expected if [e2e-predictor] `preload_content=False` because we want to make [e2e-predictor] `preload_content=False` the default behaviour someday soon without [e2e-predictor] breaking backwards compatibility. [e2e-predictor] [e2e-predictor] :param method: [e2e-predictor] HTTP request method (such as GET, POST, PUT, etc.) [e2e-predictor] [e2e-predictor] :param url: [e2e-predictor] The URL to perform the request on. [e2e-predictor] [e2e-predictor] :param body: [e2e-predictor] Data to send in the request body, either :class:`str`, :class:`bytes`, [e2e-predictor] an iterable of :class:`str`/:class:`bytes`, or a file-like object. [e2e-predictor] [e2e-predictor] :param headers: [e2e-predictor] Dictionary of custom headers to send, such as User-Agent, [e2e-predictor] If-None-Match, etc. If None, pool headers are used. If provided, [e2e-predictor] these headers completely replace any pool-specific headers. [e2e-predictor] [e2e-predictor] :param retries: [e2e-predictor] Configure the number of retries to allow before raising a [e2e-predictor] :class:`~urllib3.exceptions.MaxRetryError` exception. [e2e-predictor] [e2e-predictor] If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a [e2e-predictor] :class:`~urllib3.util.retry.Retry` object for fine-grained control [e2e-predictor] over different types of retries. 
[e2e-predictor] Pass an integer number to retry connection errors that many times, [e2e-predictor] but no other types of errors. Pass zero to never retry. [e2e-predictor] [e2e-predictor] If ``False``, then retries are disabled and any exception is raised [e2e-predictor] immediately. Also, instead of raising a MaxRetryError on redirects, [e2e-predictor] the redirect response will be returned. [e2e-predictor] [e2e-predictor] :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. [e2e-predictor] [e2e-predictor] :param redirect: [e2e-predictor] If True, automatically handle redirects (status codes 301, 302, [e2e-predictor] 303, 307, 308). Each redirect counts as a retry. Disabling retries [e2e-predictor] will disable redirect, too. [e2e-predictor] [e2e-predictor] :param assert_same_host: [e2e-predictor] If ``True``, will make sure that the host of the pool requests is [e2e-predictor] consistent else will raise HostChangedError. When ``False``, you can [e2e-predictor] use the pool on an HTTP proxy and request foreign hosts. [e2e-predictor] [e2e-predictor] :param timeout: [e2e-predictor] If specified, overrides the default timeout for this one [e2e-predictor] request. It may be a float (in seconds) or an instance of [e2e-predictor] :class:`urllib3.util.Timeout`. [e2e-predictor] [e2e-predictor] :param pool_timeout: [e2e-predictor] If set and the pool is set to block=True, then this method will [e2e-predictor] block for ``pool_timeout`` seconds and raise EmptyPoolError if no [e2e-predictor] connection is available within the time period. [e2e-predictor] [e2e-predictor] :param bool preload_content: [e2e-predictor] If True, the response's body will be preloaded into memory. [e2e-predictor] [e2e-predictor] :param bool decode_content: [e2e-predictor] If True, will attempt to decode the body based on the [e2e-predictor] 'content-encoding' header. 
[e2e-predictor] [e2e-predictor] :param release_conn: [e2e-predictor] If False, then the urlopen call will not release the connection [e2e-predictor] back into the pool once a response is received (but will release if [e2e-predictor] you read the entire contents of the response such as when [e2e-predictor] `preload_content=True`). This is useful if you're not preloading [e2e-predictor] the response's content immediately. You will need to call [e2e-predictor] ``r.release_conn()`` on the response ``r`` to return the connection [e2e-predictor] back into the pool. If None, it takes the value of ``preload_content`` [e2e-predictor] which defaults to ``True``. [e2e-predictor] [e2e-predictor] :param bool chunked: [e2e-predictor] If True, urllib3 will send the body using chunked transfer [e2e-predictor] encoding. Otherwise, urllib3 will send the body using the standard [e2e-predictor] content-length form. Defaults to False. [e2e-predictor] [e2e-predictor] :param int body_pos: [e2e-predictor] Position to seek to in file-like body in the event of a retry or [e2e-predictor] redirect. Typically this won't need to be set because urllib3 will [e2e-predictor] auto-populate the value when needed. 
[e2e-predictor] """ [e2e-predictor] parsed_url = parse_url(url) [e2e-predictor] destination_scheme = parsed_url.scheme [e2e-predictor] [e2e-predictor] if headers is None: [e2e-predictor] headers = self.headers [e2e-predictor] [e2e-predictor] if not isinstance(retries, Retry): [e2e-predictor] retries = Retry.from_int(retries, redirect=redirect, default=self.retries) [e2e-predictor] [e2e-predictor] if release_conn is None: [e2e-predictor] release_conn = preload_content [e2e-predictor] [e2e-predictor] # Check host [e2e-predictor] if assert_same_host and not self.is_same_host(url): [e2e-predictor] raise HostChangedError(self, url, retries) [e2e-predictor] [e2e-predictor] # Ensure that the URL we're connecting to is properly encoded [e2e-predictor] if url.startswith("/"): [e2e-predictor] url = to_str(_encode_target(url)) [e2e-predictor] else: [e2e-predictor] url = to_str(parsed_url.url) [e2e-predictor] [e2e-predictor] conn = None [e2e-predictor] [e2e-predictor] # Track whether `conn` needs to be released before [e2e-predictor] # returning/raising/recursing. Update this variable if necessary, and [e2e-predictor] # leave `release_conn` constant throughout the function. That way, if [e2e-predictor] # the function recurses, the original value of `release_conn` will be [e2e-predictor] # passed down into the recursive call, and its value will be respected. [e2e-predictor] # [e2e-predictor] # See issue #651 [1] for details. [e2e-predictor] # [e2e-predictor] # [1] [e2e-predictor] release_this_conn = release_conn [e2e-predictor] [e2e-predictor] http_tunnel_required = connection_requires_http_tunnel( [e2e-predictor] self.proxy, self.proxy_config, destination_scheme [e2e-predictor] ) [e2e-predictor] [e2e-predictor] # Merge the proxy headers. Only done when not using HTTP CONNECT. We [e2e-predictor] # have to copy the headers dict so we can safely change it without those [e2e-predictor] # changes being reflected in anyone else's copy. 
[e2e-predictor] if not http_tunnel_required: [e2e-predictor] headers = headers.copy() # type: ignore[attr-defined] [e2e-predictor] headers.update(self.proxy_headers) # type: ignore[union-attr] [e2e-predictor] [e2e-predictor] # Must keep the exception bound to a separate variable or else Python 3 [e2e-predictor] # complains about UnboundLocalError. [e2e-predictor] err = None [e2e-predictor] [e2e-predictor] # Keep track of whether we cleanly exited the except block. This [e2e-predictor] # ensures we do proper cleanup in finally. [e2e-predictor] clean_exit = False [e2e-predictor] [e2e-predictor] # Rewind body position, if needed. Record current position [e2e-predictor] # for future rewinds in the event of a redirect/retry. [e2e-predictor] body_pos = set_file_position(body, body_pos) [e2e-predictor] [e2e-predictor] try: [e2e-predictor] # Request a connection from the queue. [e2e-predictor] timeout_obj = self._get_timeout(timeout) [e2e-predictor] conn = self._get_conn(timeout=pool_timeout) [e2e-predictor] [e2e-predictor] conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] [e2e-predictor] [e2e-predictor] # Is this a closed/new connection that requires CONNECT tunnelling? [e2e-predictor] if self.proxy is not None and http_tunnel_required and conn.is_closed: [e2e-predictor] try: [e2e-predictor] self._prepare_proxy(conn) [e2e-predictor] except (BaseSSLError, OSError, SocketTimeout) as e: [e2e-predictor] self._raise_timeout( [e2e-predictor] err=e, url=self.proxy.url, timeout_value=conn.timeout [e2e-predictor] ) [e2e-predictor] raise [e2e-predictor] [e2e-predictor] # If we're going to release the connection in ``finally:``, then [e2e-predictor] # the response doesn't need to know about the connection. Otherwise [e2e-predictor] # it will also try to release it and we'll have a double-release [e2e-predictor] # mess. 
[e2e-predictor] response_conn = conn if not release_conn else None [e2e-predictor] [e2e-predictor] # Make the request on the HTTPConnection object [e2e-predictor] response = self._make_request( [e2e-predictor] conn, [e2e-predictor] method, [e2e-predictor] url, [e2e-predictor] timeout=timeout_obj, [e2e-predictor] body=body, [e2e-predictor] headers=headers, [e2e-predictor] chunked=chunked, [e2e-predictor] retries=retries, [e2e-predictor] response_conn=response_conn, [e2e-predictor] preload_content=preload_content, [e2e-predictor] decode_content=decode_content, [e2e-predictor] **response_kw, [e2e-predictor] ) [e2e-predictor] [e2e-predictor] # Everything went great! [e2e-predictor] clean_exit = True [e2e-predictor] [e2e-predictor] except EmptyPoolError: [e2e-predictor] # Didn't get a connection from the pool, no need to clean up [e2e-predictor] clean_exit = True [e2e-predictor] release_this_conn = False [e2e-predictor] raise [e2e-predictor] [e2e-predictor] except ( [e2e-predictor] TimeoutError, [e2e-predictor] HTTPException, [e2e-predictor] OSError, [e2e-predictor] ProtocolError, [e2e-predictor] BaseSSLError, [e2e-predictor] SSLError, [e2e-predictor] CertificateError, [e2e-predictor] ProxyError, [e2e-predictor] ) as e: [e2e-predictor] # Discard the connection for these exceptions. It will be [e2e-predictor] # replaced during the next _get_conn() call. 
[e2e-predictor] clean_exit = False [e2e-predictor] new_e: Exception = e [e2e-predictor] if isinstance(e, (BaseSSLError, CertificateError)): [e2e-predictor] new_e = SSLError(e) [e2e-predictor] if isinstance( [e2e-predictor] new_e, [e2e-predictor] ( [e2e-predictor] OSError, [e2e-predictor] NewConnectionError, [e2e-predictor] TimeoutError, [e2e-predictor] SSLError, [e2e-predictor] HTTPException, [e2e-predictor] ), [e2e-predictor] ) and (conn and conn.proxy and not conn.has_connected_to_proxy): [e2e-predictor] new_e = _wrap_proxy_error(new_e, conn.proxy.scheme) [e2e-predictor] elif isinstance(new_e, (OSError, HTTPException)): [e2e-predictor] new_e = ProtocolError("Connection aborted.", new_e) [e2e-predictor] [e2e-predictor] retries = retries.increment( [e2e-predictor] method, url, error=new_e, _pool=self, _stacktrace=sys.exc_info()[2] [e2e-predictor] ) [e2e-predictor] retries.sleep() [e2e-predictor] [e2e-predictor] # Keep track of the error for the retry warning. [e2e-predictor] err = e [e2e-predictor] [e2e-predictor] finally: [e2e-predictor] if not clean_exit: [e2e-predictor] # We hit some kind of exception, handled or otherwise. We need [e2e-predictor] # to throw the connection away unless explicitly told not to. [e2e-predictor] # Close the connection, set the variable to None, and make sure [e2e-predictor] # we put the None back in the pool to avoid leaking it. [e2e-predictor] if conn: [e2e-predictor] conn.close() [e2e-predictor] conn = None [e2e-predictor] release_this_conn = True [e2e-predictor] [e2e-predictor] if release_this_conn: [e2e-predictor] # Put the connection back to be reused. If the connection is [e2e-predictor] # expired then it will be None, which will get replaced with a [e2e-predictor] # fresh connection during _get_conn. 
[e2e-predictor] self._put_conn(conn) [e2e-predictor] [e2e-predictor] if not conn: [e2e-predictor] # Try again [e2e-predictor] log.warning( [e2e-predictor] "Retrying (%r) after connection broken by '%r': %s", retries, err, url [e2e-predictor] ) [e2e-predictor] > return self.urlopen( [e2e-predictor] method, [e2e-predictor] url, [e2e-predictor] body, [e2e-predictor] headers, [e2e-predictor] retries, [e2e-predictor] redirect, [e2e-predictor] assert_same_host, [e2e-predictor] timeout=timeout, [e2e-predictor] pool_timeout=pool_timeout, [e2e-predictor] release_conn=release_conn, [e2e-predictor] chunked=chunked, [e2e-predictor] body_pos=body_pos, [e2e-predictor] preload_content=preload_content, [e2e-predictor] decode_content=decode_content, [e2e-predictor] **response_kw, [e2e-predictor] ) [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:871: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] method = 'POST' [e2e-predictor] url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices' [e2e-predictor] body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io..., "requests": {"cpu": "50m", "memory": "128Mi"}}, "storageUri": "gs://kfserving-examples/models/sklearn/1.0/model"}}}}' [e2e-predictor] headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'}) [e2e-predictor] retries = Retry(total=1, connect=None, read=None, redirect=None, status=None) [e2e-predictor] redirect = False, assert_same_host = False, timeout = None, pool_timeout = None [e2e-predictor] release_conn = True, chunked = False, body_pos = None, preload_content = True [e2e-predictor] decode_content = True, response_kw = {} [e2e-predictor] parsed_url = Url(scheme=None, 
auth=None, host=None, port=None, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices', query=None, fragment=None)
[e2e-predictor] destination_scheme = None, conn = None, release_this_conn = True
[e2e-predictor] http_tunnel_required = False
[e2e-predictor] err = NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.c...a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")
[e2e-predictor] clean_exit = False
[e2e-predictor]
[e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:871:
[e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
[e2e-predictor]
[e2e-predictor] self = 
[e2e-predictor] method = 'POST'
[e2e-predictor] url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
[e2e-predictor] body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io..., "requests": {"cpu": "50m", "memory": "128Mi"}}, "storageUri": "gs://kfserving-examples/models/sklearn/1.0/model"}}}}'
[e2e-predictor] headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'})
[e2e-predictor] retries = Retry(total=0, connect=None, read=None, redirect=None, status=None)
[e2e-predictor] redirect = False, assert_same_host = False, timeout = None, pool_timeout = None
[e2e-predictor] release_conn = True, chunked = False, body_pos = None, preload_content = True
[e2e-predictor] decode_content = True, response_kw = {}
[e2e-predictor] parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices', query=None, fragment=None)
[e2e-predictor] destination_scheme = None, conn = None, release_this_conn = True
[e2e-predictor] http_tunnel_required = False
[e2e-predictor] err = NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.c...a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")
[e2e-predictor] clean_exit = False
    #
    # See issue #651 [1] for details.
    #
    # [1]
    release_this_conn = release_conn

    http_tunnel_required = connection_requires_http_tunnel(
        self.proxy, self.proxy_config, destination_scheme
    )

    # Merge the proxy headers. Only done when not using HTTP CONNECT. We
    # have to copy the headers dict so we can safely change it without those
    # changes being reflected in anyone else's copy.
    if not http_tunnel_required:
        headers = headers.copy()  # type: ignore[attr-defined]
        headers.update(self.proxy_headers)  # type: ignore[union-attr]

    # Must keep the exception bound to a separate variable or else Python 3
    # complains about UnboundLocalError.
    err = None

    # Keep track of whether we cleanly exited the except block. This
    # ensures we do proper cleanup in finally.
    clean_exit = False

    # Rewind body position, if needed. Record current position
    # for future rewinds in the event of a redirect/retry.
    body_pos = set_file_position(body, body_pos)

    try:
        # Request a connection from the queue.
        timeout_obj = self._get_timeout(timeout)
        conn = self._get_conn(timeout=pool_timeout)

        conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]

        # Is this a closed/new connection that requires CONNECT tunnelling?
        if self.proxy is not None and http_tunnel_required and conn.is_closed:
            try:
                self._prepare_proxy(conn)
            except (BaseSSLError, OSError, SocketTimeout) as e:
                self._raise_timeout(
                    err=e, url=self.proxy.url, timeout_value=conn.timeout
                )
                raise

        # If we're going to release the connection in ``finally:``, then
        # the response doesn't need to know about the connection. Otherwise
        # it will also try to release it and we'll have a double-release
        # mess.
        response_conn = conn if not release_conn else None

        # Make the request on the HTTPConnection object
        response = self._make_request(
            conn,
            method,
            url,
            timeout=timeout_obj,
            body=body,
            headers=headers,
            chunked=chunked,
            retries=retries,
            response_conn=response_conn,
            preload_content=preload_content,
            decode_content=decode_content,
            **response_kw,
        )

        # Everything went great!
        clean_exit = True

    except EmptyPoolError:
        # Didn't get a connection from the pool, no need to clean up
        clean_exit = True
        release_this_conn = False
        raise

    except (
        TimeoutError,
        HTTPException,
        OSError,
        ProtocolError,
        BaseSSLError,
        SSLError,
        CertificateError,
        ProxyError,
    ) as e:
        # Discard the connection for these exceptions. It will be
        # replaced during the next _get_conn() call.
        clean_exit = False
        new_e: Exception = e
        if isinstance(e, (BaseSSLError, CertificateError)):
            new_e = SSLError(e)
        if isinstance(
            new_e,
            (
                OSError,
                NewConnectionError,
                TimeoutError,
                SSLError,
                HTTPException,
            ),
        ) and (conn and conn.proxy and not conn.has_connected_to_proxy):
            new_e = _wrap_proxy_error(new_e, conn.proxy.scheme)
        elif isinstance(new_e, (OSError, HTTPException)):
            new_e = ProtocolError("Connection aborted.", new_e)

        retries = retries.increment(
            method, url, error=new_e, _pool=self, _stacktrace=sys.exc_info()[2]
        )
        retries.sleep()

        # Keep track of the error for the retry warning.
        err = e

    finally:
        if not clean_exit:
            # We hit some kind of exception, handled or otherwise. We need
            # to throw the connection away unless explicitly told not to.
            # Close the connection, set the variable to None, and make sure
            # we put the None back in the pool to avoid leaking it.
            if conn:
                conn.close()
                conn = None
            release_this_conn = True

        if release_this_conn:
            # Put the connection back to be reused. If the connection is
            # expired then it will be None, which will get replaced with a
            # fresh connection during _get_conn.
            self._put_conn(conn)

    if not conn:
        # Try again
        log.warning(
            "Retrying (%r) after connection broken by '%r': %s", retries, err, url
        )
>       return self.urlopen(
            method,
            url,
            body,
            headers,
            retries,
            redirect,
            assert_same_host,
            timeout=timeout,
            pool_timeout=pool_timeout,
            release_conn=release_conn,
            chunked=chunked,
            body_pos=body_pos,
            preload_content=preload_content,
            decode_content=decode_content,
            **response_kw,
        )

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:871:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
method = 'POST'
url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io..., "requests": {"cpu": "50m", "memory": "128Mi"}}, "storageUri": "gs://kfserving-examples/models/sklearn/1.0/model"}}}}'
headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'})
retries = Retry(total=0, connect=None, read=None, redirect=None, status=None)
redirect = False, assert_same_host = False, timeout = None, pool_timeout = None
release_conn = True, chunked = False, body_pos = None, preload_content = True
decode_content = True, response_kw = {}
parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices', query=None, fragment=None)
destination_scheme = None, conn = None, release_this_conn = True
http_tunnel_required = False, err = None, clean_exit = False

def urlopen(  # type: ignore[override]
    self,
    method: str,
    url: str,
    body: _TYPE_BODY | None = None,
    headers: typing.Mapping[str, str] | None = None,
    retries: Retry | bool | int | None = None,
    redirect: bool = True,
    assert_same_host: bool = True,
    timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
    pool_timeout: int | None = None,
    release_conn: bool | None = None,
    chunked: bool = False,
    body_pos: _TYPE_BODY_POSITION | None = None,
    preload_content: bool = True,
    decode_content: bool = True,
    **response_kw: typing.Any,
) -> BaseHTTPResponse:
    """
    Get a connection from the pool and perform an HTTP request. This is the
    lowest level call for making a request, so you'll need to specify all
    the raw details.

    .. note::

        More commonly, it's appropriate to use a convenience method
        such as :meth:`request`.

    .. note::

        `release_conn` will only behave as expected if
        `preload_content=False` because we want to make
        `preload_content=False` the default behaviour someday soon without
        breaking backwards compatibility.

    :param method:
        HTTP request method (such as GET, POST, PUT, etc.)
    :param url:
        The URL to perform the request on.

    :param body:
        Data to send in the request body, either :class:`str`, :class:`bytes`,
        an iterable of :class:`str`/:class:`bytes`, or a file-like object.

    :param headers:
        Dictionary of custom headers to send, such as User-Agent,
        If-None-Match, etc. If None, pool headers are used. If provided,
        these headers completely replace any pool-specific headers.

    :param retries:
        Configure the number of retries to allow before raising a
        :class:`~urllib3.exceptions.MaxRetryError` exception.

        If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a
        :class:`~urllib3.util.retry.Retry` object for fine-grained control
        over different types of retries.
        Pass an integer number to retry connection errors that many times,
        but no other types of errors. Pass zero to never retry.

        If ``False``, then retries are disabled and any exception is raised
        immediately. Also, instead of raising a MaxRetryError on redirects,
        the redirect response will be returned.

    :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.

    :param redirect:
        If True, automatically handle redirects (status codes 301, 302,
        303, 307, 308). Each redirect counts as a retry. Disabling retries
        will disable redirect, too.

    :param assert_same_host:
        If ``True``, will make sure that the host of the pool requests is
        consistent else will raise HostChangedError. When ``False``, you can
        use the pool on an HTTP proxy and request foreign hosts.

    :param timeout:
        If specified, overrides the default timeout for this one
        request. It may be a float (in seconds) or an instance of
        :class:`urllib3.util.Timeout`.

    :param pool_timeout:
        If set and the pool is set to block=True, then this method will
        block for ``pool_timeout`` seconds and raise EmptyPoolError if no
        connection is available within the time period.

    :param bool preload_content:
        If True, the response's body will be preloaded into memory.

    :param bool decode_content:
        If True, will attempt to decode the body based on the
        'content-encoding' header.

    :param release_conn:
        If False, then the urlopen call will not release the connection
        back into the pool once a response is received (but will release if
        you read the entire contents of the response such as when
        `preload_content=True`). This is useful if you're not preloading
        the response's content immediately. You will need to call
        ``r.release_conn()`` on the response ``r`` to return the connection
        back into the pool. If None, it takes the value of ``preload_content``
        which defaults to ``True``.

    :param bool chunked:
        If True, urllib3 will send the body using chunked transfer
        encoding. Otherwise, urllib3 will send the body using the standard
        content-length form. Defaults to False.

    :param int body_pos:
        Position to seek to in file-like body in the event of a retry or
        redirect. Typically this won't need to be set because urllib3 will
        auto-populate the value when needed.
    """
    parsed_url = parse_url(url)
    destination_scheme = parsed_url.scheme

    if headers is None:
        headers = self.headers

    if not isinstance(retries, Retry):
        retries = Retry.from_int(retries, redirect=redirect, default=self.retries)

    if release_conn is None:
        release_conn = preload_content

    # Check host
    if assert_same_host and not self.is_same_host(url):
        raise HostChangedError(self, url, retries)

    # Ensure that the URL we're connecting to is properly encoded
    if url.startswith("/"):
        url = to_str(_encode_target(url))
    else:
        url = to_str(parsed_url.url)

    conn = None

    # Track whether `conn` needs to be released before
    # returning/raising/recursing. Update this variable if necessary, and
    # leave `release_conn` constant throughout the function. That way, if
    # the function recurses, the original value of `release_conn` will be
    # passed down into the recursive call, and its value will be respected.
    #
    # See issue #651 [1] for details.
    #
    # [1]
    release_this_conn = release_conn

    http_tunnel_required = connection_requires_http_tunnel(
        self.proxy, self.proxy_config, destination_scheme
    )

    # Merge the proxy headers. Only done when not using HTTP CONNECT. We
    # have to copy the headers dict so we can safely change it without those
    # changes being reflected in anyone else's copy.
    if not http_tunnel_required:
        headers = headers.copy()  # type: ignore[attr-defined]
        headers.update(self.proxy_headers)  # type: ignore[union-attr]

    # Must keep the exception bound to a separate variable or else Python 3
    # complains about UnboundLocalError.
    err = None

    # Keep track of whether we cleanly exited the except block. This
    # ensures we do proper cleanup in finally.
    clean_exit = False

    # Rewind body position, if needed. Record current position
    # for future rewinds in the event of a redirect/retry.
    body_pos = set_file_position(body, body_pos)

    try:
        # Request a connection from the queue.
        timeout_obj = self._get_timeout(timeout)
        conn = self._get_conn(timeout=pool_timeout)

        conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]

        # Is this a closed/new connection that requires CONNECT tunnelling?
        if self.proxy is not None and http_tunnel_required and conn.is_closed:
            try:
                self._prepare_proxy(conn)
            except (BaseSSLError, OSError, SocketTimeout) as e:
                self._raise_timeout(
                    err=e, url=self.proxy.url, timeout_value=conn.timeout
                )
                raise

        # If we're going to release the connection in ``finally:``, then
        # the response doesn't need to know about the connection. Otherwise
        # it will also try to release it and we'll have a double-release
        # mess.
        response_conn = conn if not release_conn else None

        # Make the request on the HTTPConnection object
        response = self._make_request(
            conn,
            method,
            url,
            timeout=timeout_obj,
            body=body,
            headers=headers,
            chunked=chunked,
            retries=retries,
            response_conn=response_conn,
            preload_content=preload_content,
            decode_content=decode_content,
            **response_kw,
        )

        # Everything went great!
        clean_exit = True

    except EmptyPoolError:
        # Didn't get a connection from the pool, no need to clean up
        clean_exit = True
        release_this_conn = False
        raise

    except (
        TimeoutError,
        HTTPException,
        OSError,
        ProtocolError,
        BaseSSLError,
        SSLError,
        CertificateError,
        ProxyError,
    ) as e:
        # Discard the connection for these exceptions. It will be
        # replaced during the next _get_conn() call.
        clean_exit = False
        new_e: Exception = e
        if isinstance(e, (BaseSSLError, CertificateError)):
            new_e = SSLError(e)
        if isinstance(
            new_e,
            (
                OSError,
                NewConnectionError,
                TimeoutError,
                SSLError,
                HTTPException,
            ),
        ) and (conn and conn.proxy and not conn.has_connected_to_proxy):
            new_e = _wrap_proxy_error(new_e, conn.proxy.scheme)
        elif isinstance(new_e, (OSError, HTTPException)):
            new_e = ProtocolError("Connection aborted.", new_e)

>       retries = retries.increment(
            method, url, error=new_e, _pool=self, _stacktrace=sys.exc_info()[2]
        )

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:841:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = Retry(total=0, connect=None, read=None, redirect=None, status=None)
method = 'POST'
url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
response = None
error = NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.c...a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")
_pool =
_stacktrace =

def increment(
    self,
    method: str | None = None,
    url: str | None = None,
    response: BaseHTTPResponse | None = None,
    error: Exception | None = None,
    _pool: ConnectionPool | None = None,
    _stacktrace: TracebackType | None = None,
) -> Self:
    """Return a new Retry object with incremented retry counters.

    :param response: A response object, or None, if the server did not
        return a response.
    :type response: :class:`~urllib3.response.BaseHTTPResponse`
    :param Exception error: An error encountered during the request, or
        None if the response was received successfully.

    :return: A new ``Retry`` object.
    """
    if self.total is False and error:
        # Disabled, indicate to re-raise the error.
        raise reraise(type(error), error, _stacktrace)

    total = self.total
    if total is not None:
        total -= 1

    connect = self.connect
    read = self.read
    redirect = self.redirect
    status_count = self.status
    other = self.other
    cause = "unknown"
    status = None
    redirect_location = None

    if error and self._is_connection_error(error):
        # Connect retry?
        if connect is False:
            raise reraise(type(error), error, _stacktrace)
        elif connect is not None:
            connect -= 1

    elif error and self._is_read_error(error):
        # Read retry?
        if read is False or method is None or not self._is_method_retryable(method):
            raise reraise(type(error), error, _stacktrace)
        elif read is not None:
            read -= 1

    elif error:
        # Other retry?
        if other is not None:
            other -= 1

    elif response and response.get_redirect_location():
        # Redirect retry?
        if redirect is not None:
            redirect -= 1
        cause = "too many redirects"
        response_redirect_location = response.get_redirect_location()
        if response_redirect_location:
            redirect_location = response_redirect_location
        status = response.status

    else:
        # Incrementing because of a server error like a 500 in
        # status_forcelist and the given method is in the allowed_methods
        cause = ResponseError.GENERIC_ERROR
        if response and response.status:
            if status_count is not None:
                status_count -= 1
            cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
            status = response.status

    history = self.history + (
        RequestHistory(method, url, error, status, redirect_location),
    )

    new_retry = self.new(
        total=total,
        connect=connect,
        read=read,
        redirect=redirect,
        status=status_count,
        other=other,
        history=history,
    )

    if new_retry.is_exhausted():
        reason = error or ResponseError(cause)
>       raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
E       urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Max retries exceeded with url: /apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices (Caused by NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)"))

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError
------------------------------ Captured log call -------------------------------
WARNING  urllib3.connectionpool:connectionpool.py:868 Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")': /apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices
WARNING  urllib3.connectionpool:connectionpool.py:868 Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")': /apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices
WARNING  urllib3.connectionpool:connectionpool.py:868 Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")': /apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices
_________________________ test_sklearn_runtime_kserve __________________________
[gw0] linux -- Python 3.11.13 /workspace/source/python/kserve/.venv/bin/python

self =

def _new_conn(self) -> socket.socket:
    """Establish a socket connection and set nodelay settings on it.

    :return: New socket connection.
    """
    try:
>       sock = connection.create_connection(
            (self._dns_host, self.port),
            self.timeout,
            source_address=self.source_address,
            socket_options=self.socket_options,
        )

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connection.py:204:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

address = ('a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', 6443)
timeout = None, source_address = None, socket_options = [(6, 1, 1)]

def create_connection(
    address: tuple[str, int],
    timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
    source_address: tuple[str, int] | None = None,
    socket_options: _TYPE_SOCKET_OPTIONS | None = None,
) -> socket.socket:
    """Connect to *address* and return the socket object.

    Convenience function. Connect to *address* (a 2-tuple ``(host,
    port)``) and return the socket object. Passing the optional
    *timeout* parameter will set the timeout on the socket instance
    before attempting to connect.
    If no *timeout* is supplied, the
    global default timeout setting returned by :func:`socket.getdefaulttimeout`
    is used. If *source_address* is set it must be a tuple of (host, port)
    for the socket to bind as a source address before making the connection.
    An host of '' or port 0 tells the OS to use the default.
    """

    host, port = address
    if host.startswith("["):
        host = host.strip("[]")
    err = None

    # Using the value from allowed_gai_family() in the context of getaddrinfo lets
    # us select whether to work with IPv4 DNS records, IPv6 records, or both.
    # The original create_connection function always returns all records.
    family = allowed_gai_family()

    try:
        host.encode("idna")
    except UnicodeError:
        raise LocationParseError(f"'{host}', label empty or too long") from None

>   for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/util/connection.py:60:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

host = 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com'
port = 6443, family =
type = , proto = 0, flags = 0

def getaddrinfo(host, port, family=0, type=0, proto=0, flags=0):
    """Resolve host and port into list of address info entries.

    Translate the host/port argument into a sequence of 5-tuples that contain
    all the necessary arguments for creating a socket connected to that service.

    host is a domain name, a string representation of an IPv4/v6 address or
    None. port is a string service name such as 'http', a numeric port number or
    None. By passing None as the value of host and port, you can pass NULL to
    the underlying C API.

    The family, type and proto arguments can be optionally specified in order to
    narrow the list of addresses returned. Passing zero as a value for each of
    these arguments selects the full range of results.
    """
    # We override this function since we want to translate the numeric family
    # and socket type values to enum constants.
    addrlist = []
>   for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
E   socket.gaierror: [Errno -2] Name or service not known

/usr/lib64/python3.11/socket.py:974: gaierror

The above exception was the direct cause of the following exception:

self =
method = 'POST'
url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io...ts": {"cpu": "2", "memory": "2Gi"}}, "storageUri": "gs://kfserving-examples/models/sklearn/newsgroup/model.joblib"}}}}'
headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'})
retries = Retry(total=0, connect=None, read=None, redirect=None, status=None)
redirect = False, assert_same_host = False, timeout = None, pool_timeout = None
release_conn = True, chunked = False, body_pos = None, preload_content = True
decode_content = True, response_kw = {}
parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices', query=None, fragment=None)
destination_scheme = None, conn = None, release_this_conn = True
http_tunnel_required = False, err = None, clean_exit = False

def urlopen(  # type: ignore[override]
    self,
    method: str,
    url: str,
    body: _TYPE_BODY | None = None,
    headers: typing.Mapping[str, str] | None = None,
    retries: Retry | bool | int | None = None,
    redirect: bool = True,
    assert_same_host: bool = True,
    timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
    pool_timeout: int | None = None,
    release_conn: bool | None = None,
    chunked: bool = False,
    body_pos: _TYPE_BODY_POSITION | None = None,
    preload_content: bool = True,
    decode_content: bool = True,
    **response_kw: typing.Any,
) -> BaseHTTPResponse:
    """
    Get a connection from the pool and perform an HTTP request. This is the
    lowest level call for making a request, so you'll need to specify all
    the raw details.

    .. note::

        More commonly, it's appropriate to use a convenience method
        such as :meth:`request`.

    .. note::

        `release_conn` will only behave as expected if
        `preload_content=False` because we want to make
        `preload_content=False` the default behaviour someday soon without
        breaking backwards compatibility.
[e2e-predictor] [e2e-predictor] :param method: [e2e-predictor] HTTP request method (such as GET, POST, PUT, etc.) [e2e-predictor] [e2e-predictor] :param url: [e2e-predictor] The URL to perform the request on. [e2e-predictor] [e2e-predictor] :param body: [e2e-predictor] Data to send in the request body, either :class:`str`, :class:`bytes`, [e2e-predictor] an iterable of :class:`str`/:class:`bytes`, or a file-like object. [e2e-predictor] [e2e-predictor] :param headers: [e2e-predictor] Dictionary of custom headers to send, such as User-Agent, [e2e-predictor] If-None-Match, etc. If None, pool headers are used. If provided, [e2e-predictor] these headers completely replace any pool-specific headers. [e2e-predictor] [e2e-predictor] :param retries: [e2e-predictor] Configure the number of retries to allow before raising a [e2e-predictor] :class:`~urllib3.exceptions.MaxRetryError` exception. [e2e-predictor] [e2e-predictor] If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a [e2e-predictor] :class:`~urllib3.util.retry.Retry` object for fine-grained control [e2e-predictor] over different types of retries. [e2e-predictor] Pass an integer number to retry connection errors that many times, [e2e-predictor] but no other types of errors. Pass zero to never retry. [e2e-predictor] [e2e-predictor] If ``False``, then retries are disabled and any exception is raised [e2e-predictor] immediately. Also, instead of raising a MaxRetryError on redirects, [e2e-predictor] the redirect response will be returned. [e2e-predictor] [e2e-predictor] :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. [e2e-predictor] [e2e-predictor] :param redirect: [e2e-predictor] If True, automatically handle redirects (status codes 301, 302, [e2e-predictor] 303, 307, 308). Each redirect counts as a retry. Disabling retries [e2e-predictor] will disable redirect, too. 
[e2e-predictor] [e2e-predictor] :param assert_same_host: [e2e-predictor] If ``True``, will make sure that the host of the pool requests is [e2e-predictor] consistent else will raise HostChangedError. When ``False``, you can [e2e-predictor] use the pool on an HTTP proxy and request foreign hosts. [e2e-predictor] [e2e-predictor] :param timeout: [e2e-predictor] If specified, overrides the default timeout for this one [e2e-predictor] request. It may be a float (in seconds) or an instance of [e2e-predictor] :class:`urllib3.util.Timeout`. [e2e-predictor] [e2e-predictor] :param pool_timeout: [e2e-predictor] If set and the pool is set to block=True, then this method will [e2e-predictor] block for ``pool_timeout`` seconds and raise EmptyPoolError if no [e2e-predictor] connection is available within the time period. [e2e-predictor] [e2e-predictor] :param bool preload_content: [e2e-predictor] If True, the response's body will be preloaded into memory. [e2e-predictor] [e2e-predictor] :param bool decode_content: [e2e-predictor] If True, will attempt to decode the body based on the [e2e-predictor] 'content-encoding' header. [e2e-predictor] [e2e-predictor] :param release_conn: [e2e-predictor] If False, then the urlopen call will not release the connection [e2e-predictor] back into the pool once a response is received (but will release if [e2e-predictor] you read the entire contents of the response such as when [e2e-predictor] `preload_content=True`). This is useful if you're not preloading [e2e-predictor] the response's content immediately. You will need to call [e2e-predictor] ``r.release_conn()`` on the response ``r`` to return the connection [e2e-predictor] back into the pool. If None, it takes the value of ``preload_content`` [e2e-predictor] which defaults to ``True``. [e2e-predictor] [e2e-predictor] :param bool chunked: [e2e-predictor] If True, urllib3 will send the body using chunked transfer [e2e-predictor] encoding. 
Otherwise, urllib3 will send the body using the standard [e2e-predictor] content-length form. Defaults to False. [e2e-predictor] [e2e-predictor] :param int body_pos: [e2e-predictor] Position to seek to in file-like body in the event of a retry or [e2e-predictor] redirect. Typically this won't need to be set because urllib3 will [e2e-predictor] auto-populate the value when needed. [e2e-predictor] """ [e2e-predictor] parsed_url = parse_url(url) [e2e-predictor] destination_scheme = parsed_url.scheme [e2e-predictor] [e2e-predictor] if headers is None: [e2e-predictor] headers = self.headers [e2e-predictor] [e2e-predictor] if not isinstance(retries, Retry): [e2e-predictor] retries = Retry.from_int(retries, redirect=redirect, default=self.retries) [e2e-predictor] [e2e-predictor] if release_conn is None: [e2e-predictor] release_conn = preload_content [e2e-predictor] [e2e-predictor] # Check host [e2e-predictor] if assert_same_host and not self.is_same_host(url): [e2e-predictor] raise HostChangedError(self, url, retries) [e2e-predictor] [e2e-predictor] # Ensure that the URL we're connecting to is properly encoded [e2e-predictor] if url.startswith("/"): [e2e-predictor] url = to_str(_encode_target(url)) [e2e-predictor] else: [e2e-predictor] url = to_str(parsed_url.url) [e2e-predictor] [e2e-predictor] conn = None [e2e-predictor] [e2e-predictor] # Track whether `conn` needs to be released before [e2e-predictor] # returning/raising/recursing. Update this variable if necessary, and [e2e-predictor] # leave `release_conn` constant throughout the function. That way, if [e2e-predictor] # the function recurses, the original value of `release_conn` will be [e2e-predictor] # passed down into the recursive call, and its value will be respected. [e2e-predictor] # [e2e-predictor] # See issue #651 [1] for details. 
[e2e-predictor] # [e2e-predictor] # [1] [e2e-predictor] release_this_conn = release_conn [e2e-predictor] [e2e-predictor] http_tunnel_required = connection_requires_http_tunnel( [e2e-predictor] self.proxy, self.proxy_config, destination_scheme [e2e-predictor] ) [e2e-predictor] [e2e-predictor] # Merge the proxy headers. Only done when not using HTTP CONNECT. We [e2e-predictor] # have to copy the headers dict so we can safely change it without those [e2e-predictor] # changes being reflected in anyone else's copy. [e2e-predictor] if not http_tunnel_required: [e2e-predictor] headers = headers.copy() # type: ignore[attr-defined] [e2e-predictor] headers.update(self.proxy_headers) # type: ignore[union-attr] [e2e-predictor] [e2e-predictor] # Must keep the exception bound to a separate variable or else Python 3 [e2e-predictor] # complains about UnboundLocalError. [e2e-predictor] err = None [e2e-predictor] [e2e-predictor] # Keep track of whether we cleanly exited the except block. This [e2e-predictor] # ensures we do proper cleanup in finally. [e2e-predictor] clean_exit = False [e2e-predictor] [e2e-predictor] # Rewind body position, if needed. Record current position [e2e-predictor] # for future rewinds in the event of a redirect/retry. [e2e-predictor] body_pos = set_file_position(body, body_pos) [e2e-predictor] [e2e-predictor] try: [e2e-predictor] # Request a connection from the queue. [e2e-predictor] timeout_obj = self._get_timeout(timeout) [e2e-predictor] conn = self._get_conn(timeout=pool_timeout) [e2e-predictor] [e2e-predictor] conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] [e2e-predictor] [e2e-predictor] # Is this a closed/new connection that requires CONNECT tunnelling? 
[e2e-predictor] if self.proxy is not None and http_tunnel_required and conn.is_closed: [e2e-predictor] try: [e2e-predictor] self._prepare_proxy(conn) [e2e-predictor] except (BaseSSLError, OSError, SocketTimeout) as e: [e2e-predictor] self._raise_timeout( [e2e-predictor] err=e, url=self.proxy.url, timeout_value=conn.timeout [e2e-predictor] ) [e2e-predictor] raise [e2e-predictor] [e2e-predictor] # If we're going to release the connection in ``finally:``, then [e2e-predictor] # the response doesn't need to know about the connection. Otherwise [e2e-predictor] # it will also try to release it and we'll have a double-release [e2e-predictor] # mess. [e2e-predictor] response_conn = conn if not release_conn else None [e2e-predictor] [e2e-predictor] # Make the request on the HTTPConnection object [e2e-predictor] > response = self._make_request( [e2e-predictor] conn, [e2e-predictor] method, [e2e-predictor] url, [e2e-predictor] timeout=timeout_obj, [e2e-predictor] body=body, [e2e-predictor] headers=headers, [e2e-predictor] chunked=chunked, [e2e-predictor] retries=retries, [e2e-predictor] response_conn=response_conn, [e2e-predictor] preload_content=preload_content, [e2e-predictor] decode_content=decode_content, [e2e-predictor] **response_kw, [e2e-predictor] ) [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:787: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] conn = [e2e-predictor] method = 'POST' [e2e-predictor] url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices' [e2e-predictor] body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io...ts": {"cpu": "2", "memory": "2Gi"}}, "storageUri": "gs://kfserving-examples/models/sklearn/newsgroup/model.joblib"}}}}' [e2e-predictor] headers = HTTPHeaderDict({'Accept': 
'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'}) [e2e-predictor] retries = Retry(total=0, connect=None, read=None, redirect=None, status=None) [e2e-predictor] timeout = Timeout(connect=None, read=None, total=None), chunked = False [e2e-predictor] response_conn = None, preload_content = True, decode_content = True [e2e-predictor] enforce_content_length = True [e2e-predictor] [e2e-predictor] def _make_request( [e2e-predictor] self, [e2e-predictor] conn: BaseHTTPConnection, [e2e-predictor] method: str, [e2e-predictor] url: str, [e2e-predictor] body: _TYPE_BODY | None = None, [e2e-predictor] headers: typing.Mapping[str, str] | None = None, [e2e-predictor] retries: Retry | None = None, [e2e-predictor] timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, [e2e-predictor] chunked: bool = False, [e2e-predictor] response_conn: BaseHTTPConnection | None = None, [e2e-predictor] preload_content: bool = True, [e2e-predictor] decode_content: bool = True, [e2e-predictor] enforce_content_length: bool = True, [e2e-predictor] ) -> BaseHTTPResponse: [e2e-predictor] """ [e2e-predictor] Perform a request on a given urllib connection object taken from our [e2e-predictor] pool. [e2e-predictor] [e2e-predictor] :param conn: [e2e-predictor] a connection from one of our connection pools [e2e-predictor] [e2e-predictor] :param method: [e2e-predictor] HTTP request method (such as GET, POST, PUT, etc.) [e2e-predictor] [e2e-predictor] :param url: [e2e-predictor] The URL to perform the request on. [e2e-predictor] [e2e-predictor] :param body: [e2e-predictor] Data to send in the request body, either :class:`str`, :class:`bytes`, [e2e-predictor] an iterable of :class:`str`/:class:`bytes`, or a file-like object. [e2e-predictor] [e2e-predictor] :param headers: [e2e-predictor] Dictionary of custom headers to send, such as User-Agent, [e2e-predictor] If-None-Match, etc. If None, pool headers are used. 
If provided, [e2e-predictor] these headers completely replace any pool-specific headers. [e2e-predictor] [e2e-predictor] :param retries: [e2e-predictor] Configure the number of retries to allow before raising a [e2e-predictor] :class:`~urllib3.exceptions.MaxRetryError` exception. [e2e-predictor] [e2e-predictor] Pass ``None`` to retry until you receive a response. Pass a [e2e-predictor] :class:`~urllib3.util.retry.Retry` object for fine-grained control [e2e-predictor] over different types of retries. [e2e-predictor] Pass an integer number to retry connection errors that many times, [e2e-predictor] but no other types of errors. Pass zero to never retry. [e2e-predictor] [e2e-predictor] If ``False``, then retries are disabled and any exception is raised [e2e-predictor] immediately. Also, instead of raising a MaxRetryError on redirects, [e2e-predictor] the redirect response will be returned. [e2e-predictor] [e2e-predictor] :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. [e2e-predictor] [e2e-predictor] :param timeout: [e2e-predictor] If specified, overrides the default timeout for this one [e2e-predictor] request. It may be a float (in seconds) or an instance of [e2e-predictor] :class:`urllib3.util.Timeout`. [e2e-predictor] [e2e-predictor] :param chunked: [e2e-predictor] If True, urllib3 will send the body using chunked transfer [e2e-predictor] encoding. Otherwise, urllib3 will send the body using the standard [e2e-predictor] content-length form. Defaults to False. [e2e-predictor] [e2e-predictor] :param response_conn: [e2e-predictor] Set this to ``None`` if you will handle releasing the connection or [e2e-predictor] set the connection to have the response release it. [e2e-predictor] [e2e-predictor] :param preload_content: [e2e-predictor] If True, the response's body will be preloaded during construction. 
[e2e-predictor] [e2e-predictor] :param decode_content: [e2e-predictor] If True, will attempt to decode the body based on the [e2e-predictor] 'content-encoding' header. [e2e-predictor] [e2e-predictor] :param enforce_content_length: [e2e-predictor] Enforce content length checking. Body returned by server must match [e2e-predictor] value of Content-Length header, if present. Otherwise, raise error. [e2e-predictor] """ [e2e-predictor] self.num_requests += 1 [e2e-predictor] [e2e-predictor] timeout_obj = self._get_timeout(timeout) [e2e-predictor] timeout_obj.start_connect() [e2e-predictor] conn.timeout = Timeout.resolve_default_timeout(timeout_obj.connect_timeout) [e2e-predictor] [e2e-predictor] try: [e2e-predictor] # Trigger any extra validation we need to do. [e2e-predictor] try: [e2e-predictor] self._validate_conn(conn) [e2e-predictor] except (SocketTimeout, BaseSSLError) as e: [e2e-predictor] self._raise_timeout(err=e, url=url, timeout_value=conn.timeout) [e2e-predictor] raise [e2e-predictor] [e2e-predictor] # _validate_conn() starts the connection to an HTTPS proxy [e2e-predictor] # so we need to wrap errors with 'ProxyError' here too. 
[e2e-predictor] except ( [e2e-predictor] OSError, [e2e-predictor] NewConnectionError, [e2e-predictor] TimeoutError, [e2e-predictor] BaseSSLError, [e2e-predictor] CertificateError, [e2e-predictor] SSLError, [e2e-predictor] ) as e: [e2e-predictor] new_e: Exception = e [e2e-predictor] if isinstance(e, (BaseSSLError, CertificateError)): [e2e-predictor] new_e = SSLError(e) [e2e-predictor] # If the connection didn't successfully connect to it's proxy [e2e-predictor] # then there [e2e-predictor] if isinstance( [e2e-predictor] new_e, (OSError, NewConnectionError, TimeoutError, SSLError) [e2e-predictor] ) and (conn and conn.proxy and not conn.has_connected_to_proxy): [e2e-predictor] new_e = _wrap_proxy_error(new_e, conn.proxy.scheme) [e2e-predictor] > raise new_e [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:488: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] conn = [e2e-predictor] method = 'POST' [e2e-predictor] url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices' [e2e-predictor] body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io...ts": {"cpu": "2", "memory": "2Gi"}}, "storageUri": "gs://kfserving-examples/models/sklearn/newsgroup/model.joblib"}}}}' [e2e-predictor] headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'}) [e2e-predictor] retries = Retry(total=0, connect=None, read=None, redirect=None, status=None) [e2e-predictor] timeout = Timeout(connect=None, read=None, total=None), chunked = False [e2e-predictor] response_conn = None, preload_content = True, decode_content = True [e2e-predictor] enforce_content_length = True [e2e-predictor] [e2e-predictor] def _make_request( [e2e-predictor] self, 
[e2e-predictor] conn: BaseHTTPConnection, [e2e-predictor] method: str, [e2e-predictor] url: str, [e2e-predictor] body: _TYPE_BODY | None = None, [e2e-predictor] headers: typing.Mapping[str, str] | None = None, [e2e-predictor] retries: Retry | None = None, [e2e-predictor] timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, [e2e-predictor] chunked: bool = False, [e2e-predictor] response_conn: BaseHTTPConnection | None = None, [e2e-predictor] preload_content: bool = True, [e2e-predictor] decode_content: bool = True, [e2e-predictor] enforce_content_length: bool = True, [e2e-predictor] ) -> BaseHTTPResponse: [e2e-predictor] """ [e2e-predictor] Perform a request on a given urllib connection object taken from our [e2e-predictor] pool. [e2e-predictor] [e2e-predictor] :param conn: [e2e-predictor] a connection from one of our connection pools [e2e-predictor] [e2e-predictor] :param method: [e2e-predictor] HTTP request method (such as GET, POST, PUT, etc.) [e2e-predictor] [e2e-predictor] :param url: [e2e-predictor] The URL to perform the request on. [e2e-predictor] [e2e-predictor] :param body: [e2e-predictor] Data to send in the request body, either :class:`str`, :class:`bytes`, [e2e-predictor] an iterable of :class:`str`/:class:`bytes`, or a file-like object. [e2e-predictor] [e2e-predictor] :param headers: [e2e-predictor] Dictionary of custom headers to send, such as User-Agent, [e2e-predictor] If-None-Match, etc. If None, pool headers are used. If provided, [e2e-predictor] these headers completely replace any pool-specific headers. [e2e-predictor] [e2e-predictor] :param retries: [e2e-predictor] Configure the number of retries to allow before raising a [e2e-predictor] :class:`~urllib3.exceptions.MaxRetryError` exception. [e2e-predictor] [e2e-predictor] Pass ``None`` to retry until you receive a response. Pass a [e2e-predictor] :class:`~urllib3.util.retry.Retry` object for fine-grained control [e2e-predictor] over different types of retries. 
[e2e-predictor] Pass an integer number to retry connection errors that many times, [e2e-predictor] but no other types of errors. Pass zero to never retry. [e2e-predictor] [e2e-predictor] If ``False``, then retries are disabled and any exception is raised [e2e-predictor] immediately. Also, instead of raising a MaxRetryError on redirects, [e2e-predictor] the redirect response will be returned. [e2e-predictor] [e2e-predictor] :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. [e2e-predictor] [e2e-predictor] :param timeout: [e2e-predictor] If specified, overrides the default timeout for this one [e2e-predictor] request. It may be a float (in seconds) or an instance of [e2e-predictor] :class:`urllib3.util.Timeout`. [e2e-predictor] [e2e-predictor] :param chunked: [e2e-predictor] If True, urllib3 will send the body using chunked transfer [e2e-predictor] encoding. Otherwise, urllib3 will send the body using the standard [e2e-predictor] content-length form. Defaults to False. [e2e-predictor] [e2e-predictor] :param response_conn: [e2e-predictor] Set this to ``None`` if you will handle releasing the connection or [e2e-predictor] set the connection to have the response release it. [e2e-predictor] [e2e-predictor] :param preload_content: [e2e-predictor] If True, the response's body will be preloaded during construction. [e2e-predictor] [e2e-predictor] :param decode_content: [e2e-predictor] If True, will attempt to decode the body based on the [e2e-predictor] 'content-encoding' header. [e2e-predictor] [e2e-predictor] :param enforce_content_length: [e2e-predictor] Enforce content length checking. Body returned by server must match [e2e-predictor] value of Content-Length header, if present. Otherwise, raise error. 
[e2e-predictor] """ [e2e-predictor] self.num_requests += 1 [e2e-predictor] [e2e-predictor] timeout_obj = self._get_timeout(timeout) [e2e-predictor] timeout_obj.start_connect() [e2e-predictor] conn.timeout = Timeout.resolve_default_timeout(timeout_obj.connect_timeout) [e2e-predictor] [e2e-predictor] try: [e2e-predictor] # Trigger any extra validation we need to do. [e2e-predictor] try: [e2e-predictor] > self._validate_conn(conn) [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:464: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] conn = [e2e-predictor] [e2e-predictor] def _validate_conn(self, conn: BaseHTTPConnection) -> None: [e2e-predictor] """ [e2e-predictor] Called right before a request is made, after the socket is created. [e2e-predictor] """ [e2e-predictor] super()._validate_conn(conn) [e2e-predictor] [e2e-predictor] # Force connect early to allow us to validate the connection. [e2e-predictor] if conn.is_closed: [e2e-predictor] > conn.connect() [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:1093: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] [e2e-predictor] def connect(self) -> None: [e2e-predictor] # Today we don't need to be doing this step before the /actual/ socket [e2e-predictor] # connection, however in the future we'll need to decide whether to [e2e-predictor] # create a new socket or re-use an existing "shared" socket as a part [e2e-predictor] # of the HTTP/2 handshake dance. 
[e2e-predictor] if self._tunnel_host is not None and self._tunnel_port is not None: [e2e-predictor] probe_http2_host = self._tunnel_host [e2e-predictor] probe_http2_port = self._tunnel_port [e2e-predictor] else: [e2e-predictor] probe_http2_host = self.host [e2e-predictor] probe_http2_port = self.port [e2e-predictor] [e2e-predictor] # Check if the target origin supports HTTP/2. [e2e-predictor] # If the value comes back as 'None' it means that the current thread [e2e-predictor] # is probing for HTTP/2 support. Otherwise, we're waiting for another [e2e-predictor] # probe to complete, or we get a value right away. [e2e-predictor] target_supports_http2: bool | None [e2e-predictor] if "h2" in ssl_.ALPN_PROTOCOLS: [e2e-predictor] target_supports_http2 = http2_probe.acquire_and_get( [e2e-predictor] host=probe_http2_host, port=probe_http2_port [e2e-predictor] ) [e2e-predictor] else: [e2e-predictor] # If HTTP/2 isn't going to be offered it doesn't matter if [e2e-predictor] # the target supports HTTP/2. Don't want to make a probe. [e2e-predictor] target_supports_http2 = False [e2e-predictor] [e2e-predictor] if self._connect_callback is not None: [e2e-predictor] self._connect_callback( [e2e-predictor] "before connect", [e2e-predictor] thread_id=threading.get_ident(), [e2e-predictor] target_supports_http2=target_supports_http2, [e2e-predictor] ) [e2e-predictor] [e2e-predictor] try: [e2e-predictor] sock: socket.socket | ssl.SSLSocket [e2e-predictor] > self.sock = sock = self._new_conn() [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connection.py:759: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] [e2e-predictor] def _new_conn(self) -> socket.socket: [e2e-predictor] """Establish a socket connection and set nodelay settings on it. [e2e-predictor] [e2e-predictor] :return: New socket connection. 
[e2e-predictor] """ [e2e-predictor] try: [e2e-predictor] sock = connection.create_connection( [e2e-predictor] (self._dns_host, self.port), [e2e-predictor] self.timeout, [e2e-predictor] source_address=self.source_address, [e2e-predictor] socket_options=self.socket_options, [e2e-predictor] ) [e2e-predictor] except socket.gaierror as e: [e2e-predictor] > raise NameResolutionError(self.host, self, e) from e [e2e-predictor] E urllib3.exceptions.NameResolutionError: HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known) [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connection.py:211: NameResolutionError [e2e-predictor] [e2e-predictor] The above exception was the direct cause of the following exception: [e2e-predictor] [e2e-predictor] rest_v1_client = [e2e-predictor] [e2e-predictor] @pytest.mark.predictor [e2e-predictor] @pytest.mark.kourier [e2e-predictor] @pytest.mark.asyncio(scope="session") [e2e-predictor] async def test_sklearn_runtime_kserve(rest_v1_client): [e2e-predictor] service_name = "isvc-sklearn-runtime" [e2e-predictor] predictor = V1beta1PredictorSpec( [e2e-predictor] min_replicas=1, [e2e-predictor] model=V1beta1ModelSpec( [e2e-predictor] model_format=V1beta1ModelFormat( [e2e-predictor] name="sklearn", [e2e-predictor] ), [e2e-predictor] storage_uri="gs://kfserving-examples/models/sklearn/newsgroup/model.joblib", [e2e-predictor] resources=V1ResourceRequirements( [e2e-predictor] requests={"cpu": "2", "memory": "2Gi"}, [e2e-predictor] limits={"cpu": "2", "memory": "4Gi"}, [e2e-predictor] ), [e2e-predictor] args=["--workers=2"], [e2e-predictor] ), [e2e-predictor] ) [e2e-predictor] [e2e-predictor] isvc = V1beta1InferenceService( [e2e-predictor] api_version=constants.KSERVE_V1BETA1, [e2e-predictor] 
kind=constants.KSERVE_KIND_INFERENCESERVICE, [e2e-predictor] metadata=client.V1ObjectMeta( [e2e-predictor] name=service_name, [e2e-predictor] namespace=KSERVE_TEST_NAMESPACE, [e2e-predictor] labels={ [e2e-predictor] constants.KSERVE_LABEL_NETWORKING_VISIBILITY: constants.KSERVE_LABEL_NETWORKING_VISIBILITY_EXPOSED, [e2e-predictor] }, [e2e-predictor] ), [e2e-predictor] spec=V1beta1InferenceServiceSpec(predictor=predictor), [e2e-predictor] ) [e2e-predictor] [e2e-predictor] kserve_client = KServeClient( [e2e-predictor] config_file=os.environ.get("KUBECONFIG", "~/.kube/config") [e2e-predictor] ) [e2e-predictor] [e2e-predictor] > kserve_client.create(isvc) [e2e-predictor] [e2e-predictor] predictor/test_sklearn.py:170: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] inferenceservice = {'api_version': 'serving.kserve.io/v1beta1', [e2e-predictor] 'kind': 'InferenceService', [e2e-predictor] 'metadata': {'annotations': None, [e2e-predictor] ... 
'worker_spec': None, [e2e-predictor] 'xgboost': None}, [e2e-predictor] 'transformer': None}, [e2e-predictor] 'status': None} [e2e-predictor] namespace = 'kserve-ci-e2e-test', watch = False, timeout_seconds = 600 [e2e-predictor] [e2e-predictor] def create( [e2e-predictor] self, inferenceservice, namespace=None, watch=False, timeout_seconds=600 [e2e-predictor] ): # pylint:disable=inconsistent-return-statements [e2e-predictor] """ [e2e-predictor] Create the inference service [e2e-predictor] :param inferenceservice: inference service object [e2e-predictor] :param namespace: defaults to current or default namespace [e2e-predictor] :param watch: True to watch the created service until timeout elapsed or status is ready [e2e-predictor] :param timeout_seconds: timeout seconds for watch, default to 600s [e2e-predictor] :return: created inference service [e2e-predictor] """ [e2e-predictor] [e2e-predictor] version = inferenceservice.api_version.split("/")[1] [e2e-predictor] [e2e-predictor] if namespace is None: [e2e-predictor] namespace = utils.get_isvc_namespace(inferenceservice) [e2e-predictor] [e2e-predictor] try: [e2e-predictor] > outputs = self.api_instance.create_namespaced_custom_object( [e2e-predictor] constants.KSERVE_GROUP, [e2e-predictor] version, [e2e-predictor] namespace, [e2e-predictor] constants.KSERVE_PLURAL_INFERENCESERVICE, [e2e-predictor] inferenceservice, [e2e-predictor] ) [e2e-predictor] [e2e-predictor] ../../python/kserve/kserve/api/kserve_client.py:145: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] group = 'serving.kserve.io', version = 'v1beta1' [e2e-predictor] namespace = 'kserve-ci-e2e-test', plural = 'inferenceservices' [e2e-predictor] body = {'api_version': 'serving.kserve.io/v1beta1', [e2e-predictor] 'kind': 'InferenceService', [e2e-predictor] 'metadata': {'annotations': None, [e2e-predictor] ... 
 'worker_spec': None,
 'xgboost': None},
 'transformer': None},
 'status': None}
kwargs = {'_return_http_data_only': True}

    def create_namespaced_custom_object(self, group, version, namespace, plural, body, **kwargs):  # noqa: E501
        """create_namespaced_custom_object  # noqa: E501

        Creates a namespace scoped Custom object  # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.create_namespaced_custom_object(group, version, namespace, plural, body, async_req=True)
        >>> result = thread.get()

        :param async_req bool: execute request asynchronously
        :param str group: The custom resource's group name (required)
        :param str version: The custom resource's version (required)
        :param str namespace: The custom resource's namespace (required)
        :param str plural: The custom resource's plural name. For TPRs this would be lowercase plural kind. (required)
        :param object body: The JSON schema of the Resource to create. (required)
        :param str pretty: If 'true', then the output is pretty printed.
        :param str dry_run: When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed
        :param str field_manager: fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint.
        :param str field_validation: fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. (optional)
        :param _preload_content: if False, the urllib3.HTTPResponse object will
                                 be returned without reading/decoding response
                                 data. Default is True.
        :param _request_timeout: timeout setting for this request. If one
                                 number provided, it will be total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
        :return: object
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
>       return self.create_namespaced_custom_object_with_http_info(group, version, namespace, plural, body, **kwargs)  # noqa: E501

../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api/custom_objects_api.py:231:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
group = 'serving.kserve.io', version = 'v1beta1'
namespace = 'kserve-ci-e2e-test', plural = 'inferenceservices'
body = {'api_version': 'serving.kserve.io/v1beta1',
 'kind': 'InferenceService',
 'metadata': {'annotations': None,
 ... 'worker_spec': None,
 'xgboost': None},
 'transformer': None},
 'status': None}
kwargs = {'_return_http_data_only': True}
local_var_params = {'_return_http_data_only': True, 'all_params': ['group', 'version', 'namespace', 'plural', 'body', 'pretty', ...], 'au... 'worker_spec': None,
 'xgboost': None},
 'transformer': None},
 'status': None}, ...}
all_params = ['group', 'version', 'namespace', 'plural', 'body', 'pretty', ...]
key = '_return_http_data_only', val = True, collection_formats = {}
path_params = {'group': 'serving.kserve.io', 'namespace': 'kserve-ci-e2e-test', 'plural': 'inferenceservices', 'version': 'v1beta1'}
query_params = []

    def create_namespaced_custom_object_with_http_info(self, group, version, namespace, plural, body, **kwargs):  # noqa: E501
        """create_namespaced_custom_object  # noqa: E501

        Creates a namespace scoped Custom object  # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.create_namespaced_custom_object_with_http_info(group, version, namespace, plural, body, async_req=True)
        >>> result = thread.get()

        :param async_req bool: execute request asynchronously
        :param str group: The custom resource's group name (required)
        :param str version: The custom resource's version (required)
        :param str namespace: The custom resource's namespace (required)
        :param str plural: The custom resource's plural name. For TPRs this would be lowercase plural kind. (required)
        :param object body: The JSON schema of the Resource to create. (required)
        :param str pretty: If 'true', then the output is pretty printed.
        :param str dry_run: When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed
        :param str field_manager: fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint.
        :param str field_validation: fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. (optional)
        :param _return_http_data_only: response data without head status code
                                       and headers
        :param _preload_content: if False, the urllib3.HTTPResponse object will
                                 be returned without reading/decoding response
                                 data. Default is True.
        :param _request_timeout: timeout setting for this request. If one
                                 number provided, it will be total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
        :return: tuple(object, status_code(int), headers(HTTPHeaderDict))
                 If the method is called asynchronously,
                 returns the request thread.
        """

        local_var_params = locals()

        all_params = [
            'group',
            'version',
            'namespace',
            'plural',
            'body',
            'pretty',
            'dry_run',
            'field_manager',
            'field_validation'
        ]
        all_params.extend(
            [
                'async_req',
                '_return_http_data_only',
                '_preload_content',
                '_request_timeout'
            ]
        )

        for key, val in six.iteritems(local_var_params['kwargs']):
            if key not in all_params:
                raise ApiTypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method create_namespaced_custom_object" % key
                )
            local_var_params[key] = val
        del local_var_params['kwargs']
        # verify the required parameter 'group' is set
        if self.api_client.client_side_validation and ('group' not in local_var_params or  # noqa: E501
                                                       local_var_params['group'] is None):  # noqa: E501
            raise ApiValueError("Missing the required parameter `group` when calling `create_namespaced_custom_object`")  # noqa: E501
        # verify the required parameter 'version' is set
        if self.api_client.client_side_validation and ('version' not in local_var_params or  # noqa: E501
                                                       local_var_params['version'] is None):  # noqa: E501
            raise ApiValueError("Missing the required parameter `version` when calling `create_namespaced_custom_object`")  # noqa: E501
        # verify the required parameter 'namespace' is set
        if self.api_client.client_side_validation and ('namespace' not in local_var_params or  # noqa: E501
                                                       local_var_params['namespace'] is None):  # noqa: E501
            raise ApiValueError("Missing the required parameter `namespace` when calling `create_namespaced_custom_object`")  # noqa: E501
        # verify the required parameter 'plural' is set
        if self.api_client.client_side_validation and ('plural' not in local_var_params or  # noqa: E501
                                                       local_var_params['plural'] is None):  # noqa: E501
            raise ApiValueError("Missing the required parameter `plural` when calling `create_namespaced_custom_object`")  # noqa: E501
        # verify the required parameter 'body' is set
        if self.api_client.client_side_validation and ('body' not in local_var_params or  # noqa: E501
                                                       local_var_params['body'] is None):  # noqa: E501
            raise ApiValueError("Missing the required parameter `body` when calling `create_namespaced_custom_object`")  # noqa: E501

        collection_formats = {}

        path_params = {}
        if 'group' in local_var_params:
            path_params['group'] = local_var_params['group']  # noqa: E501
        if 'version' in local_var_params:
            path_params['version'] = local_var_params['version']  # noqa: E501
        if 'namespace' in local_var_params:
            path_params['namespace'] = local_var_params['namespace']  # noqa: E501
        if 'plural' in local_var_params:
            path_params['plural'] = local_var_params['plural']  # noqa: E501

        query_params = []
        if 'pretty' in local_var_params and local_var_params['pretty'] is not None:  # noqa: E501
            query_params.append(('pretty', local_var_params['pretty']))  # noqa: E501
        if 'dry_run' in local_var_params and local_var_params['dry_run'] is not None:  # noqa: E501
            query_params.append(('dryRun', local_var_params['dry_run']))  # noqa: E501
        if 'field_manager' in local_var_params and local_var_params['field_manager'] is not None:  # noqa: E501
            query_params.append(('fieldManager', local_var_params['field_manager']))  # noqa: E501
        if 'field_validation' in local_var_params and local_var_params['field_validation'] is not None:  # noqa: E501
            query_params.append(('fieldValidation', local_var_params['field_validation']))  # noqa: E501

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None
        if 'body' in local_var_params:
            body_params = local_var_params['body']
        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.select_header_accept(
            ['application/json'])  # noqa: E501

        # Authentication setting
        auth_settings = ['BearerToken']  # noqa: E501

>       return self.api_client.call_api(
            '/apis/{group}/{version}/namespaces/{namespace}/{plural}', 'POST',
            path_params,
            query_params,
            header_params,
            body=body_params,
            post_params=form_params,
            files=local_var_files,
            response_type='object',  # noqa: E501
            auth_settings=auth_settings,
            async_req=local_var_params.get('async_req'),
            _return_http_data_only=local_var_params.get('_return_http_data_only'),  # noqa: E501
            _preload_content=local_var_params.get('_preload_content', True),
            _request_timeout=local_var_params.get('_request_timeout'),
            collection_formats=collection_formats)
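The generated client frame above only appends the optional query parameters (`pretty`, `dryRun`, `fieldManager`, `fieldValidation`) that were actually provided, which is why the failing request carries `query_params = []`. A minimal standalone sketch of that filtering (the `build_query` helper is hypothetical, not part of the kubernetes client):

```python
from urllib.parse import urlencode

def build_query(params):
    # Keep only the optional parameters that were actually set, mirroring how
    # the generated client appends pretty/dryRun/fieldManager/fieldValidation.
    pairs = [(k, v) for k, v in params.items() if v is not None]
    return urlencode(pairs)
```

With every optional parameter left at `None`, as in this test, the query string comes out empty.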
../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api/custom_objects_api.py:354:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
resource_path = '/apis/{group}/{version}/namespaces/{namespace}/{plural}'
method = 'POST'
path_params = {'group': 'serving.kserve.io', 'namespace': 'kserve-ci-e2e-test', 'plural': 'inferenceservices', 'version': 'v1beta1'}
query_params = []
header_params = {'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
body = {'api_version': 'serving.kserve.io/v1beta1',
 'kind': 'InferenceService',
 'metadata': {'annotations': None,
 ... 'worker_spec': None,
 'xgboost': None},
 'transformer': None},
 'status': None}
post_params = [], files = {}, response_type = 'object'
auth_settings = ['BearerToken'], async_req = None, _return_http_data_only = True
collection_formats = {}, _preload_content = True, _request_timeout = None
_host = None

    def call_api(self, resource_path, method,
                 path_params=None, query_params=None, header_params=None,
                 body=None, post_params=None, files=None,
                 response_type=None, auth_settings=None, async_req=None,
                 _return_http_data_only=None, collection_formats=None,
                 _preload_content=True, _request_timeout=None, _host=None):
        """Makes the HTTP request (synchronous) and returns deserialized data.

        To make an async_req request, set the async_req parameter.

        :param resource_path: Path to method endpoint.
        :param method: Method to call.
        :param path_params: Path parameters in the url.
        :param query_params: Query parameters in the url.
        :param header_params: Header parameters to be
            placed in the request header.
        :param body: Request body.
        :param post_params dict: Request post form parameters,
            for `application/x-www-form-urlencoded`, `multipart/form-data`.
        :param auth_settings list: Auth Settings names for the request.
        :param response: Response data type.
        :param files dict: key -> filename, value -> filepath,
            for `multipart/form-data`.
        :param async_req bool: execute request asynchronously
        :param _return_http_data_only: response data without head status code
                                       and headers
        :param collection_formats: dict of collection formats for path, query,
            header, and post parameters.
        :param _preload_content: if False, the urllib3.HTTPResponse object will
                                 be returned without reading/decoding response
                                 data. Default is True.
        :param _request_timeout: timeout setting for this request. If one
                                 number provided, it will be total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
        :return:
            If async_req parameter is True,
            the request will be called asynchronously.
            The method will return the request thread.
            If parameter async_req is False or missing,
            then the method will return the response directly.
        """
        if not async_req:
>           return self.__call_api(resource_path, method,
                                   path_params, query_params, header_params,
                                   body, post_params, files,
                                   response_type, auth_settings,
                                   _return_http_data_only, collection_formats,
                                   _preload_content, _request_timeout, _host)

../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api_client.py:348:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
resource_path = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
method = 'POST'
path_params = [('group', 'serving.kserve.io'), ('version', 'v1beta1'), ('namespace', 'kserve-ci-e2e-test'), ('plural', 'inferenceservices')]
query_params = []
header_params = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
body = {'apiVersion': 'serving.kserve.io/v1beta1', 'kind': 'InferenceService', 'metadata': {'labels': {'networking.kserve.io/...
 'name': '', 'resources': {'limits': {'cpu': '2', 'memory': '4Gi'}, 'requests': {'cpu': '2', 'memory': '2Gi'}}, ...}}}}
post_params = [], files = {}, response_type = 'object'
auth_settings = ['BearerToken'], _return_http_data_only = True
collection_formats = {}, _preload_content = True, _request_timeout = None
_host = None

    def __call_api(
            self, resource_path, method, path_params=None,
            query_params=None, header_params=None, body=None, post_params=None,
            files=None, response_type=None, auth_settings=None,
            _return_http_data_only=None, collection_formats=None,
            _preload_content=True, _request_timeout=None, _host=None):

        config = self.configuration

        # header parameters
        header_params = header_params or {}
        header_params.update(self.default_headers)
        if self.cookie:
            header_params['Cookie'] = self.cookie
        if header_params:
            header_params = self.sanitize_for_serialization(header_params)
            header_params = dict(self.parameters_to_tuples(header_params,
                                                           collection_formats))

        # path parameters
        if path_params:
            path_params = self.sanitize_for_serialization(path_params)
            path_params = self.parameters_to_tuples(path_params,
                                                    collection_formats)
            for k, v in path_params:
                # specified safe chars, encode everything
                resource_path = resource_path.replace(
                    '{%s}' % k,
                    quote(str(v), safe=config.safe_chars_for_path_param)
                )

        # query parameters
        if query_params:
            query_params = self.sanitize_for_serialization(query_params)
            query_params = self.parameters_to_tuples(query_params,
                                                     collection_formats)

        # post parameters
        if post_params or files:
            post_params = post_params if post_params else []
            post_params = self.sanitize_for_serialization(post_params)
            post_params = self.parameters_to_tuples(post_params,
                                                    collection_formats)
            post_params.extend(self.files_parameters(files))

        # auth setting
        self.update_params_for_auth(header_params, query_params, auth_settings)

        # body
        if body:
            body = self.sanitize_for_serialization(body)

        # request url
        if _host is None:
            url = self.configuration.host + resource_path
        else:
            # use server/host defined in path or operation instead
            url = _host + resource_path

        # perform request and return response
>       response_data = self.request(
            method, url, query_params=query_params, headers=header_params,
            post_params=post_params, body=body,
            _preload_content=_preload_content,
            _request_timeout=_request_timeout)

../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api_client.py:180:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
method = 'POST'
url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
query_params = []
headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
post_params = []
body = {'apiVersion': 'serving.kserve.io/v1beta1', 'kind': 'InferenceService', 'metadata': {'labels': {'networking.kserve.io/...
 'name': '', 'resources': {'limits': {'cpu': '2', 'memory': '4Gi'}, 'requests': {'cpu': '2', 'memory': '2Gi'}}, ...}}}}
_preload_content = True, _request_timeout = None

    def request(self, method, url, query_params=None, headers=None,
                post_params=None, body=None, _preload_content=True,
                _request_timeout=None):
        """Makes the HTTP request using RESTClient."""
        if method == "GET":
            return self.rest_client.GET(url,
                                        query_params=query_params,
                                        _preload_content=_preload_content,
                                        _request_timeout=_request_timeout,
                                        headers=headers)
        elif method == "HEAD":
            return self.rest_client.HEAD(url,
                                         query_params=query_params,
                                         _preload_content=_preload_content,
                                         _request_timeout=_request_timeout,
                                         headers=headers)
        elif method == "OPTIONS":
            return self.rest_client.OPTIONS(url,
                                            query_params=query_params,
                                            headers=headers,
                                            _preload_content=_preload_content,
                                            _request_timeout=_request_timeout)
        elif method == "POST":
>           return self.rest_client.POST(url,
                                         query_params=query_params,
                                         headers=headers,
                                         post_params=post_params,
                                         _preload_content=_preload_content,
                                         _request_timeout=_request_timeout,

../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api_client.py:391:
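The `__call_api` frame above shows how the templated resource path turns into the final request URL: each `{name}` placeholder is replaced with its percent-encoded value, then the configured host is prepended, yielding the `/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices` path seen in the locals. A standalone sketch of that substitution (`build_url` and the host constant are illustrative, not the real client API; the real code also honors `safe_chars_for_path_param`):

```python
from urllib.parse import quote

# Placeholder host; in the real client this comes from the kubeconfig.
API_HOST = "https://example.cluster:6443"

def build_url(host, template, path_params):
    # Substitute each '{name}' placeholder with its percent-encoded value,
    # then prepend the configured API server host.
    for k, v in path_params.items():
        template = template.replace("{%s}" % k, quote(str(v), safe=""))
    return host + template
```

Applied to the parameters from this traceback, it reproduces the same resource path the client computed.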
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
query_params = [], post_params = []
body = {'apiVersion': 'serving.kserve.io/v1beta1', 'kind': 'InferenceService', 'metadata': {'labels': {'networking.kserve.io/...
 'name': '', 'resources': {'limits': {'cpu': '2', 'memory': '4Gi'}, 'requests': {'cpu': '2', 'memory': '2Gi'}}, ...}}}}
_preload_content = True, _request_timeout = None

    def POST(self, url, headers=None, query_params=None, post_params=None,
             body=None, _preload_content=True, _request_timeout=None):
>       return self.request("POST", url,
                            headers=headers,
                            query_params=query_params,
                            post_params=post_params,
                            _preload_content=_preload_content,
                            _request_timeout=_request_timeout,
                            body=body)

../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/rest.py:279:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
method = 'POST'
url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
query_params = []
headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
body = {'apiVersion': 'serving.kserve.io/v1beta1', 'kind': 'InferenceService', 'metadata': {'labels': {'networking.kserve.io/...
 'name': '', 'resources': {'limits': {'cpu': '2', 'memory': '4Gi'}, 'requests': {'cpu': '2', 'memory': '2Gi'}}, ...}}}}
post_params = {}, _preload_content = True, _request_timeout = None

    def request(self, method, url, query_params=None, headers=None,
                body=None, post_params=None, _preload_content=True,
                _request_timeout=None):
        """Perform requests.

        :param method: http request method
        :param url: http request url
        :param query_params: query parameters in the url
        :param headers: http request headers
        :param body: request json body, for `application/json`
        :param post_params: request post parameters,
                            `application/x-www-form-urlencoded`
                            and `multipart/form-data`
        :param _preload_content: if False, the urllib3.HTTPResponse object will
                                 be returned without reading/decoding response
                                 data. Default is True.
        :param _request_timeout: timeout setting for this request. If one
                                 number provided, it will be total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
        """
        method = method.upper()
        assert method in ['GET', 'HEAD', 'DELETE', 'POST', 'PUT',
                          'PATCH', 'OPTIONS']

        if post_params and body:
            raise ApiValueError(
                "body parameter cannot be used with post_params parameter."
            )

        post_params = post_params or {}
        headers = headers or {}

        timeout = None
        if _request_timeout:
            if isinstance(_request_timeout, (int, ) if six.PY3 else (int, long)):  # noqa: E501,F821
                timeout = urllib3.Timeout(total=_request_timeout)
            elif (isinstance(_request_timeout, tuple) and
                    len(_request_timeout) == 2):
                timeout = urllib3.Timeout(
                    connect=_request_timeout[0], read=_request_timeout[1])

        if 'Content-Type' not in headers:
            headers['Content-Type'] = 'application/json'

        try:
            # For `POST`, `PUT`, `PATCH`, `OPTIONS`, `DELETE`
            if method in ['POST', 'PUT', 'PATCH', 'OPTIONS', 'DELETE']:
                if query_params:
                    url += '?' + urlencode(query_params)
                if (re.search('json', headers['Content-Type'], re.IGNORECASE) or
                        headers['Content-Type'] == 'application/apply-patch+yaml'):
                    if headers['Content-Type'] == 'application/json-patch+json':
                        if not isinstance(body, list):
                            headers['Content-Type'] = \
                                'application/strategic-merge-patch+json'
                    request_body = None
                    if body is not None:
                        request_body = json.dumps(body)
>                   r = self.pool_manager.request(
                        method, url,
                        body=request_body,
                        preload_content=_preload_content,
                        timeout=timeout,

../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/rest.py:172:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
method = 'POST'
url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io...ts": {"cpu": "2", "memory": "2Gi"}}, "storageUri": "gs://kfserving-examples/models/sklearn/newsgroup/model.joblib"}}}}'
fields = None
headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
json = None
urlopen_kw = {'body': '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking....geUri": "gs://kfserving-examples/models/sklearn/newsgroup/model.joblib"}}}}', 'preload_content': True, 'timeout': None}

    def request(
        self,
        method: str,
        url: str,
        body: _TYPE_BODY | None = None,
        fields: _TYPE_FIELDS | None = None,
        headers: typing.Mapping[str, str] | None = None,
        json: typing.Any | None = None,
        **urlopen_kw: typing.Any,
    ) -> BaseHTTPResponse:
        """
        Make a request using :meth:`urlopen` with the appropriate encoding of
        ``fields`` based on the ``method`` used.

        This is a convenience method that requires the least amount of manual
        effort. It can be used in most situations, while still having the
        option to drop down to more specific methods when necessary, such as
        :meth:`request_encode_url`, :meth:`request_encode_body`,
        or even the lowest level :meth:`urlopen`.

        :param method:
            HTTP request method (such as GET, POST, PUT, etc.)
        :param url:
            The URL to perform the request on.

        :param body:
            Data to send in the request body, either :class:`str`, :class:`bytes`,
            an iterable of :class:`str`/:class:`bytes`, or a file-like object.

        :param fields:
            Data to encode and send in the URL or request body, depending on ``method``.

        :param headers:
            Dictionary of custom headers to send, such as User-Agent,
            If-None-Match, etc. If None, pool headers are used. If provided,
            these headers completely replace any pool-specific headers.

        :param json:
            Data to encode and send as JSON with UTF-encoded in the request body.
            The ``"Content-Type"`` header will be set to ``"application/json"``
            unless specified otherwise.
        """
        method = method.upper()

        if json is not None and body is not None:
            raise TypeError(
                "request got values for both 'body' and 'json' parameters which are mutually exclusive"
            )

        if json is not None:
            if headers is None:
                headers = self.headers

            if not ("content-type" in map(str.lower, headers.keys())):
                headers = HTTPHeaderDict(headers)
                headers["Content-Type"] = "application/json"

            body = _json.dumps(json, separators=(",", ":"), ensure_ascii=False).encode(
                "utf-8"
            )

        if body is not None:
            urlopen_kw["body"] = body

        if method in self._encode_url_methods:
            return self.request_encode_url(
                method,
                url,
                fields=fields,  # type: ignore[arg-type]
                headers=headers,
                **urlopen_kw,
            )
        else:
>           return self.request_encode_body(
                method, url, fields=fields, headers=headers, **urlopen_kw
            )

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/_request_methods.py:143:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
method = 'POST'
url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
fields = None
headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
encode_multipart = True, multipart_boundary = None
urlopen_kw = {'body': '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking....geUri": "gs://kfserving-examples/models/sklearn/newsgroup/model.joblib"}}}}', 'preload_content': True, 'timeout': None}
extra_kw = {'body': '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking....nt': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'}), 'preload_content': True, 'timeout': None}

    def request_encode_body(
        self,
        method: str,
        url: str,
        fields: _TYPE_FIELDS | None = None,
        headers: typing.Mapping[str, str] | None = None,
        encode_multipart: bool = True,
        multipart_boundary: str | None = None,
        **urlopen_kw: str,
    ) -> BaseHTTPResponse:
        """
        Make a request using :meth:`urlopen` with the ``fields`` encoded in
        the body. This is useful for request methods like POST, PUT, PATCH, etc.

        When ``encode_multipart=True`` (default), then
        :func:`urllib3.encode_multipart_formdata` is used to encode
        the payload with the appropriate content type. Otherwise
        :func:`urllib.parse.urlencode` is used with the
        'application/x-www-form-urlencoded' content type.

        Multipart encoding must be used when posting files, and it's reasonably
        safe to use it in other times too. However, it may break request
        signing, such as with OAuth.

        Supports an optional ``fields`` parameter of key/value strings AND
        key/filetuple. A filetuple is a (filename, data, MIME type) tuple where
        the MIME type is optional. For example::

            fields = {
                'foo': 'bar',
                'fakefile': ('foofile.txt', 'contents of foofile'),
                'realfile': ('barfile.txt', open('realfile').read()),
                'typedfile': ('bazfile.bin', open('bazfile').read(),
                              'image/jpeg'),
                'nonamefile': 'contents of nonamefile field',
            }

        When uploading a file, providing a filename (the first parameter of the
        tuple) is optional but recommended to best mimic behavior of browsers.

        Note that if ``headers`` are supplied, the 'Content-Type' header will
        be overwritten because it depends on the dynamic random boundary string
        which is used to compose the body of the request. The random boundary
        string can be explicitly set with the ``multipart_boundary`` parameter.
        :param method:
            HTTP request method (such as GET, POST, PUT, etc.)

        :param url:
            The URL to perform the request on.

        :param fields:
            Data to encode and send in the request body.

        :param headers:
            Dictionary of custom headers to send, such as User-Agent,
            If-None-Match, etc. If None, pool headers are used. If provided,
            these headers completely replace any pool-specific headers.

        :param encode_multipart:
            If True, encode the ``fields`` using the multipart/form-data MIME
            format.

        :param multipart_boundary:
            If not specified, then a random boundary will be generated using
            :func:`urllib3.filepost.choose_boundary`.
        """
        if headers is None:
            headers = self.headers

        extra_kw: dict[str, typing.Any] = {"headers": HTTPHeaderDict(headers)}
        body: bytes | str

        if fields:
            if "body" in urlopen_kw:
                raise TypeError(
                    "request got values for both 'fields' and 'body', can only specify one."
                )

            if encode_multipart:
                body, content_type = encode_multipart_formdata(
                    fields, boundary=multipart_boundary
                )
            else:
                body, content_type = (
                    urlencode(fields),  # type: ignore[arg-type]
                    "application/x-www-form-urlencoded",
                )

            extra_kw["body"] = body
            extra_kw["headers"].setdefault("Content-Type", content_type)

        extra_kw.update(urlopen_kw)

>       return self.urlopen(method, url, **extra_kw)

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/_request_methods.py:278:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
method = 'POST'
url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
redirect = True
kw = {'assert_same_host': False, 'body': '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata...', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'}), 'preload_content': True, ...}
u = Url(scheme='https', auth=None, host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', p...443, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices', query=None, fragment=None)
conn =

    def urlopen(  # type: ignore[override]
        self, method: str, url: str, redirect: bool = True, **kw: typing.Any
    ) -> BaseHTTPResponse:
        """
        Same as :meth:`urllib3.HTTPConnectionPool.urlopen`
        with custom cross-host redirect logic and only sends the request-uri
        portion of the ``url``.

        The given ``url`` parameter must be absolute, such that an appropriate
        :class:`urllib3.connectionpool.ConnectionPool` can be chosen for it.
        """
        u = parse_url(url)

        if u.scheme is None:
            warnings.warn(
                "URLs without a scheme (ie 'https://') are deprecated and will raise an error "
                "in a future version of urllib3. To avoid this DeprecationWarning ensure all URLs "
                "start with 'https://' or 'http://'. Read more in this issue: "
                "https://github.com/urllib3/urllib3/issues/2920",
                category=DeprecationWarning,
                stacklevel=2,
            )

        conn = self.connection_from_host(u.host, port=u.port, scheme=u.scheme)

        kw["assert_same_host"] = False
        kw["redirect"] = False

        if "headers" not in kw:
            kw["headers"] = self.headers

        if self._proxy_requires_url_absolute_form(u):
            response = conn.urlopen(method, url, **kw)
        else:
>           response = conn.urlopen(method, u.request_uri, **kw)

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/poolmanager.py:457:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
method = 'POST'
url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io...ts": {"cpu": "2", "memory": "2Gi"}}, "storageUri": "gs://kfserving-examples/models/sklearn/newsgroup/model.joblib"}}}}'
headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'})
retries = Retry(total=2, connect=None, read=None, redirect=None, status=None)
redirect = False, assert_same_host = False, timeout = None, pool_timeout = None
release_conn = True, chunked = False, body_pos = None, preload_content = True
decode_content = True, response_kw = {}
parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices', query=None, fragment=None)
destination_scheme = None, conn = None, release_this_conn = True
http_tunnel_required = False
err = NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.c...a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")
clean_exit = False

    def urlopen(  # type: ignore[override]
        self,
        method: str,
        url: str,
        body: _TYPE_BODY | None = None,
        headers: typing.Mapping[str, str] | None = None,
        retries: Retry | bool | int | None = None,
        redirect: bool = True,
        assert_same_host: bool = True,
        timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
        pool_timeout: int | None = None,
        release_conn: bool | None = None,
        chunked: bool = False,
        body_pos: _TYPE_BODY_POSITION | None = None,
        preload_content: bool = True,
        decode_content: bool = True,
        **response_kw: typing.Any,
    ) -> BaseHTTPResponse:
        """
        Get a connection from the pool and perform an HTTP request. This is the
        lowest level call for making a request, so you'll need to specify all
        the raw details.

        .. note::

            More commonly, it's appropriate to use a convenience method
            such as :meth:`request`.

        .. note::

            `release_conn` will only behave as expected if
            `preload_content=False` because we want to make
            `preload_content=False` the default behaviour someday soon without
            breaking backwards compatibility.

        :param method:
            HTTP request method (such as GET, POST, PUT, etc.)

        :param url:
            The URL to perform the request on.

        :param body:
            Data to send in the request body, either :class:`str`, :class:`bytes`,
            an iterable of :class:`str`/:class:`bytes`, or a file-like object.

        :param headers:
            Dictionary of custom headers to send, such as User-Agent,
            If-None-Match, etc. If None, pool headers are used. If provided,
            these headers completely replace any pool-specific headers.

        :param retries:
            Configure the number of retries to allow before raising a
            :class:`~urllib3.exceptions.MaxRetryError` exception.

            If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a
            :class:`~urllib3.util.retry.Retry` object for fine-grained control
            over different types of retries.
            Pass an integer number to retry connection errors that many times,
            but no other types of errors. Pass zero to never retry.

            If ``False``, then retries are disabled and any exception is raised
            immediately. Also, instead of raising a MaxRetryError on redirects,
            the redirect response will be returned.

        :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.

        :param redirect:
            If True, automatically handle redirects (status codes 301, 302,
            303, 307, 308). Each redirect counts as a retry. Disabling retries
            will disable redirect, too.

        :param assert_same_host:
            If ``True``, will make sure that the host of the pool requests is
            consistent else will raise HostChangedError. When ``False``, you can
            use the pool on an HTTP proxy and request foreign hosts.

        :param timeout:
            If specified, overrides the default timeout for this one
            request. It may be a float (in seconds) or an instance of
            :class:`urllib3.util.Timeout`.

        :param pool_timeout:
            If set and the pool is set to block=True, then this method will
            block for ``pool_timeout`` seconds and raise EmptyPoolError if no
            connection is available within the time period.

        :param bool preload_content:
            If True, the response's body will be preloaded into memory.

        :param bool decode_content:
            If True, will attempt to decode the body based on the
            'content-encoding' header.
        :param release_conn:
            If False, then the urlopen call will not release the connection
            back into the pool once a response is received (but will release if
            you read the entire contents of the response such as when
            `preload_content=True`). This is useful if you're not preloading
            the response's content immediately. You will need to call
            ``r.release_conn()`` on the response ``r`` to return the connection
            back into the pool. If None, it takes the value of ``preload_content``
            which defaults to ``True``.

        :param bool chunked:
            If True, urllib3 will send the body using chunked transfer
            encoding. Otherwise, urllib3 will send the body using the standard
            content-length form. Defaults to False.

        :param int body_pos:
            Position to seek to in file-like body in the event of a retry or
            redirect. Typically this won't need to be set because urllib3 will
            auto-populate the value when needed.
        """
        parsed_url = parse_url(url)
        destination_scheme = parsed_url.scheme

        if headers is None:
            headers = self.headers

        if not isinstance(retries, Retry):
            retries = Retry.from_int(retries, redirect=redirect, default=self.retries)

        if release_conn is None:
            release_conn = preload_content

        # Check host
        if assert_same_host and not self.is_same_host(url):
            raise HostChangedError(self, url, retries)

        # Ensure that the URL we're connecting to is properly encoded
        if url.startswith("/"):
            url = to_str(_encode_target(url))
        else:
            url = to_str(parsed_url.url)

        conn = None

        # Track whether `conn` needs to be released before
        # returning/raising/recursing. Update this variable if necessary, and
        # leave `release_conn` constant throughout the function. That way, if
        # the function recurses, the original value of `release_conn` will be
        # passed down into the recursive call, and its value will be respected.
        #
        # See issue #651 [1] for details.
        #
        # [1]
        release_this_conn = release_conn

        http_tunnel_required = connection_requires_http_tunnel(
            self.proxy, self.proxy_config, destination_scheme
        )

        # Merge the proxy headers. Only done when not using HTTP CONNECT. We
        # have to copy the headers dict so we can safely change it without those
        # changes being reflected in anyone else's copy.
        if not http_tunnel_required:
            headers = headers.copy()  # type: ignore[attr-defined]
            headers.update(self.proxy_headers)  # type: ignore[union-attr]

        # Must keep the exception bound to a separate variable or else Python 3
        # complains about UnboundLocalError.
        err = None

        # Keep track of whether we cleanly exited the except block. This
        # ensures we do proper cleanup in finally.
        clean_exit = False

        # Rewind body position, if needed. Record current position
        # for future rewinds in the event of a redirect/retry.
        body_pos = set_file_position(body, body_pos)

        try:
            # Request a connection from the queue.
            timeout_obj = self._get_timeout(timeout)
            conn = self._get_conn(timeout=pool_timeout)

            conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]

            # Is this a closed/new connection that requires CONNECT tunnelling?
            if self.proxy is not None and http_tunnel_required and conn.is_closed:
                try:
                    self._prepare_proxy(conn)
                except (BaseSSLError, OSError, SocketTimeout) as e:
                    self._raise_timeout(
                        err=e, url=self.proxy.url, timeout_value=conn.timeout
                    )
                    raise

            # If we're going to release the connection in ``finally:``, then
            # the response doesn't need to know about the connection. Otherwise
            # it will also try to release it and we'll have a double-release
            # mess.
            response_conn = conn if not release_conn else None

            # Make the request on the HTTPConnection object
            response = self._make_request(
                conn,
                method,
                url,
                timeout=timeout_obj,
                body=body,
                headers=headers,
                chunked=chunked,
                retries=retries,
                response_conn=response_conn,
                preload_content=preload_content,
                decode_content=decode_content,
                **response_kw,
            )

            # Everything went great!
            clean_exit = True

        except EmptyPoolError:
            # Didn't get a connection from the pool, no need to clean up
            clean_exit = True
            release_this_conn = False
            raise

        except (
            TimeoutError,
            HTTPException,
            OSError,
            ProtocolError,
            BaseSSLError,
            SSLError,
            CertificateError,
            ProxyError,
        ) as e:
            # Discard the connection for these exceptions. It will be
            # replaced during the next _get_conn() call.
            clean_exit = False
            new_e: Exception = e
            if isinstance(e, (BaseSSLError, CertificateError)):
                new_e = SSLError(e)
            if isinstance(
                new_e,
                (
                    OSError,
                    NewConnectionError,
                    TimeoutError,
                    SSLError,
                    HTTPException,
                ),
            ) and (conn and conn.proxy and not conn.has_connected_to_proxy):
                new_e = _wrap_proxy_error(new_e, conn.proxy.scheme)
            elif isinstance(new_e, (OSError, HTTPException)):
                new_e = ProtocolError("Connection aborted.", new_e)

            retries = retries.increment(
                method, url, error=new_e, _pool=self, _stacktrace=sys.exc_info()[2]
            )
            retries.sleep()

            # Keep track of the error for the retry warning.
            err = e

        finally:
            if not clean_exit:
                # We hit some kind of exception, handled or otherwise. We need
                # to throw the connection away unless explicitly told not to.
                # Close the connection, set the variable to None, and make sure
                # we put the None back in the pool to avoid leaking it.
                if conn:
                    conn.close()
                    conn = None
                release_this_conn = True

            if release_this_conn:
                # Put the connection back to be reused. If the connection is
                # expired then it will be None, which will get replaced with a
                # fresh connection during _get_conn.
                self._put_conn(conn)

        if not conn:
            # Try again
            log.warning(
                "Retrying (%r) after connection broken by '%r': %s", retries, err, url
            )
>           return self.urlopen(
                method,
                url,
                body,
                headers,
                retries,
                redirect,
                assert_same_host,
                timeout=timeout,
                pool_timeout=pool_timeout,
                release_conn=release_conn,
                chunked=chunked,
                body_pos=body_pos,
                preload_content=preload_content,
                decode_content=decode_content,
                **response_kw,
            )

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:871:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
method = 'POST'
url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io...ts": {"cpu": "2", "memory": "2Gi"}}, "storageUri": "gs://kfserving-examples/models/sklearn/newsgroup/model.joblib"}}}}'
headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'})
retries = Retry(total=1, connect=None, read=None, redirect=None, status=None)
redirect = False, assert_same_host = False, timeout = None, pool_timeout = None
release_conn = True, chunked = False, body_pos = None, preload_content = True
decode_content = True, response_kw = {}
parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices', query=None, fragment=None)
destination_scheme = None, conn = None, release_this_conn = True
http_tunnel_required = False
err = NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.c...a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")
clean_exit = False

    def urlopen(  # type: ignore[override]
        self,
        method: str,
        url: str,
        body: _TYPE_BODY | None = None,
        headers: typing.Mapping[str, str] | None = None,
        retries: Retry | bool | int | None = None,
        redirect: bool = True,
        assert_same_host: bool = True,
        timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
        pool_timeout: int | None = None,
        release_conn: bool | None = None,
        chunked: bool = False,
        body_pos: _TYPE_BODY_POSITION | None = None,
        preload_content: bool = True,
        decode_content: bool = True,
        **response_kw: typing.Any,
    ) -> BaseHTTPResponse:
        """
        Get a connection from the pool and perform an HTTP request. This is the
        lowest level call for making a request, so you'll need to specify all
        the raw details.

        .. note::

            More commonly, it's appropriate to use a convenience method
            such as :meth:`request`.

        .. note::

            `release_conn` will only behave as expected if
            `preload_content=False` because we want to make
            `preload_content=False` the default behaviour someday soon without
            breaking backwards compatibility.

        :param method:
            HTTP request method (such as GET, POST, PUT, etc.)

        :param url:
            The URL to perform the request on.

        :param body:
            Data to send in the request body, either :class:`str`, :class:`bytes`,
            an iterable of :class:`str`/:class:`bytes`, or a file-like object.

        :param headers:
            Dictionary of custom headers to send, such as User-Agent,
            If-None-Match, etc. If None, pool headers are used. If provided,
            these headers completely replace any pool-specific headers.

        :param retries:
            Configure the number of retries to allow before raising a
            :class:`~urllib3.exceptions.MaxRetryError` exception.

            If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a
            :class:`~urllib3.util.retry.Retry` object for fine-grained control
            over different types of retries.
            Pass an integer number to retry connection errors that many times,
            but no other types of errors. Pass zero to never retry.

            If ``False``, then retries are disabled and any exception is raised
            immediately. Also, instead of raising a MaxRetryError on redirects,
            the redirect response will be returned.

        :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
        :param redirect:
            If True, automatically handle redirects (status codes 301, 302,
            303, 307, 308). Each redirect counts as a retry. Disabling retries
            will disable redirect, too.

        :param assert_same_host:
            If ``True``, will make sure that the host of the pool requests is
            consistent else will raise HostChangedError. When ``False``, you can
            use the pool on an HTTP proxy and request foreign hosts.

        :param timeout:
            If specified, overrides the default timeout for this one
            request. It may be a float (in seconds) or an instance of
            :class:`urllib3.util.Timeout`.

        :param pool_timeout:
            If set and the pool is set to block=True, then this method will
            block for ``pool_timeout`` seconds and raise EmptyPoolError if no
            connection is available within the time period.

        :param bool preload_content:
            If True, the response's body will be preloaded into memory.

        :param bool decode_content:
            If True, will attempt to decode the body based on the
            'content-encoding' header.

        :param release_conn:
            If False, then the urlopen call will not release the connection
            back into the pool once a response is received (but will release if
            you read the entire contents of the response such as when
            `preload_content=True`). This is useful if you're not preloading
            the response's content immediately. You will need to call
            ``r.release_conn()`` on the response ``r`` to return the connection
            back into the pool. If None, it takes the value of ``preload_content``
            which defaults to ``True``.

        :param bool chunked:
            If True, urllib3 will send the body using chunked transfer
            encoding. Otherwise, urllib3 will send the body using the standard
            content-length form. Defaults to False.

        :param int body_pos:
            Position to seek to in file-like body in the event of a retry or
            redirect. Typically this won't need to be set because urllib3 will
            auto-populate the value when needed.
        """
        parsed_url = parse_url(url)
        destination_scheme = parsed_url.scheme

        if headers is None:
            headers = self.headers

        if not isinstance(retries, Retry):
            retries = Retry.from_int(retries, redirect=redirect, default=self.retries)

        if release_conn is None:
            release_conn = preload_content

        # Check host
        if assert_same_host and not self.is_same_host(url):
            raise HostChangedError(self, url, retries)

        # Ensure that the URL we're connecting to is properly encoded
        if url.startswith("/"):
            url = to_str(_encode_target(url))
        else:
            url = to_str(parsed_url.url)

        conn = None

        # Track whether `conn` needs to be released before
        # returning/raising/recursing. Update this variable if necessary, and
        # leave `release_conn` constant throughout the function. That way, if
        # the function recurses, the original value of `release_conn` will be
        # passed down into the recursive call, and its value will be respected.
[e2e-predictor] # [e2e-predictor] # See issue #651 [1] for details. [e2e-predictor] # [e2e-predictor] # [1] [e2e-predictor] release_this_conn = release_conn [e2e-predictor] [e2e-predictor] http_tunnel_required = connection_requires_http_tunnel( [e2e-predictor] self.proxy, self.proxy_config, destination_scheme [e2e-predictor] ) [e2e-predictor] [e2e-predictor] # Merge the proxy headers. Only done when not using HTTP CONNECT. We [e2e-predictor] # have to copy the headers dict so we can safely change it without those [e2e-predictor] # changes being reflected in anyone else's copy. [e2e-predictor] if not http_tunnel_required: [e2e-predictor] headers = headers.copy() # type: ignore[attr-defined] [e2e-predictor] headers.update(self.proxy_headers) # type: ignore[union-attr] [e2e-predictor] [e2e-predictor] # Must keep the exception bound to a separate variable or else Python 3 [e2e-predictor] # complains about UnboundLocalError. [e2e-predictor] err = None [e2e-predictor] [e2e-predictor] # Keep track of whether we cleanly exited the except block. This [e2e-predictor] # ensures we do proper cleanup in finally. [e2e-predictor] clean_exit = False [e2e-predictor] [e2e-predictor] # Rewind body position, if needed. Record current position [e2e-predictor] # for future rewinds in the event of a redirect/retry. [e2e-predictor] body_pos = set_file_position(body, body_pos) [e2e-predictor] [e2e-predictor] try: [e2e-predictor] # Request a connection from the queue. [e2e-predictor] timeout_obj = self._get_timeout(timeout) [e2e-predictor] conn = self._get_conn(timeout=pool_timeout) [e2e-predictor] [e2e-predictor] conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] [e2e-predictor] [e2e-predictor] # Is this a closed/new connection that requires CONNECT tunnelling? 
[e2e-predictor] if self.proxy is not None and http_tunnel_required and conn.is_closed: [e2e-predictor] try: [e2e-predictor] self._prepare_proxy(conn) [e2e-predictor] except (BaseSSLError, OSError, SocketTimeout) as e: [e2e-predictor] self._raise_timeout( [e2e-predictor] err=e, url=self.proxy.url, timeout_value=conn.timeout [e2e-predictor] ) [e2e-predictor] raise [e2e-predictor] [e2e-predictor] # If we're going to release the connection in ``finally:``, then [e2e-predictor] # the response doesn't need to know about the connection. Otherwise [e2e-predictor] # it will also try to release it and we'll have a double-release [e2e-predictor] # mess. [e2e-predictor] response_conn = conn if not release_conn else None [e2e-predictor] [e2e-predictor] # Make the request on the HTTPConnection object [e2e-predictor] response = self._make_request( [e2e-predictor] conn, [e2e-predictor] method, [e2e-predictor] url, [e2e-predictor] timeout=timeout_obj, [e2e-predictor] body=body, [e2e-predictor] headers=headers, [e2e-predictor] chunked=chunked, [e2e-predictor] retries=retries, [e2e-predictor] response_conn=response_conn, [e2e-predictor] preload_content=preload_content, [e2e-predictor] decode_content=decode_content, [e2e-predictor] **response_kw, [e2e-predictor] ) [e2e-predictor] [e2e-predictor] # Everything went great! [e2e-predictor] clean_exit = True [e2e-predictor] [e2e-predictor] except EmptyPoolError: [e2e-predictor] # Didn't get a connection from the pool, no need to clean up [e2e-predictor] clean_exit = True [e2e-predictor] release_this_conn = False [e2e-predictor] raise [e2e-predictor] [e2e-predictor] except ( [e2e-predictor] TimeoutError, [e2e-predictor] HTTPException, [e2e-predictor] OSError, [e2e-predictor] ProtocolError, [e2e-predictor] BaseSSLError, [e2e-predictor] SSLError, [e2e-predictor] CertificateError, [e2e-predictor] ProxyError, [e2e-predictor] ) as e: [e2e-predictor] # Discard the connection for these exceptions. 
It will be [e2e-predictor] # replaced during the next _get_conn() call. [e2e-predictor] clean_exit = False [e2e-predictor] new_e: Exception = e [e2e-predictor] if isinstance(e, (BaseSSLError, CertificateError)): [e2e-predictor] new_e = SSLError(e) [e2e-predictor] if isinstance( [e2e-predictor] new_e, [e2e-predictor] ( [e2e-predictor] OSError, [e2e-predictor] NewConnectionError, [e2e-predictor] TimeoutError, [e2e-predictor] SSLError, [e2e-predictor] HTTPException, [e2e-predictor] ), [e2e-predictor] ) and (conn and conn.proxy and not conn.has_connected_to_proxy): [e2e-predictor] new_e = _wrap_proxy_error(new_e, conn.proxy.scheme) [e2e-predictor] elif isinstance(new_e, (OSError, HTTPException)): [e2e-predictor] new_e = ProtocolError("Connection aborted.", new_e) [e2e-predictor] [e2e-predictor] retries = retries.increment( [e2e-predictor] method, url, error=new_e, _pool=self, _stacktrace=sys.exc_info()[2] [e2e-predictor] ) [e2e-predictor] retries.sleep() [e2e-predictor] [e2e-predictor] # Keep track of the error for the retry warning. [e2e-predictor] err = e [e2e-predictor] [e2e-predictor] finally: [e2e-predictor] if not clean_exit: [e2e-predictor] # We hit some kind of exception, handled or otherwise. We need [e2e-predictor] # to throw the connection away unless explicitly told not to. [e2e-predictor] # Close the connection, set the variable to None, and make sure [e2e-predictor] # we put the None back in the pool to avoid leaking it. [e2e-predictor] if conn: [e2e-predictor] conn.close() [e2e-predictor] conn = None [e2e-predictor] release_this_conn = True [e2e-predictor] [e2e-predictor] if release_this_conn: [e2e-predictor] # Put the connection back to be reused. If the connection is [e2e-predictor] # expired then it will be None, which will get replaced with a [e2e-predictor] # fresh connection during _get_conn. 
[e2e-predictor] self._put_conn(conn) [e2e-predictor] [e2e-predictor] if not conn: [e2e-predictor] # Try again [e2e-predictor] log.warning( [e2e-predictor] "Retrying (%r) after connection broken by '%r': %s", retries, err, url [e2e-predictor] ) [e2e-predictor] > return self.urlopen( [e2e-predictor] method, [e2e-predictor] url, [e2e-predictor] body, [e2e-predictor] headers, [e2e-predictor] retries, [e2e-predictor] redirect, [e2e-predictor] assert_same_host, [e2e-predictor] timeout=timeout, [e2e-predictor] pool_timeout=pool_timeout, [e2e-predictor] release_conn=release_conn, [e2e-predictor] chunked=chunked, [e2e-predictor] body_pos=body_pos, [e2e-predictor] preload_content=preload_content, [e2e-predictor] decode_content=decode_content, [e2e-predictor] **response_kw, [e2e-predictor] ) [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:871: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] method = 'POST' [e2e-predictor] url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices' [e2e-predictor] body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io...ts": {"cpu": "2", "memory": "2Gi"}}, "storageUri": "gs://kfserving-examples/models/sklearn/newsgroup/model.joblib"}}}}' [e2e-predictor] headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'}) [e2e-predictor] retries = Retry(total=0, connect=None, read=None, redirect=None, status=None) [e2e-predictor] redirect = False, assert_same_host = False, timeout = None, pool_timeout = None [e2e-predictor] release_conn = True, chunked = False, body_pos = None, preload_content = True [e2e-predictor] decode_content = True, response_kw = {} [e2e-predictor] parsed_url = Url(scheme=None, 
auth=None, host=None, port=None, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices', query=None, fragment=None) [e2e-predictor] destination_scheme = None, conn = None, release_this_conn = True [e2e-predictor] http_tunnel_required = False [e2e-predictor] err = NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.c...a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)") [e2e-predictor] clean_exit = False [e2e-predictor] [e2e-predictor] def urlopen( # type: ignore[override] [e2e-predictor] self, [e2e-predictor] method: str, [e2e-predictor] url: str, [e2e-predictor] body: _TYPE_BODY | None = None, [e2e-predictor] headers: typing.Mapping[str, str] | None = None, [e2e-predictor] retries: Retry | bool | int | None = None, [e2e-predictor] redirect: bool = True, [e2e-predictor] assert_same_host: bool = True, [e2e-predictor] timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, [e2e-predictor] pool_timeout: int | None = None, [e2e-predictor] release_conn: bool | None = None, [e2e-predictor] chunked: bool = False, [e2e-predictor] body_pos: _TYPE_BODY_POSITION | None = None, [e2e-predictor] preload_content: bool = True, [e2e-predictor] decode_content: bool = True, [e2e-predictor] **response_kw: typing.Any, [e2e-predictor] ) -> BaseHTTPResponse: [e2e-predictor] """ [e2e-predictor] Get a connection from the pool and perform an HTTP request. This is the [e2e-predictor] lowest level call for making a request, so you'll need to specify all [e2e-predictor] the raw details. [e2e-predictor] [e2e-predictor] .. note:: [e2e-predictor] [e2e-predictor] More commonly, it's appropriate to use a convenience method [e2e-predictor] such as :meth:`request`. [e2e-predictor] [e2e-predictor] .. 
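The locals above are the actual diagnosis: `retries = Retry(total=0)` and `err = NameResolutionError(...)`, i.e. the kserve client could not resolve the API server's ELB hostname and had no retry budget left, so urllib3 escalates to `MaxRetryError`. The following is a minimal sketch of that failure mode outside the test suite; the `probe` helper and the `nonexistent.invalid` hostname are hypothetical stand-ins (`.invalid` is reserved by RFC 2606 and never resolves), and the exact wrapped exception class differs between urllib3 1.x (`NewConnectionError`) and 2.x (`NameResolutionError`):

```python
import urllib3
from urllib3.util.retry import Retry
from urllib3.exceptions import MaxRetryError


def probe(url: str) -> str:
    """Return 'ok' on success, or the class name of the final error
    once the retry budget is exhausted."""
    http = urllib3.PoolManager()
    try:
        # retries=Retry(total=0) mirrors the locals in the traceback:
        # no retry budget, so the first broken connection exhausts the
        # Retry object and surfaces as MaxRetryError.
        http.request(
            "GET",
            url,
            retries=Retry(total=0),
            timeout=urllib3.Timeout(connect=3.0, read=3.0),
        )
        return "ok"
    except MaxRetryError as exc:
        # exc.reason carries the underlying cause; for an unresolvable
        # hostname this is the name-resolution/new-connection error.
        return type(exc.reason).__name__


# Hypothetical unresolvable host, triggering the same DNS failure path
# as the ELB hostname in the log.
print(probe("https://nonexistent.invalid/"))
```

Checking the same hostname with `socket.getaddrinfo` from the test pod would distinguish a deleted ELB record from a cluster DNS problem.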
[second traceback frame: the recursive self.urlopen() retry call re-enters the same function; docstring and source identical to the frame above]

>   return self.urlopen(
        method,
        url,
        body,
        headers,
        retries,
        redirect,
        assert_same_host,
        timeout=timeout,
        pool_timeout=pool_timeout,
        release_conn=release_conn,
        chunked=chunked,
        body_pos=body_pos,
        preload_content=preload_content,
        decode_content=decode_content,
        **response_kw,
    )

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:871:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
method = 'POST'
url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io...ts": {"cpu": "2", "memory": "2Gi"}}, "storageUri": "gs://kfserving-examples/models/sklearn/newsgroup/model.joblib"}}}}'
headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'})
retries = Retry(total=0, connect=None, read=None, redirect=None, status=None)
redirect = False, assert_same_host = False, timeout = None, pool_timeout = None
release_conn = True, chunked = False, body_pos = None, preload_content = True
decode_content = True, response_kw = {}
parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices', query=None, fragment=None)
destination_scheme = None, conn = None, release_this_conn = True
http_tunnel_required = False, err = None, clean_exit = False

def urlopen(  # type: ignore[override]
    self,
    method: str,
    url: str,
    body: _TYPE_BODY | None = None,
    headers: typing.Mapping[str, str] | None = None,
    retries: Retry | bool | int | None = None,
    redirect: bool = True,
    assert_same_host: bool = True,
    timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
    pool_timeout: int | None = None,
    release_conn: bool | None = None,
    chunked: bool = False,
    body_pos: _TYPE_BODY_POSITION | None = None,
    preload_content: bool = True,
    decode_content: bool = True,
    **response_kw: typing.Any,
) -> BaseHTTPResponse:
    """
    Get a connection from the pool and perform an HTTP request. This is the
    lowest level call for making a request, so you'll need to specify all
    the raw details.

    .. note::

        More commonly, it's appropriate to use a convenience method
        such as :meth:`request`.

    .. note::

        `release_conn` will only behave as expected if
        `preload_content=False` because we want to make
        `preload_content=False` the default behaviour someday soon without
        breaking backwards compatibility.

    :param method:
        HTTP request method (such as GET, POST, PUT, etc.)
    :param url:
        The URL to perform the request on.

    [... remaining parameter documentation and function body identical to the first frame above ...]

    except (
        TimeoutError,
        HTTPException,
        OSError,
        ProtocolError,
        BaseSSLError,
        SSLError,
        CertificateError,
        ProxyError,
    ) as e:
        # Discard the connection for these exceptions.
It will be [e2e-predictor] # replaced during the next _get_conn() call. [e2e-predictor] clean_exit = False [e2e-predictor] new_e: Exception = e [e2e-predictor] if isinstance(e, (BaseSSLError, CertificateError)): [e2e-predictor] new_e = SSLError(e) [e2e-predictor] if isinstance( [e2e-predictor] new_e, [e2e-predictor] ( [e2e-predictor] OSError, [e2e-predictor] NewConnectionError, [e2e-predictor] TimeoutError, [e2e-predictor] SSLError, [e2e-predictor] HTTPException, [e2e-predictor] ), [e2e-predictor] ) and (conn and conn.proxy and not conn.has_connected_to_proxy): [e2e-predictor] new_e = _wrap_proxy_error(new_e, conn.proxy.scheme) [e2e-predictor] elif isinstance(new_e, (OSError, HTTPException)): [e2e-predictor] new_e = ProtocolError("Connection aborted.", new_e) [e2e-predictor] [e2e-predictor] > retries = retries.increment( [e2e-predictor] method, url, error=new_e, _pool=self, _stacktrace=sys.exc_info()[2] [e2e-predictor] ) [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:841: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = Retry(total=0, connect=None, read=None, redirect=None, status=None) [e2e-predictor] method = 'POST' [e2e-predictor] url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices' [e2e-predictor] response = None [e2e-predictor] error = NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.c...a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)") [e2e-predictor] _pool = [e2e-predictor] _stacktrace = [e2e-predictor] [e2e-predictor] def increment( [e2e-predictor] self, [e2e-predictor] method: str | None = None, [e2e-predictor] url: str | None = None, [e2e-predictor] response: BaseHTTPResponse | None = None, [e2e-predictor] error: Exception | None = 
None, [e2e-predictor] _pool: ConnectionPool | None = None, [e2e-predictor] _stacktrace: TracebackType | None = None, [e2e-predictor] ) -> Self: [e2e-predictor] """Return a new Retry object with incremented retry counters. [e2e-predictor] [e2e-predictor] :param response: A response object, or None, if the server did not [e2e-predictor] return a response. [e2e-predictor] :type response: :class:`~urllib3.response.BaseHTTPResponse` [e2e-predictor] :param Exception error: An error encountered during the request, or [e2e-predictor] None if the response was received successfully. [e2e-predictor] [e2e-predictor] :return: A new ``Retry`` object. [e2e-predictor] """ [e2e-predictor] if self.total is False and error: [e2e-predictor] # Disabled, indicate to re-raise the error. [e2e-predictor] raise reraise(type(error), error, _stacktrace) [e2e-predictor] [e2e-predictor] total = self.total [e2e-predictor] if total is not None: [e2e-predictor] total -= 1 [e2e-predictor] [e2e-predictor] connect = self.connect [e2e-predictor] read = self.read [e2e-predictor] redirect = self.redirect [e2e-predictor] status_count = self.status [e2e-predictor] other = self.other [e2e-predictor] cause = "unknown" [e2e-predictor] status = None [e2e-predictor] redirect_location = None [e2e-predictor] [e2e-predictor] if error and self._is_connection_error(error): [e2e-predictor] # Connect retry? [e2e-predictor] if connect is False: [e2e-predictor] raise reraise(type(error), error, _stacktrace) [e2e-predictor] elif connect is not None: [e2e-predictor] connect -= 1 [e2e-predictor] [e2e-predictor] elif error and self._is_read_error(error): [e2e-predictor] # Read retry? [e2e-predictor] if read is False or method is None or not self._is_method_retryable(method): [e2e-predictor] raise reraise(type(error), error, _stacktrace) [e2e-predictor] elif read is not None: [e2e-predictor] read -= 1 [e2e-predictor] [e2e-predictor] elif error: [e2e-predictor] # Other retry? 
[e2e-predictor] if other is not None: [e2e-predictor] other -= 1 [e2e-predictor] [e2e-predictor] elif response and response.get_redirect_location(): [e2e-predictor] # Redirect retry? [e2e-predictor] if redirect is not None: [e2e-predictor] redirect -= 1 [e2e-predictor] cause = "too many redirects" [e2e-predictor] response_redirect_location = response.get_redirect_location() [e2e-predictor] if response_redirect_location: [e2e-predictor] redirect_location = response_redirect_location [e2e-predictor] status = response.status [e2e-predictor] [e2e-predictor] else: [e2e-predictor] # Incrementing because of a server error like a 500 in [e2e-predictor] # status_forcelist and the given method is in the allowed_methods [e2e-predictor] cause = ResponseError.GENERIC_ERROR [e2e-predictor] if response and response.status: [e2e-predictor] if status_count is not None: [e2e-predictor] status_count -= 1 [e2e-predictor] cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) [e2e-predictor] status = response.status [e2e-predictor] [e2e-predictor] history = self.history + ( [e2e-predictor] RequestHistory(method, url, error, status, redirect_location), [e2e-predictor] ) [e2e-predictor] [e2e-predictor] new_retry = self.new( [e2e-predictor] total=total, [e2e-predictor] connect=connect, [e2e-predictor] read=read, [e2e-predictor] redirect=redirect, [e2e-predictor] status=status_count, [e2e-predictor] other=other, [e2e-predictor] history=history, [e2e-predictor] ) [e2e-predictor] [e2e-predictor] if new_retry.is_exhausted(): [e2e-predictor] reason = error or ResponseError(cause) [e2e-predictor] > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] [e2e-predictor] E urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Max retries exceeded with url: /apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices (Caused by 
NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")) [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError [e2e-predictor] ------------------------------ Captured log call ------------------------------- [e2e-predictor] WARNING urllib3.connectionpool:connectionpool.py:868 Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")': /apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices [e2e-predictor] WARNING urllib3.connectionpool:connectionpool.py:868 Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")': /apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices [e2e-predictor] WARNING urllib3.connectionpool:connectionpool.py:868 Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or 
service not known)")': /apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices [e2e-predictor] _______________________ test_sklearn_v2_runtime_mlserver _______________________ [e2e-predictor] [gw0] linux -- Python 3.11.13 /workspace/source/python/kserve/.venv/bin/python [e2e-predictor] [e2e-predictor] self = [e2e-predictor] [e2e-predictor] def _new_conn(self) -> socket.socket: [e2e-predictor] """Establish a socket connection and set nodelay settings on it. [e2e-predictor] [e2e-predictor] :return: New socket connection. [e2e-predictor] """ [e2e-predictor] try: [e2e-predictor] > sock = connection.create_connection( [e2e-predictor] (self._dns_host, self.port), [e2e-predictor] self.timeout, [e2e-predictor] source_address=self.source_address, [e2e-predictor] socket_options=self.socket_options, [e2e-predictor] ) [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connection.py:204: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] address = ('a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', 6443) [e2e-predictor] timeout = None, source_address = None, socket_options = [(6, 1, 1)] [e2e-predictor] [e2e-predictor] def create_connection( [e2e-predictor] address: tuple[str, int], [e2e-predictor] timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, [e2e-predictor] source_address: tuple[str, int] | None = None, [e2e-predictor] socket_options: _TYPE_SOCKET_OPTIONS | None = None, [e2e-predictor] ) -> socket.socket: [e2e-predictor] """Connect to *address* and return the socket object. [e2e-predictor] [e2e-predictor] Convenience function. Connect to *address* (a 2-tuple ``(host, [e2e-predictor] port)``) and return the socket object. Passing the optional [e2e-predictor] *timeout* parameter will set the timeout on the socket instance [e2e-predictor] before attempting to connect. 
If no *timeout* is supplied, the [e2e-predictor] global default timeout setting returned by :func:`socket.getdefaulttimeout` [e2e-predictor] is used. If *source_address* is set it must be a tuple of (host, port) [e2e-predictor] for the socket to bind as a source address before making the connection. [e2e-predictor] An host of '' or port 0 tells the OS to use the default. [e2e-predictor] """ [e2e-predictor] [e2e-predictor] host, port = address [e2e-predictor] if host.startswith("["): [e2e-predictor] host = host.strip("[]") [e2e-predictor] err = None [e2e-predictor] [e2e-predictor] # Using the value from allowed_gai_family() in the context of getaddrinfo lets [e2e-predictor] # us select whether to work with IPv4 DNS records, IPv6 records, or both. [e2e-predictor] # The original create_connection function always returns all records. [e2e-predictor] family = allowed_gai_family() [e2e-predictor] [e2e-predictor] try: [e2e-predictor] host.encode("idna") [e2e-predictor] except UnicodeError: [e2e-predictor] raise LocationParseError(f"'{host}', label empty or too long") from None [e2e-predictor] [e2e-predictor] > for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/util/connection.py:60: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] host = 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' [e2e-predictor] port = 6443, family = [e2e-predictor] type = , proto = 0, flags = 0 [e2e-predictor] [e2e-predictor] def getaddrinfo(host, port, family=0, type=0, proto=0, flags=0): [e2e-predictor] """Resolve host and port into list of address info entries. [e2e-predictor] [e2e-predictor] Translate the host/port argument into a sequence of 5-tuples that contain [e2e-predictor] all the necessary arguments for creating a socket connected to that service. 
[e2e-predictor] host is a domain name, a string representation of an IPv4/v6 address or [e2e-predictor] None. port is a string service name such as 'http', a numeric port number or [e2e-predictor] None. By passing None as the value of host and port, you can pass NULL to [e2e-predictor] the underlying C API. [e2e-predictor] [e2e-predictor] The family, type and proto arguments can be optionally specified in order to [e2e-predictor] narrow the list of addresses returned. Passing zero as a value for each of [e2e-predictor] these arguments selects the full range of results. [e2e-predictor] """ [e2e-predictor] # We override this function since we want to translate the numeric family [e2e-predictor] # and socket type values to enum constants. [e2e-predictor] addrlist = [] [e2e-predictor] > for res in _socket.getaddrinfo(host, port, family, type, proto, flags): [e2e-predictor] E socket.gaierror: [Errno -2] Name or service not known [e2e-predictor] [e2e-predictor] /usr/lib64/python3.11/socket.py:974: gaierror [e2e-predictor] [e2e-predictor] The above exception was the direct cause of the following exception: [e2e-predictor] [e2e-predictor] self = [e2e-predictor] method = 'POST' [e2e-predictor] url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices' [e2e-predictor] body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io...memory": "128Mi"}}, "runtime": "kserve-mlserver", "storageUri": "gs://kfserving-examples/models/sklearn/1.0/model"}}}}' [e2e-predictor] headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'}) [e2e-predictor] retries = Retry(total=0, connect=None, read=None, redirect=None, status=None) [e2e-predictor] redirect = False, assert_same_host = False, timeout = None, pool_timeout = None [e2e-predictor] release_conn = True, chunked = False, body_pos = None, preload_content = True 
[e2e-predictor] decode_content = True, response_kw = {} [e2e-predictor] parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices', query=None, fragment=None) [e2e-predictor] destination_scheme = None, conn = None, release_this_conn = True [e2e-predictor] http_tunnel_required = False, err = None, clean_exit = False [e2e-predictor] [e2e-predictor] def urlopen( # type: ignore[override] [e2e-predictor] self, [e2e-predictor] method: str, [e2e-predictor] url: str, [e2e-predictor] body: _TYPE_BODY | None = None, [e2e-predictor] headers: typing.Mapping[str, str] | None = None, [e2e-predictor] retries: Retry | bool | int | None = None, [e2e-predictor] redirect: bool = True, [e2e-predictor] assert_same_host: bool = True, [e2e-predictor] timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, [e2e-predictor] pool_timeout: int | None = None, [e2e-predictor] release_conn: bool | None = None, [e2e-predictor] chunked: bool = False, [e2e-predictor] body_pos: _TYPE_BODY_POSITION | None = None, [e2e-predictor] preload_content: bool = True, [e2e-predictor] decode_content: bool = True, [e2e-predictor] **response_kw: typing.Any, [e2e-predictor] ) -> BaseHTTPResponse: [e2e-predictor] """ [e2e-predictor] Get a connection from the pool and perform an HTTP request. This is the [e2e-predictor] lowest level call for making a request, so you'll need to specify all [e2e-predictor] the raw details. [e2e-predictor] [e2e-predictor] .. note:: [e2e-predictor] [e2e-predictor] More commonly, it's appropriate to use a convenience method [e2e-predictor] such as :meth:`request`. [e2e-predictor] [e2e-predictor] .. note:: [e2e-predictor] [e2e-predictor] `release_conn` will only behave as expected if [e2e-predictor] `preload_content=False` because we want to make [e2e-predictor] `preload_content=False` the default behaviour someday soon without [e2e-predictor] breaking backwards compatibility. 
[e2e-predictor] [e2e-predictor] :param method: [e2e-predictor] HTTP request method (such as GET, POST, PUT, etc.) [e2e-predictor] [e2e-predictor] :param url: [e2e-predictor] The URL to perform the request on. [e2e-predictor] [e2e-predictor] :param body: [e2e-predictor] Data to send in the request body, either :class:`str`, :class:`bytes`, [e2e-predictor] an iterable of :class:`str`/:class:`bytes`, or a file-like object. [e2e-predictor] [e2e-predictor] :param headers: [e2e-predictor] Dictionary of custom headers to send, such as User-Agent, [e2e-predictor] If-None-Match, etc. If None, pool headers are used. If provided, [e2e-predictor] these headers completely replace any pool-specific headers. [e2e-predictor] [e2e-predictor] :param retries: [e2e-predictor] Configure the number of retries to allow before raising a [e2e-predictor] :class:`~urllib3.exceptions.MaxRetryError` exception. [e2e-predictor] [e2e-predictor] If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a [e2e-predictor] :class:`~urllib3.util.retry.Retry` object for fine-grained control [e2e-predictor] over different types of retries. [e2e-predictor] Pass an integer number to retry connection errors that many times, [e2e-predictor] but no other types of errors. Pass zero to never retry. [e2e-predictor] [e2e-predictor] If ``False``, then retries are disabled and any exception is raised [e2e-predictor] immediately. Also, instead of raising a MaxRetryError on redirects, [e2e-predictor] the redirect response will be returned. [e2e-predictor] [e2e-predictor] :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. [e2e-predictor] [e2e-predictor] :param redirect: [e2e-predictor] If True, automatically handle redirects (status codes 301, 302, [e2e-predictor] 303, 307, 308). Each redirect counts as a retry. Disabling retries [e2e-predictor] will disable redirect, too. 
[e2e-predictor] [e2e-predictor] :param assert_same_host: [e2e-predictor] If ``True``, will make sure that the host of the pool requests is [e2e-predictor] consistent else will raise HostChangedError. When ``False``, you can [e2e-predictor] use the pool on an HTTP proxy and request foreign hosts. [e2e-predictor] [e2e-predictor] :param timeout: [e2e-predictor] If specified, overrides the default timeout for this one [e2e-predictor] request. It may be a float (in seconds) or an instance of [e2e-predictor] :class:`urllib3.util.Timeout`. [e2e-predictor] [e2e-predictor] :param pool_timeout: [e2e-predictor] If set and the pool is set to block=True, then this method will [e2e-predictor] block for ``pool_timeout`` seconds and raise EmptyPoolError if no [e2e-predictor] connection is available within the time period. [e2e-predictor] [e2e-predictor] :param bool preload_content: [e2e-predictor] If True, the response's body will be preloaded into memory. [e2e-predictor] [e2e-predictor] :param bool decode_content: [e2e-predictor] If True, will attempt to decode the body based on the [e2e-predictor] 'content-encoding' header. [e2e-predictor] [e2e-predictor] :param release_conn: [e2e-predictor] If False, then the urlopen call will not release the connection [e2e-predictor] back into the pool once a response is received (but will release if [e2e-predictor] you read the entire contents of the response such as when [e2e-predictor] `preload_content=True`). This is useful if you're not preloading [e2e-predictor] the response's content immediately. You will need to call [e2e-predictor] ``r.release_conn()`` on the response ``r`` to return the connection [e2e-predictor] back into the pool. If None, it takes the value of ``preload_content`` [e2e-predictor] which defaults to ``True``. [e2e-predictor] [e2e-predictor] :param bool chunked: [e2e-predictor] If True, urllib3 will send the body using chunked transfer [e2e-predictor] encoding. 
Otherwise, urllib3 will send the body using the standard [e2e-predictor] content-length form. Defaults to False. [e2e-predictor] [e2e-predictor] :param int body_pos: [e2e-predictor] Position to seek to in file-like body in the event of a retry or [e2e-predictor] redirect. Typically this won't need to be set because urllib3 will [e2e-predictor] auto-populate the value when needed. [e2e-predictor] """ [e2e-predictor] parsed_url = parse_url(url) [e2e-predictor] destination_scheme = parsed_url.scheme [e2e-predictor] [e2e-predictor] if headers is None: [e2e-predictor] headers = self.headers [e2e-predictor] [e2e-predictor] if not isinstance(retries, Retry): [e2e-predictor] retries = Retry.from_int(retries, redirect=redirect, default=self.retries) [e2e-predictor] [e2e-predictor] if release_conn is None: [e2e-predictor] release_conn = preload_content [e2e-predictor] [e2e-predictor] # Check host [e2e-predictor] if assert_same_host and not self.is_same_host(url): [e2e-predictor] raise HostChangedError(self, url, retries) [e2e-predictor] [e2e-predictor] # Ensure that the URL we're connecting to is properly encoded [e2e-predictor] if url.startswith("/"): [e2e-predictor] url = to_str(_encode_target(url)) [e2e-predictor] else: [e2e-predictor] url = to_str(parsed_url.url) [e2e-predictor] [e2e-predictor] conn = None [e2e-predictor] [e2e-predictor] # Track whether `conn` needs to be released before [e2e-predictor] # returning/raising/recursing. Update this variable if necessary, and [e2e-predictor] # leave `release_conn` constant throughout the function. That way, if [e2e-predictor] # the function recurses, the original value of `release_conn` will be [e2e-predictor] # passed down into the recursive call, and its value will be respected. [e2e-predictor] # [e2e-predictor] # See issue #651 [1] for details. 
[e2e-predictor] # [e2e-predictor] # [1] [e2e-predictor] release_this_conn = release_conn [e2e-predictor] [e2e-predictor] http_tunnel_required = connection_requires_http_tunnel( [e2e-predictor] self.proxy, self.proxy_config, destination_scheme [e2e-predictor] ) [e2e-predictor] [e2e-predictor] # Merge the proxy headers. Only done when not using HTTP CONNECT. We [e2e-predictor] # have to copy the headers dict so we can safely change it without those [e2e-predictor] # changes being reflected in anyone else's copy. [e2e-predictor] if not http_tunnel_required: [e2e-predictor] headers = headers.copy() # type: ignore[attr-defined] [e2e-predictor] headers.update(self.proxy_headers) # type: ignore[union-attr] [e2e-predictor] [e2e-predictor] # Must keep the exception bound to a separate variable or else Python 3 [e2e-predictor] # complains about UnboundLocalError. [e2e-predictor] err = None [e2e-predictor] [e2e-predictor] # Keep track of whether we cleanly exited the except block. This [e2e-predictor] # ensures we do proper cleanup in finally. [e2e-predictor] clean_exit = False [e2e-predictor] [e2e-predictor] # Rewind body position, if needed. Record current position [e2e-predictor] # for future rewinds in the event of a redirect/retry. [e2e-predictor] body_pos = set_file_position(body, body_pos) [e2e-predictor] [e2e-predictor] try: [e2e-predictor] # Request a connection from the queue. [e2e-predictor] timeout_obj = self._get_timeout(timeout) [e2e-predictor] conn = self._get_conn(timeout=pool_timeout) [e2e-predictor] [e2e-predictor] conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] [e2e-predictor] [e2e-predictor] # Is this a closed/new connection that requires CONNECT tunnelling? 
[e2e-predictor] if self.proxy is not None and http_tunnel_required and conn.is_closed: [e2e-predictor] try: [e2e-predictor] self._prepare_proxy(conn) [e2e-predictor] except (BaseSSLError, OSError, SocketTimeout) as e: [e2e-predictor] self._raise_timeout( [e2e-predictor] err=e, url=self.proxy.url, timeout_value=conn.timeout [e2e-predictor] ) [e2e-predictor] raise [e2e-predictor] [e2e-predictor] # If we're going to release the connection in ``finally:``, then [e2e-predictor] # the response doesn't need to know about the connection. Otherwise [e2e-predictor] # it will also try to release it and we'll have a double-release [e2e-predictor] # mess. [e2e-predictor] response_conn = conn if not release_conn else None [e2e-predictor] [e2e-predictor] # Make the request on the HTTPConnection object [e2e-predictor] > response = self._make_request( [e2e-predictor] conn, [e2e-predictor] method, [e2e-predictor] url, [e2e-predictor] timeout=timeout_obj, [e2e-predictor] body=body, [e2e-predictor] headers=headers, [e2e-predictor] chunked=chunked, [e2e-predictor] retries=retries, [e2e-predictor] response_conn=response_conn, [e2e-predictor] preload_content=preload_content, [e2e-predictor] decode_content=decode_content, [e2e-predictor] **response_kw, [e2e-predictor] ) [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:787: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] conn = [e2e-predictor] method = 'POST' [e2e-predictor] url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices' [e2e-predictor] body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io...memory": "128Mi"}}, "runtime": "kserve-mlserver", "storageUri": "gs://kfserving-examples/models/sklearn/1.0/model"}}}}' [e2e-predictor] headers = HTTPHeaderDict({'Accept': 
'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'})
retries = Retry(total=0, connect=None, read=None, redirect=None, status=None)
timeout = Timeout(connect=None, read=None, total=None), chunked = False
response_conn = None, preload_content = True, decode_content = True
enforce_content_length = True

    def _make_request(
        self,
        conn: BaseHTTPConnection,
        method: str,
        url: str,
        body: _TYPE_BODY | None = None,
        headers: typing.Mapping[str, str] | None = None,
        retries: Retry | None = None,
        timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
        chunked: bool = False,
        response_conn: BaseHTTPConnection | None = None,
        preload_content: bool = True,
        decode_content: bool = True,
        enforce_content_length: bool = True,
    ) -> BaseHTTPResponse:
        """
        Perform a request on a given urllib connection object taken from our
        pool.

        :param conn:
            a connection from one of our connection pools

        :param method:
            HTTP request method (such as GET, POST, PUT, etc.)

        :param url:
            The URL to perform the request on.

        :param body:
            Data to send in the request body, either :class:`str`, :class:`bytes`,
            an iterable of :class:`str`/:class:`bytes`, or a file-like object.

        :param headers:
            Dictionary of custom headers to send, such as User-Agent,
            If-None-Match, etc. If None, pool headers are used. If provided,
            these headers completely replace any pool-specific headers.

        :param retries:
            Configure the number of retries to allow before raising a
            :class:`~urllib3.exceptions.MaxRetryError` exception.

            Pass ``None`` to retry until you receive a response. Pass a
            :class:`~urllib3.util.retry.Retry` object for fine-grained control
            over different types of retries.
            Pass an integer number to retry connection errors that many times,
            but no other types of errors. Pass zero to never retry.

            If ``False``, then retries are disabled and any exception is raised
            immediately. Also, instead of raising a MaxRetryError on redirects,
            the redirect response will be returned.

        :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.

        :param timeout:
            If specified, overrides the default timeout for this one
            request. It may be a float (in seconds) or an instance of
            :class:`urllib3.util.Timeout`.

        :param chunked:
            If True, urllib3 will send the body using chunked transfer
            encoding. Otherwise, urllib3 will send the body using the standard
            content-length form. Defaults to False.

        :param response_conn:
            Set this to ``None`` if you will handle releasing the connection or
            set the connection to have the response release it.

        :param preload_content:
            If True, the response's body will be preloaded during construction.

        :param decode_content:
            If True, will attempt to decode the body based on the
            'content-encoding' header.

        :param enforce_content_length:
            Enforce content length checking. Body returned by server must match
            value of Content-Length header, if present. Otherwise, raise error.
        """
        self.num_requests += 1

        timeout_obj = self._get_timeout(timeout)
        timeout_obj.start_connect()
        conn.timeout = Timeout.resolve_default_timeout(timeout_obj.connect_timeout)

        try:
            # Trigger any extra validation we need to do.
            try:
                self._validate_conn(conn)
            except (SocketTimeout, BaseSSLError) as e:
                self._raise_timeout(err=e, url=url, timeout_value=conn.timeout)
                raise

        # _validate_conn() starts the connection to an HTTPS proxy
        # so we need to wrap errors with 'ProxyError' here too.
        except (
            OSError,
            NewConnectionError,
            TimeoutError,
            BaseSSLError,
            CertificateError,
            SSLError,
        ) as e:
            new_e: Exception = e
            if isinstance(e, (BaseSSLError, CertificateError)):
                new_e = SSLError(e)
            # If the connection didn't successfully connect to it's proxy
            # then there
            if isinstance(
                new_e, (OSError, NewConnectionError, TimeoutError, SSLError)
            ) and (conn and conn.proxy and not conn.has_connected_to_proxy):
                new_e = _wrap_proxy_error(new_e, conn.proxy.scheme)
>           raise new_e

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:488:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
conn =
method = 'POST'
url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io...memory": "128Mi"}}, "runtime": "kserve-mlserver", "storageUri": "gs://kfserving-examples/models/sklearn/1.0/model"}}}}'
headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'})
retries = Retry(total=0, connect=None, read=None, redirect=None, status=None)
timeout = Timeout(connect=None, read=None, total=None), chunked = False
response_conn = None, preload_content = True, decode_content = True
enforce_content_length = True

    def _make_request(
        self,
        conn: BaseHTTPConnection,
        method: str,
        url: str,
        body: _TYPE_BODY | None = None,
        headers: typing.Mapping[str, str] | None = None,
        retries: Retry | None = None,
        timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
        chunked: bool = False,
        response_conn: BaseHTTPConnection | None = None,
        preload_content: bool = True,
        decode_content: bool = True,
        enforce_content_length: bool = True,
    ) -> BaseHTTPResponse:
        """
        Perform a request on a given urllib connection object taken from our
        pool.

        :param conn:
            a connection from one of our connection pools

        :param method:
            HTTP request method (such as GET, POST, PUT, etc.)

        :param url:
            The URL to perform the request on.

        :param body:
            Data to send in the request body, either :class:`str`, :class:`bytes`,
            an iterable of :class:`str`/:class:`bytes`, or a file-like object.

        :param headers:
            Dictionary of custom headers to send, such as User-Agent,
            If-None-Match, etc. If None, pool headers are used. If provided,
            these headers completely replace any pool-specific headers.

        :param retries:
            Configure the number of retries to allow before raising a
            :class:`~urllib3.exceptions.MaxRetryError` exception.

            Pass ``None`` to retry until you receive a response. Pass a
            :class:`~urllib3.util.retry.Retry` object for fine-grained control
            over different types of retries.
            Pass an integer number to retry connection errors that many times,
            but no other types of errors. Pass zero to never retry.

            If ``False``, then retries are disabled and any exception is raised
            immediately. Also, instead of raising a MaxRetryError on redirects,
            the redirect response will be returned.

        :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.

        :param timeout:
            If specified, overrides the default timeout for this one
            request. It may be a float (in seconds) or an instance of
            :class:`urllib3.util.Timeout`.

        :param chunked:
            If True, urllib3 will send the body using chunked transfer
            encoding. Otherwise, urllib3 will send the body using the standard
            content-length form. Defaults to False.

        :param response_conn:
            Set this to ``None`` if you will handle releasing the connection or
            set the connection to have the response release it.

        :param preload_content:
            If True, the response's body will be preloaded during construction.

        :param decode_content:
            If True, will attempt to decode the body based on the
            'content-encoding' header.

        :param enforce_content_length:
            Enforce content length checking. Body returned by server must match
            value of Content-Length header, if present. Otherwise, raise error.
        """
        self.num_requests += 1

        timeout_obj = self._get_timeout(timeout)
        timeout_obj.start_connect()
        conn.timeout = Timeout.resolve_default_timeout(timeout_obj.connect_timeout)

        try:
            # Trigger any extra validation we need to do.
            try:
>               self._validate_conn(conn)

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:464:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
conn =

    def _validate_conn(self, conn: BaseHTTPConnection) -> None:
        """
        Called right before a request is made, after the socket is created.
        """
        super()._validate_conn(conn)

        # Force connect early to allow us to validate the connection.
        if conn.is_closed:
>           conn.connect()

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:1093:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =

    def connect(self) -> None:
        # Today we don't need to be doing this step before the /actual/ socket
        # connection, however in the future we'll need to decide whether to
        # create a new socket or re-use an existing "shared" socket as a part
        # of the HTTP/2 handshake dance.
        if self._tunnel_host is not None and self._tunnel_port is not None:
            probe_http2_host = self._tunnel_host
            probe_http2_port = self._tunnel_port
        else:
            probe_http2_host = self.host
            probe_http2_port = self.port

        # Check if the target origin supports HTTP/2.
        # If the value comes back as 'None' it means that the current thread
        # is probing for HTTP/2 support. Otherwise, we're waiting for another
        # probe to complete, or we get a value right away.
        target_supports_http2: bool | None
        if "h2" in ssl_.ALPN_PROTOCOLS:
            target_supports_http2 = http2_probe.acquire_and_get(
                host=probe_http2_host, port=probe_http2_port
            )
        else:
            # If HTTP/2 isn't going to be offered it doesn't matter if
            # the target supports HTTP/2. Don't want to make a probe.
            target_supports_http2 = False

        if self._connect_callback is not None:
            self._connect_callback(
                "before connect",
                thread_id=threading.get_ident(),
                target_supports_http2=target_supports_http2,
            )

        try:
            sock: socket.socket | ssl.SSLSocket
>           self.sock = sock = self._new_conn()

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connection.py:759:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =

    def _new_conn(self) -> socket.socket:
        """Establish a socket connection and set nodelay settings on it.

        :return: New socket connection.
        """
        try:
            sock = connection.create_connection(
                (self._dns_host, self.port),
                self.timeout,
                source_address=self.source_address,
                socket_options=self.socket_options,
            )
        except socket.gaierror as e:
>           raise NameResolutionError(self.host, self, e) from e
E           urllib3.exceptions.NameResolutionError: HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connection.py:211: NameResolutionError

The above exception was the direct cause of the following exception:

rest_v2_client =

    @pytest.mark.predictor
    @pytest.mark.asyncio(scope="session")
    async def test_sklearn_v2_runtime_mlserver(rest_v2_client):
        service_name = "isvc-sklearn-v2-runtime"
        protocol_version = "v2"

        predictor = V1beta1PredictorSpec(
            min_replicas=1,
            model=V1beta1ModelSpec(
                model_format=V1beta1ModelFormat(
                    name="sklearn",
                ),
                runtime="kserve-mlserver",
                storage_uri="gs://kfserving-examples/models/sklearn/1.0/model",
                protocol_version=protocol_version,
                resources=V1ResourceRequirements(
                    requests={"cpu": "50m", "memory": "128Mi"},
                    limits={"cpu": "100m", "memory": "512Mi"},
                ),
                readiness_probe=client.V1Probe(
                    http_get=client.V1HTTPGetAction(
                        path=f"/v2/models/{service_name}/ready", port=8080
                    ),
                    initial_delay_seconds=30,
                ),
            ),
        )

        isvc = V1beta1InferenceService(
            api_version=constants.KSERVE_V1BETA1,
            kind=constants.KSERVE_KIND_INFERENCESERVICE,
            metadata=client.V1ObjectMeta(
                name=service_name,
                namespace=KSERVE_TEST_NAMESPACE,
                labels={
                    constants.KSERVE_LABEL_NETWORKING_VISIBILITY: constants.KSERVE_LABEL_NETWORKING_VISIBILITY_EXPOSED,
                },
            ),
            spec=V1beta1InferenceServiceSpec(predictor=predictor),
        )

        kserve_client = KServeClient(
            config_file=os.environ.get("KUBECONFIG", "~/.kube/config")
        )
>       kserve_client.create(isvc)

predictor/test_sklearn.py:240:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
inferenceservice = {'api_version': 'serving.kserve.io/v1beta1',
 'kind': 'InferenceService',
 'metadata': {'annotations': None,
 ...
 'worker_spec': None,
 'xgboost': None},
 'transformer': None},
 'status': None}
namespace = 'kserve-ci-e2e-test', watch = False, timeout_seconds = 600

    def create(
        self, inferenceservice, namespace=None, watch=False, timeout_seconds=600
    ):  # pylint:disable=inconsistent-return-statements
        """
        Create the inference service
        :param inferenceservice: inference service object
        :param namespace: defaults to current or default namespace
        :param watch: True to watch the created service until timeout elapsed or status is ready
        :param timeout_seconds: timeout seconds for watch, default to 600s
        :return: created inference service
        """

        version = inferenceservice.api_version.split("/")[1]

        if namespace is None:
            namespace = utils.get_isvc_namespace(inferenceservice)

        try:
>           outputs = self.api_instance.create_namespaced_custom_object(
                constants.KSERVE_GROUP,
                version,
                namespace,
                constants.KSERVE_PLURAL_INFERENCESERVICE,
                inferenceservice,
            )

../../python/kserve/kserve/api/kserve_client.py:145:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
group = 'serving.kserve.io', version = 'v1beta1'
namespace = 'kserve-ci-e2e-test', plural = 'inferenceservices'
body = {'api_version': 'serving.kserve.io/v1beta1',
 'kind': 'InferenceService',
 'metadata': {'annotations': None,
 ...
 'worker_spec': None,
 'xgboost': None},
 'transformer': None},
 'status': None}
kwargs = {'_return_http_data_only': True}

    def create_namespaced_custom_object(self, group, version, namespace, plural, body, **kwargs):  # noqa: E501
        """create_namespaced_custom_object  # noqa: E501

        Creates a namespace scoped Custom object  # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.create_namespaced_custom_object(group, version, namespace, plural, body, async_req=True)
        >>> result = thread.get()

        :param async_req bool: execute request asynchronously
        :param str group: The custom resource's group name (required)
        :param str version: The custom resource's version (required)
        :param str namespace: The custom resource's namespace (required)
        :param str plural: The custom resource's plural name. For TPRs this would be lowercase plural kind. (required)
        :param object body: The JSON schema of the Resource to create. (required)
        :param str pretty: If 'true', then the output is pretty printed.
        :param str dry_run: When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed
        :param str field_manager: fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint.
        :param str field_validation: fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. (optional)
        :param _preload_content: if False, the urllib3.HTTPResponse object will
                                 be returned without reading/decoding response
                                 data. Default is True.
        :param _request_timeout: timeout setting for this request. If one
                                 number provided, it will be total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
        :return: object
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
>       return self.create_namespaced_custom_object_with_http_info(group, version, namespace, plural, body, **kwargs)  # noqa: E501

../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api/custom_objects_api.py:231:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
group = 'serving.kserve.io', version = 'v1beta1'
namespace = 'kserve-ci-e2e-test', plural = 'inferenceservices'
body = {'api_version': 'serving.kserve.io/v1beta1',
 'kind': 'InferenceService',
 'metadata': {'annotations': None,
 ... 'worker_spec': None,
 'xgboost': None},
 'transformer': None},
 'status': None}
kwargs = {'_return_http_data_only': True}
local_var_params = {'_return_http_data_only': True, 'all_params': ['group', 'version', 'namespace', 'plural', 'body', 'pretty', ...], 'au... 'worker_spec': None,
 'xgboost': None},
 'transformer': None},
 'status': None}, ...}
all_params = ['group', 'version', 'namespace', 'plural', 'body', 'pretty', ...]
key = '_return_http_data_only', val = True, collection_formats = {}
path_params = {'group': 'serving.kserve.io', 'namespace': 'kserve-ci-e2e-test', 'plural': 'inferenceservices', 'version': 'v1beta1'}
query_params = []

    def create_namespaced_custom_object_with_http_info(self, group, version, namespace, plural, body, **kwargs):  # noqa: E501
        """create_namespaced_custom_object  # noqa: E501

        Creates a namespace scoped Custom object  # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.create_namespaced_custom_object_with_http_info(group, version, namespace, plural, body, async_req=True)
        >>> result = thread.get()

        :param async_req bool: execute request asynchronously
        :param str group: The custom resource's group name (required)
        :param str version: The custom resource's version (required)
        :param str namespace: The custom resource's namespace (required)
        :param str plural: The custom resource's plural name. For TPRs this would be lowercase plural kind. (required)
        :param object body: The JSON schema of the Resource to create. (required)
        :param str pretty: If 'true', then the output is pretty printed.
        :param str dry_run: When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed
        :param str field_manager: fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint.
        :param str field_validation: fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. (optional)
        :param _return_http_data_only: response data without head status code
                                       and headers
        :param _preload_content: if False, the urllib3.HTTPResponse object will
                                 be returned without reading/decoding response
                                 data. Default is True.
        :param _request_timeout: timeout setting for this request. If one
                                 number provided, it will be total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
        :return: tuple(object, status_code(int), headers(HTTPHeaderDict))
                 If the method is called asynchronously,
                 returns the request thread.
        """

        local_var_params = locals()

        all_params = [
            'group',
            'version',
            'namespace',
            'plural',
            'body',
            'pretty',
            'dry_run',
            'field_manager',
            'field_validation'
        ]
        all_params.extend(
            [
                'async_req',
                '_return_http_data_only',
                '_preload_content',
                '_request_timeout'
            ]
        )

        for key, val in six.iteritems(local_var_params['kwargs']):
            if key not in all_params:
                raise ApiTypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method create_namespaced_custom_object" % key
                )
            local_var_params[key] = val
        del local_var_params['kwargs']
        # verify the required parameter 'group' is set
        if self.api_client.client_side_validation and ('group' not in local_var_params or  # noqa: E501
                                                       local_var_params['group'] is None):  # noqa: E501
            raise ApiValueError("Missing the required parameter `group` when calling `create_namespaced_custom_object`")  # noqa: E501
        # verify the required parameter 'version' is set
        if self.api_client.client_side_validation and ('version' not in local_var_params or  # noqa: E501
                                                       local_var_params['version'] is None):  # noqa: E501
            raise ApiValueError("Missing the required parameter `version` when calling `create_namespaced_custom_object`")  # noqa: E501
        # verify the required parameter 'namespace' is set
        if self.api_client.client_side_validation and ('namespace' not in local_var_params or  # noqa: E501
                                                       local_var_params['namespace'] is None):  # noqa: E501
            raise ApiValueError("Missing the required parameter `namespace` when calling `create_namespaced_custom_object`")  # noqa: E501
        # verify the required parameter 'plural' is set
        if self.api_client.client_side_validation and ('plural' not in local_var_params or  # noqa: E501
                                                       local_var_params['plural'] is None):  # noqa: E501
            raise ApiValueError("Missing the required parameter `plural` when calling `create_namespaced_custom_object`")  # noqa: E501
        # verify the required parameter 'body' is set
        if self.api_client.client_side_validation and ('body' not in local_var_params or  # noqa: E501
                                                       local_var_params['body'] is None):  # noqa: E501
            raise ApiValueError("Missing the required parameter `body` when calling `create_namespaced_custom_object`")  # noqa: E501

        collection_formats = {}

        path_params = {}
        if 'group' in local_var_params:
            path_params['group'] = local_var_params['group']  # noqa: E501
        if 'version' in local_var_params:
            path_params['version'] = local_var_params['version']  # noqa: E501
        if 'namespace' in local_var_params:
            path_params['namespace'] = local_var_params['namespace']  # noqa: E501
        if 'plural' in local_var_params:
            path_params['plural'] = local_var_params['plural']  # noqa: E501

        query_params = []
        if 'pretty' in local_var_params and local_var_params['pretty'] is not None:  # noqa: E501
            query_params.append(('pretty', local_var_params['pretty']))  # noqa: E501
        if 'dry_run' in local_var_params and local_var_params['dry_run'] is not None:  # noqa: E501
            query_params.append(('dryRun', local_var_params['dry_run']))  # noqa: E501
        if 'field_manager' in local_var_params and local_var_params['field_manager'] is not None:  # noqa: E501
            query_params.append(('fieldManager', local_var_params['field_manager']))  # noqa: E501
        if 'field_validation' in local_var_params and local_var_params['field_validation'] is not None:  # noqa: E501
            query_params.append(('fieldValidation', local_var_params['field_validation']))  # noqa: E501

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None
        if 'body' in local_var_params:
            body_params = local_var_params['body']
        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.select_header_accept(
            ['application/json'])  # noqa: E501

        # Authentication setting
        auth_settings = ['BearerToken']  # noqa: E501

>       return self.api_client.call_api(
            '/apis/{group}/{version}/namespaces/{namespace}/{plural}', 'POST',
            path_params,
            query_params,
            header_params,
            body=body_params,
            post_params=form_params,
            files=local_var_files,
            response_type='object',  # noqa: E501
            auth_settings=auth_settings,
            async_req=local_var_params.get('async_req'),
            _return_http_data_only=local_var_params.get('_return_http_data_only'),  # noqa: E501
            _preload_content=local_var_params.get('_preload_content', True),
            _request_timeout=local_var_params.get('_request_timeout'),
            collection_formats=collection_formats)

../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api/custom_objects_api.py:354:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
resource_path = '/apis/{group}/{version}/namespaces/{namespace}/{plural}'
method = 'POST'
path_params = {'group': 'serving.kserve.io', 'namespace': 'kserve-ci-e2e-test', 'plural': 'inferenceservices', 'version': 'v1beta1'}
query_params = []
header_params = {'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
body = {'api_version': 'serving.kserve.io/v1beta1',
 'kind': 'InferenceService',
 'metadata': {'annotations': None,
 ... 'worker_spec': None,
 'xgboost': None},
 'transformer': None},
 'status': None}
post_params = [], files = {}, response_type = 'object'
auth_settings = ['BearerToken'], async_req = None, _return_http_data_only = True
collection_formats = {}, _preload_content = True, _request_timeout = None
_host = None

    def call_api(self, resource_path, method,
                 path_params=None, query_params=None, header_params=None,
                 body=None, post_params=None, files=None,
                 response_type=None, auth_settings=None, async_req=None,
                 _return_http_data_only=None, collection_formats=None,
                 _preload_content=True, _request_timeout=None, _host=None):
        """Makes the HTTP request (synchronous) and returns deserialized data.

        To make an async_req request, set the async_req parameter.

        :param resource_path: Path to method endpoint.
        :param method: Method to call.
        :param path_params: Path parameters in the url.
        :param query_params: Query parameters in the url.
        :param header_params: Header parameters to be
            placed in the request header.
        :param body: Request body.
        :param post_params dict: Request post form parameters,
            for `application/x-www-form-urlencoded`, `multipart/form-data`.
        :param auth_settings list: Auth Settings names for the request.
        :param response: Response data type.
        :param files dict: key -> filename, value -> filepath,
            for `multipart/form-data`.
        :param async_req bool: execute request asynchronously
        :param _return_http_data_only: response data without head status code
                                       and headers
        :param collection_formats: dict of collection formats for path, query,
            header, and post parameters.
        :param _preload_content: if False, the urllib3.HTTPResponse object will
                                 be returned without reading/decoding response
                                 data. Default is True.
        :param _request_timeout: timeout setting for this request. If one
                                 number provided, it will be total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
        :return:
            If async_req parameter is True,
            the request will be called asynchronously.
            The method will return the request thread.
            If parameter async_req is False or missing,
            then the method will return the response directly.
        """
        if not async_req:
>           return self.__call_api(resource_path, method,
                                   path_params, query_params, header_params,
                                   body, post_params, files,
                                   response_type, auth_settings,
                                   _return_http_data_only, collection_formats,
                                   _preload_content, _request_timeout, _host)

../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api_client.py:348:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
resource_path = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
method = 'POST'
path_params = [('group', 'serving.kserve.io'), ('version', 'v1beta1'), ('namespace', 'kserve-ci-e2e-test'), ('plural', 'inferenceservices')]
query_params = []
header_params = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
body = {'apiVersion': 'serving.kserve.io/v1beta1', 'kind': 'InferenceService', 'metadata': {'labels': {'networking.kserve.io/...e': {'httpGet': {'path': '/v2/models/isvc-sklearn-v2-runtime/ready', 'port': 8080}, 'initialDelaySeconds': 30}, ...}}}}
post_params = [], files = {}, response_type = 'object'
auth_settings = ['BearerToken'], _return_http_data_only = True
collection_formats = {}, _preload_content = True, _request_timeout = None
_host = None

    def __call_api(
            self, resource_path, method, path_params=None,
            query_params=None, header_params=None, body=None, post_params=None,
            files=None, response_type=None, auth_settings=None,
            _return_http_data_only=None, collection_formats=None,
[e2e-predictor] _preload_content=True, _request_timeout=None, _host=None): [e2e-predictor] [e2e-predictor] config = self.configuration [e2e-predictor] [e2e-predictor] # header parameters [e2e-predictor] header_params = header_params or {} [e2e-predictor] header_params.update(self.default_headers) [e2e-predictor] if self.cookie: [e2e-predictor] header_params['Cookie'] = self.cookie [e2e-predictor] if header_params: [e2e-predictor] header_params = self.sanitize_for_serialization(header_params) [e2e-predictor] header_params = dict(self.parameters_to_tuples(header_params, [e2e-predictor] collection_formats)) [e2e-predictor] [e2e-predictor] # path parameters [e2e-predictor] if path_params: [e2e-predictor] path_params = self.sanitize_for_serialization(path_params) [e2e-predictor] path_params = self.parameters_to_tuples(path_params, [e2e-predictor] collection_formats) [e2e-predictor] for k, v in path_params: [e2e-predictor] # specified safe chars, encode everything [e2e-predictor] resource_path = resource_path.replace( [e2e-predictor] '{%s}' % k, [e2e-predictor] quote(str(v), safe=config.safe_chars_for_path_param) [e2e-predictor] ) [e2e-predictor] [e2e-predictor] # query parameters [e2e-predictor] if query_params: [e2e-predictor] query_params = self.sanitize_for_serialization(query_params) [e2e-predictor] query_params = self.parameters_to_tuples(query_params, [e2e-predictor] collection_formats) [e2e-predictor] [e2e-predictor] # post parameters [e2e-predictor] if post_params or files: [e2e-predictor] post_params = post_params if post_params else [] [e2e-predictor] post_params = self.sanitize_for_serialization(post_params) [e2e-predictor] post_params = self.parameters_to_tuples(post_params, [e2e-predictor] collection_formats) [e2e-predictor] post_params.extend(self.files_parameters(files)) [e2e-predictor] [e2e-predictor] # auth setting [e2e-predictor] self.update_params_for_auth(header_params, query_params, auth_settings) [e2e-predictor] [e2e-predictor] # body 
[e2e-predictor] if body: [e2e-predictor] body = self.sanitize_for_serialization(body) [e2e-predictor] [e2e-predictor] # request url [e2e-predictor] if _host is None: [e2e-predictor] url = self.configuration.host + resource_path [e2e-predictor] else: [e2e-predictor] # use server/host defined in path or operation instead [e2e-predictor] url = _host + resource_path [e2e-predictor] [e2e-predictor] # perform request and return response [e2e-predictor] > response_data = self.request( [e2e-predictor] method, url, query_params=query_params, headers=header_params, [e2e-predictor] post_params=post_params, body=body, [e2e-predictor] _preload_content=_preload_content, [e2e-predictor] _request_timeout=_request_timeout) [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api_client.py:180: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] method = 'POST' [e2e-predictor] url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices' [e2e-predictor] query_params = [] [e2e-predictor] headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'} [e2e-predictor] post_params = [] [e2e-predictor] body = {'apiVersion': 'serving.kserve.io/v1beta1', 'kind': 'InferenceService', 'metadata': {'labels': {'networking.kserve.io/...e': {'httpGet': {'path': '/v2/models/isvc-sklearn-v2-runtime/ready', 'port': 8080}, 'initialDelaySeconds': 30}, ...}}}} [e2e-predictor] _preload_content = True, _request_timeout = None [e2e-predictor] [e2e-predictor] def request(self, method, url, query_params=None, headers=None, [e2e-predictor] post_params=None, body=None, _preload_content=True, [e2e-predictor] _request_timeout=None): [e2e-predictor] """Makes the HTTP request using 
RESTClient.""" [e2e-predictor] if method == "GET": [e2e-predictor] return self.rest_client.GET(url, [e2e-predictor] query_params=query_params, [e2e-predictor] _preload_content=_preload_content, [e2e-predictor] _request_timeout=_request_timeout, [e2e-predictor] headers=headers) [e2e-predictor] elif method == "HEAD": [e2e-predictor] return self.rest_client.HEAD(url, [e2e-predictor] query_params=query_params, [e2e-predictor] _preload_content=_preload_content, [e2e-predictor] _request_timeout=_request_timeout, [e2e-predictor] headers=headers) [e2e-predictor] elif method == "OPTIONS": [e2e-predictor] return self.rest_client.OPTIONS(url, [e2e-predictor] query_params=query_params, [e2e-predictor] headers=headers, [e2e-predictor] _preload_content=_preload_content, [e2e-predictor] _request_timeout=_request_timeout) [e2e-predictor] elif method == "POST": [e2e-predictor] > return self.rest_client.POST(url, [e2e-predictor] query_params=query_params, [e2e-predictor] headers=headers, [e2e-predictor] post_params=post_params, [e2e-predictor] _preload_content=_preload_content, [e2e-predictor] _request_timeout=_request_timeout, [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api_client.py:391: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices' [e2e-predictor] headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'} [e2e-predictor] query_params = [], post_params = [] [e2e-predictor] body = {'apiVersion': 'serving.kserve.io/v1beta1', 'kind': 'InferenceService', 'metadata': {'labels': {'networking.kserve.io/...e': {'httpGet': {'path': '/v2/models/isvc-sklearn-v2-runtime/ready', 'port': 
8080}, 'initialDelaySeconds': 30}, ...}}}} [e2e-predictor] _preload_content = True, _request_timeout = None [e2e-predictor] [e2e-predictor] def POST(self, url, headers=None, query_params=None, post_params=None, [e2e-predictor] body=None, _preload_content=True, _request_timeout=None): [e2e-predictor] > return self.request("POST", url, [e2e-predictor] headers=headers, [e2e-predictor] query_params=query_params, [e2e-predictor] post_params=post_params, [e2e-predictor] _preload_content=_preload_content, [e2e-predictor] _request_timeout=_request_timeout, [e2e-predictor] body=body) [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/rest.py:279: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] method = 'POST' [e2e-predictor] url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices' [e2e-predictor] query_params = [] [e2e-predictor] headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'} [e2e-predictor] body = {'apiVersion': 'serving.kserve.io/v1beta1', 'kind': 'InferenceService', 'metadata': {'labels': {'networking.kserve.io/...e': {'httpGet': {'path': '/v2/models/isvc-sklearn-v2-runtime/ready', 'port': 8080}, 'initialDelaySeconds': 30}, ...}}}} [e2e-predictor] post_params = {}, _preload_content = True, _request_timeout = None [e2e-predictor] [e2e-predictor] def request(self, method, url, query_params=None, headers=None, [e2e-predictor] body=None, post_params=None, _preload_content=True, [e2e-predictor] _request_timeout=None): [e2e-predictor] """Perform requests. 
[e2e-predictor] [e2e-predictor] :param method: http request method [e2e-predictor] :param url: http request url [e2e-predictor] :param query_params: query parameters in the url [e2e-predictor] :param headers: http request headers [e2e-predictor] :param body: request json body, for `application/json` [e2e-predictor] :param post_params: request post parameters, [e2e-predictor] `application/x-www-form-urlencoded` [e2e-predictor] and `multipart/form-data` [e2e-predictor] :param _preload_content: if False, the urllib3.HTTPResponse object will [e2e-predictor] be returned without reading/decoding response [e2e-predictor] data. Default is True. [e2e-predictor] :param _request_timeout: timeout setting for this request. If one [e2e-predictor] number provided, it will be total request [e2e-predictor] timeout. It can also be a pair (tuple) of [e2e-predictor] (connection, read) timeouts. [e2e-predictor] """ [e2e-predictor] method = method.upper() [e2e-predictor] assert method in ['GET', 'HEAD', 'DELETE', 'POST', 'PUT', [e2e-predictor] 'PATCH', 'OPTIONS'] [e2e-predictor] [e2e-predictor] if post_params and body: [e2e-predictor] raise ApiValueError( [e2e-predictor] "body parameter cannot be used with post_params parameter." 
[e2e-predictor] ) [e2e-predictor] [e2e-predictor] post_params = post_params or {} [e2e-predictor] headers = headers or {} [e2e-predictor] [e2e-predictor] timeout = None [e2e-predictor] if _request_timeout: [e2e-predictor] if isinstance(_request_timeout, (int, ) if six.PY3 else (int, long)): # noqa: E501,F821 [e2e-predictor] timeout = urllib3.Timeout(total=_request_timeout) [e2e-predictor] elif (isinstance(_request_timeout, tuple) and [e2e-predictor] len(_request_timeout) == 2): [e2e-predictor] timeout = urllib3.Timeout( [e2e-predictor] connect=_request_timeout[0], read=_request_timeout[1]) [e2e-predictor] [e2e-predictor] if 'Content-Type' not in headers: [e2e-predictor] headers['Content-Type'] = 'application/json' [e2e-predictor] [e2e-predictor] try: [e2e-predictor] # For `POST`, `PUT`, `PATCH`, `OPTIONS`, `DELETE` [e2e-predictor] if method in ['POST', 'PUT', 'PATCH', 'OPTIONS', 'DELETE']: [e2e-predictor] if query_params: [e2e-predictor] url += '?' + urlencode(query_params) [e2e-predictor] if (re.search('json', headers['Content-Type'], re.IGNORECASE) or [e2e-predictor] headers['Content-Type'] == 'application/apply-patch+yaml'): [e2e-predictor] if headers['Content-Type'] == 'application/json-patch+json': [e2e-predictor] if not isinstance(body, list): [e2e-predictor] headers['Content-Type'] = \ [e2e-predictor] 'application/strategic-merge-patch+json' [e2e-predictor] request_body = None [e2e-predictor] if body is not None: [e2e-predictor] request_body = json.dumps(body) [e2e-predictor] > r = self.pool_manager.request( [e2e-predictor] method, url, [e2e-predictor] body=request_body, [e2e-predictor] preload_content=_preload_content, [e2e-predictor] timeout=timeout, [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/rest.py:172: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] method = 'POST' [e2e-predictor] url = 
'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices' [e2e-predictor] body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io...memory": "128Mi"}}, "runtime": "kserve-mlserver", "storageUri": "gs://kfserving-examples/models/sklearn/1.0/model"}}}}' [e2e-predictor] fields = None [e2e-predictor] headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'} [e2e-predictor] json = None [e2e-predictor] urlopen_kw = {'body': '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking....rver", "storageUri": "gs://kfserving-examples/models/sklearn/1.0/model"}}}}', 'preload_content': True, 'timeout': None} [e2e-predictor] [e2e-predictor] def request( [e2e-predictor] self, [e2e-predictor] method: str, [e2e-predictor] url: str, [e2e-predictor] body: _TYPE_BODY | None = None, [e2e-predictor] fields: _TYPE_FIELDS | None = None, [e2e-predictor] headers: typing.Mapping[str, str] | None = None, [e2e-predictor] json: typing.Any | None = None, [e2e-predictor] **urlopen_kw: typing.Any, [e2e-predictor] ) -> BaseHTTPResponse: [e2e-predictor] """ [e2e-predictor] Make a request using :meth:`urlopen` with the appropriate encoding of [e2e-predictor] ``fields`` based on the ``method`` used. [e2e-predictor] [e2e-predictor] This is a convenience method that requires the least amount of manual [e2e-predictor] effort. It can be used in most situations, while still having the [e2e-predictor] option to drop down to more specific methods when necessary, such as [e2e-predictor] :meth:`request_encode_url`, :meth:`request_encode_body`, [e2e-predictor] or even the lowest level :meth:`urlopen`. [e2e-predictor] [e2e-predictor] :param method: [e2e-predictor] HTTP request method (such as GET, POST, PUT, etc.) 
[e2e-predictor] [e2e-predictor] :param url: [e2e-predictor] The URL to perform the request on. [e2e-predictor] [e2e-predictor] :param body: [e2e-predictor] Data to send in the request body, either :class:`str`, :class:`bytes`, [e2e-predictor] an iterable of :class:`str`/:class:`bytes`, or a file-like object. [e2e-predictor] [e2e-predictor] :param fields: [e2e-predictor] Data to encode and send in the URL or request body, depending on ``method``. [e2e-predictor] [e2e-predictor] :param headers: [e2e-predictor] Dictionary of custom headers to send, such as User-Agent, [e2e-predictor] If-None-Match, etc. If None, pool headers are used. If provided, [e2e-predictor] these headers completely replace any pool-specific headers. [e2e-predictor] [e2e-predictor] :param json: [e2e-predictor] Data to encode and send as JSON with UTF-encoded in the request body. [e2e-predictor] The ``"Content-Type"`` header will be set to ``"application/json"`` [e2e-predictor] unless specified otherwise. [e2e-predictor] """ [e2e-predictor] method = method.upper() [e2e-predictor] [e2e-predictor] if json is not None and body is not None: [e2e-predictor] raise TypeError( [e2e-predictor] "request got values for both 'body' and 'json' parameters which are mutually exclusive" [e2e-predictor] ) [e2e-predictor] [e2e-predictor] if json is not None: [e2e-predictor] if headers is None: [e2e-predictor] headers = self.headers [e2e-predictor] [e2e-predictor] if not ("content-type" in map(str.lower, headers.keys())): [e2e-predictor] headers = HTTPHeaderDict(headers) [e2e-predictor] headers["Content-Type"] = "application/json" [e2e-predictor] [e2e-predictor] body = _json.dumps(json, separators=(",", ":"), ensure_ascii=False).encode( [e2e-predictor] "utf-8" [e2e-predictor] ) [e2e-predictor] [e2e-predictor] if body is not None: [e2e-predictor] urlopen_kw["body"] = body [e2e-predictor] [e2e-predictor] if method in self._encode_url_methods: [e2e-predictor] return self.request_encode_url( [e2e-predictor] method, 
[e2e-predictor] url, [e2e-predictor] fields=fields, # type: ignore[arg-type] [e2e-predictor] headers=headers, [e2e-predictor] **urlopen_kw, [e2e-predictor] ) [e2e-predictor] else: [e2e-predictor] > return self.request_encode_body( [e2e-predictor] method, url, fields=fields, headers=headers, **urlopen_kw [e2e-predictor] ) [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/_request_methods.py:143: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] method = 'POST' [e2e-predictor] url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices' [e2e-predictor] fields = None [e2e-predictor] headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'} [e2e-predictor] encode_multipart = True, multipart_boundary = None [e2e-predictor] urlopen_kw = {'body': '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking....rver", "storageUri": "gs://kfserving-examples/models/sklearn/1.0/model"}}}}', 'preload_content': True, 'timeout': None} [e2e-predictor] extra_kw = {'body': '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking....nt': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'}), 'preload_content': True, 'timeout': None} [e2e-predictor] [e2e-predictor] def request_encode_body( [e2e-predictor] self, [e2e-predictor] method: str, [e2e-predictor] url: str, [e2e-predictor] fields: _TYPE_FIELDS | None = None, [e2e-predictor] headers: typing.Mapping[str, str] | None = None, [e2e-predictor] encode_multipart: bool = True, [e2e-predictor] multipart_boundary: str | None = None, [e2e-predictor] **urlopen_kw: str, [e2e-predictor] ) -> 
BaseHTTPResponse: [e2e-predictor] """ [e2e-predictor] Make a request using :meth:`urlopen` with the ``fields`` encoded in [e2e-predictor] the body. This is useful for request methods like POST, PUT, PATCH, etc. [e2e-predictor] [e2e-predictor] When ``encode_multipart=True`` (default), then [e2e-predictor] :func:`urllib3.encode_multipart_formdata` is used to encode [e2e-predictor] the payload with the appropriate content type. Otherwise [e2e-predictor] :func:`urllib.parse.urlencode` is used with the [e2e-predictor] 'application/x-www-form-urlencoded' content type. [e2e-predictor] [e2e-predictor] Multipart encoding must be used when posting files, and it's reasonably [e2e-predictor] safe to use it in other times too. However, it may break request [e2e-predictor] signing, such as with OAuth. [e2e-predictor] [e2e-predictor] Supports an optional ``fields`` parameter of key/value strings AND [e2e-predictor] key/filetuple. A filetuple is a (filename, data, MIME type) tuple where [e2e-predictor] the MIME type is optional. For example:: [e2e-predictor] [e2e-predictor] fields = { [e2e-predictor] 'foo': 'bar', [e2e-predictor] 'fakefile': ('foofile.txt', 'contents of foofile'), [e2e-predictor] 'realfile': ('barfile.txt', open('realfile').read()), [e2e-predictor] 'typedfile': ('bazfile.bin', open('bazfile').read(), [e2e-predictor] 'image/jpeg'), [e2e-predictor] 'nonamefile': 'contents of nonamefile field', [e2e-predictor] } [e2e-predictor] [e2e-predictor] When uploading a file, providing a filename (the first parameter of the [e2e-predictor] tuple) is optional but recommended to best mimic behavior of browsers. [e2e-predictor] [e2e-predictor] Note that if ``headers`` are supplied, the 'Content-Type' header will [e2e-predictor] be overwritten because it depends on the dynamic random boundary string [e2e-predictor] which is used to compose the body of the request. The random boundary [e2e-predictor] string can be explicitly set with the ``multipart_boundary`` parameter. 
[e2e-predictor] [e2e-predictor] :param method: [e2e-predictor] HTTP request method (such as GET, POST, PUT, etc.) [e2e-predictor] [e2e-predictor] :param url: [e2e-predictor] The URL to perform the request on. [e2e-predictor] [e2e-predictor] :param fields: [e2e-predictor] Data to encode and send in the request body. [e2e-predictor] [e2e-predictor] :param headers: [e2e-predictor] Dictionary of custom headers to send, such as User-Agent, [e2e-predictor] If-None-Match, etc. If None, pool headers are used. If provided, [e2e-predictor] these headers completely replace any pool-specific headers. [e2e-predictor] [e2e-predictor] :param encode_multipart: [e2e-predictor] If True, encode the ``fields`` using the multipart/form-data MIME [e2e-predictor] format. [e2e-predictor] [e2e-predictor] :param multipart_boundary: [e2e-predictor] If not specified, then a random boundary will be generated using [e2e-predictor] :func:`urllib3.filepost.choose_boundary`. [e2e-predictor] """ [e2e-predictor] if headers is None: [e2e-predictor] headers = self.headers [e2e-predictor] [e2e-predictor] extra_kw: dict[str, typing.Any] = {"headers": HTTPHeaderDict(headers)} [e2e-predictor] body: bytes | str [e2e-predictor] [e2e-predictor] if fields: [e2e-predictor] if "body" in urlopen_kw: [e2e-predictor] raise TypeError( [e2e-predictor] "request got values for both 'fields' and 'body', can only specify one." 
[e2e-predictor] ) [e2e-predictor] [e2e-predictor] if encode_multipart: [e2e-predictor] body, content_type = encode_multipart_formdata( [e2e-predictor] fields, boundary=multipart_boundary [e2e-predictor] ) [e2e-predictor] else: [e2e-predictor] body, content_type = ( [e2e-predictor] urlencode(fields), # type: ignore[arg-type] [e2e-predictor] "application/x-www-form-urlencoded", [e2e-predictor] ) [e2e-predictor] [e2e-predictor] extra_kw["body"] = body [e2e-predictor] extra_kw["headers"].setdefault("Content-Type", content_type) [e2e-predictor] [e2e-predictor] extra_kw.update(urlopen_kw) [e2e-predictor] [e2e-predictor] > return self.urlopen(method, url, **extra_kw) [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/_request_methods.py:278: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] method = 'POST' [e2e-predictor] url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices' [e2e-predictor] redirect = True [e2e-predictor] kw = {'assert_same_host': False, 'body': '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata...', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'}), 'preload_content': True, ...} [e2e-predictor] u = Url(scheme='https', auth=None, host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', p...443, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices', query=None, fragment=None) [e2e-predictor] conn = [e2e-predictor] [e2e-predictor] def urlopen( # type: ignore[override] [e2e-predictor] self, method: str, url: str, redirect: bool = True, **kw: typing.Any [e2e-predictor] ) -> BaseHTTPResponse: [e2e-predictor] """ [e2e-predictor] Same as 
:meth:`urllib3.HTTPConnectionPool.urlopen` [e2e-predictor] with custom cross-host redirect logic and only sends the request-uri [e2e-predictor] portion of the ``url``. [e2e-predictor] [e2e-predictor] The given ``url`` parameter must be absolute, such that an appropriate [e2e-predictor] :class:`urllib3.connectionpool.ConnectionPool` can be chosen for it. [e2e-predictor] """ [e2e-predictor] u = parse_url(url) [e2e-predictor] [e2e-predictor] if u.scheme is None: [e2e-predictor] warnings.warn( [e2e-predictor] "URLs without a scheme (ie 'https://') are deprecated and will raise an error " [e2e-predictor] "in a future version of urllib3. To avoid this DeprecationWarning ensure all URLs " [e2e-predictor] "start with 'https://' or 'http://'. Read more in this issue: " [e2e-predictor] "https://github.com/urllib3/urllib3/issues/2920", [e2e-predictor] category=DeprecationWarning, [e2e-predictor] stacklevel=2, [e2e-predictor] ) [e2e-predictor] [e2e-predictor] conn = self.connection_from_host(u.host, port=u.port, scheme=u.scheme) [e2e-predictor] [e2e-predictor] kw["assert_same_host"] = False [e2e-predictor] kw["redirect"] = False [e2e-predictor] [e2e-predictor] if "headers" not in kw: [e2e-predictor] kw["headers"] = self.headers [e2e-predictor] [e2e-predictor] if self._proxy_requires_url_absolute_form(u): [e2e-predictor] response = conn.urlopen(method, url, **kw) [e2e-predictor] else: [e2e-predictor] > response = conn.urlopen(method, u.request_uri, **kw) [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/poolmanager.py:457: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] method = 'POST' [e2e-predictor] url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices' [e2e-predictor] body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": 
{"networking.kserve.io...memory": "128Mi"}}, "runtime": "kserve-mlserver", "storageUri": "gs://kfserving-examples/models/sklearn/1.0/model"}}}}' [e2e-predictor] headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'}) [e2e-predictor] retries = Retry(total=2, connect=None, read=None, redirect=None, status=None) [e2e-predictor] redirect = False, assert_same_host = False, timeout = None, pool_timeout = None [e2e-predictor] release_conn = True, chunked = False, body_pos = None, preload_content = True [e2e-predictor] decode_content = True, response_kw = {} [e2e-predictor] parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices', query=None, fragment=None) [e2e-predictor] destination_scheme = None, conn = None, release_this_conn = True [e2e-predictor] http_tunnel_required = False [e2e-predictor] err = NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.c...a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)") [e2e-predictor] clean_exit = False [e2e-predictor] [e2e-predictor] def urlopen( # type: ignore[override] [e2e-predictor] self, [e2e-predictor] method: str, [e2e-predictor] url: str, [e2e-predictor] body: _TYPE_BODY | None = None, [e2e-predictor] headers: typing.Mapping[str, str] | None = None, [e2e-predictor] retries: Retry | bool | int | None = None, [e2e-predictor] redirect: bool = True, [e2e-predictor] assert_same_host: bool = True, [e2e-predictor] timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, [e2e-predictor] pool_timeout: int | None = None, [e2e-predictor] release_conn: bool | None = None, [e2e-predictor] chunked: bool = False, [e2e-predictor] body_pos: _TYPE_BODY_POSITION | None = None, [e2e-predictor] preload_content: bool = True, [e2e-predictor] 
        decode_content: bool = True,
        **response_kw: typing.Any,
    ) -> BaseHTTPResponse:
        """
        Get a connection from the pool and perform an HTTP request. This is the
        lowest level call for making a request, so you'll need to specify all
        the raw details.

        .. note::

            More commonly, it's appropriate to use a convenience method
            such as :meth:`request`.

        .. note::

            `release_conn` will only behave as expected if
            `preload_content=False` because we want to make
            `preload_content=False` the default behaviour someday soon without
            breaking backwards compatibility.

        :param method:
            HTTP request method (such as GET, POST, PUT, etc.)

        :param url:
            The URL to perform the request on.

        :param body:
            Data to send in the request body, either :class:`str`, :class:`bytes`,
            an iterable of :class:`str`/:class:`bytes`, or a file-like object.

        :param headers:
            Dictionary of custom headers to send, such as User-Agent,
            If-None-Match, etc. If None, pool headers are used. If provided,
            these headers completely replace any pool-specific headers.

        :param retries:
            Configure the number of retries to allow before raising a
            :class:`~urllib3.exceptions.MaxRetryError` exception.

            If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a
            :class:`~urllib3.util.retry.Retry` object for fine-grained control
            over different types of retries.
            Pass an integer number to retry connection errors that many times,
            but no other types of errors. Pass zero to never retry.

            If ``False``, then retries are disabled and any exception is raised
            immediately. Also, instead of raising a MaxRetryError on redirects,
            the redirect response will be returned.

        :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.

        :param redirect:
            If True, automatically handle redirects (status codes 301, 302,
            303, 307, 308). Each redirect counts as a retry. Disabling retries
            will disable redirect, too.

        :param assert_same_host:
            If ``True``, will make sure that the host of the pool requests is
            consistent else will raise HostChangedError. When ``False``, you can
            use the pool on an HTTP proxy and request foreign hosts.

        :param timeout:
            If specified, overrides the default timeout for this one
            request. It may be a float (in seconds) or an instance of
            :class:`urllib3.util.Timeout`.

        :param pool_timeout:
            If set and the pool is set to block=True, then this method will
            block for ``pool_timeout`` seconds and raise EmptyPoolError if no
            connection is available within the time period.

        :param bool preload_content:
            If True, the response's body will be preloaded into memory.

        :param bool decode_content:
            If True, will attempt to decode the body based on the
            'content-encoding' header.

        :param release_conn:
            If False, then the urlopen call will not release the connection
            back into the pool once a response is received (but will release if
            you read the entire contents of the response such as when
            `preload_content=True`). This is useful if you're not preloading
            the response's content immediately. You will need to call
            ``r.release_conn()`` on the response ``r`` to return the connection
            back into the pool. If None, it takes the value of ``preload_content``
            which defaults to ``True``.

        :param bool chunked:
            If True, urllib3 will send the body using chunked transfer
            encoding. Otherwise, urllib3 will send the body using the standard
            content-length form. Defaults to False.

        :param int body_pos:
            Position to seek to in file-like body in the event of a retry or
            redirect. Typically this won't need to be set because urllib3 will
            auto-populate the value when needed.
[e2e-predictor] """ [e2e-predictor] parsed_url = parse_url(url) [e2e-predictor] destination_scheme = parsed_url.scheme [e2e-predictor] [e2e-predictor] if headers is None: [e2e-predictor] headers = self.headers [e2e-predictor] [e2e-predictor] if not isinstance(retries, Retry): [e2e-predictor] retries = Retry.from_int(retries, redirect=redirect, default=self.retries) [e2e-predictor] [e2e-predictor] if release_conn is None: [e2e-predictor] release_conn = preload_content [e2e-predictor] [e2e-predictor] # Check host [e2e-predictor] if assert_same_host and not self.is_same_host(url): [e2e-predictor] raise HostChangedError(self, url, retries) [e2e-predictor] [e2e-predictor] # Ensure that the URL we're connecting to is properly encoded [e2e-predictor] if url.startswith("/"): [e2e-predictor] url = to_str(_encode_target(url)) [e2e-predictor] else: [e2e-predictor] url = to_str(parsed_url.url) [e2e-predictor] [e2e-predictor] conn = None [e2e-predictor] [e2e-predictor] # Track whether `conn` needs to be released before [e2e-predictor] # returning/raising/recursing. Update this variable if necessary, and [e2e-predictor] # leave `release_conn` constant throughout the function. That way, if [e2e-predictor] # the function recurses, the original value of `release_conn` will be [e2e-predictor] # passed down into the recursive call, and its value will be respected. [e2e-predictor] # [e2e-predictor] # See issue #651 [1] for details. [e2e-predictor] # [e2e-predictor] # [1] [e2e-predictor] release_this_conn = release_conn [e2e-predictor] [e2e-predictor] http_tunnel_required = connection_requires_http_tunnel( [e2e-predictor] self.proxy, self.proxy_config, destination_scheme [e2e-predictor] ) [e2e-predictor] [e2e-predictor] # Merge the proxy headers. Only done when not using HTTP CONNECT. We [e2e-predictor] # have to copy the headers dict so we can safely change it without those [e2e-predictor] # changes being reflected in anyone else's copy. 
[e2e-predictor] if not http_tunnel_required: [e2e-predictor] headers = headers.copy() # type: ignore[attr-defined] [e2e-predictor] headers.update(self.proxy_headers) # type: ignore[union-attr] [e2e-predictor] [e2e-predictor] # Must keep the exception bound to a separate variable or else Python 3 [e2e-predictor] # complains about UnboundLocalError. [e2e-predictor] err = None [e2e-predictor] [e2e-predictor] # Keep track of whether we cleanly exited the except block. This [e2e-predictor] # ensures we do proper cleanup in finally. [e2e-predictor] clean_exit = False [e2e-predictor] [e2e-predictor] # Rewind body position, if needed. Record current position [e2e-predictor] # for future rewinds in the event of a redirect/retry. [e2e-predictor] body_pos = set_file_position(body, body_pos) [e2e-predictor] [e2e-predictor] try: [e2e-predictor] # Request a connection from the queue. [e2e-predictor] timeout_obj = self._get_timeout(timeout) [e2e-predictor] conn = self._get_conn(timeout=pool_timeout) [e2e-predictor] [e2e-predictor] conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] [e2e-predictor] [e2e-predictor] # Is this a closed/new connection that requires CONNECT tunnelling? [e2e-predictor] if self.proxy is not None and http_tunnel_required and conn.is_closed: [e2e-predictor] try: [e2e-predictor] self._prepare_proxy(conn) [e2e-predictor] except (BaseSSLError, OSError, SocketTimeout) as e: [e2e-predictor] self._raise_timeout( [e2e-predictor] err=e, url=self.proxy.url, timeout_value=conn.timeout [e2e-predictor] ) [e2e-predictor] raise [e2e-predictor] [e2e-predictor] # If we're going to release the connection in ``finally:``, then [e2e-predictor] # the response doesn't need to know about the connection. Otherwise [e2e-predictor] # it will also try to release it and we'll have a double-release [e2e-predictor] # mess. 
[e2e-predictor] response_conn = conn if not release_conn else None [e2e-predictor] [e2e-predictor] # Make the request on the HTTPConnection object [e2e-predictor] response = self._make_request( [e2e-predictor] conn, [e2e-predictor] method, [e2e-predictor] url, [e2e-predictor] timeout=timeout_obj, [e2e-predictor] body=body, [e2e-predictor] headers=headers, [e2e-predictor] chunked=chunked, [e2e-predictor] retries=retries, [e2e-predictor] response_conn=response_conn, [e2e-predictor] preload_content=preload_content, [e2e-predictor] decode_content=decode_content, [e2e-predictor] **response_kw, [e2e-predictor] ) [e2e-predictor] [e2e-predictor] # Everything went great! [e2e-predictor] clean_exit = True [e2e-predictor] [e2e-predictor] except EmptyPoolError: [e2e-predictor] # Didn't get a connection from the pool, no need to clean up [e2e-predictor] clean_exit = True [e2e-predictor] release_this_conn = False [e2e-predictor] raise [e2e-predictor] [e2e-predictor] except ( [e2e-predictor] TimeoutError, [e2e-predictor] HTTPException, [e2e-predictor] OSError, [e2e-predictor] ProtocolError, [e2e-predictor] BaseSSLError, [e2e-predictor] SSLError, [e2e-predictor] CertificateError, [e2e-predictor] ProxyError, [e2e-predictor] ) as e: [e2e-predictor] # Discard the connection for these exceptions. It will be [e2e-predictor] # replaced during the next _get_conn() call. 
[e2e-predictor] clean_exit = False [e2e-predictor] new_e: Exception = e [e2e-predictor] if isinstance(e, (BaseSSLError, CertificateError)): [e2e-predictor] new_e = SSLError(e) [e2e-predictor] if isinstance( [e2e-predictor] new_e, [e2e-predictor] ( [e2e-predictor] OSError, [e2e-predictor] NewConnectionError, [e2e-predictor] TimeoutError, [e2e-predictor] SSLError, [e2e-predictor] HTTPException, [e2e-predictor] ), [e2e-predictor] ) and (conn and conn.proxy and not conn.has_connected_to_proxy): [e2e-predictor] new_e = _wrap_proxy_error(new_e, conn.proxy.scheme) [e2e-predictor] elif isinstance(new_e, (OSError, HTTPException)): [e2e-predictor] new_e = ProtocolError("Connection aborted.", new_e) [e2e-predictor] [e2e-predictor] retries = retries.increment( [e2e-predictor] method, url, error=new_e, _pool=self, _stacktrace=sys.exc_info()[2] [e2e-predictor] ) [e2e-predictor] retries.sleep() [e2e-predictor] [e2e-predictor] # Keep track of the error for the retry warning. [e2e-predictor] err = e [e2e-predictor] [e2e-predictor] finally: [e2e-predictor] if not clean_exit: [e2e-predictor] # We hit some kind of exception, handled or otherwise. We need [e2e-predictor] # to throw the connection away unless explicitly told not to. [e2e-predictor] # Close the connection, set the variable to None, and make sure [e2e-predictor] # we put the None back in the pool to avoid leaking it. [e2e-predictor] if conn: [e2e-predictor] conn.close() [e2e-predictor] conn = None [e2e-predictor] release_this_conn = True [e2e-predictor] [e2e-predictor] if release_this_conn: [e2e-predictor] # Put the connection back to be reused. If the connection is [e2e-predictor] # expired then it will be None, which will get replaced with a [e2e-predictor] # fresh connection during _get_conn. 
[e2e-predictor] self._put_conn(conn) [e2e-predictor] [e2e-predictor] if not conn: [e2e-predictor] # Try again [e2e-predictor] log.warning( [e2e-predictor] "Retrying (%r) after connection broken by '%r': %s", retries, err, url [e2e-predictor] ) [e2e-predictor] > return self.urlopen( [e2e-predictor] method, [e2e-predictor] url, [e2e-predictor] body, [e2e-predictor] headers, [e2e-predictor] retries, [e2e-predictor] redirect, [e2e-predictor] assert_same_host, [e2e-predictor] timeout=timeout, [e2e-predictor] pool_timeout=pool_timeout, [e2e-predictor] release_conn=release_conn, [e2e-predictor] chunked=chunked, [e2e-predictor] body_pos=body_pos, [e2e-predictor] preload_content=preload_content, [e2e-predictor] decode_content=decode_content, [e2e-predictor] **response_kw, [e2e-predictor] ) [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:871: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] method = 'POST' [e2e-predictor] url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices' [e2e-predictor] body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io...memory": "128Mi"}}, "runtime": "kserve-mlserver", "storageUri": "gs://kfserving-examples/models/sklearn/1.0/model"}}}}' [e2e-predictor] headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'}) [e2e-predictor] retries = Retry(total=1, connect=None, read=None, redirect=None, status=None) [e2e-predictor] redirect = False, assert_same_host = False, timeout = None, pool_timeout = None [e2e-predictor] release_conn = True, chunked = False, body_pos = None, preload_content = True [e2e-predictor] decode_content = True, response_kw = {} [e2e-predictor] parsed_url = Url(scheme=None, 
auth=None, host=None, port=None, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices', query=None, fragment=None)
destination_scheme = None, conn = None, release_this_conn = True
http_tunnel_required = False
err = NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.c...a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")
clean_exit = False

[... urlopen() source echoed again by pytest for this recursive frame; identical to the listing above ...]

>           return self.urlopen(
                method,
                url,
                body,
                headers,
                retries,
                redirect,
                assert_same_host,
                timeout=timeout,
                pool_timeout=pool_timeout,
                release_conn=release_conn,
                chunked=chunked,
                body_pos=body_pos,
                preload_content=preload_content,
                decode_content=decode_content,
                **response_kw,
            )

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:871:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
method = 'POST'
url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io...memory": "128Mi"}}, "runtime": "kserve-mlserver", "storageUri": "gs://kfserving-examples/models/sklearn/1.0/model"}}}}'
headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'})
retries = Retry(total=0, connect=None, read=None, redirect=None, status=None)
redirect = False, assert_same_host = False, timeout = None, pool_timeout = None
release_conn = True, chunked = False, body_pos = None, preload_content = True
decode_content = True, response_kw = {}
parsed_url = Url(scheme=None,
auth=None, host=None, port=None, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices', query=None, fragment=None)
destination_scheme = None, conn = None, release_this_conn = True
http_tunnel_required = False
err = NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.c...a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")
clean_exit = False

[... urlopen() source echoed again by pytest for this recursive frame; identical to the listing above ...]
[e2e-predictor] # [e2e-predictor] # See issue #651 [1] for details. [e2e-predictor] # [e2e-predictor] # [1] [e2e-predictor] release_this_conn = release_conn [e2e-predictor] [e2e-predictor] http_tunnel_required = connection_requires_http_tunnel( [e2e-predictor] self.proxy, self.proxy_config, destination_scheme [e2e-predictor] ) [e2e-predictor] [e2e-predictor] # Merge the proxy headers. Only done when not using HTTP CONNECT. We [e2e-predictor] # have to copy the headers dict so we can safely change it without those [e2e-predictor] # changes being reflected in anyone else's copy. [e2e-predictor] if not http_tunnel_required: [e2e-predictor] headers = headers.copy() # type: ignore[attr-defined] [e2e-predictor] headers.update(self.proxy_headers) # type: ignore[union-attr] [e2e-predictor] [e2e-predictor] # Must keep the exception bound to a separate variable or else Python 3 [e2e-predictor] # complains about UnboundLocalError. [e2e-predictor] err = None [e2e-predictor] [e2e-predictor] # Keep track of whether we cleanly exited the except block. This [e2e-predictor] # ensures we do proper cleanup in finally. [e2e-predictor] clean_exit = False [e2e-predictor] [e2e-predictor] # Rewind body position, if needed. Record current position [e2e-predictor] # for future rewinds in the event of a redirect/retry. [e2e-predictor] body_pos = set_file_position(body, body_pos) [e2e-predictor] [e2e-predictor] try: [e2e-predictor] # Request a connection from the queue. [e2e-predictor] timeout_obj = self._get_timeout(timeout) [e2e-predictor] conn = self._get_conn(timeout=pool_timeout) [e2e-predictor] [e2e-predictor] conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] [e2e-predictor] [e2e-predictor] # Is this a closed/new connection that requires CONNECT tunnelling? 
[e2e-predictor] if self.proxy is not None and http_tunnel_required and conn.is_closed: [e2e-predictor] try: [e2e-predictor] self._prepare_proxy(conn) [e2e-predictor] except (BaseSSLError, OSError, SocketTimeout) as e: [e2e-predictor] self._raise_timeout( [e2e-predictor] err=e, url=self.proxy.url, timeout_value=conn.timeout [e2e-predictor] ) [e2e-predictor] raise [e2e-predictor] [e2e-predictor] # If we're going to release the connection in ``finally:``, then [e2e-predictor] # the response doesn't need to know about the connection. Otherwise [e2e-predictor] # it will also try to release it and we'll have a double-release [e2e-predictor] # mess. [e2e-predictor] response_conn = conn if not release_conn else None [e2e-predictor] [e2e-predictor] # Make the request on the HTTPConnection object [e2e-predictor] response = self._make_request( [e2e-predictor] conn, [e2e-predictor] method, [e2e-predictor] url, [e2e-predictor] timeout=timeout_obj, [e2e-predictor] body=body, [e2e-predictor] headers=headers, [e2e-predictor] chunked=chunked, [e2e-predictor] retries=retries, [e2e-predictor] response_conn=response_conn, [e2e-predictor] preload_content=preload_content, [e2e-predictor] decode_content=decode_content, [e2e-predictor] **response_kw, [e2e-predictor] ) [e2e-predictor] [e2e-predictor] # Everything went great! [e2e-predictor] clean_exit = True [e2e-predictor] [e2e-predictor] except EmptyPoolError: [e2e-predictor] # Didn't get a connection from the pool, no need to clean up [e2e-predictor] clean_exit = True [e2e-predictor] release_this_conn = False [e2e-predictor] raise [e2e-predictor] [e2e-predictor] except ( [e2e-predictor] TimeoutError, [e2e-predictor] HTTPException, [e2e-predictor] OSError, [e2e-predictor] ProtocolError, [e2e-predictor] BaseSSLError, [e2e-predictor] SSLError, [e2e-predictor] CertificateError, [e2e-predictor] ProxyError, [e2e-predictor] ) as e: [e2e-predictor] # Discard the connection for these exceptions. 
It will be [e2e-predictor] # replaced during the next _get_conn() call. [e2e-predictor] clean_exit = False [e2e-predictor] new_e: Exception = e [e2e-predictor] if isinstance(e, (BaseSSLError, CertificateError)): [e2e-predictor] new_e = SSLError(e) [e2e-predictor] if isinstance( [e2e-predictor] new_e, [e2e-predictor] ( [e2e-predictor] OSError, [e2e-predictor] NewConnectionError, [e2e-predictor] TimeoutError, [e2e-predictor] SSLError, [e2e-predictor] HTTPException, [e2e-predictor] ), [e2e-predictor] ) and (conn and conn.proxy and not conn.has_connected_to_proxy): [e2e-predictor] new_e = _wrap_proxy_error(new_e, conn.proxy.scheme) [e2e-predictor] elif isinstance(new_e, (OSError, HTTPException)): [e2e-predictor] new_e = ProtocolError("Connection aborted.", new_e) [e2e-predictor] [e2e-predictor] retries = retries.increment( [e2e-predictor] method, url, error=new_e, _pool=self, _stacktrace=sys.exc_info()[2] [e2e-predictor] ) [e2e-predictor] retries.sleep() [e2e-predictor] [e2e-predictor] # Keep track of the error for the retry warning. [e2e-predictor] err = e [e2e-predictor] [e2e-predictor] finally: [e2e-predictor] if not clean_exit: [e2e-predictor] # We hit some kind of exception, handled or otherwise. We need [e2e-predictor] # to throw the connection away unless explicitly told not to. [e2e-predictor] # Close the connection, set the variable to None, and make sure [e2e-predictor] # we put the None back in the pool to avoid leaking it. [e2e-predictor] if conn: [e2e-predictor] conn.close() [e2e-predictor] conn = None [e2e-predictor] release_this_conn = True [e2e-predictor] [e2e-predictor] if release_this_conn: [e2e-predictor] # Put the connection back to be reused. If the connection is [e2e-predictor] # expired then it will be None, which will get replaced with a [e2e-predictor] # fresh connection during _get_conn. 
[e2e-predictor] self._put_conn(conn) [e2e-predictor] [e2e-predictor] if not conn: [e2e-predictor] # Try again [e2e-predictor] log.warning( [e2e-predictor] "Retrying (%r) after connection broken by '%r': %s", retries, err, url [e2e-predictor] ) [e2e-predictor] > return self.urlopen( [e2e-predictor] method, [e2e-predictor] url, [e2e-predictor] body, [e2e-predictor] headers, [e2e-predictor] retries, [e2e-predictor] redirect, [e2e-predictor] assert_same_host, [e2e-predictor] timeout=timeout, [e2e-predictor] pool_timeout=pool_timeout, [e2e-predictor] release_conn=release_conn, [e2e-predictor] chunked=chunked, [e2e-predictor] body_pos=body_pos, [e2e-predictor] preload_content=preload_content, [e2e-predictor] decode_content=decode_content, [e2e-predictor] **response_kw, [e2e-predictor] ) [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:871: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] method = 'POST' [e2e-predictor] url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices' [e2e-predictor] body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io...memory": "128Mi"}}, "runtime": "kserve-mlserver", "storageUri": "gs://kfserving-examples/models/sklearn/1.0/model"}}}}' [e2e-predictor] headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'}) [e2e-predictor] retries = Retry(total=0, connect=None, read=None, redirect=None, status=None) [e2e-predictor] redirect = False, assert_same_host = False, timeout = None, pool_timeout = None [e2e-predictor] release_conn = True, chunked = False, body_pos = None, preload_content = True [e2e-predictor] decode_content = True, response_kw = {} [e2e-predictor] parsed_url = Url(scheme=None, 
auth=None, host=None, port=None, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices', query=None, fragment=None)
destination_scheme = None, conn = None, release_this_conn = True
http_tunnel_required = False, err = None, clean_exit = False

    def urlopen(  # type: ignore[override]
        self,
        method: str,
        url: str,
        body: _TYPE_BODY | None = None,
        headers: typing.Mapping[str, str] | None = None,
        retries: Retry | bool | int | None = None,
        redirect: bool = True,
        assert_same_host: bool = True,
        timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
        pool_timeout: int | None = None,
        release_conn: bool | None = None,
        chunked: bool = False,
        body_pos: _TYPE_BODY_POSITION | None = None,
        preload_content: bool = True,
        decode_content: bool = True,
        **response_kw: typing.Any,
    ) -> BaseHTTPResponse:
        """
        Get a connection from the pool and perform an HTTP request. This is the
        lowest level call for making a request, so you'll need to specify all
        the raw details.

        .. note::

            More commonly, it's appropriate to use a convenience method
            such as :meth:`request`.

        .. note::

            `release_conn` will only behave as expected if
            `preload_content=False` because we want to make
            `preload_content=False` the default behaviour someday soon without
            breaking backwards compatibility.

        :param method:
            HTTP request method (such as GET, POST, PUT, etc.)

        :param url:
            The URL to perform the request on.

        :param body:
            Data to send in the request body, either :class:`str`, :class:`bytes`,
            an iterable of :class:`str`/:class:`bytes`, or a file-like object.

        :param headers:
            Dictionary of custom headers to send, such as User-Agent,
            If-None-Match, etc. If None, pool headers are used. If provided,
            these headers completely replace any pool-specific headers.

        :param retries:
            Configure the number of retries to allow before raising a
            :class:`~urllib3.exceptions.MaxRetryError` exception.

            If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a
            :class:`~urllib3.util.retry.Retry` object for fine-grained control
            over different types of retries.
            Pass an integer number to retry connection errors that many times,
            but no other types of errors. Pass zero to never retry.

            If ``False``, then retries are disabled and any exception is raised
            immediately. Also, instead of raising a MaxRetryError on redirects,
            the redirect response will be returned.

        :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.

        :param redirect:
            If True, automatically handle redirects (status codes 301, 302,
            303, 307, 308). Each redirect counts as a retry. Disabling retries
            will disable redirect, too.

        :param assert_same_host:
            If ``True``, will make sure that the host of the pool requests is
            consistent else will raise HostChangedError. When ``False``, you can
            use the pool on an HTTP proxy and request foreign hosts.

        :param timeout:
            If specified, overrides the default timeout for this one
            request. It may be a float (in seconds) or an instance of
            :class:`urllib3.util.Timeout`.

        :param pool_timeout:
            If set and the pool is set to block=True, then this method will
            block for ``pool_timeout`` seconds and raise EmptyPoolError if no
            connection is available within the time period.

        :param bool preload_content:
            If True, the response's body will be preloaded into memory.

        :param bool decode_content:
            If True, will attempt to decode the body based on the
            'content-encoding' header.

        :param release_conn:
            If False, then the urlopen call will not release the connection
            back into the pool once a response is received (but will release if
            you read the entire contents of the response such as when
            `preload_content=True`). This is useful if you're not preloading
            the response's content immediately. You will need to call
            ``r.release_conn()`` on the response ``r`` to return the connection
            back into the pool. If None, it takes the value of ``preload_content``
            which defaults to ``True``.

        :param bool chunked:
            If True, urllib3 will send the body using chunked transfer
            encoding. Otherwise, urllib3 will send the body using the standard
            content-length form. Defaults to False.

        :param int body_pos:
            Position to seek to in file-like body in the event of a retry or
            redirect. Typically this won't need to be set because urllib3 will
            auto-populate the value when needed.
        """
        parsed_url = parse_url(url)
        destination_scheme = parsed_url.scheme

        if headers is None:
            headers = self.headers

        if not isinstance(retries, Retry):
            retries = Retry.from_int(retries, redirect=redirect, default=self.retries)

        if release_conn is None:
            release_conn = preload_content

        # Check host
        if assert_same_host and not self.is_same_host(url):
            raise HostChangedError(self, url, retries)

        # Ensure that the URL we're connecting to is properly encoded
        if url.startswith("/"):
            url = to_str(_encode_target(url))
        else:
            url = to_str(parsed_url.url)

        conn = None

        # Track whether `conn` needs to be released before
        # returning/raising/recursing. Update this variable if necessary, and
        # leave `release_conn` constant throughout the function. That way, if
        # the function recurses, the original value of `release_conn` will be
        # passed down into the recursive call, and its value will be respected.
        #
        # See issue #651 [1] for details.
        #
        # [1]
        release_this_conn = release_conn

        http_tunnel_required = connection_requires_http_tunnel(
            self.proxy, self.proxy_config, destination_scheme
        )

        # Merge the proxy headers. Only done when not using HTTP CONNECT. We
        # have to copy the headers dict so we can safely change it without those
        # changes being reflected in anyone else's copy.
        if not http_tunnel_required:
            headers = headers.copy()  # type: ignore[attr-defined]
            headers.update(self.proxy_headers)  # type: ignore[union-attr]

        # Must keep the exception bound to a separate variable or else Python 3
        # complains about UnboundLocalError.
        err = None

        # Keep track of whether we cleanly exited the except block. This
        # ensures we do proper cleanup in finally.
        clean_exit = False

        # Rewind body position, if needed. Record current position
        # for future rewinds in the event of a redirect/retry.
        body_pos = set_file_position(body, body_pos)

        try:
            # Request a connection from the queue.
            timeout_obj = self._get_timeout(timeout)
            conn = self._get_conn(timeout=pool_timeout)

            conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]

            # Is this a closed/new connection that requires CONNECT tunnelling?
            if self.proxy is not None and http_tunnel_required and conn.is_closed:
                try:
                    self._prepare_proxy(conn)
                except (BaseSSLError, OSError, SocketTimeout) as e:
                    self._raise_timeout(
                        err=e, url=self.proxy.url, timeout_value=conn.timeout
                    )
                    raise

            # If we're going to release the connection in ``finally:``, then
            # the response doesn't need to know about the connection. Otherwise
            # it will also try to release it and we'll have a double-release
            # mess.
            response_conn = conn if not release_conn else None

            # Make the request on the HTTPConnection object
            response = self._make_request(
                conn,
                method,
                url,
                timeout=timeout_obj,
                body=body,
                headers=headers,
                chunked=chunked,
                retries=retries,
                response_conn=response_conn,
                preload_content=preload_content,
                decode_content=decode_content,
                **response_kw,
            )

            # Everything went great!
            clean_exit = True

        except EmptyPoolError:
            # Didn't get a connection from the pool, no need to clean up
            clean_exit = True
            release_this_conn = False
            raise

        except (
            TimeoutError,
            HTTPException,
            OSError,
            ProtocolError,
            BaseSSLError,
            SSLError,
            CertificateError,
            ProxyError,
        ) as e:
            # Discard the connection for these exceptions. It will be
            # replaced during the next _get_conn() call.
            clean_exit = False
            new_e: Exception = e
            if isinstance(e, (BaseSSLError, CertificateError)):
                new_e = SSLError(e)
            if isinstance(
                new_e,
                (
                    OSError,
                    NewConnectionError,
                    TimeoutError,
                    SSLError,
                    HTTPException,
                ),
            ) and (conn and conn.proxy and not conn.has_connected_to_proxy):
                new_e = _wrap_proxy_error(new_e, conn.proxy.scheme)
            elif isinstance(new_e, (OSError, HTTPException)):
                new_e = ProtocolError("Connection aborted.", new_e)

>           retries = retries.increment(
                method, url, error=new_e, _pool=self, _stacktrace=sys.exc_info()[2]
            )

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:841:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = Retry(total=0, connect=None, read=None, redirect=None, status=None)
method = 'POST'
url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
response = None
error = NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.c...a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")
_pool =
_stacktrace =

    def increment(
        self,
        method: str | None = None,
        url: str | None = None,
        response: BaseHTTPResponse | None = None,
        error: Exception | None = None,
        _pool: ConnectionPool | None = None,
        _stacktrace: TracebackType | None = None,
    ) -> Self:
        """Return a new Retry object with incremented retry counters.

        :param response: A response object, or None, if the server did not
            return a response.
        :type response: :class:`~urllib3.response.BaseHTTPResponse`
        :param Exception error: An error encountered during the request, or
            None if the response was received successfully.

        :return: A new ``Retry`` object.
        """
        if self.total is False and error:
            # Disabled, indicate to re-raise the error.
            raise reraise(type(error), error, _stacktrace)

        total = self.total
        if total is not None:
            total -= 1

        connect = self.connect
        read = self.read
        redirect = self.redirect
        status_count = self.status
        other = self.other
        cause = "unknown"
        status = None
        redirect_location = None

        if error and self._is_connection_error(error):
            # Connect retry?
            if connect is False:
                raise reraise(type(error), error, _stacktrace)
            elif connect is not None:
                connect -= 1

        elif error and self._is_read_error(error):
            # Read retry?
            if read is False or method is None or not self._is_method_retryable(method):
                raise reraise(type(error), error, _stacktrace)
            elif read is not None:
                read -= 1

        elif error:
            # Other retry?
            if other is not None:
                other -= 1

        elif response and response.get_redirect_location():
            # Redirect retry?
            if redirect is not None:
                redirect -= 1
            cause = "too many redirects"
            response_redirect_location = response.get_redirect_location()
            if response_redirect_location:
                redirect_location = response_redirect_location
            status = response.status

        else:
            # Incrementing because of a server error like a 500 in
            # status_forcelist and the given method is in the allowed_methods
            cause = ResponseError.GENERIC_ERROR
            if response and response.status:
                if status_count is not None:
                    status_count -= 1
                cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
                status = response.status

        history = self.history + (
            RequestHistory(method, url, error, status, redirect_location),
        )

        new_retry = self.new(
            total=total,
            connect=connect,
            read=read,
            redirect=redirect,
            status=status_count,
            other=other,
            history=history,
        )

        if new_retry.is_exhausted():
            reason = error or ResponseError(cause)
>           raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
E           urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Max retries exceeded with url: /apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices (Caused by NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)"))

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError
------------------------------ Captured log call -------------------------------
WARNING urllib3.connectionpool:connectionpool.py:868 Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")': /apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices
WARNING urllib3.connectionpool:connectionpool.py:868 Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")': /apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices
WARNING urllib3.connectionpool:connectionpool.py:868 Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or
service not known)")': /apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices
_______________________________ test_sklearn_v2 ________________________________
[gw0] linux -- Python 3.11.13 /workspace/source/python/kserve/.venv/bin/python

self =

    def _new_conn(self) -> socket.socket:
        """Establish a socket connection and set nodelay settings on it.

        :return: New socket connection.
        """
        try:
>           sock = connection.create_connection(
                (self._dns_host, self.port),
                self.timeout,
                source_address=self.source_address,
                socket_options=self.socket_options,
            )

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connection.py:204:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

address = ('a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', 6443)
timeout = None, source_address = None, socket_options = [(6, 1, 1)]

    def create_connection(
        address: tuple[str, int],
        timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
        source_address: tuple[str, int] | None = None,
        socket_options: _TYPE_SOCKET_OPTIONS | None = None,
    ) -> socket.socket:
        """Connect to *address* and return the socket object.

        Convenience function. Connect to *address* (a 2-tuple ``(host,
        port)``) and return the socket object. Passing the optional
        *timeout* parameter will set the timeout on the socket instance
        before attempting to connect. If no *timeout* is supplied, the
        global default timeout setting returned by :func:`socket.getdefaulttimeout`
        is used. If *source_address* is set it must be a tuple of (host, port)
        for the socket to bind as a source address before making the connection.
        An host of '' or port 0 tells the OS to use the default.
        """

        host, port = address
        if host.startswith("["):
            host = host.strip("[]")
        err = None

        # Using the value from allowed_gai_family() in the context of getaddrinfo lets
        # us select whether to work with IPv4 DNS records, IPv6 records, or both.
        # The original create_connection function always returns all records.
        family = allowed_gai_family()

        try:
            host.encode("idna")
        except UnicodeError:
            raise LocationParseError(f"'{host}', label empty or too long") from None

>       for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/util/connection.py:60:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

host = 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com'
port = 6443, family =
type = , proto = 0, flags = 0

    def getaddrinfo(host, port, family=0, type=0, proto=0, flags=0):
        """Resolve host and port into list of address info entries.

        Translate the host/port argument into a sequence of 5-tuples that contain
        all the necessary arguments for creating a socket connected to that service.
        host is a domain name, a string representation of an IPv4/v6 address or
        None. port is a string service name such as 'http', a numeric port number or
        None. By passing None as the value of host and port, you can pass NULL to
        the underlying C API.

        The family, type and proto arguments can be optionally specified in order to
        narrow the list of addresses returned. Passing zero as a value for each of
        these arguments selects the full range of results.
        """
        # We override this function since we want to translate the numeric family
        # and socket type values to enum constants.
        addrlist = []
>       for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
E       socket.gaierror: [Errno -2] Name or service not known

/usr/lib64/python3.11/socket.py:974: gaierror

The above exception was the direct cause of the following exception:

self =
method = 'POST'
url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io...y": "128Mi"}}, "runtime": "kserve-sklearnserver", "storageUri": "gs://kfserving-examples/models/sklearn/1.0/model"}}}}'
headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'})
retries = Retry(total=0, connect=None, read=None, redirect=None, status=None)
redirect = False, assert_same_host = False, timeout = None, pool_timeout = None
release_conn = True, chunked = False, body_pos = None, preload_content = True
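The bottom of the exception chain above is a plain DNS failure: `socket.getaddrinfo` cannot resolve the cluster's ELB hostname, so every API request fails before a TCP connection is even attempted. As a minimal sketch, a pre-flight check like the following reproduces the same failure mode without running the whole e2e suite (the helper name is ours, not part of the test harness; substitute the API server host from your kubeconfig):

```python
import socket

def can_resolve(host: str, port: int = 443) -> bool:
    """Return True if `host` resolves via the same getaddrinfo path urllib3 uses.

    A False result corresponds to the `[Errno -2] Name or service not known`
    failure seen in the traceback above.
    """
    try:
        socket.getaddrinfo(host, port, type=socket.SOCK_STREAM)
        return True
    except socket.gaierror:
        return False

# "localhost" should resolve anywhere; the failing run's ELB hostname is what
# you would check in practice (hypothetical placeholder below).
print(can_resolve("localhost"))
```

Running this from the same pod as the tests distinguishes a cluster-side DNS problem (e.g. the ELB record not yet propagated, or a broken /etc/resolv.conf in the test pod) from a genuinely deleted load balancer.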
decode_content = True, response_kw = {}
parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices', query=None, fragment=None)
destination_scheme = None, conn = None, release_this_conn = True
http_tunnel_required = False, err = None, clean_exit = False

    def urlopen(  # type: ignore[override]
        self,
        method: str,
        url: str,
        body: _TYPE_BODY | None = None,
        headers: typing.Mapping[str, str] | None = None,
        retries: Retry | bool | int | None = None,
        redirect: bool = True,
        assert_same_host: bool = True,
        timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
        pool_timeout: int | None = None,
        release_conn: bool | None = None,
        chunked: bool = False,
        body_pos: _TYPE_BODY_POSITION | None = None,
        preload_content: bool = True,
        decode_content: bool = True,
        **response_kw: typing.Any,
    ) -> BaseHTTPResponse:
        """
        Get a connection from the pool and perform an HTTP request. This is the
        lowest level call for making a request, so you'll need to specify all
        the raw details.

        .. note::

           More commonly, it's appropriate to use a convenience method
           such as :meth:`request`.

        .. note::

           `release_conn` will only behave as expected if
           `preload_content=False` because we want to make
           `preload_content=False` the default behaviour someday soon without
           breaking backwards compatibility.

        :param method:
            HTTP request method (such as GET, POST, PUT, etc.)

        :param url:
            The URL to perform the request on.

        :param body:
            Data to send in the request body, either :class:`str`, :class:`bytes`,
            an iterable of :class:`str`/:class:`bytes`, or a file-like object.

        :param headers:
            Dictionary of custom headers to send, such as User-Agent,
            If-None-Match, etc. If None, pool headers are used. If provided,
            these headers completely replace any pool-specific headers.

        :param retries:
            Configure the number of retries to allow before raising a
            :class:`~urllib3.exceptions.MaxRetryError` exception.

            If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a
            :class:`~urllib3.util.retry.Retry` object for fine-grained control
            over different types of retries.
            Pass an integer number to retry connection errors that many times,
            but no other types of errors. Pass zero to never retry.

            If ``False``, then retries are disabled and any exception is raised
            immediately. Also, instead of raising a MaxRetryError on redirects,
            the redirect response will be returned.

        :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.

        :param redirect:
            If True, automatically handle redirects (status codes 301, 302,
            303, 307, 308). Each redirect counts as a retry. Disabling retries
            will disable redirect, too.

        :param assert_same_host:
            If ``True``, will make sure that the host of the pool requests is
            consistent else will raise HostChangedError. When ``False``, you can
            use the pool on an HTTP proxy and request foreign hosts.

        :param timeout:
            If specified, overrides the default timeout for this one
            request. It may be a float (in seconds) or an instance of
            :class:`urllib3.util.Timeout`.

        :param pool_timeout:
            If set and the pool is set to block=True, then this method will
            block for ``pool_timeout`` seconds and raise EmptyPoolError if no
            connection is available within the time period.

        :param bool preload_content:
            If True, the response's body will be preloaded into memory.

        :param bool decode_content:
            If True, will attempt to decode the body based on the
            'content-encoding' header.

        :param release_conn:
            If False, then the urlopen call will not release the connection
            back into the pool once a response is received (but will release if
            you read the entire contents of the response such as when
            `preload_content=True`). This is useful if you're not preloading
            the response's content immediately. You will need to call
            ``r.release_conn()`` on the response ``r`` to return the connection
            back into the pool. If None, it takes the value of ``preload_content``
            which defaults to ``True``.

        :param bool chunked:
            If True, urllib3 will send the body using chunked transfer
            encoding.
            Otherwise, urllib3 will send the body using the standard
            content-length form. Defaults to False.

        :param int body_pos:
            Position to seek to in file-like body in the event of a retry or
            redirect. Typically this won't need to be set because urllib3 will
            auto-populate the value when needed.
        """
        parsed_url = parse_url(url)
        destination_scheme = parsed_url.scheme

        if headers is None:
            headers = self.headers

        if not isinstance(retries, Retry):
            retries = Retry.from_int(retries, redirect=redirect, default=self.retries)

        if release_conn is None:
            release_conn = preload_content

        # Check host
        if assert_same_host and not self.is_same_host(url):
            raise HostChangedError(self, url, retries)

        # Ensure that the URL we're connecting to is properly encoded
        if url.startswith("/"):
            url = to_str(_encode_target(url))
        else:
            url = to_str(parsed_url.url)

        conn = None

        # Track whether `conn` needs to be released before
        # returning/raising/recursing. Update this variable if necessary, and
        # leave `release_conn` constant throughout the function. That way, if
        # the function recurses, the original value of `release_conn` will be
        # passed down into the recursive call, and its value will be respected.
        #
        # See issue #651 [1] for details.
        #
        # [1]
        release_this_conn = release_conn

        http_tunnel_required = connection_requires_http_tunnel(
            self.proxy, self.proxy_config, destination_scheme
        )

        # Merge the proxy headers. Only done when not using HTTP CONNECT. We
        # have to copy the headers dict so we can safely change it without those
        # changes being reflected in anyone else's copy.
        if not http_tunnel_required:
            headers = headers.copy()  # type: ignore[attr-defined]
            headers.update(self.proxy_headers)  # type: ignore[union-attr]

        # Must keep the exception bound to a separate variable or else Python 3
        # complains about UnboundLocalError.
        err = None

        # Keep track of whether we cleanly exited the except block. This
        # ensures we do proper cleanup in finally.
        clean_exit = False

        # Rewind body position, if needed. Record current position
        # for future rewinds in the event of a redirect/retry.
        body_pos = set_file_position(body, body_pos)

        try:
            # Request a connection from the queue.
            timeout_obj = self._get_timeout(timeout)
            conn = self._get_conn(timeout=pool_timeout)

            conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]

            # Is this a closed/new connection that requires CONNECT tunnelling?
            if self.proxy is not None and http_tunnel_required and conn.is_closed:
                try:
                    self._prepare_proxy(conn)
                except (BaseSSLError, OSError, SocketTimeout) as e:
                    self._raise_timeout(
                        err=e, url=self.proxy.url, timeout_value=conn.timeout
                    )
                    raise

            # If we're going to release the connection in ``finally:``, then
            # the response doesn't need to know about the connection. Otherwise
            # it will also try to release it and we'll have a double-release
            # mess.
            response_conn = conn if not release_conn else None

            # Make the request on the HTTPConnection object
>           response = self._make_request(
                conn,
                method,
                url,
                timeout=timeout_obj,
                body=body,
                headers=headers,
                chunked=chunked,
                retries=retries,
                response_conn=response_conn,
                preload_content=preload_content,
                decode_content=decode_content,
                **response_kw,
            )

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:787:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
conn =
method = 'POST'
url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io...y": "128Mi"}}, "runtime": "kserve-sklearnserver", "storageUri": "gs://kfserving-examples/models/sklearn/1.0/model"}}}}'
headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'})
retries = Retry(total=0, connect=None, read=None, redirect=None, status=None)
timeout = Timeout(connect=None, read=None, total=None), chunked = False
response_conn = None, preload_content = True, decode_content = True
enforce_content_length = True

    def _make_request(
        self,
        conn: BaseHTTPConnection,
        method: str,
        url: str,
        body: _TYPE_BODY | None = None,
        headers: typing.Mapping[str, str] | None = None,
        retries: Retry | None = None,
        timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
        chunked: bool = False,
        response_conn: BaseHTTPConnection | None = None,
        preload_content: bool = True,
        decode_content: bool = True,
        enforce_content_length: bool = True,
    ) -> BaseHTTPResponse:
        """
        Perform a request on a given urllib connection object taken from our
        pool.

        :param conn:
            a connection from one of our connection pools

        :param method:
            HTTP request method (such as GET, POST, PUT, etc.)

        :param url:
            The URL to perform the request on.

        :param body:
            Data to send in the request body, either :class:`str`, :class:`bytes`,
            an iterable of :class:`str`/:class:`bytes`, or a file-like object.

        :param headers:
            Dictionary of custom headers to send, such as User-Agent,
            If-None-Match, etc. If None, pool headers are used. If provided,
            these headers completely replace any pool-specific headers.

        :param retries:
            Configure the number of retries to allow before raising a
            :class:`~urllib3.exceptions.MaxRetryError` exception.

            Pass ``None`` to retry until you receive a response. Pass a
            :class:`~urllib3.util.retry.Retry` object for fine-grained control
            over different types of retries.
            Pass an integer number to retry connection errors that many times,
            but no other types of errors. Pass zero to never retry.

            If ``False``, then retries are disabled and any exception is raised
            immediately. Also, instead of raising a MaxRetryError on redirects,
            the redirect response will be returned.

        :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.

        :param timeout:
            If specified, overrides the default timeout for this one
            request. It may be a float (in seconds) or an instance of
            :class:`urllib3.util.Timeout`.

        :param chunked:
            If True, urllib3 will send the body using chunked transfer
            encoding. Otherwise, urllib3 will send the body using the standard
            content-length form. Defaults to False.

        :param response_conn:
            Set this to ``None`` if you will handle releasing the connection or
            set the connection to have the response release it.

        :param preload_content:
            If True, the response's body will be preloaded during construction.

        :param decode_content:
            If True, will attempt to decode the body based on the
            'content-encoding' header.

        :param enforce_content_length:
            Enforce content length checking. Body returned by server must match
            value of Content-Length header, if present. Otherwise, raise error.
        """
        self.num_requests += 1

        timeout_obj = self._get_timeout(timeout)
        timeout_obj.start_connect()
        conn.timeout = Timeout.resolve_default_timeout(timeout_obj.connect_timeout)

        try:
            # Trigger any extra validation we need to do.
            try:
                self._validate_conn(conn)
            except (SocketTimeout, BaseSSLError) as e:
                self._raise_timeout(err=e, url=url, timeout_value=conn.timeout)
                raise

            # _validate_conn() starts the connection to an HTTPS proxy
            # so we need to wrap errors with 'ProxyError' here too.
        except (
            OSError,
            NewConnectionError,
            TimeoutError,
            BaseSSLError,
            CertificateError,
            SSLError,
        ) as e:
            new_e: Exception = e
            if isinstance(e, (BaseSSLError, CertificateError)):
                new_e = SSLError(e)
            # If the connection didn't successfully connect to it's proxy
            # then there
            if isinstance(
                new_e, (OSError, NewConnectionError, TimeoutError, SSLError)
            ) and (conn and conn.proxy and not conn.has_connected_to_proxy):
                new_e = _wrap_proxy_error(new_e, conn.proxy.scheme)
>           raise new_e

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:488:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
conn =
method = 'POST'
url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io...y": "128Mi"}}, "runtime": "kserve-sklearnserver", "storageUri": "gs://kfserving-examples/models/sklearn/1.0/model"}}}}'
headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'})
retries = Retry(total=0, connect=None, read=None, redirect=None, status=None)
timeout = Timeout(connect=None, read=None, total=None), chunked = False
response_conn = None, preload_content = True, decode_content = True
enforce_content_length = True

    def _make_request(
        self,
        conn: BaseHTTPConnection,
        method: str,
        url: str,
        body: _TYPE_BODY | None = None,
        headers: typing.Mapping[str, str] | None = None,
        retries: Retry | None = None,
        timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
        chunked: bool = False,
        response_conn: BaseHTTPConnection | None = None,
        preload_content: bool = True,
        decode_content: bool = True,
        enforce_content_length: bool = True,
    ) -> BaseHTTPResponse:
        """
        Perform a request on a given urllib connection object taken from our
        pool.

        :param conn:
            a connection from one of our connection pools

        :param method:
            HTTP request method (such as GET, POST, PUT, etc.)

        :param url:
            The URL to perform the request on.

        :param body:
            Data to send in the request body, either :class:`str`, :class:`bytes`,
            an iterable of :class:`str`/:class:`bytes`, or a file-like object.

        :param headers:
            Dictionary of custom headers to send, such as User-Agent,
            If-None-Match, etc. If None, pool headers are used. If provided,
            these headers completely replace any pool-specific headers.

        :param retries:
            Configure the number of retries to allow before raising a
            :class:`~urllib3.exceptions.MaxRetryError` exception.

            Pass ``None`` to retry until you receive a response. Pass a
            :class:`~urllib3.util.retry.Retry` object for fine-grained control
            over different types of retries.
            Pass an integer number to retry connection errors that many times,
            but no other types of errors. Pass zero to never retry.

            If ``False``, then retries are disabled and any exception is raised
            immediately. Also, instead of raising a MaxRetryError on redirects,
            the redirect response will be returned.

        :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.

        :param timeout:
            If specified, overrides the default timeout for this one
            request. It may be a float (in seconds) or an instance of
            :class:`urllib3.util.Timeout`.

        :param chunked:
            If True, urllib3 will send the body using chunked transfer
            encoding. Otherwise, urllib3 will send the body using the standard
            content-length form. Defaults to False.

        :param response_conn:
            Set this to ``None`` if you will handle releasing the connection or
            set the connection to have the response release it.

        :param preload_content:
            If True, the response's body will be preloaded during construction.

        :param decode_content:
            If True, will attempt to decode the body based on the
            'content-encoding' header.

        :param enforce_content_length:
            Enforce content length checking. Body returned by server must match
            value of Content-Length header, if present. Otherwise, raise error.
        """
        self.num_requests += 1

        timeout_obj = self._get_timeout(timeout)
        timeout_obj.start_connect()
        conn.timeout = Timeout.resolve_default_timeout(timeout_obj.connect_timeout)

        try:
            # Trigger any extra validation we need to do.
            try:
>               self._validate_conn(conn)

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:464:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
conn =

    def _validate_conn(self, conn: BaseHTTPConnection) -> None:
        """
        Called right before a request is made, after the socket is created.
        """
        super()._validate_conn(conn)

        # Force connect early to allow us to validate the connection.
        if conn.is_closed:
>           conn.connect()

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:1093:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =

    def connect(self) -> None:
        # Today we don't need to be doing this step before the /actual/ socket
        # connection, however in the future we'll need to decide whether to
        # create a new socket or re-use an existing "shared" socket as a part
        # of the HTTP/2 handshake dance.
        if self._tunnel_host is not None and self._tunnel_port is not None:
            probe_http2_host = self._tunnel_host
            probe_http2_port = self._tunnel_port
        else:
            probe_http2_host = self.host
            probe_http2_port = self.port

        # Check if the target origin supports HTTP/2.
        # If the value comes back as 'None' it means that the current thread
        # is probing for HTTP/2 support. Otherwise, we're waiting for another
        # probe to complete, or we get a value right away.
        target_supports_http2: bool | None
        if "h2" in ssl_.ALPN_PROTOCOLS:
            target_supports_http2 = http2_probe.acquire_and_get(
                host=probe_http2_host, port=probe_http2_port
            )
        else:
            # If HTTP/2 isn't going to be offered it doesn't matter if
            # the target supports HTTP/2. Don't want to make a probe.
            target_supports_http2 = False

        if self._connect_callback is not None:
            self._connect_callback(
                "before connect",
                thread_id=threading.get_ident(),
                target_supports_http2=target_supports_http2,
            )

        try:
            sock: socket.socket | ssl.SSLSocket
>           self.sock = sock = self._new_conn()

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connection.py:759:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =

    def _new_conn(self) -> socket.socket:
        """Establish a socket connection and set nodelay settings on it.

        :return: New socket connection.
        """
        try:
            sock = connection.create_connection(
                (self._dns_host, self.port),
                self.timeout,
                source_address=self.source_address,
                socket_options=self.socket_options,
            )
        except socket.gaierror as e:
>           raise NameResolutionError(self.host, self, e) from e
E           urllib3.exceptions.NameResolutionError: HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connection.py:211: NameResolutionError

The above exception was the direct cause of the following exception:

rest_v2_client =

    @pytest.mark.predictor
    @pytest.mark.asyncio(scope="session")
    async def test_sklearn_v2(rest_v2_client):
        service_name = "isvc-sklearn-v2"

        predictor = V1beta1PredictorSpec(
            min_replicas=1,
            model=V1beta1ModelSpec(
                model_format=V1beta1ModelFormat(
                    name="sklearn",
                ),
                runtime="kserve-sklearnserver",
                storage_uri="gs://kfserving-examples/models/sklearn/1.0/model",
                resources=V1ResourceRequirements(
                    requests={"cpu": "50m", "memory": "128Mi"},
                    limits={"cpu": "100m", "memory": "512Mi"},
                ),
            ),
        )

        isvc = V1beta1InferenceService(
            api_version=constants.KSERVE_V1BETA1,
            kind=constants.KSERVE_KIND_INFERENCESERVICE,
            metadata=client.V1ObjectMeta(
                name=service_name,
                namespace=KSERVE_TEST_NAMESPACE,
                labels={
                    constants.KSERVE_LABEL_NETWORKING_VISIBILITY: constants.KSERVE_LABEL_NETWORKING_VISIBILITY_EXPOSED,
                },
            ),
            spec=V1beta1InferenceServiceSpec(predictor=predictor),
        )

        kserve_client = KServeClient(
            config_file=os.environ.get("KUBECONFIG", "~/.kube/config")
        )
>       kserve_client.create(isvc)

predictor/test_sklearn.py:289:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
inferenceservice = {'api_version': 'serving.kserve.io/v1beta1',
 'kind': 'InferenceService',
 'metadata': {'annotations': None,
 ...
 'worker_spec': None,
 'xgboost': None},
 'transformer': None},
 'status': None}
namespace = 'kserve-ci-e2e-test', watch = False, timeout_seconds = 600

    def create(
        self, inferenceservice, namespace=None, watch=False, timeout_seconds=600
    ):  # pylint:disable=inconsistent-return-statements
        """
        Create the inference service
        :param inferenceservice: inference service object
        :param namespace: defaults to current or default namespace
        :param watch: True to watch the created service until timeout elapsed or status is ready
        :param timeout_seconds: timeout seconds for watch, default to 600s
        :return: created inference service
        """

        version = inferenceservice.api_version.split("/")[1]

        if namespace is None:
            namespace = utils.get_isvc_namespace(inferenceservice)

        try:
>           outputs = self.api_instance.create_namespaced_custom_object(
                constants.KSERVE_GROUP,
                version,
                namespace,
                constants.KSERVE_PLURAL_INFERENCESERVICE,
                inferenceservice,
            )

../../python/kserve/kserve/api/kserve_client.py:145:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
group = 'serving.kserve.io', version = 'v1beta1'
namespace = 'kserve-ci-e2e-test', plural = 'inferenceservices'
body = {'api_version': 'serving.kserve.io/v1beta1',
 'kind': 'InferenceService',
 'metadata': {'annotations': None,
 ...
 'worker_spec': None,
 'xgboost': None},
 'transformer': None},
 'status': None}
kwargs = {'_return_http_data_only': True}

    def create_namespaced_custom_object(self, group, version, namespace, plural, body, **kwargs):  # noqa: E501
        """create_namespaced_custom_object  # noqa: E501

        Creates a namespace scoped Custom object  # noqa: E501
        This method makes a synchronous HTTP request by default.
To make an [e2e-predictor] asynchronous HTTP request, please pass async_req=True [e2e-predictor] >>> thread = api.create_namespaced_custom_object(group, version, namespace, plural, body, async_req=True) [e2e-predictor] >>> result = thread.get() [e2e-predictor] [e2e-predictor] :param async_req bool: execute request asynchronously [e2e-predictor] :param str group: The custom resource's group name (required) [e2e-predictor] :param str version: The custom resource's version (required) [e2e-predictor] :param str namespace: The custom resource's namespace (required) [e2e-predictor] :param str plural: The custom resource's plural name. For TPRs this would be lowercase plural kind. (required) [e2e-predictor] :param object body: The JSON schema of the Resource to create. (required) [e2e-predictor] :param str pretty: If 'true', then the output is pretty printed. [e2e-predictor] :param str dry_run: When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed [e2e-predictor] :param str field_manager: fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint. [e2e-predictor] :param str field_validation: fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. 
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. (optional) [e2e-predictor] :param _preload_content: if False, the urllib3.HTTPResponse object will [e2e-predictor] be returned without reading/decoding response [e2e-predictor] data. Default is True. [e2e-predictor] :param _request_timeout: timeout setting for this request. If one [e2e-predictor] number provided, it will be total request [e2e-predictor] timeout. It can also be a pair (tuple) of [e2e-predictor] (connection, read) timeouts. [e2e-predictor] :return: object [e2e-predictor] If the method is called asynchronously, [e2e-predictor] returns the request thread. [e2e-predictor] """ [e2e-predictor] kwargs['_return_http_data_only'] = True [e2e-predictor] > return self.create_namespaced_custom_object_with_http_info(group, version, namespace, plural, body, **kwargs) # noqa: E501 [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api/custom_objects_api.py:231: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] group = 'serving.kserve.io', version = 'v1beta1' [e2e-predictor] namespace = 'kserve-ci-e2e-test', plural = 'inferenceservices' [e2e-predictor] body = {'api_version': 'serving.kserve.io/v1beta1', [e2e-predictor] 'kind': 'InferenceService', [e2e-predictor] 'metadata': {'annotations': None, [e2e-predictor] ... 
'worker_spec': None, [e2e-predictor] 'xgboost': None}, [e2e-predictor] 'transformer': None}, [e2e-predictor] 'status': None} [e2e-predictor] kwargs = {'_return_http_data_only': True} [e2e-predictor] local_var_params = {'_return_http_data_only': True, 'all_params': ['group', 'version', 'namespace', 'plural', 'body', 'pretty', ...], 'au... 'worker_spec': None, [e2e-predictor] 'xgboost': None}, [e2e-predictor] 'transformer': None}, [e2e-predictor] 'status': None}, ...} [e2e-predictor] all_params = ['group', 'version', 'namespace', 'plural', 'body', 'pretty', ...] [e2e-predictor] key = '_return_http_data_only', val = True, collection_formats = {} [e2e-predictor] path_params = {'group': 'serving.kserve.io', 'namespace': 'kserve-ci-e2e-test', 'plural': 'inferenceservices', 'version': 'v1beta1'} [e2e-predictor] query_params = [] [e2e-predictor] [e2e-predictor] def create_namespaced_custom_object_with_http_info(self, group, version, namespace, plural, body, **kwargs): # noqa: E501 [e2e-predictor] """create_namespaced_custom_object # noqa: E501 [e2e-predictor] [e2e-predictor] Creates a namespace scoped Custom object # noqa: E501 [e2e-predictor] This method makes a synchronous HTTP request by default. To make an [e2e-predictor] asynchronous HTTP request, please pass async_req=True [e2e-predictor] >>> thread = api.create_namespaced_custom_object_with_http_info(group, version, namespace, plural, body, async_req=True) [e2e-predictor] >>> result = thread.get() [e2e-predictor] [e2e-predictor] :param async_req bool: execute request asynchronously [e2e-predictor] :param str group: The custom resource's group name (required) [e2e-predictor] :param str version: The custom resource's version (required) [e2e-predictor] :param str namespace: The custom resource's namespace (required) [e2e-predictor] :param str plural: The custom resource's plural name. For TPRs this would be lowercase plural kind. (required) [e2e-predictor] :param object body: The JSON schema of the Resource to create. 
(required) [e2e-predictor] :param str pretty: If 'true', then the output is pretty printed. [e2e-predictor] :param str dry_run: When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed [e2e-predictor] :param str field_manager: fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint. [e2e-predictor] :param str field_validation: fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. (optional) [e2e-predictor] :param _return_http_data_only: response data without head status code [e2e-predictor] and headers [e2e-predictor] :param _preload_content: if False, the urllib3.HTTPResponse object will [e2e-predictor] be returned without reading/decoding response [e2e-predictor] data. Default is True. 
[e2e-predictor] :param _request_timeout: timeout setting for this request. If one [e2e-predictor] number provided, it will be total request [e2e-predictor] timeout. It can also be a pair (tuple) of [e2e-predictor] (connection, read) timeouts. [e2e-predictor] :return: tuple(object, status_code(int), headers(HTTPHeaderDict)) [e2e-predictor] If the method is called asynchronously, [e2e-predictor] returns the request thread. [e2e-predictor] """ [e2e-predictor] [e2e-predictor] local_var_params = locals() [e2e-predictor] [e2e-predictor] all_params = [ [e2e-predictor] 'group', [e2e-predictor] 'version', [e2e-predictor] 'namespace', [e2e-predictor] 'plural', [e2e-predictor] 'body', [e2e-predictor] 'pretty', [e2e-predictor] 'dry_run', [e2e-predictor] 'field_manager', [e2e-predictor] 'field_validation' [e2e-predictor] ] [e2e-predictor] all_params.extend( [e2e-predictor] [ [e2e-predictor] 'async_req', [e2e-predictor] '_return_http_data_only', [e2e-predictor] '_preload_content', [e2e-predictor] '_request_timeout' [e2e-predictor] ] [e2e-predictor] ) [e2e-predictor] [e2e-predictor] for key, val in six.iteritems(local_var_params['kwargs']): [e2e-predictor] if key not in all_params: [e2e-predictor] raise ApiTypeError( [e2e-predictor] "Got an unexpected keyword argument '%s'" [e2e-predictor] " to method create_namespaced_custom_object" % key [e2e-predictor] ) [e2e-predictor] local_var_params[key] = val [e2e-predictor] del local_var_params['kwargs'] [e2e-predictor] # verify the required parameter 'group' is set [e2e-predictor] if self.api_client.client_side_validation and ('group' not in local_var_params or # noqa: E501 [e2e-predictor] local_var_params['group'] is None): # noqa: E501 [e2e-predictor] raise ApiValueError("Missing the required parameter `group` when calling `create_namespaced_custom_object`") # noqa: E501 [e2e-predictor] # verify the required parameter 'version' is set [e2e-predictor] if self.api_client.client_side_validation and ('version' not in local_var_params or # 
noqa: E501 [e2e-predictor] local_var_params['version'] is None): # noqa: E501 [e2e-predictor] raise ApiValueError("Missing the required parameter `version` when calling `create_namespaced_custom_object`") # noqa: E501 [e2e-predictor] # verify the required parameter 'namespace' is set [e2e-predictor] if self.api_client.client_side_validation and ('namespace' not in local_var_params or # noqa: E501 [e2e-predictor] local_var_params['namespace'] is None): # noqa: E501 [e2e-predictor] raise ApiValueError("Missing the required parameter `namespace` when calling `create_namespaced_custom_object`") # noqa: E501 [e2e-predictor] # verify the required parameter 'plural' is set [e2e-predictor] if self.api_client.client_side_validation and ('plural' not in local_var_params or # noqa: E501 [e2e-predictor] local_var_params['plural'] is None): # noqa: E501 [e2e-predictor] raise ApiValueError("Missing the required parameter `plural` when calling `create_namespaced_custom_object`") # noqa: E501 [e2e-predictor] # verify the required parameter 'body' is set [e2e-predictor] if self.api_client.client_side_validation and ('body' not in local_var_params or # noqa: E501 [e2e-predictor] local_var_params['body'] is None): # noqa: E501 [e2e-predictor] raise ApiValueError("Missing the required parameter `body` when calling `create_namespaced_custom_object`") # noqa: E501 [e2e-predictor] [e2e-predictor] collection_formats = {} [e2e-predictor] [e2e-predictor] path_params = {} [e2e-predictor] if 'group' in local_var_params: [e2e-predictor] path_params['group'] = local_var_params['group'] # noqa: E501 [e2e-predictor] if 'version' in local_var_params: [e2e-predictor] path_params['version'] = local_var_params['version'] # noqa: E501 [e2e-predictor] if 'namespace' in local_var_params: [e2e-predictor] path_params['namespace'] = local_var_params['namespace'] # noqa: E501 [e2e-predictor] if 'plural' in local_var_params: [e2e-predictor] path_params['plural'] = local_var_params['plural'] # noqa: E501 
[e2e-predictor] [e2e-predictor] query_params = [] [e2e-predictor] if 'pretty' in local_var_params and local_var_params['pretty'] is not None: # noqa: E501 [e2e-predictor] query_params.append(('pretty', local_var_params['pretty'])) # noqa: E501 [e2e-predictor] if 'dry_run' in local_var_params and local_var_params['dry_run'] is not None: # noqa: E501 [e2e-predictor] query_params.append(('dryRun', local_var_params['dry_run'])) # noqa: E501 [e2e-predictor] if 'field_manager' in local_var_params and local_var_params['field_manager'] is not None: # noqa: E501 [e2e-predictor] query_params.append(('fieldManager', local_var_params['field_manager'])) # noqa: E501 [e2e-predictor] if 'field_validation' in local_var_params and local_var_params['field_validation'] is not None: # noqa: E501 [e2e-predictor] query_params.append(('fieldValidation', local_var_params['field_validation'])) # noqa: E501 [e2e-predictor] [e2e-predictor] header_params = {} [e2e-predictor] [e2e-predictor] form_params = [] [e2e-predictor] local_var_files = {} [e2e-predictor] [e2e-predictor] body_params = None [e2e-predictor] if 'body' in local_var_params: [e2e-predictor] body_params = local_var_params['body'] [e2e-predictor] # HTTP header `Accept` [e2e-predictor] header_params['Accept'] = self.api_client.select_header_accept( [e2e-predictor] ['application/json']) # noqa: E501 [e2e-predictor] [e2e-predictor] # Authentication setting [e2e-predictor] auth_settings = ['BearerToken'] # noqa: E501 [e2e-predictor] [e2e-predictor] > return self.api_client.call_api( [e2e-predictor] '/apis/{group}/{version}/namespaces/{namespace}/{plural}', 'POST', [e2e-predictor] path_params, [e2e-predictor] query_params, [e2e-predictor] header_params, [e2e-predictor] body=body_params, [e2e-predictor] post_params=form_params, [e2e-predictor] files=local_var_files, [e2e-predictor] response_type='object', # noqa: E501 [e2e-predictor] auth_settings=auth_settings, [e2e-predictor] async_req=local_var_params.get('async_req'), 
[e2e-predictor] _return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501 [e2e-predictor] _preload_content=local_var_params.get('_preload_content', True), [e2e-predictor] _request_timeout=local_var_params.get('_request_timeout'), [e2e-predictor] collection_formats=collection_formats) [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api/custom_objects_api.py:354: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] resource_path = '/apis/{group}/{version}/namespaces/{namespace}/{plural}' [e2e-predictor] method = 'POST' [e2e-predictor] path_params = {'group': 'serving.kserve.io', 'namespace': 'kserve-ci-e2e-test', 'plural': 'inferenceservices', 'version': 'v1beta1'} [e2e-predictor] query_params = [] [e2e-predictor] header_params = {'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'} [e2e-predictor] body = {'api_version': 'serving.kserve.io/v1beta1', [e2e-predictor] 'kind': 'InferenceService', [e2e-predictor] 'metadata': {'annotations': None, [e2e-predictor] ... 
'worker_spec': None, [e2e-predictor] 'xgboost': None}, [e2e-predictor] 'transformer': None}, [e2e-predictor] 'status': None} [e2e-predictor] post_params = [], files = {}, response_type = 'object' [e2e-predictor] auth_settings = ['BearerToken'], async_req = None, _return_http_data_only = True [e2e-predictor] collection_formats = {}, _preload_content = True, _request_timeout = None [e2e-predictor] _host = None [e2e-predictor] [e2e-predictor] def call_api(self, resource_path, method, [e2e-predictor] path_params=None, query_params=None, header_params=None, [e2e-predictor] body=None, post_params=None, files=None, [e2e-predictor] response_type=None, auth_settings=None, async_req=None, [e2e-predictor] _return_http_data_only=None, collection_formats=None, [e2e-predictor] _preload_content=True, _request_timeout=None, _host=None): [e2e-predictor] """Makes the HTTP request (synchronous) and returns deserialized data. [e2e-predictor] [e2e-predictor] To make an async_req request, set the async_req parameter. [e2e-predictor] [e2e-predictor] :param resource_path: Path to method endpoint. [e2e-predictor] :param method: Method to call. [e2e-predictor] :param path_params: Path parameters in the url. [e2e-predictor] :param query_params: Query parameters in the url. [e2e-predictor] :param header_params: Header parameters to be [e2e-predictor] placed in the request header. [e2e-predictor] :param body: Request body. [e2e-predictor] :param post_params dict: Request post form parameters, [e2e-predictor] for `application/x-www-form-urlencoded`, `multipart/form-data`. [e2e-predictor] :param auth_settings list: Auth Settings names for the request. [e2e-predictor] :param response: Response data type. [e2e-predictor] :param files dict: key -> filename, value -> filepath, [e2e-predictor] for `multipart/form-data`. 
[e2e-predictor] :param async_req bool: execute request asynchronously [e2e-predictor] :param _return_http_data_only: response data without head status code [e2e-predictor] and headers [e2e-predictor] :param collection_formats: dict of collection formats for path, query, [e2e-predictor] header, and post parameters. [e2e-predictor] :param _preload_content: if False, the urllib3.HTTPResponse object will [e2e-predictor] be returned without reading/decoding response [e2e-predictor] data. Default is True. [e2e-predictor] :param _request_timeout: timeout setting for this request. If one [e2e-predictor] number provided, it will be total request [e2e-predictor] timeout. It can also be a pair (tuple) of [e2e-predictor] (connection, read) timeouts. [e2e-predictor] :return: [e2e-predictor] If async_req parameter is True, [e2e-predictor] the request will be called asynchronously. [e2e-predictor] The method will return the request thread. [e2e-predictor] If parameter async_req is False or missing, [e2e-predictor] then the method will return the response directly. 
[e2e-predictor] """ [e2e-predictor] if not async_req: [e2e-predictor] > return self.__call_api(resource_path, method, [e2e-predictor] path_params, query_params, header_params, [e2e-predictor] body, post_params, files, [e2e-predictor] response_type, auth_settings, [e2e-predictor] _return_http_data_only, collection_formats, [e2e-predictor] _preload_content, _request_timeout, _host) [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api_client.py:348: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] resource_path = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices' [e2e-predictor] method = 'POST' [e2e-predictor] path_params = [('group', 'serving.kserve.io'), ('version', 'v1beta1'), ('namespace', 'kserve-ci-e2e-test'), ('plural', 'inferenceservices')] [e2e-predictor] query_params = [] [e2e-predictor] header_params = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'} [e2e-predictor] body = {'apiVersion': 'serving.kserve.io/v1beta1', 'kind': 'InferenceService', 'metadata': {'labels': {'networking.kserve.io/... 
'100m', 'memory': '512Mi'}, 'requests': {'cpu': '50m', 'memory': '128Mi'}}, 'runtime': 'kserve-sklearnserver', ...}}}} [e2e-predictor] post_params = [], files = {}, response_type = 'object' [e2e-predictor] auth_settings = ['BearerToken'], _return_http_data_only = True [e2e-predictor] collection_formats = {}, _preload_content = True, _request_timeout = None [e2e-predictor] _host = None [e2e-predictor] [e2e-predictor] def __call_api( [e2e-predictor] self, resource_path, method, path_params=None, [e2e-predictor] query_params=None, header_params=None, body=None, post_params=None, [e2e-predictor] files=None, response_type=None, auth_settings=None, [e2e-predictor] _return_http_data_only=None, collection_formats=None, [e2e-predictor] _preload_content=True, _request_timeout=None, _host=None): [e2e-predictor] [e2e-predictor] config = self.configuration [e2e-predictor] [e2e-predictor] # header parameters [e2e-predictor] header_params = header_params or {} [e2e-predictor] header_params.update(self.default_headers) [e2e-predictor] if self.cookie: [e2e-predictor] header_params['Cookie'] = self.cookie [e2e-predictor] if header_params: [e2e-predictor] header_params = self.sanitize_for_serialization(header_params) [e2e-predictor] header_params = dict(self.parameters_to_tuples(header_params, [e2e-predictor] collection_formats)) [e2e-predictor] [e2e-predictor] # path parameters [e2e-predictor] if path_params: [e2e-predictor] path_params = self.sanitize_for_serialization(path_params) [e2e-predictor] path_params = self.parameters_to_tuples(path_params, [e2e-predictor] collection_formats) [e2e-predictor] for k, v in path_params: [e2e-predictor] # specified safe chars, encode everything [e2e-predictor] resource_path = resource_path.replace( [e2e-predictor] '{%s}' % k, [e2e-predictor] quote(str(v), safe=config.safe_chars_for_path_param) [e2e-predictor] ) [e2e-predictor] [e2e-predictor] # query parameters [e2e-predictor] if query_params: [e2e-predictor] query_params = 
self.sanitize_for_serialization(query_params) [e2e-predictor] query_params = self.parameters_to_tuples(query_params, [e2e-predictor] collection_formats) [e2e-predictor] [e2e-predictor] # post parameters [e2e-predictor] if post_params or files: [e2e-predictor] post_params = post_params if post_params else [] [e2e-predictor] post_params = self.sanitize_for_serialization(post_params) [e2e-predictor] post_params = self.parameters_to_tuples(post_params, [e2e-predictor] collection_formats) [e2e-predictor] post_params.extend(self.files_parameters(files)) [e2e-predictor] [e2e-predictor] # auth setting [e2e-predictor] self.update_params_for_auth(header_params, query_params, auth_settings) [e2e-predictor] [e2e-predictor] # body [e2e-predictor] if body: [e2e-predictor] body = self.sanitize_for_serialization(body) [e2e-predictor] [e2e-predictor] # request url [e2e-predictor] if _host is None: [e2e-predictor] url = self.configuration.host + resource_path [e2e-predictor] else: [e2e-predictor] # use server/host defined in path or operation instead [e2e-predictor] url = _host + resource_path [e2e-predictor] [e2e-predictor] # perform request and return response [e2e-predictor] > response_data = self.request( [e2e-predictor] method, url, query_params=query_params, headers=header_params, [e2e-predictor] post_params=post_params, body=body, [e2e-predictor] _preload_content=_preload_content, [e2e-predictor] _request_timeout=_request_timeout) [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api_client.py:180: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] method = 'POST' [e2e-predictor] url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices' [e2e-predictor] query_params = [] [e2e-predictor] headers = 
{'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'} [e2e-predictor] post_params = [] [e2e-predictor] body = {'apiVersion': 'serving.kserve.io/v1beta1', 'kind': 'InferenceService', 'metadata': {'labels': {'networking.kserve.io/... '100m', 'memory': '512Mi'}, 'requests': {'cpu': '50m', 'memory': '128Mi'}}, 'runtime': 'kserve-sklearnserver', ...}}}} [e2e-predictor] _preload_content = True, _request_timeout = None [e2e-predictor] [e2e-predictor] def request(self, method, url, query_params=None, headers=None, [e2e-predictor] post_params=None, body=None, _preload_content=True, [e2e-predictor] _request_timeout=None): [e2e-predictor] """Makes the HTTP request using RESTClient.""" [e2e-predictor] if method == "GET": [e2e-predictor] return self.rest_client.GET(url, [e2e-predictor] query_params=query_params, [e2e-predictor] _preload_content=_preload_content, [e2e-predictor] _request_timeout=_request_timeout, [e2e-predictor] headers=headers) [e2e-predictor] elif method == "HEAD": [e2e-predictor] return self.rest_client.HEAD(url, [e2e-predictor] query_params=query_params, [e2e-predictor] _preload_content=_preload_content, [e2e-predictor] _request_timeout=_request_timeout, [e2e-predictor] headers=headers) [e2e-predictor] elif method == "OPTIONS": [e2e-predictor] return self.rest_client.OPTIONS(url, [e2e-predictor] query_params=query_params, [e2e-predictor] headers=headers, [e2e-predictor] _preload_content=_preload_content, [e2e-predictor] _request_timeout=_request_timeout) [e2e-predictor] elif method == "POST": [e2e-predictor] > return self.rest_client.POST(url, [e2e-predictor] query_params=query_params, [e2e-predictor] headers=headers, [e2e-predictor] post_params=post_params, [e2e-predictor] _preload_content=_preload_content, [e2e-predictor] _request_timeout=_request_timeout, [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api_client.py:391: [e2e-predictor] 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices' [e2e-predictor] headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'} [e2e-predictor] query_params = [], post_params = [] [e2e-predictor] body = {'apiVersion': 'serving.kserve.io/v1beta1', 'kind': 'InferenceService', 'metadata': {'labels': {'networking.kserve.io/... '100m', 'memory': '512Mi'}, 'requests': {'cpu': '50m', 'memory': '128Mi'}}, 'runtime': 'kserve-sklearnserver', ...}}}} [e2e-predictor] _preload_content = True, _request_timeout = None [e2e-predictor] [e2e-predictor] def POST(self, url, headers=None, query_params=None, post_params=None, [e2e-predictor] body=None, _preload_content=True, _request_timeout=None): [e2e-predictor] > return self.request("POST", url, [e2e-predictor] headers=headers, [e2e-predictor] query_params=query_params, [e2e-predictor] post_params=post_params, [e2e-predictor] _preload_content=_preload_content, [e2e-predictor] _request_timeout=_request_timeout, [e2e-predictor] body=body) [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/rest.py:279: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] method = 'POST' [e2e-predictor] url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices' [e2e-predictor] query_params = [] [e2e-predictor] headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'} [e2e-predictor] body = 
{'apiVersion': 'serving.kserve.io/v1beta1', 'kind': 'InferenceService', 'metadata': {'labels': {'networking.kserve.io/... '100m', 'memory': '512Mi'}, 'requests': {'cpu': '50m', 'memory': '128Mi'}}, 'runtime': 'kserve-sklearnserver', ...}}}} [e2e-predictor] post_params = {}, _preload_content = True, _request_timeout = None [e2e-predictor] [e2e-predictor] def request(self, method, url, query_params=None, headers=None, [e2e-predictor] body=None, post_params=None, _preload_content=True, [e2e-predictor] _request_timeout=None): [e2e-predictor] """Perform requests. [e2e-predictor] [e2e-predictor] :param method: http request method [e2e-predictor] :param url: http request url [e2e-predictor] :param query_params: query parameters in the url [e2e-predictor] :param headers: http request headers [e2e-predictor] :param body: request json body, for `application/json` [e2e-predictor] :param post_params: request post parameters, [e2e-predictor] `application/x-www-form-urlencoded` [e2e-predictor] and `multipart/form-data` [e2e-predictor] :param _preload_content: if False, the urllib3.HTTPResponse object will [e2e-predictor] be returned without reading/decoding response [e2e-predictor] data. Default is True. [e2e-predictor] :param _request_timeout: timeout setting for this request. If one [e2e-predictor] number provided, it will be total request [e2e-predictor] timeout. It can also be a pair (tuple) of [e2e-predictor] (connection, read) timeouts. [e2e-predictor] """ [e2e-predictor] method = method.upper() [e2e-predictor] assert method in ['GET', 'HEAD', 'DELETE', 'POST', 'PUT', [e2e-predictor] 'PATCH', 'OPTIONS'] [e2e-predictor] [e2e-predictor] if post_params and body: [e2e-predictor] raise ApiValueError( [e2e-predictor] "body parameter cannot be used with post_params parameter." 
[e2e-predictor] ) [e2e-predictor] [e2e-predictor] post_params = post_params or {} [e2e-predictor] headers = headers or {} [e2e-predictor] [e2e-predictor] timeout = None [e2e-predictor] if _request_timeout: [e2e-predictor] if isinstance(_request_timeout, (int, ) if six.PY3 else (int, long)): # noqa: E501,F821 [e2e-predictor] timeout = urllib3.Timeout(total=_request_timeout) [e2e-predictor] elif (isinstance(_request_timeout, tuple) and [e2e-predictor] len(_request_timeout) == 2): [e2e-predictor] timeout = urllib3.Timeout( [e2e-predictor] connect=_request_timeout[0], read=_request_timeout[1]) [e2e-predictor] [e2e-predictor] if 'Content-Type' not in headers: [e2e-predictor] headers['Content-Type'] = 'application/json' [e2e-predictor] [e2e-predictor] try: [e2e-predictor] # For `POST`, `PUT`, `PATCH`, `OPTIONS`, `DELETE` [e2e-predictor] if method in ['POST', 'PUT', 'PATCH', 'OPTIONS', 'DELETE']: [e2e-predictor] if query_params: [e2e-predictor] url += '?' + urlencode(query_params) [e2e-predictor] if (re.search('json', headers['Content-Type'], re.IGNORECASE) or [e2e-predictor] headers['Content-Type'] == 'application/apply-patch+yaml'): [e2e-predictor] if headers['Content-Type'] == 'application/json-patch+json': [e2e-predictor] if not isinstance(body, list): [e2e-predictor] headers['Content-Type'] = \ [e2e-predictor] 'application/strategic-merge-patch+json' [e2e-predictor] request_body = None [e2e-predictor] if body is not None: [e2e-predictor] request_body = json.dumps(body) [e2e-predictor] > r = self.pool_manager.request( [e2e-predictor] method, url, [e2e-predictor] body=request_body, [e2e-predictor] preload_content=_preload_content, [e2e-predictor] timeout=timeout, [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/rest.py:172: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] method = 'POST' [e2e-predictor] url = 
'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io...y": "128Mi"}}, "runtime": "kserve-sklearnserver", "storageUri": "gs://kfserving-examples/models/sklearn/1.0/model"}}}}'
fields = None
headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
json = None
urlopen_kw = {'body': '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking....rver", "storageUri": "gs://kfserving-examples/models/sklearn/1.0/model"}}}}', 'preload_content': True, 'timeout': None}

    def request(
        self,
        method: str,
        url: str,
        body: _TYPE_BODY | None = None,
        fields: _TYPE_FIELDS | None = None,
        headers: typing.Mapping[str, str] | None = None,
        json: typing.Any | None = None,
        **urlopen_kw: typing.Any,
    ) -> BaseHTTPResponse:
        """
        Make a request using :meth:`urlopen` with the appropriate encoding of
        ``fields`` based on the ``method`` used.

        This is a convenience method that requires the least amount of manual
        effort. It can be used in most situations, while still having the
        option to drop down to more specific methods when necessary, such as
        :meth:`request_encode_url`, :meth:`request_encode_body`,
        or even the lowest level :meth:`urlopen`.

        :param method:
            HTTP request method (such as GET, POST, PUT, etc.)

        :param url:
            The URL to perform the request on.

        :param body:
            Data to send in the request body, either :class:`str`, :class:`bytes`,
            an iterable of :class:`str`/:class:`bytes`, or a file-like object.

        :param fields:
            Data to encode and send in the URL or request body, depending on ``method``.

        :param headers:
            Dictionary of custom headers to send, such as User-Agent,
            If-None-Match, etc. If None, pool headers are used. If provided,
            these headers completely replace any pool-specific headers.

        :param json:
            Data to encode and send as JSON with UTF-encoded in the request body.
            The ``"Content-Type"`` header will be set to ``"application/json"``
            unless specified otherwise.
        """
        method = method.upper()

        if json is not None and body is not None:
            raise TypeError(
                "request got values for both 'body' and 'json' parameters which are mutually exclusive"
            )

        if json is not None:
            if headers is None:
                headers = self.headers

            if not ("content-type" in map(str.lower, headers.keys())):
                headers = HTTPHeaderDict(headers)
                headers["Content-Type"] = "application/json"

            body = _json.dumps(json, separators=(",", ":"), ensure_ascii=False).encode(
                "utf-8"
            )

        if body is not None:
            urlopen_kw["body"] = body

        if method in self._encode_url_methods:
            return self.request_encode_url(
                method,
                url,
                fields=fields,  # type: ignore[arg-type]
                headers=headers,
                **urlopen_kw,
            )
        else:
>           return self.request_encode_body(
                method, url, fields=fields, headers=headers, **urlopen_kw
            )

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/_request_methods.py:143:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
method = 'POST'
url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
fields = None
headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
encode_multipart = True, multipart_boundary = None
urlopen_kw = {'body': '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking....rver", "storageUri": "gs://kfserving-examples/models/sklearn/1.0/model"}}}}', 'preload_content': True, 'timeout': None}
extra_kw = {'body': '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking....nt': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'}), 'preload_content': True, 'timeout': None}

    def request_encode_body(
        self,
        method: str,
        url: str,
        fields: _TYPE_FIELDS | None = None,
        headers: typing.Mapping[str, str] | None = None,
        encode_multipart: bool = True,
        multipart_boundary: str | None = None,
        **urlopen_kw: str,
    ) -> BaseHTTPResponse:
        """
        Make a request using :meth:`urlopen` with the ``fields`` encoded in
        the body. This is useful for request methods like POST, PUT, PATCH, etc.

        When ``encode_multipart=True`` (default), then
        :func:`urllib3.encode_multipart_formdata` is used to encode
        the payload with the appropriate content type. Otherwise
        :func:`urllib.parse.urlencode` is used with the
        'application/x-www-form-urlencoded' content type.

        Multipart encoding must be used when posting files, and it's reasonably
        safe to use it in other times too. However, it may break request
        signing, such as with OAuth.

        Supports an optional ``fields`` parameter of key/value strings AND
        key/filetuple. A filetuple is a (filename, data, MIME type) tuple where
        the MIME type is optional. For example::

            fields = {
                'foo': 'bar',
                'fakefile': ('foofile.txt', 'contents of foofile'),
                'realfile': ('barfile.txt', open('realfile').read()),
                'typedfile': ('bazfile.bin', open('bazfile').read(),
                              'image/jpeg'),
                'nonamefile': 'contents of nonamefile field',
            }

        When uploading a file, providing a filename (the first parameter of the
        tuple) is optional but recommended to best mimic behavior of browsers.

        Note that if ``headers`` are supplied, the 'Content-Type' header will
        be overwritten because it depends on the dynamic random boundary string
        which is used to compose the body of the request. The random boundary
        string can be explicitly set with the ``multipart_boundary`` parameter.

        :param method:
            HTTP request method (such as GET, POST, PUT, etc.)

        :param url:
            The URL to perform the request on.

        :param fields:
            Data to encode and send in the request body.

        :param headers:
            Dictionary of custom headers to send, such as User-Agent,
            If-None-Match, etc. If None, pool headers are used. If provided,
            these headers completely replace any pool-specific headers.

        :param encode_multipart:
            If True, encode the ``fields`` using the multipart/form-data MIME
            format.

        :param multipart_boundary:
            If not specified, then a random boundary will be generated using
            :func:`urllib3.filepost.choose_boundary`.
        """
        if headers is None:
            headers = self.headers

        extra_kw: dict[str, typing.Any] = {"headers": HTTPHeaderDict(headers)}
        body: bytes | str

        if fields:
            if "body" in urlopen_kw:
                raise TypeError(
                    "request got values for both 'fields' and 'body', can only specify one."
                )

            if encode_multipart:
                body, content_type = encode_multipart_formdata(
                    fields, boundary=multipart_boundary
                )
            else:
                body, content_type = (
                    urlencode(fields),  # type: ignore[arg-type]
                    "application/x-www-form-urlencoded",
                )

            extra_kw["body"] = body
            extra_kw["headers"].setdefault("Content-Type", content_type)

        extra_kw.update(urlopen_kw)

>       return self.urlopen(method, url, **extra_kw)

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/_request_methods.py:278:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
method = 'POST'
url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
redirect = True
kw = {'assert_same_host': False, 'body': '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata...', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'}), 'preload_content': True, ...}
u = Url(scheme='https', auth=None, host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', p...443, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices', query=None, fragment=None)
conn =

    def urlopen(  # type: ignore[override]
        self, method: str, url: str, redirect: bool = True, **kw: typing.Any
    ) -> BaseHTTPResponse:
        """
        Same as
:meth:`urllib3.HTTPConnectionPool.urlopen`
        with custom cross-host redirect logic and only sends the request-uri
        portion of the ``url``.

        The given ``url`` parameter must be absolute, such that an appropriate
        :class:`urllib3.connectionpool.ConnectionPool` can be chosen for it.
        """
        u = parse_url(url)

        if u.scheme is None:
            warnings.warn(
                "URLs without a scheme (ie 'https://') are deprecated and will raise an error "
                "in a future version of urllib3. To avoid this DeprecationWarning ensure all URLs "
                "start with 'https://' or 'http://'. Read more in this issue: "
                "https://github.com/urllib3/urllib3/issues/2920",
                category=DeprecationWarning,
                stacklevel=2,
            )

        conn = self.connection_from_host(u.host, port=u.port, scheme=u.scheme)

        kw["assert_same_host"] = False
        kw["redirect"] = False

        if "headers" not in kw:
            kw["headers"] = self.headers

        if self._proxy_requires_url_absolute_form(u):
            response = conn.urlopen(method, url, **kw)
        else:
>           response = conn.urlopen(method, u.request_uri, **kw)

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/poolmanager.py:457:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
method = 'POST'
url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io...y": "128Mi"}}, "runtime": "kserve-sklearnserver", "storageUri": "gs://kfserving-examples/models/sklearn/1.0/model"}}}}'
headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'})
retries = Retry(total=2, connect=None, read=None, redirect=None, status=None)
redirect = False, assert_same_host = False, timeout = None, pool_timeout = None
release_conn = True, chunked = False, body_pos = None, preload_content = True
decode_content = True, response_kw = {}
parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices', query=None, fragment=None)
destination_scheme = None, conn = None, release_this_conn = True
http_tunnel_required = False
err = NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.c...a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")
clean_exit = False

    def urlopen(  # type: ignore[override]
        self,
        method: str,
        url: str,
        body: _TYPE_BODY | None = None,
        headers: typing.Mapping[str, str] | None = None,
        retries: Retry | bool | int | None = None,
        redirect: bool = True,
        assert_same_host: bool = True,
        timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
        pool_timeout: int | None = None,
        release_conn: bool | None = None,
        chunked: bool = False,
        body_pos: _TYPE_BODY_POSITION | None = None,
        preload_content: bool = True,
        decode_content: bool = True,
        **response_kw: typing.Any,
    ) -> BaseHTTPResponse:
        """
        Get a connection from the pool and perform an HTTP request. This is the
        lowest level call for making a request, so you'll need to specify all
        the raw details.

        .. note::

           More commonly, it's appropriate to use a convenience method
           such as :meth:`request`.

        .. note::

           `release_conn` will only behave as expected if
           `preload_content=False` because we want to make
           `preload_content=False` the default behaviour someday soon without
           breaking backwards compatibility.

        :param method:
            HTTP request method (such as GET, POST, PUT, etc.)

        :param url:
            The URL to perform the request on.

        :param body:
            Data to send in the request body, either :class:`str`, :class:`bytes`,
            an iterable of :class:`str`/:class:`bytes`, or a file-like object.

        :param headers:
            Dictionary of custom headers to send, such as User-Agent,
            If-None-Match, etc. If None, pool headers are used. If provided,
            these headers completely replace any pool-specific headers.

        :param retries:
            Configure the number of retries to allow before raising a
            :class:`~urllib3.exceptions.MaxRetryError` exception.

            If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a
            :class:`~urllib3.util.retry.Retry` object for fine-grained control
            over different types of retries.
            Pass an integer number to retry connection errors that many times,
            but no other types of errors. Pass zero to never retry.

            If ``False``, then retries are disabled and any exception is raised
            immediately. Also, instead of raising a MaxRetryError on redirects,
            the redirect response will be returned.

        :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.

        :param redirect:
            If True, automatically handle redirects (status codes 301, 302,
            303, 307, 308). Each redirect counts as a retry. Disabling retries
            will disable redirect, too.

        :param assert_same_host:
            If ``True``, will make sure that the host of the pool requests is
            consistent else will raise HostChangedError. When ``False``, you can
            use the pool on an HTTP proxy and request foreign hosts.

        :param timeout:
            If specified, overrides the default timeout for this one
            request. It may be a float (in seconds) or an instance of
            :class:`urllib3.util.Timeout`.

        :param pool_timeout:
            If set and the pool is set to block=True, then this method will
            block for ``pool_timeout`` seconds and raise EmptyPoolError if no
            connection is available within the time period.

        :param bool preload_content:
            If True, the response's body will be preloaded into memory.

        :param bool decode_content:
            If True, will attempt to decode the body based on the
            'content-encoding' header.

        :param release_conn:
            If False, then the urlopen call will not release the connection
            back into the pool once a response is received (but will release if
            you read the entire contents of the response such as when
            `preload_content=True`). This is useful if you're not preloading
            the response's content immediately. You will need to call
            ``r.release_conn()`` on the response ``r`` to return the connection
            back into the pool. If None, it takes the value of ``preload_content``
            which defaults to ``True``.

        :param bool chunked:
            If True, urllib3 will send the body using chunked transfer
            encoding. Otherwise, urllib3 will send the body using the standard
            content-length form. Defaults to False.

        :param int body_pos:
            Position to seek to in file-like body in the event of a retry or
            redirect. Typically this won't need to be set because urllib3 will
            auto-populate the value when needed.
        """
        parsed_url = parse_url(url)
        destination_scheme = parsed_url.scheme

        if headers is None:
            headers = self.headers

        if not isinstance(retries, Retry):
            retries = Retry.from_int(retries, redirect=redirect, default=self.retries)

        if release_conn is None:
            release_conn = preload_content

        # Check host
        if assert_same_host and not self.is_same_host(url):
            raise HostChangedError(self, url, retries)

        # Ensure that the URL we're connecting to is properly encoded
        if url.startswith("/"):
            url = to_str(_encode_target(url))
        else:
            url = to_str(parsed_url.url)

        conn = None

        # Track whether `conn` needs to be released before
        # returning/raising/recursing. Update this variable if necessary, and
        # leave `release_conn` constant throughout the function. That way, if
        # the function recurses, the original value of `release_conn` will be
        # passed down into the recursive call, and its value will be respected.
        #
        # See issue #651 [1] for details.
        #
        # [1]
        release_this_conn = release_conn

        http_tunnel_required = connection_requires_http_tunnel(
            self.proxy, self.proxy_config, destination_scheme
        )

        # Merge the proxy headers. Only done when not using HTTP CONNECT. We
        # have to copy the headers dict so we can safely change it without those
        # changes being reflected in anyone else's copy.
        if not http_tunnel_required:
            headers = headers.copy()  # type: ignore[attr-defined]
            headers.update(self.proxy_headers)  # type: ignore[union-attr]

        # Must keep the exception bound to a separate variable or else Python 3
        # complains about UnboundLocalError.
        err = None

        # Keep track of whether we cleanly exited the except block. This
        # ensures we do proper cleanup in finally.
        clean_exit = False

        # Rewind body position, if needed. Record current position
        # for future rewinds in the event of a redirect/retry.
        body_pos = set_file_position(body, body_pos)

        try:
            # Request a connection from the queue.
            timeout_obj = self._get_timeout(timeout)
            conn = self._get_conn(timeout=pool_timeout)

            conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]

            # Is this a closed/new connection that requires CONNECT tunnelling?
            if self.proxy is not None and http_tunnel_required and conn.is_closed:
                try:
                    self._prepare_proxy(conn)
                except (BaseSSLError, OSError, SocketTimeout) as e:
                    self._raise_timeout(
                        err=e, url=self.proxy.url, timeout_value=conn.timeout
                    )
                    raise

            # If we're going to release the connection in ``finally:``, then
            # the response doesn't need to know about the connection. Otherwise
            # it will also try to release it and we'll have a double-release
            # mess.
            response_conn = conn if not release_conn else None

            # Make the request on the HTTPConnection object
            response = self._make_request(
                conn,
                method,
                url,
                timeout=timeout_obj,
                body=body,
                headers=headers,
                chunked=chunked,
                retries=retries,
                response_conn=response_conn,
                preload_content=preload_content,
                decode_content=decode_content,
                **response_kw,
            )

            # Everything went great!
            clean_exit = True

        except EmptyPoolError:
            # Didn't get a connection from the pool, no need to clean up
            clean_exit = True
            release_this_conn = False
            raise

        except (
            TimeoutError,
            HTTPException,
            OSError,
            ProtocolError,
            BaseSSLError,
            SSLError,
            CertificateError,
            ProxyError,
        ) as e:
            # Discard the connection for these exceptions. It will be
            # replaced during the next _get_conn() call.
            clean_exit = False
            new_e: Exception = e
            if isinstance(e, (BaseSSLError, CertificateError)):
                new_e = SSLError(e)
            if isinstance(
                new_e,
                (
                    OSError,
                    NewConnectionError,
                    TimeoutError,
                    SSLError,
                    HTTPException,
                ),
            ) and (conn and conn.proxy and not conn.has_connected_to_proxy):
                new_e = _wrap_proxy_error(new_e, conn.proxy.scheme)
            elif isinstance(new_e, (OSError, HTTPException)):
                new_e = ProtocolError("Connection aborted.", new_e)

            retries = retries.increment(
                method, url, error=new_e, _pool=self, _stacktrace=sys.exc_info()[2]
            )
            retries.sleep()

            # Keep track of the error for the retry warning.
            err = e

        finally:
            if not clean_exit:
                # We hit some kind of exception, handled or otherwise. We need
                # to throw the connection away unless explicitly told not to.
                # Close the connection, set the variable to None, and make sure
                # we put the None back in the pool to avoid leaking it.
                if conn:
                    conn.close()
                    conn = None
                release_this_conn = True

            if release_this_conn:
                # Put the connection back to be reused. If the connection is
                # expired then it will be None, which will get replaced with a
                # fresh connection during _get_conn.
                self._put_conn(conn)

        if not conn:
            # Try again
            log.warning(
                "Retrying (%r) after connection broken by '%r': %s", retries, err, url
            )
>           return self.urlopen(
                method,
                url,
                body,
                headers,
                retries,
                redirect,
                assert_same_host,
                timeout=timeout,
                pool_timeout=pool_timeout,
                release_conn=release_conn,
                chunked=chunked,
                body_pos=body_pos,
                preload_content=preload_content,
                decode_content=decode_content,
                **response_kw,
            )

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:871:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
method = 'POST'
url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io...y": "128Mi"}}, "runtime": "kserve-sklearnserver", "storageUri": "gs://kfserving-examples/models/sklearn/1.0/model"}}}}'
headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'})
retries = Retry(total=1, connect=None, read=None, redirect=None, status=None)
redirect = False, assert_same_host = False, timeout = None, pool_timeout = None
release_conn = True, chunked = False, body_pos = None, preload_content = True
decode_content = True, response_kw = {}
parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices', query=None, fragment=None)
destination_scheme = None, conn = None, release_this_conn = True
http_tunnel_required = False
err = NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.c...a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")
clean_exit = False

    def urlopen(  # type: ignore[override]
        self,
        method: str,
        url: str,
        body: _TYPE_BODY | None = None,
        headers: typing.Mapping[str, str] | None = None,
        retries: Retry | bool | int | None = None,
        redirect: bool = True,
        assert_same_host: bool = True,
        timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
        pool_timeout: int | None = None,
        release_conn: bool | None = None,
        chunked: bool = False,
        body_pos: _TYPE_BODY_POSITION | None = None,
        preload_content: bool = True,
        decode_content: bool = True,
        **response_kw: typing.Any,
    ) -> BaseHTTPResponse:
        """
        Get a connection from the pool and perform an HTTP request. This is the
        lowest level call for making a request, so you'll need to specify all
        the raw details.

        .. note::

           More commonly, it's appropriate to use a convenience method
           such as :meth:`request`.

        ..
note::

           `release_conn` will only behave as expected if
           `preload_content=False` because we want to make
           `preload_content=False` the default behaviour someday soon without
           breaking backwards compatibility.

        :param method:
            HTTP request method (such as GET, POST, PUT, etc.)

        :param url:
            The URL to perform the request on.

        :param body:
            Data to send in the request body, either :class:`str`, :class:`bytes`,
            an iterable of :class:`str`/:class:`bytes`, or a file-like object.

        :param headers:
            Dictionary of custom headers to send, such as User-Agent,
            If-None-Match, etc. If None, pool headers are used. If provided,
            these headers completely replace any pool-specific headers.

        :param retries:
            Configure the number of retries to allow before raising a
            :class:`~urllib3.exceptions.MaxRetryError` exception.

            If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a
            :class:`~urllib3.util.retry.Retry` object for fine-grained control
            over different types of retries.
            Pass an integer number to retry connection errors that many times,
            but no other types of errors. Pass zero to never retry.

            If ``False``, then retries are disabled and any exception is raised
            immediately. Also, instead of raising a MaxRetryError on redirects,
            the redirect response will be returned.

        :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.

        :param redirect:
            If True, automatically handle redirects (status codes 301, 302,
            303, 307, 308). Each redirect counts as a retry. Disabling retries
            will disable redirect, too.

        :param assert_same_host:
            If ``True``, will make sure that the host of the pool requests is
            consistent else will raise HostChangedError. When ``False``, you can
            use the pool on an HTTP proxy and request foreign hosts.

        :param timeout:
            If specified, overrides the default timeout for this one
            request. It may be a float (in seconds) or an instance of
            :class:`urllib3.util.Timeout`.

        :param pool_timeout:
            If set and the pool is set to block=True, then this method will
            block for ``pool_timeout`` seconds and raise EmptyPoolError if no
            connection is available within the time period.

        :param bool preload_content:
            If True, the response's body will be preloaded into memory.

        :param bool decode_content:
            If True, will attempt to decode the body based on the
            'content-encoding' header.

        :param release_conn:
            If False, then the urlopen call will not release the connection
            back into the pool once a response is received (but will release if
            you read the entire contents of the response such as when
            `preload_content=True`). This is useful if you're not preloading
            the response's content immediately. You will need to call
            ``r.release_conn()`` on the response ``r`` to return the connection
            back into the pool. If None, it takes the value of ``preload_content``
            which defaults to ``True``.

        :param bool chunked:
            If True, urllib3 will send the body using chunked transfer
            encoding. Otherwise, urllib3 will send the body using the standard
            content-length form. Defaults to False.

        :param int body_pos:
            Position to seek to in file-like body in the event of a retry or
            redirect. Typically this won't need to be set because urllib3 will
            auto-populate the value when needed.
        """
        parsed_url = parse_url(url)
        destination_scheme = parsed_url.scheme

        if headers is None:
            headers = self.headers

        if not isinstance(retries, Retry):
            retries = Retry.from_int(retries, redirect=redirect, default=self.retries)

        if release_conn is None:
            release_conn = preload_content

        # Check host
        if assert_same_host and not self.is_same_host(url):
            raise HostChangedError(self, url, retries)

        # Ensure that the URL we're connecting to is properly encoded
        if url.startswith("/"):
            url = to_str(_encode_target(url))
        else:
            url = to_str(parsed_url.url)

        conn = None

        # Track whether `conn` needs to be released before
        # returning/raising/recursing. Update this variable if necessary, and
        # leave `release_conn` constant throughout the function. That way, if
        # the function recurses, the original value of `release_conn` will be
        # passed down into the recursive call, and its value will be respected.
        #
        # See issue #651 [1] for details.
        #
        # [1]
        release_this_conn = release_conn

        http_tunnel_required = connection_requires_http_tunnel(
            self.proxy, self.proxy_config, destination_scheme
        )

        # Merge the proxy headers. Only done when not using HTTP CONNECT. We
        # have to copy the headers dict so we can safely change it without those
        # changes being reflected in anyone else's copy.
        if not http_tunnel_required:
            headers = headers.copy()  # type: ignore[attr-defined]
            headers.update(self.proxy_headers)  # type: ignore[union-attr]

        # Must keep the exception bound to a separate variable or else Python 3
        # complains about UnboundLocalError.
        err = None

        # Keep track of whether we cleanly exited the except block. This
        # ensures we do proper cleanup in finally.
        clean_exit = False

        # Rewind body position, if needed. Record current position
        # for future rewinds in the event of a redirect/retry.
        body_pos = set_file_position(body, body_pos)

        try:
            # Request a connection from the queue.
            timeout_obj = self._get_timeout(timeout)
            conn = self._get_conn(timeout=pool_timeout)

            conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]

            # Is this a closed/new connection that requires CONNECT tunnelling?
            if self.proxy is not None and http_tunnel_required and conn.is_closed:
                try:
                    self._prepare_proxy(conn)
                except (BaseSSLError, OSError, SocketTimeout) as e:
                    self._raise_timeout(
                        err=e, url=self.proxy.url, timeout_value=conn.timeout
                    )
                    raise

            # If we're going to release the connection in ``finally:``, then
            # the response doesn't need to know about the connection. Otherwise
            # it will also try to release it and we'll have a double-release
            # mess.
            response_conn = conn if not release_conn else None

            # Make the request on the HTTPConnection object
            response = self._make_request(
                conn,
                method,
                url,
                timeout=timeout_obj,
                body=body,
                headers=headers,
                chunked=chunked,
                retries=retries,
                response_conn=response_conn,
                preload_content=preload_content,
                decode_content=decode_content,
                **response_kw,
            )

            # Everything went great!
            clean_exit = True

        except EmptyPoolError:
            # Didn't get a connection from the pool, no need to clean up
            clean_exit = True
            release_this_conn = False
            raise

        except (
            TimeoutError,
            HTTPException,
            OSError,
            ProtocolError,
            BaseSSLError,
            SSLError,
            CertificateError,
            ProxyError,
        ) as e:
            # Discard the connection for these exceptions. It will be
            # replaced during the next _get_conn() call.
            clean_exit = False
            new_e: Exception = e
            if isinstance(e, (BaseSSLError, CertificateError)):
                new_e = SSLError(e)
            if isinstance(
                new_e,
                (
                    OSError,
                    NewConnectionError,
                    TimeoutError,
                    SSLError,
                    HTTPException,
                ),
            ) and (conn and conn.proxy and not conn.has_connected_to_proxy):
                new_e = _wrap_proxy_error(new_e, conn.proxy.scheme)
            elif isinstance(new_e, (OSError, HTTPException)):
                new_e = ProtocolError("Connection aborted.", new_e)

            retries = retries.increment(
                method, url, error=new_e, _pool=self, _stacktrace=sys.exc_info()[2]
            )
            retries.sleep()

            # Keep track of the error for the retry warning.
            err = e

        finally:
            if not clean_exit:
                # We hit some kind of exception, handled or otherwise. We need
                # to throw the connection away unless explicitly told not to.
                # Close the connection, set the variable to None, and make sure
                # we put the None back in the pool to avoid leaking it.
                if conn:
                    conn.close()
                    conn = None
                release_this_conn = True

            if release_this_conn:
                # Put the connection back to be reused. If the connection is
                # expired then it will be None, which will get replaced with a
                # fresh connection during _get_conn.
                self._put_conn(conn)

        if not conn:
            # Try again
            log.warning(
                "Retrying (%r) after connection broken by '%r': %s", retries, err, url
            )
>           return self.urlopen(
                method,
                url,
                body,
                headers,
                retries,
                redirect,
                assert_same_host,
                timeout=timeout,
                pool_timeout=pool_timeout,
                release_conn=release_conn,
                chunked=chunked,
                body_pos=body_pos,
                preload_content=preload_content,
                decode_content=decode_content,
                **response_kw,
            )

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:871:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
method = 'POST'
url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io...y": "128Mi"}}, "runtime": "kserve-sklearnserver", "storageUri": "gs://kfserving-examples/models/sklearn/1.0/model"}}}}'
headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'})
retries = Retry(total=0, connect=None, read=None, redirect=None, status=None)
redirect = False, assert_same_host = False, timeout = None, pool_timeout = None
release_conn = True, chunked = False, body_pos = None, preload_content = True
decode_content = True, response_kw = {}
parsed_url = Url(scheme=None,
auth=None, host=None, port=None, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices', query=None, fragment=None)
destination_scheme = None, conn = None, release_this_conn = True
http_tunnel_required = False
err = NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.c...a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")
clean_exit = False

        if not conn:
            # Try again
            log.warning(
                "Retrying (%r) after connection broken by '%r': %s", retries, err, url
            )
>           return self.urlopen(
                method,
                url,
                body,
                headers,
                retries,
                redirect,
                assert_same_host,
                timeout=timeout,
                pool_timeout=pool_timeout,
                release_conn=release_conn,
                chunked=chunked,
                body_pos=body_pos,
                preload_content=preload_content,
                decode_content=decode_content,
                **response_kw,
            )

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:871:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
method = 'POST'
url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io...y": "128Mi"}}, "runtime": "kserve-sklearnserver", "storageUri": "gs://kfserving-examples/models/sklearn/1.0/model"}}}}'
headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'})
retries = Retry(total=0, connect=None, read=None, redirect=None, status=None)
redirect = False, assert_same_host = False, timeout = None, pool_timeout = None
release_conn = True, chunked = False, body_pos = None, preload_content = True
decode_content = True, response_kw = {}
parsed_url = Url(scheme=None,
auth=None, host=None, port=None, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices', query=None, fragment=None) [e2e-predictor] destination_scheme = None, conn = None, release_this_conn = True [e2e-predictor] http_tunnel_required = False, err = None, clean_exit = False [e2e-predictor] [e2e-predictor] def urlopen( # type: ignore[override] [e2e-predictor] self, [e2e-predictor] method: str, [e2e-predictor] url: str, [e2e-predictor] body: _TYPE_BODY | None = None, [e2e-predictor] headers: typing.Mapping[str, str] | None = None, [e2e-predictor] retries: Retry | bool | int | None = None, [e2e-predictor] redirect: bool = True, [e2e-predictor] assert_same_host: bool = True, [e2e-predictor] timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, [e2e-predictor] pool_timeout: int | None = None, [e2e-predictor] release_conn: bool | None = None, [e2e-predictor] chunked: bool = False, [e2e-predictor] body_pos: _TYPE_BODY_POSITION | None = None, [e2e-predictor] preload_content: bool = True, [e2e-predictor] decode_content: bool = True, [e2e-predictor] **response_kw: typing.Any, [e2e-predictor] ) -> BaseHTTPResponse: [e2e-predictor] """ [e2e-predictor] Get a connection from the pool and perform an HTTP request. This is the [e2e-predictor] lowest level call for making a request, so you'll need to specify all [e2e-predictor] the raw details. [e2e-predictor] [e2e-predictor] .. note:: [e2e-predictor] [e2e-predictor] More commonly, it's appropriate to use a convenience method [e2e-predictor] such as :meth:`request`. [e2e-predictor] [e2e-predictor] .. note:: [e2e-predictor] [e2e-predictor] `release_conn` will only behave as expected if [e2e-predictor] `preload_content=False` because we want to make [e2e-predictor] `preload_content=False` the default behaviour someday soon without [e2e-predictor] breaking backwards compatibility. [e2e-predictor] [e2e-predictor] :param method: [e2e-predictor] HTTP request method (such as GET, POST, PUT, etc.) 
[e2e-predictor] [e2e-predictor] :param url: [e2e-predictor] The URL to perform the request on. [e2e-predictor] [e2e-predictor] :param body: [e2e-predictor] Data to send in the request body, either :class:`str`, :class:`bytes`, [e2e-predictor] an iterable of :class:`str`/:class:`bytes`, or a file-like object. [e2e-predictor] [e2e-predictor] :param headers: [e2e-predictor] Dictionary of custom headers to send, such as User-Agent, [e2e-predictor] If-None-Match, etc. If None, pool headers are used. If provided, [e2e-predictor] these headers completely replace any pool-specific headers. [e2e-predictor] [e2e-predictor] :param retries: [e2e-predictor] Configure the number of retries to allow before raising a [e2e-predictor] :class:`~urllib3.exceptions.MaxRetryError` exception. [e2e-predictor] [e2e-predictor] If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a [e2e-predictor] :class:`~urllib3.util.retry.Retry` object for fine-grained control [e2e-predictor] over different types of retries. [e2e-predictor] Pass an integer number to retry connection errors that many times, [e2e-predictor] but no other types of errors. Pass zero to never retry. [e2e-predictor] [e2e-predictor] If ``False``, then retries are disabled and any exception is raised [e2e-predictor] immediately. Also, instead of raising a MaxRetryError on redirects, [e2e-predictor] the redirect response will be returned. [e2e-predictor] [e2e-predictor] :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. [e2e-predictor] [e2e-predictor] :param redirect: [e2e-predictor] If True, automatically handle redirects (status codes 301, 302, [e2e-predictor] 303, 307, 308). Each redirect counts as a retry. Disabling retries [e2e-predictor] will disable redirect, too. [e2e-predictor] [e2e-predictor] :param assert_same_host: [e2e-predictor] If ``True``, will make sure that the host of the pool requests is [e2e-predictor] consistent else will raise HostChangedError. 
[e2e-predictor] When ``False``, you can
            use the pool on an HTTP proxy and request foreign hosts.

        :param timeout:
            If specified, overrides the default timeout for this one
            request. It may be a float (in seconds) or an instance of
            :class:`urllib3.util.Timeout`.

        :param pool_timeout:
            If set and the pool is set to block=True, then this method will
            block for ``pool_timeout`` seconds and raise EmptyPoolError if no
            connection is available within the time period.

        :param bool preload_content:
            If True, the response's body will be preloaded into memory.

        :param bool decode_content:
            If True, will attempt to decode the body based on the
            'content-encoding' header.

        :param release_conn:
            If False, then the urlopen call will not release the connection
            back into the pool once a response is received (but will release if
            you read the entire contents of the response such as when
            `preload_content=True`). This is useful if you're not preloading
            the response's content immediately. You will need to call
            ``r.release_conn()`` on the response ``r`` to return the connection
            back into the pool. If None, it takes the value of ``preload_content``
            which defaults to ``True``.

        :param bool chunked:
            If True, urllib3 will send the body using chunked transfer
            encoding. Otherwise, urllib3 will send the body using the standard
            content-length form. Defaults to False.

        :param int body_pos:
            Position to seek to in file-like body in the event of a retry or
            redirect. Typically this won't need to be set because urllib3 will
            auto-populate the value when needed.
        """
        parsed_url = parse_url(url)
        destination_scheme = parsed_url.scheme

        if headers is None:
            headers = self.headers

        if not isinstance(retries, Retry):
            retries = Retry.from_int(retries, redirect=redirect, default=self.retries)

        if release_conn is None:
            release_conn = preload_content

        # Check host
        if assert_same_host and not self.is_same_host(url):
            raise HostChangedError(self, url, retries)

        # Ensure that the URL we're connecting to is properly encoded
        if url.startswith("/"):
            url = to_str(_encode_target(url))
        else:
            url = to_str(parsed_url.url)

        conn = None

        # Track whether `conn` needs to be released before
        # returning/raising/recursing. Update this variable if necessary, and
        # leave `release_conn` constant throughout the function. That way, if
        # the function recurses, the original value of `release_conn` will be
        # passed down into the recursive call, and its value will be respected.
        #
        # See issue #651 [1] for details.
        #
        # [1]
        release_this_conn = release_conn

        http_tunnel_required = connection_requires_http_tunnel(
            self.proxy, self.proxy_config, destination_scheme
        )

        # Merge the proxy headers. Only done when not using HTTP CONNECT. We
        # have to copy the headers dict so we can safely change it without those
        # changes being reflected in anyone else's copy.
        if not http_tunnel_required:
            headers = headers.copy()  # type: ignore[attr-defined]
            headers.update(self.proxy_headers)  # type: ignore[union-attr]

        # Must keep the exception bound to a separate variable or else Python 3
        # complains about UnboundLocalError.
        err = None

        # Keep track of whether we cleanly exited the except block. This
        # ensures we do proper cleanup in finally.
        clean_exit = False

        # Rewind body position, if needed. Record current position
        # for future rewinds in the event of a redirect/retry.
        body_pos = set_file_position(body, body_pos)

        try:
            # Request a connection from the queue.
            timeout_obj = self._get_timeout(timeout)
            conn = self._get_conn(timeout=pool_timeout)

            conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]

            # Is this a closed/new connection that requires CONNECT tunnelling?
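The `release_conn`/`preload_content` defaulting rule quoted in the docstring above (`if release_conn is None: release_conn = preload_content`) can be sketched in isolation. This is a hypothetical helper for illustration, not urllib3's API:

```python
def resolve_release_conn(release_conn, preload_content=True):
    # Hypothetical helper mirroring urllib3's defaulting rule: a
    # release_conn of None inherits preload_content, which itself
    # defaults to True.
    if release_conn is None:
        return preload_content
    return release_conn

print(resolve_release_conn(None))         # True: inherits the default
print(resolve_release_conn(None, False))  # False: inherits preload_content
print(resolve_release_conn(False))        # False: explicit value wins
```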
            if self.proxy is not None and http_tunnel_required and conn.is_closed:
                try:
                    self._prepare_proxy(conn)
                except (BaseSSLError, OSError, SocketTimeout) as e:
                    self._raise_timeout(
                        err=e, url=self.proxy.url, timeout_value=conn.timeout
                    )
                    raise

            # If we're going to release the connection in ``finally:``, then
            # the response doesn't need to know about the connection. Otherwise
            # it will also try to release it and we'll have a double-release
            # mess.
            response_conn = conn if not release_conn else None

            # Make the request on the HTTPConnection object
            response = self._make_request(
                conn,
                method,
                url,
                timeout=timeout_obj,
                body=body,
                headers=headers,
                chunked=chunked,
                retries=retries,
                response_conn=response_conn,
                preload_content=preload_content,
                decode_content=decode_content,
                **response_kw,
            )

            # Everything went great!
            clean_exit = True

        except EmptyPoolError:
            # Didn't get a connection from the pool, no need to clean up
            clean_exit = True
            release_this_conn = False
            raise

        except (
            TimeoutError,
            HTTPException,
            OSError,
            ProtocolError,
            BaseSSLError,
            SSLError,
            CertificateError,
            ProxyError,
        ) as e:
            # Discard the connection for these exceptions. It will be
            # replaced during the next _get_conn() call.
            clean_exit = False
            new_e: Exception = e
            if isinstance(e, (BaseSSLError, CertificateError)):
                new_e = SSLError(e)
            if isinstance(
                new_e,
                (
                    OSError,
                    NewConnectionError,
                    TimeoutError,
                    SSLError,
                    HTTPException,
                ),
            ) and (conn and conn.proxy and not conn.has_connected_to_proxy):
                new_e = _wrap_proxy_error(new_e, conn.proxy.scheme)
            elif isinstance(new_e, (OSError, HTTPException)):
                new_e = ProtocolError("Connection aborted.", new_e)

>           retries = retries.increment(
                method, url, error=new_e, _pool=self, _stacktrace=sys.exc_info()[2]
            )

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:841:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = Retry(total=0, connect=None, read=None, redirect=None, status=None)
method = 'POST'
url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
response = None
error = NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.c...a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")
_pool =
_stacktrace =

    def increment(
        self,
        method: str | None = None,
        url: str | None = None,
        response: BaseHTTPResponse | None = None,
        error: Exception | None = None,
        _pool: ConnectionPool | None = None,
        _stacktrace: TracebackType | None = None,
    ) -> Self:
        """Return a new Retry object with incremented retry counters.

        :param response: A response object, or None, if the server did not
            return a response.
        :type response: :class:`~urllib3.response.BaseHTTPResponse`
        :param Exception error: An error encountered during the request, or
            None if the response was received successfully.

        :return: A new ``Retry`` object.
        """
        if self.total is False and error:
            # Disabled, indicate to re-raise the error.
            raise reraise(type(error), error, _stacktrace)

        total = self.total
        if total is not None:
            total -= 1

        connect = self.connect
        read = self.read
        redirect = self.redirect
        status_count = self.status
        other = self.other
        cause = "unknown"
        status = None
        redirect_location = None

        if error and self._is_connection_error(error):
            # Connect retry?
            if connect is False:
                raise reraise(type(error), error, _stacktrace)
            elif connect is not None:
                connect -= 1

        elif error and self._is_read_error(error):
            # Read retry?
            if read is False or method is None or not self._is_method_retryable(method):
                raise reraise(type(error), error, _stacktrace)
            elif read is not None:
                read -= 1

        elif error:
            # Other retry?
            if other is not None:
                other -= 1

        elif response and response.get_redirect_location():
            # Redirect retry?
            if redirect is not None:
                redirect -= 1
            cause = "too many redirects"
            response_redirect_location = response.get_redirect_location()
            if response_redirect_location:
                redirect_location = response_redirect_location
            status = response.status

        else:
            # Incrementing because of a server error like a 500 in
            # status_forcelist and the given method is in the allowed_methods
            cause = ResponseError.GENERIC_ERROR
            if response and response.status:
                if status_count is not None:
                    status_count -= 1
                cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
                status = response.status

        history = self.history + (
            RequestHistory(method, url, error, status, redirect_location),
        )

        new_retry = self.new(
            total=total,
            connect=connect,
            read=read,
            redirect=redirect,
            status=status_count,
            other=other,
            history=history,
        )

        if new_retry.is_exhausted():
            reason = error or ResponseError(cause)
>           raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
E           urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Max retries exceeded with url: /apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices (Caused by NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)"))

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError
------------------------------ Captured log call -------------------------------
WARNING  urllib3.connectionpool:connectionpool.py:868 Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")': /apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices
WARNING  urllib3.connectionpool:connectionpool.py:868 Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")': /apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices
WARNING  urllib3.connectionpool:connectionpool.py:868 Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")': /apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices
____________________________ test_sklearn_v2_mixed _____________________________
[gw0] linux -- Python 3.11.13 /workspace/source/python/kserve/.venv/bin/python

self =

    def _new_conn(self) -> socket.socket:
        """Establish a socket connection and set nodelay settings on it.

        :return: New socket connection.
        """
        try:
>           sock = connection.create_connection(
                (self._dns_host, self.port),
                self.timeout,
                source_address=self.source_address,
                socket_options=self.socket_options,
            )

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connection.py:204:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

address = ('a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', 6443)
timeout = None, source_address = None, socket_options = [(6, 1, 1)]

    def create_connection(
        address: tuple[str, int],
        timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
        source_address: tuple[str, int] | None = None,
        socket_options: _TYPE_SOCKET_OPTIONS | None = None,
    ) -> socket.socket:
        """Connect to *address* and return the socket object.

        Convenience function. Connect to *address* (a 2-tuple ``(host,
        port)``) and return the socket object. Passing the optional
        *timeout* parameter will set the timeout on the socket instance
        before attempting to connect.
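The chain above bottoms out in `socket.gaierror` ([Errno -2] Name or service not known), which urllib3 wraps in a `NameResolutionError`. The wrapping pattern can be sketched with an injectable resolver so the failure path is reproducible without real DNS traffic; `NameResolutionFailure` and `fake_gai` are hypothetical names for illustration, not urllib3 code:

```python
import socket

class NameResolutionFailure(Exception):
    """Hypothetical stand-in for urllib3's NameResolutionError."""

def resolve(host, port, getaddrinfo=socket.getaddrinfo):
    # The resolver is injectable so the failure path can be exercised
    # deterministically, without network access.
    try:
        return getaddrinfo(host, port, 0, socket.SOCK_STREAM)
    except socket.gaierror as e:
        # Chain the low-level gaierror as the cause, as the wrapped
        # error message in the log does.
        raise NameResolutionFailure(f"Failed to resolve {host!r}") from e

def fake_gai(host, port, family, type_):
    raise socket.gaierror(-2, "Name or service not known")

try:
    resolve("missing.example.internal", 6443, getaddrinfo=fake_gai)
except NameResolutionFailure as err:
    print(type(err.__cause__).__name__)  # gaierror
```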
        If no *timeout* is supplied, the
        global default timeout setting returned by :func:`socket.getdefaulttimeout`
        is used. If *source_address* is set it must be a tuple of (host, port)
        for the socket to bind as a source address before making the connection.
        An host of '' or port 0 tells the OS to use the default.
        """

        host, port = address
        if host.startswith("["):
            host = host.strip("[]")
        err = None

        # Using the value from allowed_gai_family() in the context of getaddrinfo lets
        # us select whether to work with IPv4 DNS records, IPv6 records, or both.
        # The original create_connection function always returns all records.
        family = allowed_gai_family()

        try:
            host.encode("idna")
        except UnicodeError:
            raise LocationParseError(f"'{host}', label empty or too long") from None

>       for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/util/connection.py:60:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

host = 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com'
port = 6443, family =
type = , proto = 0, flags = 0

    def getaddrinfo(host, port, family=0, type=0, proto=0, flags=0):
        """Resolve host and port into list of address info entries.

        Translate the host/port argument into a sequence of 5-tuples that contain
        all the necessary arguments for creating a socket connected to that service.

        host is a domain name, a string representation of an IPv4/v6 address or
        None. port is a string service name such as 'http', a numeric port number or
        None. By passing None as the value of host and port, you can pass NULL to
        the underlying C API.

        The family, type and proto arguments can be optionally specified in order to
        narrow the list of addresses returned. Passing zero as a value for each of
        these arguments selects the full range of results.
        """
        # We override this function since we want to translate the numeric family
        # and socket type values to enum constants.
        addrlist = []
>       for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
E       socket.gaierror: [Errno -2] Name or service not known

/usr/lib64/python3.11/socket.py:974: gaierror

The above exception was the direct cause of the following exception:

self =
method = 'POST'
url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io..."128Mi"}}, "runtime": "kserve-sklearnserver", "storageUri": "gs://kfserving-examples/models/sklearn/1.3/mixedtype"}}}}'
headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'})
retries = Retry(total=0, connect=None, read=None, redirect=None, status=None)
redirect = False, assert_same_host = False, timeout = None, pool_timeout = None
release_conn = True, chunked = False, body_pos = None, preload_content = True
decode_content = True, response_kw = {}
parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices', query=None, fragment=None)
destination_scheme = None, conn = None, release_this_conn = True
http_tunnel_required = False, err = None, clean_exit = False

    def urlopen(  # type: ignore[override]
        self,
        method: str,
        url: str,
        body: _TYPE_BODY | None = None,
        headers: typing.Mapping[str, str] | None = None,
        retries: Retry | bool | int | None = None,
        redirect: bool = True,
        assert_same_host: bool = True,
        timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
        pool_timeout: int | None = None,
        release_conn: bool | None = None,
        chunked: bool = False,
        body_pos: _TYPE_BODY_POSITION | None = None,
        preload_content: bool = True,
        decode_content: bool = True,
        **response_kw: typing.Any,
    ) -> BaseHTTPResponse:
        """
        Get a connection from the pool and perform an HTTP request. This is the
        lowest level call for making a request, so you'll need to specify all
        the raw details.

        .. note::

            More commonly, it's appropriate to use a convenience method
            such as :meth:`request`.

        .. note::

            `release_conn` will only behave as expected if
            `preload_content=False` because we want to make
            `preload_content=False` the default behaviour someday soon without
            breaking backwards compatibility.
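The "direct cause of the following exception" line in the report is produced by explicit exception chaining (`raise ... from reason`, as in the `MaxRetryError` raise earlier in this log). A minimal sketch, using hypothetical names:

```python
class MaxRetries(Exception):
    """Hypothetical stand-in for urllib3's MaxRetryError."""

def exhaust(reason):
    # 'raise ... from reason' sets __cause__, which is exactly what makes
    # tracebacks print "The above exception was the direct cause of the
    # following exception:" between the two frames.
    raise MaxRetries("max retries exceeded") from reason

try:
    exhaust(OSError(-2, "Name or service not known"))
except MaxRetries as e:
    print(isinstance(e.__cause__, OSError))  # True
```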
        :param method:
            HTTP request method (such as GET, POST, PUT, etc.)

        :param url:
            The URL to perform the request on.

        :param body:
            Data to send in the request body, either :class:`str`, :class:`bytes`,
            an iterable of :class:`str`/:class:`bytes`, or a file-like object.

        :param headers:
            Dictionary of custom headers to send, such as User-Agent,
            If-None-Match, etc. If None, pool headers are used. If provided,
            these headers completely replace any pool-specific headers.

        :param retries:
            Configure the number of retries to allow before raising a
            :class:`~urllib3.exceptions.MaxRetryError` exception.

            If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a
            :class:`~urllib3.util.retry.Retry` object for fine-grained control
            over different types of retries.
            Pass an integer number to retry connection errors that many times,
            but no other types of errors. Pass zero to never retry.

            If ``False``, then retries are disabled and any exception is raised
            immediately. Also, instead of raising a MaxRetryError on redirects,
            the redirect response will be returned.

        :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.

        :param redirect:
            If True, automatically handle redirects (status codes 301, 302,
            303, 307, 308). Each redirect counts as a retry. Disabling retries
            will disable redirect, too.

        :param assert_same_host:
            If ``True``, will make sure that the host of the pool requests is
            consistent else will raise HostChangedError. When ``False``, you can
            use the pool on an HTTP proxy and request foreign hosts.

        :param timeout:
            If specified, overrides the default timeout for this one
            request. It may be a float (in seconds) or an instance of
            :class:`urllib3.util.Timeout`.

        :param pool_timeout:
            If set and the pool is set to block=True, then this method will
            block for ``pool_timeout`` seconds and raise EmptyPoolError if no
            connection is available within the time period.

        :param bool preload_content:
            If True, the response's body will be preloaded into memory.

        :param bool decode_content:
            If True, will attempt to decode the body based on the
            'content-encoding' header.

        :param release_conn:
            If False, then the urlopen call will not release the connection
            back into the pool once a response is received (but will release if
            you read the entire contents of the response such as when
            `preload_content=True`). This is useful if you're not preloading
            the response's content immediately. You will need to call
            ``r.release_conn()`` on the response ``r`` to return the connection
            back into the pool. If None, it takes the value of ``preload_content``
            which defaults to ``True``.

        :param bool chunked:
            If True, urllib3 will send the body using chunked transfer
            encoding. Otherwise, urllib3 will send the body using the standard
            content-length form. Defaults to False.

        :param int body_pos:
            Position to seek to in file-like body in the event of a retry or
            redirect. Typically this won't need to be set because urllib3 will
            auto-populate the value when needed.
        """
        parsed_url = parse_url(url)
        destination_scheme = parsed_url.scheme

        if headers is None:
            headers = self.headers

        if not isinstance(retries, Retry):
            retries = Retry.from_int(retries, redirect=redirect, default=self.retries)

        if release_conn is None:
            release_conn = preload_content

        # Check host
        if assert_same_host and not self.is_same_host(url):
            raise HostChangedError(self, url, retries)

        # Ensure that the URL we're connecting to is properly encoded
        if url.startswith("/"):
            url = to_str(_encode_target(url))
        else:
            url = to_str(parsed_url.url)

        conn = None

        # Track whether `conn` needs to be released before
        # returning/raising/recursing. Update this variable if necessary, and
        # leave `release_conn` constant throughout the function. That way, if
        # the function recurses, the original value of `release_conn` will be
        # passed down into the recursive call, and its value will be respected.
        #
        # See issue #651 [1] for details.
        #
        # [1]
        release_this_conn = release_conn

        http_tunnel_required = connection_requires_http_tunnel(
            self.proxy, self.proxy_config, destination_scheme
        )

        # Merge the proxy headers. Only done when not using HTTP CONNECT. We
        # have to copy the headers dict so we can safely change it without those
        # changes being reflected in anyone else's copy.
        if not http_tunnel_required:
            headers = headers.copy()  # type: ignore[attr-defined]
            headers.update(self.proxy_headers)  # type: ignore[union-attr]

        # Must keep the exception bound to a separate variable or else Python 3
        # complains about UnboundLocalError.
        err = None

        # Keep track of whether we cleanly exited the except block. This
        # ensures we do proper cleanup in finally.
        clean_exit = False

        # Rewind body position, if needed. Record current position
        # for future rewinds in the event of a redirect/retry.
        body_pos = set_file_position(body, body_pos)

        try:
            # Request a connection from the queue.
            timeout_obj = self._get_timeout(timeout)
            conn = self._get_conn(timeout=pool_timeout)

            conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]

            # Is this a closed/new connection that requires CONNECT tunnelling?
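The overall retry behavior visible in this log (three "Retrying (Retry(total=2/1/0, ...))" warnings, then `MaxRetryError` chained to the last connection error) can be sketched as a plain loop. This is a hypothetical simplification of the budget-decrement idea, not urllib3's implementation:

```python
def post_with_retries(send, retries=3):
    # Hypothetical sketch: each connection error consumes one unit of the
    # retry budget; once the budget is spent, the last error is re-raised
    # as the cause of a terminal exception.
    while True:
        try:
            return send()
        except OSError as err:
            if retries <= 0:
                raise RuntimeError("Max retries exceeded") from err
            retries -= 1  # one "Retrying..." warning per decrement

state = {"calls": 0}

def always_fail():
    state["calls"] += 1
    raise OSError(-2, "Name or service not known")

try:
    post_with_retries(always_fail, retries=3)
except RuntimeError as e:
    # 1 initial attempt + 3 retries = 4 calls before giving up.
    print(state["calls"], type(e.__cause__) is OSError)  # 4 True
```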
            if self.proxy is not None and http_tunnel_required and conn.is_closed:
                try:
                    self._prepare_proxy(conn)
                except (BaseSSLError, OSError, SocketTimeout) as e:
                    self._raise_timeout(
                        err=e, url=self.proxy.url, timeout_value=conn.timeout
                    )
                    raise

            # If we're going to release the connection in ``finally:``, then
            # the response doesn't need to know about the connection. Otherwise
            # it will also try to release it and we'll have a double-release
            # mess.
            response_conn = conn if not release_conn else None

            # Make the request on the HTTPConnection object
>           response = self._make_request(
                conn,
                method,
                url,
                timeout=timeout_obj,
                body=body,
                headers=headers,
                chunked=chunked,
                retries=retries,
                response_conn=response_conn,
                preload_content=preload_content,
                decode_content=decode_content,
                **response_kw,
            )

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:787:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
conn =
method = 'POST'
url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io..."128Mi"}}, "runtime": "kserve-sklearnserver", "storageUri": "gs://kfserving-examples/models/sklearn/1.3/mixedtype"}}}}'
headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'})
retries = Retry(total=0, connect=None, read=None, redirect=None, status=None)
timeout = Timeout(connect=None, read=None, total=None), chunked = False
response_conn = None, preload_content = True, decode_content = True
enforce_content_length = True

    def _make_request(
        self,
        conn: BaseHTTPConnection,
        method: str,
        url: str,
        body: _TYPE_BODY | None = None,
        headers: typing.Mapping[str, str] | None = None,
        retries: Retry | None = None,
        timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
        chunked: bool = False,
        response_conn: BaseHTTPConnection | None = None,
        preload_content: bool = True,
        decode_content: bool = True,
        enforce_content_length: bool = True,
    ) -> BaseHTTPResponse:
        """
        Perform a request on a given urllib connection object taken from our
        pool.

        :param conn:
            a connection from one of our connection pools

        :param method:
            HTTP request method (such as GET, POST, PUT, etc.)

        :param url:
            The URL to perform the request on.

        :param body:
            Data to send in the request body, either :class:`str`, :class:`bytes`,
            an iterable of :class:`str`/:class:`bytes`, or a file-like object.

        :param headers:
            Dictionary of custom headers to send, such as User-Agent,
            If-None-Match, etc. If None, pool headers are used. If provided,
            these headers completely replace any pool-specific headers.

        :param retries:
            Configure the number of retries to allow before raising a
            :class:`~urllib3.exceptions.MaxRetryError` exception.

            Pass ``None`` to retry until you receive a response. Pass a
            :class:`~urllib3.util.retry.Retry` object for fine-grained control
            over different types of retries.
            Pass an integer number to retry connection errors that many times,
            but no other types of errors. Pass zero to never retry.

            If ``False``, then retries are disabled and any exception is raised
            immediately. Also, instead of raising a MaxRetryError on redirects,
            the redirect response will be returned.

        :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.

        :param timeout:
            If specified, overrides the default timeout for this one
            request. It may be a float (in seconds) or an instance of
            :class:`urllib3.util.Timeout`.

        :param chunked:
            If True, urllib3 will send the body using chunked transfer
            encoding. Otherwise, urllib3 will send the body using the standard
            content-length form. Defaults to False.

        :param response_conn:
            Set this to ``None`` if you will handle releasing the connection or
            set the connection to have the response release it.

        :param preload_content:
            If True, the response's body will be preloaded during construction.
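The `enforce_content_length = True` flag visible in the frame locals above guards against truncated response bodies. The idea can be sketched with a hypothetical helper (not urllib3 code): a declared Content-Length must match the bytes actually received.

```python
def check_content_length(body: bytes, headers: dict) -> bytes:
    # Hypothetical sketch of content-length enforcement: when the server
    # declares a Content-Length, the received body must be exactly that
    # many bytes; otherwise the response is treated as corrupt.
    declared = headers.get("Content-Length")
    if declared is not None and int(declared) != len(body):
        raise ValueError(
            f"expected {declared} bytes per Content-Length, got {len(body)}"
        )
    return body

print(check_content_length(b"hello", {"Content-Length": "5"}))  # b'hello'
```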
[e2e-predictor] [e2e-predictor] :param decode_content: [e2e-predictor] If True, will attempt to decode the body based on the [e2e-predictor] 'content-encoding' header. [e2e-predictor] [e2e-predictor] :param enforce_content_length: [e2e-predictor] Enforce content length checking. Body returned by server must match [e2e-predictor] value of Content-Length header, if present. Otherwise, raise error. [e2e-predictor] """ [e2e-predictor] self.num_requests += 1 [e2e-predictor] [e2e-predictor] timeout_obj = self._get_timeout(timeout) [e2e-predictor] timeout_obj.start_connect() [e2e-predictor] conn.timeout = Timeout.resolve_default_timeout(timeout_obj.connect_timeout) [e2e-predictor] [e2e-predictor] try: [e2e-predictor] # Trigger any extra validation we need to do. [e2e-predictor] try: [e2e-predictor] self._validate_conn(conn) [e2e-predictor] except (SocketTimeout, BaseSSLError) as e: [e2e-predictor] self._raise_timeout(err=e, url=url, timeout_value=conn.timeout) [e2e-predictor] raise [e2e-predictor] [e2e-predictor] # _validate_conn() starts the connection to an HTTPS proxy [e2e-predictor] # so we need to wrap errors with 'ProxyError' here too. 
[e2e-predictor]         except (
[e2e-predictor]             OSError,
[e2e-predictor]             NewConnectionError,
[e2e-predictor]             TimeoutError,
[e2e-predictor]             BaseSSLError,
[e2e-predictor]             CertificateError,
[e2e-predictor]             SSLError,
[e2e-predictor]         ) as e:
[e2e-predictor]             new_e: Exception = e
[e2e-predictor]             if isinstance(e, (BaseSSLError, CertificateError)):
[e2e-predictor]                 new_e = SSLError(e)
[e2e-predictor]             # If the connection didn't successfully connect to it's proxy
[e2e-predictor]             # then there
[e2e-predictor]             if isinstance(
[e2e-predictor]                 new_e, (OSError, NewConnectionError, TimeoutError, SSLError)
[e2e-predictor]             ) and (conn and conn.proxy and not conn.has_connected_to_proxy):
[e2e-predictor]                 new_e = _wrap_proxy_error(new_e, conn.proxy.scheme)
[e2e-predictor] >           raise new_e
[e2e-predictor]
[e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:488:
[e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
[e2e-predictor]
[e2e-predictor] self =
[e2e-predictor] conn =
[e2e-predictor] method = 'POST'
[e2e-predictor] url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
[e2e-predictor] body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io..."128Mi"}}, "runtime": "kserve-sklearnserver", "storageUri": "gs://kfserving-examples/models/sklearn/1.3/mixedtype"}}}}'
[e2e-predictor] headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'})
[e2e-predictor] retries = Retry(total=0, connect=None, read=None, redirect=None, status=None)
[e2e-predictor] timeout = Timeout(connect=None, read=None, total=None), chunked = False
[e2e-predictor] response_conn = None, preload_content = True, decode_content = True
[e2e-predictor] enforce_content_length = True
[e2e-predictor]
[e2e-predictor]     def _make_request(
[e2e-predictor]         self,
[e2e-predictor]         conn: BaseHTTPConnection,
[e2e-predictor]         method: str,
[e2e-predictor]         url: str,
[e2e-predictor]         body: _TYPE_BODY | None = None,
[e2e-predictor]         headers: typing.Mapping[str, str] | None = None,
[e2e-predictor]         retries: Retry | None = None,
[e2e-predictor]         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
[e2e-predictor]         chunked: bool = False,
[e2e-predictor]         response_conn: BaseHTTPConnection | None = None,
[e2e-predictor]         preload_content: bool = True,
[e2e-predictor]         decode_content: bool = True,
[e2e-predictor]         enforce_content_length: bool = True,
[e2e-predictor]     ) -> BaseHTTPResponse:
[e2e-predictor]         """
[e2e-predictor]         Perform a request on a given urllib connection object taken from our
[e2e-predictor]         pool.
[e2e-predictor]
[e2e-predictor]         :param conn:
[e2e-predictor]             a connection from one of our connection pools
[e2e-predictor]
[e2e-predictor]         :param method:
[e2e-predictor]             HTTP request method (such as GET, POST, PUT, etc.)
[e2e-predictor]
[e2e-predictor]         :param url:
[e2e-predictor]             The URL to perform the request on.
[e2e-predictor]
[e2e-predictor]         :param body:
[e2e-predictor]             Data to send in the request body, either :class:`str`, :class:`bytes`,
[e2e-predictor]             an iterable of :class:`str`/:class:`bytes`, or a file-like object.
[e2e-predictor]
[e2e-predictor]         :param headers:
[e2e-predictor]             Dictionary of custom headers to send, such as User-Agent,
[e2e-predictor]             If-None-Match, etc. If None, pool headers are used. If provided,
[e2e-predictor]             these headers completely replace any pool-specific headers.
[e2e-predictor]
[e2e-predictor]         :param retries:
[e2e-predictor]             Configure the number of retries to allow before raising a
[e2e-predictor]             :class:`~urllib3.exceptions.MaxRetryError` exception.
[e2e-predictor]
[e2e-predictor]             Pass ``None`` to retry until you receive a response. Pass a
[e2e-predictor]             :class:`~urllib3.util.retry.Retry` object for fine-grained control
[e2e-predictor]             over different types of retries.
[e2e-predictor]             Pass an integer number to retry connection errors that many times,
[e2e-predictor]             but no other types of errors. Pass zero to never retry.
[e2e-predictor]
[e2e-predictor]             If ``False``, then retries are disabled and any exception is raised
[e2e-predictor]             immediately. Also, instead of raising a MaxRetryError on redirects,
[e2e-predictor]             the redirect response will be returned.
[e2e-predictor]
[e2e-predictor]         :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
[e2e-predictor]
[e2e-predictor]         :param timeout:
[e2e-predictor]             If specified, overrides the default timeout for this one
[e2e-predictor]             request. It may be a float (in seconds) or an instance of
[e2e-predictor]             :class:`urllib3.util.Timeout`.
[e2e-predictor]
[e2e-predictor]         :param chunked:
[e2e-predictor]             If True, urllib3 will send the body using chunked transfer
[e2e-predictor]             encoding. Otherwise, urllib3 will send the body using the standard
[e2e-predictor]             content-length form. Defaults to False.
[e2e-predictor]
[e2e-predictor]         :param response_conn:
[e2e-predictor]             Set this to ``None`` if you will handle releasing the connection or
[e2e-predictor]             set the connection to have the response release it.
[e2e-predictor]
[e2e-predictor]         :param preload_content:
[e2e-predictor]             If True, the response's body will be preloaded during construction.
[e2e-predictor]
[e2e-predictor]         :param decode_content:
[e2e-predictor]             If True, will attempt to decode the body based on the
[e2e-predictor]             'content-encoding' header.
[e2e-predictor]
[e2e-predictor]         :param enforce_content_length:
[e2e-predictor]             Enforce content length checking. Body returned by server must match
[e2e-predictor]             value of Content-Length header, if present. Otherwise, raise error.
[e2e-predictor]         """
[e2e-predictor]         self.num_requests += 1
[e2e-predictor]
[e2e-predictor]         timeout_obj = self._get_timeout(timeout)
[e2e-predictor]         timeout_obj.start_connect()
[e2e-predictor]         conn.timeout = Timeout.resolve_default_timeout(timeout_obj.connect_timeout)
[e2e-predictor]
[e2e-predictor]         try:
[e2e-predictor]             # Trigger any extra validation we need to do.
[e2e-predictor]             try:
[e2e-predictor] >               self._validate_conn(conn)
[e2e-predictor]
[e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:464:
[e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
[e2e-predictor]
[e2e-predictor] self =
[e2e-predictor] conn =
[e2e-predictor]
[e2e-predictor]     def _validate_conn(self, conn: BaseHTTPConnection) -> None:
[e2e-predictor]         """
[e2e-predictor]         Called right before a request is made, after the socket is created.
[e2e-predictor]         """
[e2e-predictor]         super()._validate_conn(conn)
[e2e-predictor]
[e2e-predictor]         # Force connect early to allow us to validate the connection.
[e2e-predictor]         if conn.is_closed:
[e2e-predictor] >           conn.connect()
[e2e-predictor]
[e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:1093:
[e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
[e2e-predictor]
[e2e-predictor] self =
[e2e-predictor]
[e2e-predictor]     def connect(self) -> None:
[e2e-predictor]         # Today we don't need to be doing this step before the /actual/ socket
[e2e-predictor]         # connection, however in the future we'll need to decide whether to
[e2e-predictor]         # create a new socket or re-use an existing "shared" socket as a part
[e2e-predictor]         # of the HTTP/2 handshake dance.
[e2e-predictor]         if self._tunnel_host is not None and self._tunnel_port is not None:
[e2e-predictor]             probe_http2_host = self._tunnel_host
[e2e-predictor]             probe_http2_port = self._tunnel_port
[e2e-predictor]         else:
[e2e-predictor]             probe_http2_host = self.host
[e2e-predictor]             probe_http2_port = self.port
[e2e-predictor]
[e2e-predictor]         # Check if the target origin supports HTTP/2.
[e2e-predictor]         # If the value comes back as 'None' it means that the current thread
[e2e-predictor]         # is probing for HTTP/2 support. Otherwise, we're waiting for another
[e2e-predictor]         # probe to complete, or we get a value right away.
[e2e-predictor]         target_supports_http2: bool | None
[e2e-predictor]         if "h2" in ssl_.ALPN_PROTOCOLS:
[e2e-predictor]             target_supports_http2 = http2_probe.acquire_and_get(
[e2e-predictor]                 host=probe_http2_host, port=probe_http2_port
[e2e-predictor]             )
[e2e-predictor]         else:
[e2e-predictor]             # If HTTP/2 isn't going to be offered it doesn't matter if
[e2e-predictor]             # the target supports HTTP/2. Don't want to make a probe.
[e2e-predictor]             target_supports_http2 = False
[e2e-predictor]
[e2e-predictor]         if self._connect_callback is not None:
[e2e-predictor]             self._connect_callback(
[e2e-predictor]                 "before connect",
[e2e-predictor]                 thread_id=threading.get_ident(),
[e2e-predictor]                 target_supports_http2=target_supports_http2,
[e2e-predictor]             )
[e2e-predictor]
[e2e-predictor]         try:
[e2e-predictor]             sock: socket.socket | ssl.SSLSocket
[e2e-predictor] >           self.sock = sock = self._new_conn()
[e2e-predictor]
[e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connection.py:759:
[e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
[e2e-predictor]
[e2e-predictor] self =
[e2e-predictor]
[e2e-predictor]     def _new_conn(self) -> socket.socket:
[e2e-predictor]         """Establish a socket connection and set nodelay settings on it.
[e2e-predictor]
[e2e-predictor]         :return: New socket connection.
[e2e-predictor]         """
[e2e-predictor]         try:
[e2e-predictor]             sock = connection.create_connection(
[e2e-predictor]                 (self._dns_host, self.port),
[e2e-predictor]                 self.timeout,
[e2e-predictor]                 source_address=self.source_address,
[e2e-predictor]                 socket_options=self.socket_options,
[e2e-predictor]             )
[e2e-predictor]         except socket.gaierror as e:
[e2e-predictor] >           raise NameResolutionError(self.host, self, e) from e
[e2e-predictor] E           urllib3.exceptions.NameResolutionError: HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)
[e2e-predictor]
[e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connection.py:211: NameResolutionError
[e2e-predictor]
[e2e-predictor] The above exception was the direct cause of the following exception:
[e2e-predictor]
[e2e-predictor] rest_v2_client =
[e2e-predictor]
[e2e-predictor]     @pytest.mark.predictor
[e2e-predictor]     @pytest.mark.asyncio(scope="session")
[e2e-predictor]     async def test_sklearn_v2_mixed(rest_v2_client):
[e2e-predictor]         service_name = "isvc-sklearn-v2-mixed"
[e2e-predictor]         predictor = V1beta1PredictorSpec(
[e2e-predictor]             min_replicas=1,
[e2e-predictor]             model=V1beta1ModelSpec(
[e2e-predictor]                 model_format=V1beta1ModelFormat(
[e2e-predictor]                     name="sklearn",
[e2e-predictor]                 ),
[e2e-predictor]                 runtime="kserve-sklearnserver",
[e2e-predictor]                 storage_uri="gs://kfserving-examples/models/sklearn/1.3/mixedtype",
[e2e-predictor]                 resources=V1ResourceRequirements(
[e2e-predictor]                     requests={"cpu": "50m", "memory": "128Mi"},
[e2e-predictor]                     limits={"cpu": "100m", "memory": "512Mi"},
[e2e-predictor]                 ),
[e2e-predictor]             ),
[e2e-predictor]         )
[e2e-predictor]
[e2e-predictor]         isvc = V1beta1InferenceService(
[e2e-predictor]             api_version=constants.KSERVE_V1BETA1,
[e2e-predictor]             kind=constants.KSERVE_KIND_INFERENCESERVICE,
[e2e-predictor]             metadata=client.V1ObjectMeta(
[e2e-predictor]                 name=service_name,
[e2e-predictor]                 namespace=KSERVE_TEST_NAMESPACE,
[e2e-predictor]                 labels={
[e2e-predictor]                     constants.KSERVE_LABEL_NETWORKING_VISIBILITY: constants.KSERVE_LABEL_NETWORKING_VISIBILITY_EXPOSED,
[e2e-predictor]                 },
[e2e-predictor]             ),
[e2e-predictor]             spec=V1beta1InferenceServiceSpec(predictor=predictor),
[e2e-predictor]         )
[e2e-predictor]
[e2e-predictor]         kserve_client = KServeClient(
[e2e-predictor]             config_file=os.environ.get("KUBECONFIG", "~/.kube/config")
[e2e-predictor]         )
[e2e-predictor] >       kserve_client.create(isvc)
[e2e-predictor]
[e2e-predictor] predictor/test_sklearn.py:406:
[e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
[e2e-predictor]
[e2e-predictor] self =
[e2e-predictor] inferenceservice = {'api_version': 'serving.kserve.io/v1beta1',
[e2e-predictor]  'kind': 'InferenceService',
[e2e-predictor]  'metadata': {'annotations': None,
[e2e-predictor] ... 'worker_spec': None,
[e2e-predictor]  'xgboost': None},
[e2e-predictor]  'transformer': None},
[e2e-predictor]  'status': None}
[e2e-predictor] namespace = 'kserve-ci-e2e-test', watch = False, timeout_seconds = 600
[e2e-predictor]
[e2e-predictor]     def create(
[e2e-predictor]         self, inferenceservice, namespace=None, watch=False, timeout_seconds=600
[e2e-predictor]     ):  # pylint:disable=inconsistent-return-statements
[e2e-predictor]         """
[e2e-predictor]         Create the inference service
[e2e-predictor]         :param inferenceservice: inference service object
[e2e-predictor]         :param namespace: defaults to current or default namespace
[e2e-predictor]         :param watch: True to watch the created service until timeout elapsed or status is ready
[e2e-predictor]         :param timeout_seconds: timeout seconds for watch, default to 600s
[e2e-predictor]         :return: created inference service
[e2e-predictor]         """
[e2e-predictor]
[e2e-predictor]         version = inferenceservice.api_version.split("/")[1]
[e2e-predictor]
[e2e-predictor]         if namespace is None:
[e2e-predictor]             namespace = utils.get_isvc_namespace(inferenceservice)
[e2e-predictor]
[e2e-predictor]         try:
[e2e-predictor] >           outputs = self.api_instance.create_namespaced_custom_object(
[e2e-predictor]                 constants.KSERVE_GROUP,
[e2e-predictor]                 version,
[e2e-predictor]                 namespace,
[e2e-predictor]                 constants.KSERVE_PLURAL_INFERENCESERVICE,
[e2e-predictor]                 inferenceservice,
[e2e-predictor]             )
[e2e-predictor]
[e2e-predictor] ../../python/kserve/kserve/api/kserve_client.py:145:
[e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
[e2e-predictor]
[e2e-predictor] self =
[e2e-predictor] group = 'serving.kserve.io', version = 'v1beta1'
[e2e-predictor] namespace = 'kserve-ci-e2e-test', plural = 'inferenceservices'
[e2e-predictor] body = {'api_version': 'serving.kserve.io/v1beta1',
[e2e-predictor]  'kind': 'InferenceService',
[e2e-predictor]  'metadata': {'annotations': None,
[e2e-predictor] ... 'worker_spec': None,
[e2e-predictor]  'xgboost': None},
[e2e-predictor]  'transformer': None},
[e2e-predictor]  'status': None}
[e2e-predictor] kwargs = {'_return_http_data_only': True}
[e2e-predictor]
[e2e-predictor]     def create_namespaced_custom_object(self, group, version, namespace, plural, body, **kwargs):  # noqa: E501
[e2e-predictor]         """create_namespaced_custom_object  # noqa: E501
[e2e-predictor]
[e2e-predictor]         Creates a namespace scoped Custom object  # noqa: E501
[e2e-predictor]         This method makes a synchronous HTTP request by default. To make an
[e2e-predictor]         asynchronous HTTP request, please pass async_req=True
[e2e-predictor]         >>> thread = api.create_namespaced_custom_object(group, version, namespace, plural, body, async_req=True)
[e2e-predictor]         >>> result = thread.get()
[e2e-predictor]
[e2e-predictor]         :param async_req bool: execute request asynchronously
[e2e-predictor]         :param str group: The custom resource's group name (required)
[e2e-predictor]         :param str version: The custom resource's version (required)
[e2e-predictor]         :param str namespace: The custom resource's namespace (required)
[e2e-predictor]         :param str plural: The custom resource's plural name. For TPRs this would be lowercase plural kind. (required)
[e2e-predictor]         :param object body: The JSON schema of the Resource to create. (required)
[e2e-predictor]         :param str pretty: If 'true', then the output is pretty printed.
[e2e-predictor]         :param str dry_run: When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed
[e2e-predictor]         :param str field_manager: fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint.
[e2e-predictor]         :param str field_validation: fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. (optional)
[e2e-predictor]         :param _preload_content: if False, the urllib3.HTTPResponse object will
[e2e-predictor]                                  be returned without reading/decoding response
[e2e-predictor]                                  data. Default is True.
[e2e-predictor]         :param _request_timeout: timeout setting for this request. If one
[e2e-predictor]                                  number provided, it will be total request
[e2e-predictor]                                  timeout. It can also be a pair (tuple) of
[e2e-predictor]                                  (connection, read) timeouts.
[e2e-predictor]         :return: object
[e2e-predictor]                  If the method is called asynchronously,
[e2e-predictor]                  returns the request thread.
[e2e-predictor]         """
[e2e-predictor]         kwargs['_return_http_data_only'] = True
[e2e-predictor] >       return self.create_namespaced_custom_object_with_http_info(group, version, namespace, plural, body, **kwargs)  # noqa: E501
[e2e-predictor]
[e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api/custom_objects_api.py:231:
[e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
[e2e-predictor]
[e2e-predictor] self =
[e2e-predictor] group = 'serving.kserve.io', version = 'v1beta1'
[e2e-predictor] namespace = 'kserve-ci-e2e-test', plural = 'inferenceservices'
[e2e-predictor] body = {'api_version': 'serving.kserve.io/v1beta1',
[e2e-predictor]  'kind': 'InferenceService',
[e2e-predictor]  'metadata': {'annotations': None,
[e2e-predictor] ... 'worker_spec': None,
[e2e-predictor]  'xgboost': None},
[e2e-predictor]  'transformer': None},
[e2e-predictor]  'status': None}
[e2e-predictor] kwargs = {'_return_http_data_only': True}
[e2e-predictor] local_var_params = {'_return_http_data_only': True, 'all_params': ['group', 'version', 'namespace', 'plural', 'body', 'pretty', ...], 'au... 'worker_spec': None,
[e2e-predictor]  'xgboost': None},
[e2e-predictor]  'transformer': None},
[e2e-predictor]  'status': None}, ...}
[e2e-predictor] all_params = ['group', 'version', 'namespace', 'plural', 'body', 'pretty', ...]
[e2e-predictor] key = '_return_http_data_only', val = True, collection_formats = {}
[e2e-predictor] path_params = {'group': 'serving.kserve.io', 'namespace': 'kserve-ci-e2e-test', 'plural': 'inferenceservices', 'version': 'v1beta1'}
[e2e-predictor] query_params = []
[e2e-predictor]
[e2e-predictor]     def create_namespaced_custom_object_with_http_info(self, group, version, namespace, plural, body, **kwargs):  # noqa: E501
[e2e-predictor]         """create_namespaced_custom_object  # noqa: E501
[e2e-predictor]
[e2e-predictor]         Creates a namespace scoped Custom object  # noqa: E501
[e2e-predictor]         This method makes a synchronous HTTP request by default. To make an
[e2e-predictor]         asynchronous HTTP request, please pass async_req=True
[e2e-predictor]         >>> thread = api.create_namespaced_custom_object_with_http_info(group, version, namespace, plural, body, async_req=True)
[e2e-predictor]         >>> result = thread.get()
[e2e-predictor]
[e2e-predictor]         :param async_req bool: execute request asynchronously
[e2e-predictor]         :param str group: The custom resource's group name (required)
[e2e-predictor]         :param str version: The custom resource's version (required)
[e2e-predictor]         :param str namespace: The custom resource's namespace (required)
[e2e-predictor]         :param str plural: The custom resource's plural name. For TPRs this would be lowercase plural kind. (required)
[e2e-predictor]         :param object body: The JSON schema of the Resource to create. (required)
[e2e-predictor]         :param str pretty: If 'true', then the output is pretty printed.
[e2e-predictor]         :param str dry_run: When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed
[e2e-predictor]         :param str field_manager: fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint.
[e2e-predictor]         :param str field_validation: fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. (optional)
[e2e-predictor]         :param _return_http_data_only: response data without head status code
[e2e-predictor]                                        and headers
[e2e-predictor]         :param _preload_content: if False, the urllib3.HTTPResponse object will
[e2e-predictor]                                  be returned without reading/decoding response
[e2e-predictor]                                  data. Default is True.
[e2e-predictor]         :param _request_timeout: timeout setting for this request. If one
[e2e-predictor]                                  number provided, it will be total request
[e2e-predictor]                                  timeout. It can also be a pair (tuple) of
[e2e-predictor]                                  (connection, read) timeouts.
[e2e-predictor]         :return: tuple(object, status_code(int), headers(HTTPHeaderDict))
[e2e-predictor]                  If the method is called asynchronously,
[e2e-predictor]                  returns the request thread.
[e2e-predictor]         """
[e2e-predictor]
[e2e-predictor]         local_var_params = locals()
[e2e-predictor]
[e2e-predictor]         all_params = [
[e2e-predictor]             'group',
[e2e-predictor]             'version',
[e2e-predictor]             'namespace',
[e2e-predictor]             'plural',
[e2e-predictor]             'body',
[e2e-predictor]             'pretty',
[e2e-predictor]             'dry_run',
[e2e-predictor]             'field_manager',
[e2e-predictor]             'field_validation'
[e2e-predictor]         ]
[e2e-predictor]         all_params.extend(
[e2e-predictor]             [
[e2e-predictor]                 'async_req',
[e2e-predictor]                 '_return_http_data_only',
[e2e-predictor]                 '_preload_content',
[e2e-predictor]                 '_request_timeout'
[e2e-predictor]             ]
[e2e-predictor]         )
[e2e-predictor]
[e2e-predictor]         for key, val in six.iteritems(local_var_params['kwargs']):
[e2e-predictor]             if key not in all_params:
[e2e-predictor]                 raise ApiTypeError(
[e2e-predictor]                     "Got an unexpected keyword argument '%s'"
[e2e-predictor]                     " to method create_namespaced_custom_object" % key
[e2e-predictor]                 )
[e2e-predictor]             local_var_params[key] = val
[e2e-predictor]         del local_var_params['kwargs']
[e2e-predictor]         # verify the required parameter 'group' is set
[e2e-predictor]         if self.api_client.client_side_validation and ('group' not in local_var_params or  # noqa: E501
[e2e-predictor]                                                        local_var_params['group'] is None):  # noqa: E501
[e2e-predictor]             raise ApiValueError("Missing the required parameter `group` when calling `create_namespaced_custom_object`")  # noqa: E501
[e2e-predictor]         # verify the required parameter 'version' is set
[e2e-predictor]         if self.api_client.client_side_validation and ('version' not in local_var_params or  # noqa: E501
[e2e-predictor]                                                        local_var_params['version'] is None):  # noqa: E501
[e2e-predictor]             raise ApiValueError("Missing the required parameter `version` when calling `create_namespaced_custom_object`")  # noqa: E501
[e2e-predictor]         # verify the required parameter 'namespace' is set
[e2e-predictor]         if self.api_client.client_side_validation and ('namespace' not in local_var_params or  # noqa: E501
[e2e-predictor]                                                        local_var_params['namespace'] is None):  # noqa: E501
[e2e-predictor]             raise ApiValueError("Missing the required parameter `namespace` when calling `create_namespaced_custom_object`")  # noqa: E501
[e2e-predictor]         # verify the required parameter 'plural' is set
[e2e-predictor]         if self.api_client.client_side_validation and ('plural' not in local_var_params or  # noqa: E501
[e2e-predictor]                                                        local_var_params['plural'] is None):  # noqa: E501
[e2e-predictor]             raise ApiValueError("Missing the required parameter `plural` when calling `create_namespaced_custom_object`")  # noqa: E501
[e2e-predictor]         # verify the required parameter 'body' is set
[e2e-predictor]         if self.api_client.client_side_validation and ('body' not in local_var_params or  # noqa: E501
[e2e-predictor]                                                        local_var_params['body'] is None):  # noqa: E501
[e2e-predictor]             raise ApiValueError("Missing the required parameter `body` when calling `create_namespaced_custom_object`")  # noqa: E501
[e2e-predictor]
[e2e-predictor]         collection_formats = {}
[e2e-predictor]
[e2e-predictor]         path_params = {}
[e2e-predictor]         if 'group' in local_var_params:
[e2e-predictor]             path_params['group'] = local_var_params['group']  # noqa: E501
[e2e-predictor]         if 'version' in local_var_params:
[e2e-predictor]             path_params['version'] = local_var_params['version']  # noqa: E501
[e2e-predictor]         if 'namespace' in local_var_params:
[e2e-predictor]             path_params['namespace'] = local_var_params['namespace']  # noqa: E501
[e2e-predictor]         if 'plural' in local_var_params:
[e2e-predictor]             path_params['plural'] = local_var_params['plural']  # noqa: E501
[e2e-predictor]
[e2e-predictor]         query_params = []
[e2e-predictor]         if 'pretty' in local_var_params and local_var_params['pretty'] is not None:  # noqa: E501
[e2e-predictor]             query_params.append(('pretty', local_var_params['pretty']))  # noqa: E501
[e2e-predictor]         if 'dry_run' in local_var_params and local_var_params['dry_run'] is not None:  # noqa: E501
[e2e-predictor]             query_params.append(('dryRun', local_var_params['dry_run']))  # noqa: E501
[e2e-predictor]         if 'field_manager' in local_var_params and local_var_params['field_manager'] is not None:  # noqa: E501
[e2e-predictor]             query_params.append(('fieldManager', local_var_params['field_manager']))  # noqa: E501
[e2e-predictor]         if 'field_validation' in local_var_params and local_var_params['field_validation'] is not None:  # noqa: E501
[e2e-predictor]             query_params.append(('fieldValidation', local_var_params['field_validation']))  # noqa: E501
[e2e-predictor]
[e2e-predictor]         header_params = {}
[e2e-predictor]
[e2e-predictor]         form_params = []
[e2e-predictor]         local_var_files = {}
[e2e-predictor]
[e2e-predictor]         body_params = None
[e2e-predictor]         if 'body' in local_var_params:
[e2e-predictor]             body_params = local_var_params['body']
[e2e-predictor]         # HTTP header `Accept`
[e2e-predictor]         header_params['Accept'] = self.api_client.select_header_accept(
[e2e-predictor]             ['application/json'])  # noqa: E501
[e2e-predictor]
[e2e-predictor]         # Authentication setting
[e2e-predictor]         auth_settings = ['BearerToken']  # noqa: E501
[e2e-predictor]
[e2e-predictor] >       return self.api_client.call_api(
[e2e-predictor]             '/apis/{group}/{version}/namespaces/{namespace}/{plural}', 'POST',
[e2e-predictor]             path_params,
[e2e-predictor]             query_params,
[e2e-predictor]             header_params,
[e2e-predictor]             body=body_params,
[e2e-predictor]             post_params=form_params,
[e2e-predictor]             files=local_var_files,
[e2e-predictor]             response_type='object',  # noqa: E501
[e2e-predictor]             auth_settings=auth_settings,
[e2e-predictor]             async_req=local_var_params.get('async_req'),
[e2e-predictor]             _return_http_data_only=local_var_params.get('_return_http_data_only'),  # noqa: E501
[e2e-predictor]             _preload_content=local_var_params.get('_preload_content', True),
[e2e-predictor]             _request_timeout=local_var_params.get('_request_timeout'),
[e2e-predictor]             collection_formats=collection_formats)
[e2e-predictor]
[e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api/custom_objects_api.py:354:
[e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
[e2e-predictor]
[e2e-predictor] self =
[e2e-predictor] resource_path = '/apis/{group}/{version}/namespaces/{namespace}/{plural}'
[e2e-predictor] method = 'POST'
[e2e-predictor] path_params = {'group': 'serving.kserve.io', 'namespace': 'kserve-ci-e2e-test', 'plural': 'inferenceservices', 'version': 'v1beta1'}
[e2e-predictor] query_params = []
[e2e-predictor] header_params = {'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
[e2e-predictor] body = {'api_version': 'serving.kserve.io/v1beta1',
[e2e-predictor]  'kind': 'InferenceService',
[e2e-predictor]  'metadata': {'annotations': None,
[e2e-predictor] ... 'worker_spec': None,
[e2e-predictor]  'xgboost': None},
[e2e-predictor]  'transformer': None},
[e2e-predictor]  'status': None}
[e2e-predictor] post_params = [], files = {}, response_type = 'object'
[e2e-predictor] auth_settings = ['BearerToken'], async_req = None, _return_http_data_only = True
[e2e-predictor] collection_formats = {}, _preload_content = True, _request_timeout = None
[e2e-predictor] _host = None
[e2e-predictor]
[e2e-predictor]     def call_api(self, resource_path, method,
[e2e-predictor]                  path_params=None, query_params=None, header_params=None,
[e2e-predictor]                  body=None, post_params=None, files=None,
[e2e-predictor]                  response_type=None, auth_settings=None, async_req=None,
[e2e-predictor]                  _return_http_data_only=None, collection_formats=None,
[e2e-predictor]                  _preload_content=True, _request_timeout=None, _host=None):
[e2e-predictor]         """Makes the HTTP request (synchronous) and returns deserialized data.
[e2e-predictor]
[e2e-predictor]         To make an async_req request, set the async_req parameter.
[e2e-predictor]
[e2e-predictor]         :param resource_path: Path to method endpoint.
[e2e-predictor]         :param method: Method to call.
[e2e-predictor]         :param path_params: Path parameters in the url.
[e2e-predictor]         :param query_params: Query parameters in the url.
[e2e-predictor]         :param header_params: Header parameters to be
[e2e-predictor]             placed in the request header.
[e2e-predictor]         :param body: Request body.
[e2e-predictor]         :param post_params dict: Request post form parameters,
[e2e-predictor]             for `application/x-www-form-urlencoded`, `multipart/form-data`.
[e2e-predictor]         :param auth_settings list: Auth Settings names for the request.
[e2e-predictor]         :param response: Response data type.
[e2e-predictor]         :param files dict: key -> filename, value -> filepath,
[e2e-predictor]             for `multipart/form-data`.
[e2e-predictor]         :param async_req bool: execute request asynchronously
[e2e-predictor]         :param _return_http_data_only: response data without head status code
[e2e-predictor]             and headers
[e2e-predictor]         :param collection_formats: dict of collection formats for path, query,
[e2e-predictor]             header, and post parameters.
[e2e-predictor]         :param _preload_content: if False, the urllib3.HTTPResponse object will
[e2e-predictor]             be returned without reading/decoding response
[e2e-predictor]             data. Default is True.
[e2e-predictor]         :param _request_timeout: timeout setting for this request. If one
[e2e-predictor]             number provided, it will be total request
[e2e-predictor]             timeout. It can also be a pair (tuple) of
[e2e-predictor]             (connection, read) timeouts.
[e2e-predictor]         :return:
[e2e-predictor]             If async_req parameter is True,
[e2e-predictor]             the request will be called asynchronously.
[e2e-predictor]             The method will return the request thread.
[e2e-predictor]             If parameter async_req is False or missing,
[e2e-predictor]             then the method will return the response directly.
[e2e-predictor]         """
[e2e-predictor]         if not async_req:
[e2e-predictor] >           return self.__call_api(resource_path, method,
[e2e-predictor]                                    path_params, query_params, header_params,
[e2e-predictor]                                    body, post_params, files,
[e2e-predictor]                                    response_type, auth_settings,
[e2e-predictor]                                    _return_http_data_only, collection_formats,
[e2e-predictor]                                    _preload_content, _request_timeout, _host)
[e2e-predictor]
[e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api_client.py:348:
[e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
[e2e-predictor]
[e2e-predictor] self =
[e2e-predictor] resource_path = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
[e2e-predictor] method = 'POST'
[e2e-predictor] path_params = [('group', 'serving.kserve.io'), ('version', 'v1beta1'), ('namespace', 'kserve-ci-e2e-test'), ('plural', 'inferenceservices')]
[e2e-predictor] query_params = []
[e2e-predictor] header_params = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
[e2e-predictor] body = {'apiVersion': 'serving.kserve.io/v1beta1', 'kind': 'InferenceService', 'metadata': {'labels': {'networking.kserve.io/... '100m', 'memory': '512Mi'}, 'requests': {'cpu': '50m', 'memory': '128Mi'}}, 'runtime': 'kserve-sklearnserver', ...}}}}
[e2e-predictor] post_params = [], files = {}, response_type = 'object'
[e2e-predictor] auth_settings = ['BearerToken'], _return_http_data_only = True
[e2e-predictor] collection_formats = {}, _preload_content = True, _request_timeout = None
[e2e-predictor] _host = None
[e2e-predictor]
[e2e-predictor]     def __call_api(
[e2e-predictor]             self, resource_path, method, path_params=None,
[e2e-predictor]             query_params=None, header_params=None, body=None, post_params=None,
[e2e-predictor]             files=None, response_type=None, auth_settings=None,
[e2e-predictor]             _return_http_data_only=None, collection_formats=None,
[e2e-predictor]             _preload_content=True, _request_timeout=None, _host=None):
[e2e-predictor]
[e2e-predictor]         config = self.configuration
[e2e-predictor]
[e2e-predictor]         # header parameters
[e2e-predictor]         header_params = header_params or {}
[e2e-predictor]         header_params.update(self.default_headers)
[e2e-predictor]         if self.cookie:
[e2e-predictor]             header_params['Cookie'] = self.cookie
[e2e-predictor]         if header_params:
[e2e-predictor]             header_params = self.sanitize_for_serialization(header_params)
[e2e-predictor]             header_params = dict(self.parameters_to_tuples(header_params,
[e2e-predictor]                                                            collection_formats))
[e2e-predictor]
[e2e-predictor]         # path parameters
[e2e-predictor]         if path_params:
[e2e-predictor]             path_params = self.sanitize_for_serialization(path_params)
[e2e-predictor]             path_params = self.parameters_to_tuples(path_params,
[e2e-predictor]                                                     collection_formats)
[e2e-predictor]             for k, v in path_params:
[e2e-predictor]                 # specified safe chars, encode everything
[e2e-predictor]                 resource_path = resource_path.replace(
[e2e-predictor]                     '{%s}' % k,
[e2e-predictor]                     quote(str(v), safe=config.safe_chars_for_path_param)
[e2e-predictor]                 )
[e2e-predictor]
[e2e-predictor]         # query parameters
[e2e-predictor]         if query_params:
[e2e-predictor]             query_params = self.sanitize_for_serialization(query_params)
[e2e-predictor]             query_params = self.parameters_to_tuples(query_params,
[e2e-predictor]                                                      collection_formats)
[e2e-predictor]
[e2e-predictor]         # post parameters
[e2e-predictor]         if post_params or files:
[e2e-predictor]             post_params = post_params if post_params else []
[e2e-predictor]             post_params = self.sanitize_for_serialization(post_params)
[e2e-predictor]             post_params = self.parameters_to_tuples(post_params,
[e2e-predictor]                                                     collection_formats)
[e2e-predictor]             post_params.extend(self.files_parameters(files))
[e2e-predictor]
[e2e-predictor]         # auth setting
[e2e-predictor]         self.update_params_for_auth(header_params, query_params, auth_settings)
[e2e-predictor]
[e2e-predictor]         # body
[e2e-predictor]         if body:
[e2e-predictor]             body = self.sanitize_for_serialization(body)
[e2e-predictor]
[e2e-predictor]         # request url
[e2e-predictor]         if _host is None:
[e2e-predictor]             url = self.configuration.host + resource_path
[e2e-predictor]         else:
[e2e-predictor]             # use server/host defined in path or operation instead
[e2e-predictor]             url = _host + resource_path
[e2e-predictor]
[e2e-predictor]         # perform request and return response
[e2e-predictor] >       response_data = self.request(
[e2e-predictor]             method, url, query_params=query_params, headers=header_params,
[e2e-predictor]             post_params=post_params, body=body,
[e2e-predictor]             _preload_content=_preload_content,
[e2e-predictor]             _request_timeout=_request_timeout)
[e2e-predictor]
[e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api_client.py:180:
[e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
[e2e-predictor]
[e2e-predictor] self =
[e2e-predictor] method = 'POST'
[e2e-predictor] url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
[e2e-predictor] query_params = []
[e2e-predictor] headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
[e2e-predictor] post_params = []
[e2e-predictor] body = {'apiVersion': 'serving.kserve.io/v1beta1', 'kind': 'InferenceService', 'metadata': {'labels': {'networking.kserve.io/... '100m', 'memory': '512Mi'}, 'requests': {'cpu': '50m', 'memory': '128Mi'}}, 'runtime': 'kserve-sklearnserver', ...}}}}
[e2e-predictor] _preload_content = True, _request_timeout = None
[e2e-predictor]
[e2e-predictor]     def request(self, method, url, query_params=None, headers=None,
[e2e-predictor]                 post_params=None, body=None, _preload_content=True,
[e2e-predictor]                 _request_timeout=None):
[e2e-predictor]         """Makes the HTTP request using RESTClient."""
[e2e-predictor]         if method == "GET":
[e2e-predictor]             return self.rest_client.GET(url,
[e2e-predictor]                                         query_params=query_params,
[e2e-predictor]                                         _preload_content=_preload_content,
[e2e-predictor]                                         _request_timeout=_request_timeout,
[e2e-predictor]                                         headers=headers)
[e2e-predictor]         elif method == "HEAD":
[e2e-predictor]             return self.rest_client.HEAD(url,
[e2e-predictor]                                          query_params=query_params,
[e2e-predictor]                                          _preload_content=_preload_content,
[e2e-predictor]                                          _request_timeout=_request_timeout,
[e2e-predictor]                                          headers=headers)
[e2e-predictor]         elif method == "OPTIONS":
[e2e-predictor]             return self.rest_client.OPTIONS(url,
[e2e-predictor]                                             query_params=query_params,
[e2e-predictor]                                             headers=headers,
[e2e-predictor]                                             _preload_content=_preload_content,
[e2e-predictor]                                             _request_timeout=_request_timeout)
[e2e-predictor]         elif method == "POST":
[e2e-predictor] >           return self.rest_client.POST(url,
[e2e-predictor]                                          query_params=query_params,
[e2e-predictor]                                          headers=headers,
[e2e-predictor]                                          post_params=post_params,
[e2e-predictor]                                          _preload_content=_preload_content,
[e2e-predictor]                                          _request_timeout=_request_timeout,
[e2e-predictor]
[e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api_client.py:391:
[e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
[e2e-predictor]
[e2e-predictor] self =
[e2e-predictor] url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
[e2e-predictor] headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
[e2e-predictor] query_params = [], post_params = []
[e2e-predictor] body = {'apiVersion': 'serving.kserve.io/v1beta1', 'kind': 'InferenceService', 'metadata': {'labels': {'networking.kserve.io/... '100m', 'memory': '512Mi'}, 'requests': {'cpu': '50m', 'memory': '128Mi'}}, 'runtime': 'kserve-sklearnserver', ...}}}}
[e2e-predictor] _preload_content = True, _request_timeout = None
[e2e-predictor]
[e2e-predictor]     def POST(self, url, headers=None, query_params=None, post_params=None,
[e2e-predictor]              body=None, _preload_content=True, _request_timeout=None):
[e2e-predictor] >       return self.request("POST", url,
[e2e-predictor]                             headers=headers,
[e2e-predictor]                             query_params=query_params,
[e2e-predictor]                             post_params=post_params,
[e2e-predictor]                             _preload_content=_preload_content,
[e2e-predictor]                             _request_timeout=_request_timeout,
[e2e-predictor]                             body=body)
[e2e-predictor]
[e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/rest.py:279:
[e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
[e2e-predictor]
[e2e-predictor] self =
[e2e-predictor] method = 'POST'
[e2e-predictor] url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
[e2e-predictor] query_params = []
[e2e-predictor] headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
[e2e-predictor] body = {'apiVersion': 'serving.kserve.io/v1beta1', 'kind': 'InferenceService', 'metadata': {'labels': {'networking.kserve.io/... '100m', 'memory': '512Mi'}, 'requests': {'cpu': '50m', 'memory': '128Mi'}}, 'runtime': 'kserve-sklearnserver', ...}}}}
[e2e-predictor] post_params = {}, _preload_content = True, _request_timeout = None
[e2e-predictor]
[e2e-predictor]     def request(self, method, url, query_params=None, headers=None,
[e2e-predictor]                 body=None, post_params=None, _preload_content=True,
[e2e-predictor]                 _request_timeout=None):
[e2e-predictor]         """Perform requests.
[e2e-predictor]
[e2e-predictor]         :param method: http request method
[e2e-predictor]         :param url: http request url
[e2e-predictor]         :param query_params: query parameters in the url
[e2e-predictor]         :param headers: http request headers
[e2e-predictor]         :param body: request json body, for `application/json`
[e2e-predictor]         :param post_params: request post parameters,
[e2e-predictor]                             `application/x-www-form-urlencoded`
[e2e-predictor]                             and `multipart/form-data`
[e2e-predictor]         :param _preload_content: if False, the urllib3.HTTPResponse object will
[e2e-predictor]                                  be returned without reading/decoding response
[e2e-predictor]                                  data. Default is True.
[e2e-predictor]         :param _request_timeout: timeout setting for this request. If one
[e2e-predictor]                                  number provided, it will be total request
[e2e-predictor]                                  timeout. It can also be a pair (tuple) of
[e2e-predictor]                                  (connection, read) timeouts.
[e2e-predictor]         """
[e2e-predictor]         method = method.upper()
[e2e-predictor]         assert method in ['GET', 'HEAD', 'DELETE', 'POST', 'PUT',
[e2e-predictor]                           'PATCH', 'OPTIONS']
[e2e-predictor]
[e2e-predictor]         if post_params and body:
[e2e-predictor]             raise ApiValueError(
[e2e-predictor]                 "body parameter cannot be used with post_params parameter."
[e2e-predictor]             )
[e2e-predictor]
[e2e-predictor]         post_params = post_params or {}
[e2e-predictor]         headers = headers or {}
[e2e-predictor]
[e2e-predictor]         timeout = None
[e2e-predictor]         if _request_timeout:
[e2e-predictor]             if isinstance(_request_timeout, (int, ) if six.PY3 else (int, long)):  # noqa: E501,F821
[e2e-predictor]                 timeout = urllib3.Timeout(total=_request_timeout)
[e2e-predictor]             elif (isinstance(_request_timeout, tuple) and
[e2e-predictor]                   len(_request_timeout) == 2):
[e2e-predictor]                 timeout = urllib3.Timeout(
[e2e-predictor]                     connect=_request_timeout[0], read=_request_timeout[1])
[e2e-predictor]
[e2e-predictor]         if 'Content-Type' not in headers:
[e2e-predictor]             headers['Content-Type'] = 'application/json'
[e2e-predictor]
[e2e-predictor]         try:
[e2e-predictor]             # For `POST`, `PUT`, `PATCH`, `OPTIONS`, `DELETE`
[e2e-predictor]             if method in ['POST', 'PUT', 'PATCH', 'OPTIONS', 'DELETE']:
[e2e-predictor]                 if query_params:
[e2e-predictor]                     url += '?' + urlencode(query_params)
[e2e-predictor]                 if (re.search('json', headers['Content-Type'], re.IGNORECASE) or
[e2e-predictor]                         headers['Content-Type'] == 'application/apply-patch+yaml'):
[e2e-predictor]                     if headers['Content-Type'] == 'application/json-patch+json':
[e2e-predictor]                         if not isinstance(body, list):
[e2e-predictor]                             headers['Content-Type'] = \
[e2e-predictor]                                 'application/strategic-merge-patch+json'
[e2e-predictor]                     request_body = None
[e2e-predictor]                     if body is not None:
[e2e-predictor]                         request_body = json.dumps(body)
[e2e-predictor] >                   r = self.pool_manager.request(
[e2e-predictor]                         method, url,
[e2e-predictor]                         body=request_body,
[e2e-predictor]                         preload_content=_preload_content,
[e2e-predictor]                         timeout=timeout,
[e2e-predictor]
[e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/rest.py:172:
[e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
[e2e-predictor]
[e2e-predictor] self =
[e2e-predictor] method = 'POST'
[e2e-predictor] url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
[e2e-predictor] body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io..."128Mi"}}, "runtime": "kserve-sklearnserver", "storageUri": "gs://kfserving-examples/models/sklearn/1.3/mixedtype"}}}}'
[e2e-predictor] fields = None
[e2e-predictor] headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
[e2e-predictor] json = None
[e2e-predictor] urlopen_kw = {'body': '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking....", "storageUri": "gs://kfserving-examples/models/sklearn/1.3/mixedtype"}}}}', 'preload_content': True, 'timeout': None}
[e2e-predictor]
[e2e-predictor]     def request(
[e2e-predictor]         self,
[e2e-predictor]         method: str,
[e2e-predictor]         url: str,
[e2e-predictor]         body: _TYPE_BODY | None = None,
[e2e-predictor]         fields: _TYPE_FIELDS | None = None,
[e2e-predictor]         headers: typing.Mapping[str, str] | None = None,
[e2e-predictor]         json: typing.Any | None = None,
[e2e-predictor]         **urlopen_kw: typing.Any,
[e2e-predictor]     ) -> BaseHTTPResponse:
[e2e-predictor]         """
[e2e-predictor]         Make a request using :meth:`urlopen` with the appropriate encoding of
[e2e-predictor]         ``fields`` based on the ``method`` used.
[e2e-predictor]
[e2e-predictor]         This is a convenience method that requires the least amount of manual
[e2e-predictor]         effort. It can be used in most situations, while still having the
[e2e-predictor]         option to drop down to more specific methods when necessary, such as
[e2e-predictor]         :meth:`request_encode_url`, :meth:`request_encode_body`,
[e2e-predictor]         or even the lowest level :meth:`urlopen`.
[e2e-predictor]
[e2e-predictor]         :param method:
[e2e-predictor]             HTTP request method (such as GET, POST, PUT, etc.)
[e2e-predictor]
[e2e-predictor]         :param url:
[e2e-predictor]             The URL to perform the request on.
[e2e-predictor]
[e2e-predictor]         :param body:
[e2e-predictor]             Data to send in the request body, either :class:`str`, :class:`bytes`,
[e2e-predictor]             an iterable of :class:`str`/:class:`bytes`, or a file-like object.
[e2e-predictor]
[e2e-predictor]         :param fields:
[e2e-predictor]             Data to encode and send in the URL or request body, depending on ``method``.
[e2e-predictor]
[e2e-predictor]         :param headers:
[e2e-predictor]             Dictionary of custom headers to send, such as User-Agent,
[e2e-predictor]             If-None-Match, etc. If None, pool headers are used. If provided,
[e2e-predictor]             these headers completely replace any pool-specific headers.
[e2e-predictor]
[e2e-predictor]         :param json:
[e2e-predictor]             Data to encode and send as JSON with UTF-encoded in the request body.
[e2e-predictor]             The ``"Content-Type"`` header will be set to ``"application/json"``
[e2e-predictor]             unless specified otherwise.
[e2e-predictor]         """
[e2e-predictor]         method = method.upper()
[e2e-predictor]
[e2e-predictor]         if json is not None and body is not None:
[e2e-predictor]             raise TypeError(
[e2e-predictor]                 "request got values for both 'body' and 'json' parameters which are mutually exclusive"
[e2e-predictor]             )
[e2e-predictor]
[e2e-predictor]         if json is not None:
[e2e-predictor]             if headers is None:
[e2e-predictor]                 headers = self.headers
[e2e-predictor]
[e2e-predictor]             if not ("content-type" in map(str.lower, headers.keys())):
[e2e-predictor]                 headers = HTTPHeaderDict(headers)
[e2e-predictor]                 headers["Content-Type"] = "application/json"
[e2e-predictor]
[e2e-predictor]             body = _json.dumps(json, separators=(",", ":"), ensure_ascii=False).encode(
[e2e-predictor]                 "utf-8"
[e2e-predictor]             )
[e2e-predictor]
[e2e-predictor]         if body is not None:
[e2e-predictor]             urlopen_kw["body"] = body
[e2e-predictor]
[e2e-predictor]         if method in self._encode_url_methods:
[e2e-predictor]             return self.request_encode_url(
[e2e-predictor]                 method,
[e2e-predictor]                 url,
[e2e-predictor]                 fields=fields,  # type: ignore[arg-type]
[e2e-predictor]                 headers=headers,
[e2e-predictor]                 **urlopen_kw,
[e2e-predictor]             )
[e2e-predictor]         else:
[e2e-predictor] >           return self.request_encode_body(
[e2e-predictor]                 method, url, fields=fields, headers=headers, **urlopen_kw
[e2e-predictor]             )
[e2e-predictor]
[e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/_request_methods.py:143:
[e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
[e2e-predictor]
[e2e-predictor] self =
[e2e-predictor] method = 'POST'
[e2e-predictor] url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
[e2e-predictor] fields = None
[e2e-predictor] headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
[e2e-predictor] encode_multipart = True, multipart_boundary = None
[e2e-predictor] urlopen_kw = {'body': '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking....", "storageUri": "gs://kfserving-examples/models/sklearn/1.3/mixedtype"}}}}', 'preload_content': True, 'timeout': None}
[e2e-predictor] extra_kw = {'body': '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking....nt': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'}), 'preload_content': True, 'timeout': None}
[e2e-predictor]
[e2e-predictor]     def request_encode_body(
[e2e-predictor]         self,
[e2e-predictor]         method: str,
[e2e-predictor]         url: str,
[e2e-predictor]         fields: _TYPE_FIELDS | None = None,
[e2e-predictor]         headers: typing.Mapping[str, str] | None = None,
[e2e-predictor]         encode_multipart: bool = True,
[e2e-predictor]         multipart_boundary: str | None = None,
[e2e-predictor]         **urlopen_kw: str,
[e2e-predictor]     ) -> BaseHTTPResponse:
[e2e-predictor]         """
[e2e-predictor]         Make a request using :meth:`urlopen` with the ``fields`` encoded in
[e2e-predictor]         the body. This is useful for request methods like POST, PUT, PATCH, etc.
[e2e-predictor]
[e2e-predictor]         When ``encode_multipart=True`` (default), then
[e2e-predictor]         :func:`urllib3.encode_multipart_formdata` is used to encode
[e2e-predictor]         the payload with the appropriate content type. Otherwise
[e2e-predictor]         :func:`urllib.parse.urlencode` is used with the
[e2e-predictor]         'application/x-www-form-urlencoded' content type.
[e2e-predictor]
[e2e-predictor]         Multipart encoding must be used when posting files, and it's reasonably
[e2e-predictor]         safe to use it in other times too. However, it may break request
[e2e-predictor]         signing, such as with OAuth.
[e2e-predictor]
[e2e-predictor]         Supports an optional ``fields`` parameter of key/value strings AND
[e2e-predictor]         key/filetuple. A filetuple is a (filename, data, MIME type) tuple where
[e2e-predictor]         the MIME type is optional. For example::
[e2e-predictor]
[e2e-predictor]             fields = {
[e2e-predictor]                 'foo': 'bar',
[e2e-predictor]                 'fakefile': ('foofile.txt', 'contents of foofile'),
[e2e-predictor]                 'realfile': ('barfile.txt', open('realfile').read()),
[e2e-predictor]                 'typedfile': ('bazfile.bin', open('bazfile').read(),
[e2e-predictor]                               'image/jpeg'),
[e2e-predictor]                 'nonamefile': 'contents of nonamefile field',
[e2e-predictor]             }
[e2e-predictor]
[e2e-predictor]         When uploading a file, providing a filename (the first parameter of the
[e2e-predictor]         tuple) is optional but recommended to best mimic behavior of browsers.
[e2e-predictor]
[e2e-predictor]         Note that if ``headers`` are supplied, the 'Content-Type' header will
[e2e-predictor]         be overwritten because it depends on the dynamic random boundary string
[e2e-predictor]         which is used to compose the body of the request. The random boundary
[e2e-predictor]         string can be explicitly set with the ``multipart_boundary`` parameter.
[e2e-predictor]
[e2e-predictor]         :param method:
[e2e-predictor]             HTTP request method (such as GET, POST, PUT, etc.)
[e2e-predictor]
[e2e-predictor]         :param url:
[e2e-predictor]             The URL to perform the request on.
[e2e-predictor]
[e2e-predictor]         :param fields:
[e2e-predictor]             Data to encode and send in the request body.
[e2e-predictor]
[e2e-predictor]         :param headers:
[e2e-predictor]             Dictionary of custom headers to send, such as User-Agent,
[e2e-predictor]             If-None-Match, etc. If None, pool headers are used. If provided,
[e2e-predictor]             these headers completely replace any pool-specific headers.
[e2e-predictor]
[e2e-predictor]         :param encode_multipart:
[e2e-predictor]             If True, encode the ``fields`` using the multipart/form-data MIME
[e2e-predictor]             format.
[e2e-predictor]
[e2e-predictor]         :param multipart_boundary:
[e2e-predictor]             If not specified, then a random boundary will be generated using
[e2e-predictor]             :func:`urllib3.filepost.choose_boundary`.
[e2e-predictor]         """
[e2e-predictor]         if headers is None:
[e2e-predictor]             headers = self.headers
[e2e-predictor]
[e2e-predictor]         extra_kw: dict[str, typing.Any] = {"headers": HTTPHeaderDict(headers)}
[e2e-predictor]         body: bytes | str
[e2e-predictor]
[e2e-predictor]         if fields:
[e2e-predictor]             if "body" in urlopen_kw:
[e2e-predictor]                 raise TypeError(
[e2e-predictor]                     "request got values for both 'fields' and 'body', can only specify one."
[e2e-predictor]                 )
[e2e-predictor]
[e2e-predictor]             if encode_multipart:
[e2e-predictor]                 body, content_type = encode_multipart_formdata(
[e2e-predictor]                     fields, boundary=multipart_boundary
[e2e-predictor]                 )
[e2e-predictor]             else:
[e2e-predictor]                 body, content_type = (
[e2e-predictor]                     urlencode(fields),  # type: ignore[arg-type]
[e2e-predictor]                     "application/x-www-form-urlencoded",
[e2e-predictor]                 )
[e2e-predictor]
[e2e-predictor]             extra_kw["body"] = body
[e2e-predictor]             extra_kw["headers"].setdefault("Content-Type", content_type)
[e2e-predictor]
[e2e-predictor]         extra_kw.update(urlopen_kw)
[e2e-predictor]
[e2e-predictor] >       return self.urlopen(method, url, **extra_kw)
[e2e-predictor]
[e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/_request_methods.py:278:
[e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
[e2e-predictor]
[e2e-predictor] self =
[e2e-predictor] method = 'POST'
[e2e-predictor] url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
[e2e-predictor] redirect = True
[e2e-predictor] kw = {'assert_same_host': False, 'body': '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata...', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'}), 'preload_content': True, ...}
[e2e-predictor] u = Url(scheme='https', auth=None, host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', p...443, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices', query=None, fragment=None)
[e2e-predictor] conn =
[e2e-predictor]
[e2e-predictor]     def urlopen(  # type: ignore[override]
[e2e-predictor]         self, method: str, url: str, redirect: bool = True, **kw: typing.Any
[e2e-predictor]     ) -> BaseHTTPResponse:
[e2e-predictor]         """
[e2e-predictor]         Same as :meth:`urllib3.HTTPConnectionPool.urlopen`
[e2e-predictor]         with custom cross-host redirect logic and only sends the request-uri
[e2e-predictor]         portion of the ``url``.
[e2e-predictor]
[e2e-predictor]         The given ``url`` parameter must be absolute, such that an appropriate
[e2e-predictor]         :class:`urllib3.connectionpool.ConnectionPool` can be chosen for it.
[e2e-predictor]         """
[e2e-predictor]         u = parse_url(url)
[e2e-predictor]
[e2e-predictor]         if u.scheme is None:
[e2e-predictor]             warnings.warn(
[e2e-predictor]                 "URLs without a scheme (ie 'https://') are deprecated and will raise an error "
[e2e-predictor]                 "in a future version of urllib3. To avoid this DeprecationWarning ensure all URLs "
[e2e-predictor]                 "start with 'https://' or 'http://'. Read more in this issue: "
[e2e-predictor]                 "https://github.com/urllib3/urllib3/issues/2920",
[e2e-predictor]                 category=DeprecationWarning,
[e2e-predictor]                 stacklevel=2,
[e2e-predictor]             )
[e2e-predictor]
[e2e-predictor]         conn = self.connection_from_host(u.host, port=u.port, scheme=u.scheme)
[e2e-predictor]
[e2e-predictor]         kw["assert_same_host"] = False
[e2e-predictor]         kw["redirect"] = False
[e2e-predictor]
[e2e-predictor]         if "headers" not in kw:
[e2e-predictor]             kw["headers"] = self.headers
[e2e-predictor]
[e2e-predictor]         if self._proxy_requires_url_absolute_form(u):
[e2e-predictor]             response = conn.urlopen(method, url, **kw)
[e2e-predictor]         else:
[e2e-predictor] >           response = conn.urlopen(method, u.request_uri, **kw)
[e2e-predictor]
[e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/poolmanager.py:457:
[e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
[e2e-predictor]
[e2e-predictor] self =
[e2e-predictor] method = 'POST'
[e2e-predictor] url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
[e2e-predictor] body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io..."128Mi"}}, "runtime": "kserve-sklearnserver", "storageUri": "gs://kfserving-examples/models/sklearn/1.3/mixedtype"}}}}'
[e2e-predictor] headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'})
[e2e-predictor] retries = Retry(total=2, connect=None, read=None, redirect=None, status=None)
[e2e-predictor] redirect = False, assert_same_host = False, timeout = None, pool_timeout = None
[e2e-predictor] release_conn = True, chunked = False, body_pos = None, preload_content = True
[e2e-predictor] decode_content = True, response_kw = {}
[e2e-predictor] parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices', query=None, fragment=None)
[e2e-predictor] destination_scheme = None, conn = None, release_this_conn = True
[e2e-predictor] http_tunnel_required = False
[e2e-predictor] err = NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.c...a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")
[e2e-predictor] clean_exit = False
[e2e-predictor]
[e2e-predictor]     def urlopen(  # type: ignore[override]
[e2e-predictor]         self,
[e2e-predictor]         method: str,
[e2e-predictor]         url: str,
[e2e-predictor]         body: _TYPE_BODY | None = None,
[e2e-predictor]         headers: typing.Mapping[str, str] | None = None,
[e2e-predictor]         retries: Retry | bool | int | None = None,
[e2e-predictor]         redirect: bool = True,
[e2e-predictor]         assert_same_host: bool = True,
[e2e-predictor]         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
[e2e-predictor]         pool_timeout: int | None = None,
[e2e-predictor]         release_conn: bool | None = None,
[e2e-predictor]         chunked: bool = False,
[e2e-predictor]         body_pos: _TYPE_BODY_POSITION | None = None,
[e2e-predictor]         preload_content: bool = True,
[e2e-predictor]         decode_content: bool = True,
[e2e-predictor]         **response_kw: typing.Any,
[e2e-predictor]     ) -> BaseHTTPResponse:
[e2e-predictor]         """
[e2e-predictor]         Get a connection from the pool and perform an HTTP request. This is the
[e2e-predictor]         lowest level call for making a request, so you'll need to specify all
[e2e-predictor]         the raw details.
[e2e-predictor]
[e2e-predictor]         .. note::
[e2e-predictor]
[e2e-predictor]            More commonly, it's appropriate to use a convenience method
[e2e-predictor]            such as :meth:`request`.
[e2e-predictor]
[e2e-predictor]         .. note::
[e2e-predictor]
[e2e-predictor]            `release_conn` will only behave as expected if
[e2e-predictor]            `preload_content=False` because we want to make
[e2e-predictor]            `preload_content=False` the default behaviour someday soon without
[e2e-predictor]            breaking backwards compatibility.
[e2e-predictor]
[e2e-predictor]         :param method:
[e2e-predictor]             HTTP request method (such as GET, POST, PUT, etc.)
[e2e-predictor]
[e2e-predictor]         :param url:
[e2e-predictor]             The URL to perform the request on.
[e2e-predictor]
[e2e-predictor]         :param body:
[e2e-predictor]             Data to send in the request body, either :class:`str`, :class:`bytes`,
[e2e-predictor]             an iterable of :class:`str`/:class:`bytes`, or a file-like object.
[e2e-predictor]
[e2e-predictor]         :param headers:
[e2e-predictor]             Dictionary of custom headers to send, such as User-Agent,
[e2e-predictor]             If-None-Match, etc. If None, pool headers are used. If provided,
[e2e-predictor]             these headers completely replace any pool-specific headers.
[e2e-predictor]
[e2e-predictor]         :param retries:
[e2e-predictor]             Configure the number of retries to allow before raising a
[e2e-predictor]             :class:`~urllib3.exceptions.MaxRetryError` exception.
[e2e-predictor]
[e2e-predictor]             If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a
[e2e-predictor]             :class:`~urllib3.util.retry.Retry` object for fine-grained control
[e2e-predictor]             over different types of retries.
[e2e-predictor]             Pass an integer number to retry connection errors that many times,
[e2e-predictor]             but no other types of errors. Pass zero to never retry.
[e2e-predictor]
[e2e-predictor]             If ``False``, then retries are disabled and any exception is raised
[e2e-predictor]             immediately. Also, instead of raising a MaxRetryError on redirects,
[e2e-predictor]             the redirect response will be returned.
[e2e-predictor]
[e2e-predictor]         :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
[e2e-predictor]
[e2e-predictor]         :param redirect:
[e2e-predictor]             If True, automatically handle redirects (status codes 301, 302,
[e2e-predictor]             303, 307, 308). Each redirect counts as a retry. Disabling retries
[e2e-predictor]             will disable redirect, too.
[e2e-predictor]
[e2e-predictor]         :param assert_same_host:
[e2e-predictor]             If ``True``, will make sure that the host of the pool requests is
[e2e-predictor]             consistent else will raise HostChangedError. When ``False``, you can
[e2e-predictor]             use the pool on an HTTP proxy and request foreign hosts.
[e2e-predictor]
[e2e-predictor]         :param timeout:
[e2e-predictor]             If specified, overrides the default timeout for this one
[e2e-predictor]             request. It may be a float (in seconds) or an instance of
[e2e-predictor]             :class:`urllib3.util.Timeout`.
[e2e-predictor]
[e2e-predictor]         :param pool_timeout:
[e2e-predictor]             If set and the pool is set to block=True, then this method will
[e2e-predictor]             block for ``pool_timeout`` seconds and raise EmptyPoolError if no
[e2e-predictor]             connection is available within the time period.
[e2e-predictor]
[e2e-predictor]         :param bool preload_content:
[e2e-predictor]             If True, the response's body will be preloaded into memory.
[e2e-predictor]
[e2e-predictor]         :param bool decode_content:
[e2e-predictor]             If True, will attempt to decode the body based on the
[e2e-predictor]             'content-encoding' header.
[e2e-predictor] [e2e-predictor] :param release_conn: [e2e-predictor] If False, then the urlopen call will not release the connection [e2e-predictor] back into the pool once a response is received (but will release if [e2e-predictor] you read the entire contents of the response such as when [e2e-predictor] `preload_content=True`). This is useful if you're not preloading [e2e-predictor] the response's content immediately. You will need to call [e2e-predictor] ``r.release_conn()`` on the response ``r`` to return the connection [e2e-predictor] back into the pool. If None, it takes the value of ``preload_content`` [e2e-predictor] which defaults to ``True``. [e2e-predictor] [e2e-predictor] :param bool chunked: [e2e-predictor] If True, urllib3 will send the body using chunked transfer [e2e-predictor] encoding. Otherwise, urllib3 will send the body using the standard [e2e-predictor] content-length form. Defaults to False. [e2e-predictor] [e2e-predictor] :param int body_pos: [e2e-predictor] Position to seek to in file-like body in the event of a retry or [e2e-predictor] redirect. Typically this won't need to be set because urllib3 will [e2e-predictor] auto-populate the value when needed. 
[e2e-predictor] """ [e2e-predictor] parsed_url = parse_url(url) [e2e-predictor] destination_scheme = parsed_url.scheme [e2e-predictor] [e2e-predictor] if headers is None: [e2e-predictor] headers = self.headers [e2e-predictor] [e2e-predictor] if not isinstance(retries, Retry): [e2e-predictor] retries = Retry.from_int(retries, redirect=redirect, default=self.retries) [e2e-predictor] [e2e-predictor] if release_conn is None: [e2e-predictor] release_conn = preload_content [e2e-predictor] [e2e-predictor] # Check host [e2e-predictor] if assert_same_host and not self.is_same_host(url): [e2e-predictor] raise HostChangedError(self, url, retries) [e2e-predictor] [e2e-predictor] # Ensure that the URL we're connecting to is properly encoded [e2e-predictor] if url.startswith("/"): [e2e-predictor] url = to_str(_encode_target(url)) [e2e-predictor] else: [e2e-predictor] url = to_str(parsed_url.url) [e2e-predictor] [e2e-predictor] conn = None [e2e-predictor] [e2e-predictor] # Track whether `conn` needs to be released before [e2e-predictor] # returning/raising/recursing. Update this variable if necessary, and [e2e-predictor] # leave `release_conn` constant throughout the function. That way, if [e2e-predictor] # the function recurses, the original value of `release_conn` will be [e2e-predictor] # passed down into the recursive call, and its value will be respected. [e2e-predictor] # [e2e-predictor] # See issue #651 [1] for details. [e2e-predictor] # [e2e-predictor] # [1] [e2e-predictor] release_this_conn = release_conn [e2e-predictor] [e2e-predictor] http_tunnel_required = connection_requires_http_tunnel( [e2e-predictor] self.proxy, self.proxy_config, destination_scheme [e2e-predictor] ) [e2e-predictor] [e2e-predictor] # Merge the proxy headers. Only done when not using HTTP CONNECT. We [e2e-predictor] # have to copy the headers dict so we can safely change it without those [e2e-predictor] # changes being reflected in anyone else's copy. 
[e2e-predictor] if not http_tunnel_required: [e2e-predictor] headers = headers.copy() # type: ignore[attr-defined] [e2e-predictor] headers.update(self.proxy_headers) # type: ignore[union-attr] [e2e-predictor] [e2e-predictor] # Must keep the exception bound to a separate variable or else Python 3 [e2e-predictor] # complains about UnboundLocalError. [e2e-predictor] err = None [e2e-predictor] [e2e-predictor] # Keep track of whether we cleanly exited the except block. This [e2e-predictor] # ensures we do proper cleanup in finally. [e2e-predictor] clean_exit = False [e2e-predictor] [e2e-predictor] # Rewind body position, if needed. Record current position [e2e-predictor] # for future rewinds in the event of a redirect/retry. [e2e-predictor] body_pos = set_file_position(body, body_pos) [e2e-predictor] [e2e-predictor] try: [e2e-predictor] # Request a connection from the queue. [e2e-predictor] timeout_obj = self._get_timeout(timeout) [e2e-predictor] conn = self._get_conn(timeout=pool_timeout) [e2e-predictor] [e2e-predictor] conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] [e2e-predictor] [e2e-predictor] # Is this a closed/new connection that requires CONNECT tunnelling? [e2e-predictor] if self.proxy is not None and http_tunnel_required and conn.is_closed: [e2e-predictor] try: [e2e-predictor] self._prepare_proxy(conn) [e2e-predictor] except (BaseSSLError, OSError, SocketTimeout) as e: [e2e-predictor] self._raise_timeout( [e2e-predictor] err=e, url=self.proxy.url, timeout_value=conn.timeout [e2e-predictor] ) [e2e-predictor] raise [e2e-predictor] [e2e-predictor] # If we're going to release the connection in ``finally:``, then [e2e-predictor] # the response doesn't need to know about the connection. Otherwise [e2e-predictor] # it will also try to release it and we'll have a double-release [e2e-predictor] # mess. 
[e2e-predictor] response_conn = conn if not release_conn else None [e2e-predictor] [e2e-predictor] # Make the request on the HTTPConnection object [e2e-predictor] response = self._make_request( [e2e-predictor] conn, [e2e-predictor] method, [e2e-predictor] url, [e2e-predictor] timeout=timeout_obj, [e2e-predictor] body=body, [e2e-predictor] headers=headers, [e2e-predictor] chunked=chunked, [e2e-predictor] retries=retries, [e2e-predictor] response_conn=response_conn, [e2e-predictor] preload_content=preload_content, [e2e-predictor] decode_content=decode_content, [e2e-predictor] **response_kw, [e2e-predictor] ) [e2e-predictor] [e2e-predictor] # Everything went great! [e2e-predictor] clean_exit = True [e2e-predictor] [e2e-predictor] except EmptyPoolError: [e2e-predictor] # Didn't get a connection from the pool, no need to clean up [e2e-predictor] clean_exit = True [e2e-predictor] release_this_conn = False [e2e-predictor] raise [e2e-predictor] [e2e-predictor] except ( [e2e-predictor] TimeoutError, [e2e-predictor] HTTPException, [e2e-predictor] OSError, [e2e-predictor] ProtocolError, [e2e-predictor] BaseSSLError, [e2e-predictor] SSLError, [e2e-predictor] CertificateError, [e2e-predictor] ProxyError, [e2e-predictor] ) as e: [e2e-predictor] # Discard the connection for these exceptions. It will be [e2e-predictor] # replaced during the next _get_conn() call. 
[e2e-predictor] clean_exit = False [e2e-predictor] new_e: Exception = e [e2e-predictor] if isinstance(e, (BaseSSLError, CertificateError)): [e2e-predictor] new_e = SSLError(e) [e2e-predictor] if isinstance( [e2e-predictor] new_e, [e2e-predictor] ( [e2e-predictor] OSError, [e2e-predictor] NewConnectionError, [e2e-predictor] TimeoutError, [e2e-predictor] SSLError, [e2e-predictor] HTTPException, [e2e-predictor] ), [e2e-predictor] ) and (conn and conn.proxy and not conn.has_connected_to_proxy): [e2e-predictor] new_e = _wrap_proxy_error(new_e, conn.proxy.scheme) [e2e-predictor] elif isinstance(new_e, (OSError, HTTPException)): [e2e-predictor] new_e = ProtocolError("Connection aborted.", new_e) [e2e-predictor] [e2e-predictor] retries = retries.increment( [e2e-predictor] method, url, error=new_e, _pool=self, _stacktrace=sys.exc_info()[2] [e2e-predictor] ) [e2e-predictor] retries.sleep() [e2e-predictor] [e2e-predictor] # Keep track of the error for the retry warning. [e2e-predictor] err = e [e2e-predictor] [e2e-predictor] finally: [e2e-predictor] if not clean_exit: [e2e-predictor] # We hit some kind of exception, handled or otherwise. We need [e2e-predictor] # to throw the connection away unless explicitly told not to. [e2e-predictor] # Close the connection, set the variable to None, and make sure [e2e-predictor] # we put the None back in the pool to avoid leaking it. [e2e-predictor] if conn: [e2e-predictor] conn.close() [e2e-predictor] conn = None [e2e-predictor] release_this_conn = True [e2e-predictor] [e2e-predictor] if release_this_conn: [e2e-predictor] # Put the connection back to be reused. If the connection is [e2e-predictor] # expired then it will be None, which will get replaced with a [e2e-predictor] # fresh connection during _get_conn. 
[e2e-predictor] self._put_conn(conn) [e2e-predictor] [e2e-predictor] if not conn: [e2e-predictor] # Try again [e2e-predictor] log.warning( [e2e-predictor] "Retrying (%r) after connection broken by '%r': %s", retries, err, url [e2e-predictor] ) [e2e-predictor] > return self.urlopen( [e2e-predictor] method, [e2e-predictor] url, [e2e-predictor] body, [e2e-predictor] headers, [e2e-predictor] retries, [e2e-predictor] redirect, [e2e-predictor] assert_same_host, [e2e-predictor] timeout=timeout, [e2e-predictor] pool_timeout=pool_timeout, [e2e-predictor] release_conn=release_conn, [e2e-predictor] chunked=chunked, [e2e-predictor] body_pos=body_pos, [e2e-predictor] preload_content=preload_content, [e2e-predictor] decode_content=decode_content, [e2e-predictor] **response_kw, [e2e-predictor] ) [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:871: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] method = 'POST' [e2e-predictor] url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices' [e2e-predictor] body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io..."128Mi"}}, "runtime": "kserve-sklearnserver", "storageUri": "gs://kfserving-examples/models/sklearn/1.3/mixedtype"}}}}' [e2e-predictor] headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'}) [e2e-predictor] retries = Retry(total=1, connect=None, read=None, redirect=None, status=None) [e2e-predictor] redirect = False, assert_same_host = False, timeout = None, pool_timeout = None [e2e-predictor] release_conn = True, chunked = False, body_pos = None, preload_content = True [e2e-predictor] decode_content = True, response_kw = {} [e2e-predictor] parsed_url = Url(scheme=None, 
auth=None, host=None, port=None, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices', query=None, fragment=None) [e2e-predictor] destination_scheme = None, conn = None, release_this_conn = True [e2e-predictor] http_tunnel_required = False [e2e-predictor] err = NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.c...a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)") [e2e-predictor] clean_exit = False [e2e-predictor] [e2e-predictor] def urlopen( # type: ignore[override] [e2e-predictor] self, [e2e-predictor] method: str, [e2e-predictor] url: str, [e2e-predictor] body: _TYPE_BODY | None = None, [e2e-predictor] headers: typing.Mapping[str, str] | None = None, [e2e-predictor] retries: Retry | bool | int | None = None, [e2e-predictor] redirect: bool = True, [e2e-predictor] assert_same_host: bool = True, [e2e-predictor] timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, [e2e-predictor] pool_timeout: int | None = None, [e2e-predictor] release_conn: bool | None = None, [e2e-predictor] chunked: bool = False, [e2e-predictor] body_pos: _TYPE_BODY_POSITION | None = None, [e2e-predictor] preload_content: bool = True, [e2e-predictor] decode_content: bool = True, [e2e-predictor] **response_kw: typing.Any, [e2e-predictor] ) -> BaseHTTPResponse: [e2e-predictor] """ [e2e-predictor] Get a connection from the pool and perform an HTTP request. This is the [e2e-predictor] lowest level call for making a request, so you'll need to specify all [e2e-predictor] the raw details. [e2e-predictor] [e2e-predictor] .. note:: [e2e-predictor] [e2e-predictor] More commonly, it's appropriate to use a convenience method [e2e-predictor] such as :meth:`request`. [e2e-predictor] [e2e-predictor] .. 
note:: [e2e-predictor] [e2e-predictor] `release_conn` will only behave as expected if [e2e-predictor] `preload_content=False` because we want to make [e2e-predictor] `preload_content=False` the default behaviour someday soon without [e2e-predictor] breaking backwards compatibility. [e2e-predictor] [e2e-predictor] :param method: [e2e-predictor] HTTP request method (such as GET, POST, PUT, etc.) [e2e-predictor] [e2e-predictor] :param url: [e2e-predictor] The URL to perform the request on. [e2e-predictor] [e2e-predictor] :param body: [e2e-predictor] Data to send in the request body, either :class:`str`, :class:`bytes`, [e2e-predictor] an iterable of :class:`str`/:class:`bytes`, or a file-like object. [e2e-predictor] [e2e-predictor] :param headers: [e2e-predictor] Dictionary of custom headers to send, such as User-Agent, [e2e-predictor] If-None-Match, etc. If None, pool headers are used. If provided, [e2e-predictor] these headers completely replace any pool-specific headers. [e2e-predictor] [e2e-predictor] :param retries: [e2e-predictor] Configure the number of retries to allow before raising a [e2e-predictor] :class:`~urllib3.exceptions.MaxRetryError` exception. [e2e-predictor] [e2e-predictor] If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a [e2e-predictor] :class:`~urllib3.util.retry.Retry` object for fine-grained control [e2e-predictor] over different types of retries. [e2e-predictor] Pass an integer number to retry connection errors that many times, [e2e-predictor] but no other types of errors. Pass zero to never retry. [e2e-predictor] [e2e-predictor] If ``False``, then retries are disabled and any exception is raised [e2e-predictor] immediately. Also, instead of raising a MaxRetryError on redirects, [e2e-predictor] the redirect response will be returned. [e2e-predictor] [e2e-predictor] :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 
[e2e-predictor] [e2e-predictor] :param redirect: [e2e-predictor] If True, automatically handle redirects (status codes 301, 302, [e2e-predictor] 303, 307, 308). Each redirect counts as a retry. Disabling retries [e2e-predictor] will disable redirect, too. [e2e-predictor] [e2e-predictor] :param assert_same_host: [e2e-predictor] If ``True``, will make sure that the host of the pool requests is [e2e-predictor] consistent else will raise HostChangedError. When ``False``, you can [e2e-predictor] use the pool on an HTTP proxy and request foreign hosts. [e2e-predictor] [e2e-predictor] :param timeout: [e2e-predictor] If specified, overrides the default timeout for this one [e2e-predictor] request. It may be a float (in seconds) or an instance of [e2e-predictor] :class:`urllib3.util.Timeout`. [e2e-predictor] [e2e-predictor] :param pool_timeout: [e2e-predictor] If set and the pool is set to block=True, then this method will [e2e-predictor] block for ``pool_timeout`` seconds and raise EmptyPoolError if no [e2e-predictor] connection is available within the time period. [e2e-predictor] [e2e-predictor] :param bool preload_content: [e2e-predictor] If True, the response's body will be preloaded into memory. [e2e-predictor] [e2e-predictor] :param bool decode_content: [e2e-predictor] If True, will attempt to decode the body based on the [e2e-predictor] 'content-encoding' header. [e2e-predictor] [e2e-predictor] :param release_conn: [e2e-predictor] If False, then the urlopen call will not release the connection [e2e-predictor] back into the pool once a response is received (but will release if [e2e-predictor] you read the entire contents of the response such as when [e2e-predictor] `preload_content=True`). This is useful if you're not preloading [e2e-predictor] the response's content immediately. You will need to call [e2e-predictor] ``r.release_conn()`` on the response ``r`` to return the connection [e2e-predictor] back into the pool. 
If None, it takes the value of ``preload_content`` [e2e-predictor] which defaults to ``True``. [e2e-predictor] [e2e-predictor] :param bool chunked: [e2e-predictor] If True, urllib3 will send the body using chunked transfer [e2e-predictor] encoding. Otherwise, urllib3 will send the body using the standard [e2e-predictor] content-length form. Defaults to False. [e2e-predictor] [e2e-predictor] :param int body_pos: [e2e-predictor] Position to seek to in file-like body in the event of a retry or [e2e-predictor] redirect. Typically this won't need to be set because urllib3 will [e2e-predictor] auto-populate the value when needed. [e2e-predictor] """ [e2e-predictor] parsed_url = parse_url(url) [e2e-predictor] destination_scheme = parsed_url.scheme [e2e-predictor] [e2e-predictor] if headers is None: [e2e-predictor] headers = self.headers [e2e-predictor] [e2e-predictor] if not isinstance(retries, Retry): [e2e-predictor] retries = Retry.from_int(retries, redirect=redirect, default=self.retries) [e2e-predictor] [e2e-predictor] if release_conn is None: [e2e-predictor] release_conn = preload_content [e2e-predictor] [e2e-predictor] # Check host [e2e-predictor] if assert_same_host and not self.is_same_host(url): [e2e-predictor] raise HostChangedError(self, url, retries) [e2e-predictor] [e2e-predictor] # Ensure that the URL we're connecting to is properly encoded [e2e-predictor] if url.startswith("/"): [e2e-predictor] url = to_str(_encode_target(url)) [e2e-predictor] else: [e2e-predictor] url = to_str(parsed_url.url) [e2e-predictor] [e2e-predictor] conn = None [e2e-predictor] [e2e-predictor] # Track whether `conn` needs to be released before [e2e-predictor] # returning/raising/recursing. Update this variable if necessary, and [e2e-predictor] # leave `release_conn` constant throughout the function. That way, if [e2e-predictor] # the function recurses, the original value of `release_conn` will be [e2e-predictor] # passed down into the recursive call, and its value will be respected. 
[e2e-predictor] # [e2e-predictor] # See issue #651 [1] for details. [e2e-predictor] # [e2e-predictor] # [1] [e2e-predictor] release_this_conn = release_conn [e2e-predictor] [e2e-predictor] http_tunnel_required = connection_requires_http_tunnel( [e2e-predictor] self.proxy, self.proxy_config, destination_scheme [e2e-predictor] ) [e2e-predictor] [e2e-predictor] # Merge the proxy headers. Only done when not using HTTP CONNECT. We [e2e-predictor] # have to copy the headers dict so we can safely change it without those [e2e-predictor] # changes being reflected in anyone else's copy. [e2e-predictor] if not http_tunnel_required: [e2e-predictor] headers = headers.copy() # type: ignore[attr-defined] [e2e-predictor] headers.update(self.proxy_headers) # type: ignore[union-attr] [e2e-predictor] [e2e-predictor] # Must keep the exception bound to a separate variable or else Python 3 [e2e-predictor] # complains about UnboundLocalError. [e2e-predictor] err = None [e2e-predictor] [e2e-predictor] # Keep track of whether we cleanly exited the except block. This [e2e-predictor] # ensures we do proper cleanup in finally. [e2e-predictor] clean_exit = False [e2e-predictor] [e2e-predictor] # Rewind body position, if needed. Record current position [e2e-predictor] # for future rewinds in the event of a redirect/retry. [e2e-predictor] body_pos = set_file_position(body, body_pos) [e2e-predictor] [e2e-predictor] try: [e2e-predictor] # Request a connection from the queue. [e2e-predictor] timeout_obj = self._get_timeout(timeout) [e2e-predictor] conn = self._get_conn(timeout=pool_timeout) [e2e-predictor] [e2e-predictor] conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] [e2e-predictor] [e2e-predictor] # Is this a closed/new connection that requires CONNECT tunnelling? 
[e2e-predictor] if self.proxy is not None and http_tunnel_required and conn.is_closed: [e2e-predictor] try: [e2e-predictor] self._prepare_proxy(conn) [e2e-predictor] except (BaseSSLError, OSError, SocketTimeout) as e: [e2e-predictor] self._raise_timeout( [e2e-predictor] err=e, url=self.proxy.url, timeout_value=conn.timeout [e2e-predictor] ) [e2e-predictor] raise [e2e-predictor] [e2e-predictor] # If we're going to release the connection in ``finally:``, then [e2e-predictor] # the response doesn't need to know about the connection. Otherwise [e2e-predictor] # it will also try to release it and we'll have a double-release [e2e-predictor] # mess. [e2e-predictor] response_conn = conn if not release_conn else None [e2e-predictor] [e2e-predictor] # Make the request on the HTTPConnection object [e2e-predictor] response = self._make_request( [e2e-predictor] conn, [e2e-predictor] method, [e2e-predictor] url, [e2e-predictor] timeout=timeout_obj, [e2e-predictor] body=body, [e2e-predictor] headers=headers, [e2e-predictor] chunked=chunked, [e2e-predictor] retries=retries, [e2e-predictor] response_conn=response_conn, [e2e-predictor] preload_content=preload_content, [e2e-predictor] decode_content=decode_content, [e2e-predictor] **response_kw, [e2e-predictor] ) [e2e-predictor] [e2e-predictor] # Everything went great! [e2e-predictor] clean_exit = True [e2e-predictor] [e2e-predictor] except EmptyPoolError: [e2e-predictor] # Didn't get a connection from the pool, no need to clean up [e2e-predictor] clean_exit = True [e2e-predictor] release_this_conn = False [e2e-predictor] raise [e2e-predictor] [e2e-predictor] except ( [e2e-predictor] TimeoutError, [e2e-predictor] HTTPException, [e2e-predictor] OSError, [e2e-predictor] ProtocolError, [e2e-predictor] BaseSSLError, [e2e-predictor] SSLError, [e2e-predictor] CertificateError, [e2e-predictor] ProxyError, [e2e-predictor] ) as e: [e2e-predictor] # Discard the connection for these exceptions. 
It will be [e2e-predictor] # replaced during the next _get_conn() call. [e2e-predictor] clean_exit = False [e2e-predictor] new_e: Exception = e [e2e-predictor] if isinstance(e, (BaseSSLError, CertificateError)): [e2e-predictor] new_e = SSLError(e) [e2e-predictor] if isinstance( [e2e-predictor] new_e, [e2e-predictor] ( [e2e-predictor] OSError, [e2e-predictor] NewConnectionError, [e2e-predictor] TimeoutError, [e2e-predictor] SSLError, [e2e-predictor] HTTPException, [e2e-predictor] ), [e2e-predictor] ) and (conn and conn.proxy and not conn.has_connected_to_proxy): [e2e-predictor] new_e = _wrap_proxy_error(new_e, conn.proxy.scheme) [e2e-predictor] elif isinstance(new_e, (OSError, HTTPException)): [e2e-predictor] new_e = ProtocolError("Connection aborted.", new_e) [e2e-predictor] [e2e-predictor] retries = retries.increment( [e2e-predictor] method, url, error=new_e, _pool=self, _stacktrace=sys.exc_info()[2] [e2e-predictor] ) [e2e-predictor] retries.sleep() [e2e-predictor] [e2e-predictor] # Keep track of the error for the retry warning. [e2e-predictor] err = e [e2e-predictor] [e2e-predictor] finally: [e2e-predictor] if not clean_exit: [e2e-predictor] # We hit some kind of exception, handled or otherwise. We need [e2e-predictor] # to throw the connection away unless explicitly told not to. [e2e-predictor] # Close the connection, set the variable to None, and make sure [e2e-predictor] # we put the None back in the pool to avoid leaking it. [e2e-predictor] if conn: [e2e-predictor] conn.close() [e2e-predictor] conn = None [e2e-predictor] release_this_conn = True [e2e-predictor] [e2e-predictor] if release_this_conn: [e2e-predictor] # Put the connection back to be reused. If the connection is [e2e-predictor] # expired then it will be None, which will get replaced with a [e2e-predictor] # fresh connection during _get_conn. 
[e2e-predictor] self._put_conn(conn) [e2e-predictor] [e2e-predictor] if not conn: [e2e-predictor] # Try again [e2e-predictor] log.warning( [e2e-predictor] "Retrying (%r) after connection broken by '%r': %s", retries, err, url [e2e-predictor] ) [e2e-predictor] > return self.urlopen( [e2e-predictor] method, [e2e-predictor] url, [e2e-predictor] body, [e2e-predictor] headers, [e2e-predictor] retries, [e2e-predictor] redirect, [e2e-predictor] assert_same_host, [e2e-predictor] timeout=timeout, [e2e-predictor] pool_timeout=pool_timeout, [e2e-predictor] release_conn=release_conn, [e2e-predictor] chunked=chunked, [e2e-predictor] body_pos=body_pos, [e2e-predictor] preload_content=preload_content, [e2e-predictor] decode_content=decode_content, [e2e-predictor] **response_kw, [e2e-predictor] ) [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:871: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] method = 'POST' [e2e-predictor] url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices' [e2e-predictor] body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io..."128Mi"}}, "runtime": "kserve-sklearnserver", "storageUri": "gs://kfserving-examples/models/sklearn/1.3/mixedtype"}}}}' [e2e-predictor] headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'}) [e2e-predictor] retries = Retry(total=0, connect=None, read=None, redirect=None, status=None) [e2e-predictor] redirect = False, assert_same_host = False, timeout = None, pool_timeout = None [e2e-predictor] release_conn = True, chunked = False, body_pos = None, preload_content = True [e2e-predictor] decode_content = True, response_kw = {} [e2e-predictor] parsed_url = Url(scheme=None, 
auth=None, host=None, port=None, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices', query=None, fragment=None) [e2e-predictor] destination_scheme = None, conn = None, release_this_conn = True [e2e-predictor] http_tunnel_required = False [e2e-predictor] err = NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.c...a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)") [e2e-predictor] clean_exit = False [e2e-predictor] [e2e-predictor] def urlopen( # type: ignore[override] [e2e-predictor] self, [e2e-predictor] method: str, [e2e-predictor] url: str, [e2e-predictor] body: _TYPE_BODY | None = None, [e2e-predictor] headers: typing.Mapping[str, str] | None = None, [e2e-predictor] retries: Retry | bool | int | None = None, [e2e-predictor] redirect: bool = True, [e2e-predictor] assert_same_host: bool = True, [e2e-predictor] timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, [e2e-predictor] pool_timeout: int | None = None, [e2e-predictor] release_conn: bool | None = None, [e2e-predictor] chunked: bool = False, [e2e-predictor] body_pos: _TYPE_BODY_POSITION | None = None, [e2e-predictor] preload_content: bool = True, [e2e-predictor] decode_content: bool = True, [e2e-predictor] **response_kw: typing.Any, [e2e-predictor] ) -> BaseHTTPResponse: [e2e-predictor] """ [e2e-predictor] Get a connection from the pool and perform an HTTP request. This is the [e2e-predictor] lowest level call for making a request, so you'll need to specify all [e2e-predictor] the raw details. [e2e-predictor] [e2e-predictor] .. note:: [e2e-predictor] [e2e-predictor] More commonly, it's appropriate to use a convenience method [e2e-predictor] such as :meth:`request`. [e2e-predictor] [e2e-predictor] .. 
note:: [e2e-predictor] [e2e-predictor] `release_conn` will only behave as expected if [e2e-predictor] `preload_content=False` because we want to make [e2e-predictor] `preload_content=False` the default behaviour someday soon without [e2e-predictor] breaking backwards compatibility. [e2e-predictor] [e2e-predictor] :param method: [e2e-predictor] HTTP request method (such as GET, POST, PUT, etc.) [e2e-predictor] [e2e-predictor] :param url: [e2e-predictor] The URL to perform the request on. [e2e-predictor] [e2e-predictor] :param body: [e2e-predictor] Data to send in the request body, either :class:`str`, :class:`bytes`, [e2e-predictor] an iterable of :class:`str`/:class:`bytes`, or a file-like object. [e2e-predictor] [e2e-predictor] :param headers: [e2e-predictor] Dictionary of custom headers to send, such as User-Agent, [e2e-predictor] If-None-Match, etc. If None, pool headers are used. If provided, [e2e-predictor] these headers completely replace any pool-specific headers. [e2e-predictor] [e2e-predictor] :param retries: [e2e-predictor] Configure the number of retries to allow before raising a [e2e-predictor] :class:`~urllib3.exceptions.MaxRetryError` exception. [e2e-predictor] [e2e-predictor] If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a [e2e-predictor] :class:`~urllib3.util.retry.Retry` object for fine-grained control [e2e-predictor] over different types of retries. [e2e-predictor] Pass an integer number to retry connection errors that many times, [e2e-predictor] but no other types of errors. Pass zero to never retry. [e2e-predictor] [e2e-predictor] If ``False``, then retries are disabled and any exception is raised [e2e-predictor] immediately. Also, instead of raising a MaxRetryError on redirects, [e2e-predictor] the redirect response will be returned. [e2e-predictor] [e2e-predictor] :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 
[e2e-predictor] [e2e-predictor] :param redirect: [e2e-predictor] If True, automatically handle redirects (status codes 301, 302, [e2e-predictor] 303, 307, 308). Each redirect counts as a retry. Disabling retries [e2e-predictor] will disable redirect, too. [e2e-predictor] [e2e-predictor] :param assert_same_host: [e2e-predictor] If ``True``, will make sure that the host of the pool requests is [e2e-predictor] consistent else will raise HostChangedError. When ``False``, you can [e2e-predictor] use the pool on an HTTP proxy and request foreign hosts. [e2e-predictor] [e2e-predictor] :param timeout: [e2e-predictor] If specified, overrides the default timeout for this one [e2e-predictor] request. It may be a float (in seconds) or an instance of [e2e-predictor] :class:`urllib3.util.Timeout`. [e2e-predictor] [e2e-predictor] :param pool_timeout: [e2e-predictor] If set and the pool is set to block=True, then this method will [e2e-predictor] block for ``pool_timeout`` seconds and raise EmptyPoolError if no [e2e-predictor] connection is available within the time period. [e2e-predictor] [e2e-predictor] :param bool preload_content: [e2e-predictor] If True, the response's body will be preloaded into memory. [e2e-predictor] [e2e-predictor] :param bool decode_content: [e2e-predictor] If True, will attempt to decode the body based on the [e2e-predictor] 'content-encoding' header. [e2e-predictor] [e2e-predictor] :param release_conn: [e2e-predictor] If False, then the urlopen call will not release the connection [e2e-predictor] back into the pool once a response is received (but will release if [e2e-predictor] you read the entire contents of the response such as when [e2e-predictor] `preload_content=True`). This is useful if you're not preloading [e2e-predictor] the response's content immediately. You will need to call [e2e-predictor] ``r.release_conn()`` on the response ``r`` to return the connection [e2e-predictor] back into the pool. 
If None, it takes the value of ``preload_content`` [e2e-predictor] which defaults to ``True``. [e2e-predictor] [e2e-predictor] :param bool chunked: [e2e-predictor] If True, urllib3 will send the body using chunked transfer [e2e-predictor] encoding. Otherwise, urllib3 will send the body using the standard [e2e-predictor] content-length form. Defaults to False. [e2e-predictor] [e2e-predictor] :param int body_pos: [e2e-predictor] Position to seek to in file-like body in the event of a retry or [e2e-predictor] redirect. Typically this won't need to be set because urllib3 will [e2e-predictor] auto-populate the value when needed. [e2e-predictor] """ [e2e-predictor] parsed_url = parse_url(url) [e2e-predictor] destination_scheme = parsed_url.scheme [e2e-predictor] [e2e-predictor] if headers is None: [e2e-predictor] headers = self.headers [e2e-predictor] [e2e-predictor] if not isinstance(retries, Retry): [e2e-predictor] retries = Retry.from_int(retries, redirect=redirect, default=self.retries) [e2e-predictor] [e2e-predictor] if release_conn is None: [e2e-predictor] release_conn = preload_content [e2e-predictor] [e2e-predictor] # Check host [e2e-predictor] if assert_same_host and not self.is_same_host(url): [e2e-predictor] raise HostChangedError(self, url, retries) [e2e-predictor] [e2e-predictor] # Ensure that the URL we're connecting to is properly encoded [e2e-predictor] if url.startswith("/"): [e2e-predictor] url = to_str(_encode_target(url)) [e2e-predictor] else: [e2e-predictor] url = to_str(parsed_url.url) [e2e-predictor] [e2e-predictor] conn = None [e2e-predictor] [e2e-predictor] # Track whether `conn` needs to be released before [e2e-predictor] # returning/raising/recursing. Update this variable if necessary, and [e2e-predictor] # leave `release_conn` constant throughout the function. That way, if [e2e-predictor] # the function recurses, the original value of `release_conn` will be [e2e-predictor] # passed down into the recursive call, and its value will be respected. 
        #
        # See issue #651 [1] for details.
        #
        # [1]
        release_this_conn = release_conn

        http_tunnel_required = connection_requires_http_tunnel(
            self.proxy, self.proxy_config, destination_scheme
        )

        # Merge the proxy headers. Only done when not using HTTP CONNECT. We
        # have to copy the headers dict so we can safely change it without those
        # changes being reflected in anyone else's copy.
        if not http_tunnel_required:
            headers = headers.copy()  # type: ignore[attr-defined]
            headers.update(self.proxy_headers)  # type: ignore[union-attr]

        # Must keep the exception bound to a separate variable or else Python 3
        # complains about UnboundLocalError.
        err = None

        # Keep track of whether we cleanly exited the except block. This
        # ensures we do proper cleanup in finally.
        clean_exit = False

        # Rewind body position, if needed. Record current position
        # for future rewinds in the event of a redirect/retry.
        body_pos = set_file_position(body, body_pos)

        try:
            # Request a connection from the queue.
            timeout_obj = self._get_timeout(timeout)
            conn = self._get_conn(timeout=pool_timeout)

            conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]

            # Is this a closed/new connection that requires CONNECT tunnelling?
            if self.proxy is not None and http_tunnel_required and conn.is_closed:
                try:
                    self._prepare_proxy(conn)
                except (BaseSSLError, OSError, SocketTimeout) as e:
                    self._raise_timeout(
                        err=e, url=self.proxy.url, timeout_value=conn.timeout
                    )
                    raise

            # If we're going to release the connection in ``finally:``, then
            # the response doesn't need to know about the connection. Otherwise
            # it will also try to release it and we'll have a double-release
            # mess.
            response_conn = conn if not release_conn else None

            # Make the request on the HTTPConnection object
            response = self._make_request(
                conn,
                method,
                url,
                timeout=timeout_obj,
                body=body,
                headers=headers,
                chunked=chunked,
                retries=retries,
                response_conn=response_conn,
                preload_content=preload_content,
                decode_content=decode_content,
                **response_kw,
            )

            # Everything went great!
            clean_exit = True

        except EmptyPoolError:
            # Didn't get a connection from the pool, no need to clean up
            clean_exit = True
            release_this_conn = False
            raise

        except (
            TimeoutError,
            HTTPException,
            OSError,
            ProtocolError,
            BaseSSLError,
            SSLError,
            CertificateError,
            ProxyError,
        ) as e:
            # Discard the connection for these exceptions. It will be
            # replaced during the next _get_conn() call.
            clean_exit = False
            new_e: Exception = e
            if isinstance(e, (BaseSSLError, CertificateError)):
                new_e = SSLError(e)
            if isinstance(
                new_e,
                (
                    OSError,
                    NewConnectionError,
                    TimeoutError,
                    SSLError,
                    HTTPException,
                ),
            ) and (conn and conn.proxy and not conn.has_connected_to_proxy):
                new_e = _wrap_proxy_error(new_e, conn.proxy.scheme)
            elif isinstance(new_e, (OSError, HTTPException)):
                new_e = ProtocolError("Connection aborted.", new_e)

            retries = retries.increment(
                method, url, error=new_e, _pool=self, _stacktrace=sys.exc_info()[2]
            )
            retries.sleep()

            # Keep track of the error for the retry warning.
            err = e

        finally:
            if not clean_exit:
                # We hit some kind of exception, handled or otherwise. We need
                # to throw the connection away unless explicitly told not to.
                # Close the connection, set the variable to None, and make sure
                # we put the None back in the pool to avoid leaking it.
                if conn:
                    conn.close()
                    conn = None
                release_this_conn = True

            if release_this_conn:
                # Put the connection back to be reused. If the connection is
                # expired then it will be None, which will get replaced with a
                # fresh connection during _get_conn.
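The retry flow captured in this frame can be condensed into a small sketch. This is a hedged illustration, not urllib3 API: `fetch_with_retries` and its parameters are invented names, the loop stands in for the recursion urlopen actually performs, and `OSError` stands in for the whole exception tuple handled above.

```python
import time

def fetch_with_retries(do_request, retries=3, backoff=0.0):
    """Loop-based sketch of urlopen's recursive retry: each connection-level
    failure consumes one unit of the retry budget, then the request is
    attempted again; once the budget is gone, the last error surfaces."""
    err = None
    for attempt in range(retries + 1):
        try:
            return do_request()
        except OSError as e:  # stand-in for the exception tuple above
            err = e
            time.sleep(backoff * attempt)  # retries.sleep() analogue
    raise RuntimeError(f"max retries exceeded: {err!r}")
```

In the log above, exactly this pattern plays out: three "Retrying (...)" warnings, then `MaxRetryError` once the budget reaches zero.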
                self._put_conn(conn)

        if not conn:
            # Try again
            log.warning(
                "Retrying (%r) after connection broken by '%r': %s", retries, err, url
            )
>           return self.urlopen(
                method,
                url,
                body,
                headers,
                retries,
                redirect,
                assert_same_host,
                timeout=timeout,
                pool_timeout=pool_timeout,
                release_conn=release_conn,
                chunked=chunked,
                body_pos=body_pos,
                preload_content=preload_content,
                decode_content=decode_content,
                **response_kw,
            )

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:871:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
method = 'POST'
url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io..."128Mi"}}, "runtime": "kserve-sklearnserver", "storageUri": "gs://kfserving-examples/models/sklearn/1.3/mixedtype"}}}}'
headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'})
retries = Retry(total=0, connect=None, read=None, redirect=None, status=None)
redirect = False, assert_same_host = False, timeout = None, pool_timeout = None
release_conn = True, chunked = False, body_pos = None, preload_content = True
decode_content = True, response_kw = {}
parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices', query=None, fragment=None)
destination_scheme = None, conn = None, release_this_conn = True
http_tunnel_required = False, err = None, clean_exit = False

    def urlopen(  # type: ignore[override]
        self,
        method: str,
        url: str,
        body: _TYPE_BODY | None = None,
        headers: typing.Mapping[str, str] | None = None,
        retries: Retry | bool | int | None = None,
        redirect: bool = True,
        assert_same_host: bool = True,
        timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
        pool_timeout: int | None = None,
        release_conn: bool | None = None,
        chunked: bool = False,
        body_pos: _TYPE_BODY_POSITION | None = None,
        preload_content: bool = True,
        decode_content: bool = True,
        **response_kw: typing.Any,
    ) -> BaseHTTPResponse:
        """
        Get a connection from the pool and perform an HTTP request. This is the
        lowest level call for making a request, so you'll need to specify all
        the raw details.

        .. note::

            More commonly, it's appropriate to use a convenience method
            such as :meth:`request`.

        .. note::

            `release_conn` will only behave as expected if
            `preload_content=False` because we want to make
            `preload_content=False` the default behaviour someday soon without
            breaking backwards compatibility.

        :param method:
            HTTP request method (such as GET, POST, PUT, etc.)

        :param url:
            The URL to perform the request on.

        :param body:
            Data to send in the request body, either :class:`str`, :class:`bytes`,
            an iterable of :class:`str`/:class:`bytes`, or a file-like object.

        :param headers:
            Dictionary of custom headers to send, such as User-Agent,
            If-None-Match, etc. If None, pool headers are used. If provided,
            these headers completely replace any pool-specific headers.

        :param retries:
            Configure the number of retries to allow before raising a
            :class:`~urllib3.exceptions.MaxRetryError` exception.

            If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a
            :class:`~urllib3.util.retry.Retry` object for fine-grained control
            over different types of retries.
            Pass an integer number to retry connection errors that many times,
            but no other types of errors. Pass zero to never retry.

            If ``False``, then retries are disabled and any exception is raised
            immediately. Also, instead of raising a MaxRetryError on redirects,
            the redirect response will be returned.

        :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.

        :param redirect:
            If True, automatically handle redirects (status codes 301, 302,
            303, 307, 308). Each redirect counts as a retry. Disabling retries
            will disable redirect, too.

        :param assert_same_host:
            If ``True``, will make sure that the host of the pool requests is
            consistent else will raise HostChangedError. When ``False``, you can
            use the pool on an HTTP proxy and request foreign hosts.

        :param timeout:
            If specified, overrides the default timeout for this one
            request. It may be a float (in seconds) or an instance of
            :class:`urllib3.util.Timeout`.

        :param pool_timeout:
            If set and the pool is set to block=True, then this method will
            block for ``pool_timeout`` seconds and raise EmptyPoolError if no
            connection is available within the time period.

        :param bool preload_content:
            If True, the response's body will be preloaded into memory.

        :param bool decode_content:
            If True, will attempt to decode the body based on the
            'content-encoding' header.

        :param release_conn:
            If False, then the urlopen call will not release the connection
            back into the pool once a response is received (but will release if
            you read the entire contents of the response such as when
            `preload_content=True`). This is useful if you're not preloading
            the response's content immediately. You will need to call
            ``r.release_conn()`` on the response ``r`` to return the connection
            back into the pool. If None, it takes the value of ``preload_content``
            which defaults to ``True``.

        :param bool chunked:
            If True, urllib3 will send the body using chunked transfer
            encoding. Otherwise, urllib3 will send the body using the standard
            content-length form. Defaults to False.

        :param int body_pos:
            Position to seek to in file-like body in the event of a retry or
            redirect. Typically this won't need to be set because urllib3 will
            auto-populate the value when needed.
        """
        parsed_url = parse_url(url)
        destination_scheme = parsed_url.scheme

        if headers is None:
            headers = self.headers

        if not isinstance(retries, Retry):
            retries = Retry.from_int(retries, redirect=redirect, default=self.retries)

        if release_conn is None:
            release_conn = preload_content

        # Check host
        if assert_same_host and not self.is_same_host(url):
            raise HostChangedError(self, url, retries)

        # Ensure that the URL we're connecting to is properly encoded
        if url.startswith("/"):
            url = to_str(_encode_target(url))
        else:
            url = to_str(parsed_url.url)

        conn = None

        # Track whether `conn` needs to be released before
        # returning/raising/recursing. Update this variable if necessary, and
        # leave `release_conn` constant throughout the function. That way, if
        # the function recurses, the original value of `release_conn` will be
        # passed down into the recursive call, and its value will be respected.
        #
        # See issue #651 [1] for details.
        #
        # [1]
        release_this_conn = release_conn

        http_tunnel_required = connection_requires_http_tunnel(
            self.proxy, self.proxy_config, destination_scheme
        )

        # Merge the proxy headers. Only done when not using HTTP CONNECT. We
        # have to copy the headers dict so we can safely change it without those
        # changes being reflected in anyone else's copy.
        if not http_tunnel_required:
            headers = headers.copy()  # type: ignore[attr-defined]
            headers.update(self.proxy_headers)  # type: ignore[union-attr]

        # Must keep the exception bound to a separate variable or else Python 3
        # complains about UnboundLocalError.
        err = None

        # Keep track of whether we cleanly exited the except block. This
        # ensures we do proper cleanup in finally.
        clean_exit = False

        # Rewind body position, if needed. Record current position
        # for future rewinds in the event of a redirect/retry.
        body_pos = set_file_position(body, body_pos)

        try:
            # Request a connection from the queue.
            timeout_obj = self._get_timeout(timeout)
            conn = self._get_conn(timeout=pool_timeout)

            conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]

            # Is this a closed/new connection that requires CONNECT tunnelling?
            if self.proxy is not None and http_tunnel_required and conn.is_closed:
                try:
                    self._prepare_proxy(conn)
                except (BaseSSLError, OSError, SocketTimeout) as e:
                    self._raise_timeout(
                        err=e, url=self.proxy.url, timeout_value=conn.timeout
                    )
                    raise

            # If we're going to release the connection in ``finally:``, then
            # the response doesn't need to know about the connection. Otherwise
            # it will also try to release it and we'll have a double-release
            # mess.
            response_conn = conn if not release_conn else None

            # Make the request on the HTTPConnection object
            response = self._make_request(
                conn,
                method,
                url,
                timeout=timeout_obj,
                body=body,
                headers=headers,
                chunked=chunked,
                retries=retries,
                response_conn=response_conn,
                preload_content=preload_content,
                decode_content=decode_content,
                **response_kw,
            )

            # Everything went great!
            clean_exit = True

        except EmptyPoolError:
            # Didn't get a connection from the pool, no need to clean up
            clean_exit = True
            release_this_conn = False
            raise

        except (
            TimeoutError,
            HTTPException,
            OSError,
            ProtocolError,
            BaseSSLError,
            SSLError,
            CertificateError,
            ProxyError,
        ) as e:
            # Discard the connection for these exceptions. It will be
            # replaced during the next _get_conn() call.
            clean_exit = False
            new_e: Exception = e
            if isinstance(e, (BaseSSLError, CertificateError)):
                new_e = SSLError(e)
            if isinstance(
                new_e,
                (
                    OSError,
                    NewConnectionError,
                    TimeoutError,
                    SSLError,
                    HTTPException,
                ),
            ) and (conn and conn.proxy and not conn.has_connected_to_proxy):
                new_e = _wrap_proxy_error(new_e, conn.proxy.scheme)
            elif isinstance(new_e, (OSError, HTTPException)):
                new_e = ProtocolError("Connection aborted.", new_e)

>           retries = retries.increment(
                method, url, error=new_e, _pool=self, _stacktrace=sys.exc_info()[2]
            )

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:841:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = Retry(total=0, connect=None, read=None, redirect=None, status=None)
method = 'POST'
url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
response = None
error = NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.c...a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")
_pool =
_stacktrace =

    def increment(
        self,
        method: str | None = None,
        url: str | None = None,
        response: BaseHTTPResponse | None = None,
        error: Exception | None = None,
        _pool: ConnectionPool | None = None,
        _stacktrace: TracebackType | None = None,
    ) -> Self:
        """Return a new Retry object with incremented retry counters.

        :param response: A response object, or None, if the server did not
            return a response.
        :type response: :class:`~urllib3.response.BaseHTTPResponse`
        :param Exception error: An error encountered during the request, or
            None if the response was received successfully.

        :return: A new ``Retry`` object.
        """
        if self.total is False and error:
            # Disabled, indicate to re-raise the error.
            raise reraise(type(error), error, _stacktrace)

        total = self.total
        if total is not None:
            total -= 1

        connect = self.connect
        read = self.read
        redirect = self.redirect
        status_count = self.status
        other = self.other
        cause = "unknown"
        status = None
        redirect_location = None

        if error and self._is_connection_error(error):
            # Connect retry?
            if connect is False:
                raise reraise(type(error), error, _stacktrace)
            elif connect is not None:
                connect -= 1

        elif error and self._is_read_error(error):
            # Read retry?
            if read is False or method is None or not self._is_method_retryable(method):
                raise reraise(type(error), error, _stacktrace)
            elif read is not None:
                read -= 1

        elif error:
            # Other retry?
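The ``total`` bookkeeping in `increment` can be reduced to a few lines. This is a toy sketch, not the real class: `increment` and `MaxRetriesExceeded` are invented names, and the genuine method additionally tracks connect/read/redirect/status counters and request history.

```python
class MaxRetriesExceeded(Exception):
    """Stand-in for urllib3.exceptions.MaxRetryError."""

def increment(total, error=None):
    """Toy version of Retry.increment's ``total`` handling: decrement the
    budget, re-raise immediately when retries are disabled (total=False),
    and raise once the budget goes negative (is_exhausted)."""
    if total is False and error:
        raise error                      # retries disabled: re-raise at once
    total -= 1
    if total < 0:                        # mirrors new_retry.is_exhausted()
        raise MaxRetriesExceeded(f"max retries exceeded: {error!r}")
    return total

# This run started with the default budget of 3; the three "Retrying (...)"
# warnings below correspond to 3 -> 2 -> 1 -> 0, and the next failure raises.
budget = 3
for _ in range(3):
    budget = increment(budget, error=OSError("Name or service not known"))
assert budget == 0
```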
            if other is not None:
                other -= 1

        elif response and response.get_redirect_location():
            # Redirect retry?
            if redirect is not None:
                redirect -= 1
            cause = "too many redirects"
            response_redirect_location = response.get_redirect_location()
            if response_redirect_location:
                redirect_location = response_redirect_location
            status = response.status

        else:
            # Incrementing because of a server error like a 500 in
            # status_forcelist and the given method is in the allowed_methods
            cause = ResponseError.GENERIC_ERROR
            if response and response.status:
                if status_count is not None:
                    status_count -= 1
                cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
                status = response.status

        history = self.history + (
            RequestHistory(method, url, error, status, redirect_location),
        )

        new_retry = self.new(
            total=total,
            connect=connect,
            read=read,
            redirect=redirect,
            status=status_count,
            other=other,
            history=history,
        )

        if new_retry.is_exhausted():
            reason = error or ResponseError(cause)
>           raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
E           urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Max retries exceeded with url: /apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices (Caused by NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)"))

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError
------------------------------ Captured log call -------------------------------
WARNING urllib3.connectionpool:connectionpool.py:868 Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")': /apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices
WARNING urllib3.connectionpool:connectionpool.py:868 Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")': /apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices
WARNING urllib3.connectionpool:connectionpool.py:868 Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")': /apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices
____________________________ test_tensorflow_kserve ____________________________
[gw0] linux -- Python 3.11.13 /workspace/source/python/kserve/.venv/bin/python

self =

    def _new_conn(self) -> socket.socket:
        """Establish a socket connection and set nodelay settings on it.

        :return: New socket connection.
        """
        try:
>           sock = connection.create_connection(
                (self._dns_host, self.port),
                self.timeout,
                source_address=self.source_address,
                socket_options=self.socket_options,
            )

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connection.py:204:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

address = ('a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', 6443)
timeout = None, source_address = None, socket_options = [(6, 1, 1)]

    def create_connection(
        address: tuple[str, int],
        timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
        source_address: tuple[str, int] | None = None,
        socket_options: _TYPE_SOCKET_OPTIONS | None = None,
    ) -> socket.socket:
        """Connect to *address* and return the socket object.

        Convenience function. Connect to *address* (a 2-tuple ``(host,
        port)``) and return the socket object. Passing the optional
        *timeout* parameter will set the timeout on the socket instance
        before attempting to connect. If no *timeout* is supplied, the
        global default timeout setting returned by :func:`socket.getdefaulttimeout`
        is used. If *source_address* is set it must be a tuple of (host, port)
        for the socket to bind as a source address before making the connection.
        An host of '' or port 0 tells the OS to use the default.
        """

        host, port = address
        if host.startswith("["):
            host = host.strip("[]")
        err = None

        # Using the value from allowed_gai_family() in the context of getaddrinfo lets
        # us select whether to work with IPv4 DNS records, IPv6 records, or both.
        # The original create_connection function always returns all records.
        family = allowed_gai_family()

        try:
            host.encode("idna")
        except UnicodeError:
            raise LocationParseError(f"'{host}', label empty or too long") from None

>       for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/util/connection.py:60:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

host = 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com'
port = 6443, family =
type = , proto = 0, flags = 0

    def getaddrinfo(host, port, family=0, type=0, proto=0, flags=0):
        """Resolve host and port into list of address info entries.

        Translate the host/port argument into a sequence of 5-tuples that contain
        all the necessary arguments for creating a socket connected to that service.
[e2e-predictor] host is a domain name, a string representation of an IPv4/v6 address or [e2e-predictor] None. port is a string service name such as 'http', a numeric port number or [e2e-predictor] None. By passing None as the value of host and port, you can pass NULL to [e2e-predictor] the underlying C API. [e2e-predictor] [e2e-predictor] The family, type and proto arguments can be optionally specified in order to [e2e-predictor] narrow the list of addresses returned. Passing zero as a value for each of [e2e-predictor] these arguments selects the full range of results. [e2e-predictor] """ [e2e-predictor] # We override this function since we want to translate the numeric family [e2e-predictor] # and socket type values to enum constants. [e2e-predictor] addrlist = [] [e2e-predictor] > for res in _socket.getaddrinfo(host, port, family, type, proto, flags): [e2e-predictor] E socket.gaierror: [Errno -2] Name or service not known [e2e-predictor] [e2e-predictor] /usr/lib64/python3.11/socket.py:974: gaierror [e2e-predictor] [e2e-predictor] The above exception was the direct cause of the following exception: [e2e-predictor] [e2e-predictor] self = [e2e-predictor] method = 'POST' [e2e-predictor] url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices' [e2e-predictor] body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io... 
"requests": {"cpu": "10m", "memory": "256Mi"}}, "storageUri": "gs://kfserving-examples/models/tensorflow/flowers"}}}}' [e2e-predictor] headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'}) [e2e-predictor] retries = Retry(total=0, connect=None, read=None, redirect=None, status=None) [e2e-predictor] redirect = False, assert_same_host = False, timeout = None, pool_timeout = None [e2e-predictor] release_conn = True, chunked = False, body_pos = None, preload_content = True [e2e-predictor] decode_content = True, response_kw = {} [e2e-predictor] parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices', query=None, fragment=None) [e2e-predictor] destination_scheme = None, conn = None, release_this_conn = True [e2e-predictor] http_tunnel_required = False, err = None, clean_exit = False [e2e-predictor] [e2e-predictor] def urlopen( # type: ignore[override] [e2e-predictor] self, [e2e-predictor] method: str, [e2e-predictor] url: str, [e2e-predictor] body: _TYPE_BODY | None = None, [e2e-predictor] headers: typing.Mapping[str, str] | None = None, [e2e-predictor] retries: Retry | bool | int | None = None, [e2e-predictor] redirect: bool = True, [e2e-predictor] assert_same_host: bool = True, [e2e-predictor] timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, [e2e-predictor] pool_timeout: int | None = None, [e2e-predictor] release_conn: bool | None = None, [e2e-predictor] chunked: bool = False, [e2e-predictor] body_pos: _TYPE_BODY_POSITION | None = None, [e2e-predictor] preload_content: bool = True, [e2e-predictor] decode_content: bool = True, [e2e-predictor] **response_kw: typing.Any, [e2e-predictor] ) -> BaseHTTPResponse: [e2e-predictor] """ [e2e-predictor] Get a connection from the pool and perform an HTTP request. 
    This is the
    lowest level call for making a request, so you'll need to specify all
    the raw details.

    .. note::

       More commonly, it's appropriate to use a convenience method
       such as :meth:`request`.

    .. note::

       `release_conn` will only behave as expected if
       `preload_content=False` because we want to make
       `preload_content=False` the default behaviour someday soon without
       breaking backwards compatibility.

    :param method:
        HTTP request method (such as GET, POST, PUT, etc.)

    :param url:
        The URL to perform the request on.

    :param body:
        Data to send in the request body, either :class:`str`, :class:`bytes`,
        an iterable of :class:`str`/:class:`bytes`, or a file-like object.

    :param headers:
        Dictionary of custom headers to send, such as User-Agent,
        If-None-Match, etc. If None, pool headers are used. If provided,
        these headers completely replace any pool-specific headers.

    :param retries:
        Configure the number of retries to allow before raising a
        :class:`~urllib3.exceptions.MaxRetryError` exception.

        If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a
        :class:`~urllib3.util.retry.Retry` object for fine-grained control
        over different types of retries.
        Pass an integer number to retry connection errors that many times,
        but no other types of errors. Pass zero to never retry.
        If ``False``, then retries are disabled and any exception is raised
        immediately. Also, instead of raising a MaxRetryError on redirects,
        the redirect response will be returned.

    :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.

    :param redirect:
        If True, automatically handle redirects (status codes 301, 302,
        303, 307, 308). Each redirect counts as a retry. Disabling retries
        will disable redirect, too.

    :param assert_same_host:
        If ``True``, will make sure that the host of the pool requests is
        consistent else will raise HostChangedError. When ``False``, you can
        use the pool on an HTTP proxy and request foreign hosts.

    :param timeout:
        If specified, overrides the default timeout for this one
        request. It may be a float (in seconds) or an instance of
        :class:`urllib3.util.Timeout`.

    :param pool_timeout:
        If set and the pool is set to block=True, then this method will
        block for ``pool_timeout`` seconds and raise EmptyPoolError if no
        connection is available within the time period.

    :param bool preload_content:
        If True, the response's body will be preloaded into memory.

    :param bool decode_content:
        If True, will attempt to decode the body based on the
        'content-encoding' header.
    :param release_conn:
        If False, then the urlopen call will not release the connection
        back into the pool once a response is received (but will release if
        you read the entire contents of the response such as when
        `preload_content=True`). This is useful if you're not preloading
        the response's content immediately. You will need to call
        ``r.release_conn()`` on the response ``r`` to return the connection
        back into the pool. If None, it takes the value of ``preload_content``
        which defaults to ``True``.

    :param bool chunked:
        If True, urllib3 will send the body using chunked transfer
        encoding. Otherwise, urllib3 will send the body using the standard
        content-length form. Defaults to False.

    :param int body_pos:
        Position to seek to in file-like body in the event of a retry or
        redirect. Typically this won't need to be set because urllib3 will
        auto-populate the value when needed.
[e2e-predictor] """ [e2e-predictor] parsed_url = parse_url(url) [e2e-predictor] destination_scheme = parsed_url.scheme [e2e-predictor] [e2e-predictor] if headers is None: [e2e-predictor] headers = self.headers [e2e-predictor] [e2e-predictor] if not isinstance(retries, Retry): [e2e-predictor] retries = Retry.from_int(retries, redirect=redirect, default=self.retries) [e2e-predictor] [e2e-predictor] if release_conn is None: [e2e-predictor] release_conn = preload_content [e2e-predictor] [e2e-predictor] # Check host [e2e-predictor] if assert_same_host and not self.is_same_host(url): [e2e-predictor] raise HostChangedError(self, url, retries) [e2e-predictor] [e2e-predictor] # Ensure that the URL we're connecting to is properly encoded [e2e-predictor] if url.startswith("/"): [e2e-predictor] url = to_str(_encode_target(url)) [e2e-predictor] else: [e2e-predictor] url = to_str(parsed_url.url) [e2e-predictor] [e2e-predictor] conn = None [e2e-predictor] [e2e-predictor] # Track whether `conn` needs to be released before [e2e-predictor] # returning/raising/recursing. Update this variable if necessary, and [e2e-predictor] # leave `release_conn` constant throughout the function. That way, if [e2e-predictor] # the function recurses, the original value of `release_conn` will be [e2e-predictor] # passed down into the recursive call, and its value will be respected. [e2e-predictor] # [e2e-predictor] # See issue #651 [1] for details. [e2e-predictor] # [e2e-predictor] # [1] [e2e-predictor] release_this_conn = release_conn [e2e-predictor] [e2e-predictor] http_tunnel_required = connection_requires_http_tunnel( [e2e-predictor] self.proxy, self.proxy_config, destination_scheme [e2e-predictor] ) [e2e-predictor] [e2e-predictor] # Merge the proxy headers. Only done when not using HTTP CONNECT. We [e2e-predictor] # have to copy the headers dict so we can safely change it without those [e2e-predictor] # changes being reflected in anyone else's copy. 
    if not http_tunnel_required:
        headers = headers.copy()  # type: ignore[attr-defined]
        headers.update(self.proxy_headers)  # type: ignore[union-attr]

    # Must keep the exception bound to a separate variable or else Python 3
    # complains about UnboundLocalError.
    err = None

    # Keep track of whether we cleanly exited the except block. This
    # ensures we do proper cleanup in finally.
    clean_exit = False

    # Rewind body position, if needed. Record current position
    # for future rewinds in the event of a redirect/retry.
    body_pos = set_file_position(body, body_pos)

    try:
        # Request a connection from the queue.
        timeout_obj = self._get_timeout(timeout)
        conn = self._get_conn(timeout=pool_timeout)

        conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]

        # Is this a closed/new connection that requires CONNECT tunnelling?
        if self.proxy is not None and http_tunnel_required and conn.is_closed:
            try:
                self._prepare_proxy(conn)
            except (BaseSSLError, OSError, SocketTimeout) as e:
                self._raise_timeout(
                    err=e, url=self.proxy.url, timeout_value=conn.timeout
                )
                raise

        # If we're going to release the connection in ``finally:``, then
        # the response doesn't need to know about the connection. Otherwise
        # it will also try to release it and we'll have a double-release
        # mess.
        response_conn = conn if not release_conn else None

        # Make the request on the HTTPConnection object
>       response = self._make_request(
            conn,
            method,
            url,
            timeout=timeout_obj,
            body=body,
            headers=headers,
            chunked=chunked,
            retries=retries,
            response_conn=response_conn,
            preload_content=preload_content,
            decode_content=decode_content,
            **response_kw,
        )

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:787:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
conn =
method = 'POST'
url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io...
"requests": {"cpu": "10m", "memory": "256Mi"}}, "storageUri": "gs://kfserving-examples/models/tensorflow/flowers"}}}}' [e2e-predictor] headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'}) [e2e-predictor] retries = Retry(total=0, connect=None, read=None, redirect=None, status=None) [e2e-predictor] timeout = Timeout(connect=None, read=None, total=None), chunked = False [e2e-predictor] response_conn = None, preload_content = True, decode_content = True [e2e-predictor] enforce_content_length = True [e2e-predictor] [e2e-predictor] def _make_request( [e2e-predictor] self, [e2e-predictor] conn: BaseHTTPConnection, [e2e-predictor] method: str, [e2e-predictor] url: str, [e2e-predictor] body: _TYPE_BODY | None = None, [e2e-predictor] headers: typing.Mapping[str, str] | None = None, [e2e-predictor] retries: Retry | None = None, [e2e-predictor] timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, [e2e-predictor] chunked: bool = False, [e2e-predictor] response_conn: BaseHTTPConnection | None = None, [e2e-predictor] preload_content: bool = True, [e2e-predictor] decode_content: bool = True, [e2e-predictor] enforce_content_length: bool = True, [e2e-predictor] ) -> BaseHTTPResponse: [e2e-predictor] """ [e2e-predictor] Perform a request on a given urllib connection object taken from our [e2e-predictor] pool. [e2e-predictor] [e2e-predictor] :param conn: [e2e-predictor] a connection from one of our connection pools [e2e-predictor] [e2e-predictor] :param method: [e2e-predictor] HTTP request method (such as GET, POST, PUT, etc.) [e2e-predictor] [e2e-predictor] :param url: [e2e-predictor] The URL to perform the request on. [e2e-predictor] [e2e-predictor] :param body: [e2e-predictor] Data to send in the request body, either :class:`str`, :class:`bytes`, [e2e-predictor] an iterable of :class:`str`/:class:`bytes`, or a file-like object. 
    :param headers:
        Dictionary of custom headers to send, such as User-Agent,
        If-None-Match, etc. If None, pool headers are used. If provided,
        these headers completely replace any pool-specific headers.

    :param retries:
        Configure the number of retries to allow before raising a
        :class:`~urllib3.exceptions.MaxRetryError` exception.

        Pass ``None`` to retry until you receive a response. Pass a
        :class:`~urllib3.util.retry.Retry` object for fine-grained control
        over different types of retries.
        Pass an integer number to retry connection errors that many times,
        but no other types of errors. Pass zero to never retry.

        If ``False``, then retries are disabled and any exception is raised
        immediately. Also, instead of raising a MaxRetryError on redirects,
        the redirect response will be returned.

    :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.

    :param timeout:
        If specified, overrides the default timeout for this one
        request. It may be a float (in seconds) or an instance of
        :class:`urllib3.util.Timeout`.

    :param chunked:
        If True, urllib3 will send the body using chunked transfer
        encoding. Otherwise, urllib3 will send the body using the standard
        content-length form. Defaults to False.

    :param response_conn:
        Set this to ``None`` if you will handle releasing the connection or
        set the connection to have the response release it.
    :param preload_content:
        If True, the response's body will be preloaded during construction.

    :param decode_content:
        If True, will attempt to decode the body based on the
        'content-encoding' header.

    :param enforce_content_length:
        Enforce content length checking. Body returned by server must match
        value of Content-Length header, if present. Otherwise, raise error.
    """
    self.num_requests += 1

    timeout_obj = self._get_timeout(timeout)
    timeout_obj.start_connect()
    conn.timeout = Timeout.resolve_default_timeout(timeout_obj.connect_timeout)

    try:
        # Trigger any extra validation we need to do.
        try:
            self._validate_conn(conn)
        except (SocketTimeout, BaseSSLError) as e:
            self._raise_timeout(err=e, url=url, timeout_value=conn.timeout)
            raise

        # _validate_conn() starts the connection to an HTTPS proxy
        # so we need to wrap errors with 'ProxyError' here too.
    except (
        OSError,
        NewConnectionError,
        TimeoutError,
        BaseSSLError,
        CertificateError,
        SSLError,
    ) as e:
        new_e: Exception = e
        if isinstance(e, (BaseSSLError, CertificateError)):
            new_e = SSLError(e)
        # If the connection didn't successfully connect to it's proxy
        # then there
        if isinstance(
            new_e, (OSError, NewConnectionError, TimeoutError, SSLError)
        ) and (conn and conn.proxy and not conn.has_connected_to_proxy):
            new_e = _wrap_proxy_error(new_e, conn.proxy.scheme)
>       raise new_e

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:488:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
conn =
method = 'POST'
url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io...
"requests": {"cpu": "10m", "memory": "256Mi"}}, "storageUri": "gs://kfserving-examples/models/tensorflow/flowers"}}}}' [e2e-predictor] headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'}) [e2e-predictor] retries = Retry(total=0, connect=None, read=None, redirect=None, status=None) [e2e-predictor] timeout = Timeout(connect=None, read=None, total=None), chunked = False [e2e-predictor] response_conn = None, preload_content = True, decode_content = True [e2e-predictor] enforce_content_length = True [e2e-predictor] [e2e-predictor] def _make_request( [e2e-predictor] self, [e2e-predictor] conn: BaseHTTPConnection, [e2e-predictor] method: str, [e2e-predictor] url: str, [e2e-predictor] body: _TYPE_BODY | None = None, [e2e-predictor] headers: typing.Mapping[str, str] | None = None, [e2e-predictor] retries: Retry | None = None, [e2e-predictor] timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, [e2e-predictor] chunked: bool = False, [e2e-predictor] response_conn: BaseHTTPConnection | None = None, [e2e-predictor] preload_content: bool = True, [e2e-predictor] decode_content: bool = True, [e2e-predictor] enforce_content_length: bool = True, [e2e-predictor] ) -> BaseHTTPResponse: [e2e-predictor] """ [e2e-predictor] Perform a request on a given urllib connection object taken from our [e2e-predictor] pool. [e2e-predictor] [e2e-predictor] :param conn: [e2e-predictor] a connection from one of our connection pools [e2e-predictor] [e2e-predictor] :param method: [e2e-predictor] HTTP request method (such as GET, POST, PUT, etc.) [e2e-predictor] [e2e-predictor] :param url: [e2e-predictor] The URL to perform the request on. [e2e-predictor] [e2e-predictor] :param body: [e2e-predictor] Data to send in the request body, either :class:`str`, :class:`bytes`, [e2e-predictor] an iterable of :class:`str`/:class:`bytes`, or a file-like object. 
    :param headers:
        Dictionary of custom headers to send, such as User-Agent,
        If-None-Match, etc. If None, pool headers are used. If provided,
        these headers completely replace any pool-specific headers.

    :param retries:
        Configure the number of retries to allow before raising a
        :class:`~urllib3.exceptions.MaxRetryError` exception.

        Pass ``None`` to retry until you receive a response. Pass a
        :class:`~urllib3.util.retry.Retry` object for fine-grained control
        over different types of retries.
        Pass an integer number to retry connection errors that many times,
        but no other types of errors. Pass zero to never retry.

        If ``False``, then retries are disabled and any exception is raised
        immediately. Also, instead of raising a MaxRetryError on redirects,
        the redirect response will be returned.

    :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.

    :param timeout:
        If specified, overrides the default timeout for this one
        request. It may be a float (in seconds) or an instance of
        :class:`urllib3.util.Timeout`.

    :param chunked:
        If True, urllib3 will send the body using chunked transfer
        encoding. Otherwise, urllib3 will send the body using the standard
        content-length form. Defaults to False.

    :param response_conn:
        Set this to ``None`` if you will handle releasing the connection or
        set the connection to have the response release it.
    :param preload_content:
        If True, the response's body will be preloaded during construction.

    :param decode_content:
        If True, will attempt to decode the body based on the
        'content-encoding' header.

    :param enforce_content_length:
        Enforce content length checking. Body returned by server must match
        value of Content-Length header, if present. Otherwise, raise error.
    """
    self.num_requests += 1

    timeout_obj = self._get_timeout(timeout)
    timeout_obj.start_connect()
    conn.timeout = Timeout.resolve_default_timeout(timeout_obj.connect_timeout)

    try:
        # Trigger any extra validation we need to do.
        try:
>           self._validate_conn(conn)

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:464:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
conn =

def _validate_conn(self, conn: BaseHTTPConnection) -> None:
    """
    Called right before a request is made, after the socket is created.
    """
    super()._validate_conn(conn)

    # Force connect early to allow us to validate the connection.
    if conn.is_closed:
>       conn.connect()

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:1093:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =

def connect(self) -> None:
    # Today we don't need to be doing this step before the /actual/ socket
    # connection, however in the future we'll need to decide whether to
    # create a new socket or re-use an existing "shared" socket as a part
    # of the HTTP/2 handshake dance.
    if self._tunnel_host is not None and self._tunnel_port is not None:
        probe_http2_host = self._tunnel_host
        probe_http2_port = self._tunnel_port
    else:
        probe_http2_host = self.host
        probe_http2_port = self.port

    # Check if the target origin supports HTTP/2.
    # If the value comes back as 'None' it means that the current thread
    # is probing for HTTP/2 support. Otherwise, we're waiting for another
    # probe to complete, or we get a value right away.
    target_supports_http2: bool | None
    if "h2" in ssl_.ALPN_PROTOCOLS:
        target_supports_http2 = http2_probe.acquire_and_get(
            host=probe_http2_host, port=probe_http2_port
        )
    else:
        # If HTTP/2 isn't going to be offered it doesn't matter if
        # the target supports HTTP/2. Don't want to make a probe.
        target_supports_http2 = False

    if self._connect_callback is not None:
        self._connect_callback(
            "before connect",
            thread_id=threading.get_ident(),
            target_supports_http2=target_supports_http2,
        )

    try:
        sock: socket.socket | ssl.SSLSocket
>       self.sock = sock = self._new_conn()

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connection.py:759:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =

def _new_conn(self) -> socket.socket:
    """Establish a socket connection and set nodelay settings on it.

    :return: New socket connection.
    """
    try:
        sock = connection.create_connection(
            (self._dns_host, self.port),
            self.timeout,
            source_address=self.source_address,
            socket_options=self.socket_options,
        )
    except socket.gaierror as e:
>       raise NameResolutionError(self.host, self, e) from e
E       urllib3.exceptions.NameResolutionError: HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connection.py:211: NameResolutionError

The above exception was the direct cause of the following exception:

rest_v1_client =

@pytest.mark.predictor
@pytest.mark.asyncio(scope="session")
async def test_tensorflow_kserve(rest_v1_client):
    service_name = "isvc-tensorflow"
    predictor = V1beta1PredictorSpec(
        min_replicas=1,
        tensorflow=V1beta1TFServingSpec(
            storage_uri="gs://kfserving-examples/models/tensorflow/flowers",
            resources=V1ResourceRequirements(
                requests={"cpu": "10m", "memory": "256Mi"},
                limits={"cpu": "100m", "memory": "512Mi"},
            ),
        ),
    )

    isvc = V1beta1InferenceService(
        api_version=constants.KSERVE_V1BETA1,
        kind=constants.KSERVE_KIND_INFERENCESERVICE,
        metadata=client.V1ObjectMeta(
            name=service_name,
            namespace=KSERVE_TEST_NAMESPACE,
            labels={
                constants.KSERVE_LABEL_NETWORKING_VISIBILITY: constants.KSERVE_LABEL_NETWORKING_VISIBILITY_EXPOSED,
            },
        ),
        spec=V1beta1InferenceServiceSpec(predictor=predictor),
    )

    kserve_client = KServeClient(
        config_file=os.environ.get("KUBECONFIG", "~/.kube/config")
    )
>   kserve_client.create(isvc)

predictor/test_tensorflow.py:63:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
inferenceservice = {'api_version': 'serving.kserve.io/v1beta1',
 'kind': 'InferenceService',
 'metadata': {'annotations': None,
 ...
 'worker_spec': None,
 'xgboost': None},
 'transformer': None},
 'status': None}
namespace = 'kserve-ci-e2e-test', watch = False, timeout_seconds = 600

def create(
    self, inferenceservice, namespace=None, watch=False, timeout_seconds=600
):  # pylint:disable=inconsistent-return-statements
    """
    Create the inference service
    :param inferenceservice: inference service object
    :param namespace: defaults to current or default namespace
    :param watch: True to watch the created service until timeout elapsed or status is ready
    :param timeout_seconds: timeout seconds for watch, default to 600s
    :return: created inference service
    """

    version = inferenceservice.api_version.split("/")[1]

    if namespace is None:
        namespace = utils.get_isvc_namespace(inferenceservice)

    try:
>       outputs = self.api_instance.create_namespaced_custom_object(
            constants.KSERVE_GROUP,
            version,
            namespace,
            constants.KSERVE_PLURAL_INFERENCESERVICE,
            inferenceservice,
        )

../../python/kserve/kserve/api/kserve_client.py:145:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
group = 'serving.kserve.io', version = 'v1beta1'
namespace = 'kserve-ci-e2e-test', plural = 'inferenceservices'
body = {'api_version': 'serving.kserve.io/v1beta1',
 'kind': 'InferenceService',
 'metadata': {'annotations': None,
 ...
 'worker_spec': None,
 'xgboost': None},
 'transformer': None},
 'status': None}
kwargs = {'_return_http_data_only': True}

def create_namespaced_custom_object(self, group, version, namespace, plural, body, **kwargs):  # noqa: E501
    """create_namespaced_custom_object  # noqa: E501

    Creates a namespace scoped Custom object  # noqa: E501
    This method makes a synchronous HTTP request by default. To make an
    asynchronous HTTP request, please pass async_req=True
    >>> thread = api.create_namespaced_custom_object(group, version, namespace, plural, body, async_req=True)
    >>> result = thread.get()

    :param async_req bool: execute request asynchronously
    :param str group: The custom resource's group name (required)
    :param str version: The custom resource's version (required)
    :param str namespace: The custom resource's namespace (required)
    :param str plural: The custom resource's plural name. For TPRs this would be lowercase plural kind. (required)
    :param object body: The JSON schema of the Resource to create. (required)
    :param str pretty: If 'true', then the output is pretty printed.
    :param str dry_run: When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed
    :param str field_manager: fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint.
[e2e-predictor] :param str field_validation: fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. (optional) [e2e-predictor] :param _preload_content: if False, the urllib3.HTTPResponse object will [e2e-predictor] be returned without reading/decoding response [e2e-predictor] data. Default is True. [e2e-predictor] :param _request_timeout: timeout setting for this request. If one [e2e-predictor] number provided, it will be total request [e2e-predictor] timeout. It can also be a pair (tuple) of [e2e-predictor] (connection, read) timeouts. [e2e-predictor] :return: object [e2e-predictor] If the method is called asynchronously, [e2e-predictor] returns the request thread. 
[e2e-predictor] """ [e2e-predictor] kwargs['_return_http_data_only'] = True [e2e-predictor] > return self.create_namespaced_custom_object_with_http_info(group, version, namespace, plural, body, **kwargs) # noqa: E501 [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api/custom_objects_api.py:231: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] group = 'serving.kserve.io', version = 'v1beta1' [e2e-predictor] namespace = 'kserve-ci-e2e-test', plural = 'inferenceservices' [e2e-predictor] body = {'api_version': 'serving.kserve.io/v1beta1', [e2e-predictor] 'kind': 'InferenceService', [e2e-predictor] 'metadata': {'annotations': None, [e2e-predictor] ... 'worker_spec': None, [e2e-predictor] 'xgboost': None}, [e2e-predictor] 'transformer': None}, [e2e-predictor] 'status': None} [e2e-predictor] kwargs = {'_return_http_data_only': True} [e2e-predictor] local_var_params = {'_return_http_data_only': True, 'all_params': ['group', 'version', 'namespace', 'plural', 'body', 'pretty', ...], 'au... 'worker_spec': None, [e2e-predictor] 'xgboost': None}, [e2e-predictor] 'transformer': None}, [e2e-predictor] 'status': None}, ...} [e2e-predictor] all_params = ['group', 'version', 'namespace', 'plural', 'body', 'pretty', ...] 
[e2e-predictor] key = '_return_http_data_only', val = True, collection_formats = {} [e2e-predictor] path_params = {'group': 'serving.kserve.io', 'namespace': 'kserve-ci-e2e-test', 'plural': 'inferenceservices', 'version': 'v1beta1'} [e2e-predictor] query_params = [] [e2e-predictor] [e2e-predictor] def create_namespaced_custom_object_with_http_info(self, group, version, namespace, plural, body, **kwargs): # noqa: E501 [e2e-predictor] """create_namespaced_custom_object # noqa: E501 [e2e-predictor] [e2e-predictor] Creates a namespace scoped Custom object # noqa: E501 [e2e-predictor] This method makes a synchronous HTTP request by default. To make an [e2e-predictor] asynchronous HTTP request, please pass async_req=True [e2e-predictor] >>> thread = api.create_namespaced_custom_object_with_http_info(group, version, namespace, plural, body, async_req=True) [e2e-predictor] >>> result = thread.get() [e2e-predictor] [e2e-predictor] :param async_req bool: execute request asynchronously [e2e-predictor] :param str group: The custom resource's group name (required) [e2e-predictor] :param str version: The custom resource's version (required) [e2e-predictor] :param str namespace: The custom resource's namespace (required) [e2e-predictor] :param str plural: The custom resource's plural name. For TPRs this would be lowercase plural kind. (required) [e2e-predictor] :param object body: The JSON schema of the Resource to create. (required) [e2e-predictor] :param str pretty: If 'true', then the output is pretty printed. [e2e-predictor] :param str dry_run: When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed [e2e-predictor] :param str field_manager: fieldManager is a name associated with the actor or entity that is making these changes. 
The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint. [e2e-predictor] :param str field_validation: fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. (optional) [e2e-predictor] :param _return_http_data_only: response data without head status code [e2e-predictor] and headers [e2e-predictor] :param _preload_content: if False, the urllib3.HTTPResponse object will [e2e-predictor] be returned without reading/decoding response [e2e-predictor] data. Default is True. [e2e-predictor] :param _request_timeout: timeout setting for this request. If one [e2e-predictor] number provided, it will be total request [e2e-predictor] timeout. It can also be a pair (tuple) of [e2e-predictor] (connection, read) timeouts. [e2e-predictor] :return: tuple(object, status_code(int), headers(HTTPHeaderDict)) [e2e-predictor] If the method is called asynchronously, [e2e-predictor] returns the request thread. 
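Before any HTTP traffic, `create_namespaced_custom_object_with_http_info` (next frame in the trace) screens its keyword arguments against an allow-list and rejects anything unexpected client-side. A minimal sketch of that screening, assuming the parameter list shown in the trace is complete (this is an illustration, not the kubernetes client itself):

```python
# Allow-list as it appears in the traceback's all_params local.
ALL_PARAMS = [
    'group', 'version', 'namespace', 'plural', 'body',
    'pretty', 'dry_run', 'field_manager', 'field_validation',
    'async_req', '_return_http_data_only', '_preload_content',
    '_request_timeout',
]

def screen_kwargs(**kwargs):
    """Reject unknown keywords before building the request, mirroring
    the generated client's loop over local_var_params['kwargs']."""
    for key in kwargs:
        if key not in ALL_PARAMS:
            raise TypeError(
                "Got an unexpected keyword argument '%s'"
                " to method create_namespaced_custom_object" % key)
    return kwargs
```

The generated client raises its own `ApiTypeError` here; `TypeError` stands in for it in this sketch.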
[e2e-predictor] """ [e2e-predictor] [e2e-predictor] local_var_params = locals() [e2e-predictor] [e2e-predictor] all_params = [ [e2e-predictor] 'group', [e2e-predictor] 'version', [e2e-predictor] 'namespace', [e2e-predictor] 'plural', [e2e-predictor] 'body', [e2e-predictor] 'pretty', [e2e-predictor] 'dry_run', [e2e-predictor] 'field_manager', [e2e-predictor] 'field_validation' [e2e-predictor] ] [e2e-predictor] all_params.extend( [e2e-predictor] [ [e2e-predictor] 'async_req', [e2e-predictor] '_return_http_data_only', [e2e-predictor] '_preload_content', [e2e-predictor] '_request_timeout' [e2e-predictor] ] [e2e-predictor] ) [e2e-predictor] [e2e-predictor] for key, val in six.iteritems(local_var_params['kwargs']): [e2e-predictor] if key not in all_params: [e2e-predictor] raise ApiTypeError( [e2e-predictor] "Got an unexpected keyword argument '%s'" [e2e-predictor] " to method create_namespaced_custom_object" % key [e2e-predictor] ) [e2e-predictor] local_var_params[key] = val [e2e-predictor] del local_var_params['kwargs'] [e2e-predictor] # verify the required parameter 'group' is set [e2e-predictor] if self.api_client.client_side_validation and ('group' not in local_var_params or # noqa: E501 [e2e-predictor] local_var_params['group'] is None): # noqa: E501 [e2e-predictor] raise ApiValueError("Missing the required parameter `group` when calling `create_namespaced_custom_object`") # noqa: E501 [e2e-predictor] # verify the required parameter 'version' is set [e2e-predictor] if self.api_client.client_side_validation and ('version' not in local_var_params or # noqa: E501 [e2e-predictor] local_var_params['version'] is None): # noqa: E501 [e2e-predictor] raise ApiValueError("Missing the required parameter `version` when calling `create_namespaced_custom_object`") # noqa: E501 [e2e-predictor] # verify the required parameter 'namespace' is set [e2e-predictor] if self.api_client.client_side_validation and ('namespace' not in local_var_params or # noqa: E501 [e2e-predictor] 
local_var_params['namespace'] is None): # noqa: E501 [e2e-predictor] raise ApiValueError("Missing the required parameter `namespace` when calling `create_namespaced_custom_object`") # noqa: E501 [e2e-predictor] # verify the required parameter 'plural' is set [e2e-predictor] if self.api_client.client_side_validation and ('plural' not in local_var_params or # noqa: E501 [e2e-predictor] local_var_params['plural'] is None): # noqa: E501 [e2e-predictor] raise ApiValueError("Missing the required parameter `plural` when calling `create_namespaced_custom_object`") # noqa: E501 [e2e-predictor] # verify the required parameter 'body' is set [e2e-predictor] if self.api_client.client_side_validation and ('body' not in local_var_params or # noqa: E501 [e2e-predictor] local_var_params['body'] is None): # noqa: E501 [e2e-predictor] raise ApiValueError("Missing the required parameter `body` when calling `create_namespaced_custom_object`") # noqa: E501 [e2e-predictor] [e2e-predictor] collection_formats = {} [e2e-predictor] [e2e-predictor] path_params = {} [e2e-predictor] if 'group' in local_var_params: [e2e-predictor] path_params['group'] = local_var_params['group'] # noqa: E501 [e2e-predictor] if 'version' in local_var_params: [e2e-predictor] path_params['version'] = local_var_params['version'] # noqa: E501 [e2e-predictor] if 'namespace' in local_var_params: [e2e-predictor] path_params['namespace'] = local_var_params['namespace'] # noqa: E501 [e2e-predictor] if 'plural' in local_var_params: [e2e-predictor] path_params['plural'] = local_var_params['plural'] # noqa: E501 [e2e-predictor] [e2e-predictor] query_params = [] [e2e-predictor] if 'pretty' in local_var_params and local_var_params['pretty'] is not None: # noqa: E501 [e2e-predictor] query_params.append(('pretty', local_var_params['pretty'])) # noqa: E501 [e2e-predictor] if 'dry_run' in local_var_params and local_var_params['dry_run'] is not None: # noqa: E501 [e2e-predictor] query_params.append(('dryRun', 
local_var_params['dry_run'])) # noqa: E501 [e2e-predictor] if 'field_manager' in local_var_params and local_var_params['field_manager'] is not None: # noqa: E501 [e2e-predictor] query_params.append(('fieldManager', local_var_params['field_manager'])) # noqa: E501 [e2e-predictor] if 'field_validation' in local_var_params and local_var_params['field_validation'] is not None: # noqa: E501 [e2e-predictor] query_params.append(('fieldValidation', local_var_params['field_validation'])) # noqa: E501 [e2e-predictor] [e2e-predictor] header_params = {} [e2e-predictor] [e2e-predictor] form_params = [] [e2e-predictor] local_var_files = {} [e2e-predictor] [e2e-predictor] body_params = None [e2e-predictor] if 'body' in local_var_params: [e2e-predictor] body_params = local_var_params['body'] [e2e-predictor] # HTTP header `Accept` [e2e-predictor] header_params['Accept'] = self.api_client.select_header_accept( [e2e-predictor] ['application/json']) # noqa: E501 [e2e-predictor] [e2e-predictor] # Authentication setting [e2e-predictor] auth_settings = ['BearerToken'] # noqa: E501 [e2e-predictor] [e2e-predictor] > return self.api_client.call_api( [e2e-predictor] '/apis/{group}/{version}/namespaces/{namespace}/{plural}', 'POST', [e2e-predictor] path_params, [e2e-predictor] query_params, [e2e-predictor] header_params, [e2e-predictor] body=body_params, [e2e-predictor] post_params=form_params, [e2e-predictor] files=local_var_files, [e2e-predictor] response_type='object', # noqa: E501 [e2e-predictor] auth_settings=auth_settings, [e2e-predictor] async_req=local_var_params.get('async_req'), [e2e-predictor] _return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501 [e2e-predictor] _preload_content=local_var_params.get('_preload_content', True), [e2e-predictor] _request_timeout=local_var_params.get('_request_timeout'), [e2e-predictor] collection_formats=collection_formats) [e2e-predictor] [e2e-predictor] 
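The frame above maps the client's snake_case keyword arguments onto the camelCase query parameters the API server expects (`dry_run` → `dryRun`, `field_manager` → `fieldManager`, `field_validation` → `fieldValidation`), skipping anything left as `None`. A minimal sketch of that mapping, under the assumption that these four are the only query parameters this endpoint takes (as the trace shows):

```python
def build_query_params(pretty=None, dry_run=None,
                       field_manager=None, field_validation=None):
    """Return the (name, value) query-parameter tuples the generated
    client appends, with snake_case kwargs renamed to camelCase and
    None values omitted."""
    mapping = [
        ("pretty", pretty),
        ("dryRun", dry_run),
        ("fieldManager", field_manager),
        ("fieldValidation", field_validation),
    ]
    return [(name, value) for name, value in mapping if value is not None]
```

In the failing request the trace shows `query_params = []`, i.e. every optional parameter was left unset.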
../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api/custom_objects_api.py:354: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] resource_path = '/apis/{group}/{version}/namespaces/{namespace}/{plural}' [e2e-predictor] method = 'POST' [e2e-predictor] path_params = {'group': 'serving.kserve.io', 'namespace': 'kserve-ci-e2e-test', 'plural': 'inferenceservices', 'version': 'v1beta1'} [e2e-predictor] query_params = [] [e2e-predictor] header_params = {'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'} [e2e-predictor] body = {'api_version': 'serving.kserve.io/v1beta1', [e2e-predictor] 'kind': 'InferenceService', [e2e-predictor] 'metadata': {'annotations': None, [e2e-predictor] ... 'worker_spec': None, [e2e-predictor] 'xgboost': None}, [e2e-predictor] 'transformer': None}, [e2e-predictor] 'status': None} [e2e-predictor] post_params = [], files = {}, response_type = 'object' [e2e-predictor] auth_settings = ['BearerToken'], async_req = None, _return_http_data_only = True [e2e-predictor] collection_formats = {}, _preload_content = True, _request_timeout = None [e2e-predictor] _host = None [e2e-predictor] [e2e-predictor] def call_api(self, resource_path, method, [e2e-predictor] path_params=None, query_params=None, header_params=None, [e2e-predictor] body=None, post_params=None, files=None, [e2e-predictor] response_type=None, auth_settings=None, async_req=None, [e2e-predictor] _return_http_data_only=None, collection_formats=None, [e2e-predictor] _preload_content=True, _request_timeout=None, _host=None): [e2e-predictor] """Makes the HTTP request (synchronous) and returns deserialized data. [e2e-predictor] [e2e-predictor] To make an async_req request, set the async_req parameter. [e2e-predictor] [e2e-predictor] :param resource_path: Path to method endpoint. [e2e-predictor] :param method: Method to call. 
[e2e-predictor] :param path_params: Path parameters in the url. [e2e-predictor] :param query_params: Query parameters in the url. [e2e-predictor] :param header_params: Header parameters to be [e2e-predictor] placed in the request header. [e2e-predictor] :param body: Request body. [e2e-predictor] :param post_params dict: Request post form parameters, [e2e-predictor] for `application/x-www-form-urlencoded`, `multipart/form-data`. [e2e-predictor] :param auth_settings list: Auth Settings names for the request. [e2e-predictor] :param response: Response data type. [e2e-predictor] :param files dict: key -> filename, value -> filepath, [e2e-predictor] for `multipart/form-data`. [e2e-predictor] :param async_req bool: execute request asynchronously [e2e-predictor] :param _return_http_data_only: response data without head status code [e2e-predictor] and headers [e2e-predictor] :param collection_formats: dict of collection formats for path, query, [e2e-predictor] header, and post parameters. [e2e-predictor] :param _preload_content: if False, the urllib3.HTTPResponse object will [e2e-predictor] be returned without reading/decoding response [e2e-predictor] data. Default is True. [e2e-predictor] :param _request_timeout: timeout setting for this request. If one [e2e-predictor] number provided, it will be total request [e2e-predictor] timeout. It can also be a pair (tuple) of [e2e-predictor] (connection, read) timeouts. [e2e-predictor] :return: [e2e-predictor] If async_req parameter is True, [e2e-predictor] the request will be called asynchronously. [e2e-predictor] The method will return the request thread. [e2e-predictor] If parameter async_req is False or missing, [e2e-predictor] then the method will return the response directly. 
[e2e-predictor] """ [e2e-predictor] if not async_req: [e2e-predictor] > return self.__call_api(resource_path, method, [e2e-predictor] path_params, query_params, header_params, [e2e-predictor] body, post_params, files, [e2e-predictor] response_type, auth_settings, [e2e-predictor] _return_http_data_only, collection_formats, [e2e-predictor] _preload_content, _request_timeout, _host) [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api_client.py:348: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] resource_path = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices' [e2e-predictor] method = 'POST' [e2e-predictor] path_params = [('group', 'serving.kserve.io'), ('version', 'v1beta1'), ('namespace', 'kserve-ci-e2e-test'), ('plural', 'inferenceservices')] [e2e-predictor] query_params = [] [e2e-predictor] header_params = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'} [e2e-predictor] body = {'apiVersion': 'serving.kserve.io/v1beta1', 'kind': 'InferenceService', 'metadata': {'labels': {'networking.kserve.io/..., 'requests': {'cpu': '10m', 'memory': '256Mi'}}, 'storageUri': 'gs://kfserving-examples/models/tensorflow/flowers'}}}} [e2e-predictor] post_params = [], files = {}, response_type = 'object' [e2e-predictor] auth_settings = ['BearerToken'], _return_http_data_only = True [e2e-predictor] collection_formats = {}, _preload_content = True, _request_timeout = None [e2e-predictor] _host = None [e2e-predictor] [e2e-predictor] def __call_api( [e2e-predictor] self, resource_path, method, path_params=None, [e2e-predictor] query_params=None, header_params=None, body=None, post_params=None, [e2e-predictor] files=None, response_type=None, auth_settings=None, [e2e-predictor] _return_http_data_only=None, collection_formats=None, 
[e2e-predictor] _preload_content=True, _request_timeout=None, _host=None): [e2e-predictor] [e2e-predictor] config = self.configuration [e2e-predictor] [e2e-predictor] # header parameters [e2e-predictor] header_params = header_params or {} [e2e-predictor] header_params.update(self.default_headers) [e2e-predictor] if self.cookie: [e2e-predictor] header_params['Cookie'] = self.cookie [e2e-predictor] if header_params: [e2e-predictor] header_params = self.sanitize_for_serialization(header_params) [e2e-predictor] header_params = dict(self.parameters_to_tuples(header_params, [e2e-predictor] collection_formats)) [e2e-predictor] [e2e-predictor] # path parameters [e2e-predictor] if path_params: [e2e-predictor] path_params = self.sanitize_for_serialization(path_params) [e2e-predictor] path_params = self.parameters_to_tuples(path_params, [e2e-predictor] collection_formats) [e2e-predictor] for k, v in path_params: [e2e-predictor] # specified safe chars, encode everything [e2e-predictor] resource_path = resource_path.replace( [e2e-predictor] '{%s}' % k, [e2e-predictor] quote(str(v), safe=config.safe_chars_for_path_param) [e2e-predictor] ) [e2e-predictor] [e2e-predictor] # query parameters [e2e-predictor] if query_params: [e2e-predictor] query_params = self.sanitize_for_serialization(query_params) [e2e-predictor] query_params = self.parameters_to_tuples(query_params, [e2e-predictor] collection_formats) [e2e-predictor] [e2e-predictor] # post parameters [e2e-predictor] if post_params or files: [e2e-predictor] post_params = post_params if post_params else [] [e2e-predictor] post_params = self.sanitize_for_serialization(post_params) [e2e-predictor] post_params = self.parameters_to_tuples(post_params, [e2e-predictor] collection_formats) [e2e-predictor] post_params.extend(self.files_parameters(files)) [e2e-predictor] [e2e-predictor] # auth setting [e2e-predictor] self.update_params_for_auth(header_params, query_params, auth_settings) [e2e-predictor] [e2e-predictor] # body 
[e2e-predictor] if body: [e2e-predictor] body = self.sanitize_for_serialization(body) [e2e-predictor] [e2e-predictor] # request url [e2e-predictor] if _host is None: [e2e-predictor] url = self.configuration.host + resource_path [e2e-predictor] else: [e2e-predictor] # use server/host defined in path or operation instead [e2e-predictor] url = _host + resource_path [e2e-predictor] [e2e-predictor] # perform request and return response [e2e-predictor] > response_data = self.request( [e2e-predictor] method, url, query_params=query_params, headers=header_params, [e2e-predictor] post_params=post_params, body=body, [e2e-predictor] _preload_content=_preload_content, [e2e-predictor] _request_timeout=_request_timeout) [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api_client.py:180: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] method = 'POST' [e2e-predictor] url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices' [e2e-predictor] query_params = [] [e2e-predictor] headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'} [e2e-predictor] post_params = [] [e2e-predictor] body = {'apiVersion': 'serving.kserve.io/v1beta1', 'kind': 'InferenceService', 'metadata': {'labels': {'networking.kserve.io/..., 'requests': {'cpu': '10m', 'memory': '256Mi'}}, 'storageUri': 'gs://kfserving-examples/models/tensorflow/flowers'}}}} [e2e-predictor] _preload_content = True, _request_timeout = None [e2e-predictor] [e2e-predictor] def request(self, method, url, query_params=None, headers=None, [e2e-predictor] post_params=None, body=None, _preload_content=True, [e2e-predictor] _request_timeout=None): [e2e-predictor] """Makes the HTTP request using 
RESTClient.""" [e2e-predictor] if method == "GET": [e2e-predictor] return self.rest_client.GET(url, [e2e-predictor] query_params=query_params, [e2e-predictor] _preload_content=_preload_content, [e2e-predictor] _request_timeout=_request_timeout, [e2e-predictor] headers=headers) [e2e-predictor] elif method == "HEAD": [e2e-predictor] return self.rest_client.HEAD(url, [e2e-predictor] query_params=query_params, [e2e-predictor] _preload_content=_preload_content, [e2e-predictor] _request_timeout=_request_timeout, [e2e-predictor] headers=headers) [e2e-predictor] elif method == "OPTIONS": [e2e-predictor] return self.rest_client.OPTIONS(url, [e2e-predictor] query_params=query_params, [e2e-predictor] headers=headers, [e2e-predictor] _preload_content=_preload_content, [e2e-predictor] _request_timeout=_request_timeout) [e2e-predictor] elif method == "POST": [e2e-predictor] > return self.rest_client.POST(url, [e2e-predictor] query_params=query_params, [e2e-predictor] headers=headers, [e2e-predictor] post_params=post_params, [e2e-predictor] _preload_content=_preload_content, [e2e-predictor] _request_timeout=_request_timeout, [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api_client.py:391: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices' [e2e-predictor] headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'} [e2e-predictor] query_params = [], post_params = [] [e2e-predictor] body = {'apiVersion': 'serving.kserve.io/v1beta1', 'kind': 'InferenceService', 'metadata': {'labels': {'networking.kserve.io/..., 'requests': {'cpu': '10m', 'memory': '256Mi'}}, 'storageUri': 
'gs://kfserving-examples/models/tensorflow/flowers'}}}} [e2e-predictor] _preload_content = True, _request_timeout = None [e2e-predictor] [e2e-predictor] def POST(self, url, headers=None, query_params=None, post_params=None, [e2e-predictor] body=None, _preload_content=True, _request_timeout=None): [e2e-predictor] > return self.request("POST", url, [e2e-predictor] headers=headers, [e2e-predictor] query_params=query_params, [e2e-predictor] post_params=post_params, [e2e-predictor] _preload_content=_preload_content, [e2e-predictor] _request_timeout=_request_timeout, [e2e-predictor] body=body) [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/rest.py:279: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] method = 'POST' [e2e-predictor] url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices' [e2e-predictor] query_params = [] [e2e-predictor] headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'} [e2e-predictor] body = {'apiVersion': 'serving.kserve.io/v1beta1', 'kind': 'InferenceService', 'metadata': {'labels': {'networking.kserve.io/..., 'requests': {'cpu': '10m', 'memory': '256Mi'}}, 'storageUri': 'gs://kfserving-examples/models/tensorflow/flowers'}}}} [e2e-predictor] post_params = {}, _preload_content = True, _request_timeout = None [e2e-predictor] [e2e-predictor] def request(self, method, url, query_params=None, headers=None, [e2e-predictor] body=None, post_params=None, _preload_content=True, [e2e-predictor] _request_timeout=None): [e2e-predictor] """Perform requests. 
[e2e-predictor] [e2e-predictor] :param method: http request method [e2e-predictor] :param url: http request url [e2e-predictor] :param query_params: query parameters in the url [e2e-predictor] :param headers: http request headers [e2e-predictor] :param body: request json body, for `application/json` [e2e-predictor] :param post_params: request post parameters, [e2e-predictor] `application/x-www-form-urlencoded` [e2e-predictor] and `multipart/form-data` [e2e-predictor] :param _preload_content: if False, the urllib3.HTTPResponse object will [e2e-predictor] be returned without reading/decoding response [e2e-predictor] data. Default is True. [e2e-predictor] :param _request_timeout: timeout setting for this request. If one [e2e-predictor] number provided, it will be total request [e2e-predictor] timeout. It can also be a pair (tuple) of [e2e-predictor] (connection, read) timeouts. [e2e-predictor] """ [e2e-predictor] method = method.upper() [e2e-predictor] assert method in ['GET', 'HEAD', 'DELETE', 'POST', 'PUT', [e2e-predictor] 'PATCH', 'OPTIONS'] [e2e-predictor] [e2e-predictor] if post_params and body: [e2e-predictor] raise ApiValueError( [e2e-predictor] "body parameter cannot be used with post_params parameter." 
[e2e-predictor] ) [e2e-predictor] [e2e-predictor] post_params = post_params or {} [e2e-predictor] headers = headers or {} [e2e-predictor] [e2e-predictor] timeout = None [e2e-predictor] if _request_timeout: [e2e-predictor] if isinstance(_request_timeout, (int, ) if six.PY3 else (int, long)): # noqa: E501,F821 [e2e-predictor] timeout = urllib3.Timeout(total=_request_timeout) [e2e-predictor] elif (isinstance(_request_timeout, tuple) and [e2e-predictor] len(_request_timeout) == 2): [e2e-predictor] timeout = urllib3.Timeout( [e2e-predictor] connect=_request_timeout[0], read=_request_timeout[1]) [e2e-predictor] [e2e-predictor] if 'Content-Type' not in headers: [e2e-predictor] headers['Content-Type'] = 'application/json' [e2e-predictor] [e2e-predictor] try: [e2e-predictor] # For `POST`, `PUT`, `PATCH`, `OPTIONS`, `DELETE` [e2e-predictor] if method in ['POST', 'PUT', 'PATCH', 'OPTIONS', 'DELETE']: [e2e-predictor] if query_params: [e2e-predictor] url += '?' + urlencode(query_params) [e2e-predictor] if (re.search('json', headers['Content-Type'], re.IGNORECASE) or [e2e-predictor] headers['Content-Type'] == 'application/apply-patch+yaml'): [e2e-predictor] if headers['Content-Type'] == 'application/json-patch+json': [e2e-predictor] if not isinstance(body, list): [e2e-predictor] headers['Content-Type'] = \ [e2e-predictor] 'application/strategic-merge-patch+json' [e2e-predictor] request_body = None [e2e-predictor] if body is not None: [e2e-predictor] request_body = json.dumps(body) [e2e-predictor] > r = self.pool_manager.request( [e2e-predictor] method, url, [e2e-predictor] body=request_body, [e2e-predictor] preload_content=_preload_content, [e2e-predictor] timeout=timeout, [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/rest.py:172: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] method = 'POST' [e2e-predictor] url = 
url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io... "requests": {"cpu": "10m", "memory": "256Mi"}}, "storageUri": "gs://kfserving-examples/models/tensorflow/flowers"}}}}'
fields = None
headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
json = None
urlopen_kw = {'body': '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking....i"}}, "storageUri": "gs://kfserving-examples/models/tensorflow/flowers"}}}}', 'preload_content': True, 'timeout': None}

    def request(
        self,
        method: str,
        url: str,
        body: _TYPE_BODY | None = None,
        fields: _TYPE_FIELDS | None = None,
        headers: typing.Mapping[str, str] | None = None,
        json: typing.Any | None = None,
        **urlopen_kw: typing.Any,
    ) -> BaseHTTPResponse:
        """
        Make a request using :meth:`urlopen` with the appropriate encoding of
        ``fields`` based on the ``method`` used.

        This is a convenience method that requires the least amount of manual
        effort. It can be used in most situations, while still having the
        option to drop down to more specific methods when necessary, such as
        :meth:`request_encode_url`, :meth:`request_encode_body`,
        or even the lowest level :meth:`urlopen`.

        :param method:
            HTTP request method (such as GET, POST, PUT, etc.)

        :param url:
            The URL to perform the request on.

        :param body:
            Data to send in the request body, either :class:`str`, :class:`bytes`,
            an iterable of :class:`str`/:class:`bytes`, or a file-like object.

        :param fields:
            Data to encode and send in the URL or request body, depending on ``method``.

        :param headers:
            Dictionary of custom headers to send, such as User-Agent,
            If-None-Match, etc. If None, pool headers are used. If provided,
            these headers completely replace any pool-specific headers.

        :param json:
            Data to encode and send as JSON with UTF-encoded in the request body.
            The ``"Content-Type"`` header will be set to ``"application/json"``
            unless specified otherwise.
        """
        method = method.upper()

        if json is not None and body is not None:
            raise TypeError(
                "request got values for both 'body' and 'json' parameters which are mutually exclusive"
            )

        if json is not None:
            if headers is None:
                headers = self.headers

            if not ("content-type" in map(str.lower, headers.keys())):
                headers = HTTPHeaderDict(headers)
                headers["Content-Type"] = "application/json"

            body = _json.dumps(json, separators=(",", ":"), ensure_ascii=False).encode(
                "utf-8"
            )

        if body is not None:
            urlopen_kw["body"] = body

        if method in self._encode_url_methods:
            return self.request_encode_url(
                method,
                url,
                fields=fields,  # type: ignore[arg-type]
                headers=headers,
                **urlopen_kw,
            )
        else:
>           return self.request_encode_body(
                method, url, fields=fields, headers=headers, **urlopen_kw
            )

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/_request_methods.py:143:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
method = 'POST'
url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
fields = None
headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
encode_multipart = True, multipart_boundary = None
urlopen_kw = {'body': '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking....i"}}, "storageUri": "gs://kfserving-examples/models/tensorflow/flowers"}}}}', 'preload_content': True, 'timeout': None}
extra_kw = {'body': '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking....nt': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'}), 'preload_content': True, 'timeout': None}

    def request_encode_body(
        self,
        method: str,
        url: str,
        fields: _TYPE_FIELDS | None = None,
        headers: typing.Mapping[str, str] | None = None,
        encode_multipart: bool = True,
        multipart_boundary: str | None = None,
        **urlopen_kw: str,
    ) -> BaseHTTPResponse:
        """
        Make a request using :meth:`urlopen` with the ``fields`` encoded in
        the body. This is useful for request methods like POST, PUT, PATCH, etc.

        When ``encode_multipart=True`` (default), then
        :func:`urllib3.encode_multipart_formdata` is used to encode
        the payload with the appropriate content type. Otherwise
        :func:`urllib.parse.urlencode` is used with the
        'application/x-www-form-urlencoded' content type.

        Multipart encoding must be used when posting files, and it's reasonably
        safe to use it in other times too. However, it may break request
        signing, such as with OAuth.

        Supports an optional ``fields`` parameter of key/value strings AND
        key/filetuple. A filetuple is a (filename, data, MIME type) tuple where
        the MIME type is optional. For example::

            fields = {
                'foo': 'bar',
                'fakefile': ('foofile.txt', 'contents of foofile'),
                'realfile': ('barfile.txt', open('realfile').read()),
                'typedfile': ('bazfile.bin', open('bazfile').read(),
                              'image/jpeg'),
                'nonamefile': 'contents of nonamefile field',
            }

        When uploading a file, providing a filename (the first parameter of the
        tuple) is optional but recommended to best mimic behavior of browsers.

        Note that if ``headers`` are supplied, the 'Content-Type' header will
        be overwritten because it depends on the dynamic random boundary string
        which is used to compose the body of the request. The random boundary
        string can be explicitly set with the ``multipart_boundary`` parameter.

        :param method:
            HTTP request method (such as GET, POST, PUT, etc.)

        :param url:
            The URL to perform the request on.

        :param fields:
            Data to encode and send in the request body.

        :param headers:
            Dictionary of custom headers to send, such as User-Agent,
            If-None-Match, etc. If None, pool headers are used. If provided,
            these headers completely replace any pool-specific headers.

        :param encode_multipart:
            If True, encode the ``fields`` using the multipart/form-data MIME
            format.

        :param multipart_boundary:
            If not specified, then a random boundary will be generated using
            :func:`urllib3.filepost.choose_boundary`.
        """
        if headers is None:
            headers = self.headers

        extra_kw: dict[str, typing.Any] = {"headers": HTTPHeaderDict(headers)}
        body: bytes | str

        if fields:
            if "body" in urlopen_kw:
                raise TypeError(
                    "request got values for both 'fields' and 'body', can only specify one."
                )

            if encode_multipart:
                body, content_type = encode_multipart_formdata(
                    fields, boundary=multipart_boundary
                )
            else:
                body, content_type = (
                    urlencode(fields),  # type: ignore[arg-type]
                    "application/x-www-form-urlencoded",
                )

            extra_kw["body"] = body
            extra_kw["headers"].setdefault("Content-Type", content_type)

        extra_kw.update(urlopen_kw)

>       return self.urlopen(method, url, **extra_kw)

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/_request_methods.py:278:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
method = 'POST'
url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
redirect = True
kw = {'assert_same_host': False, 'body': '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata...', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'}), 'preload_content': True, ...}
u = Url(scheme='https', auth=None, host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', p...443, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices', query=None, fragment=None)
conn =

    def urlopen(  # type: ignore[override]
        self, method: str, url: str, redirect: bool = True, **kw: typing.Any
    ) -> BaseHTTPResponse:
        """
        Same as :meth:`urllib3.HTTPConnectionPool.urlopen`
        with custom cross-host redirect logic and only sends the request-uri
        portion of the ``url``.

        The given ``url`` parameter must be absolute, such that an appropriate
        :class:`urllib3.connectionpool.ConnectionPool` can be chosen for it.
        """
        u = parse_url(url)

        if u.scheme is None:
            warnings.warn(
                "URLs without a scheme (ie 'https://') are deprecated and will raise an error "
                "in a future version of urllib3. To avoid this DeprecationWarning ensure all URLs "
                "start with 'https://' or 'http://'. Read more in this issue: "
                "https://github.com/urllib3/urllib3/issues/2920",
                category=DeprecationWarning,
                stacklevel=2,
            )

        conn = self.connection_from_host(u.host, port=u.port, scheme=u.scheme)

        kw["assert_same_host"] = False
        kw["redirect"] = False

        if "headers" not in kw:
            kw["headers"] = self.headers

        if self._proxy_requires_url_absolute_form(u):
            response = conn.urlopen(method, url, **kw)
        else:
>           response = conn.urlopen(method, u.request_uri, **kw)

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/poolmanager.py:457:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
method = 'POST'
url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io... "requests": {"cpu": "10m", "memory": "256Mi"}}, "storageUri": "gs://kfserving-examples/models/tensorflow/flowers"}}}}'
headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'})
retries = Retry(total=2, connect=None, read=None, redirect=None, status=None)
redirect = False, assert_same_host = False, timeout = None, pool_timeout = None
release_conn = True, chunked = False, body_pos = None, preload_content = True
decode_content = True, response_kw = {}
parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices', query=None, fragment=None)
destination_scheme = None, conn = None, release_this_conn = True
http_tunnel_required = False
err = NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.c...a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")
clean_exit = False

    def urlopen(  # type: ignore[override]
        self,
        method: str,
        url: str,
        body: _TYPE_BODY | None = None,
        headers: typing.Mapping[str, str] | None = None,
        retries: Retry | bool | int | None = None,
        redirect: bool = True,
        assert_same_host: bool = True,
        timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
        pool_timeout: int | None = None,
        release_conn: bool | None = None,
        chunked: bool = False,
        body_pos: _TYPE_BODY_POSITION | None = None,
        preload_content: bool = True,
        decode_content: bool = True,
        **response_kw: typing.Any,
    ) -> BaseHTTPResponse:
        """
        Get a connection from the pool and perform an HTTP request. This is the
        lowest level call for making a request, so you'll need to specify all
        the raw details.

        .. note::

            More commonly, it's appropriate to use a convenience method
            such as :meth:`request`.

        .. note::

            `release_conn` will only behave as expected if
            `preload_content=False` because we want to make
            `preload_content=False` the default behaviour someday soon without
            breaking backwards compatibility.

        :param method:
            HTTP request method (such as GET, POST, PUT, etc.)

        :param url:
            The URL to perform the request on.

        :param body:
            Data to send in the request body, either :class:`str`, :class:`bytes`,
            an iterable of :class:`str`/:class:`bytes`, or a file-like object.

        :param headers:
            Dictionary of custom headers to send, such as User-Agent,
            If-None-Match, etc. If None, pool headers are used. If provided,
            these headers completely replace any pool-specific headers.

        :param retries:
            Configure the number of retries to allow before raising a
            :class:`~urllib3.exceptions.MaxRetryError` exception.

            If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a
            :class:`~urllib3.util.retry.Retry` object for fine-grained control
            over different types of retries.
            Pass an integer number to retry connection errors that many times,
            but no other types of errors. Pass zero to never retry.

            If ``False``, then retries are disabled and any exception is raised
            immediately. Also, instead of raising a MaxRetryError on redirects,
            the redirect response will be returned.

        :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.

        :param redirect:
            If True, automatically handle redirects (status codes 301, 302,
            303, 307, 308). Each redirect counts as a retry. Disabling retries
            will disable redirect, too.

        :param assert_same_host:
            If ``True``, will make sure that the host of the pool requests is
            consistent else will raise HostChangedError. When ``False``, you can
            use the pool on an HTTP proxy and request foreign hosts.

        :param timeout:
            If specified, overrides the default timeout for this one
            request. It may be a float (in seconds) or an instance of
            :class:`urllib3.util.Timeout`.

        :param pool_timeout:
            If set and the pool is set to block=True, then this method will
            block for ``pool_timeout`` seconds and raise EmptyPoolError if no
            connection is available within the time period.

        :param bool preload_content:
            If True, the response's body will be preloaded into memory.

        :param bool decode_content:
            If True, will attempt to decode the body based on the
            'content-encoding' header.

        :param release_conn:
            If False, then the urlopen call will not release the connection
            back into the pool once a response is received (but will release if
            you read the entire contents of the response such as when
            `preload_content=True`). This is useful if you're not preloading
            the response's content immediately. You will need to call
            ``r.release_conn()`` on the response ``r`` to return the connection
            back into the pool. If None, it takes the value of ``preload_content``
            which defaults to ``True``.

        :param bool chunked:
            If True, urllib3 will send the body using chunked transfer
            encoding. Otherwise, urllib3 will send the body using the standard
            content-length form. Defaults to False.

        :param int body_pos:
            Position to seek to in file-like body in the event of a retry or
            redirect. Typically this won't need to be set because urllib3 will
            auto-populate the value when needed.
        """
        parsed_url = parse_url(url)
        destination_scheme = parsed_url.scheme

        if headers is None:
            headers = self.headers

        if not isinstance(retries, Retry):
            retries = Retry.from_int(retries, redirect=redirect, default=self.retries)

        if release_conn is None:
            release_conn = preload_content

        # Check host
        if assert_same_host and not self.is_same_host(url):
            raise HostChangedError(self, url, retries)

        # Ensure that the URL we're connecting to is properly encoded
        if url.startswith("/"):
            url = to_str(_encode_target(url))
        else:
            url = to_str(parsed_url.url)

        conn = None

        # Track whether `conn` needs to be released before
        # returning/raising/recursing. Update this variable if necessary, and
        # leave `release_conn` constant throughout the function. That way, if
        # the function recurses, the original value of `release_conn` will be
        # passed down into the recursive call, and its value will be respected.
        #
        # See issue #651 [1] for details.
        #
        # [1]
        release_this_conn = release_conn

        http_tunnel_required = connection_requires_http_tunnel(
            self.proxy, self.proxy_config, destination_scheme
        )

        # Merge the proxy headers. Only done when not using HTTP CONNECT. We
        # have to copy the headers dict so we can safely change it without those
        # changes being reflected in anyone else's copy.
        if not http_tunnel_required:
            headers = headers.copy()  # type: ignore[attr-defined]
            headers.update(self.proxy_headers)  # type: ignore[union-attr]

        # Must keep the exception bound to a separate variable or else Python 3
        # complains about UnboundLocalError.
        err = None

        # Keep track of whether we cleanly exited the except block. This
        # ensures we do proper cleanup in finally.
        clean_exit = False

        # Rewind body position, if needed. Record current position
        # for future rewinds in the event of a redirect/retry.
        body_pos = set_file_position(body, body_pos)

        try:
            # Request a connection from the queue.
            timeout_obj = self._get_timeout(timeout)
            conn = self._get_conn(timeout=pool_timeout)

            conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]

            # Is this a closed/new connection that requires CONNECT tunnelling?
            if self.proxy is not None and http_tunnel_required and conn.is_closed:
                try:
                    self._prepare_proxy(conn)
                except (BaseSSLError, OSError, SocketTimeout) as e:
                    self._raise_timeout(
                        err=e, url=self.proxy.url, timeout_value=conn.timeout
                    )
                    raise

            # If we're going to release the connection in ``finally:``, then
            # the response doesn't need to know about the connection. Otherwise
            # it will also try to release it and we'll have a double-release
            # mess.
            response_conn = conn if not release_conn else None

            # Make the request on the HTTPConnection object
            response = self._make_request(
                conn,
                method,
                url,
                timeout=timeout_obj,
                body=body,
                headers=headers,
                chunked=chunked,
                retries=retries,
                response_conn=response_conn,
                preload_content=preload_content,
                decode_content=decode_content,
                **response_kw,
            )

            # Everything went great!
            clean_exit = True

        except EmptyPoolError:
            # Didn't get a connection from the pool, no need to clean up
            clean_exit = True
            release_this_conn = False
            raise

        except (
            TimeoutError,
            HTTPException,
            OSError,
            ProtocolError,
            BaseSSLError,
            SSLError,
            CertificateError,
            ProxyError,
        ) as e:
            # Discard the connection for these exceptions. It will be
            # replaced during the next _get_conn() call.
            clean_exit = False
            new_e: Exception = e
            if isinstance(e, (BaseSSLError, CertificateError)):
                new_e = SSLError(e)
            if isinstance(
                new_e,
                (
                    OSError,
                    NewConnectionError,
                    TimeoutError,
                    SSLError,
                    HTTPException,
                ),
            ) and (conn and conn.proxy and not conn.has_connected_to_proxy):
                new_e = _wrap_proxy_error(new_e, conn.proxy.scheme)
            elif isinstance(new_e, (OSError, HTTPException)):
                new_e = ProtocolError("Connection aborted.", new_e)

            retries = retries.increment(
                method, url, error=new_e, _pool=self, _stacktrace=sys.exc_info()[2]
            )
            retries.sleep()

            # Keep track of the error for the retry warning.
            err = e

        finally:
            if not clean_exit:
                # We hit some kind of exception, handled or otherwise. We need
                # to throw the connection away unless explicitly told not to.
                # Close the connection, set the variable to None, and make sure
                # we put the None back in the pool to avoid leaking it.
                if conn:
                    conn.close()
                    conn = None
                    release_this_conn = True

            if release_this_conn:
                # Put the connection back to be reused. If the connection is
                # expired then it will be None, which will get replaced with a
                # fresh connection during _get_conn.
                self._put_conn(conn)

        if not conn:
            # Try again
            log.warning(
                "Retrying (%r) after connection broken by '%r': %s", retries, err, url
            )
>           return self.urlopen(
                method,
                url,
                body,
                headers,
                retries,
                redirect,
                assert_same_host,
                timeout=timeout,
                pool_timeout=pool_timeout,
                release_conn=release_conn,
                chunked=chunked,
                body_pos=body_pos,
                preload_content=preload_content,
                decode_content=decode_content,
                **response_kw,
            )

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:871:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
method = 'POST'
url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io... "requests": {"cpu": "10m", "memory": "256Mi"}}, "storageUri": "gs://kfserving-examples/models/tensorflow/flowers"}}}}'
headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'})
retries = Retry(total=1, connect=None, read=None, redirect=None, status=None)
redirect = False, assert_same_host = False, timeout = None, pool_timeout = None
release_conn = True, chunked = False, body_pos = None, preload_content = True
decode_content = True, response_kw = {}
parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices', query=None, fragment=None)
destination_scheme = None, conn = None, release_this_conn = True
http_tunnel_required = False
err = NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.c...a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")
clean_exit = False

    [... source listing of HTTPConnectionPool.urlopen repeated verbatim for this retry attempt (the log is truncated partway through the listing); identical to the previous frame ...]
[e2e-predictor] clean_exit = False [e2e-predictor] new_e: Exception = e [e2e-predictor] if isinstance(e, (BaseSSLError, CertificateError)): [e2e-predictor] new_e = SSLError(e) [e2e-predictor] if isinstance( [e2e-predictor] new_e, [e2e-predictor] ( [e2e-predictor] OSError, [e2e-predictor] NewConnectionError, [e2e-predictor] TimeoutError, [e2e-predictor] SSLError, [e2e-predictor] HTTPException, [e2e-predictor] ), [e2e-predictor] ) and (conn and conn.proxy and not conn.has_connected_to_proxy): [e2e-predictor] new_e = _wrap_proxy_error(new_e, conn.proxy.scheme) [e2e-predictor] elif isinstance(new_e, (OSError, HTTPException)): [e2e-predictor] new_e = ProtocolError("Connection aborted.", new_e) [e2e-predictor] [e2e-predictor] retries = retries.increment( [e2e-predictor] method, url, error=new_e, _pool=self, _stacktrace=sys.exc_info()[2] [e2e-predictor] ) [e2e-predictor] retries.sleep() [e2e-predictor] [e2e-predictor] # Keep track of the error for the retry warning. [e2e-predictor] err = e [e2e-predictor] [e2e-predictor] finally: [e2e-predictor] if not clean_exit: [e2e-predictor] # We hit some kind of exception, handled or otherwise. We need [e2e-predictor] # to throw the connection away unless explicitly told not to. [e2e-predictor] # Close the connection, set the variable to None, and make sure [e2e-predictor] # we put the None back in the pool to avoid leaking it. [e2e-predictor] if conn: [e2e-predictor] conn.close() [e2e-predictor] conn = None [e2e-predictor] release_this_conn = True [e2e-predictor] [e2e-predictor] if release_this_conn: [e2e-predictor] # Put the connection back to be reused. If the connection is [e2e-predictor] # expired then it will be None, which will get replaced with a [e2e-predictor] # fresh connection during _get_conn. 
[e2e-predictor] self._put_conn(conn) [e2e-predictor] [e2e-predictor] if not conn: [e2e-predictor] # Try again [e2e-predictor] log.warning( [e2e-predictor] "Retrying (%r) after connection broken by '%r': %s", retries, err, url [e2e-predictor] ) [e2e-predictor] > return self.urlopen( [e2e-predictor] method, [e2e-predictor] url, [e2e-predictor] body, [e2e-predictor] headers, [e2e-predictor] retries, [e2e-predictor] redirect, [e2e-predictor] assert_same_host, [e2e-predictor] timeout=timeout, [e2e-predictor] pool_timeout=pool_timeout, [e2e-predictor] release_conn=release_conn, [e2e-predictor] chunked=chunked, [e2e-predictor] body_pos=body_pos, [e2e-predictor] preload_content=preload_content, [e2e-predictor] decode_content=decode_content, [e2e-predictor] **response_kw, [e2e-predictor] ) [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:871: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] method = 'POST' [e2e-predictor] url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices' [e2e-predictor] body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io... 
"requests": {"cpu": "10m", "memory": "256Mi"}}, "storageUri": "gs://kfserving-examples/models/tensorflow/flowers"}}}}'
[e2e-predictor] headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'})
[e2e-predictor] retries = Retry(total=0, connect=None, read=None, redirect=None, status=None)
[e2e-predictor] redirect = False, assert_same_host = False, timeout = None, pool_timeout = None
[e2e-predictor] release_conn = True, chunked = False, body_pos = None, preload_content = True
[e2e-predictor] decode_content = True, response_kw = {}
[e2e-predictor] parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices', query=None, fragment=None)
[e2e-predictor] destination_scheme = None, conn = None, release_this_conn = True
[e2e-predictor] http_tunnel_required = False
[e2e-predictor] err = NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.c...a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")
[e2e-predictor] clean_exit = False
[e2e-predictor] 
[e2e-predictor]     def urlopen(  # type: ignore[override]
[e2e-predictor]         self,
[e2e-predictor]         method: str,
[e2e-predictor]         url: str,
[e2e-predictor]         body: _TYPE_BODY | None = None,
[e2e-predictor]         headers: typing.Mapping[str, str] | None = None,
[e2e-predictor]         retries: Retry | bool | int | None = None,
[e2e-predictor]         redirect: bool = True,
[e2e-predictor]         assert_same_host: bool = True,
[e2e-predictor]         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
[e2e-predictor]         pool_timeout: int | None = None,
[e2e-predictor]         release_conn: bool | None = None,
[e2e-predictor]         chunked: bool = False,
[e2e-predictor]         body_pos: _TYPE_BODY_POSITION | None = None,
[e2e-predictor]         preload_content: bool = True,
[e2e-predictor]         decode_content: bool = True,
[e2e-predictor]         **response_kw: typing.Any,
[e2e-predictor]     ) -> BaseHTTPResponse:
[e2e-predictor]         """
[e2e-predictor]         Get a connection from the pool and perform an HTTP request. This is the
[e2e-predictor]         lowest level call for making a request, so you'll need to specify all
[e2e-predictor]         the raw details.
[e2e-predictor] 
[e2e-predictor]         .. note::
[e2e-predictor] 
[e2e-predictor]             More commonly, it's appropriate to use a convenience method
[e2e-predictor]             such as :meth:`request`.
[e2e-predictor] 
[e2e-predictor]         .. note::
[e2e-predictor] 
[e2e-predictor]             `release_conn` will only behave as expected if
[e2e-predictor]             `preload_content=False` because we want to make
[e2e-predictor]             `preload_content=False` the default behaviour someday soon without
[e2e-predictor]             breaking backwards compatibility.
[e2e-predictor] 
[e2e-predictor]         :param method:
[e2e-predictor]             HTTP request method (such as GET, POST, PUT, etc.)
[e2e-predictor] 
[e2e-predictor]         :param url:
[e2e-predictor]             The URL to perform the request on.
[e2e-predictor] 
[e2e-predictor]         :param body:
[e2e-predictor]             Data to send in the request body, either :class:`str`, :class:`bytes`,
[e2e-predictor]             an iterable of :class:`str`/:class:`bytes`, or a file-like object.
[e2e-predictor] 
[e2e-predictor]         :param headers:
[e2e-predictor]             Dictionary of custom headers to send, such as User-Agent,
[e2e-predictor]             If-None-Match, etc. If None, pool headers are used. If provided,
[e2e-predictor]             these headers completely replace any pool-specific headers.
[e2e-predictor] 
[e2e-predictor]         :param retries:
[e2e-predictor]             Configure the number of retries to allow before raising a
[e2e-predictor]             :class:`~urllib3.exceptions.MaxRetryError` exception.
[e2e-predictor] 
[e2e-predictor]             If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a
[e2e-predictor]             :class:`~urllib3.util.retry.Retry` object for fine-grained control
[e2e-predictor]             over different types of retries.
[e2e-predictor]             Pass an integer number to retry connection errors that many times,
[e2e-predictor]             but no other types of errors. Pass zero to never retry.
[e2e-predictor] 
[e2e-predictor]             If ``False``, then retries are disabled and any exception is raised
[e2e-predictor]             immediately. Also, instead of raising a MaxRetryError on redirects,
[e2e-predictor]             the redirect response will be returned.
[e2e-predictor] 
[e2e-predictor]         :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
[e2e-predictor] 
[e2e-predictor]         :param redirect:
[e2e-predictor]             If True, automatically handle redirects (status codes 301, 302,
[e2e-predictor]             303, 307, 308). Each redirect counts as a retry. Disabling retries
[e2e-predictor]             will disable redirect, too.
[e2e-predictor] 
[e2e-predictor]         :param assert_same_host:
[e2e-predictor]             If ``True``, will make sure that the host of the pool requests is
[e2e-predictor]             consistent else will raise HostChangedError. When ``False``, you can
[e2e-predictor]             use the pool on an HTTP proxy and request foreign hosts.
[e2e-predictor] 
[e2e-predictor]         :param timeout:
[e2e-predictor]             If specified, overrides the default timeout for this one
[e2e-predictor]             request. It may be a float (in seconds) or an instance of
[e2e-predictor]             :class:`urllib3.util.Timeout`.
[e2e-predictor] 
[e2e-predictor]         :param pool_timeout:
[e2e-predictor]             If set and the pool is set to block=True, then this method will
[e2e-predictor]             block for ``pool_timeout`` seconds and raise EmptyPoolError if no
[e2e-predictor]             connection is available within the time period.
[e2e-predictor] 
[e2e-predictor]         :param bool preload_content:
[e2e-predictor]             If True, the response's body will be preloaded into memory.
[e2e-predictor] 
[e2e-predictor]         :param bool decode_content:
[e2e-predictor]             If True, will attempt to decode the body based on the
[e2e-predictor]             'content-encoding' header.
[e2e-predictor] 
[e2e-predictor]         :param release_conn:
[e2e-predictor]             If False, then the urlopen call will not release the connection
[e2e-predictor]             back into the pool once a response is received (but will release if
[e2e-predictor]             you read the entire contents of the response such as when
[e2e-predictor]             `preload_content=True`). This is useful if you're not preloading
[e2e-predictor]             the response's content immediately. You will need to call
[e2e-predictor]             ``r.release_conn()`` on the response ``r`` to return the connection
[e2e-predictor]             back into the pool. If None, it takes the value of ``preload_content``
[e2e-predictor]             which defaults to ``True``.
[e2e-predictor] 
[e2e-predictor]         :param bool chunked:
[e2e-predictor]             If True, urllib3 will send the body using chunked transfer
[e2e-predictor]             encoding. Otherwise, urllib3 will send the body using the standard
[e2e-predictor]             content-length form. Defaults to False.
[e2e-predictor] 
[e2e-predictor]         :param int body_pos:
[e2e-predictor]             Position to seek to in file-like body in the event of a retry or
[e2e-predictor]             redirect. Typically this won't need to be set because urllib3 will
[e2e-predictor]             auto-populate the value when needed.
[e2e-predictor]         """
[e2e-predictor]         parsed_url = parse_url(url)
[e2e-predictor]         destination_scheme = parsed_url.scheme
[e2e-predictor] 
[e2e-predictor]         if headers is None:
[e2e-predictor]             headers = self.headers
[e2e-predictor] 
[e2e-predictor]         if not isinstance(retries, Retry):
[e2e-predictor]             retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
[e2e-predictor] 
[e2e-predictor]         if release_conn is None:
[e2e-predictor]             release_conn = preload_content
[e2e-predictor] 
[e2e-predictor]         # Check host
[e2e-predictor]         if assert_same_host and not self.is_same_host(url):
[e2e-predictor]             raise HostChangedError(self, url, retries)
[e2e-predictor] 
[e2e-predictor]         # Ensure that the URL we're connecting to is properly encoded
[e2e-predictor]         if url.startswith("/"):
[e2e-predictor]             url = to_str(_encode_target(url))
[e2e-predictor]         else:
[e2e-predictor]             url = to_str(parsed_url.url)
[e2e-predictor] 
[e2e-predictor]         conn = None
[e2e-predictor] 
[e2e-predictor]         # Track whether `conn` needs to be released before
[e2e-predictor]         # returning/raising/recursing. Update this variable if necessary, and
[e2e-predictor]         # leave `release_conn` constant throughout the function. That way, if
[e2e-predictor]         # the function recurses, the original value of `release_conn` will be
[e2e-predictor]         # passed down into the recursive call, and its value will be respected.
[e2e-predictor]         #
[e2e-predictor]         # See issue #651 [1] for details.
[e2e-predictor]         #
[e2e-predictor]         # [1]
[e2e-predictor]         release_this_conn = release_conn
[e2e-predictor] 
[e2e-predictor]         http_tunnel_required = connection_requires_http_tunnel(
[e2e-predictor]             self.proxy, self.proxy_config, destination_scheme
[e2e-predictor]         )
[e2e-predictor] 
[e2e-predictor]         # Merge the proxy headers. Only done when not using HTTP CONNECT. We
[e2e-predictor]         # have to copy the headers dict so we can safely change it without those
[e2e-predictor]         # changes being reflected in anyone else's copy.
[e2e-predictor]         if not http_tunnel_required:
[e2e-predictor]             headers = headers.copy()  # type: ignore[attr-defined]
[e2e-predictor]             headers.update(self.proxy_headers)  # type: ignore[union-attr]
[e2e-predictor] 
[e2e-predictor]         # Must keep the exception bound to a separate variable or else Python 3
[e2e-predictor]         # complains about UnboundLocalError.
[e2e-predictor]         err = None
[e2e-predictor] 
[e2e-predictor]         # Keep track of whether we cleanly exited the except block. This
[e2e-predictor]         # ensures we do proper cleanup in finally.
[e2e-predictor]         clean_exit = False
[e2e-predictor] 
[e2e-predictor]         # Rewind body position, if needed. Record current position
[e2e-predictor]         # for future rewinds in the event of a redirect/retry.
[e2e-predictor]         body_pos = set_file_position(body, body_pos)
[e2e-predictor] 
[e2e-predictor]         try:
[e2e-predictor]             # Request a connection from the queue.
[e2e-predictor]             timeout_obj = self._get_timeout(timeout)
[e2e-predictor]             conn = self._get_conn(timeout=pool_timeout)
[e2e-predictor] 
[e2e-predictor]             conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]
[e2e-predictor] 
[e2e-predictor]             # Is this a closed/new connection that requires CONNECT tunnelling?
[e2e-predictor]             if self.proxy is not None and http_tunnel_required and conn.is_closed:
[e2e-predictor]                 try:
[e2e-predictor]                     self._prepare_proxy(conn)
[e2e-predictor]                 except (BaseSSLError, OSError, SocketTimeout) as e:
[e2e-predictor]                     self._raise_timeout(
[e2e-predictor]                         err=e, url=self.proxy.url, timeout_value=conn.timeout
[e2e-predictor]                     )
[e2e-predictor]                     raise
[e2e-predictor] 
[e2e-predictor]             # If we're going to release the connection in ``finally:``, then
[e2e-predictor]             # the response doesn't need to know about the connection. Otherwise
[e2e-predictor]             # it will also try to release it and we'll have a double-release
[e2e-predictor]             # mess.
[e2e-predictor]             response_conn = conn if not release_conn else None
[e2e-predictor] 
[e2e-predictor]             # Make the request on the HTTPConnection object
[e2e-predictor]             response = self._make_request(
[e2e-predictor]                 conn,
[e2e-predictor]                 method,
[e2e-predictor]                 url,
[e2e-predictor]                 timeout=timeout_obj,
[e2e-predictor]                 body=body,
[e2e-predictor]                 headers=headers,
[e2e-predictor]                 chunked=chunked,
[e2e-predictor]                 retries=retries,
[e2e-predictor]                 response_conn=response_conn,
[e2e-predictor]                 preload_content=preload_content,
[e2e-predictor]                 decode_content=decode_content,
[e2e-predictor]                 **response_kw,
[e2e-predictor]             )
[e2e-predictor] 
[e2e-predictor]             # Everything went great!
[e2e-predictor]             clean_exit = True
[e2e-predictor] 
[e2e-predictor]         except EmptyPoolError:
[e2e-predictor]             # Didn't get a connection from the pool, no need to clean up
[e2e-predictor]             clean_exit = True
[e2e-predictor]             release_this_conn = False
[e2e-predictor]             raise
[e2e-predictor] 
[e2e-predictor]         except (
[e2e-predictor]             TimeoutError,
[e2e-predictor]             HTTPException,
[e2e-predictor]             OSError,
[e2e-predictor]             ProtocolError,
[e2e-predictor]             BaseSSLError,
[e2e-predictor]             SSLError,
[e2e-predictor]             CertificateError,
[e2e-predictor]             ProxyError,
[e2e-predictor]         ) as e:
[e2e-predictor]             # Discard the connection for these exceptions. It will be
[e2e-predictor]             # replaced during the next _get_conn() call.
[e2e-predictor]             clean_exit = False
[e2e-predictor]             new_e: Exception = e
[e2e-predictor]             if isinstance(e, (BaseSSLError, CertificateError)):
[e2e-predictor]                 new_e = SSLError(e)
[e2e-predictor]             if isinstance(
[e2e-predictor]                 new_e,
[e2e-predictor]                 (
[e2e-predictor]                     OSError,
[e2e-predictor]                     NewConnectionError,
[e2e-predictor]                     TimeoutError,
[e2e-predictor]                     SSLError,
[e2e-predictor]                     HTTPException,
[e2e-predictor]                 ),
[e2e-predictor]             ) and (conn and conn.proxy and not conn.has_connected_to_proxy):
[e2e-predictor]                 new_e = _wrap_proxy_error(new_e, conn.proxy.scheme)
[e2e-predictor]             elif isinstance(new_e, (OSError, HTTPException)):
[e2e-predictor]                 new_e = ProtocolError("Connection aborted.", new_e)
[e2e-predictor] 
[e2e-predictor]             retries = retries.increment(
[e2e-predictor]                 method, url, error=new_e, _pool=self, _stacktrace=sys.exc_info()[2]
[e2e-predictor]             )
[e2e-predictor]             retries.sleep()
[e2e-predictor] 
[e2e-predictor]             # Keep track of the error for the retry warning.
[e2e-predictor]             err = e
[e2e-predictor] 
[e2e-predictor]         finally:
[e2e-predictor]             if not clean_exit:
[e2e-predictor]                 # We hit some kind of exception, handled or otherwise. We need
[e2e-predictor]                 # to throw the connection away unless explicitly told not to.
[e2e-predictor]                 # Close the connection, set the variable to None, and make sure
[e2e-predictor]                 # we put the None back in the pool to avoid leaking it.
[e2e-predictor]                 if conn:
[e2e-predictor]                     conn.close()
[e2e-predictor]                     conn = None
[e2e-predictor]                 release_this_conn = True
[e2e-predictor] 
[e2e-predictor]             if release_this_conn:
[e2e-predictor]                 # Put the connection back to be reused. If the connection is
[e2e-predictor]                 # expired then it will be None, which will get replaced with a
[e2e-predictor]                 # fresh connection during _get_conn.
[e2e-predictor]                 self._put_conn(conn)
[e2e-predictor] 
[e2e-predictor]         if not conn:
[e2e-predictor]             # Try again
[e2e-predictor]             log.warning(
[e2e-predictor]                 "Retrying (%r) after connection broken by '%r': %s", retries, err, url
[e2e-predictor]             )
[e2e-predictor] >           return self.urlopen(
[e2e-predictor]                 method,
[e2e-predictor]                 url,
[e2e-predictor]                 body,
[e2e-predictor]                 headers,
[e2e-predictor]                 retries,
[e2e-predictor]                 redirect,
[e2e-predictor]                 assert_same_host,
[e2e-predictor]                 timeout=timeout,
[e2e-predictor]                 pool_timeout=pool_timeout,
[e2e-predictor]                 release_conn=release_conn,
[e2e-predictor]                 chunked=chunked,
[e2e-predictor]                 body_pos=body_pos,
[e2e-predictor]                 preload_content=preload_content,
[e2e-predictor]                 decode_content=decode_content,
[e2e-predictor]                 **response_kw,
[e2e-predictor]             )
[e2e-predictor] 
[e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:871:
[e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
[e2e-predictor] 
[e2e-predictor] self = 
[e2e-predictor] method = 'POST'
[e2e-predictor] url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
[e2e-predictor] body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io...
"requests": {"cpu": "10m", "memory": "256Mi"}}, "storageUri": "gs://kfserving-examples/models/tensorflow/flowers"}}}}'
[e2e-predictor] headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'})
[e2e-predictor] retries = Retry(total=0, connect=None, read=None, redirect=None, status=None)
[e2e-predictor] redirect = False, assert_same_host = False, timeout = None, pool_timeout = None
[e2e-predictor] release_conn = True, chunked = False, body_pos = None, preload_content = True
[e2e-predictor] decode_content = True, response_kw = {}
[e2e-predictor] parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices', query=None, fragment=None)
[e2e-predictor] destination_scheme = None, conn = None, release_this_conn = True
[e2e-predictor] http_tunnel_required = False, err = None, clean_exit = False
[e2e-predictor] 
[e2e-predictor] >           retries = retries.increment(
[e2e-predictor]                 method, url, error=new_e, _pool=self, _stacktrace=sys.exc_info()[2]
[e2e-predictor]             )
[e2e-predictor] 
[e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:841:
[e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
[e2e-predictor] 
[e2e-predictor] self = Retry(total=0, connect=None, read=None, redirect=None, status=None)
[e2e-predictor] method = 'POST'
[e2e-predictor] url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
[e2e-predictor] response = None
[e2e-predictor] error = NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.c...a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")
[e2e-predictor] _pool = 
[e2e-predictor] _stacktrace = 
[e2e-predictor] 
[e2e-predictor]     def increment(
[e2e-predictor]         self,
[e2e-predictor]         method: str | None = None,
[e2e-predictor]         url: str | None = None,
[e2e-predictor]         response: BaseHTTPResponse | None = None,
[e2e-predictor]         error: Exception | None = None,
[e2e-predictor]         _pool: ConnectionPool | None = None,
[e2e-predictor] 
        _stacktrace: TracebackType | None = None,
    ) -> Self:
        """Return a new Retry object with incremented retry counters.

        :param response: A response object, or None, if the server did not
            return a response.
        :type response: :class:`~urllib3.response.BaseHTTPResponse`
        :param Exception error: An error encountered during the request, or
            None if the response was received successfully.

        :return: A new ``Retry`` object.
        """
        if self.total is False and error:
            # Disabled, indicate to re-raise the error.
            raise reraise(type(error), error, _stacktrace)

        total = self.total
        if total is not None:
            total -= 1

        connect = self.connect
        read = self.read
        redirect = self.redirect
        status_count = self.status
        other = self.other
        cause = "unknown"
        status = None
        redirect_location = None

        if error and self._is_connection_error(error):
            # Connect retry?
            if connect is False:
                raise reraise(type(error), error, _stacktrace)
            elif connect is not None:
                connect -= 1

        elif error and self._is_read_error(error):
            # Read retry?
            if read is False or method is None or not self._is_method_retryable(method):
                raise reraise(type(error), error, _stacktrace)
            elif read is not None:
                read -= 1

        elif error:
            # Other retry?
            if other is not None:
                other -= 1

        elif response and response.get_redirect_location():
            # Redirect retry?
            if redirect is not None:
                redirect -= 1
            cause = "too many redirects"
            response_redirect_location = response.get_redirect_location()
            if response_redirect_location:
                redirect_location = response_redirect_location
            status = response.status

        else:
            # Incrementing because of a server error like a 500 in
            # status_forcelist and the given method is in the allowed_methods
            cause = ResponseError.GENERIC_ERROR
            if response and response.status:
                if status_count is not None:
                    status_count -= 1
                cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
                status = response.status

        history = self.history + (
            RequestHistory(method, url, error, status, redirect_location),
        )

        new_retry = self.new(
            total=total,
            connect=connect,
            read=read,
            redirect=redirect,
            status=status_count,
            other=other,
            history=history,
        )

        if new_retry.is_exhausted():
            reason = error or ResponseError(cause)
>           raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
E           urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Max retries exceeded with url: /apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices (Caused by NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)"))

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError
------------------------------ Captured log call -------------------------------
WARNING  urllib3.connectionpool:connectionpool.py:868 Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")': /apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices
WARNING  urllib3.connectionpool:connectionpool.py:868 Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")': /apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices
WARNING  urllib3.connectionpool:connectionpool.py:868 Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or
service not known)")': /apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices
________________________ test_tensorflow_runtime_kserve ________________________
[gw0] linux -- Python 3.11.13 /workspace/source/python/kserve/.venv/bin/python

self =

    def _new_conn(self) -> socket.socket:
        """Establish a socket connection and set nodelay settings on it.

        :return: New socket connection.
        """
        try:
>           sock = connection.create_connection(
                (self._dns_host, self.port),
                self.timeout,
                source_address=self.source_address,
                socket_options=self.socket_options,
            )

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connection.py:204:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

address = ('a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', 6443)
timeout = None, source_address = None, socket_options = [(6, 1, 1)]

    def create_connection(
        address: tuple[str, int],
        timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
        source_address: tuple[str, int] | None = None,
        socket_options: _TYPE_SOCKET_OPTIONS | None = None,
    ) -> socket.socket:
        """Connect to *address* and return the socket object.

        Convenience function. Connect to *address* (a 2-tuple ``(host,
        port)``) and return the socket object. Passing the optional
        *timeout* parameter will set the timeout on the socket instance
        before attempting to connect. If no *timeout* is supplied, the
        global default timeout setting returned by :func:`socket.getdefaulttimeout`
        is used. If *source_address* is set it must be a tuple of (host, port)
        for the socket to bind as a source address before making the connection.
        An host of '' or port 0 tells the OS to use the default.
        """

        host, port = address
        if host.startswith("["):
            host = host.strip("[]")
        err = None

        # Using the value from allowed_gai_family() in the context of getaddrinfo lets
        # us select whether to work with IPv4 DNS records, IPv6 records, or both.
        # The original create_connection function always returns all records.
        family = allowed_gai_family()

        try:
            host.encode("idna")
        except UnicodeError:
            raise LocationParseError(f"'{host}', label empty or too long") from None

>       for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/util/connection.py:60:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

host = 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com'
port = 6443, family =
type = , proto = 0, flags = 0

    def getaddrinfo(host, port, family=0, type=0, proto=0, flags=0):
        """Resolve host and port into list of address info entries.

        Translate the host/port argument into a sequence of 5-tuples that contain
        all the necessary arguments for creating a socket connected to that service.

        host is a domain name, a string representation of an IPv4/v6 address or
        None. port is a string service name such as 'http', a numeric port number or
        None. By passing None as the value of host and port, you can pass NULL to
        the underlying C API.

        The family, type and proto arguments can be optionally specified in order to
        narrow the list of addresses returned. Passing zero as a value for each of
        these arguments selects the full range of results.
        """
        # We override this function since we want to translate the numeric family
        # and socket type values to enum constants.
        addrlist = []
>       for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
E       socket.gaierror: [Errno -2] Name or service not known

/usr/lib64/python3.11/socket.py:974: gaierror

The above exception was the direct cause of the following exception:

self =
method = 'POST'
url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io...
"requests": {"cpu": "10m", "memory": "256Mi"}}, "storageUri": "gs://kfserving-examples/models/tensorflow/flowers"}}}}'
headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'})
retries = Retry(total=0, connect=None, read=None, redirect=None, status=None)
redirect = False, assert_same_host = False, timeout = None, pool_timeout = None
release_conn = True, chunked = False, body_pos = None, preload_content = True
decode_content = True, response_kw = {}
parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices', query=None, fragment=None)
destination_scheme = None, conn = None, release_this_conn = True
http_tunnel_required = False, err = None, clean_exit = False

    def urlopen(  # type: ignore[override]
        self,
        method: str,
        url: str,
        body: _TYPE_BODY | None = None,
        headers: typing.Mapping[str, str] | None = None,
        retries: Retry | bool | int | None = None,
        redirect: bool = True,
        assert_same_host: bool = True,
        timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
        pool_timeout: int | None = None,
        release_conn: bool | None = None,
        chunked: bool = False,
        body_pos: _TYPE_BODY_POSITION | None = None,
        preload_content: bool = True,
        decode_content: bool = True,
        **response_kw: typing.Any,
    ) -> BaseHTTPResponse:
        """
        Get a connection from the pool and perform an HTTP request. This is the
        lowest level call for making a request, so you'll need to specify all
        the raw details.

        .. note::

            More commonly, it's appropriate to use a convenience method
            such as :meth:`request`.

        .. note::

            `release_conn` will only behave as expected if
            `preload_content=False` because we want to make
            `preload_content=False` the default behaviour someday soon without
            breaking backwards compatibility.

        :param method:
            HTTP request method (such as GET, POST, PUT, etc.)

        :param url:
            The URL to perform the request on.

        :param body:
            Data to send in the request body, either :class:`str`, :class:`bytes`,
            an iterable of :class:`str`/:class:`bytes`, or a file-like object.

        :param headers:
            Dictionary of custom headers to send, such as User-Agent,
            If-None-Match, etc. If None, pool headers are used. If provided,
            these headers completely replace any pool-specific headers.

        :param retries:
            Configure the number of retries to allow before raising a
            :class:`~urllib3.exceptions.MaxRetryError` exception.

            If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a
            :class:`~urllib3.util.retry.Retry` object for fine-grained control
            over different types of retries.
            Pass an integer number to retry connection errors that many times,
            but no other types of errors. Pass zero to never retry.

            If ``False``, then retries are disabled and any exception is raised
            immediately. Also, instead of raising a MaxRetryError on redirects,
            the redirect response will be returned.

        :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.

        :param redirect:
            If True, automatically handle redirects (status codes 301, 302,
            303, 307, 308). Each redirect counts as a retry. Disabling retries
            will disable redirect, too.

        :param assert_same_host:
            If ``True``, will make sure that the host of the pool requests is
            consistent else will raise HostChangedError. When ``False``, you can
            use the pool on an HTTP proxy and request foreign hosts.

        :param timeout:
            If specified, overrides the default timeout for this one
            request. It may be a float (in seconds) or an instance of
            :class:`urllib3.util.Timeout`.

        :param pool_timeout:
            If set and the pool is set to block=True, then this method will
            block for ``pool_timeout`` seconds and raise EmptyPoolError if no
            connection is available within the time period.

        :param bool preload_content:
            If True, the response's body will be preloaded into memory.

        :param bool decode_content:
            If True, will attempt to decode the body based on the
            'content-encoding' header.

        :param release_conn:
            If False, then the urlopen call will not release the connection
            back into the pool once a response is received (but will release if
            you read the entire contents of the response such as when
            `preload_content=True`). This is useful if you're not preloading
            the response's content immediately. You will need to call
            ``r.release_conn()`` on the response ``r`` to return the connection
            back into the pool. If None, it takes the value of ``preload_content``
            which defaults to ``True``.

        :param bool chunked:
            If True, urllib3 will send the body using chunked transfer
            encoding. Otherwise, urllib3 will send the body using the standard
            content-length form. Defaults to False.

        :param int body_pos:
            Position to seek to in file-like body in the event of a retry or
            redirect. Typically this won't need to be set because urllib3 will
            auto-populate the value when needed.
        """
        parsed_url = parse_url(url)
        destination_scheme = parsed_url.scheme

        if headers is None:
            headers = self.headers

        if not isinstance(retries, Retry):
            retries = Retry.from_int(retries, redirect=redirect, default=self.retries)

        if release_conn is None:
            release_conn = preload_content

        # Check host
        if assert_same_host and not self.is_same_host(url):
            raise HostChangedError(self, url, retries)

        # Ensure that the URL we're connecting to is properly encoded
        if url.startswith("/"):
            url = to_str(_encode_target(url))
        else:
            url = to_str(parsed_url.url)

        conn = None

        # Track whether `conn` needs to be released before
        # returning/raising/recursing. Update this variable if necessary, and
        # leave `release_conn` constant throughout the function. That way, if
        # the function recurses, the original value of `release_conn` will be
        # passed down into the recursive call, and its value will be respected.
        #
        # See issue #651 [1] for details.
        #
        # [1]
        release_this_conn = release_conn

        http_tunnel_required = connection_requires_http_tunnel(
            self.proxy, self.proxy_config, destination_scheme
        )

        # Merge the proxy headers. Only done when not using HTTP CONNECT. We
        # have to copy the headers dict so we can safely change it without those
        # changes being reflected in anyone else's copy.
        if not http_tunnel_required:
            headers = headers.copy()  # type: ignore[attr-defined]
            headers.update(self.proxy_headers)  # type: ignore[union-attr]

        # Must keep the exception bound to a separate variable or else Python 3
        # complains about UnboundLocalError.
        err = None

        # Keep track of whether we cleanly exited the except block. This
        # ensures we do proper cleanup in finally.
        clean_exit = False

        # Rewind body position, if needed. Record current position
        # for future rewinds in the event of a redirect/retry.
        body_pos = set_file_position(body, body_pos)

        try:
            # Request a connection from the queue.
            timeout_obj = self._get_timeout(timeout)
            conn = self._get_conn(timeout=pool_timeout)

            conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]

            # Is this a closed/new connection that requires CONNECT tunnelling?
            if self.proxy is not None and http_tunnel_required and conn.is_closed:
                try:
                    self._prepare_proxy(conn)
                except (BaseSSLError, OSError, SocketTimeout) as e:
                    self._raise_timeout(
                        err=e, url=self.proxy.url, timeout_value=conn.timeout
                    )
                    raise

            # If we're going to release the connection in ``finally:``, then
            # the response doesn't need to know about the connection. Otherwise
            # it will also try to release it and we'll have a double-release
            # mess.
            response_conn = conn if not release_conn else None

            # Make the request on the HTTPConnection object
>           response = self._make_request(
                conn,
                method,
                url,
                timeout=timeout_obj,
                body=body,
                headers=headers,
                chunked=chunked,
                retries=retries,
                response_conn=response_conn,
                preload_content=preload_content,
                decode_content=decode_content,
                **response_kw,
            )

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:787:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
conn =
method = 'POST'
url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io...
"requests": {"cpu": "10m", "memory": "256Mi"}}, "storageUri": "gs://kfserving-examples/models/tensorflow/flowers"}}}}'
headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'})
retries = Retry(total=0, connect=None, read=None, redirect=None, status=None)
timeout = Timeout(connect=None, read=None, total=None), chunked = False
response_conn = None, preload_content = True, decode_content = True
enforce_content_length = True

    def _make_request(
        self,
        conn: BaseHTTPConnection,
        method: str,
        url: str,
        body: _TYPE_BODY | None = None,
        headers: typing.Mapping[str, str] | None = None,
        retries: Retry | None = None,
        timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
        chunked: bool = False,
        response_conn: BaseHTTPConnection | None = None,
        preload_content: bool = True,
        decode_content: bool = True,
        enforce_content_length: bool = True,
    ) -> BaseHTTPResponse:
        """
        Perform a request on a given urllib connection object taken from our
        pool.

        :param conn:
            a connection from one of our connection pools

        :param method:
            HTTP request method (such as GET, POST, PUT, etc.)

        :param url:
            The URL to perform the request on.

        :param body:
            Data to send in the request body, either :class:`str`, :class:`bytes`,
            an iterable of :class:`str`/:class:`bytes`, or a file-like object.

        :param headers:
            Dictionary of custom headers to send, such as User-Agent,
            If-None-Match, etc. If None, pool headers are used. If provided,
            these headers completely replace any pool-specific headers.

        :param retries:
            Configure the number of retries to allow before raising a
            :class:`~urllib3.exceptions.MaxRetryError` exception.

            Pass ``None`` to retry until you receive a response. Pass a
            :class:`~urllib3.util.retry.Retry` object for fine-grained control
            over different types of retries.
            Pass an integer number to retry connection errors that many times,
            but no other types of errors. Pass zero to never retry.

            If ``False``, then retries are disabled and any exception is raised
            immediately. Also, instead of raising a MaxRetryError on redirects,
            the redirect response will be returned.

        :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.

        :param timeout:
            If specified, overrides the default timeout for this one
            request. It may be a float (in seconds) or an instance of
            :class:`urllib3.util.Timeout`.

        :param chunked:
            If True, urllib3 will send the body using chunked transfer
            encoding. Otherwise, urllib3 will send the body using the standard
            content-length form. Defaults to False.

        :param response_conn:
            Set this to ``None`` if you will handle releasing the connection or
            set the connection to have the response release it.

        :param preload_content:
            If True, the response's body will be preloaded during construction.

        :param decode_content:
            If True, will attempt to decode the body based on the
            'content-encoding' header.

        :param enforce_content_length:
            Enforce content length checking. Body returned by server must match
            value of Content-Length header, if present. Otherwise, raise error.
        """
        self.num_requests += 1

        timeout_obj = self._get_timeout(timeout)
        timeout_obj.start_connect()
        conn.timeout = Timeout.resolve_default_timeout(timeout_obj.connect_timeout)

        try:
            # Trigger any extra validation we need to do.
            try:
                self._validate_conn(conn)
            except (SocketTimeout, BaseSSLError) as e:
                self._raise_timeout(err=e, url=url, timeout_value=conn.timeout)
                raise

            # _validate_conn() starts the connection to an HTTPS proxy
            # so we need to wrap errors with 'ProxyError' here too.
[e2e-predictor] except ( [e2e-predictor] OSError, [e2e-predictor] NewConnectionError, [e2e-predictor] TimeoutError, [e2e-predictor] BaseSSLError, [e2e-predictor] CertificateError, [e2e-predictor] SSLError, [e2e-predictor] ) as e: [e2e-predictor] new_e: Exception = e [e2e-predictor] if isinstance(e, (BaseSSLError, CertificateError)): [e2e-predictor] new_e = SSLError(e) [e2e-predictor] # If the connection didn't successfully connect to it's proxy [e2e-predictor] # then there [e2e-predictor] if isinstance( [e2e-predictor] new_e, (OSError, NewConnectionError, TimeoutError, SSLError) [e2e-predictor] ) and (conn and conn.proxy and not conn.has_connected_to_proxy): [e2e-predictor] new_e = _wrap_proxy_error(new_e, conn.proxy.scheme) [e2e-predictor] > raise new_e [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:488: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] conn = [e2e-predictor] method = 'POST' [e2e-predictor] url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices' [e2e-predictor] body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io... 
"requests": {"cpu": "10m", "memory": "256Mi"}}, "storageUri": "gs://kfserving-examples/models/tensorflow/flowers"}}}}' [e2e-predictor] headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'}) [e2e-predictor] retries = Retry(total=0, connect=None, read=None, redirect=None, status=None) [e2e-predictor] timeout = Timeout(connect=None, read=None, total=None), chunked = False [e2e-predictor] response_conn = None, preload_content = True, decode_content = True [e2e-predictor] enforce_content_length = True [e2e-predictor] [e2e-predictor] def _make_request( [e2e-predictor] self, [e2e-predictor] conn: BaseHTTPConnection, [e2e-predictor] method: str, [e2e-predictor] url: str, [e2e-predictor] body: _TYPE_BODY | None = None, [e2e-predictor] headers: typing.Mapping[str, str] | None = None, [e2e-predictor] retries: Retry | None = None, [e2e-predictor] timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, [e2e-predictor] chunked: bool = False, [e2e-predictor] response_conn: BaseHTTPConnection | None = None, [e2e-predictor] preload_content: bool = True, [e2e-predictor] decode_content: bool = True, [e2e-predictor] enforce_content_length: bool = True, [e2e-predictor] ) -> BaseHTTPResponse: [e2e-predictor] """ [e2e-predictor] Perform a request on a given urllib connection object taken from our [e2e-predictor] pool. [e2e-predictor] [e2e-predictor] :param conn: [e2e-predictor] a connection from one of our connection pools [e2e-predictor] [e2e-predictor] :param method: [e2e-predictor] HTTP request method (such as GET, POST, PUT, etc.) [e2e-predictor] [e2e-predictor] :param url: [e2e-predictor] The URL to perform the request on. [e2e-predictor] [e2e-predictor] :param body: [e2e-predictor] Data to send in the request body, either :class:`str`, :class:`bytes`, [e2e-predictor] an iterable of :class:`str`/:class:`bytes`, or a file-like object. 
[e2e-predictor] 
[e2e-predictor]         :param headers:
[e2e-predictor]             Dictionary of custom headers to send, such as User-Agent,
[e2e-predictor]             If-None-Match, etc. If None, pool headers are used. If provided,
[e2e-predictor]             these headers completely replace any pool-specific headers.
[e2e-predictor] 
[e2e-predictor]         :param retries:
[e2e-predictor]             Configure the number of retries to allow before raising a
[e2e-predictor]             :class:`~urllib3.exceptions.MaxRetryError` exception.
[e2e-predictor] 
[e2e-predictor]             Pass ``None`` to retry until you receive a response. Pass a
[e2e-predictor]             :class:`~urllib3.util.retry.Retry` object for fine-grained control
[e2e-predictor]             over different types of retries.
[e2e-predictor]             Pass an integer number to retry connection errors that many times,
[e2e-predictor]             but no other types of errors. Pass zero to never retry.
[e2e-predictor] 
[e2e-predictor]             If ``False``, then retries are disabled and any exception is raised
[e2e-predictor]             immediately. Also, instead of raising a MaxRetryError on redirects,
[e2e-predictor]             the redirect response will be returned.
[e2e-predictor] 
[e2e-predictor]         :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
[e2e-predictor] 
[e2e-predictor]         :param timeout:
[e2e-predictor]             If specified, overrides the default timeout for this one
[e2e-predictor]             request. It may be a float (in seconds) or an instance of
[e2e-predictor]             :class:`urllib3.util.Timeout`.
[e2e-predictor] 
[e2e-predictor]         :param chunked:
[e2e-predictor]             If True, urllib3 will send the body using chunked transfer
[e2e-predictor]             encoding. Otherwise, urllib3 will send the body using the standard
[e2e-predictor]             content-length form. Defaults to False.
[e2e-predictor] 
[e2e-predictor]         :param response_conn:
[e2e-predictor]             Set this to ``None`` if you will handle releasing the connection or
[e2e-predictor]             set the connection to have the response release it.
[e2e-predictor] 
[e2e-predictor]         :param preload_content:
[e2e-predictor]           If True, the response's body will be preloaded during construction.
[e2e-predictor] 
[e2e-predictor]         :param decode_content:
[e2e-predictor]           If True, will attempt to decode the body based on the
[e2e-predictor]           'content-encoding' header.
[e2e-predictor] 
[e2e-predictor]         :param enforce_content_length:
[e2e-predictor]             Enforce content length checking. Body returned by server must match
[e2e-predictor]             value of Content-Length header, if present. Otherwise, raise error.
[e2e-predictor]         """
[e2e-predictor]         self.num_requests += 1
[e2e-predictor] 
[e2e-predictor]         timeout_obj = self._get_timeout(timeout)
[e2e-predictor]         timeout_obj.start_connect()
[e2e-predictor]         conn.timeout = Timeout.resolve_default_timeout(timeout_obj.connect_timeout)
[e2e-predictor] 
[e2e-predictor]         try:
[e2e-predictor]             # Trigger any extra validation we need to do.
[e2e-predictor]             try:
[e2e-predictor] >               self._validate_conn(conn)
[e2e-predictor] 
[e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:464: 
[e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
[e2e-predictor] 
[e2e-predictor] self = 
[e2e-predictor] conn = 
[e2e-predictor] 
[e2e-predictor]     def _validate_conn(self, conn: BaseHTTPConnection) -> None:
[e2e-predictor]         """
[e2e-predictor]         Called right before a request is made, after the socket is created.
[e2e-predictor]         """
[e2e-predictor]         super()._validate_conn(conn)
[e2e-predictor] 
[e2e-predictor]         # Force connect early to allow us to validate the connection.
[e2e-predictor]         if conn.is_closed:
[e2e-predictor] >           conn.connect()
[e2e-predictor] 
[e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:1093: 
[e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
[e2e-predictor] 
[e2e-predictor] self = 
[e2e-predictor] 
[e2e-predictor]     def connect(self) -> None:
[e2e-predictor]         # Today we don't need to be doing this step before the /actual/ socket
[e2e-predictor]         # connection, however in the future we'll need to decide whether to
[e2e-predictor]         # create a new socket or re-use an existing "shared" socket as a part
[e2e-predictor]         # of the HTTP/2 handshake dance.
[e2e-predictor]         if self._tunnel_host is not None and self._tunnel_port is not None:
[e2e-predictor]             probe_http2_host = self._tunnel_host
[e2e-predictor]             probe_http2_port = self._tunnel_port
[e2e-predictor]         else:
[e2e-predictor]             probe_http2_host = self.host
[e2e-predictor]             probe_http2_port = self.port
[e2e-predictor] 
[e2e-predictor]         # Check if the target origin supports HTTP/2.
[e2e-predictor]         # If the value comes back as 'None' it means that the current thread
[e2e-predictor]         # is probing for HTTP/2 support. Otherwise, we're waiting for another
[e2e-predictor]         # probe to complete, or we get a value right away.
[e2e-predictor]         target_supports_http2: bool | None
[e2e-predictor]         if "h2" in ssl_.ALPN_PROTOCOLS:
[e2e-predictor]             target_supports_http2 = http2_probe.acquire_and_get(
[e2e-predictor]                 host=probe_http2_host, port=probe_http2_port
[e2e-predictor]             )
[e2e-predictor]         else:
[e2e-predictor]             # If HTTP/2 isn't going to be offered it doesn't matter if
[e2e-predictor]             # the target supports HTTP/2. Don't want to make a probe.
[e2e-predictor]             target_supports_http2 = False
[e2e-predictor] 
[e2e-predictor]         if self._connect_callback is not None:
[e2e-predictor]             self._connect_callback(
[e2e-predictor]                 "before connect",
[e2e-predictor]                 thread_id=threading.get_ident(),
[e2e-predictor]                 target_supports_http2=target_supports_http2,
[e2e-predictor]             )
[e2e-predictor] 
[e2e-predictor]         try:
[e2e-predictor]             sock: socket.socket | ssl.SSLSocket
[e2e-predictor] >           self.sock = sock = self._new_conn()
[e2e-predictor] 
[e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connection.py:759: 
[e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
[e2e-predictor] 
[e2e-predictor] self = 
[e2e-predictor] 
[e2e-predictor]     def _new_conn(self) -> socket.socket:
[e2e-predictor]         """Establish a socket connection and set nodelay settings on it.
[e2e-predictor] 
[e2e-predictor]         :return: New socket connection.
[e2e-predictor]         """
[e2e-predictor]         try:
[e2e-predictor]             sock = connection.create_connection(
[e2e-predictor]                 (self._dns_host, self.port),
[e2e-predictor]                 self.timeout,
[e2e-predictor]                 source_address=self.source_address,
[e2e-predictor]                 socket_options=self.socket_options,
[e2e-predictor]             )
[e2e-predictor]         except socket.gaierror as e:
[e2e-predictor] >           raise NameResolutionError(self.host, self, e) from e
[e2e-predictor] E           urllib3.exceptions.NameResolutionError: HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)
[e2e-predictor] 
[e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connection.py:211: NameResolutionError
[e2e-predictor] 
[e2e-predictor] The above exception was the direct cause of the following exception:
[e2e-predictor] 
[e2e-predictor] rest_v1_client = 
[e2e-predictor] 
[e2e-predictor]     @pytest.mark.predictor
[e2e-predictor]     @pytest.mark.asyncio(scope="session")
[e2e-predictor]     async def test_tensorflow_runtime_kserve(rest_v1_client):
[e2e-predictor]         service_name = "isvc-tensorflow-runtime"
[e2e-predictor]         predictor = V1beta1PredictorSpec(
[e2e-predictor]             min_replicas=1,
[e2e-predictor]             model=V1beta1ModelSpec(
[e2e-predictor]                 model_format=V1beta1ModelFormat(
[e2e-predictor]                     name="tensorflow",
[e2e-predictor]                 ),
[e2e-predictor]                 storage_uri="gs://kfserving-examples/models/tensorflow/flowers",
[e2e-predictor]                 resources=V1ResourceRequirements(
[e2e-predictor]                     requests={"cpu": "10m", "memory": "256Mi"},
[e2e-predictor]                     limits={"cpu": "100m", "memory": "512Mi"},
[e2e-predictor]                 ),
[e2e-predictor]             ),
[e2e-predictor]         )
[e2e-predictor] 
[e2e-predictor]         isvc = V1beta1InferenceService(
[e2e-predictor]             api_version=constants.KSERVE_V1BETA1,
[e2e-predictor]             kind=constants.KSERVE_KIND_INFERENCESERVICE,
[e2e-predictor]             metadata=client.V1ObjectMeta(
[e2e-predictor]                 name=service_name,
[e2e-predictor]                 namespace=KSERVE_TEST_NAMESPACE,
[e2e-predictor]                 labels={
[e2e-predictor]                     constants.KSERVE_LABEL_NETWORKING_VISIBILITY: constants.KSERVE_LABEL_NETWORKING_VISIBILITY_EXPOSED,
[e2e-predictor]                 },
[e2e-predictor]             ),
[e2e-predictor]             spec=V1beta1InferenceServiceSpec(predictor=predictor),
[e2e-predictor]         )
[e2e-predictor] 
[e2e-predictor]         kserve_client = KServeClient(
[e2e-predictor]             config_file=os.environ.get("KUBECONFIG", "~/.kube/config")
[e2e-predictor]         )
[e2e-predictor] >       kserve_client.create(isvc)
[e2e-predictor] 
[e2e-predictor] predictor/test_tensorflow.py:106: 
[e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
[e2e-predictor] 
[e2e-predictor] self = 
[e2e-predictor] inferenceservice = {'api_version': 'serving.kserve.io/v1beta1',
[e2e-predictor]  'kind': 'InferenceService',
[e2e-predictor]  'metadata': {'annotations': None,
[e2e-predictor] ...
[e2e-predictor]  'worker_spec': None,
[e2e-predictor]  'xgboost': None},
[e2e-predictor]  'transformer': None},
[e2e-predictor]  'status': None}
[e2e-predictor] namespace = 'kserve-ci-e2e-test', watch = False, timeout_seconds = 600
[e2e-predictor] 
[e2e-predictor]     def create(
[e2e-predictor]         self, inferenceservice, namespace=None, watch=False, timeout_seconds=600
[e2e-predictor]     ):  # pylint:disable=inconsistent-return-statements
[e2e-predictor]         """
[e2e-predictor]         Create the inference service
[e2e-predictor]         :param inferenceservice: inference service object
[e2e-predictor]         :param namespace: defaults to current or default namespace
[e2e-predictor]         :param watch: True to watch the created service until timeout elapsed or status is ready
[e2e-predictor]         :param timeout_seconds: timeout seconds for watch, default to 600s
[e2e-predictor]         :return: created inference service
[e2e-predictor]         """
[e2e-predictor] 
[e2e-predictor]         version = inferenceservice.api_version.split("/")[1]
[e2e-predictor] 
[e2e-predictor]         if namespace is None:
[e2e-predictor]             namespace = utils.get_isvc_namespace(inferenceservice)
[e2e-predictor] 
[e2e-predictor]         try:
[e2e-predictor] >           outputs = self.api_instance.create_namespaced_custom_object(
[e2e-predictor]                 constants.KSERVE_GROUP,
[e2e-predictor]                 version,
[e2e-predictor]                 namespace,
[e2e-predictor]                 constants.KSERVE_PLURAL_INFERENCESERVICE,
[e2e-predictor]                 inferenceservice,
[e2e-predictor]             )
[e2e-predictor] 
[e2e-predictor] ../../python/kserve/kserve/api/kserve_client.py:145: 
[e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
[e2e-predictor] 
[e2e-predictor] self = 
[e2e-predictor] group = 'serving.kserve.io', version = 'v1beta1'
[e2e-predictor] namespace = 'kserve-ci-e2e-test', plural = 'inferenceservices'
[e2e-predictor] body = {'api_version': 'serving.kserve.io/v1beta1',
[e2e-predictor]  'kind': 'InferenceService',
[e2e-predictor]  'metadata': {'annotations': None,
[e2e-predictor] ...
[e2e-predictor]  'worker_spec': None,
[e2e-predictor]  'xgboost': None},
[e2e-predictor]  'transformer': None},
[e2e-predictor]  'status': None}
[e2e-predictor] kwargs = {'_return_http_data_only': True}
[e2e-predictor] 
[e2e-predictor]     def create_namespaced_custom_object(self, group, version, namespace, plural, body, **kwargs):  # noqa: E501
[e2e-predictor]         """create_namespaced_custom_object  # noqa: E501
[e2e-predictor] 
[e2e-predictor]         Creates a namespace scoped Custom object  # noqa: E501
[e2e-predictor]         This method makes a synchronous HTTP request by default. To make an
[e2e-predictor]         asynchronous HTTP request, please pass async_req=True
[e2e-predictor]         >>> thread = api.create_namespaced_custom_object(group, version, namespace, plural, body, async_req=True)
[e2e-predictor]         >>> result = thread.get()
[e2e-predictor] 
[e2e-predictor]         :param async_req bool: execute request asynchronously
[e2e-predictor]         :param str group: The custom resource's group name (required)
[e2e-predictor]         :param str version: The custom resource's version (required)
[e2e-predictor]         :param str namespace: The custom resource's namespace (required)
[e2e-predictor]         :param str plural: The custom resource's plural name. For TPRs this would be lowercase plural kind. (required)
[e2e-predictor]         :param object body: The JSON schema of the Resource to create. (required)
[e2e-predictor]         :param str pretty: If 'true', then the output is pretty printed.
[e2e-predictor]         :param str dry_run: When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed
[e2e-predictor]         :param str field_manager: fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint.
[e2e-predictor]         :param str field_validation: fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. (optional)
[e2e-predictor]         :param _preload_content: if False, the urllib3.HTTPResponse object will
[e2e-predictor]                                  be returned without reading/decoding response
[e2e-predictor]                                  data. Default is True.
[e2e-predictor]         :param _request_timeout: timeout setting for this request. If one
[e2e-predictor]                                  number provided, it will be total request
[e2e-predictor]                                  timeout. It can also be a pair (tuple) of
[e2e-predictor]                                  (connection, read) timeouts.
[e2e-predictor]         :return: object
[e2e-predictor]                  If the method is called asynchronously,
[e2e-predictor]                  returns the request thread.
[e2e-predictor]         """
[e2e-predictor]         kwargs['_return_http_data_only'] = True
[e2e-predictor] >       return self.create_namespaced_custom_object_with_http_info(group, version, namespace, plural, body, **kwargs)  # noqa: E501
[e2e-predictor] 
[e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api/custom_objects_api.py:231: 
[e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
[e2e-predictor] 
[e2e-predictor] self = 
[e2e-predictor] group = 'serving.kserve.io', version = 'v1beta1'
[e2e-predictor] namespace = 'kserve-ci-e2e-test', plural = 'inferenceservices'
[e2e-predictor] body = {'api_version': 'serving.kserve.io/v1beta1',
[e2e-predictor]  'kind': 'InferenceService',
[e2e-predictor]  'metadata': {'annotations': None,
[e2e-predictor] ... 'worker_spec': None,
[e2e-predictor]  'xgboost': None},
[e2e-predictor]  'transformer': None},
[e2e-predictor]  'status': None}
[e2e-predictor] kwargs = {'_return_http_data_only': True}
[e2e-predictor] local_var_params = {'_return_http_data_only': True, 'all_params': ['group', 'version', 'namespace', 'plural', 'body', 'pretty', ...], 'au... 'worker_spec': None,
[e2e-predictor]  'xgboost': None},
[e2e-predictor]  'transformer': None},
[e2e-predictor]  'status': None}, ...}
[e2e-predictor] all_params = ['group', 'version', 'namespace', 'plural', 'body', 'pretty', ...]
[e2e-predictor] key = '_return_http_data_only', val = True, collection_formats = {}
[e2e-predictor] path_params = {'group': 'serving.kserve.io', 'namespace': 'kserve-ci-e2e-test', 'plural': 'inferenceservices', 'version': 'v1beta1'}
[e2e-predictor] query_params = []
[e2e-predictor] 
[e2e-predictor]     def create_namespaced_custom_object_with_http_info(self, group, version, namespace, plural, body, **kwargs):  # noqa: E501
[e2e-predictor]         """create_namespaced_custom_object  # noqa: E501
[e2e-predictor] 
[e2e-predictor]         Creates a namespace scoped Custom object  # noqa: E501
[e2e-predictor]         This method makes a synchronous HTTP request by default. To make an
[e2e-predictor]         asynchronous HTTP request, please pass async_req=True
[e2e-predictor]         >>> thread = api.create_namespaced_custom_object_with_http_info(group, version, namespace, plural, body, async_req=True)
[e2e-predictor]         >>> result = thread.get()
[e2e-predictor] 
[e2e-predictor]         :param async_req bool: execute request asynchronously
[e2e-predictor]         :param str group: The custom resource's group name (required)
[e2e-predictor]         :param str version: The custom resource's version (required)
[e2e-predictor]         :param str namespace: The custom resource's namespace (required)
[e2e-predictor]         :param str plural: The custom resource's plural name. For TPRs this would be lowercase plural kind. (required)
[e2e-predictor]         :param object body: The JSON schema of the Resource to create. (required)
[e2e-predictor]         :param str pretty: If 'true', then the output is pretty printed.
[e2e-predictor]         :param str dry_run: When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed
[e2e-predictor]         :param str field_manager: fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint.
[e2e-predictor]         :param str field_validation: fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. (optional)
[e2e-predictor]         :param _return_http_data_only: response data without head status code
[e2e-predictor]                                        and headers
[e2e-predictor]         :param _preload_content: if False, the urllib3.HTTPResponse object will
[e2e-predictor]                                  be returned without reading/decoding response
[e2e-predictor]                                  data. Default is True.
[e2e-predictor]         :param _request_timeout: timeout setting for this request. If one
[e2e-predictor]                                  number provided, it will be total request
[e2e-predictor]                                  timeout. It can also be a pair (tuple) of
[e2e-predictor]                                  (connection, read) timeouts.
[e2e-predictor]         :return: tuple(object, status_code(int), headers(HTTPHeaderDict))
[e2e-predictor]                  If the method is called asynchronously,
[e2e-predictor]                  returns the request thread.
[e2e-predictor]         """
[e2e-predictor] 
[e2e-predictor]         local_var_params = locals()
[e2e-predictor] 
[e2e-predictor]         all_params = [
[e2e-predictor]             'group',
[e2e-predictor]             'version',
[e2e-predictor]             'namespace',
[e2e-predictor]             'plural',
[e2e-predictor]             'body',
[e2e-predictor]             'pretty',
[e2e-predictor]             'dry_run',
[e2e-predictor]             'field_manager',
[e2e-predictor]             'field_validation'
[e2e-predictor]         ]
[e2e-predictor]         all_params.extend(
[e2e-predictor]             [
[e2e-predictor]                 'async_req',
[e2e-predictor]                 '_return_http_data_only',
[e2e-predictor]                 '_preload_content',
[e2e-predictor]                 '_request_timeout'
[e2e-predictor]             ]
[e2e-predictor]         )
[e2e-predictor] 
[e2e-predictor]         for key, val in six.iteritems(local_var_params['kwargs']):
[e2e-predictor]             if key not in all_params:
[e2e-predictor]                 raise ApiTypeError(
[e2e-predictor]                     "Got an unexpected keyword argument '%s'"
[e2e-predictor]                     " to method create_namespaced_custom_object" % key
[e2e-predictor]                 )
[e2e-predictor]             local_var_params[key] = val
[e2e-predictor]         del local_var_params['kwargs']
[e2e-predictor]         # verify the required parameter 'group' is set
[e2e-predictor]         if self.api_client.client_side_validation and ('group' not in local_var_params or  # noqa: E501
[e2e-predictor]                                                        local_var_params['group'] is None):  # noqa: E501
[e2e-predictor]             raise ApiValueError("Missing the required parameter `group` when calling `create_namespaced_custom_object`")  # noqa: E501
[e2e-predictor]         # verify the required parameter 'version' is set
[e2e-predictor]         if self.api_client.client_side_validation and ('version' not in local_var_params or  # noqa: E501
[e2e-predictor]                                                        local_var_params['version'] is None):  # noqa: E501
[e2e-predictor]             raise ApiValueError("Missing the required parameter `version` when calling `create_namespaced_custom_object`")  # noqa: E501
[e2e-predictor]         # verify the required parameter 'namespace' is set
[e2e-predictor]         if self.api_client.client_side_validation and ('namespace' not in local_var_params or  # noqa: E501
[e2e-predictor]                                                        local_var_params['namespace'] is None):  # noqa: E501
[e2e-predictor]             raise ApiValueError("Missing the required parameter `namespace` when calling `create_namespaced_custom_object`")  # noqa: E501
[e2e-predictor]         # verify the required parameter 'plural' is set
[e2e-predictor]         if self.api_client.client_side_validation and ('plural' not in local_var_params or  # noqa: E501
[e2e-predictor]                                                        local_var_params['plural'] is None):  # noqa: E501
[e2e-predictor]             raise ApiValueError("Missing the required parameter `plural` when calling `create_namespaced_custom_object`")  # noqa: E501
[e2e-predictor]         # verify the required parameter 'body' is set
[e2e-predictor]         if self.api_client.client_side_validation and ('body' not in local_var_params or  # noqa: E501
[e2e-predictor]                                                        local_var_params['body'] is None):  # noqa: E501
[e2e-predictor]             raise ApiValueError("Missing the required parameter `body` when calling `create_namespaced_custom_object`")  # noqa: E501
[e2e-predictor] 
[e2e-predictor]         collection_formats = {}
[e2e-predictor] 
[e2e-predictor]         path_params = {}
[e2e-predictor]         if 'group' in local_var_params:
[e2e-predictor]             path_params['group'] = local_var_params['group']  # noqa: E501
[e2e-predictor]         if 'version' in local_var_params:
[e2e-predictor]             path_params['version'] = local_var_params['version']  # noqa: E501
[e2e-predictor]         if 'namespace' in local_var_params:
[e2e-predictor]             path_params['namespace'] = local_var_params['namespace']  # noqa: E501
[e2e-predictor]         if 'plural' in local_var_params:
[e2e-predictor]             path_params['plural'] = local_var_params['plural']  # noqa: E501
[e2e-predictor] 
[e2e-predictor]         query_params = []
[e2e-predictor]         if 'pretty' in local_var_params and local_var_params['pretty'] is not None:  # noqa: E501
[e2e-predictor]             query_params.append(('pretty', local_var_params['pretty']))  # noqa: E501
[e2e-predictor]         if 'dry_run' in local_var_params and local_var_params['dry_run'] is not None:  # noqa: E501
[e2e-predictor]             query_params.append(('dryRun', local_var_params['dry_run']))  # noqa: E501
[e2e-predictor]         if 'field_manager' in local_var_params and local_var_params['field_manager'] is not None:  # noqa: E501
[e2e-predictor]             query_params.append(('fieldManager', local_var_params['field_manager']))  # noqa: E501
[e2e-predictor]         if 'field_validation' in local_var_params and local_var_params['field_validation'] is not None:  # noqa: E501
[e2e-predictor]             query_params.append(('fieldValidation', local_var_params['field_validation']))  # noqa: E501
[e2e-predictor] 
[e2e-predictor]         header_params = {}
[e2e-predictor] 
[e2e-predictor]         form_params = []
[e2e-predictor]         local_var_files = {}
[e2e-predictor] 
[e2e-predictor]         body_params = None
[e2e-predictor]         if 'body' in local_var_params:
[e2e-predictor]             body_params = local_var_params['body']
[e2e-predictor]         # HTTP header `Accept`
[e2e-predictor]         header_params['Accept'] = self.api_client.select_header_accept(
[e2e-predictor]             ['application/json'])  # noqa: E501
[e2e-predictor] 
[e2e-predictor]         # Authentication setting
[e2e-predictor]         auth_settings = ['BearerToken']  # noqa: E501
[e2e-predictor] 
[e2e-predictor] >       return self.api_client.call_api(
[e2e-predictor]             '/apis/{group}/{version}/namespaces/{namespace}/{plural}', 'POST',
[e2e-predictor]             path_params,
[e2e-predictor]             query_params,
[e2e-predictor]             header_params,
[e2e-predictor]             body=body_params,
[e2e-predictor]             post_params=form_params,
[e2e-predictor]             files=local_var_files,
[e2e-predictor]             response_type='object',  # noqa: E501
[e2e-predictor]             auth_settings=auth_settings,
[e2e-predictor]             async_req=local_var_params.get('async_req'),
[e2e-predictor]             _return_http_data_only=local_var_params.get('_return_http_data_only'),  # noqa: E501
[e2e-predictor]             _preload_content=local_var_params.get('_preload_content', True),
[e2e-predictor]             _request_timeout=local_var_params.get('_request_timeout'),
[e2e-predictor]             collection_formats=collection_formats)
[e2e-predictor] 
[e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api/custom_objects_api.py:354: 
[e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
[e2e-predictor] 
[e2e-predictor] self = 
[e2e-predictor] resource_path = '/apis/{group}/{version}/namespaces/{namespace}/{plural}'
[e2e-predictor] method = 'POST'
[e2e-predictor] path_params = {'group': 'serving.kserve.io', 'namespace': 'kserve-ci-e2e-test', 'plural': 'inferenceservices', 'version': 'v1beta1'}
[e2e-predictor] query_params = []
[e2e-predictor] header_params = {'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
[e2e-predictor] body = {'api_version': 'serving.kserve.io/v1beta1',
[e2e-predictor]  'kind': 'InferenceService',
[e2e-predictor]  'metadata': {'annotations': None,
[e2e-predictor] ... 'worker_spec': None,
[e2e-predictor]  'xgboost': None},
[e2e-predictor]  'transformer': None},
[e2e-predictor]  'status': None}
[e2e-predictor] post_params = [], files = {}, response_type = 'object'
[e2e-predictor] auth_settings = ['BearerToken'], async_req = None, _return_http_data_only = True
[e2e-predictor] collection_formats = {}, _preload_content = True, _request_timeout = None
[e2e-predictor] _host = None
[e2e-predictor] 
[e2e-predictor]     def call_api(self, resource_path, method,
[e2e-predictor]                  path_params=None, query_params=None, header_params=None,
[e2e-predictor]                  body=None, post_params=None, files=None,
[e2e-predictor]                  response_type=None, auth_settings=None, async_req=None,
[e2e-predictor]                  _return_http_data_only=None, collection_formats=None,
[e2e-predictor]                  _preload_content=True, _request_timeout=None, _host=None):
[e2e-predictor]         """Makes the HTTP request (synchronous) and returns deserialized data.
[e2e-predictor] 
[e2e-predictor]         To make an async_req request, set the async_req parameter.
[e2e-predictor] 
[e2e-predictor]         :param resource_path: Path to method endpoint.
[e2e-predictor]         :param method: Method to call.
[e2e-predictor]         :param path_params: Path parameters in the url.
[e2e-predictor]         :param query_params: Query parameters in the url.
[e2e-predictor]         :param header_params: Header parameters to be
[e2e-predictor]             placed in the request header.
[e2e-predictor]         :param body: Request body.
[e2e-predictor]         :param post_params dict: Request post form parameters,
[e2e-predictor]             for `application/x-www-form-urlencoded`, `multipart/form-data`.
[e2e-predictor]         :param auth_settings list: Auth Settings names for the request.
[e2e-predictor]         :param response: Response data type.
[e2e-predictor]         :param files dict: key -> filename, value -> filepath,
[e2e-predictor]             for `multipart/form-data`.
[e2e-predictor]         :param async_req bool: execute request asynchronously
[e2e-predictor]         :param _return_http_data_only: response data without head status code
[e2e-predictor]                                        and headers
[e2e-predictor]         :param collection_formats: dict of collection formats for path, query,
[e2e-predictor]             header, and post parameters.
[e2e-predictor]         :param _preload_content: if False, the urllib3.HTTPResponse object will
[e2e-predictor]                                  be returned without reading/decoding response
[e2e-predictor]                                  data. Default is True.
[e2e-predictor]         :param _request_timeout: timeout setting for this request. If one
[e2e-predictor]                                  number provided, it will be total request
[e2e-predictor]                                  timeout. It can also be a pair (tuple) of
[e2e-predictor]                                  (connection, read) timeouts.
[e2e-predictor]         :return:
[e2e-predictor]             If async_req parameter is True,
[e2e-predictor]             the request will be called asynchronously.
[e2e-predictor]             The method will return the request thread.
[e2e-predictor]             If parameter async_req is False or missing,
[e2e-predictor]             then the method will return the response directly.
[e2e-predictor]         """
[e2e-predictor]         if not async_req:
[e2e-predictor] >           return self.__call_api(resource_path, method,
[e2e-predictor]                                    path_params, query_params, header_params,
[e2e-predictor]                                    body, post_params, files,
[e2e-predictor]                                    response_type, auth_settings,
[e2e-predictor]                                    _return_http_data_only, collection_formats,
[e2e-predictor]                                    _preload_content, _request_timeout, _host)
[e2e-predictor] 
[e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api_client.py:348: 
[e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
[e2e-predictor] 
[e2e-predictor] self = 
[e2e-predictor] resource_path = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
[e2e-predictor] method = 'POST'
[e2e-predictor] path_params = [('group', 'serving.kserve.io'), ('version', 'v1beta1'), ('namespace', 'kserve-ci-e2e-test'), ('plural', 'inferenceservices')]
[e2e-predictor] query_params = []
[e2e-predictor] header_params = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
[e2e-predictor] body = {'apiVersion': 'serving.kserve.io/v1beta1', 'kind': 'InferenceService', 'metadata': {'labels': {'networking.kserve.io/..., 'requests': {'cpu': '10m', 'memory': '256Mi'}}, 'storageUri': 'gs://kfserving-examples/models/tensorflow/flowers'}}}}
[e2e-predictor] post_params = [], files = {}, response_type = 'object'
[e2e-predictor] auth_settings = ['BearerToken'], _return_http_data_only = True
[e2e-predictor] collection_formats = {}, _preload_content = True, _request_timeout = None
[e2e-predictor] _host = None
[e2e-predictor] 
[e2e-predictor]     def __call_api(
[e2e-predictor]             self, resource_path, method, path_params=None,
[e2e-predictor]             query_params=None, header_params=None, body=None, post_params=None,
[e2e-predictor]             files=None, response_type=None, auth_settings=None,
[e2e-predictor]             _return_http_data_only=None, collection_formats=None,
[e2e-predictor] _preload_content=True, _request_timeout=None, _host=None): [e2e-predictor] [e2e-predictor] config = self.configuration [e2e-predictor] [e2e-predictor] # header parameters [e2e-predictor] header_params = header_params or {} [e2e-predictor] header_params.update(self.default_headers) [e2e-predictor] if self.cookie: [e2e-predictor] header_params['Cookie'] = self.cookie [e2e-predictor] if header_params: [e2e-predictor] header_params = self.sanitize_for_serialization(header_params) [e2e-predictor] header_params = dict(self.parameters_to_tuples(header_params, [e2e-predictor] collection_formats)) [e2e-predictor] [e2e-predictor] # path parameters [e2e-predictor] if path_params: [e2e-predictor] path_params = self.sanitize_for_serialization(path_params) [e2e-predictor] path_params = self.parameters_to_tuples(path_params, [e2e-predictor] collection_formats) [e2e-predictor] for k, v in path_params: [e2e-predictor] # specified safe chars, encode everything [e2e-predictor] resource_path = resource_path.replace( [e2e-predictor] '{%s}' % k, [e2e-predictor] quote(str(v), safe=config.safe_chars_for_path_param) [e2e-predictor] ) [e2e-predictor] [e2e-predictor] # query parameters [e2e-predictor] if query_params: [e2e-predictor] query_params = self.sanitize_for_serialization(query_params) [e2e-predictor] query_params = self.parameters_to_tuples(query_params, [e2e-predictor] collection_formats) [e2e-predictor] [e2e-predictor] # post parameters [e2e-predictor] if post_params or files: [e2e-predictor] post_params = post_params if post_params else [] [e2e-predictor] post_params = self.sanitize_for_serialization(post_params) [e2e-predictor] post_params = self.parameters_to_tuples(post_params, [e2e-predictor] collection_formats) [e2e-predictor] post_params.extend(self.files_parameters(files)) [e2e-predictor] [e2e-predictor] # auth setting [e2e-predictor] self.update_params_for_auth(header_params, query_params, auth_settings) [e2e-predictor] [e2e-predictor] # body 
[e2e-predictor] if body: [e2e-predictor] body = self.sanitize_for_serialization(body) [e2e-predictor] [e2e-predictor] # request url [e2e-predictor] if _host is None: [e2e-predictor] url = self.configuration.host + resource_path [e2e-predictor] else: [e2e-predictor] # use server/host defined in path or operation instead [e2e-predictor] url = _host + resource_path [e2e-predictor] [e2e-predictor] # perform request and return response [e2e-predictor] > response_data = self.request( [e2e-predictor] method, url, query_params=query_params, headers=header_params, [e2e-predictor] post_params=post_params, body=body, [e2e-predictor] _preload_content=_preload_content, [e2e-predictor] _request_timeout=_request_timeout) [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api_client.py:180: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] method = 'POST' [e2e-predictor] url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices' [e2e-predictor] query_params = [] [e2e-predictor] headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'} [e2e-predictor] post_params = [] [e2e-predictor] body = {'apiVersion': 'serving.kserve.io/v1beta1', 'kind': 'InferenceService', 'metadata': {'labels': {'networking.kserve.io/..., 'requests': {'cpu': '10m', 'memory': '256Mi'}}, 'storageUri': 'gs://kfserving-examples/models/tensorflow/flowers'}}}} [e2e-predictor] _preload_content = True, _request_timeout = None [e2e-predictor] [e2e-predictor] def request(self, method, url, query_params=None, headers=None, [e2e-predictor] post_params=None, body=None, _preload_content=True, [e2e-predictor] _request_timeout=None): [e2e-predictor] """Makes the HTTP request using 
RESTClient.""" [e2e-predictor] if method == "GET": [e2e-predictor] return self.rest_client.GET(url, [e2e-predictor] query_params=query_params, [e2e-predictor] _preload_content=_preload_content, [e2e-predictor] _request_timeout=_request_timeout, [e2e-predictor] headers=headers) [e2e-predictor] elif method == "HEAD": [e2e-predictor] return self.rest_client.HEAD(url, [e2e-predictor] query_params=query_params, [e2e-predictor] _preload_content=_preload_content, [e2e-predictor] _request_timeout=_request_timeout, [e2e-predictor] headers=headers) [e2e-predictor] elif method == "OPTIONS": [e2e-predictor] return self.rest_client.OPTIONS(url, [e2e-predictor] query_params=query_params, [e2e-predictor] headers=headers, [e2e-predictor] _preload_content=_preload_content, [e2e-predictor] _request_timeout=_request_timeout) [e2e-predictor] elif method == "POST": [e2e-predictor] > return self.rest_client.POST(url, [e2e-predictor] query_params=query_params, [e2e-predictor] headers=headers, [e2e-predictor] post_params=post_params, [e2e-predictor] _preload_content=_preload_content, [e2e-predictor] _request_timeout=_request_timeout, [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api_client.py:391: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices' [e2e-predictor] headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'} [e2e-predictor] query_params = [], post_params = [] [e2e-predictor] body = {'apiVersion': 'serving.kserve.io/v1beta1', 'kind': 'InferenceService', 'metadata': {'labels': {'networking.kserve.io/..., 'requests': {'cpu': '10m', 'memory': '256Mi'}}, 'storageUri': 
'gs://kfserving-examples/models/tensorflow/flowers'}}}} [e2e-predictor] _preload_content = True, _request_timeout = None [e2e-predictor] [e2e-predictor] def POST(self, url, headers=None, query_params=None, post_params=None, [e2e-predictor] body=None, _preload_content=True, _request_timeout=None): [e2e-predictor] > return self.request("POST", url, [e2e-predictor] headers=headers, [e2e-predictor] query_params=query_params, [e2e-predictor] post_params=post_params, [e2e-predictor] _preload_content=_preload_content, [e2e-predictor] _request_timeout=_request_timeout, [e2e-predictor] body=body) [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/rest.py:279: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] method = 'POST' [e2e-predictor] url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices' [e2e-predictor] query_params = [] [e2e-predictor] headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'} [e2e-predictor] body = {'apiVersion': 'serving.kserve.io/v1beta1', 'kind': 'InferenceService', 'metadata': {'labels': {'networking.kserve.io/..., 'requests': {'cpu': '10m', 'memory': '256Mi'}}, 'storageUri': 'gs://kfserving-examples/models/tensorflow/flowers'}}}} [e2e-predictor] post_params = {}, _preload_content = True, _request_timeout = None [e2e-predictor] [e2e-predictor] def request(self, method, url, query_params=None, headers=None, [e2e-predictor] body=None, post_params=None, _preload_content=True, [e2e-predictor] _request_timeout=None): [e2e-predictor] """Perform requests. 
[e2e-predictor] [e2e-predictor] :param method: http request method [e2e-predictor] :param url: http request url [e2e-predictor] :param query_params: query parameters in the url [e2e-predictor] :param headers: http request headers [e2e-predictor] :param body: request json body, for `application/json` [e2e-predictor] :param post_params: request post parameters, [e2e-predictor] `application/x-www-form-urlencoded` [e2e-predictor] and `multipart/form-data` [e2e-predictor] :param _preload_content: if False, the urllib3.HTTPResponse object will [e2e-predictor] be returned without reading/decoding response [e2e-predictor] data. Default is True. [e2e-predictor] :param _request_timeout: timeout setting for this request. If one [e2e-predictor] number provided, it will be total request [e2e-predictor] timeout. It can also be a pair (tuple) of [e2e-predictor] (connection, read) timeouts. [e2e-predictor] """ [e2e-predictor] method = method.upper() [e2e-predictor] assert method in ['GET', 'HEAD', 'DELETE', 'POST', 'PUT', [e2e-predictor] 'PATCH', 'OPTIONS'] [e2e-predictor] [e2e-predictor] if post_params and body: [e2e-predictor] raise ApiValueError( [e2e-predictor] "body parameter cannot be used with post_params parameter." 
            )

        post_params = post_params or {}
        headers = headers or {}

        timeout = None
        if _request_timeout:
            if isinstance(_request_timeout, (int, ) if six.PY3 else (int, long)):  # noqa: E501,F821
                timeout = urllib3.Timeout(total=_request_timeout)
            elif (isinstance(_request_timeout, tuple) and
                  len(_request_timeout) == 2):
                timeout = urllib3.Timeout(
                    connect=_request_timeout[0], read=_request_timeout[1])

        if 'Content-Type' not in headers:
            headers['Content-Type'] = 'application/json'

        try:
            # For `POST`, `PUT`, `PATCH`, `OPTIONS`, `DELETE`
            if method in ['POST', 'PUT', 'PATCH', 'OPTIONS', 'DELETE']:
                if query_params:
                    url += '?' + urlencode(query_params)
                if (re.search('json', headers['Content-Type'], re.IGNORECASE) or
                        headers['Content-Type'] == 'application/apply-patch+yaml'):
                    if headers['Content-Type'] == 'application/json-patch+json':
                        if not isinstance(body, list):
                            headers['Content-Type'] = \
                                'application/strategic-merge-patch+json'
                    request_body = None
                    if body is not None:
                        request_body = json.dumps(body)
>                   r = self.pool_manager.request(
                        method, url,
                        body=request_body,
                        preload_content=_preload_content,
                        timeout=timeout,

../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/rest.py:172:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
method = 'POST'
url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io... "requests": {"cpu": "10m", "memory": "256Mi"}}, "storageUri": "gs://kfserving-examples/models/tensorflow/flowers"}}}}'
fields = None
headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
json = None
urlopen_kw = {'body': '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking....i"}}, "storageUri": "gs://kfserving-examples/models/tensorflow/flowers"}}}}', 'preload_content': True, 'timeout': None}

    def request(
        self,
        method: str,
        url: str,
        body: _TYPE_BODY | None = None,
        fields: _TYPE_FIELDS | None = None,
        headers: typing.Mapping[str, str] | None = None,
        json: typing.Any | None = None,
        **urlopen_kw: typing.Any,
    ) -> BaseHTTPResponse:
        """
        Make a request using :meth:`urlopen` with the appropriate encoding of
        ``fields`` based on the ``method`` used.

        This is a convenience method that requires the least amount of manual
        effort. It can be used in most situations, while still having the
        option to drop down to more specific methods when necessary, such as
        :meth:`request_encode_url`, :meth:`request_encode_body`,
        or even the lowest level :meth:`urlopen`.

        :param method:
            HTTP request method (such as GET, POST, PUT, etc.)

        :param url:
            The URL to perform the request on.

        :param body:
            Data to send in the request body, either :class:`str`, :class:`bytes`,
            an iterable of :class:`str`/:class:`bytes`, or a file-like object.

        :param fields:
            Data to encode and send in the URL or request body, depending on ``method``.

        :param headers:
            Dictionary of custom headers to send, such as User-Agent,
            If-None-Match, etc. If None, pool headers are used. If provided,
            these headers completely replace any pool-specific headers.

        :param json:
            Data to encode and send as JSON with UTF-encoded in the request body.
            The ``"Content-Type"`` header will be set to ``"application/json"``
            unless specified otherwise.
        """
        method = method.upper()

        if json is not None and body is not None:
            raise TypeError(
                "request got values for both 'body' and 'json' parameters which are mutually exclusive"
            )

        if json is not None:
            if headers is None:
                headers = self.headers

            if not ("content-type" in map(str.lower, headers.keys())):
                headers = HTTPHeaderDict(headers)
                headers["Content-Type"] = "application/json"

            body = _json.dumps(json, separators=(",", ":"), ensure_ascii=False).encode(
                "utf-8"
            )

        if body is not None:
            urlopen_kw["body"] = body

        if method in self._encode_url_methods:
            return self.request_encode_url(
                method,
                url,
                fields=fields,  # type: ignore[arg-type]
                headers=headers,
                **urlopen_kw,
            )
        else:
>           return self.request_encode_body(
                method, url, fields=fields, headers=headers, **urlopen_kw
            )

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/_request_methods.py:143:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
method = 'POST'
url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
fields = None
headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
encode_multipart = True, multipart_boundary = None
urlopen_kw = {'body': '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking....i"}}, "storageUri": "gs://kfserving-examples/models/tensorflow/flowers"}}}}', 'preload_content': True, 'timeout': None}
extra_kw = {'body': '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking....nt': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'}), 'preload_content': True, 'timeout': None}

    def request_encode_body(
        self,
        method: str,
        url: str,
        fields: _TYPE_FIELDS | None = None,
        headers: typing.Mapping[str, str] | None = None,
        encode_multipart: bool = True,
        multipart_boundary: str | None = None,
        **urlopen_kw: str,
    ) -> BaseHTTPResponse:
        """
        Make a request using :meth:`urlopen` with the ``fields`` encoded in
        the body. This is useful for request methods like POST, PUT, PATCH, etc.

        When ``encode_multipart=True`` (default), then
        :func:`urllib3.encode_multipart_formdata` is used to encode
        the payload with the appropriate content type. Otherwise
        :func:`urllib.parse.urlencode` is used with the
        'application/x-www-form-urlencoded' content type.

        Multipart encoding must be used when posting files, and it's reasonably
        safe to use it in other times too. However, it may break request
        signing, such as with OAuth.

        Supports an optional ``fields`` parameter of key/value strings AND
        key/filetuple. A filetuple is a (filename, data, MIME type) tuple where
        the MIME type is optional. For example::

            fields = {
                'foo': 'bar',
                'fakefile': ('foofile.txt', 'contents of foofile'),
                'realfile': ('barfile.txt', open('realfile').read()),
                'typedfile': ('bazfile.bin', open('bazfile').read(),
                              'image/jpeg'),
                'nonamefile': 'contents of nonamefile field',
            }

        When uploading a file, providing a filename (the first parameter of the
        tuple) is optional but recommended to best mimic behavior of browsers.

        Note that if ``headers`` are supplied, the 'Content-Type' header will
        be overwritten because it depends on the dynamic random boundary string
        which is used to compose the body of the request. The random boundary
        string can be explicitly set with the ``multipart_boundary`` parameter.
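The multipart behavior described in that docstring can be exercised directly with `urllib3.filepost.encode_multipart_formdata`; the boundary value `xXxBOUNDARYxXx` below is an arbitrary illustration, not anything from the failing request (which sent a JSON body, so this path was not taken). A sketch, assuming urllib3 is installed:

```python
from urllib3.filepost import encode_multipart_formdata

fields = {
    "foo": "bar",
    "fakefile": ("foofile.txt", "contents of foofile"),
}

# Pinning the boundary makes the output deterministic; by default a
# random boundary is chosen, which is why a caller-supplied
# Content-Type header gets overwritten for multipart requests.
body, content_type = encode_multipart_formdata(fields, boundary="xXxBOUNDARYxXx")

print(content_type)  # multipart/form-data; boundary=xXxBOUNDARYxXx
```

The returned `content_type` carries the boundary, so it must travel with the encoded body.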
        :param method:
            HTTP request method (such as GET, POST, PUT, etc.)

        :param url:
            The URL to perform the request on.

        :param fields:
            Data to encode and send in the request body.

        :param headers:
            Dictionary of custom headers to send, such as User-Agent,
            If-None-Match, etc. If None, pool headers are used. If provided,
            these headers completely replace any pool-specific headers.

        :param encode_multipart:
            If True, encode the ``fields`` using the multipart/form-data MIME
            format.

        :param multipart_boundary:
            If not specified, then a random boundary will be generated using
            :func:`urllib3.filepost.choose_boundary`.
        """
        if headers is None:
            headers = self.headers

        extra_kw: dict[str, typing.Any] = {"headers": HTTPHeaderDict(headers)}
        body: bytes | str

        if fields:
            if "body" in urlopen_kw:
                raise TypeError(
                    "request got values for both 'fields' and 'body', can only specify one."
                )

            if encode_multipart:
                body, content_type = encode_multipart_formdata(
                    fields, boundary=multipart_boundary
                )
            else:
                body, content_type = (
                    urlencode(fields),  # type: ignore[arg-type]
                    "application/x-www-form-urlencoded",
                )

            extra_kw["body"] = body
            extra_kw["headers"].setdefault("Content-Type", content_type)

        extra_kw.update(urlopen_kw)

>       return self.urlopen(method, url, **extra_kw)

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/_request_methods.py:278:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
method = 'POST'
url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
redirect = True
kw = {'assert_same_host': False, 'body': '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata...', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'}), 'preload_content': True, ...}
u = Url(scheme='https', auth=None, host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', p...443, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices', query=None, fragment=None)
conn =

    def urlopen(  # type: ignore[override]
        self, method: str, url: str, redirect: bool = True, **kw: typing.Any
    ) -> BaseHTTPResponse:
        """
        Same as :meth:`urllib3.HTTPConnectionPool.urlopen`
        with custom cross-host redirect logic and only sends the request-uri
        portion of the ``url``.

        The given ``url`` parameter must be absolute, such that an appropriate
        :class:`urllib3.connectionpool.ConnectionPool` can be chosen for it.
        """
        u = parse_url(url)

        if u.scheme is None:
            warnings.warn(
                "URLs without a scheme (ie 'https://') are deprecated and will raise an error "
                "in a future version of urllib3. To avoid this DeprecationWarning ensure all URLs "
                "start with 'https://' or 'http://'. Read more in this issue: "
                "https://github.com/urllib3/urllib3/issues/2920",
                category=DeprecationWarning,
                stacklevel=2,
            )

        conn = self.connection_from_host(u.host, port=u.port, scheme=u.scheme)

        kw["assert_same_host"] = False
        kw["redirect"] = False

        if "headers" not in kw:
            kw["headers"] = self.headers

        if self._proxy_requires_url_absolute_form(u):
            response = conn.urlopen(method, url, **kw)
        else:
>           response = conn.urlopen(method, u.request_uri, **kw)

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/poolmanager.py:457:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
method = 'POST'
url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io... "requests": {"cpu": "10m", "memory": "256Mi"}}, "storageUri": "gs://kfserving-examples/models/tensorflow/flowers"}}}}'
headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'})
retries = Retry(total=2, connect=None, read=None, redirect=None, status=None)
redirect = False, assert_same_host = False, timeout = None, pool_timeout = None
release_conn = True, chunked = False, body_pos = None, preload_content = True
decode_content = True, response_kw = {}
parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices', query=None, fragment=None)
destination_scheme = None, conn = None, release_this_conn = True
http_tunnel_required = False
err = NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.c...a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")
clean_exit = False

    def urlopen(  # type: ignore[override]
        self,
        method: str,
        url: str,
        body: _TYPE_BODY | None = None,
        headers: typing.Mapping[str, str] | None = None,
        retries: Retry | bool | int | None = None,
        redirect: bool = True,
        assert_same_host: bool = True,
        timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
        pool_timeout: int | None = None,
        release_conn: bool | None = None,
        chunked: bool = False,
        body_pos: _TYPE_BODY_POSITION | None = None,
        preload_content: bool = True,
        decode_content: bool = True,
        **response_kw: typing.Any,
    ) -> BaseHTTPResponse:
        """
        Get a connection from the pool and perform an HTTP request. This is the
        lowest level call for making a request, so you'll need to specify all
        the raw details.

        .. note::

           More commonly, it's appropriate to use a convenience method
           such as :meth:`request`.

        .. note::

           `release_conn` will only behave as expected if
           `preload_content=False` because we want to make
           `preload_content=False` the default behaviour someday soon without
           breaking backwards compatibility.

        :param method:
            HTTP request method (such as GET, POST, PUT, etc.)

        :param url:
            The URL to perform the request on.

        :param body:
            Data to send in the request body, either :class:`str`, :class:`bytes`,
            an iterable of :class:`str`/:class:`bytes`, or a file-like object.

        :param headers:
            Dictionary of custom headers to send, such as User-Agent,
            If-None-Match, etc. If None, pool headers are used. If provided,
            these headers completely replace any pool-specific headers.

        :param retries:
            Configure the number of retries to allow before raising a
            :class:`~urllib3.exceptions.MaxRetryError` exception.

            If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a
            :class:`~urllib3.util.retry.Retry` object for fine-grained control
            over different types of retries.
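The `retries` values documented above (an int, `False`, `None`, or a `Retry` object) are all normalized through `Retry.from_int`, which is also what the `urlopen` source further down does with the `Retry(total=2, ...)` seen in the locals. A small sketch, assuming urllib3 is installed:

```python
from urllib3.util.retry import Retry

# A Retry object gives fine-grained control over retry types.
fine_grained = Retry(total=3, connect=2, redirect=False)

# An int is normalized into a Retry that only counts total attempts,
# which is how urlopen() handles plain-integer inputs internally.
from_int = Retry.from_int(2)

print(fine_grained.total, from_int.total)
```

Passing an existing `Retry` instance through `from_int` returns it unchanged.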
            Pass an integer number to retry connection errors that many times,
            but no other types of errors. Pass zero to never retry.

            If ``False``, then retries are disabled and any exception is raised
            immediately. Also, instead of raising a MaxRetryError on redirects,
            the redirect response will be returned.

        :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.

        :param redirect:
            If True, automatically handle redirects (status codes 301, 302,
            303, 307, 308). Each redirect counts as a retry. Disabling retries
            will disable redirect, too.

        :param assert_same_host:
            If ``True``, will make sure that the host of the pool requests is
            consistent else will raise HostChangedError. When ``False``, you can
            use the pool on an HTTP proxy and request foreign hosts.

        :param timeout:
            If specified, overrides the default timeout for this one
            request. It may be a float (in seconds) or an instance of
            :class:`urllib3.util.Timeout`.

        :param pool_timeout:
            If set and the pool is set to block=True, then this method will
            block for ``pool_timeout`` seconds and raise EmptyPoolError if no
            connection is available within the time period.

        :param bool preload_content:
            If True, the response's body will be preloaded into memory.

        :param bool decode_content:
            If True, will attempt to decode the body based on the
            'content-encoding' header.

        :param release_conn:
            If False, then the urlopen call will not release the connection
            back into the pool once a response is received (but will release if
            you read the entire contents of the response such as when
            `preload_content=True`). This is useful if you're not preloading
            the response's content immediately. You will need to call
            ``r.release_conn()`` on the response ``r`` to return the connection
            back into the pool. If None, it takes the value of ``preload_content``
            which defaults to ``True``.

        :param bool chunked:
            If True, urllib3 will send the body using chunked transfer
            encoding. Otherwise, urllib3 will send the body using the standard
            content-length form. Defaults to False.

        :param int body_pos:
            Position to seek to in file-like body in the event of a retry or
            redirect. Typically this won't need to be set because urllib3 will
            auto-populate the value when needed.
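The `body_pos` parameter above exists because a retry or redirect must resend the body, and a file-like body is consumed by the first attempt. The underlying idea can be sketched with the stdlib alone (the JSON payload below is just an illustration):

```python
import io

body = io.BytesIO(b'{"kind": "InferenceService"}')

# A failed attempt consumes the stream...
first_attempt = body.read()

# ...so before retrying, the stream must be rewound to its recorded
# start position. Recording that position is what body_pos enables.
body.seek(0)
second_attempt = body.read()

print(first_attempt == second_attempt)  # True
```

urllib3 records this position up front (via `set_file_position`) precisely so the rewind can happen automatically on retry.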
        """
        parsed_url = parse_url(url)
        destination_scheme = parsed_url.scheme

        if headers is None:
            headers = self.headers

        if not isinstance(retries, Retry):
            retries = Retry.from_int(retries, redirect=redirect, default=self.retries)

        if release_conn is None:
            release_conn = preload_content

        # Check host
        if assert_same_host and not self.is_same_host(url):
            raise HostChangedError(self, url, retries)

        # Ensure that the URL we're connecting to is properly encoded
        if url.startswith("/"):
            url = to_str(_encode_target(url))
        else:
            url = to_str(parsed_url.url)

        conn = None

        # Track whether `conn` needs to be released before
        # returning/raising/recursing. Update this variable if necessary, and
        # leave `release_conn` constant throughout the function. That way, if
        # the function recurses, the original value of `release_conn` will be
        # passed down into the recursive call, and its value will be respected.
        #
        # See issue #651 [1] for details.
        #
        # [1]
        release_this_conn = release_conn

        http_tunnel_required = connection_requires_http_tunnel(
            self.proxy, self.proxy_config, destination_scheme
        )

        # Merge the proxy headers. Only done when not using HTTP CONNECT. We
        # have to copy the headers dict so we can safely change it without those
        # changes being reflected in anyone else's copy.
        if not http_tunnel_required:
            headers = headers.copy()  # type: ignore[attr-defined]
            headers.update(self.proxy_headers)  # type: ignore[union-attr]

        # Must keep the exception bound to a separate variable or else Python 3
        # complains about UnboundLocalError.
        err = None

        # Keep track of whether we cleanly exited the except block. This
        # ensures we do proper cleanup in finally.
        clean_exit = False

        # Rewind body position, if needed. Record current position
        # for future rewinds in the event of a redirect/retry.
        body_pos = set_file_position(body, body_pos)

        try:
            # Request a connection from the queue.
            timeout_obj = self._get_timeout(timeout)
            conn = self._get_conn(timeout=pool_timeout)

            conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]

            # Is this a closed/new connection that requires CONNECT tunnelling?
            if self.proxy is not None and http_tunnel_required and conn.is_closed:
                try:
                    self._prepare_proxy(conn)
                except (BaseSSLError, OSError, SocketTimeout) as e:
                    self._raise_timeout(
                        err=e, url=self.proxy.url, timeout_value=conn.timeout
                    )
                    raise

            # If we're going to release the connection in ``finally:``, then
            # the response doesn't need to know about the connection. Otherwise
            # it will also try to release it and we'll have a double-release
            # mess.
            response_conn = conn if not release_conn else None

            # Make the request on the HTTPConnection object
            response = self._make_request(
                conn,
                method,
                url,
                timeout=timeout_obj,
                body=body,
                headers=headers,
                chunked=chunked,
                retries=retries,
                response_conn=response_conn,
                preload_content=preload_content,
                decode_content=decode_content,
                **response_kw,
            )

            # Everything went great!
            clean_exit = True

        except EmptyPoolError:
            # Didn't get a connection from the pool, no need to clean up
            clean_exit = True
            release_this_conn = False
            raise

        except (
            TimeoutError,
            HTTPException,
            OSError,
            ProtocolError,
            BaseSSLError,
            SSLError,
            CertificateError,
            ProxyError,
        ) as e:
            # Discard the connection for these exceptions. It will be
            # replaced during the next _get_conn() call.
            clean_exit = False
            new_e: Exception = e
            if isinstance(e, (BaseSSLError, CertificateError)):
                new_e = SSLError(e)
            if isinstance(
                new_e,
                (
                    OSError,
                    NewConnectionError,
                    TimeoutError,
                    SSLError,
                    HTTPException,
                ),
            ) and (conn and conn.proxy and not conn.has_connected_to_proxy):
                new_e = _wrap_proxy_error(new_e, conn.proxy.scheme)
            elif isinstance(new_e, (OSError, HTTPException)):
                new_e = ProtocolError("Connection aborted.", new_e)

            retries = retries.increment(
                method, url, error=new_e, _pool=self, _stacktrace=sys.exc_info()[2]
            )
            retries.sleep()

            # Keep track of the error for the retry warning.
            err = e

        finally:
            if not clean_exit:
                # We hit some kind of exception, handled or otherwise. We need
                # to throw the connection away unless explicitly told not to.
                # Close the connection, set the variable to None, and make sure
                # we put the None back in the pool to avoid leaking it.
                if conn:
                    conn.close()
                    conn = None
                release_this_conn = True

            if release_this_conn:
                # Put the connection back to be reused. If the connection is
                # expired then it will be None, which will get replaced with a
                # fresh connection during _get_conn.
                self._put_conn(conn)

        if not conn:
            # Try again
            log.warning(
                "Retrying (%r) after connection broken by '%r': %s", retries, err, url
            )
>           return self.urlopen(
                method,
                url,
                body,
                headers,
                retries,
                redirect,
                assert_same_host,
                timeout=timeout,
                pool_timeout=pool_timeout,
                release_conn=release_conn,
                chunked=chunked,
                body_pos=body_pos,
                preload_content=preload_content,
                decode_content=decode_content,
                **response_kw,
            )

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:871:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
method = 'POST'
url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io...
"requests": {"cpu": "10m", "memory": "256Mi"}}, "storageUri": "gs://kfserving-examples/models/tensorflow/flowers"}}}}'
headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'})
retries = Retry(total=1, connect=None, read=None, redirect=None, status=None)
redirect = False, assert_same_host = False, timeout = None, pool_timeout = None
release_conn = True, chunked = False, body_pos = None, preload_content = True
decode_content = True, response_kw = {}
parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices', query=None, fragment=None)
destination_scheme = None, conn = None, release_this_conn = True
http_tunnel_required = False
err = NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.c...a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")
clean_exit = False
>           return self.urlopen(
                method,
                url,
                body,
                headers,
                retries,
                redirect,
                assert_same_host,
                timeout=timeout,
                pool_timeout=pool_timeout,
                release_conn=release_conn,
                chunked=chunked,
                body_pos=body_pos,
                preload_content=preload_content,
                decode_content=decode_content,
                **response_kw,
            )

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:871:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
method = 'POST'
url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io...
"requests": {"cpu": "10m", "memory": "256Mi"}}, "storageUri": "gs://kfserving-examples/models/tensorflow/flowers"}}}}'
headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'})
retries = Retry(total=0, connect=None, read=None, redirect=None, status=None)
redirect = False, assert_same_host = False, timeout = None, pool_timeout = None
release_conn = True, chunked = False, body_pos = None, preload_content = True
decode_content = True, response_kw = {}
parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices', query=None, fragment=None)
destination_scheme = None, conn = None, release_this_conn = True
http_tunnel_required = False
err = NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.c...a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")
clean_exit = False
[e2e-predictor] self._put_conn(conn) [e2e-predictor] [e2e-predictor] if not conn: [e2e-predictor] # Try again [e2e-predictor] log.warning( [e2e-predictor] "Retrying (%r) after connection broken by '%r': %s", retries, err, url [e2e-predictor] ) [e2e-predictor] > return self.urlopen( [e2e-predictor] method, [e2e-predictor] url, [e2e-predictor] body, [e2e-predictor] headers, [e2e-predictor] retries, [e2e-predictor] redirect, [e2e-predictor] assert_same_host, [e2e-predictor] timeout=timeout, [e2e-predictor] pool_timeout=pool_timeout, [e2e-predictor] release_conn=release_conn, [e2e-predictor] chunked=chunked, [e2e-predictor] body_pos=body_pos, [e2e-predictor] preload_content=preload_content, [e2e-predictor] decode_content=decode_content, [e2e-predictor] **response_kw, [e2e-predictor] ) [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:871: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] method = 'POST' [e2e-predictor] url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices' [e2e-predictor] body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io... 
"requests": {"cpu": "10m", "memory": "256Mi"}}, "storageUri": "gs://kfserving-examples/models/tensorflow/flowers"}}}}' [e2e-predictor] headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'}) [e2e-predictor] retries = Retry(total=0, connect=None, read=None, redirect=None, status=None) [e2e-predictor] redirect = False, assert_same_host = False, timeout = None, pool_timeout = None [e2e-predictor] release_conn = True, chunked = False, body_pos = None, preload_content = True [e2e-predictor] decode_content = True, response_kw = {} [e2e-predictor] parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices', query=None, fragment=None) [e2e-predictor] destination_scheme = None, conn = None, release_this_conn = True [e2e-predictor] http_tunnel_required = False, err = None, clean_exit = False [e2e-predictor] [e2e-predictor] def urlopen( # type: ignore[override] [e2e-predictor] self, [e2e-predictor] method: str, [e2e-predictor] url: str, [e2e-predictor] body: _TYPE_BODY | None = None, [e2e-predictor] headers: typing.Mapping[str, str] | None = None, [e2e-predictor] retries: Retry | bool | int | None = None, [e2e-predictor] redirect: bool = True, [e2e-predictor] assert_same_host: bool = True, [e2e-predictor] timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, [e2e-predictor] pool_timeout: int | None = None, [e2e-predictor] release_conn: bool | None = None, [e2e-predictor] chunked: bool = False, [e2e-predictor] body_pos: _TYPE_BODY_POSITION | None = None, [e2e-predictor] preload_content: bool = True, [e2e-predictor] decode_content: bool = True, [e2e-predictor] **response_kw: typing.Any, [e2e-predictor] ) -> BaseHTTPResponse: [e2e-predictor] """ [e2e-predictor] Get a connection from the pool and perform an HTTP request. 
This is the [e2e-predictor] lowest level call for making a request, so you'll need to specify all [e2e-predictor] the raw details. [e2e-predictor] [e2e-predictor] .. note:: [e2e-predictor] [e2e-predictor] More commonly, it's appropriate to use a convenience method [e2e-predictor] such as :meth:`request`. [e2e-predictor] [e2e-predictor] .. note:: [e2e-predictor] [e2e-predictor] `release_conn` will only behave as expected if [e2e-predictor] `preload_content=False` because we want to make [e2e-predictor] `preload_content=False` the default behaviour someday soon without [e2e-predictor] breaking backwards compatibility. [e2e-predictor] [e2e-predictor] :param method: [e2e-predictor] HTTP request method (such as GET, POST, PUT, etc.) [e2e-predictor] [e2e-predictor] :param url: [e2e-predictor] The URL to perform the request on. [e2e-predictor] [e2e-predictor] :param body: [e2e-predictor] Data to send in the request body, either :class:`str`, :class:`bytes`, [e2e-predictor] an iterable of :class:`str`/:class:`bytes`, or a file-like object. [e2e-predictor] [e2e-predictor] :param headers: [e2e-predictor] Dictionary of custom headers to send, such as User-Agent, [e2e-predictor] If-None-Match, etc. If None, pool headers are used. If provided, [e2e-predictor] these headers completely replace any pool-specific headers. [e2e-predictor] [e2e-predictor] :param retries: [e2e-predictor] Configure the number of retries to allow before raising a [e2e-predictor] :class:`~urllib3.exceptions.MaxRetryError` exception. [e2e-predictor] [e2e-predictor] If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a [e2e-predictor] :class:`~urllib3.util.retry.Retry` object for fine-grained control [e2e-predictor] over different types of retries. [e2e-predictor] Pass an integer number to retry connection errors that many times, [e2e-predictor] but no other types of errors. Pass zero to never retry. 
[e2e-predictor] [e2e-predictor] If ``False``, then retries are disabled and any exception is raised [e2e-predictor] immediately. Also, instead of raising a MaxRetryError on redirects, [e2e-predictor] the redirect response will be returned. [e2e-predictor] [e2e-predictor] :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. [e2e-predictor] [e2e-predictor] :param redirect: [e2e-predictor] If True, automatically handle redirects (status codes 301, 302, [e2e-predictor] 303, 307, 308). Each redirect counts as a retry. Disabling retries [e2e-predictor] will disable redirect, too. [e2e-predictor] [e2e-predictor] :param assert_same_host: [e2e-predictor] If ``True``, will make sure that the host of the pool requests is [e2e-predictor] consistent else will raise HostChangedError. When ``False``, you can [e2e-predictor] use the pool on an HTTP proxy and request foreign hosts. [e2e-predictor] [e2e-predictor] :param timeout: [e2e-predictor] If specified, overrides the default timeout for this one [e2e-predictor] request. It may be a float (in seconds) or an instance of [e2e-predictor] :class:`urllib3.util.Timeout`. [e2e-predictor] [e2e-predictor] :param pool_timeout: [e2e-predictor] If set and the pool is set to block=True, then this method will [e2e-predictor] block for ``pool_timeout`` seconds and raise EmptyPoolError if no [e2e-predictor] connection is available within the time period. [e2e-predictor] [e2e-predictor] :param bool preload_content: [e2e-predictor] If True, the response's body will be preloaded into memory. [e2e-predictor] [e2e-predictor] :param bool decode_content: [e2e-predictor] If True, will attempt to decode the body based on the [e2e-predictor] 'content-encoding' header. 
[e2e-predictor] [e2e-predictor] :param release_conn: [e2e-predictor] If False, then the urlopen call will not release the connection [e2e-predictor] back into the pool once a response is received (but will release if [e2e-predictor] you read the entire contents of the response such as when [e2e-predictor] `preload_content=True`). This is useful if you're not preloading [e2e-predictor] the response's content immediately. You will need to call [e2e-predictor] ``r.release_conn()`` on the response ``r`` to return the connection [e2e-predictor] back into the pool. If None, it takes the value of ``preload_content`` [e2e-predictor] which defaults to ``True``. [e2e-predictor] [e2e-predictor] :param bool chunked: [e2e-predictor] If True, urllib3 will send the body using chunked transfer [e2e-predictor] encoding. Otherwise, urllib3 will send the body using the standard [e2e-predictor] content-length form. Defaults to False. [e2e-predictor] [e2e-predictor] :param int body_pos: [e2e-predictor] Position to seek to in file-like body in the event of a retry or [e2e-predictor] redirect. Typically this won't need to be set because urllib3 will [e2e-predictor] auto-populate the value when needed. 
        """
        parsed_url = parse_url(url)
        destination_scheme = parsed_url.scheme

        if headers is None:
            headers = self.headers

        if not isinstance(retries, Retry):
            retries = Retry.from_int(retries, redirect=redirect, default=self.retries)

        if release_conn is None:
            release_conn = preload_content

        # Check host
        if assert_same_host and not self.is_same_host(url):
            raise HostChangedError(self, url, retries)

        # Ensure that the URL we're connecting to is properly encoded
        if url.startswith("/"):
            url = to_str(_encode_target(url))
        else:
            url = to_str(parsed_url.url)

        conn = None

        # Track whether `conn` needs to be released before
        # returning/raising/recursing. Update this variable if necessary, and
        # leave `release_conn` constant throughout the function. That way, if
        # the function recurses, the original value of `release_conn` will be
        # passed down into the recursive call, and its value will be respected.
        #
        # See issue #651 [1] for details.
        #
        # [1]
        release_this_conn = release_conn

        http_tunnel_required = connection_requires_http_tunnel(
            self.proxy, self.proxy_config, destination_scheme
        )

        # Merge the proxy headers. Only done when not using HTTP CONNECT. We
        # have to copy the headers dict so we can safely change it without those
        # changes being reflected in anyone else's copy.
        if not http_tunnel_required:
            headers = headers.copy()  # type: ignore[attr-defined]
            headers.update(self.proxy_headers)  # type: ignore[union-attr]

        # Must keep the exception bound to a separate variable or else Python 3
        # complains about UnboundLocalError.
        err = None

        # Keep track of whether we cleanly exited the except block. This
        # ensures we do proper cleanup in finally.
        clean_exit = False

        # Rewind body position, if needed. Record current position
        # for future rewinds in the event of a redirect/retry.
        body_pos = set_file_position(body, body_pos)

        try:
            # Request a connection from the queue.
            timeout_obj = self._get_timeout(timeout)
            conn = self._get_conn(timeout=pool_timeout)

            conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]

            # Is this a closed/new connection that requires CONNECT tunnelling?
            if self.proxy is not None and http_tunnel_required and conn.is_closed:
                try:
                    self._prepare_proxy(conn)
                except (BaseSSLError, OSError, SocketTimeout) as e:
                    self._raise_timeout(
                        err=e, url=self.proxy.url, timeout_value=conn.timeout
                    )
                    raise

            # If we're going to release the connection in ``finally:``, then
            # the response doesn't need to know about the connection. Otherwise
            # it will also try to release it and we'll have a double-release
            # mess.
            response_conn = conn if not release_conn else None

            # Make the request on the HTTPConnection object
            response = self._make_request(
                conn,
                method,
                url,
                timeout=timeout_obj,
                body=body,
                headers=headers,
                chunked=chunked,
                retries=retries,
                response_conn=response_conn,
                preload_content=preload_content,
                decode_content=decode_content,
                **response_kw,
            )

            # Everything went great!
            clean_exit = True

        except EmptyPoolError:
            # Didn't get a connection from the pool, no need to clean up
            clean_exit = True
            release_this_conn = False
            raise

        except (
            TimeoutError,
            HTTPException,
            OSError,
            ProtocolError,
            BaseSSLError,
            SSLError,
            CertificateError,
            ProxyError,
        ) as e:
            # Discard the connection for these exceptions. It will be
            # replaced during the next _get_conn() call.
            clean_exit = False
            new_e: Exception = e
            if isinstance(e, (BaseSSLError, CertificateError)):
                new_e = SSLError(e)
            if isinstance(
                new_e,
                (
                    OSError,
                    NewConnectionError,
                    TimeoutError,
                    SSLError,
                    HTTPException,
                ),
            ) and (conn and conn.proxy and not conn.has_connected_to_proxy):
                new_e = _wrap_proxy_error(new_e, conn.proxy.scheme)
            elif isinstance(new_e, (OSError, HTTPException)):
                new_e = ProtocolError("Connection aborted.", new_e)

>           retries = retries.increment(
                method, url, error=new_e, _pool=self, _stacktrace=sys.exc_info()[2]
            )

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:841:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = Retry(total=0, connect=None, read=None, redirect=None, status=None)
method = 'POST'
url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
response = None
error = NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.c...a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")
_pool =
_stacktrace =

    def increment(
        self,
        method: str | None = None,
        url: str | None = None,
        response: BaseHTTPResponse | None = None,
        error: Exception | None = None,
        _pool: ConnectionPool | None = None,
        _stacktrace: TracebackType | None = None,
    ) -> Self:
        """Return a new Retry object with incremented retry counters.

        :param response: A response object, or None, if the server did not
            return a response.
        :type response: :class:`~urllib3.response.BaseHTTPResponse`
        :param Exception error: An error encountered during the request, or
            None if the response was received successfully.

        :return: A new ``Retry`` object.
        """
        if self.total is False and error:
            # Disabled, indicate to re-raise the error.
            raise reraise(type(error), error, _stacktrace)

        total = self.total
        if total is not None:
            total -= 1

        connect = self.connect
        read = self.read
        redirect = self.redirect
        status_count = self.status
        other = self.other
        cause = "unknown"
        status = None
        redirect_location = None

        if error and self._is_connection_error(error):
            # Connect retry?
            if connect is False:
                raise reraise(type(error), error, _stacktrace)
            elif connect is not None:
                connect -= 1

        elif error and self._is_read_error(error):
            # Read retry?
            if read is False or method is None or not self._is_method_retryable(method):
                raise reraise(type(error), error, _stacktrace)
            elif read is not None:
                read -= 1

        elif error:
            # Other retry?
            if other is not None:
                other -= 1

        elif response and response.get_redirect_location():
            # Redirect retry?
            if redirect is not None:
                redirect -= 1
            cause = "too many redirects"
            response_redirect_location = response.get_redirect_location()
            if response_redirect_location:
                redirect_location = response_redirect_location
            status = response.status

        else:
            # Incrementing because of a server error like a 500 in
            # status_forcelist and the given method is in the allowed_methods
            cause = ResponseError.GENERIC_ERROR
            if response and response.status:
                if status_count is not None:
                    status_count -= 1
                cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
                status = response.status

        history = self.history + (
            RequestHistory(method, url, error, status, redirect_location),
        )

        new_retry = self.new(
            total=total,
            connect=connect,
            read=read,
            redirect=redirect,
            status=status_count,
            other=other,
            history=history,
        )

        if new_retry.is_exhausted():
            reason = error or ResponseError(cause)
>           raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
E           urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Max retries exceeded with url: /apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices (Caused by NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)"))

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError
------------------------------ Captured log call -------------------------------
WARNING  urllib3.connectionpool:connectionpool.py:868 Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")': /apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices
WARNING  urllib3.connectionpool:connectionpool.py:868 Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")': /apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices
WARNING  urllib3.connectionpool:connectionpool.py:868 Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")': /apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices
_________________________________ test_triton __________________________________
[gw0] linux -- Python 3.11.13 /workspace/source/python/kserve/.venv/bin/python

self =

    def _new_conn(self) -> socket.socket:
        """Establish a socket connection and set nodelay settings on it.

        :return: New socket connection.
        """
        try:
>           sock = connection.create_connection(
                (self._dns_host, self.port),
                self.timeout,
                source_address=self.source_address,
                socket_options=self.socket_options,
            )

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connection.py:204:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

address = ('a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', 6443)
timeout = None, source_address = None, socket_options = [(6, 1, 1)]

    def create_connection(
        address: tuple[str, int],
        timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
        source_address: tuple[str, int] | None = None,
        socket_options: _TYPE_SOCKET_OPTIONS | None = None,
    ) -> socket.socket:
        """Connect to *address* and return the socket object.

        Convenience function. Connect to *address* (a 2-tuple ``(host,
        port)``) and return the socket object. Passing the optional
        *timeout* parameter will set the timeout on the socket instance
        before attempting to connect.
If no *timeout* is supplied, the [e2e-predictor] global default timeout setting returned by :func:`socket.getdefaulttimeout` [e2e-predictor] is used. If *source_address* is set it must be a tuple of (host, port) [e2e-predictor] for the socket to bind as a source address before making the connection. [e2e-predictor] An host of '' or port 0 tells the OS to use the default. [e2e-predictor] """ [e2e-predictor] [e2e-predictor] host, port = address [e2e-predictor] if host.startswith("["): [e2e-predictor] host = host.strip("[]") [e2e-predictor] err = None [e2e-predictor] [e2e-predictor] # Using the value from allowed_gai_family() in the context of getaddrinfo lets [e2e-predictor] # us select whether to work with IPv4 DNS records, IPv6 records, or both. [e2e-predictor] # The original create_connection function always returns all records. [e2e-predictor] family = allowed_gai_family() [e2e-predictor] [e2e-predictor] try: [e2e-predictor] host.encode("idna") [e2e-predictor] except UnicodeError: [e2e-predictor] raise LocationParseError(f"'{host}', label empty or too long") from None [e2e-predictor] [e2e-predictor] > for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/util/connection.py:60: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] host = 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' [e2e-predictor] port = 6443, family = [e2e-predictor] type = , proto = 0, flags = 0 [e2e-predictor] [e2e-predictor] def getaddrinfo(host, port, family=0, type=0, proto=0, flags=0): [e2e-predictor] """Resolve host and port into list of address info entries. [e2e-predictor] [e2e-predictor] Translate the host/port argument into a sequence of 5-tuples that contain [e2e-predictor] all the necessary arguments for creating a socket connected to that service. 
[e2e-predictor] host is a domain name, a string representation of an IPv4/v6 address or [e2e-predictor] None. port is a string service name such as 'http', a numeric port number or [e2e-predictor] None. By passing None as the value of host and port, you can pass NULL to [e2e-predictor] the underlying C API. [e2e-predictor] [e2e-predictor] The family, type and proto arguments can be optionally specified in order to [e2e-predictor] narrow the list of addresses returned. Passing zero as a value for each of [e2e-predictor] these arguments selects the full range of results. [e2e-predictor] """ [e2e-predictor] # We override this function since we want to translate the numeric family [e2e-predictor] # and socket type values to enum constants. [e2e-predictor] addrlist = [] [e2e-predictor] > for res in _socket.getaddrinfo(host, port, family, type, proto, flags): [e2e-predictor] E socket.gaierror: [Errno -2] Name or service not known [e2e-predictor] [e2e-predictor] /usr/lib64/python3.11/socket.py:974: gaierror [e2e-predictor] [e2e-predictor] The above exception was the direct cause of the following exception: [e2e-predictor] [e2e-predictor] self = [e2e-predictor] method = 'POST' [e2e-predictor] url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices' [e2e-predictor] body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io...12Mi"}, "requests": {"cpu": "10m", "memory": "128Mi"}}, "storageUri": "gs://kfserving-examples/models/torchscript"}}}}' [e2e-predictor] headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'}) [e2e-predictor] retries = Retry(total=0, connect=None, read=None, redirect=None, status=None) [e2e-predictor] redirect = False, assert_same_host = False, timeout = None, pool_timeout = None [e2e-predictor] release_conn = True, chunked = False, body_pos = None, preload_content = True 
decode_content = True, response_kw = {}
parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices', query=None, fragment=None)
destination_scheme = None, conn = None, release_this_conn = True
http_tunnel_required = False, err = None, clean_exit = False

    def urlopen(  # type: ignore[override]
        self,
        method: str,
        url: str,
        body: _TYPE_BODY | None = None,
        headers: typing.Mapping[str, str] | None = None,
        retries: Retry | bool | int | None = None,
        redirect: bool = True,
        assert_same_host: bool = True,
        timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
        pool_timeout: int | None = None,
        release_conn: bool | None = None,
        chunked: bool = False,
        body_pos: _TYPE_BODY_POSITION | None = None,
        preload_content: bool = True,
        decode_content: bool = True,
        **response_kw: typing.Any,
    ) -> BaseHTTPResponse:
        """
        Get a connection from the pool and perform an HTTP request. This is the
        lowest level call for making a request, so you'll need to specify all
        the raw details.

        .. note::

           More commonly, it's appropriate to use a convenience method
           such as :meth:`request`.

        .. note::

           `release_conn` will only behave as expected if
           `preload_content=False` because we want to make
           `preload_content=False` the default behaviour someday soon without
           breaking backwards compatibility.

        :param method:
            HTTP request method (such as GET, POST, PUT, etc.)

        :param url:
            The URL to perform the request on.

        :param body:
            Data to send in the request body, either :class:`str`, :class:`bytes`,
            an iterable of :class:`str`/:class:`bytes`, or a file-like object.

        :param headers:
            Dictionary of custom headers to send, such as User-Agent,
            If-None-Match, etc. If None, pool headers are used. If provided,
            these headers completely replace any pool-specific headers.

        :param retries:
            Configure the number of retries to allow before raising a
            :class:`~urllib3.exceptions.MaxRetryError` exception.

            If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a
            :class:`~urllib3.util.retry.Retry` object for fine-grained control
            over different types of retries.
            Pass an integer number to retry connection errors that many times,
            but no other types of errors. Pass zero to never retry.

            If ``False``, then retries are disabled and any exception is raised
            immediately. Also, instead of raising a MaxRetryError on redirects,
            the redirect response will be returned.

        :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.

        :param redirect:
            If True, automatically handle redirects (status codes 301, 302,
            303, 307, 308). Each redirect counts as a retry. Disabling retries
            will disable redirect, too.
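The `retries` contract quoted above says an integer count retries connection errors only, while every other exception propagates immediately. A small stand-alone sketch illustrates those documented semantics; this is not urllib3's implementation, and note the log shows `Retry(total=0, ...)`, so the failing request here made exactly one attempt.

```python
def call_with_retries(fn, retries: int = 3):
    # Integer semantics from the docstring above: retry *connection errors*
    # up to `retries` extra times; other exceptions are not caught.
    last_exc = None
    for _ in range(retries + 1):
        try:
            return fn()
        except ConnectionError as exc:  # only connection errors are retried
            last_exc = exc
    raise last_exc


# Fails twice, then succeeds: three attempts fit within retries=2.
attempts = {"n": 0}


def flaky():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ConnectionError("transient")
    return "ok"


print(call_with_retries(flaky, retries=2))  # prints "ok" after 3 attempts
```

With `retries=0` (the configuration visible in the log), the first `ConnectionError` is re-raised immediately, which is why the DNS failure surfaced on the first POST.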
        :param assert_same_host:
            If ``True``, will make sure that the host of the pool requests is
            consistent else will raise HostChangedError. When ``False``, you can
            use the pool on an HTTP proxy and request foreign hosts.

        :param timeout:
            If specified, overrides the default timeout for this one
            request. It may be a float (in seconds) or an instance of
            :class:`urllib3.util.Timeout`.

        :param pool_timeout:
            If set and the pool is set to block=True, then this method will
            block for ``pool_timeout`` seconds and raise EmptyPoolError if no
            connection is available within the time period.

        :param bool preload_content:
            If True, the response's body will be preloaded into memory.

        :param bool decode_content:
            If True, will attempt to decode the body based on the
            'content-encoding' header.

        :param release_conn:
            If False, then the urlopen call will not release the connection
            back into the pool once a response is received (but will release if
            you read the entire contents of the response such as when
            `preload_content=True`). This is useful if you're not preloading
            the response's content immediately. You will need to call
            ``r.release_conn()`` on the response ``r`` to return the connection
            back into the pool. If None, it takes the value of ``preload_content``
            which defaults to ``True``.

        :param bool chunked:
            If True, urllib3 will send the body using chunked transfer
            encoding. Otherwise, urllib3 will send the body using the standard
            content-length form. Defaults to False.

        :param int body_pos:
            Position to seek to in file-like body in the event of a retry or
            redirect. Typically this won't need to be set because urllib3 will
            auto-populate the value when needed.
        """
        parsed_url = parse_url(url)
        destination_scheme = parsed_url.scheme

        if headers is None:
            headers = self.headers

        if not isinstance(retries, Retry):
            retries = Retry.from_int(retries, redirect=redirect, default=self.retries)

        if release_conn is None:
            release_conn = preload_content

        # Check host
        if assert_same_host and not self.is_same_host(url):
            raise HostChangedError(self, url, retries)

        # Ensure that the URL we're connecting to is properly encoded
        if url.startswith("/"):
            url = to_str(_encode_target(url))
        else:
            url = to_str(parsed_url.url)

        conn = None

        # Track whether `conn` needs to be released before
        # returning/raising/recursing. Update this variable if necessary, and
        # leave `release_conn` constant throughout the function. That way, if
        # the function recurses, the original value of `release_conn` will be
        # passed down into the recursive call, and its value will be respected.
        #
        # See issue #651 [1] for details.
        #
        # [1]
        release_this_conn = release_conn

        http_tunnel_required = connection_requires_http_tunnel(
            self.proxy, self.proxy_config, destination_scheme
        )

        # Merge the proxy headers. Only done when not using HTTP CONNECT. We
        # have to copy the headers dict so we can safely change it without those
        # changes being reflected in anyone else's copy.
        if not http_tunnel_required:
            headers = headers.copy()  # type: ignore[attr-defined]
            headers.update(self.proxy_headers)  # type: ignore[union-attr]

        # Must keep the exception bound to a separate variable or else Python 3
        # complains about UnboundLocalError.
        err = None

        # Keep track of whether we cleanly exited the except block. This
        # ensures we do proper cleanup in finally.
        clean_exit = False

        # Rewind body position, if needed. Record current position
        # for future rewinds in the event of a redirect/retry.
        body_pos = set_file_position(body, body_pos)

        try:
            # Request a connection from the queue.
            timeout_obj = self._get_timeout(timeout)
            conn = self._get_conn(timeout=pool_timeout)

            conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]

            # Is this a closed/new connection that requires CONNECT tunnelling?
            if self.proxy is not None and http_tunnel_required and conn.is_closed:
                try:
                    self._prepare_proxy(conn)
                except (BaseSSLError, OSError, SocketTimeout) as e:
                    self._raise_timeout(
                        err=e, url=self.proxy.url, timeout_value=conn.timeout
                    )
                    raise

            # If we're going to release the connection in ``finally:``, then
            # the response doesn't need to know about the connection. Otherwise
            # it will also try to release it and we'll have a double-release
            # mess.
            response_conn = conn if not release_conn else None

            # Make the request on the HTTPConnection object
>           response = self._make_request(
                conn,
                method,
                url,
                timeout=timeout_obj,
                body=body,
                headers=headers,
                chunked=chunked,
                retries=retries,
                response_conn=response_conn,
                preload_content=preload_content,
                decode_content=decode_content,
                **response_kw,
            )

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:787:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
conn =
method = 'POST'
url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io...12Mi"}, "requests": {"cpu": "10m", "memory": "128Mi"}}, "storageUri": "gs://kfserving-examples/models/torchscript"}}}}'
headers = HTTPHeaderDict({'Accept':
'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'})
retries = Retry(total=0, connect=None, read=None, redirect=None, status=None)
timeout = Timeout(connect=None, read=None, total=None), chunked = False
response_conn = None, preload_content = True, decode_content = True
enforce_content_length = True

    def _make_request(
        self,
        conn: BaseHTTPConnection,
        method: str,
        url: str,
        body: _TYPE_BODY | None = None,
        headers: typing.Mapping[str, str] | None = None,
        retries: Retry | None = None,
        timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
        chunked: bool = False,
        response_conn: BaseHTTPConnection | None = None,
        preload_content: bool = True,
        decode_content: bool = True,
        enforce_content_length: bool = True,
    ) -> BaseHTTPResponse:
        """
        Perform a request on a given urllib connection object taken from our
        pool.

        :param conn:
            a connection from one of our connection pools

        :param method:
            HTTP request method (such as GET, POST, PUT, etc.)

        :param url:
            The URL to perform the request on.

        :param body:
            Data to send in the request body, either :class:`str`, :class:`bytes`,
            an iterable of :class:`str`/:class:`bytes`, or a file-like object.

        :param headers:
            Dictionary of custom headers to send, such as User-Agent,
            If-None-Match, etc. If None, pool headers are used. If provided,
            these headers completely replace any pool-specific headers.

        :param retries:
            Configure the number of retries to allow before raising a
            :class:`~urllib3.exceptions.MaxRetryError` exception.

            Pass ``None`` to retry until you receive a response. Pass a
            :class:`~urllib3.util.retry.Retry` object for fine-grained control
            over different types of retries.
            Pass an integer number to retry connection errors that many times,
            but no other types of errors. Pass zero to never retry.

            If ``False``, then retries are disabled and any exception is raised
            immediately. Also, instead of raising a MaxRetryError on redirects,
            the redirect response will be returned.

        :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.

        :param timeout:
            If specified, overrides the default timeout for this one
            request. It may be a float (in seconds) or an instance of
            :class:`urllib3.util.Timeout`.

        :param chunked:
            If True, urllib3 will send the body using chunked transfer
            encoding. Otherwise, urllib3 will send the body using the standard
            content-length form. Defaults to False.

        :param response_conn:
            Set this to ``None`` if you will handle releasing the connection or
            set the connection to have the response release it.

        :param preload_content:
            If True, the response's body will be preloaded during construction.

        :param decode_content:
            If True, will attempt to decode the body based on the
            'content-encoding' header.

        :param enforce_content_length:
            Enforce content length checking. Body returned by server must match
            value of Content-Length header, if present. Otherwise, raise error.
        """
        self.num_requests += 1

        timeout_obj = self._get_timeout(timeout)
        timeout_obj.start_connect()
        conn.timeout = Timeout.resolve_default_timeout(timeout_obj.connect_timeout)

        try:
            # Trigger any extra validation we need to do.
            try:
                self._validate_conn(conn)
            except (SocketTimeout, BaseSSLError) as e:
                self._raise_timeout(err=e, url=url, timeout_value=conn.timeout)
                raise

            # _validate_conn() starts the connection to an HTTPS proxy
            # so we need to wrap errors with 'ProxyError' here too.
        except (
            OSError,
            NewConnectionError,
            TimeoutError,
            BaseSSLError,
            CertificateError,
            SSLError,
        ) as e:
            new_e: Exception = e
            if isinstance(e, (BaseSSLError, CertificateError)):
                new_e = SSLError(e)
            # If the connection didn't successfully connect to it's proxy
            # then there
            if isinstance(
                new_e, (OSError, NewConnectionError, TimeoutError, SSLError)
            ) and (conn and conn.proxy and not conn.has_connected_to_proxy):
                new_e = _wrap_proxy_error(new_e, conn.proxy.scheme)
>           raise new_e

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:488:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
conn =
method = 'POST'
url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io...12Mi"}, "requests": {"cpu": "10m", "memory": "128Mi"}}, "storageUri": "gs://kfserving-examples/models/torchscript"}}}}'
headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'})
retries = Retry(total=0, connect=None, read=None, redirect=None, status=None)
timeout = Timeout(connect=None, read=None, total=None), chunked = False
response_conn = None, preload_content = True, decode_content = True
enforce_content_length = True

    def _make_request(
        self,
        conn: BaseHTTPConnection,
        method: str,
        url: str,
        body: _TYPE_BODY | None = None,
        headers: typing.Mapping[str, str] | None = None,
        retries: Retry | None = None,
        timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
        chunked: bool = False,
        response_conn: BaseHTTPConnection | None = None,
        preload_content: bool = True,
        decode_content: bool = True,
        enforce_content_length: bool = True,
    ) -> BaseHTTPResponse:
        """
        Perform a request on a given urllib connection object taken from our
        pool.

        :param conn:
            a connection from one of our connection pools

        :param method:
            HTTP request method (such as GET, POST, PUT, etc.)

        :param url:
            The URL to perform the request on.

        :param body:
            Data to send in the request body, either :class:`str`, :class:`bytes`,
            an iterable of :class:`str`/:class:`bytes`, or a file-like object.

        :param headers:
            Dictionary of custom headers to send, such as User-Agent,
            If-None-Match, etc. If None, pool headers are used. If provided,
            these headers completely replace any pool-specific headers.

        :param retries:
            Configure the number of retries to allow before raising a
            :class:`~urllib3.exceptions.MaxRetryError` exception.

            Pass ``None`` to retry until you receive a response. Pass a
            :class:`~urllib3.util.retry.Retry` object for fine-grained control
            over different types of retries.
            Pass an integer number to retry connection errors that many times,
            but no other types of errors. Pass zero to never retry.

            If ``False``, then retries are disabled and any exception is raised
            immediately. Also, instead of raising a MaxRetryError on redirects,
            the redirect response will be returned.

        :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.

        :param timeout:
            If specified, overrides the default timeout for this one
            request. It may be a float (in seconds) or an instance of
            :class:`urllib3.util.Timeout`.

        :param chunked:
            If True, urllib3 will send the body using chunked transfer
            encoding. Otherwise, urllib3 will send the body using the standard
            content-length form. Defaults to False.

        :param response_conn:
            Set this to ``None`` if you will handle releasing the connection or
            set the connection to have the response release it.

        :param preload_content:
            If True, the response's body will be preloaded during construction.

        :param decode_content:
            If True, will attempt to decode the body based on the
            'content-encoding' header.

        :param enforce_content_length:
            Enforce content length checking. Body returned by server must match
            value of Content-Length header, if present. Otherwise, raise error.
[e2e-predictor] """ [e2e-predictor] self.num_requests += 1 [e2e-predictor] [e2e-predictor] timeout_obj = self._get_timeout(timeout) [e2e-predictor] timeout_obj.start_connect() [e2e-predictor] conn.timeout = Timeout.resolve_default_timeout(timeout_obj.connect_timeout) [e2e-predictor] [e2e-predictor] try: [e2e-predictor] # Trigger any extra validation we need to do. [e2e-predictor] try: [e2e-predictor] > self._validate_conn(conn) [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:464: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] conn = [e2e-predictor] [e2e-predictor] def _validate_conn(self, conn: BaseHTTPConnection) -> None: [e2e-predictor] """ [e2e-predictor] Called right before a request is made, after the socket is created. [e2e-predictor] """ [e2e-predictor] super()._validate_conn(conn) [e2e-predictor] [e2e-predictor] # Force connect early to allow us to validate the connection. [e2e-predictor] if conn.is_closed: [e2e-predictor] > conn.connect() [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:1093: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] [e2e-predictor] def connect(self) -> None: [e2e-predictor] # Today we don't need to be doing this step before the /actual/ socket [e2e-predictor] # connection, however in the future we'll need to decide whether to [e2e-predictor] # create a new socket or re-use an existing "shared" socket as a part [e2e-predictor] # of the HTTP/2 handshake dance. 
[e2e-predictor] if self._tunnel_host is not None and self._tunnel_port is not None: [e2e-predictor] probe_http2_host = self._tunnel_host [e2e-predictor] probe_http2_port = self._tunnel_port [e2e-predictor] else: [e2e-predictor] probe_http2_host = self.host [e2e-predictor] probe_http2_port = self.port [e2e-predictor] [e2e-predictor] # Check if the target origin supports HTTP/2. [e2e-predictor] # If the value comes back as 'None' it means that the current thread [e2e-predictor] # is probing for HTTP/2 support. Otherwise, we're waiting for another [e2e-predictor] # probe to complete, or we get a value right away. [e2e-predictor] target_supports_http2: bool | None [e2e-predictor] if "h2" in ssl_.ALPN_PROTOCOLS: [e2e-predictor] target_supports_http2 = http2_probe.acquire_and_get( [e2e-predictor] host=probe_http2_host, port=probe_http2_port [e2e-predictor] ) [e2e-predictor] else: [e2e-predictor] # If HTTP/2 isn't going to be offered it doesn't matter if [e2e-predictor] # the target supports HTTP/2. Don't want to make a probe. [e2e-predictor] target_supports_http2 = False [e2e-predictor] [e2e-predictor] if self._connect_callback is not None: [e2e-predictor] self._connect_callback( [e2e-predictor] "before connect", [e2e-predictor] thread_id=threading.get_ident(), [e2e-predictor] target_supports_http2=target_supports_http2, [e2e-predictor] ) [e2e-predictor] [e2e-predictor] try: [e2e-predictor] sock: socket.socket | ssl.SSLSocket [e2e-predictor] > self.sock = sock = self._new_conn() [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connection.py:759: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] [e2e-predictor] def _new_conn(self) -> socket.socket: [e2e-predictor] """Establish a socket connection and set nodelay settings on it. [e2e-predictor] [e2e-predictor] :return: New socket connection. 
[e2e-predictor] """ [e2e-predictor] try: [e2e-predictor] sock = connection.create_connection( [e2e-predictor] (self._dns_host, self.port), [e2e-predictor] self.timeout, [e2e-predictor] source_address=self.source_address, [e2e-predictor] socket_options=self.socket_options, [e2e-predictor] ) [e2e-predictor] except socket.gaierror as e: [e2e-predictor] > raise NameResolutionError(self.host, self, e) from e [e2e-predictor] E urllib3.exceptions.NameResolutionError: HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known) [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connection.py:211: NameResolutionError [e2e-predictor] [e2e-predictor] The above exception was the direct cause of the following exception: [e2e-predictor] [e2e-predictor] rest_v2_client = [e2e-predictor] [e2e-predictor] @pytest.mark.predictor [e2e-predictor] @pytest.mark.path_based_routing [e2e-predictor] @pytest.mark.asyncio(scope="session") [e2e-predictor] async def test_triton(rest_v2_client): [e2e-predictor] service_name = "isvc-triton" [e2e-predictor] predictor = V1beta1PredictorSpec( [e2e-predictor] min_replicas=1, [e2e-predictor] triton=V1beta1TritonSpec( [e2e-predictor] storage_uri="gs://kfserving-examples/models/torchscript", [e2e-predictor] resources=V1ResourceRequirements( [e2e-predictor] requests={"cpu": "10m", "memory": "128Mi"}, [e2e-predictor] limits={"cpu": "100m", "memory": "512Mi"}, [e2e-predictor] ), [e2e-predictor] ), [e2e-predictor] ) [e2e-predictor] [e2e-predictor] isvc = V1beta1InferenceService( [e2e-predictor] api_version=constants.KSERVE_V1BETA1, [e2e-predictor] kind=constants.KSERVE_KIND_INFERENCESERVICE, [e2e-predictor] metadata=client.V1ObjectMeta( [e2e-predictor] name=service_name, [e2e-predictor] namespace=KSERVE_TEST_NAMESPACE, 
                labels={
                    constants.KSERVE_LABEL_NETWORKING_VISIBILITY: constants.KSERVE_LABEL_NETWORKING_VISIBILITY_EXPOSED,
                },
            ),
            spec=V1beta1InferenceServiceSpec(predictor=predictor),
        )

        kserve_client = KServeClient(
            config_file=os.environ.get("KUBECONFIG", "~/.kube/config")
        )
>       kserve_client.create(isvc)

predictor/test_triton.py:65:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
inferenceservice = {'api_version': 'serving.kserve.io/v1beta1',
 'kind': 'InferenceService',
 'metadata': {'annotations': None,
 ... 'worker_spec': None,
              'xgboost': None},
          'transformer': None},
 'status': None}
namespace = 'kserve-ci-e2e-test', watch = False, timeout_seconds = 600

    def create(
        self, inferenceservice, namespace=None, watch=False, timeout_seconds=600
    ):  # pylint:disable=inconsistent-return-statements
        """
        Create the inference service
        :param inferenceservice: inference service object
        :param namespace: defaults to current or default namespace
        :param watch: True to watch the created service until timeout elapsed or status is ready
        :param timeout_seconds: timeout seconds for watch, default to 600s
        :return: created inference service
        """

        version = inferenceservice.api_version.split("/")[1]

        if namespace is None:
            namespace = utils.get_isvc_namespace(inferenceservice)

        try:
>           outputs = self.api_instance.create_namespaced_custom_object(
                constants.KSERVE_GROUP,
                version,
                namespace,
                constants.KSERVE_PLURAL_INFERENCESERVICE,
                inferenceservice,
            )

../../python/kserve/kserve/api/kserve_client.py:145:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
group = 'serving.kserve.io', version = 'v1beta1'
namespace = 'kserve-ci-e2e-test', plural = 'inferenceservices'
body = {'api_version': 'serving.kserve.io/v1beta1',
 'kind': 'InferenceService',
 'metadata': {'annotations': None,
 ... 'worker_spec': None,
              'xgboost': None},
          'transformer': None},
 'status': None}
kwargs = {'_return_http_data_only': True}

    def create_namespaced_custom_object(self, group, version, namespace, plural, body, **kwargs):  # noqa: E501
        """create_namespaced_custom_object  # noqa: E501

        Creates a namespace scoped Custom object  # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.create_namespaced_custom_object(group, version, namespace, plural, body, async_req=True)
        >>> result = thread.get()

        :param async_req bool: execute request asynchronously
        :param str group: The custom resource's group name (required)
        :param str version: The custom resource's version (required)
        :param str namespace: The custom resource's namespace (required)
        :param str plural: The custom resource's plural name. For TPRs this would be lowercase plural kind. (required)
        :param object body: The JSON schema of the Resource to create. (required)
        :param str pretty: If 'true', then the output is pretty printed.
        :param str dry_run: When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed
        :param str field_manager: fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint.
        :param str field_validation: fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. (optional)
        :param _preload_content: if False, the urllib3.HTTPResponse object will
                                 be returned without reading/decoding response
                                 data. Default is True.
        :param _request_timeout: timeout setting for this request. If one
                                 number provided, it will be total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
        :return: object
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
>       return self.create_namespaced_custom_object_with_http_info(group, version, namespace, plural, body, **kwargs)  # noqa: E501

../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api/custom_objects_api.py:231:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
group = 'serving.kserve.io', version = 'v1beta1'
namespace = 'kserve-ci-e2e-test', plural = 'inferenceservices'
body = {'api_version': 'serving.kserve.io/v1beta1',
 'kind': 'InferenceService',
 'metadata': {'annotations': None,
 ... 'worker_spec': None,
              'xgboost': None},
          'transformer': None},
 'status': None}
kwargs = {'_return_http_data_only': True}
local_var_params = {'_return_http_data_only': True, 'all_params': ['group', 'version', 'namespace', 'plural', 'body', 'pretty', ...], 'au... 'worker_spec': None,
 'xgboost': None},
          'transformer': None},
 'status': None}, ...}
all_params = ['group', 'version', 'namespace', 'plural', 'body', 'pretty', ...]
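The `_request_timeout` convention described in the docstring above (a single number is a total timeout, a 2-tuple is (connection, read)) is easy to get wrong. A minimal normalizer sketches the rule; the function is illustrative, not part of the kubernetes client.

```python
def normalize_request_timeout(value):
    # Single number -> total request timeout; 2-tuple -> (connection, read),
    # matching the `_request_timeout` convention documented above.
    if isinstance(value, (int, float)) and not isinstance(value, bool):
        return {"total": float(value)}
    if isinstance(value, tuple) and len(value) == 2:
        connect, read = value
        return {"connect": float(connect), "read": float(read)}
    raise TypeError("expected a number or a (connection, read) tuple")


print(normalize_request_timeout(30))       # {'total': 30.0}
print(normalize_request_timeout((5, 60)))  # {'connect': 5.0, 'read': 60.0}
```

Note that no `_request_timeout` was passed in the failing call, so the request used the defaults visible in the dump (`Timeout(connect=None, read=None, total=None)`).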
key = '_return_http_data_only', val = True, collection_formats = {}
path_params = {'group': 'serving.kserve.io', 'namespace': 'kserve-ci-e2e-test', 'plural': 'inferenceservices', 'version': 'v1beta1'}
query_params = []

    def create_namespaced_custom_object_with_http_info(self, group, version, namespace, plural, body, **kwargs):  # noqa: E501
        """create_namespaced_custom_object  # noqa: E501

        Creates a namespace scoped Custom object  # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.create_namespaced_custom_object_with_http_info(group, version, namespace, plural, body, async_req=True)
        >>> result = thread.get()

        :param async_req bool: execute request asynchronously
        :param str group: The custom resource's group name (required)
        :param str version: The custom resource's version (required)
        :param str namespace: The custom resource's namespace (required)
        :param str plural: The custom resource's plural name. For TPRs this would be lowercase plural kind. (required)
        :param object body: The JSON schema of the Resource to create. (required)
        :param str pretty: If 'true', then the output is pretty printed.
        :param str dry_run: When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed
        :param str field_manager: fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint.
        :param str field_validation: fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. (optional)
        :param _return_http_data_only: response data without head status code
                                       and headers
        :param _preload_content: if False, the urllib3.HTTPResponse object will
                                 be returned without reading/decoding response
                                 data. Default is True.
        :param _request_timeout: timeout setting for this request. If one
                                 number provided, it will be total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
        :return: tuple(object, status_code(int), headers(HTTPHeaderDict))
                 If the method is called asynchronously,
                 returns the request thread.
[e2e-predictor] """ [e2e-predictor] [e2e-predictor] local_var_params = locals() [e2e-predictor] [e2e-predictor] all_params = [ [e2e-predictor] 'group', [e2e-predictor] 'version', [e2e-predictor] 'namespace', [e2e-predictor] 'plural', [e2e-predictor] 'body', [e2e-predictor] 'pretty', [e2e-predictor] 'dry_run', [e2e-predictor] 'field_manager', [e2e-predictor] 'field_validation' [e2e-predictor] ] [e2e-predictor] all_params.extend( [e2e-predictor] [ [e2e-predictor] 'async_req', [e2e-predictor] '_return_http_data_only', [e2e-predictor] '_preload_content', [e2e-predictor] '_request_timeout' [e2e-predictor] ] [e2e-predictor] ) [e2e-predictor] [e2e-predictor] for key, val in six.iteritems(local_var_params['kwargs']): [e2e-predictor] if key not in all_params: [e2e-predictor] raise ApiTypeError( [e2e-predictor] "Got an unexpected keyword argument '%s'" [e2e-predictor] " to method create_namespaced_custom_object" % key [e2e-predictor] ) [e2e-predictor] local_var_params[key] = val [e2e-predictor] del local_var_params['kwargs'] [e2e-predictor] # verify the required parameter 'group' is set [e2e-predictor] if self.api_client.client_side_validation and ('group' not in local_var_params or # noqa: E501 [e2e-predictor] local_var_params['group'] is None): # noqa: E501 [e2e-predictor] raise ApiValueError("Missing the required parameter `group` when calling `create_namespaced_custom_object`") # noqa: E501 [e2e-predictor] # verify the required parameter 'version' is set [e2e-predictor] if self.api_client.client_side_validation and ('version' not in local_var_params or # noqa: E501 [e2e-predictor] local_var_params['version'] is None): # noqa: E501 [e2e-predictor] raise ApiValueError("Missing the required parameter `version` when calling `create_namespaced_custom_object`") # noqa: E501 [e2e-predictor] # verify the required parameter 'namespace' is set [e2e-predictor] if self.api_client.client_side_validation and ('namespace' not in local_var_params or # noqa: E501 [e2e-predictor] 
local_var_params['namespace'] is None): # noqa: E501 [e2e-predictor] raise ApiValueError("Missing the required parameter `namespace` when calling `create_namespaced_custom_object`") # noqa: E501 [e2e-predictor] # verify the required parameter 'plural' is set [e2e-predictor] if self.api_client.client_side_validation and ('plural' not in local_var_params or # noqa: E501 [e2e-predictor] local_var_params['plural'] is None): # noqa: E501 [e2e-predictor] raise ApiValueError("Missing the required parameter `plural` when calling `create_namespaced_custom_object`") # noqa: E501 [e2e-predictor] # verify the required parameter 'body' is set [e2e-predictor] if self.api_client.client_side_validation and ('body' not in local_var_params or # noqa: E501 [e2e-predictor] local_var_params['body'] is None): # noqa: E501 [e2e-predictor] raise ApiValueError("Missing the required parameter `body` when calling `create_namespaced_custom_object`") # noqa: E501 [e2e-predictor] [e2e-predictor] collection_formats = {} [e2e-predictor] [e2e-predictor] path_params = {} [e2e-predictor] if 'group' in local_var_params: [e2e-predictor] path_params['group'] = local_var_params['group'] # noqa: E501 [e2e-predictor] if 'version' in local_var_params: [e2e-predictor] path_params['version'] = local_var_params['version'] # noqa: E501 [e2e-predictor] if 'namespace' in local_var_params: [e2e-predictor] path_params['namespace'] = local_var_params['namespace'] # noqa: E501 [e2e-predictor] if 'plural' in local_var_params: [e2e-predictor] path_params['plural'] = local_var_params['plural'] # noqa: E501 [e2e-predictor] [e2e-predictor] query_params = [] [e2e-predictor] if 'pretty' in local_var_params and local_var_params['pretty'] is not None: # noqa: E501 [e2e-predictor] query_params.append(('pretty', local_var_params['pretty'])) # noqa: E501 [e2e-predictor] if 'dry_run' in local_var_params and local_var_params['dry_run'] is not None: # noqa: E501 [e2e-predictor] query_params.append(('dryRun', 
local_var_params['dry_run'])) # noqa: E501 [e2e-predictor] if 'field_manager' in local_var_params and local_var_params['field_manager'] is not None: # noqa: E501 [e2e-predictor] query_params.append(('fieldManager', local_var_params['field_manager'])) # noqa: E501 [e2e-predictor] if 'field_validation' in local_var_params and local_var_params['field_validation'] is not None: # noqa: E501 [e2e-predictor] query_params.append(('fieldValidation', local_var_params['field_validation'])) # noqa: E501 [e2e-predictor] [e2e-predictor] header_params = {} [e2e-predictor] [e2e-predictor] form_params = [] [e2e-predictor] local_var_files = {} [e2e-predictor] [e2e-predictor] body_params = None [e2e-predictor] if 'body' in local_var_params: [e2e-predictor] body_params = local_var_params['body'] [e2e-predictor] # HTTP header `Accept` [e2e-predictor] header_params['Accept'] = self.api_client.select_header_accept( [e2e-predictor] ['application/json']) # noqa: E501 [e2e-predictor] [e2e-predictor] # Authentication setting [e2e-predictor] auth_settings = ['BearerToken'] # noqa: E501 [e2e-predictor] [e2e-predictor] > return self.api_client.call_api( [e2e-predictor] '/apis/{group}/{version}/namespaces/{namespace}/{plural}', 'POST', [e2e-predictor] path_params, [e2e-predictor] query_params, [e2e-predictor] header_params, [e2e-predictor] body=body_params, [e2e-predictor] post_params=form_params, [e2e-predictor] files=local_var_files, [e2e-predictor] response_type='object', # noqa: E501 [e2e-predictor] auth_settings=auth_settings, [e2e-predictor] async_req=local_var_params.get('async_req'), [e2e-predictor] _return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501 [e2e-predictor] _preload_content=local_var_params.get('_preload_content', True), [e2e-predictor] _request_timeout=local_var_params.get('_request_timeout'), [e2e-predictor] collection_formats=collection_formats) [e2e-predictor] [e2e-predictor] 
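Editor's note: the frame above hands `call_api` the endpoint template `/apis/{group}/{version}/namespaces/{namespace}/{plural}` plus the `path_params` dict shown in the locals. The substitution that turns that template into the concrete URL path happens deeper in `ApiClient.__call_api`. A minimal, dependency-free sketch of that substitution (the helper name `build_resource_path` is ours, not the client's; the real code quotes with `config.safe_chars_for_path_param` rather than `safe=""`):

```python
from urllib.parse import quote

def build_resource_path(template: str, path_params: dict) -> str:
    """Illustrative stand-in for the '{placeholder}' substitution done in
    kubernetes.client.ApiClient.__call_api: each placeholder is replaced
    with the URL-quoted parameter value."""
    for key, value in path_params.items():
        # safe="" is a simplification: no character escapes quoting, so a
        # value cannot break out of its path segment.
        template = template.replace("{%s}" % key, quote(str(value), safe=""))
    return template

path = build_resource_path(
    "/apis/{group}/{version}/namespaces/{namespace}/{plural}",
    {
        "group": "serving.kserve.io",
        "version": "v1beta1",
        "namespace": "kserve-ci-e2e-test",
        "plural": "inferenceservices",
    },
)
print(path)
# -> /apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices
```

This matches the resolved `resource_path` that appears in the `__call_api` locals further down the traceback.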
../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api/custom_objects_api.py:354:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
resource_path = '/apis/{group}/{version}/namespaces/{namespace}/{plural}'
method = 'POST'
path_params = {'group': 'serving.kserve.io', 'namespace': 'kserve-ci-e2e-test', 'plural': 'inferenceservices', 'version': 'v1beta1'}
query_params = []
header_params = {'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
body = {'api_version': 'serving.kserve.io/v1beta1',
 'kind': 'InferenceService',
 'metadata': {'annotations': None,
 ... 'worker_spec': None,
 'xgboost': None},
 'transformer': None},
 'status': None}
post_params = [], files = {}, response_type = 'object'
auth_settings = ['BearerToken'], async_req = None, _return_http_data_only = True
collection_formats = {}, _preload_content = True, _request_timeout = None
_host = None

    def call_api(self, resource_path, method,
                 path_params=None, query_params=None, header_params=None,
                 body=None, post_params=None, files=None,
                 response_type=None, auth_settings=None, async_req=None,
                 _return_http_data_only=None, collection_formats=None,
                 _preload_content=True, _request_timeout=None, _host=None):
        """Makes the HTTP request (synchronous) and returns deserialized data.

        To make an async_req request, set the async_req parameter.

        :param resource_path: Path to method endpoint.
        :param method: Method to call.
        :param path_params: Path parameters in the url.
        :param query_params: Query parameters in the url.
        :param header_params: Header parameters to be
            placed in the request header.
        :param body: Request body.
        :param post_params dict: Request post form parameters,
            for `application/x-www-form-urlencoded`, `multipart/form-data`.
        :param auth_settings list: Auth Settings names for the request.
        :param response: Response data type.
        :param files dict: key -> filename, value -> filepath,
            for `multipart/form-data`.
        :param async_req bool: execute request asynchronously
        :param _return_http_data_only: response data without head status code
                                       and headers
        :param collection_formats: dict of collection formats for path, query,
            header, and post parameters.
        :param _preload_content: if False, the urllib3.HTTPResponse object will
                                 be returned without reading/decoding response
                                 data. Default is True.
        :param _request_timeout: timeout setting for this request. If one
                                 number provided, it will be total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
        :return:
            If async_req parameter is True,
            the request will be called asynchronously.
            The method will return the request thread.
            If parameter async_req is False or missing,
            then the method will return the response directly.
[e2e-predictor] """ [e2e-predictor] if not async_req: [e2e-predictor] > return self.__call_api(resource_path, method, [e2e-predictor] path_params, query_params, header_params, [e2e-predictor] body, post_params, files, [e2e-predictor] response_type, auth_settings, [e2e-predictor] _return_http_data_only, collection_formats, [e2e-predictor] _preload_content, _request_timeout, _host) [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api_client.py:348: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] resource_path = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices' [e2e-predictor] method = 'POST' [e2e-predictor] path_params = [('group', 'serving.kserve.io'), ('version', 'v1beta1'), ('namespace', 'kserve-ci-e2e-test'), ('plural', 'inferenceservices')] [e2e-predictor] query_params = [] [e2e-predictor] header_params = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'} [e2e-predictor] body = {'apiVersion': 'serving.kserve.io/v1beta1', 'kind': 'InferenceService', 'metadata': {'labels': {'networking.kserve.io/...512Mi'}, 'requests': {'cpu': '10m', 'memory': '128Mi'}}, 'storageUri': 'gs://kfserving-examples/models/torchscript'}}}} [e2e-predictor] post_params = [], files = {}, response_type = 'object' [e2e-predictor] auth_settings = ['BearerToken'], _return_http_data_only = True [e2e-predictor] collection_formats = {}, _preload_content = True, _request_timeout = None [e2e-predictor] _host = None [e2e-predictor] [e2e-predictor] def __call_api( [e2e-predictor] self, resource_path, method, path_params=None, [e2e-predictor] query_params=None, header_params=None, body=None, post_params=None, [e2e-predictor] files=None, response_type=None, auth_settings=None, [e2e-predictor] _return_http_data_only=None, collection_formats=None, 
[e2e-predictor] _preload_content=True, _request_timeout=None, _host=None): [e2e-predictor] [e2e-predictor] config = self.configuration [e2e-predictor] [e2e-predictor] # header parameters [e2e-predictor] header_params = header_params or {} [e2e-predictor] header_params.update(self.default_headers) [e2e-predictor] if self.cookie: [e2e-predictor] header_params['Cookie'] = self.cookie [e2e-predictor] if header_params: [e2e-predictor] header_params = self.sanitize_for_serialization(header_params) [e2e-predictor] header_params = dict(self.parameters_to_tuples(header_params, [e2e-predictor] collection_formats)) [e2e-predictor] [e2e-predictor] # path parameters [e2e-predictor] if path_params: [e2e-predictor] path_params = self.sanitize_for_serialization(path_params) [e2e-predictor] path_params = self.parameters_to_tuples(path_params, [e2e-predictor] collection_formats) [e2e-predictor] for k, v in path_params: [e2e-predictor] # specified safe chars, encode everything [e2e-predictor] resource_path = resource_path.replace( [e2e-predictor] '{%s}' % k, [e2e-predictor] quote(str(v), safe=config.safe_chars_for_path_param) [e2e-predictor] ) [e2e-predictor] [e2e-predictor] # query parameters [e2e-predictor] if query_params: [e2e-predictor] query_params = self.sanitize_for_serialization(query_params) [e2e-predictor] query_params = self.parameters_to_tuples(query_params, [e2e-predictor] collection_formats) [e2e-predictor] [e2e-predictor] # post parameters [e2e-predictor] if post_params or files: [e2e-predictor] post_params = post_params if post_params else [] [e2e-predictor] post_params = self.sanitize_for_serialization(post_params) [e2e-predictor] post_params = self.parameters_to_tuples(post_params, [e2e-predictor] collection_formats) [e2e-predictor] post_params.extend(self.files_parameters(files)) [e2e-predictor] [e2e-predictor] # auth setting [e2e-predictor] self.update_params_for_auth(header_params, query_params, auth_settings) [e2e-predictor] [e2e-predictor] # body 
[e2e-predictor] if body: [e2e-predictor] body = self.sanitize_for_serialization(body) [e2e-predictor] [e2e-predictor] # request url [e2e-predictor] if _host is None: [e2e-predictor] url = self.configuration.host + resource_path [e2e-predictor] else: [e2e-predictor] # use server/host defined in path or operation instead [e2e-predictor] url = _host + resource_path [e2e-predictor] [e2e-predictor] # perform request and return response [e2e-predictor] > response_data = self.request( [e2e-predictor] method, url, query_params=query_params, headers=header_params, [e2e-predictor] post_params=post_params, body=body, [e2e-predictor] _preload_content=_preload_content, [e2e-predictor] _request_timeout=_request_timeout) [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api_client.py:180: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] method = 'POST' [e2e-predictor] url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices' [e2e-predictor] query_params = [] [e2e-predictor] headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'} [e2e-predictor] post_params = [] [e2e-predictor] body = {'apiVersion': 'serving.kserve.io/v1beta1', 'kind': 'InferenceService', 'metadata': {'labels': {'networking.kserve.io/...512Mi'}, 'requests': {'cpu': '10m', 'memory': '128Mi'}}, 'storageUri': 'gs://kfserving-examples/models/torchscript'}}}} [e2e-predictor] _preload_content = True, _request_timeout = None [e2e-predictor] [e2e-predictor] def request(self, method, url, query_params=None, headers=None, [e2e-predictor] post_params=None, body=None, _preload_content=True, [e2e-predictor] _request_timeout=None): [e2e-predictor] """Makes the HTTP request using 
RESTClient.""" [e2e-predictor] if method == "GET": [e2e-predictor] return self.rest_client.GET(url, [e2e-predictor] query_params=query_params, [e2e-predictor] _preload_content=_preload_content, [e2e-predictor] _request_timeout=_request_timeout, [e2e-predictor] headers=headers) [e2e-predictor] elif method == "HEAD": [e2e-predictor] return self.rest_client.HEAD(url, [e2e-predictor] query_params=query_params, [e2e-predictor] _preload_content=_preload_content, [e2e-predictor] _request_timeout=_request_timeout, [e2e-predictor] headers=headers) [e2e-predictor] elif method == "OPTIONS": [e2e-predictor] return self.rest_client.OPTIONS(url, [e2e-predictor] query_params=query_params, [e2e-predictor] headers=headers, [e2e-predictor] _preload_content=_preload_content, [e2e-predictor] _request_timeout=_request_timeout) [e2e-predictor] elif method == "POST": [e2e-predictor] > return self.rest_client.POST(url, [e2e-predictor] query_params=query_params, [e2e-predictor] headers=headers, [e2e-predictor] post_params=post_params, [e2e-predictor] _preload_content=_preload_content, [e2e-predictor] _request_timeout=_request_timeout, [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api_client.py:391: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices' [e2e-predictor] headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'} [e2e-predictor] query_params = [], post_params = [] [e2e-predictor] body = {'apiVersion': 'serving.kserve.io/v1beta1', 'kind': 'InferenceService', 'metadata': {'labels': {'networking.kserve.io/...512Mi'}, 'requests': {'cpu': '10m', 'memory': '128Mi'}}, 'storageUri': 
'gs://kfserving-examples/models/torchscript'}}}} [e2e-predictor] _preload_content = True, _request_timeout = None [e2e-predictor] [e2e-predictor] def POST(self, url, headers=None, query_params=None, post_params=None, [e2e-predictor] body=None, _preload_content=True, _request_timeout=None): [e2e-predictor] > return self.request("POST", url, [e2e-predictor] headers=headers, [e2e-predictor] query_params=query_params, [e2e-predictor] post_params=post_params, [e2e-predictor] _preload_content=_preload_content, [e2e-predictor] _request_timeout=_request_timeout, [e2e-predictor] body=body) [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/rest.py:279: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] method = 'POST' [e2e-predictor] url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices' [e2e-predictor] query_params = [] [e2e-predictor] headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'} [e2e-predictor] body = {'apiVersion': 'serving.kserve.io/v1beta1', 'kind': 'InferenceService', 'metadata': {'labels': {'networking.kserve.io/...512Mi'}, 'requests': {'cpu': '10m', 'memory': '128Mi'}}, 'storageUri': 'gs://kfserving-examples/models/torchscript'}}}} [e2e-predictor] post_params = {}, _preload_content = True, _request_timeout = None [e2e-predictor] [e2e-predictor] def request(self, method, url, query_params=None, headers=None, [e2e-predictor] body=None, post_params=None, _preload_content=True, [e2e-predictor] _request_timeout=None): [e2e-predictor] """Perform requests. 
        :param method: http request method
        :param url: http request url
        :param query_params: query parameters in the url
        :param headers: http request headers
        :param body: request json body, for `application/json`
        :param post_params: request post parameters,
                            `application/x-www-form-urlencoded`
                            and `multipart/form-data`
        :param _preload_content: if False, the urllib3.HTTPResponse object will
                                 be returned without reading/decoding response
                                 data. Default is True.
        :param _request_timeout: timeout setting for this request. If one
                                 number provided, it will be total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
        """
        method = method.upper()
        assert method in ['GET', 'HEAD', 'DELETE', 'POST', 'PUT',
                          'PATCH', 'OPTIONS']

        if post_params and body:
            raise ApiValueError(
                "body parameter cannot be used with post_params parameter."
            )

        post_params = post_params or {}
        headers = headers or {}

        timeout = None
        if _request_timeout:
            if isinstance(_request_timeout, (int, ) if six.PY3 else (int, long)):  # noqa: E501,F821
                timeout = urllib3.Timeout(total=_request_timeout)
            elif (isinstance(_request_timeout, tuple) and
                  len(_request_timeout) == 2):
                timeout = urllib3.Timeout(
                    connect=_request_timeout[0], read=_request_timeout[1])

        if 'Content-Type' not in headers:
            headers['Content-Type'] = 'application/json'

        try:
            # For `POST`, `PUT`, `PATCH`, `OPTIONS`, `DELETE`
            if method in ['POST', 'PUT', 'PATCH', 'OPTIONS', 'DELETE']:
                if query_params:
                    url += '?' + urlencode(query_params)
                if (re.search('json', headers['Content-Type'], re.IGNORECASE) or
                        headers['Content-Type'] == 'application/apply-patch+yaml'):
                    if headers['Content-Type'] == 'application/json-patch+json':
                        if not isinstance(body, list):
                            headers['Content-Type'] = \
                                'application/strategic-merge-patch+json'
                    request_body = None
                    if body is not None:
                        request_body = json.dumps(body)
>                   r = self.pool_manager.request(
                        method, url,
                        body=request_body,
                        preload_content=_preload_content,
                        timeout=timeout,

../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/rest.py:172:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
method = 'POST'
url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io...12Mi"}, "requests": {"cpu": "10m", "memory": "128Mi"}}, "storageUri": "gs://kfserving-examples/models/torchscript"}}}}'
fields = None
headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
json = None
urlopen_kw = {'body': '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking....: "128Mi"}}, "storageUri": "gs://kfserving-examples/models/torchscript"}}}}', 'preload_content': True, 'timeout': None}

    def request(
        self,
        method: str,
        url: str,
        body: _TYPE_BODY | None = None,
        fields: _TYPE_FIELDS | None = None,
        headers: typing.Mapping[str, str] | None = None,
        json: typing.Any | None = None,
        **urlopen_kw: typing.Any,
    ) -> BaseHTTPResponse:
        """
        Make a request using :meth:`urlopen` with the appropriate encoding of
        ``fields`` based on the ``method`` used.

        This is a convenience method that requires the least amount of manual
        effort. It can be used in most situations, while still having the
        option to drop down to more specific methods when necessary, such as
        :meth:`request_encode_url`, :meth:`request_encode_body`,
        or even the lowest level :meth:`urlopen`.

        :param method:
            HTTP request method (such as GET, POST, PUT, etc.)
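Editor's note: the locals above show `_request_timeout = None` flowing into `rest.py`, where an int would become a total timeout and a 2-tuple a (connect, read) pair. A dependency-free sketch of that branching (the function name `normalize_timeout` and the dict return shape are ours; the real code builds a `urllib3.Timeout` and, on Python 3, checks only `int`, where we also accept `float` for illustration):

```python
def normalize_timeout(_request_timeout):
    """Mirror the _request_timeout handling in
    kubernetes.client.rest.RESTClientObject.request, with a plain dict
    standing in for urllib3.Timeout."""
    if not _request_timeout:
        # No timeout configured -- the case shown in this traceback.
        return None
    if isinstance(_request_timeout, (int, float)):
        # A single number is treated as the total request timeout.
        return {"total": _request_timeout}
    if isinstance(_request_timeout, tuple) and len(_request_timeout) == 2:
        # A pair is split into separate connect and read timeouts.
        connect, read = _request_timeout
        return {"connect": connect, "read": read}
    raise ValueError("unsupported _request_timeout: %r" % (_request_timeout,))

print(normalize_timeout(None))
print(normalize_timeout(30))
print(normalize_timeout((5, 25)))
```

Passing `_request_timeout=(5, 25)` from any generated API method would therefore bound the TCP connect and the response read separately, which is often the more useful shape for e2e tests against a remote API server.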
[e2e-predictor] [e2e-predictor] :param url: [e2e-predictor] The URL to perform the request on. [e2e-predictor] [e2e-predictor] :param body: [e2e-predictor] Data to send in the request body, either :class:`str`, :class:`bytes`, [e2e-predictor] an iterable of :class:`str`/:class:`bytes`, or a file-like object. [e2e-predictor] [e2e-predictor] :param fields: [e2e-predictor] Data to encode and send in the URL or request body, depending on ``method``. [e2e-predictor] [e2e-predictor] :param headers: [e2e-predictor] Dictionary of custom headers to send, such as User-Agent, [e2e-predictor] If-None-Match, etc. If None, pool headers are used. If provided, [e2e-predictor] these headers completely replace any pool-specific headers. [e2e-predictor] [e2e-predictor] :param json: [e2e-predictor] Data to encode and send as JSON with UTF-encoded in the request body. [e2e-predictor] The ``"Content-Type"`` header will be set to ``"application/json"`` [e2e-predictor] unless specified otherwise. [e2e-predictor] """ [e2e-predictor] method = method.upper() [e2e-predictor] [e2e-predictor] if json is not None and body is not None: [e2e-predictor] raise TypeError( [e2e-predictor] "request got values for both 'body' and 'json' parameters which are mutually exclusive" [e2e-predictor] ) [e2e-predictor] [e2e-predictor] if json is not None: [e2e-predictor] if headers is None: [e2e-predictor] headers = self.headers [e2e-predictor] [e2e-predictor] if not ("content-type" in map(str.lower, headers.keys())): [e2e-predictor] headers = HTTPHeaderDict(headers) [e2e-predictor] headers["Content-Type"] = "application/json" [e2e-predictor] [e2e-predictor] body = _json.dumps(json, separators=(",", ":"), ensure_ascii=False).encode( [e2e-predictor] "utf-8" [e2e-predictor] ) [e2e-predictor] [e2e-predictor] if body is not None: [e2e-predictor] urlopen_kw["body"] = body [e2e-predictor] [e2e-predictor] if method in self._encode_url_methods: [e2e-predictor] return self.request_encode_url( [e2e-predictor] method, 
[e2e-predictor] url, [e2e-predictor] fields=fields, # type: ignore[arg-type] [e2e-predictor] headers=headers, [e2e-predictor] **urlopen_kw, [e2e-predictor] ) [e2e-predictor] else: [e2e-predictor] > return self.request_encode_body( [e2e-predictor] method, url, fields=fields, headers=headers, **urlopen_kw [e2e-predictor] ) [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/_request_methods.py:143: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] method = 'POST' [e2e-predictor] url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices' [e2e-predictor] fields = None [e2e-predictor] headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'} [e2e-predictor] encode_multipart = True, multipart_boundary = None [e2e-predictor] urlopen_kw = {'body': '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking....: "128Mi"}}, "storageUri": "gs://kfserving-examples/models/torchscript"}}}}', 'preload_content': True, 'timeout': None} [e2e-predictor] extra_kw = {'body': '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking....nt': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'}), 'preload_content': True, 'timeout': None} [e2e-predictor] [e2e-predictor] def request_encode_body( [e2e-predictor] self, [e2e-predictor] method: str, [e2e-predictor] url: str, [e2e-predictor] fields: _TYPE_FIELDS | None = None, [e2e-predictor] headers: typing.Mapping[str, str] | None = None, [e2e-predictor] encode_multipart: bool = True, [e2e-predictor] multipart_boundary: str | None = None, [e2e-predictor] **urlopen_kw: str, [e2e-predictor] ) -> 
BaseHTTPResponse:
        """
        Make a request using :meth:`urlopen` with the ``fields`` encoded in
        the body. This is useful for request methods like POST, PUT, PATCH, etc.

        When ``encode_multipart=True`` (default), then
        :func:`urllib3.encode_multipart_formdata` is used to encode
        the payload with the appropriate content type. Otherwise
        :func:`urllib.parse.urlencode` is used with the
        'application/x-www-form-urlencoded' content type.

        Multipart encoding must be used when posting files, and it's reasonably
        safe to use it in other times too. However, it may break request
        signing, such as with OAuth.

        Supports an optional ``fields`` parameter of key/value strings AND
        key/filetuple. A filetuple is a (filename, data, MIME type) tuple where
        the MIME type is optional. For example::

            fields = {
                'foo': 'bar',
                'fakefile': ('foofile.txt', 'contents of foofile'),
                'realfile': ('barfile.txt', open('realfile').read()),
                'typedfile': ('bazfile.bin', open('bazfile').read(),
                              'image/jpeg'),
                'nonamefile': 'contents of nonamefile field',
            }

        When uploading a file, providing a filename (the first parameter of the
        tuple) is optional but recommended to best mimic behavior of browsers.

        Note that if ``headers`` are supplied, the 'Content-Type' header will
        be overwritten because it depends on the dynamic random boundary string
        which is used to compose the body of the request. The random boundary
        string can be explicitly set with the ``multipart_boundary`` parameter.

        :param method:
            HTTP request method (such as GET, POST, PUT, etc.)

        :param url:
            The URL to perform the request on.

        :param fields:
            Data to encode and send in the request body.

        :param headers:
            Dictionary of custom headers to send, such as User-Agent,
            If-None-Match, etc. If None, pool headers are used. If provided,
            these headers completely replace any pool-specific headers.

        :param encode_multipart:
            If True, encode the ``fields`` using the multipart/form-data MIME
            format.

        :param multipart_boundary:
            If not specified, then a random boundary will be generated using
            :func:`urllib3.filepost.choose_boundary`.
        """
        if headers is None:
            headers = self.headers

        extra_kw: dict[str, typing.Any] = {"headers": HTTPHeaderDict(headers)}
        body: bytes | str

        if fields:
            if "body" in urlopen_kw:
                raise TypeError(
                    "request got values for both 'fields' and 'body', can only specify one."
                )

            if encode_multipart:
                body, content_type = encode_multipart_formdata(
                    fields, boundary=multipart_boundary
                )
            else:
                body, content_type = (
                    urlencode(fields),  # type: ignore[arg-type]
                    "application/x-www-form-urlencoded",
                )

            extra_kw["body"] = body
            extra_kw["headers"].setdefault("Content-Type", content_type)

        extra_kw.update(urlopen_kw)

>       return self.urlopen(method, url, **extra_kw)

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/_request_methods.py:278:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
method = 'POST'
url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
redirect = True
kw = {'assert_same_host': False, 'body': '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata...', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'}), 'preload_content': True, ...}
u = Url(scheme='https', auth=None, host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', p...443, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices', query=None, fragment=None)
conn =

    def urlopen(  # type: ignore[override]
        self, method: str, url: str, redirect: bool = True, **kw: typing.Any
    ) -> BaseHTTPResponse:
        """
        Same as :meth:`urllib3.HTTPConnectionPool.urlopen`
        with custom cross-host redirect logic and only sends the request-uri
        portion of the ``url``.

        The given ``url`` parameter must be absolute, such that an appropriate
        :class:`urllib3.connectionpool.ConnectionPool` can be chosen for it.
        """
        u = parse_url(url)

        if u.scheme is None:
            warnings.warn(
                "URLs without a scheme (ie 'https://') are deprecated and will raise an error "
                "in a future version of urllib3. To avoid this DeprecationWarning ensure all URLs "
                "start with 'https://' or 'http://'. Read more in this issue: "
                "https://github.com/urllib3/urllib3/issues/2920",
                category=DeprecationWarning,
                stacklevel=2,
            )

        conn = self.connection_from_host(u.host, port=u.port, scheme=u.scheme)

        kw["assert_same_host"] = False
        kw["redirect"] = False

        if "headers" not in kw:
            kw["headers"] = self.headers

        if self._proxy_requires_url_absolute_form(u):
            response = conn.urlopen(method, url, **kw)
        else:
>           response = conn.urlopen(method, u.request_uri, **kw)

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/poolmanager.py:457:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
method = 'POST'
url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io...12Mi"}, "requests": {"cpu": "10m", "memory": "128Mi"}}, "storageUri": "gs://kfserving-examples/models/torchscript"}}}}'
headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'})
retries = Retry(total=2, connect=None, read=None, redirect=None, status=None)
redirect = False, assert_same_host = False, timeout = None, pool_timeout = None
release_conn = True, chunked = False, body_pos = None, preload_content = True
decode_content = True, response_kw = {}
parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices', query=None, fragment=None)
destination_scheme = None, conn = None, release_this_conn = True
http_tunnel_required = False
err = NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.c...a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")
clean_exit = False

    def urlopen(  # type: ignore[override]
        self,
        method: str,
        url: str,
        body: _TYPE_BODY | None = None,
        headers: typing.Mapping[str, str] | None = None,
        retries: Retry | bool | int | None = None,
        redirect: bool = True,
        assert_same_host: bool = True,
        timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
        pool_timeout: int | None = None,
        release_conn: bool | None = None,
        chunked: bool = False,
        body_pos: _TYPE_BODY_POSITION | None = None,
        preload_content: bool = True,
        decode_content: bool = True,
        **response_kw: typing.Any,
    ) -> BaseHTTPResponse:
        """
        Get a connection from the pool and perform an HTTP request. This is the
        lowest level call for making a request, so you'll need to specify all
        the raw details.

        .. note::

           More commonly, it's appropriate to use a convenience method
           such as :meth:`request`.

        .. note::

           `release_conn` will only behave as expected if
           `preload_content=False` because we want to make
           `preload_content=False` the default behaviour someday soon without
           breaking backwards compatibility.

        :param method:
            HTTP request method (such as GET, POST, PUT, etc.)

        :param url:
            The URL to perform the request on.

        :param body:
            Data to send in the request body, either :class:`str`, :class:`bytes`,
            an iterable of :class:`str`/:class:`bytes`, or a file-like object.

        :param headers:
            Dictionary of custom headers to send, such as User-Agent,
            If-None-Match, etc. If None, pool headers are used. If provided,
            these headers completely replace any pool-specific headers.

        :param retries:
            Configure the number of retries to allow before raising a
            :class:`~urllib3.exceptions.MaxRetryError` exception.

            If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a
            :class:`~urllib3.util.retry.Retry` object for fine-grained control
            over different types of retries.
            Pass an integer number to retry connection errors that many times,
            but no other types of errors. Pass zero to never retry.

            If ``False``, then retries are disabled and any exception is raised
            immediately. Also, instead of raising a MaxRetryError on redirects,
            the redirect response will be returned.

        :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.

        :param redirect:
            If True, automatically handle redirects (status codes 301, 302,
            303, 307, 308). Each redirect counts as a retry. Disabling retries
            will disable redirect, too.

        :param assert_same_host:
            If ``True``, will make sure that the host of the pool requests is
            consistent else will raise HostChangedError. When ``False``, you can
            use the pool on an HTTP proxy and request foreign hosts.

        :param timeout:
            If specified, overrides the default timeout for this one
            request. It may be a float (in seconds) or an instance of
            :class:`urllib3.util.Timeout`.

        :param pool_timeout:
            If set and the pool is set to block=True, then this method will
            block for ``pool_timeout`` seconds and raise EmptyPoolError if no
            connection is available within the time period.

        :param bool preload_content:
            If True, the response's body will be preloaded into memory.

        :param bool decode_content:
            If True, will attempt to decode the body based on the
            'content-encoding' header.

        :param release_conn:
            If False, then the urlopen call will not release the connection
            back into the pool once a response is received (but will release if
            you read the entire contents of the response such as when
            `preload_content=True`). This is useful if you're not preloading
            the response's content immediately. You will need to call
            ``r.release_conn()`` on the response ``r`` to return the connection
            back into the pool. If None, it takes the value of ``preload_content``
            which defaults to ``True``.

        :param bool chunked:
            If True, urllib3 will send the body using chunked transfer
            encoding. Otherwise, urllib3 will send the body using the standard
            content-length form. Defaults to False.

        :param int body_pos:
            Position to seek to in file-like body in the event of a retry or
            redirect. Typically this won't need to be set because urllib3 will
            auto-populate the value when needed.
        """
        parsed_url = parse_url(url)
        destination_scheme = parsed_url.scheme

        if headers is None:
            headers = self.headers

        if not isinstance(retries, Retry):
            retries = Retry.from_int(retries, redirect=redirect, default=self.retries)

        if release_conn is None:
            release_conn = preload_content

        # Check host
        if assert_same_host and not self.is_same_host(url):
            raise HostChangedError(self, url, retries)

        # Ensure that the URL we're connecting to is properly encoded
        if url.startswith("/"):
            url = to_str(_encode_target(url))
        else:
            url = to_str(parsed_url.url)

        conn = None

        # Track whether `conn` needs to be released before
        # returning/raising/recursing. Update this variable if necessary, and
        # leave `release_conn` constant throughout the function. That way, if
        # the function recurses, the original value of `release_conn` will be
        # passed down into the recursive call, and its value will be respected.
        #
        # See issue #651 [1] for details.
        #
        # [1]
        release_this_conn = release_conn

        http_tunnel_required = connection_requires_http_tunnel(
            self.proxy, self.proxy_config, destination_scheme
        )

        # Merge the proxy headers. Only done when not using HTTP CONNECT. We
        # have to copy the headers dict so we can safely change it without those
        # changes being reflected in anyone else's copy.
        if not http_tunnel_required:
            headers = headers.copy()  # type: ignore[attr-defined]
            headers.update(self.proxy_headers)  # type: ignore[union-attr]

        # Must keep the exception bound to a separate variable or else Python 3
        # complains about UnboundLocalError.
        err = None

        # Keep track of whether we cleanly exited the except block. This
        # ensures we do proper cleanup in finally.
        clean_exit = False

        # Rewind body position, if needed. Record current position
        # for future rewinds in the event of a redirect/retry.
        body_pos = set_file_position(body, body_pos)

        try:
            # Request a connection from the queue.
            timeout_obj = self._get_timeout(timeout)
            conn = self._get_conn(timeout=pool_timeout)

            conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]

            # Is this a closed/new connection that requires CONNECT tunnelling?
            if self.proxy is not None and http_tunnel_required and conn.is_closed:
                try:
                    self._prepare_proxy(conn)
                except (BaseSSLError, OSError, SocketTimeout) as e:
                    self._raise_timeout(
                        err=e, url=self.proxy.url, timeout_value=conn.timeout
                    )
                    raise

            # If we're going to release the connection in ``finally:``, then
            # the response doesn't need to know about the connection. Otherwise
            # it will also try to release it and we'll have a double-release
            # mess.
            response_conn = conn if not release_conn else None

            # Make the request on the HTTPConnection object
            response = self._make_request(
                conn,
                method,
                url,
                timeout=timeout_obj,
                body=body,
                headers=headers,
                chunked=chunked,
                retries=retries,
                response_conn=response_conn,
                preload_content=preload_content,
                decode_content=decode_content,
                **response_kw,
            )

            # Everything went great!
            clean_exit = True

        except EmptyPoolError:
            # Didn't get a connection from the pool, no need to clean up
            clean_exit = True
            release_this_conn = False
            raise

        except (
            TimeoutError,
            HTTPException,
            OSError,
            ProtocolError,
            BaseSSLError,
            SSLError,
            CertificateError,
            ProxyError,
        ) as e:
            # Discard the connection for these exceptions. It will be
            # replaced during the next _get_conn() call.
            clean_exit = False
            new_e: Exception = e
            if isinstance(e, (BaseSSLError, CertificateError)):
                new_e = SSLError(e)
            if isinstance(
                new_e,
                (
                    OSError,
                    NewConnectionError,
                    TimeoutError,
                    SSLError,
                    HTTPException,
                ),
            ) and (conn and conn.proxy and not conn.has_connected_to_proxy):
                new_e = _wrap_proxy_error(new_e, conn.proxy.scheme)
            elif isinstance(new_e, (OSError, HTTPException)):
                new_e = ProtocolError("Connection aborted.", new_e)

            retries = retries.increment(
                method, url, error=new_e, _pool=self, _stacktrace=sys.exc_info()[2]
            )
            retries.sleep()

            # Keep track of the error for the retry warning.
            err = e

        finally:
            if not clean_exit:
                # We hit some kind of exception, handled or otherwise. We need
                # to throw the connection away unless explicitly told not to.
                # Close the connection, set the variable to None, and make sure
                # we put the None back in the pool to avoid leaking it.
                if conn:
                    conn.close()
                    conn = None
                release_this_conn = True

            if release_this_conn:
                # Put the connection back to be reused. If the connection is
                # expired then it will be None, which will get replaced with a
                # fresh connection during _get_conn.
                self._put_conn(conn)

        if not conn:
            # Try again
            log.warning(
                "Retrying (%r) after connection broken by '%r': %s", retries, err, url
            )
>           return self.urlopen(
                method,
                url,
                body,
                headers,
                retries,
                redirect,
                assert_same_host,
                timeout=timeout,
                pool_timeout=pool_timeout,
                release_conn=release_conn,
                chunked=chunked,
                body_pos=body_pos,
                preload_content=preload_content,
                decode_content=decode_content,
                **response_kw,
            )

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:871:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
method = 'POST'
url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io...12Mi"}, "requests": {"cpu": "10m", "memory": "128Mi"}}, "storageUri": "gs://kfserving-examples/models/torchscript"}}}}'
headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'})
retries = Retry(total=1, connect=None, read=None, redirect=None, status=None)
redirect = False, assert_same_host = False, timeout = None, pool_timeout = None
release_conn = True, chunked = False, body_pos = None, preload_content = True
decode_content = True, response_kw = {}
parsed_url = Url(scheme=None,
auth=None, host=None, port=None, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices', query=None, fragment=None)
destination_scheme = None, conn = None, release_this_conn = True
http_tunnel_required = False
err = NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.c...a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")
clean_exit = False

    def urlopen(  # type: ignore[override]
        self,
        method: str,
        url: str,
        body: _TYPE_BODY | None = None,
        headers: typing.Mapping[str, str] | None = None,
        retries: Retry | bool | int | None = None,
        redirect: bool = True,
        assert_same_host: bool = True,
        timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
        pool_timeout: int | None = None,
        release_conn: bool | None = None,
        chunked: bool = False,
        body_pos: _TYPE_BODY_POSITION | None = None,
        preload_content: bool = True,
        decode_content: bool = True,
        **response_kw: typing.Any,
    ) -> BaseHTTPResponse:
        """
        Get a connection from the pool and perform an HTTP request. This is the
        lowest level call for making a request, so you'll need to specify all
        the raw details.

        .. note::

           More commonly, it's appropriate to use a convenience method
           such as :meth:`request`.

        .. note::

           `release_conn` will only behave as expected if
           `preload_content=False` because we want to make
           `preload_content=False` the default behaviour someday soon without
           breaking backwards compatibility.

        :param method:
            HTTP request method (such as GET, POST, PUT, etc.)

        :param url:
            The URL to perform the request on.

        :param body:
            Data to send in the request body, either :class:`str`, :class:`bytes`,
            an iterable of :class:`str`/:class:`bytes`, or a file-like object.

        :param headers:
            Dictionary of custom headers to send, such as User-Agent,
            If-None-Match, etc. If None, pool headers are used. If provided,
            these headers completely replace any pool-specific headers.

        :param retries:
            Configure the number of retries to allow before raising a
            :class:`~urllib3.exceptions.MaxRetryError` exception.

            If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a
            :class:`~urllib3.util.retry.Retry` object for fine-grained control
            over different types of retries.
            Pass an integer number to retry connection errors that many times,
            but no other types of errors. Pass zero to never retry.

            If ``False``, then retries are disabled and any exception is raised
            immediately. Also, instead of raising a MaxRetryError on redirects,
            the redirect response will be returned.

        :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.

        :param redirect:
            If True, automatically handle redirects (status codes 301, 302,
            303, 307, 308). Each redirect counts as a retry. Disabling retries
            will disable redirect, too.

        :param assert_same_host:
            If ``True``, will make sure that the host of the pool requests is
            consistent else will raise HostChangedError. When ``False``, you can
            use the pool on an HTTP proxy and request foreign hosts.

        :param timeout:
            If specified, overrides the default timeout for this one
            request. It may be a float (in seconds) or an instance of
            :class:`urllib3.util.Timeout`.

        :param pool_timeout:
            If set and the pool is set to block=True, then this method will
            block for ``pool_timeout`` seconds and raise EmptyPoolError if no
            connection is available within the time period.

        :param bool preload_content:
            If True, the response's body will be preloaded into memory.

        :param bool decode_content:
            If True, will attempt to decode the body based on the
            'content-encoding' header.

        :param release_conn:
            If False, then the urlopen call will not release the connection
            back into the pool once a response is received (but will release if
            you read the entire contents of the response such as when
            `preload_content=True`). This is useful if you're not preloading
            the response's content immediately. You will need to call
            ``r.release_conn()`` on the response ``r`` to return the connection
            back into the pool.
            If None, it takes the value of ``preload_content``
            which defaults to ``True``.

        :param bool chunked:
            If True, urllib3 will send the body using chunked transfer
            encoding. Otherwise, urllib3 will send the body using the standard
            content-length form. Defaults to False.

        :param int body_pos:
            Position to seek to in file-like body in the event of a retry or
            redirect. Typically this won't need to be set because urllib3 will
            auto-populate the value when needed.
        """
        parsed_url = parse_url(url)
        destination_scheme = parsed_url.scheme

        if headers is None:
            headers = self.headers

        if not isinstance(retries, Retry):
            retries = Retry.from_int(retries, redirect=redirect, default=self.retries)

        if release_conn is None:
            release_conn = preload_content

        # Check host
        if assert_same_host and not self.is_same_host(url):
            raise HostChangedError(self, url, retries)

        # Ensure that the URL we're connecting to is properly encoded
        if url.startswith("/"):
            url = to_str(_encode_target(url))
        else:
            url = to_str(parsed_url.url)

        conn = None

        # Track whether `conn` needs to be released before
        # returning/raising/recursing. Update this variable if necessary, and
        # leave `release_conn` constant throughout the function. That way, if
        # the function recurses, the original value of `release_conn` will be
        # passed down into the recursive call, and its value will be respected.
        #
        # See issue #651 [1] for details.
        #
        # [1]
        release_this_conn = release_conn

        http_tunnel_required = connection_requires_http_tunnel(
            self.proxy, self.proxy_config, destination_scheme
        )

        # Merge the proxy headers. Only done when not using HTTP CONNECT. We
        # have to copy the headers dict so we can safely change it without those
        # changes being reflected in anyone else's copy.
        if not http_tunnel_required:
            headers = headers.copy()  # type: ignore[attr-defined]
            headers.update(self.proxy_headers)  # type: ignore[union-attr]

        # Must keep the exception bound to a separate variable or else Python 3
        # complains about UnboundLocalError.
        err = None

        # Keep track of whether we cleanly exited the except block. This
        # ensures we do proper cleanup in finally.
        clean_exit = False

        # Rewind body position, if needed. Record current position
        # for future rewinds in the event of a redirect/retry.
        body_pos = set_file_position(body, body_pos)

        try:
            # Request a connection from the queue.
            timeout_obj = self._get_timeout(timeout)
            conn = self._get_conn(timeout=pool_timeout)

            conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]

            # Is this a closed/new connection that requires CONNECT tunnelling?
            if self.proxy is not None and http_tunnel_required and conn.is_closed:
                try:
                    self._prepare_proxy(conn)
                except (BaseSSLError, OSError, SocketTimeout) as e:
                    self._raise_timeout(
                        err=e, url=self.proxy.url, timeout_value=conn.timeout
                    )
                    raise

            # If we're going to release the connection in ``finally:``, then
            # the response doesn't need to know about the connection. Otherwise
            # it will also try to release it and we'll have a double-release
            # mess.
            response_conn = conn if not release_conn else None

            # Make the request on the HTTPConnection object
            response = self._make_request(
                conn,
                method,
                url,
                timeout=timeout_obj,
                body=body,
                headers=headers,
                chunked=chunked,
                retries=retries,
                response_conn=response_conn,
                preload_content=preload_content,
                decode_content=decode_content,
                **response_kw,
            )

            # Everything went great!
            clean_exit = True

        except EmptyPoolError:
            # Didn't get a connection from the pool, no need to clean up
            clean_exit = True
            release_this_conn = False
            raise

        except (
            TimeoutError,
            HTTPException,
            OSError,
            ProtocolError,
            BaseSSLError,
            SSLError,
            CertificateError,
            ProxyError,
        ) as e:
            # Discard the connection for these exceptions. It will be
            # replaced during the next _get_conn() call.
            clean_exit = False
            new_e: Exception = e
            if isinstance(e, (BaseSSLError, CertificateError)):
                new_e = SSLError(e)
            if isinstance(
                new_e,
                (
                    OSError,
                    NewConnectionError,
                    TimeoutError,
                    SSLError,
                    HTTPException,
                ),
            ) and (conn and conn.proxy and not conn.has_connected_to_proxy):
                new_e = _wrap_proxy_error(new_e, conn.proxy.scheme)
            elif isinstance(new_e, (OSError, HTTPException)):
                new_e = ProtocolError("Connection aborted.", new_e)

            retries = retries.increment(
                method, url, error=new_e, _pool=self, _stacktrace=sys.exc_info()[2]
            )
            retries.sleep()

            # Keep track of the error for the retry warning.
            err = e

        finally:
            if not clean_exit:
                # We hit some kind of exception, handled or otherwise. We need
                # to throw the connection away unless explicitly told not to.
                # Close the connection, set the variable to None, and make sure
                # we put the None back in the pool to avoid leaking it.
                if conn:
                    conn.close()
                    conn = None
                release_this_conn = True

            if release_this_conn:
                # Put the connection back to be reused. If the connection is
                # expired then it will be None, which will get replaced with a
                # fresh connection during _get_conn.
[e2e-predictor] self._put_conn(conn) [e2e-predictor] [e2e-predictor] if not conn: [e2e-predictor] # Try again [e2e-predictor] log.warning( [e2e-predictor] "Retrying (%r) after connection broken by '%r': %s", retries, err, url [e2e-predictor] ) [e2e-predictor] > return self.urlopen( [e2e-predictor] method, [e2e-predictor] url, [e2e-predictor] body, [e2e-predictor] headers, [e2e-predictor] retries, [e2e-predictor] redirect, [e2e-predictor] assert_same_host, [e2e-predictor] timeout=timeout, [e2e-predictor] pool_timeout=pool_timeout, [e2e-predictor] release_conn=release_conn, [e2e-predictor] chunked=chunked, [e2e-predictor] body_pos=body_pos, [e2e-predictor] preload_content=preload_content, [e2e-predictor] decode_content=decode_content, [e2e-predictor] **response_kw, [e2e-predictor] ) [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:871: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] method = 'POST' [e2e-predictor] url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices' [e2e-predictor] body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io...12Mi"}, "requests": {"cpu": "10m", "memory": "128Mi"}}, "storageUri": "gs://kfserving-examples/models/torchscript"}}}}' [e2e-predictor] headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'}) [e2e-predictor] retries = Retry(total=0, connect=None, read=None, redirect=None, status=None) [e2e-predictor] redirect = False, assert_same_host = False, timeout = None, pool_timeout = None [e2e-predictor] release_conn = True, chunked = False, body_pos = None, preload_content = True [e2e-predictor] decode_content = True, response_kw = {} [e2e-predictor] parsed_url = Url(scheme=None, 
auth=None, host=None, port=None, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices', query=None, fragment=None)
destination_scheme = None, conn = None, release_this_conn = True
http_tunnel_required = False
err = NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.c...a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")
clean_exit = False

>           return self.urlopen(
                method,
                url,
                body,
                headers,
                retries,
                redirect,
                assert_same_host,
                timeout=timeout,
                pool_timeout=pool_timeout,
                release_conn=release_conn,
                chunked=chunked,
                body_pos=body_pos,
                preload_content=preload_content,
                decode_content=decode_content,
                **response_kw,
            )

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:871:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
method = 'POST'
url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io...12Mi"}, "requests": {"cpu": "10m", "memory": "128Mi"}}, "storageUri": "gs://kfserving-examples/models/torchscript"}}}}'
headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'})
retries = Retry(total=0, connect=None, read=None, redirect=None, status=None)
redirect = False, assert_same_host = False, timeout = None, pool_timeout = None
release_conn = True, chunked = False, body_pos = None, preload_content = True
decode_content = True, response_kw = {}
parsed_url = Url(scheme=None,
auth=None, host=None, port=None, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices', query=None, fragment=None)
destination_scheme = None, conn = None, release_this_conn = True
http_tunnel_required = False, err = None, clean_exit = False

>           retries = retries.increment(
                method, url, error=new_e, _pool=self, _stacktrace=sys.exc_info()[2]
            )

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:841:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = Retry(total=0, connect=None, read=None, redirect=None, status=None)
method = 'POST'
url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
response = None
error = NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.c...a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")
_pool =
_stacktrace =

    def increment(
        self,
        method: str | None = None,
        url: str | None = None,
        response: BaseHTTPResponse | None = None,
        error: Exception | None =
None, [e2e-predictor] _pool: ConnectionPool | None = None, [e2e-predictor] _stacktrace: TracebackType | None = None, [e2e-predictor] ) -> Self: [e2e-predictor] """Return a new Retry object with incremented retry counters. [e2e-predictor] [e2e-predictor] :param response: A response object, or None, if the server did not [e2e-predictor] return a response. [e2e-predictor] :type response: :class:`~urllib3.response.BaseHTTPResponse` [e2e-predictor] :param Exception error: An error encountered during the request, or [e2e-predictor] None if the response was received successfully. [e2e-predictor] [e2e-predictor] :return: A new ``Retry`` object. [e2e-predictor] """ [e2e-predictor] if self.total is False and error: [e2e-predictor] # Disabled, indicate to re-raise the error. [e2e-predictor] raise reraise(type(error), error, _stacktrace) [e2e-predictor] [e2e-predictor] total = self.total [e2e-predictor] if total is not None: [e2e-predictor] total -= 1 [e2e-predictor] [e2e-predictor] connect = self.connect [e2e-predictor] read = self.read [e2e-predictor] redirect = self.redirect [e2e-predictor] status_count = self.status [e2e-predictor] other = self.other [e2e-predictor] cause = "unknown" [e2e-predictor] status = None [e2e-predictor] redirect_location = None [e2e-predictor] [e2e-predictor] if error and self._is_connection_error(error): [e2e-predictor] # Connect retry? [e2e-predictor] if connect is False: [e2e-predictor] raise reraise(type(error), error, _stacktrace) [e2e-predictor] elif connect is not None: [e2e-predictor] connect -= 1 [e2e-predictor] [e2e-predictor] elif error and self._is_read_error(error): [e2e-predictor] # Read retry? [e2e-predictor] if read is False or method is None or not self._is_method_retryable(method): [e2e-predictor] raise reraise(type(error), error, _stacktrace) [e2e-predictor] elif read is not None: [e2e-predictor] read -= 1 [e2e-predictor] [e2e-predictor] elif error: [e2e-predictor] # Other retry? 
[e2e-predictor] if other is not None: [e2e-predictor] other -= 1 [e2e-predictor] [e2e-predictor] elif response and response.get_redirect_location(): [e2e-predictor] # Redirect retry? [e2e-predictor] if redirect is not None: [e2e-predictor] redirect -= 1 [e2e-predictor] cause = "too many redirects" [e2e-predictor] response_redirect_location = response.get_redirect_location() [e2e-predictor] if response_redirect_location: [e2e-predictor] redirect_location = response_redirect_location [e2e-predictor] status = response.status [e2e-predictor] [e2e-predictor] else: [e2e-predictor] # Incrementing because of a server error like a 500 in [e2e-predictor] # status_forcelist and the given method is in the allowed_methods [e2e-predictor] cause = ResponseError.GENERIC_ERROR [e2e-predictor] if response and response.status: [e2e-predictor] if status_count is not None: [e2e-predictor] status_count -= 1 [e2e-predictor] cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) [e2e-predictor] status = response.status [e2e-predictor] [e2e-predictor] history = self.history + ( [e2e-predictor] RequestHistory(method, url, error, status, redirect_location), [e2e-predictor] ) [e2e-predictor] [e2e-predictor] new_retry = self.new( [e2e-predictor] total=total, [e2e-predictor] connect=connect, [e2e-predictor] read=read, [e2e-predictor] redirect=redirect, [e2e-predictor] status=status_count, [e2e-predictor] other=other, [e2e-predictor] history=history, [e2e-predictor] ) [e2e-predictor] [e2e-predictor] if new_retry.is_exhausted(): [e2e-predictor] reason = error or ResponseError(cause) [e2e-predictor] > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] [e2e-predictor] E urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Max retries exceeded with url: /apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices (Caused by 
NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")) [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError [e2e-predictor] ------------------------------ Captured log call ------------------------------- [e2e-predictor] WARNING urllib3.connectionpool:connectionpool.py:868 Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")': /apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices [e2e-predictor] WARNING urllib3.connectionpool:connectionpool.py:868 Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")': /apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices [e2e-predictor] WARNING urllib3.connectionpool:connectionpool.py:868 Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or 
service not known)")': /apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices [e2e-predictor] _____________________________ test_xgboost_kserve ______________________________ [e2e-predictor] [gw0] linux -- Python 3.11.13 /workspace/source/python/kserve/.venv/bin/python [e2e-predictor] [e2e-predictor] self = [e2e-predictor] [e2e-predictor] def _new_conn(self) -> socket.socket: [e2e-predictor] """Establish a socket connection and set nodelay settings on it. [e2e-predictor] [e2e-predictor] :return: New socket connection. [e2e-predictor] """ [e2e-predictor] try: [e2e-predictor] > sock = connection.create_connection( [e2e-predictor] (self._dns_host, self.port), [e2e-predictor] self.timeout, [e2e-predictor] source_address=self.source_address, [e2e-predictor] socket_options=self.socket_options, [e2e-predictor] ) [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connection.py:204: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] address = ('a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', 6443) [e2e-predictor] timeout = None, source_address = None, socket_options = [(6, 1, 1)] [e2e-predictor] [e2e-predictor] def create_connection( [e2e-predictor] address: tuple[str, int], [e2e-predictor] timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, [e2e-predictor] source_address: tuple[str, int] | None = None, [e2e-predictor] socket_options: _TYPE_SOCKET_OPTIONS | None = None, [e2e-predictor] ) -> socket.socket: [e2e-predictor] """Connect to *address* and return the socket object. [e2e-predictor] [e2e-predictor] Convenience function. Connect to *address* (a 2-tuple ``(host, [e2e-predictor] port)``) and return the socket object. Passing the optional [e2e-predictor] *timeout* parameter will set the timeout on the socket instance [e2e-predictor] before attempting to connect. 
If no *timeout* is supplied, the [e2e-predictor] global default timeout setting returned by :func:`socket.getdefaulttimeout` [e2e-predictor] is used. If *source_address* is set it must be a tuple of (host, port) [e2e-predictor] for the socket to bind as a source address before making the connection. [e2e-predictor] An host of '' or port 0 tells the OS to use the default. [e2e-predictor] """ [e2e-predictor] [e2e-predictor] host, port = address [e2e-predictor] if host.startswith("["): [e2e-predictor] host = host.strip("[]") [e2e-predictor] err = None [e2e-predictor] [e2e-predictor] # Using the value from allowed_gai_family() in the context of getaddrinfo lets [e2e-predictor] # us select whether to work with IPv4 DNS records, IPv6 records, or both. [e2e-predictor] # The original create_connection function always returns all records. [e2e-predictor] family = allowed_gai_family() [e2e-predictor] [e2e-predictor] try: [e2e-predictor] host.encode("idna") [e2e-predictor] except UnicodeError: [e2e-predictor] raise LocationParseError(f"'{host}', label empty or too long") from None [e2e-predictor] [e2e-predictor] > for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/util/connection.py:60: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] host = 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' [e2e-predictor] port = 6443, family = [e2e-predictor] type = , proto = 0, flags = 0 [e2e-predictor] [e2e-predictor] def getaddrinfo(host, port, family=0, type=0, proto=0, flags=0): [e2e-predictor] """Resolve host and port into list of address info entries. [e2e-predictor] [e2e-predictor] Translate the host/port argument into a sequence of 5-tuples that contain [e2e-predictor] all the necessary arguments for creating a socket connected to that service. 
[e2e-predictor] host is a domain name, a string representation of an IPv4/v6 address or [e2e-predictor] None. port is a string service name such as 'http', a numeric port number or [e2e-predictor] None. By passing None as the value of host and port, you can pass NULL to [e2e-predictor] the underlying C API. [e2e-predictor] [e2e-predictor] The family, type and proto arguments can be optionally specified in order to [e2e-predictor] narrow the list of addresses returned. Passing zero as a value for each of [e2e-predictor] these arguments selects the full range of results. [e2e-predictor] """ [e2e-predictor] # We override this function since we want to translate the numeric family [e2e-predictor] # and socket type values to enum constants. [e2e-predictor] addrlist = [] [e2e-predictor] > for res in _socket.getaddrinfo(host, port, family, type, proto, flags): [e2e-predictor] E socket.gaierror: [Errno -2] Name or service not known [e2e-predictor] [e2e-predictor] /usr/lib64/python3.11/socket.py:974: gaierror [e2e-predictor] [e2e-predictor] The above exception was the direct cause of the following exception: [e2e-predictor] [e2e-predictor] self = [e2e-predictor] method = 'POST' [e2e-predictor] url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices' [e2e-predictor] body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io..., "requests": {"cpu": "50m", "memory": "128Mi"}}, "storageUri": "gs://kfserving-examples/models/xgboost/1.5/model"}}}}' [e2e-predictor] headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'}) [e2e-predictor] retries = Retry(total=0, connect=None, read=None, redirect=None, status=None) [e2e-predictor] redirect = False, assert_same_host = False, timeout = None, pool_timeout = None [e2e-predictor] release_conn = True, chunked = False, body_pos = None, preload_content = True 
[e2e-predictor] decode_content = True, response_kw = {} [e2e-predictor] parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices', query=None, fragment=None) [e2e-predictor] destination_scheme = None, conn = None, release_this_conn = True [e2e-predictor] http_tunnel_required = False, err = None, clean_exit = False [e2e-predictor] [e2e-predictor] def urlopen( # type: ignore[override] [e2e-predictor] self, [e2e-predictor] method: str, [e2e-predictor] url: str, [e2e-predictor] body: _TYPE_BODY | None = None, [e2e-predictor] headers: typing.Mapping[str, str] | None = None, [e2e-predictor] retries: Retry | bool | int | None = None, [e2e-predictor] redirect: bool = True, [e2e-predictor] assert_same_host: bool = True, [e2e-predictor] timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, [e2e-predictor] pool_timeout: int | None = None, [e2e-predictor] release_conn: bool | None = None, [e2e-predictor] chunked: bool = False, [e2e-predictor] body_pos: _TYPE_BODY_POSITION | None = None, [e2e-predictor] preload_content: bool = True, [e2e-predictor] decode_content: bool = True, [e2e-predictor] **response_kw: typing.Any, [e2e-predictor] ) -> BaseHTTPResponse: [e2e-predictor] """ [e2e-predictor] Get a connection from the pool and perform an HTTP request. This is the [e2e-predictor] lowest level call for making a request, so you'll need to specify all [e2e-predictor] the raw details. [e2e-predictor] [e2e-predictor] .. note:: [e2e-predictor] [e2e-predictor] More commonly, it's appropriate to use a convenience method [e2e-predictor] such as :meth:`request`. [e2e-predictor] [e2e-predictor] .. note:: [e2e-predictor] [e2e-predictor] `release_conn` will only behave as expected if [e2e-predictor] `preload_content=False` because we want to make [e2e-predictor] `preload_content=False` the default behaviour someday soon without [e2e-predictor] breaking backwards compatibility. 
[e2e-predictor] [e2e-predictor] :param method: [e2e-predictor] HTTP request method (such as GET, POST, PUT, etc.) [e2e-predictor] [e2e-predictor] :param url: [e2e-predictor] The URL to perform the request on. [e2e-predictor] [e2e-predictor] :param body: [e2e-predictor] Data to send in the request body, either :class:`str`, :class:`bytes`, [e2e-predictor] an iterable of :class:`str`/:class:`bytes`, or a file-like object. [e2e-predictor] [e2e-predictor] :param headers: [e2e-predictor] Dictionary of custom headers to send, such as User-Agent, [e2e-predictor] If-None-Match, etc. If None, pool headers are used. If provided, [e2e-predictor] these headers completely replace any pool-specific headers. [e2e-predictor] [e2e-predictor] :param retries: [e2e-predictor] Configure the number of retries to allow before raising a [e2e-predictor] :class:`~urllib3.exceptions.MaxRetryError` exception. [e2e-predictor] [e2e-predictor] If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a [e2e-predictor] :class:`~urllib3.util.retry.Retry` object for fine-grained control [e2e-predictor] over different types of retries. [e2e-predictor] Pass an integer number to retry connection errors that many times, [e2e-predictor] but no other types of errors. Pass zero to never retry. [e2e-predictor] [e2e-predictor] If ``False``, then retries are disabled and any exception is raised [e2e-predictor] immediately. Also, instead of raising a MaxRetryError on redirects, [e2e-predictor] the redirect response will be returned. [e2e-predictor] [e2e-predictor] :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. [e2e-predictor] [e2e-predictor] :param redirect: [e2e-predictor] If True, automatically handle redirects (status codes 301, 302, [e2e-predictor] 303, 307, 308). Each redirect counts as a retry. Disabling retries [e2e-predictor] will disable redirect, too. 
[e2e-predictor] [e2e-predictor] :param assert_same_host: [e2e-predictor] If ``True``, will make sure that the host of the pool requests is [e2e-predictor] consistent else will raise HostChangedError. When ``False``, you can [e2e-predictor] use the pool on an HTTP proxy and request foreign hosts. [e2e-predictor] [e2e-predictor] :param timeout: [e2e-predictor] If specified, overrides the default timeout for this one [e2e-predictor] request. It may be a float (in seconds) or an instance of [e2e-predictor] :class:`urllib3.util.Timeout`. [e2e-predictor] [e2e-predictor] :param pool_timeout: [e2e-predictor] If set and the pool is set to block=True, then this method will [e2e-predictor] block for ``pool_timeout`` seconds and raise EmptyPoolError if no [e2e-predictor] connection is available within the time period. [e2e-predictor] [e2e-predictor] :param bool preload_content: [e2e-predictor] If True, the response's body will be preloaded into memory. [e2e-predictor] [e2e-predictor] :param bool decode_content: [e2e-predictor] If True, will attempt to decode the body based on the [e2e-predictor] 'content-encoding' header. [e2e-predictor] [e2e-predictor] :param release_conn: [e2e-predictor] If False, then the urlopen call will not release the connection [e2e-predictor] back into the pool once a response is received (but will release if [e2e-predictor] you read the entire contents of the response such as when [e2e-predictor] `preload_content=True`). This is useful if you're not preloading [e2e-predictor] the response's content immediately. You will need to call [e2e-predictor] ``r.release_conn()`` on the response ``r`` to return the connection [e2e-predictor] back into the pool. If None, it takes the value of ``preload_content`` [e2e-predictor] which defaults to ``True``. [e2e-predictor] [e2e-predictor] :param bool chunked: [e2e-predictor] If True, urllib3 will send the body using chunked transfer [e2e-predictor] encoding. 
Otherwise, urllib3 will send the body using the standard [e2e-predictor] content-length form. Defaults to False. [e2e-predictor] [e2e-predictor] :param int body_pos: [e2e-predictor] Position to seek to in file-like body in the event of a retry or [e2e-predictor] redirect. Typically this won't need to be set because urllib3 will [e2e-predictor] auto-populate the value when needed. [e2e-predictor] """ [e2e-predictor] parsed_url = parse_url(url) [e2e-predictor] destination_scheme = parsed_url.scheme [e2e-predictor] [e2e-predictor] if headers is None: [e2e-predictor] headers = self.headers [e2e-predictor] [e2e-predictor] if not isinstance(retries, Retry): [e2e-predictor] retries = Retry.from_int(retries, redirect=redirect, default=self.retries) [e2e-predictor] [e2e-predictor] if release_conn is None: [e2e-predictor] release_conn = preload_content [e2e-predictor] [e2e-predictor] # Check host [e2e-predictor] if assert_same_host and not self.is_same_host(url): [e2e-predictor] raise HostChangedError(self, url, retries) [e2e-predictor] [e2e-predictor] # Ensure that the URL we're connecting to is properly encoded [e2e-predictor] if url.startswith("/"): [e2e-predictor] url = to_str(_encode_target(url)) [e2e-predictor] else: [e2e-predictor] url = to_str(parsed_url.url) [e2e-predictor] [e2e-predictor] conn = None [e2e-predictor] [e2e-predictor] # Track whether `conn` needs to be released before [e2e-predictor] # returning/raising/recursing. Update this variable if necessary, and [e2e-predictor] # leave `release_conn` constant throughout the function. That way, if [e2e-predictor] # the function recurses, the original value of `release_conn` will be [e2e-predictor] # passed down into the recursive call, and its value will be respected. [e2e-predictor] # [e2e-predictor] # See issue #651 [1] for details. 
[e2e-predictor] # [e2e-predictor] # [1] [e2e-predictor] release_this_conn = release_conn [e2e-predictor] [e2e-predictor] http_tunnel_required = connection_requires_http_tunnel( [e2e-predictor] self.proxy, self.proxy_config, destination_scheme [e2e-predictor] ) [e2e-predictor] [e2e-predictor] # Merge the proxy headers. Only done when not using HTTP CONNECT. We [e2e-predictor] # have to copy the headers dict so we can safely change it without those [e2e-predictor] # changes being reflected in anyone else's copy. [e2e-predictor] if not http_tunnel_required: [e2e-predictor] headers = headers.copy() # type: ignore[attr-defined] [e2e-predictor] headers.update(self.proxy_headers) # type: ignore[union-attr] [e2e-predictor] [e2e-predictor] # Must keep the exception bound to a separate variable or else Python 3 [e2e-predictor] # complains about UnboundLocalError. [e2e-predictor] err = None [e2e-predictor] [e2e-predictor] # Keep track of whether we cleanly exited the except block. This [e2e-predictor] # ensures we do proper cleanup in finally. [e2e-predictor] clean_exit = False [e2e-predictor] [e2e-predictor] # Rewind body position, if needed. Record current position [e2e-predictor] # for future rewinds in the event of a redirect/retry. [e2e-predictor] body_pos = set_file_position(body, body_pos) [e2e-predictor] [e2e-predictor] try: [e2e-predictor] # Request a connection from the queue. [e2e-predictor] timeout_obj = self._get_timeout(timeout) [e2e-predictor] conn = self._get_conn(timeout=pool_timeout) [e2e-predictor] [e2e-predictor] conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] [e2e-predictor] [e2e-predictor] # Is this a closed/new connection that requires CONNECT tunnelling? 
[e2e-predictor] if self.proxy is not None and http_tunnel_required and conn.is_closed: [e2e-predictor] try: [e2e-predictor] self._prepare_proxy(conn) [e2e-predictor] except (BaseSSLError, OSError, SocketTimeout) as e: [e2e-predictor] self._raise_timeout( [e2e-predictor] err=e, url=self.proxy.url, timeout_value=conn.timeout [e2e-predictor] ) [e2e-predictor] raise [e2e-predictor] [e2e-predictor] # If we're going to release the connection in ``finally:``, then [e2e-predictor] # the response doesn't need to know about the connection. Otherwise [e2e-predictor] # it will also try to release it and we'll have a double-release [e2e-predictor] # mess. [e2e-predictor] response_conn = conn if not release_conn else None [e2e-predictor] [e2e-predictor] # Make the request on the HTTPConnection object [e2e-predictor] > response = self._make_request( [e2e-predictor] conn, [e2e-predictor] method, [e2e-predictor] url, [e2e-predictor] timeout=timeout_obj, [e2e-predictor] body=body, [e2e-predictor] headers=headers, [e2e-predictor] chunked=chunked, [e2e-predictor] retries=retries, [e2e-predictor] response_conn=response_conn, [e2e-predictor] preload_content=preload_content, [e2e-predictor] decode_content=decode_content, [e2e-predictor] **response_kw, [e2e-predictor] ) [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:787: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] conn = [e2e-predictor] method = 'POST' [e2e-predictor] url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices' [e2e-predictor] body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io..., "requests": {"cpu": "50m", "memory": "128Mi"}}, "storageUri": "gs://kfserving-examples/models/xgboost/1.5/model"}}}}' [e2e-predictor] headers = HTTPHeaderDict({'Accept': 
'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'}) [e2e-predictor] retries = Retry(total=0, connect=None, read=None, redirect=None, status=None) [e2e-predictor] timeout = Timeout(connect=None, read=None, total=None), chunked = False [e2e-predictor] response_conn = None, preload_content = True, decode_content = True [e2e-predictor] enforce_content_length = True [e2e-predictor] [e2e-predictor] def _make_request( [e2e-predictor] self, [e2e-predictor] conn: BaseHTTPConnection, [e2e-predictor] method: str, [e2e-predictor] url: str, [e2e-predictor] body: _TYPE_BODY | None = None, [e2e-predictor] headers: typing.Mapping[str, str] | None = None, [e2e-predictor] retries: Retry | None = None, [e2e-predictor] timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, [e2e-predictor] chunked: bool = False, [e2e-predictor] response_conn: BaseHTTPConnection | None = None, [e2e-predictor] preload_content: bool = True, [e2e-predictor] decode_content: bool = True, [e2e-predictor] enforce_content_length: bool = True, [e2e-predictor] ) -> BaseHTTPResponse: [e2e-predictor] """ [e2e-predictor] Perform a request on a given urllib connection object taken from our [e2e-predictor] pool. [e2e-predictor] [e2e-predictor] :param conn: [e2e-predictor] a connection from one of our connection pools [e2e-predictor] [e2e-predictor] :param method: [e2e-predictor] HTTP request method (such as GET, POST, PUT, etc.) [e2e-predictor] [e2e-predictor] :param url: [e2e-predictor] The URL to perform the request on. [e2e-predictor] [e2e-predictor] :param body: [e2e-predictor] Data to send in the request body, either :class:`str`, :class:`bytes`, [e2e-predictor] an iterable of :class:`str`/:class:`bytes`, or a file-like object. [e2e-predictor] [e2e-predictor] :param headers: [e2e-predictor] Dictionary of custom headers to send, such as User-Agent, [e2e-predictor] If-None-Match, etc. If None, pool headers are used. 
If provided, [e2e-predictor] these headers completely replace any pool-specific headers. [e2e-predictor] [e2e-predictor] :param retries: [e2e-predictor] Configure the number of retries to allow before raising a [e2e-predictor] :class:`~urllib3.exceptions.MaxRetryError` exception. [e2e-predictor] [e2e-predictor] Pass ``None`` to retry until you receive a response. Pass a [e2e-predictor] :class:`~urllib3.util.retry.Retry` object for fine-grained control [e2e-predictor] over different types of retries. [e2e-predictor] Pass an integer number to retry connection errors that many times, [e2e-predictor] but no other types of errors. Pass zero to never retry. [e2e-predictor] [e2e-predictor] If ``False``, then retries are disabled and any exception is raised [e2e-predictor] immediately. Also, instead of raising a MaxRetryError on redirects, [e2e-predictor] the redirect response will be returned. [e2e-predictor] [e2e-predictor] :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. [e2e-predictor] [e2e-predictor] :param timeout: [e2e-predictor] If specified, overrides the default timeout for this one [e2e-predictor] request. It may be a float (in seconds) or an instance of [e2e-predictor] :class:`urllib3.util.Timeout`. [e2e-predictor] [e2e-predictor] :param chunked: [e2e-predictor] If True, urllib3 will send the body using chunked transfer [e2e-predictor] encoding. Otherwise, urllib3 will send the body using the standard [e2e-predictor] content-length form. Defaults to False. [e2e-predictor] [e2e-predictor] :param response_conn: [e2e-predictor] Set this to ``None`` if you will handle releasing the connection or [e2e-predictor] set the connection to have the response release it. [e2e-predictor] [e2e-predictor] :param preload_content: [e2e-predictor] If True, the response's body will be preloaded during construction. 
[e2e-predictor] [e2e-predictor] :param decode_content: [e2e-predictor] If True, will attempt to decode the body based on the [e2e-predictor] 'content-encoding' header. [e2e-predictor] [e2e-predictor] :param enforce_content_length: [e2e-predictor] Enforce content length checking. Body returned by server must match [e2e-predictor] value of Content-Length header, if present. Otherwise, raise error. [e2e-predictor] """ [e2e-predictor] self.num_requests += 1 [e2e-predictor] [e2e-predictor] timeout_obj = self._get_timeout(timeout) [e2e-predictor] timeout_obj.start_connect() [e2e-predictor] conn.timeout = Timeout.resolve_default_timeout(timeout_obj.connect_timeout) [e2e-predictor] [e2e-predictor] try: [e2e-predictor] # Trigger any extra validation we need to do. [e2e-predictor] try: [e2e-predictor] self._validate_conn(conn) [e2e-predictor] except (SocketTimeout, BaseSSLError) as e: [e2e-predictor] self._raise_timeout(err=e, url=url, timeout_value=conn.timeout) [e2e-predictor] raise [e2e-predictor] [e2e-predictor] # _validate_conn() starts the connection to an HTTPS proxy [e2e-predictor] # so we need to wrap errors with 'ProxyError' here too. 
[e2e-predictor] except ( [e2e-predictor] OSError, [e2e-predictor] NewConnectionError, [e2e-predictor] TimeoutError, [e2e-predictor] BaseSSLError, [e2e-predictor] CertificateError, [e2e-predictor] SSLError, [e2e-predictor] ) as e: [e2e-predictor] new_e: Exception = e [e2e-predictor] if isinstance(e, (BaseSSLError, CertificateError)): [e2e-predictor] new_e = SSLError(e) [e2e-predictor] # If the connection didn't successfully connect to it's proxy [e2e-predictor] # then there [e2e-predictor] if isinstance( [e2e-predictor] new_e, (OSError, NewConnectionError, TimeoutError, SSLError) [e2e-predictor] ) and (conn and conn.proxy and not conn.has_connected_to_proxy): [e2e-predictor] new_e = _wrap_proxy_error(new_e, conn.proxy.scheme) [e2e-predictor] > raise new_e [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:488: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] conn = [e2e-predictor] method = 'POST' [e2e-predictor] url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices' [e2e-predictor] body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io..., "requests": {"cpu": "50m", "memory": "128Mi"}}, "storageUri": "gs://kfserving-examples/models/xgboost/1.5/model"}}}}' [e2e-predictor] headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'}) [e2e-predictor] retries = Retry(total=0, connect=None, read=None, redirect=None, status=None) [e2e-predictor] timeout = Timeout(connect=None, read=None, total=None), chunked = False [e2e-predictor] response_conn = None, preload_content = True, decode_content = True [e2e-predictor] enforce_content_length = True [e2e-predictor] [e2e-predictor] def _make_request( [e2e-predictor] self, 
        conn: BaseHTTPConnection,
        method: str,
        url: str,
        body: _TYPE_BODY | None = None,
        headers: typing.Mapping[str, str] | None = None,
        retries: Retry | None = None,
        timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
        chunked: bool = False,
        response_conn: BaseHTTPConnection | None = None,
        preload_content: bool = True,
        decode_content: bool = True,
        enforce_content_length: bool = True,
    ) -> BaseHTTPResponse:
        """
        Perform a request on a given urllib connection object taken from our
        pool.

        :param conn:
            a connection from one of our connection pools

        :param method:
            HTTP request method (such as GET, POST, PUT, etc.)

        :param url:
            The URL to perform the request on.

        :param body:
            Data to send in the request body, either :class:`str`, :class:`bytes`,
            an iterable of :class:`str`/:class:`bytes`, or a file-like object.

        :param headers:
            Dictionary of custom headers to send, such as User-Agent,
            If-None-Match, etc. If None, pool headers are used. If provided,
            these headers completely replace any pool-specific headers.

        :param retries:
            Configure the number of retries to allow before raising a
            :class:`~urllib3.exceptions.MaxRetryError` exception.

            Pass ``None`` to retry until you receive a response. Pass a
            :class:`~urllib3.util.retry.Retry` object for fine-grained control
            over different types of retries.
            Pass an integer number to retry connection errors that many times,
            but no other types of errors. Pass zero to never retry.

            If ``False``, then retries are disabled and any exception is raised
            immediately. Also, instead of raising a MaxRetryError on redirects,
            the redirect response will be returned.

        :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.

        :param timeout:
            If specified, overrides the default timeout for this one
            request. It may be a float (in seconds) or an instance of
            :class:`urllib3.util.Timeout`.

        :param chunked:
            If True, urllib3 will send the body using chunked transfer
            encoding. Otherwise, urllib3 will send the body using the standard
            content-length form. Defaults to False.

        :param response_conn:
            Set this to ``None`` if you will handle releasing the connection or
            set the connection to have the response release it.

        :param preload_content:
            If True, the response's body will be preloaded during construction.

        :param decode_content:
            If True, will attempt to decode the body based on the
            'content-encoding' header.

        :param enforce_content_length:
            Enforce content length checking. Body returned by server must match
            value of Content-Length header, if present. Otherwise, raise error.
        """
        self.num_requests += 1

        timeout_obj = self._get_timeout(timeout)
        timeout_obj.start_connect()
        conn.timeout = Timeout.resolve_default_timeout(timeout_obj.connect_timeout)

        try:
            # Trigger any extra validation we need to do.
            try:
>               self._validate_conn(conn)

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:464:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
conn =

    def _validate_conn(self, conn: BaseHTTPConnection) -> None:
        """
        Called right before a request is made, after the socket is created.
        """
        super()._validate_conn(conn)

        # Force connect early to allow us to validate the connection.
        if conn.is_closed:
>           conn.connect()

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:1093:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =

    def connect(self) -> None:
        # Today we don't need to be doing this step before the /actual/ socket
        # connection, however in the future we'll need to decide whether to
        # create a new socket or re-use an existing "shared" socket as a part
        # of the HTTP/2 handshake dance.
        if self._tunnel_host is not None and self._tunnel_port is not None:
            probe_http2_host = self._tunnel_host
            probe_http2_port = self._tunnel_port
        else:
            probe_http2_host = self.host
            probe_http2_port = self.port

        # Check if the target origin supports HTTP/2.
        # If the value comes back as 'None' it means that the current thread
        # is probing for HTTP/2 support. Otherwise, we're waiting for another
        # probe to complete, or we get a value right away.
        target_supports_http2: bool | None
        if "h2" in ssl_.ALPN_PROTOCOLS:
            target_supports_http2 = http2_probe.acquire_and_get(
                host=probe_http2_host, port=probe_http2_port
            )
        else:
            # If HTTP/2 isn't going to be offered it doesn't matter if
            # the target supports HTTP/2. Don't want to make a probe.
            target_supports_http2 = False

        if self._connect_callback is not None:
            self._connect_callback(
                "before connect",
                thread_id=threading.get_ident(),
                target_supports_http2=target_supports_http2,
            )

        try:
            sock: socket.socket | ssl.SSLSocket
>           self.sock = sock = self._new_conn()

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connection.py:759:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =

    def _new_conn(self) -> socket.socket:
        """Establish a socket connection and set nodelay settings on it.

        :return: New socket connection.
        """
        try:
            sock = connection.create_connection(
                (self._dns_host, self.port),
                self.timeout,
                source_address=self.source_address,
                socket_options=self.socket_options,
            )
        except socket.gaierror as e:
>           raise NameResolutionError(self.host, self, e) from e
E           urllib3.exceptions.NameResolutionError: HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connection.py:211: NameResolutionError

The above exception was the direct cause of the following exception:

rest_v1_client =

    @pytest.mark.predictor
    @pytest.mark.path_based_routing
    @pytest.mark.asyncio(scope="session")
    async def test_xgboost_kserve(rest_v1_client):
        service_name = "isvc-xgboost"
        predictor = V1beta1PredictorSpec(
            min_replicas=1,
            xgboost=V1beta1XGBoostSpec(
                storage_uri="gs://kfserving-examples/models/xgboost/1.5/model",
                resources=V1ResourceRequirements(
                    requests={"cpu": "50m", "memory": "128Mi"},
                    limits={"cpu": "100m", "memory": "256Mi"},
                ),
            ),
        )

        isvc = V1beta1InferenceService(
            api_version=constants.KSERVE_V1BETA1,
            kind=constants.KSERVE_KIND_INFERENCESERVICE,
            metadata=client.V1ObjectMeta(
                name=service_name,
                namespace=KSERVE_TEST_NAMESPACE,
                labels={
                    constants.KSERVE_LABEL_NETWORKING_VISIBILITY: constants.KSERVE_LABEL_NETWORKING_VISIBILITY_EXPOSED,
                },
            ),
            spec=V1beta1InferenceServiceSpec(predictor=predictor),
        )

        kserve_client = KServeClient(
            config_file=os.environ.get("KUBECONFIG", "~/.kube/config")
        )
>       kserve_client.create(isvc)

predictor/test_xgboost.py:68:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
inferenceservice = {'api_version': 'serving.kserve.io/v1beta1',
 'kind': 'InferenceService',
 'metadata': {'annotations': None,
 ...nts': None,
 'working_dir': None}},
 'transformer': None},
 'status': None}
namespace = 'kserve-ci-e2e-test', watch = False, timeout_seconds = 600

    def create(
        self, inferenceservice, namespace=None, watch=False, timeout_seconds=600
    ):  # pylint:disable=inconsistent-return-statements
        """
        Create the inference service
        :param inferenceservice: inference service object
        :param namespace: defaults to current or default namespace
        :param watch: True to watch the created service until timeout elapsed or status is ready
        :param timeout_seconds: timeout seconds for watch, default to 600s
        :return: created inference service
        """

        version = inferenceservice.api_version.split("/")[1]

        if namespace is None:
            namespace = utils.get_isvc_namespace(inferenceservice)
        try:
>           outputs = self.api_instance.create_namespaced_custom_object(
                constants.KSERVE_GROUP,
                version,
                namespace,
                constants.KSERVE_PLURAL_INFERENCESERVICE,
                inferenceservice,
            )

../../python/kserve/kserve/api/kserve_client.py:145:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
group = 'serving.kserve.io', version = 'v1beta1'
namespace = 'kserve-ci-e2e-test', plural = 'inferenceservices'
body = {'api_version': 'serving.kserve.io/v1beta1',
 'kind': 'InferenceService',
 'metadata': {'annotations': None,
 ...nts': None,
 'working_dir': None}},
 'transformer': None},
 'status': None}
kwargs = {'_return_http_data_only': True}

    def create_namespaced_custom_object(self, group, version, namespace, plural, body, **kwargs):  # noqa: E501
        """create_namespaced_custom_object  # noqa: E501

        Creates a namespace scoped Custom object  # noqa: E501
        This method makes a synchronous HTTP request by default.
        To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.create_namespaced_custom_object(group, version, namespace, plural, body, async_req=True)
        >>> result = thread.get()

        :param async_req bool: execute request asynchronously
        :param str group: The custom resource's group name (required)
        :param str version: The custom resource's version (required)
        :param str namespace: The custom resource's namespace (required)
        :param str plural: The custom resource's plural name. For TPRs this would be lowercase plural kind. (required)
        :param object body: The JSON schema of the Resource to create. (required)
        :param str pretty: If 'true', then the output is pretty printed.
        :param str dry_run: When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed
        :param str field_manager: fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint.
        :param str field_validation: fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23.
        - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. (optional)
        :param _preload_content: if False, the urllib3.HTTPResponse object will
                                 be returned without reading/decoding response
                                 data. Default is True.
        :param _request_timeout: timeout setting for this request. If one
                                 number provided, it will be total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
        :return: object
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
>       return self.create_namespaced_custom_object_with_http_info(group, version, namespace, plural, body, **kwargs)  # noqa: E501

../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api/custom_objects_api.py:231:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
group = 'serving.kserve.io', version = 'v1beta1'
namespace = 'kserve-ci-e2e-test', plural = 'inferenceservices'
body = {'api_version': 'serving.kserve.io/v1beta1',
 'kind': 'InferenceService',
 'metadata': {'annotations': None,
 ...nts': None,
 'working_dir': None}},
 'transformer': None},
 'status': None}
kwargs = {'_return_http_data_only': True}
local_var_params = {'_return_http_data_only': True, 'all_params': ['group', 'version', 'namespace', 'plural', 'body', 'pretty', ...], 'au...None,
 'working_dir': None}},
 'transformer': None},
 'status': None}, ...}
all_params = ['group', 'version', 'namespace', 'plural', 'body', 'pretty', ...]
key = '_return_http_data_only', val = True, collection_formats = {}
path_params = {'group': 'serving.kserve.io', 'namespace': 'kserve-ci-e2e-test', 'plural': 'inferenceservices', 'version': 'v1beta1'}
query_params = []

    def create_namespaced_custom_object_with_http_info(self, group, version, namespace, plural, body, **kwargs):  # noqa: E501
        """create_namespaced_custom_object  # noqa: E501

        Creates a namespace scoped Custom object  # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.create_namespaced_custom_object_with_http_info(group, version, namespace, plural, body, async_req=True)
        >>> result = thread.get()

        :param async_req bool: execute request asynchronously
        :param str group: The custom resource's group name (required)
        :param str version: The custom resource's version (required)
        :param str namespace: The custom resource's namespace (required)
        :param str plural: The custom resource's plural name. For TPRs this would be lowercase plural kind. (required)
        :param object body: The JSON schema of the Resource to create. (required)
        :param str pretty: If 'true', then the output is pretty printed.
        :param str dry_run: When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed
        :param str field_manager: fieldManager is a name associated with the actor or entity that is making these changes.
        The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint.
        :param str field_validation: fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. (optional)
        :param _return_http_data_only: response data without head status code
                                       and headers
        :param _preload_content: if False, the urllib3.HTTPResponse object will
                                 be returned without reading/decoding response
                                 data. Default is True.
        :param _request_timeout: timeout setting for this request. If one
                                 number provided, it will be total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
        :return: tuple(object, status_code(int), headers(HTTPHeaderDict))
                 If the method is called asynchronously,
                 returns the request thread.
        """

        local_var_params = locals()

        all_params = [
            'group',
            'version',
            'namespace',
            'plural',
            'body',
            'pretty',
            'dry_run',
            'field_manager',
            'field_validation'
        ]
        all_params.extend(
            [
                'async_req',
                '_return_http_data_only',
                '_preload_content',
                '_request_timeout'
            ]
        )

        for key, val in six.iteritems(local_var_params['kwargs']):
            if key not in all_params:
                raise ApiTypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method create_namespaced_custom_object" % key
                )
            local_var_params[key] = val
        del local_var_params['kwargs']
        # verify the required parameter 'group' is set
        if self.api_client.client_side_validation and ('group' not in local_var_params or  # noqa: E501
                                                        local_var_params['group'] is None):  # noqa: E501
            raise ApiValueError("Missing the required parameter `group` when calling `create_namespaced_custom_object`")  # noqa: E501
        # verify the required parameter 'version' is set
        if self.api_client.client_side_validation and ('version' not in local_var_params or  # noqa: E501
                                                        local_var_params['version'] is None):  # noqa: E501
            raise ApiValueError("Missing the required parameter `version` when calling `create_namespaced_custom_object`")  # noqa: E501
        # verify the required parameter 'namespace' is set
        if self.api_client.client_side_validation and ('namespace' not in local_var_params or  # noqa: E501
                                                        local_var_params['namespace'] is None):  # noqa: E501
            raise ApiValueError("Missing the required parameter `namespace` when calling `create_namespaced_custom_object`")  # noqa: E501
        # verify the required parameter 'plural' is set
        if self.api_client.client_side_validation and ('plural' not in local_var_params or  # noqa: E501
                                                        local_var_params['plural'] is None):  # noqa: E501
            raise ApiValueError("Missing the required parameter `plural` when calling `create_namespaced_custom_object`")  # noqa: E501
        # verify the required parameter 'body' is set
        if self.api_client.client_side_validation and ('body' not in local_var_params or  # noqa: E501
                                                        local_var_params['body'] is None):  # noqa: E501
            raise ApiValueError("Missing the required parameter `body` when calling `create_namespaced_custom_object`")  # noqa: E501

        collection_formats = {}

        path_params = {}
        if 'group' in local_var_params:
            path_params['group'] = local_var_params['group']  # noqa: E501
        if 'version' in local_var_params:
            path_params['version'] = local_var_params['version']  # noqa: E501
        if 'namespace' in local_var_params:
            path_params['namespace'] = local_var_params['namespace']  # noqa: E501
        if 'plural' in local_var_params:
            path_params['plural'] = local_var_params['plural']  # noqa: E501

        query_params = []
        if 'pretty' in local_var_params and local_var_params['pretty'] is not None:  # noqa: E501
            query_params.append(('pretty', local_var_params['pretty']))  # noqa: E501
        if 'dry_run' in local_var_params and local_var_params['dry_run'] is not None:  # noqa: E501
            query_params.append(('dryRun',
                                local_var_params['dry_run']))  # noqa: E501
        if 'field_manager' in local_var_params and local_var_params['field_manager'] is not None:  # noqa: E501
            query_params.append(('fieldManager', local_var_params['field_manager']))  # noqa: E501
        if 'field_validation' in local_var_params and local_var_params['field_validation'] is not None:  # noqa: E501
            query_params.append(('fieldValidation', local_var_params['field_validation']))  # noqa: E501

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None
        if 'body' in local_var_params:
            body_params = local_var_params['body']
        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.select_header_accept(
            ['application/json'])  # noqa: E501

        # Authentication setting
        auth_settings = ['BearerToken']  # noqa: E501

>       return self.api_client.call_api(
            '/apis/{group}/{version}/namespaces/{namespace}/{plural}', 'POST',
            path_params,
            query_params,
            header_params,
            body=body_params,
            post_params=form_params,
            files=local_var_files,
            response_type='object',  # noqa: E501
            auth_settings=auth_settings,
            async_req=local_var_params.get('async_req'),
            _return_http_data_only=local_var_params.get('_return_http_data_only'),  # noqa: E501
            _preload_content=local_var_params.get('_preload_content', True),
            _request_timeout=local_var_params.get('_request_timeout'),
            collection_formats=collection_formats)

../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api/custom_objects_api.py:354:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
resource_path = '/apis/{group}/{version}/namespaces/{namespace}/{plural}'
method = 'POST'
path_params = {'group': 'serving.kserve.io', 'namespace': 'kserve-ci-e2e-test', 'plural': 'inferenceservices', 'version': 'v1beta1'}
query_params = []
header_params = {'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
body = {'api_version': 'serving.kserve.io/v1beta1',
 'kind': 'InferenceService',
 'metadata': {'annotations': None,
 ...nts': None,
 'working_dir': None}},
 'transformer': None},
 'status': None}
post_params = [], files = {}, response_type = 'object'
auth_settings = ['BearerToken'], async_req = None, _return_http_data_only = True
collection_formats = {}, _preload_content = True, _request_timeout = None
_host = None

    def call_api(self, resource_path, method,
                 path_params=None, query_params=None, header_params=None,
                 body=None, post_params=None, files=None,
                 response_type=None, auth_settings=None, async_req=None,
                 _return_http_data_only=None, collection_formats=None,
                 _preload_content=True, _request_timeout=None, _host=None):
        """Makes the HTTP request (synchronous) and returns deserialized data.

        To make an async_req request, set the async_req parameter.

        :param resource_path: Path to method endpoint.
        :param method: Method to call.
        :param path_params: Path parameters in the url.
        :param query_params: Query parameters in the url.
        :param header_params: Header parameters to be
            placed in the request header.
        :param body: Request body.
        :param post_params dict: Request post form parameters,
            for `application/x-www-form-urlencoded`, `multipart/form-data`.
        :param auth_settings list: Auth Settings names for the request.
        :param response: Response data type.
        :param files dict: key -> filename, value -> filepath,
            for `multipart/form-data`.
        :param async_req bool: execute request asynchronously
        :param _return_http_data_only: response data without head status code
                                       and headers
        :param collection_formats: dict of collection formats for path, query,
            header, and post parameters.
        :param _preload_content: if False, the urllib3.HTTPResponse object will
                                 be returned without reading/decoding response
                                 data. Default is True.
        :param _request_timeout: timeout setting for this request. If one
                                 number provided, it will be total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
        :return:
            If async_req parameter is True,
            the request will be called asynchronously.
            The method will return the request thread.
            If parameter async_req is False or missing,
            then the method will return the response directly.
        """
        if not async_req:
>           return self.__call_api(resource_path, method,
                                   path_params, query_params, header_params,
                                   body, post_params, files,
                                   response_type, auth_settings,
                                   _return_http_data_only, collection_formats,
                                   _preload_content, _request_timeout, _host)

../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api_client.py:348:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
resource_path = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
method = 'POST'
path_params = [('group', 'serving.kserve.io'), ('version', 'v1beta1'), ('namespace', 'kserve-ci-e2e-test'), ('plural', 'inferenceservices')]
query_params = []
header_params = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
body = {'apiVersion': 'serving.kserve.io/v1beta1', 'kind': 'InferenceService', 'metadata': {'labels': {'networking.kserve.io/...}, 'requests': {'cpu': '50m', 'memory': '128Mi'}}, 'storageUri': 'gs://kfserving-examples/models/xgboost/1.5/model'}}}}
post_params = [], files = {}, response_type = 'object'
auth_settings = ['BearerToken'], _return_http_data_only = True
collection_formats = {}, _preload_content = True, _request_timeout = None
_host = None

    def __call_api(
            self, resource_path, method, path_params=None,
            query_params=None, header_params=None, body=None, post_params=None,
            files=None, response_type=None, auth_settings=None,
            _return_http_data_only=None, collection_formats=None,
[e2e-predictor] _preload_content=True, _request_timeout=None, _host=None): [e2e-predictor] [e2e-predictor] config = self.configuration [e2e-predictor] [e2e-predictor] # header parameters [e2e-predictor] header_params = header_params or {} [e2e-predictor] header_params.update(self.default_headers) [e2e-predictor] if self.cookie: [e2e-predictor] header_params['Cookie'] = self.cookie [e2e-predictor] if header_params: [e2e-predictor] header_params = self.sanitize_for_serialization(header_params) [e2e-predictor] header_params = dict(self.parameters_to_tuples(header_params, [e2e-predictor] collection_formats)) [e2e-predictor] [e2e-predictor] # path parameters [e2e-predictor] if path_params: [e2e-predictor] path_params = self.sanitize_for_serialization(path_params) [e2e-predictor] path_params = self.parameters_to_tuples(path_params, [e2e-predictor] collection_formats) [e2e-predictor] for k, v in path_params: [e2e-predictor] # specified safe chars, encode everything [e2e-predictor] resource_path = resource_path.replace( [e2e-predictor] '{%s}' % k, [e2e-predictor] quote(str(v), safe=config.safe_chars_for_path_param) [e2e-predictor] ) [e2e-predictor] [e2e-predictor] # query parameters [e2e-predictor] if query_params: [e2e-predictor] query_params = self.sanitize_for_serialization(query_params) [e2e-predictor] query_params = self.parameters_to_tuples(query_params, [e2e-predictor] collection_formats) [e2e-predictor] [e2e-predictor] # post parameters [e2e-predictor] if post_params or files: [e2e-predictor] post_params = post_params if post_params else [] [e2e-predictor] post_params = self.sanitize_for_serialization(post_params) [e2e-predictor] post_params = self.parameters_to_tuples(post_params, [e2e-predictor] collection_formats) [e2e-predictor] post_params.extend(self.files_parameters(files)) [e2e-predictor] [e2e-predictor] # auth setting [e2e-predictor] self.update_params_for_auth(header_params, query_params, auth_settings) [e2e-predictor] [e2e-predictor] # body 
        if body:
            body = self.sanitize_for_serialization(body)

        # request url
        if _host is None:
            url = self.configuration.host + resource_path
        else:
            # use server/host defined in path or operation instead
            url = _host + resource_path

        # perform request and return response
>       response_data = self.request(
            method, url, query_params=query_params, headers=header_params,
            post_params=post_params, body=body,
            _preload_content=_preload_content,
            _request_timeout=_request_timeout)

../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api_client.py:180:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
method = 'POST'
url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
query_params = []
headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
post_params = []
body = {'apiVersion': 'serving.kserve.io/v1beta1', 'kind': 'InferenceService', 'metadata': {'labels': {'networking.kserve.io/...}, 'requests': {'cpu': '50m', 'memory': '128Mi'}}, 'storageUri': 'gs://kfserving-examples/models/xgboost/1.5/model'}}}}
_preload_content = True, _request_timeout = None

    def request(self, method, url, query_params=None, headers=None,
                post_params=None, body=None, _preload_content=True,
                _request_timeout=None):
        """Makes the HTTP request using RESTClient."""
        if method == "GET":
            return self.rest_client.GET(url,
                                        query_params=query_params,
                                        _preload_content=_preload_content,
                                        _request_timeout=_request_timeout,
                                        headers=headers)
        elif method == "HEAD":
            return self.rest_client.HEAD(url,
                                         query_params=query_params,
                                         _preload_content=_preload_content,
                                         _request_timeout=_request_timeout,
                                         headers=headers)
        elif method == "OPTIONS":
            return self.rest_client.OPTIONS(url,
                                            query_params=query_params,
                                            headers=headers,
                                            _preload_content=_preload_content,
                                            _request_timeout=_request_timeout)
        elif method == "POST":
>           return self.rest_client.POST(url,
                                         query_params=query_params,
                                         headers=headers,
                                         post_params=post_params,
                                         _preload_content=_preload_content,
                                         _request_timeout=_request_timeout,

../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api_client.py:391:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
query_params = [], post_params = []
body = {'apiVersion': 'serving.kserve.io/v1beta1', 'kind': 'InferenceService', 'metadata': {'labels': {'networking.kserve.io/...}, 'requests': {'cpu': '50m', 'memory': '128Mi'}}, 'storageUri': 'gs://kfserving-examples/models/xgboost/1.5/model'}}}}
_preload_content = True, _request_timeout = None

    def POST(self, url, headers=None, query_params=None, post_params=None,
             body=None, _preload_content=True, _request_timeout=None):
>       return self.request("POST", url,
                            headers=headers,
                            query_params=query_params,
                            post_params=post_params,
                            _preload_content=_preload_content,
                            _request_timeout=_request_timeout,
                            body=body)

../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/rest.py:279:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
method = 'POST'
url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
query_params = []
headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
body = {'apiVersion': 'serving.kserve.io/v1beta1', 'kind': 'InferenceService', 'metadata': {'labels': {'networking.kserve.io/...}, 'requests': {'cpu': '50m', 'memory': '128Mi'}}, 'storageUri': 'gs://kfserving-examples/models/xgboost/1.5/model'}}}}
post_params = {}, _preload_content = True, _request_timeout = None

    def request(self, method, url, query_params=None, headers=None,
                body=None, post_params=None, _preload_content=True,
                _request_timeout=None):
        """Perform requests.

        :param method: http request method
        :param url: http request url
        :param query_params: query parameters in the url
        :param headers: http request headers
        :param body: request json body, for `application/json`
        :param post_params: request post parameters,
                            `application/x-www-form-urlencoded`
                            and `multipart/form-data`
        :param _preload_content: if False, the urllib3.HTTPResponse object will
                                 be returned without reading/decoding response
                                 data. Default is True.
        :param _request_timeout: timeout setting for this request. If one
                                 number provided, it will be total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
        """
        method = method.upper()
        assert method in ['GET', 'HEAD', 'DELETE', 'POST', 'PUT',
                          'PATCH', 'OPTIONS']

        if post_params and body:
            raise ApiValueError(
                "body parameter cannot be used with post_params parameter."
            )

        post_params = post_params or {}
        headers = headers or {}

        timeout = None
        if _request_timeout:
            if isinstance(_request_timeout, (int, ) if six.PY3 else (int, long)):  # noqa: E501,F821
                timeout = urllib3.Timeout(total=_request_timeout)
            elif (isinstance(_request_timeout, tuple) and
                    len(_request_timeout) == 2):
                timeout = urllib3.Timeout(
                    connect=_request_timeout[0], read=_request_timeout[1])

        if 'Content-Type' not in headers:
            headers['Content-Type'] = 'application/json'

        try:
            # For `POST`, `PUT`, `PATCH`, `OPTIONS`, `DELETE`
            if method in ['POST', 'PUT', 'PATCH', 'OPTIONS', 'DELETE']:
                if query_params:
                    url += '?' + urlencode(query_params)
                if (re.search('json', headers['Content-Type'], re.IGNORECASE) or
                        headers['Content-Type'] == 'application/apply-patch+yaml'):
                    if headers['Content-Type'] == 'application/json-patch+json':
                        if not isinstance(body, list):
                            headers['Content-Type'] = \
                                'application/strategic-merge-patch+json'
                    request_body = None
                    if body is not None:
                        request_body = json.dumps(body)
>                   r = self.pool_manager.request(
                        method, url,
                        body=request_body,
                        preload_content=_preload_content,
                        timeout=timeout,

../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/rest.py:172:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
method = 'POST'
url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io..., "requests": {"cpu": "50m", "memory": "128Mi"}}, "storageUri": "gs://kfserving-examples/models/xgboost/1.5/model"}}}}'
fields = None
headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
json = None
urlopen_kw = {'body': '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking....Mi"}}, "storageUri": "gs://kfserving-examples/models/xgboost/1.5/model"}}}}', 'preload_content': True, 'timeout': None}

    def request(
        self,
        method: str,
        url: str,
        body: _TYPE_BODY | None = None,
        fields: _TYPE_FIELDS | None = None,
        headers: typing.Mapping[str, str] | None = None,
        json: typing.Any | None = None,
        **urlopen_kw: typing.Any,
    ) -> BaseHTTPResponse:
        """
        Make a request using :meth:`urlopen` with the appropriate encoding of
        ``fields`` based on the ``method`` used.

        This is a convenience method that requires the least amount of manual
        effort. It can be used in most situations, while still having the
        option to drop down to more specific methods when necessary, such as
        :meth:`request_encode_url`, :meth:`request_encode_body`,
        or even the lowest level :meth:`urlopen`.

        :param method:
            HTTP request method (such as GET, POST, PUT, etc.)

        :param url:
            The URL to perform the request on.

        :param body:
            Data to send in the request body, either :class:`str`, :class:`bytes`,
            an iterable of :class:`str`/:class:`bytes`, or a file-like object.

        :param fields:
            Data to encode and send in the URL or request body, depending on ``method``.

        :param headers:
            Dictionary of custom headers to send, such as User-Agent,
            If-None-Match, etc. If None, pool headers are used. If provided,
            these headers completely replace any pool-specific headers.

        :param json:
            Data to encode and send as JSON with UTF-encoded in the request body.
            The ``"Content-Type"`` header will be set to ``"application/json"``
            unless specified otherwise.
        """
        method = method.upper()

        if json is not None and body is not None:
            raise TypeError(
                "request got values for both 'body' and 'json' parameters which are mutually exclusive"
            )

        if json is not None:
            if headers is None:
                headers = self.headers

            if not ("content-type" in map(str.lower, headers.keys())):
                headers = HTTPHeaderDict(headers)
                headers["Content-Type"] = "application/json"

            body = _json.dumps(json, separators=(",", ":"), ensure_ascii=False).encode(
                "utf-8"
            )

        if body is not None:
            urlopen_kw["body"] = body

        if method in self._encode_url_methods:
            return self.request_encode_url(
                method,
                url,
                fields=fields,  # type: ignore[arg-type]
                headers=headers,
                **urlopen_kw,
            )
        else:
>           return self.request_encode_body(
                method, url, fields=fields, headers=headers, **urlopen_kw
            )

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/_request_methods.py:143:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
method = 'POST'
url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
fields = None
headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
encode_multipart = True, multipart_boundary = None
urlopen_kw = {'body': '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking....Mi"}}, "storageUri": "gs://kfserving-examples/models/xgboost/1.5/model"}}}}', 'preload_content': True, 'timeout': None}
extra_kw = {'body': '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking....nt': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'}), 'preload_content': True, 'timeout': None}

    def request_encode_body(
        self,
        method: str,
        url: str,
        fields: _TYPE_FIELDS | None = None,
        headers: typing.Mapping[str, str] | None = None,
        encode_multipart: bool = True,
        multipart_boundary: str | None = None,
        **urlopen_kw: str,
    ) -> BaseHTTPResponse:
        """
        Make a request using :meth:`urlopen` with the ``fields`` encoded in
        the body. This is useful for request methods like POST, PUT, PATCH, etc.

        When ``encode_multipart=True`` (default), then
        :func:`urllib3.encode_multipart_formdata` is used to encode
        the payload with the appropriate content type. Otherwise
        :func:`urllib.parse.urlencode` is used with the
        'application/x-www-form-urlencoded' content type.

        Multipart encoding must be used when posting files, and it's reasonably
        safe to use it in other times too. However, it may break request
        signing, such as with OAuth.

        Supports an optional ``fields`` parameter of key/value strings AND
        key/filetuple. A filetuple is a (filename, data, MIME type) tuple where
        the MIME type is optional. For example::

            fields = {
                'foo': 'bar',
                'fakefile': ('foofile.txt', 'contents of foofile'),
                'realfile': ('barfile.txt', open('realfile').read()),
                'typedfile': ('bazfile.bin', open('bazfile').read(),
                              'image/jpeg'),
                'nonamefile': 'contents of nonamefile field',
            }

        When uploading a file, providing a filename (the first parameter of the
        tuple) is optional but recommended to best mimic behavior of browsers.

        Note that if ``headers`` are supplied, the 'Content-Type' header will
        be overwritten because it depends on the dynamic random boundary string
        which is used to compose the body of the request. The random boundary
        string can be explicitly set with the ``multipart_boundary`` parameter.
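As a side note, the key/filetuple convention described in the docstring above can be exercised directly with `urllib3.encode_multipart_formdata` (the field names below are made-up illustrations, not values from this test run); it returns the encoded body together with the `Content-Type` carrying the random boundary:

```python
from urllib3 import encode_multipart_formdata

# Hypothetical fields following the key/filetuple convention:
# plain string values, and (filename, data[, MIME type]) tuples.
fields = {
    "foo": "bar",
    "fakefile": ("foofile.txt", "contents of foofile"),
    "typedfile": ("bazfile.bin", b"\x00\x01", "application/octet-stream"),
}

# body is the multipart-encoded payload; content_type includes the
# randomly generated boundary string.
body, content_type = encode_multipart_formdata(fields)
```

In the failing request, however, `fields` was `None` and a pre-serialized JSON `body` was passed through, so no multipart encoding took place.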

        :param method:
            HTTP request method (such as GET, POST, PUT, etc.)

        :param url:
            The URL to perform the request on.

        :param fields:
            Data to encode and send in the request body.

        :param headers:
            Dictionary of custom headers to send, such as User-Agent,
            If-None-Match, etc. If None, pool headers are used. If provided,
            these headers completely replace any pool-specific headers.

        :param encode_multipart:
            If True, encode the ``fields`` using the multipart/form-data MIME
            format.

        :param multipart_boundary:
            If not specified, then a random boundary will be generated using
            :func:`urllib3.filepost.choose_boundary`.
        """
        if headers is None:
            headers = self.headers

        extra_kw: dict[str, typing.Any] = {"headers": HTTPHeaderDict(headers)}
        body: bytes | str

        if fields:
            if "body" in urlopen_kw:
                raise TypeError(
                    "request got values for both 'fields' and 'body', can only specify one."
                )

            if encode_multipart:
                body, content_type = encode_multipart_formdata(
                    fields, boundary=multipart_boundary
                )
            else:
                body, content_type = (
                    urlencode(fields),  # type: ignore[arg-type]
                    "application/x-www-form-urlencoded",
                )

            extra_kw["body"] = body
            extra_kw["headers"].setdefault("Content-Type", content_type)

        extra_kw.update(urlopen_kw)

>       return self.urlopen(method, url, **extra_kw)

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/_request_methods.py:278:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
method = 'POST'
url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
redirect = True
kw = {'assert_same_host': False, 'body': '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata...', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'}), 'preload_content': True, ...}
u = Url(scheme='https', auth=None, host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', p...443, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices', query=None, fragment=None)
conn =

    def urlopen(  # type: ignore[override]
        self, method: str, url: str, redirect: bool = True, **kw: typing.Any
    ) -> BaseHTTPResponse:
        """
        Same as :meth:`urllib3.HTTPConnectionPool.urlopen`
        with custom cross-host redirect logic and only sends the request-uri
        portion of the ``url``.

        The given ``url`` parameter must be absolute, such that an appropriate
        :class:`urllib3.connectionpool.ConnectionPool` can be chosen for it.
        """
        u = parse_url(url)

        if u.scheme is None:
            warnings.warn(
                "URLs without a scheme (ie 'https://') are deprecated and will raise an error "
                "in a future version of urllib3. To avoid this DeprecationWarning ensure all URLs "
                "start with 'https://' or 'http://'. Read more in this issue: "
                "https://github.com/urllib3/urllib3/issues/2920",
                category=DeprecationWarning,
                stacklevel=2,
            )

        conn = self.connection_from_host(u.host, port=u.port, scheme=u.scheme)

        kw["assert_same_host"] = False
        kw["redirect"] = False

        if "headers" not in kw:
            kw["headers"] = self.headers

        if self._proxy_requires_url_absolute_form(u):
            response = conn.urlopen(method, url, **kw)
        else:
>           response = conn.urlopen(method, u.request_uri, **kw)

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/poolmanager.py:457:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
method = 'POST'
url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io..., "requests": {"cpu": "50m", "memory": "128Mi"}}, "storageUri": "gs://kfserving-examples/models/xgboost/1.5/model"}}}}'
headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'})
retries = Retry(total=2, connect=None, read=None, redirect=None, status=None)
redirect = False, assert_same_host = False, timeout = None, pool_timeout = None
release_conn = True, chunked = False, body_pos = None, preload_content = True
decode_content = True, response_kw = {}
parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices', query=None, fragment=None)
destination_scheme = None, conn = None, release_this_conn = True
http_tunnel_required = False
err = NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.c...a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")
clean_exit = False

    def urlopen(  # type: ignore[override]
        self,
        method: str,
        url: str,
        body: _TYPE_BODY | None = None,
        headers: typing.Mapping[str, str] | None = None,
        retries: Retry | bool | int | None = None,
        redirect: bool = True,
        assert_same_host: bool = True,
        timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
        pool_timeout: int | None = None,
        release_conn: bool | None = None,
        chunked: bool = False,
        body_pos: _TYPE_BODY_POSITION | None = None,
        preload_content: bool = True,
        decode_content: bool = True,
        **response_kw: typing.Any,
    ) -> BaseHTTPResponse:
        """
        Get a connection from the pool and perform an HTTP request. This is the
        lowest level call for making a request, so you'll need to specify all
        the raw details.

        .. note::

            More commonly, it's appropriate to use a convenience method
            such as :meth:`request`.

        .. note::

            `release_conn` will only behave as expected if
            `preload_content=False` because we want to make
            `preload_content=False` the default behaviour someday soon without
            breaking backwards compatibility.

        :param method:
            HTTP request method (such as GET, POST, PUT, etc.)

        :param url:
            The URL to perform the request on.

        :param body:
            Data to send in the request body, either :class:`str`, :class:`bytes`,
            an iterable of :class:`str`/:class:`bytes`, or a file-like object.

        :param headers:
            Dictionary of custom headers to send, such as User-Agent,
            If-None-Match, etc. If None, pool headers are used. If provided,
            these headers completely replace any pool-specific headers.

        :param retries:
            Configure the number of retries to allow before raising a
            :class:`~urllib3.exceptions.MaxRetryError` exception.

            If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a
            :class:`~urllib3.util.retry.Retry` object for fine-grained control
            over different types of retries.
            Pass an integer number to retry connection errors that many times,
            but no other types of errors. Pass zero to never retry.

            If ``False``, then retries are disabled and any exception is raised
            immediately. Also, instead of raising a MaxRetryError on redirects,
            the redirect response will be returned.

        :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.

        :param redirect:
            If True, automatically handle redirects (status codes 301, 302,
            303, 307, 308). Each redirect counts as a retry. Disabling retries
            will disable redirect, too.

        :param assert_same_host:
            If ``True``, will make sure that the host of the pool requests is
            consistent else will raise HostChangedError. When ``False``, you can
            use the pool on an HTTP proxy and request foreign hosts.

        :param timeout:
            If specified, overrides the default timeout for this one
            request. It may be a float (in seconds) or an instance of
            :class:`urllib3.util.Timeout`.

        :param pool_timeout:
            If set and the pool is set to block=True, then this method will
            block for ``pool_timeout`` seconds and raise EmptyPoolError if no
            connection is available within the time period.

        :param bool preload_content:
            If True, the response's body will be preloaded into memory.

        :param bool decode_content:
            If True, will attempt to decode the body based on the
            'content-encoding' header.

        :param release_conn:
            If False, then the urlopen call will not release the connection
            back into the pool once a response is received (but will release if
            you read the entire contents of the response such as when
            `preload_content=True`). This is useful if you're not preloading
            the response's content immediately. You will need to call
            ``r.release_conn()`` on the response ``r`` to return the connection
            back into the pool. If None, it takes the value of ``preload_content``
            which defaults to ``True``.

        :param bool chunked:
            If True, urllib3 will send the body using chunked transfer
            encoding. Otherwise, urllib3 will send the body using the standard
            content-length form. Defaults to False.

        :param int body_pos:
            Position to seek to in file-like body in the event of a retry or
            redirect. Typically this won't need to be set because urllib3 will
            auto-populate the value when needed.
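A side note on the `retries` parameter documented above: the locals dump for this failure shows `retries = Retry(total=2, connect=None, read=None, redirect=None, status=None)`. As the docstring describes, a bare integer is promoted to a `Retry` object, while constructing `Retry` directly gives fine-grained control. A minimal sketch (these particular values are illustrative, not taken from the client's configuration code):

```python
from urllib3.util.retry import Retry

# An integer retry count is promoted to a Retry object, per the docstring.
r_int = Retry.from_int(3)

# A Retry constructed directly, shaped like the one in the locals dump above.
r_obj = Retry(total=2, connect=None, read=None, redirect=None, status=None)
```

With `total=2`, urllib3 re-attempted the connection twice before raising `MaxRetryError`; a DNS failure is retried like any other connection error, which is why the name-resolution error below surfaced only after the retry budget was exhausted.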
[e2e-predictor] """ [e2e-predictor] parsed_url = parse_url(url) [e2e-predictor] destination_scheme = parsed_url.scheme [e2e-predictor] [e2e-predictor] if headers is None: [e2e-predictor] headers = self.headers [e2e-predictor] [e2e-predictor] if not isinstance(retries, Retry): [e2e-predictor] retries = Retry.from_int(retries, redirect=redirect, default=self.retries) [e2e-predictor] [e2e-predictor] if release_conn is None: [e2e-predictor] release_conn = preload_content [e2e-predictor] [e2e-predictor] # Check host [e2e-predictor] if assert_same_host and not self.is_same_host(url): [e2e-predictor] raise HostChangedError(self, url, retries) [e2e-predictor] [e2e-predictor] # Ensure that the URL we're connecting to is properly encoded [e2e-predictor] if url.startswith("/"): [e2e-predictor] url = to_str(_encode_target(url)) [e2e-predictor] else: [e2e-predictor] url = to_str(parsed_url.url) [e2e-predictor] [e2e-predictor] conn = None [e2e-predictor] [e2e-predictor] # Track whether `conn` needs to be released before [e2e-predictor] # returning/raising/recursing. Update this variable if necessary, and [e2e-predictor] # leave `release_conn` constant throughout the function. That way, if [e2e-predictor] # the function recurses, the original value of `release_conn` will be [e2e-predictor] # passed down into the recursive call, and its value will be respected. [e2e-predictor] # [e2e-predictor] # See issue #651 [1] for details. [e2e-predictor] # [e2e-predictor] # [1] [e2e-predictor] release_this_conn = release_conn [e2e-predictor] [e2e-predictor] http_tunnel_required = connection_requires_http_tunnel( [e2e-predictor] self.proxy, self.proxy_config, destination_scheme [e2e-predictor] ) [e2e-predictor] [e2e-predictor] # Merge the proxy headers. Only done when not using HTTP CONNECT. We [e2e-predictor] # have to copy the headers dict so we can safely change it without those [e2e-predictor] # changes being reflected in anyone else's copy. 
[e2e-predictor] if not http_tunnel_required: [e2e-predictor] headers = headers.copy() # type: ignore[attr-defined] [e2e-predictor] headers.update(self.proxy_headers) # type: ignore[union-attr] [e2e-predictor] [e2e-predictor] # Must keep the exception bound to a separate variable or else Python 3 [e2e-predictor] # complains about UnboundLocalError. [e2e-predictor] err = None [e2e-predictor] [e2e-predictor] # Keep track of whether we cleanly exited the except block. This [e2e-predictor] # ensures we do proper cleanup in finally. [e2e-predictor] clean_exit = False [e2e-predictor] [e2e-predictor] # Rewind body position, if needed. Record current position [e2e-predictor] # for future rewinds in the event of a redirect/retry. [e2e-predictor] body_pos = set_file_position(body, body_pos) [e2e-predictor] [e2e-predictor] try: [e2e-predictor] # Request a connection from the queue. [e2e-predictor] timeout_obj = self._get_timeout(timeout) [e2e-predictor] conn = self._get_conn(timeout=pool_timeout) [e2e-predictor] [e2e-predictor] conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] [e2e-predictor] [e2e-predictor] # Is this a closed/new connection that requires CONNECT tunnelling? [e2e-predictor] if self.proxy is not None and http_tunnel_required and conn.is_closed: [e2e-predictor] try: [e2e-predictor] self._prepare_proxy(conn) [e2e-predictor] except (BaseSSLError, OSError, SocketTimeout) as e: [e2e-predictor] self._raise_timeout( [e2e-predictor] err=e, url=self.proxy.url, timeout_value=conn.timeout [e2e-predictor] ) [e2e-predictor] raise [e2e-predictor] [e2e-predictor] # If we're going to release the connection in ``finally:``, then [e2e-predictor] # the response doesn't need to know about the connection. Otherwise [e2e-predictor] # it will also try to release it and we'll have a double-release [e2e-predictor] # mess. 
[e2e-predictor] response_conn = conn if not release_conn else None [e2e-predictor] [e2e-predictor] # Make the request on the HTTPConnection object [e2e-predictor] response = self._make_request( [e2e-predictor] conn, [e2e-predictor] method, [e2e-predictor] url, [e2e-predictor] timeout=timeout_obj, [e2e-predictor] body=body, [e2e-predictor] headers=headers, [e2e-predictor] chunked=chunked, [e2e-predictor] retries=retries, [e2e-predictor] response_conn=response_conn, [e2e-predictor] preload_content=preload_content, [e2e-predictor] decode_content=decode_content, [e2e-predictor] **response_kw, [e2e-predictor] ) [e2e-predictor] [e2e-predictor] # Everything went great! [e2e-predictor] clean_exit = True [e2e-predictor] [e2e-predictor] except EmptyPoolError: [e2e-predictor] # Didn't get a connection from the pool, no need to clean up [e2e-predictor] clean_exit = True [e2e-predictor] release_this_conn = False [e2e-predictor] raise [e2e-predictor] [e2e-predictor] except ( [e2e-predictor] TimeoutError, [e2e-predictor] HTTPException, [e2e-predictor] OSError, [e2e-predictor] ProtocolError, [e2e-predictor] BaseSSLError, [e2e-predictor] SSLError, [e2e-predictor] CertificateError, [e2e-predictor] ProxyError, [e2e-predictor] ) as e: [e2e-predictor] # Discard the connection for these exceptions. It will be [e2e-predictor] # replaced during the next _get_conn() call. 
[e2e-predictor] clean_exit = False [e2e-predictor] new_e: Exception = e [e2e-predictor] if isinstance(e, (BaseSSLError, CertificateError)): [e2e-predictor] new_e = SSLError(e) [e2e-predictor] if isinstance( [e2e-predictor] new_e, [e2e-predictor] ( [e2e-predictor] OSError, [e2e-predictor] NewConnectionError, [e2e-predictor] TimeoutError, [e2e-predictor] SSLError, [e2e-predictor] HTTPException, [e2e-predictor] ), [e2e-predictor] ) and (conn and conn.proxy and not conn.has_connected_to_proxy): [e2e-predictor] new_e = _wrap_proxy_error(new_e, conn.proxy.scheme) [e2e-predictor] elif isinstance(new_e, (OSError, HTTPException)): [e2e-predictor] new_e = ProtocolError("Connection aborted.", new_e) [e2e-predictor] [e2e-predictor] retries = retries.increment( [e2e-predictor] method, url, error=new_e, _pool=self, _stacktrace=sys.exc_info()[2] [e2e-predictor] ) [e2e-predictor] retries.sleep() [e2e-predictor] [e2e-predictor] # Keep track of the error for the retry warning. [e2e-predictor] err = e [e2e-predictor] [e2e-predictor] finally: [e2e-predictor] if not clean_exit: [e2e-predictor] # We hit some kind of exception, handled or otherwise. We need [e2e-predictor] # to throw the connection away unless explicitly told not to. [e2e-predictor] # Close the connection, set the variable to None, and make sure [e2e-predictor] # we put the None back in the pool to avoid leaking it. [e2e-predictor] if conn: [e2e-predictor] conn.close() [e2e-predictor] conn = None [e2e-predictor] release_this_conn = True [e2e-predictor] [e2e-predictor] if release_this_conn: [e2e-predictor] # Put the connection back to be reused. If the connection is [e2e-predictor] # expired then it will be None, which will get replaced with a [e2e-predictor] # fresh connection during _get_conn. 
[e2e-predictor] self._put_conn(conn) [e2e-predictor] [e2e-predictor] if not conn: [e2e-predictor] # Try again [e2e-predictor] log.warning( [e2e-predictor] "Retrying (%r) after connection broken by '%r': %s", retries, err, url [e2e-predictor] ) [e2e-predictor] > return self.urlopen( [e2e-predictor] method, [e2e-predictor] url, [e2e-predictor] body, [e2e-predictor] headers, [e2e-predictor] retries, [e2e-predictor] redirect, [e2e-predictor] assert_same_host, [e2e-predictor] timeout=timeout, [e2e-predictor] pool_timeout=pool_timeout, [e2e-predictor] release_conn=release_conn, [e2e-predictor] chunked=chunked, [e2e-predictor] body_pos=body_pos, [e2e-predictor] preload_content=preload_content, [e2e-predictor] decode_content=decode_content, [e2e-predictor] **response_kw, [e2e-predictor] ) [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:871: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] method = 'POST' [e2e-predictor] url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices' [e2e-predictor] body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io..., "requests": {"cpu": "50m", "memory": "128Mi"}}, "storageUri": "gs://kfserving-examples/models/xgboost/1.5/model"}}}}' [e2e-predictor] headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'}) [e2e-predictor] retries = Retry(total=1, connect=None, read=None, redirect=None, status=None) [e2e-predictor] redirect = False, assert_same_host = False, timeout = None, pool_timeout = None [e2e-predictor] release_conn = True, chunked = False, body_pos = None, preload_content = True [e2e-predictor] decode_content = True, response_kw = {} [e2e-predictor] parsed_url = Url(scheme=None, 
auth=None, host=None, port=None, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices', query=None, fragment=None)
destination_scheme = None, conn = None, release_this_conn = True
http_tunnel_required = False
err = NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.c...a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")
clean_exit = False

    def urlopen(  # type: ignore[override]
        self,
        method: str,
        url: str,
        body: _TYPE_BODY | None = None,
        headers: typing.Mapping[str, str] | None = None,
        retries: Retry | bool | int | None = None,
        redirect: bool = True,
        assert_same_host: bool = True,
        timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
        pool_timeout: int | None = None,
        release_conn: bool | None = None,
        chunked: bool = False,
        body_pos: _TYPE_BODY_POSITION | None = None,
        preload_content: bool = True,
        decode_content: bool = True,
        **response_kw: typing.Any,
    ) -> BaseHTTPResponse:
        """
        Get a connection from the pool and perform an HTTP request. This is the
        lowest level call for making a request, so you'll need to specify all
        the raw details.

        .. note::

           More commonly, it's appropriate to use a convenience method
           such as :meth:`request`.

        .. note::

           `release_conn` will only behave as expected if
           `preload_content=False` because we want to make
           `preload_content=False` the default behaviour someday soon without
           breaking backwards compatibility.

        :param method:
            HTTP request method (such as GET, POST, PUT, etc.)

        :param url:
            The URL to perform the request on.

        :param body:
            Data to send in the request body, either :class:`str`, :class:`bytes`,
            an iterable of :class:`str`/:class:`bytes`, or a file-like object.

        :param headers:
            Dictionary of custom headers to send, such as User-Agent,
            If-None-Match, etc. If None, pool headers are used. If provided,
            these headers completely replace any pool-specific headers.

        :param retries:
            Configure the number of retries to allow before raising a
            :class:`~urllib3.exceptions.MaxRetryError` exception.

            If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a
            :class:`~urllib3.util.retry.Retry` object for fine-grained control
            over different types of retries.
            Pass an integer number to retry connection errors that many times,
            but no other types of errors. Pass zero to never retry.

            If ``False``, then retries are disabled and any exception is raised
            immediately. Also, instead of raising a MaxRetryError on redirects,
            the redirect response will be returned.

        :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.

        :param redirect:
            If True, automatically handle redirects (status codes 301, 302,
            303, 307, 308). Each redirect counts as a retry. Disabling retries
            will disable redirect, too.

        :param assert_same_host:
            If ``True``, will make sure that the host of the pool requests is
            consistent else will raise HostChangedError. When ``False``, you can
            use the pool on an HTTP proxy and request foreign hosts.

        :param timeout:
            If specified, overrides the default timeout for this one
            request. It may be a float (in seconds) or an instance of
            :class:`urllib3.util.Timeout`.

        :param pool_timeout:
            If set and the pool is set to block=True, then this method will
            block for ``pool_timeout`` seconds and raise EmptyPoolError if no
            connection is available within the time period.

        :param bool preload_content:
            If True, the response's body will be preloaded into memory.

        :param bool decode_content:
            If True, will attempt to decode the body based on the
            'content-encoding' header.

        :param release_conn:
            If False, then the urlopen call will not release the connection
            back into the pool once a response is received (but will release if
            you read the entire contents of the response such as when
            `preload_content=True`). This is useful if you're not preloading
            the response's content immediately. You will need to call
            ``r.release_conn()`` on the response ``r`` to return the connection
            back into the pool. If None, it takes the value of ``preload_content``
            which defaults to ``True``.

        :param bool chunked:
            If True, urllib3 will send the body using chunked transfer
            encoding. Otherwise, urllib3 will send the body using the standard
            content-length form. Defaults to False.

        :param int body_pos:
            Position to seek to in file-like body in the event of a retry or
            redirect. Typically this won't need to be set because urllib3 will
            auto-populate the value when needed.
        """
        [... method body identical to the frame above ...]

>           return self.urlopen(
                method,
                url,
                body,
                headers,
                retries,
                redirect,
                assert_same_host,
                timeout=timeout,
                pool_timeout=pool_timeout,
                release_conn=release_conn,
                chunked=chunked,
                body_pos=body_pos,
                preload_content=preload_content,
                decode_content=decode_content,
                **response_kw,
            )

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:871:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
method = 'POST'
url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io..., "requests": {"cpu": "50m", "memory": "128Mi"}}, "storageUri": "gs://kfserving-examples/models/xgboost/1.5/model"}}}}'
headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'})
retries = Retry(total=0, connect=None, read=None, redirect=None, status=None)
redirect = False, assert_same_host = False, timeout = None, pool_timeout = None
release_conn = True, chunked = False, body_pos = None, preload_content = True
decode_content = True, response_kw = {}
parsed_url = Url(scheme=None,
auth=None, host=None, port=None, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices', query=None, fragment=None) [e2e-predictor] destination_scheme = None, conn = None, release_this_conn = True [e2e-predictor] http_tunnel_required = False [e2e-predictor] err = NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.c...a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)") [e2e-predictor] clean_exit = False [e2e-predictor] [e2e-predictor] def urlopen( # type: ignore[override] [e2e-predictor] self, [e2e-predictor] method: str, [e2e-predictor] url: str, [e2e-predictor] body: _TYPE_BODY | None = None, [e2e-predictor] headers: typing.Mapping[str, str] | None = None, [e2e-predictor] retries: Retry | bool | int | None = None, [e2e-predictor] redirect: bool = True, [e2e-predictor] assert_same_host: bool = True, [e2e-predictor] timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, [e2e-predictor] pool_timeout: int | None = None, [e2e-predictor] release_conn: bool | None = None, [e2e-predictor] chunked: bool = False, [e2e-predictor] body_pos: _TYPE_BODY_POSITION | None = None, [e2e-predictor] preload_content: bool = True, [e2e-predictor] decode_content: bool = True, [e2e-predictor] **response_kw: typing.Any, [e2e-predictor] ) -> BaseHTTPResponse: [e2e-predictor] """ [e2e-predictor] Get a connection from the pool and perform an HTTP request. This is the [e2e-predictor] lowest level call for making a request, so you'll need to specify all [e2e-predictor] the raw details. [e2e-predictor] [e2e-predictor] .. note:: [e2e-predictor] [e2e-predictor] More commonly, it's appropriate to use a convenience method [e2e-predictor] such as :meth:`request`. [e2e-predictor] [e2e-predictor] .. 
note:: [e2e-predictor] [e2e-predictor] `release_conn` will only behave as expected if [e2e-predictor] `preload_content=False` because we want to make [e2e-predictor] `preload_content=False` the default behaviour someday soon without [e2e-predictor] breaking backwards compatibility. [e2e-predictor] [e2e-predictor] :param method: [e2e-predictor] HTTP request method (such as GET, POST, PUT, etc.) [e2e-predictor] [e2e-predictor] :param url: [e2e-predictor] The URL to perform the request on. [e2e-predictor] [e2e-predictor] :param body: [e2e-predictor] Data to send in the request body, either :class:`str`, :class:`bytes`, [e2e-predictor] an iterable of :class:`str`/:class:`bytes`, or a file-like object. [e2e-predictor] [e2e-predictor] :param headers: [e2e-predictor] Dictionary of custom headers to send, such as User-Agent, [e2e-predictor] If-None-Match, etc. If None, pool headers are used. If provided, [e2e-predictor] these headers completely replace any pool-specific headers. [e2e-predictor] [e2e-predictor] :param retries: [e2e-predictor] Configure the number of retries to allow before raising a [e2e-predictor] :class:`~urllib3.exceptions.MaxRetryError` exception. [e2e-predictor] [e2e-predictor] If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a [e2e-predictor] :class:`~urllib3.util.retry.Retry` object for fine-grained control [e2e-predictor] over different types of retries. [e2e-predictor] Pass an integer number to retry connection errors that many times, [e2e-predictor] but no other types of errors. Pass zero to never retry. [e2e-predictor] [e2e-predictor] If ``False``, then retries are disabled and any exception is raised [e2e-predictor] immediately. Also, instead of raising a MaxRetryError on redirects, [e2e-predictor] the redirect response will be returned. [e2e-predictor] [e2e-predictor] :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 
[e2e-predictor] [e2e-predictor] :param redirect: [e2e-predictor] If True, automatically handle redirects (status codes 301, 302, [e2e-predictor] 303, 307, 308). Each redirect counts as a retry. Disabling retries [e2e-predictor] will disable redirect, too. [e2e-predictor] [e2e-predictor] :param assert_same_host: [e2e-predictor] If ``True``, will make sure that the host of the pool requests is [e2e-predictor] consistent else will raise HostChangedError. When ``False``, you can [e2e-predictor] use the pool on an HTTP proxy and request foreign hosts. [e2e-predictor] [e2e-predictor] :param timeout: [e2e-predictor] If specified, overrides the default timeout for this one [e2e-predictor] request. It may be a float (in seconds) or an instance of [e2e-predictor] :class:`urllib3.util.Timeout`. [e2e-predictor] [e2e-predictor] :param pool_timeout: [e2e-predictor] If set and the pool is set to block=True, then this method will [e2e-predictor] block for ``pool_timeout`` seconds and raise EmptyPoolError if no [e2e-predictor] connection is available within the time period. [e2e-predictor] [e2e-predictor] :param bool preload_content: [e2e-predictor] If True, the response's body will be preloaded into memory. [e2e-predictor] [e2e-predictor] :param bool decode_content: [e2e-predictor] If True, will attempt to decode the body based on the [e2e-predictor] 'content-encoding' header. [e2e-predictor] [e2e-predictor] :param release_conn: [e2e-predictor] If False, then the urlopen call will not release the connection [e2e-predictor] back into the pool once a response is received (but will release if [e2e-predictor] you read the entire contents of the response such as when [e2e-predictor] `preload_content=True`). This is useful if you're not preloading [e2e-predictor] the response's content immediately. You will need to call [e2e-predictor] ``r.release_conn()`` on the response ``r`` to return the connection [e2e-predictor] back into the pool. 
If None, it takes the value of ``preload_content`` [e2e-predictor] which defaults to ``True``. [e2e-predictor] [e2e-predictor] :param bool chunked: [e2e-predictor] If True, urllib3 will send the body using chunked transfer [e2e-predictor] encoding. Otherwise, urllib3 will send the body using the standard [e2e-predictor] content-length form. Defaults to False. [e2e-predictor] [e2e-predictor] :param int body_pos: [e2e-predictor] Position to seek to in file-like body in the event of a retry or [e2e-predictor] redirect. Typically this won't need to be set because urllib3 will [e2e-predictor] auto-populate the value when needed. [e2e-predictor] """ [e2e-predictor] parsed_url = parse_url(url) [e2e-predictor] destination_scheme = parsed_url.scheme [e2e-predictor] [e2e-predictor] if headers is None: [e2e-predictor] headers = self.headers [e2e-predictor] [e2e-predictor] if not isinstance(retries, Retry): [e2e-predictor] retries = Retry.from_int(retries, redirect=redirect, default=self.retries) [e2e-predictor] [e2e-predictor] if release_conn is None: [e2e-predictor] release_conn = preload_content [e2e-predictor] [e2e-predictor] # Check host [e2e-predictor] if assert_same_host and not self.is_same_host(url): [e2e-predictor] raise HostChangedError(self, url, retries) [e2e-predictor] [e2e-predictor] # Ensure that the URL we're connecting to is properly encoded [e2e-predictor] if url.startswith("/"): [e2e-predictor] url = to_str(_encode_target(url)) [e2e-predictor] else: [e2e-predictor] url = to_str(parsed_url.url) [e2e-predictor] [e2e-predictor] conn = None [e2e-predictor] [e2e-predictor] # Track whether `conn` needs to be released before [e2e-predictor] # returning/raising/recursing. Update this variable if necessary, and [e2e-predictor] # leave `release_conn` constant throughout the function. That way, if [e2e-predictor] # the function recurses, the original value of `release_conn` will be [e2e-predictor] # passed down into the recursive call, and its value will be respected. 
[e2e-predictor] # [e2e-predictor] # See issue #651 [1] for details. [e2e-predictor] # [e2e-predictor] # [1] [e2e-predictor] release_this_conn = release_conn [e2e-predictor] [e2e-predictor] http_tunnel_required = connection_requires_http_tunnel( [e2e-predictor] self.proxy, self.proxy_config, destination_scheme [e2e-predictor] ) [e2e-predictor] [e2e-predictor] # Merge the proxy headers. Only done when not using HTTP CONNECT. We [e2e-predictor] # have to copy the headers dict so we can safely change it without those [e2e-predictor] # changes being reflected in anyone else's copy. [e2e-predictor] if not http_tunnel_required: [e2e-predictor] headers = headers.copy() # type: ignore[attr-defined] [e2e-predictor] headers.update(self.proxy_headers) # type: ignore[union-attr] [e2e-predictor] [e2e-predictor] # Must keep the exception bound to a separate variable or else Python 3 [e2e-predictor] # complains about UnboundLocalError. [e2e-predictor] err = None [e2e-predictor] [e2e-predictor] # Keep track of whether we cleanly exited the except block. This [e2e-predictor] # ensures we do proper cleanup in finally. [e2e-predictor] clean_exit = False [e2e-predictor] [e2e-predictor] # Rewind body position, if needed. Record current position [e2e-predictor] # for future rewinds in the event of a redirect/retry. [e2e-predictor] body_pos = set_file_position(body, body_pos) [e2e-predictor] [e2e-predictor] try: [e2e-predictor] # Request a connection from the queue. [e2e-predictor] timeout_obj = self._get_timeout(timeout) [e2e-predictor] conn = self._get_conn(timeout=pool_timeout) [e2e-predictor] [e2e-predictor] conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] [e2e-predictor] [e2e-predictor] # Is this a closed/new connection that requires CONNECT tunnelling? 
            if self.proxy is not None and http_tunnel_required and conn.is_closed:
                try:
                    self._prepare_proxy(conn)
                except (BaseSSLError, OSError, SocketTimeout) as e:
                    self._raise_timeout(
                        err=e, url=self.proxy.url, timeout_value=conn.timeout
                    )
                    raise

            # If we're going to release the connection in ``finally:``, then
            # the response doesn't need to know about the connection. Otherwise
            # it will also try to release it and we'll have a double-release
            # mess.
            response_conn = conn if not release_conn else None

            # Make the request on the HTTPConnection object
            response = self._make_request(
                conn,
                method,
                url,
                timeout=timeout_obj,
                body=body,
                headers=headers,
                chunked=chunked,
                retries=retries,
                response_conn=response_conn,
                preload_content=preload_content,
                decode_content=decode_content,
                **response_kw,
            )

            # Everything went great!
            clean_exit = True

        except EmptyPoolError:
            # Didn't get a connection from the pool, no need to clean up
            clean_exit = True
            release_this_conn = False
            raise

        except (
            TimeoutError,
            HTTPException,
            OSError,
            ProtocolError,
            BaseSSLError,
            SSLError,
            CertificateError,
            ProxyError,
        ) as e:
            # Discard the connection for these exceptions. It will be
            # replaced during the next _get_conn() call.
            clean_exit = False
            new_e: Exception = e
            if isinstance(e, (BaseSSLError, CertificateError)):
                new_e = SSLError(e)
            if isinstance(
                new_e,
                (
                    OSError,
                    NewConnectionError,
                    TimeoutError,
                    SSLError,
                    HTTPException,
                ),
            ) and (conn and conn.proxy and not conn.has_connected_to_proxy):
                new_e = _wrap_proxy_error(new_e, conn.proxy.scheme)
            elif isinstance(new_e, (OSError, HTTPException)):
                new_e = ProtocolError("Connection aborted.", new_e)

            retries = retries.increment(
                method, url, error=new_e, _pool=self, _stacktrace=sys.exc_info()[2]
            )
            retries.sleep()

            # Keep track of the error for the retry warning.
            err = e

        finally:
            if not clean_exit:
                # We hit some kind of exception, handled or otherwise. We need
                # to throw the connection away unless explicitly told not to.
                # Close the connection, set the variable to None, and make sure
                # we put the None back in the pool to avoid leaking it.
                if conn:
                    conn.close()
                    conn = None
                release_this_conn = True

            if release_this_conn:
                # Put the connection back to be reused. If the connection is
                # expired then it will be None, which will get replaced with a
                # fresh connection during _get_conn.
                self._put_conn(conn)

        if not conn:
            # Try again
            log.warning(
                "Retrying (%r) after connection broken by '%r': %s", retries, err, url
            )
>           return self.urlopen(
                method,
                url,
                body,
                headers,
                retries,
                redirect,
                assert_same_host,
                timeout=timeout,
                pool_timeout=pool_timeout,
                release_conn=release_conn,
                chunked=chunked,
                body_pos=body_pos,
                preload_content=preload_content,
                decode_content=decode_content,
                **response_kw,
            )

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:871:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
method = 'POST'
url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io..., "requests": {"cpu": "50m", "memory": "128Mi"}}, "storageUri": "gs://kfserving-examples/models/xgboost/1.5/model"}}}}'
headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'})
retries = Retry(total=0, connect=None, read=None, redirect=None, status=None)
redirect = False, assert_same_host = False, timeout = None, pool_timeout = None
release_conn = True, chunked = False, body_pos = None, preload_content = True
decode_content = True, response_kw = {}
parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices', query=None, fragment=None)
destination_scheme = None, conn = None, release_this_conn = True
http_tunnel_required = False, err = None, clean_exit = False

    def urlopen(  # type: ignore[override]
        self,
        method: str,
        url: str,
        body: _TYPE_BODY | None = None,
        headers: typing.Mapping[str, str] | None = None,
        retries: Retry | bool | int | None = None,
        redirect: bool = True,
        assert_same_host: bool = True,
        timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
        pool_timeout: int | None = None,
        release_conn: bool | None = None,
        chunked: bool = False,
        body_pos: _TYPE_BODY_POSITION | None = None,
        preload_content: bool = True,
        decode_content: bool = True,
        **response_kw: typing.Any,
    ) -> BaseHTTPResponse:
        """
        Get a connection from the pool and perform an HTTP request. This is the
        lowest level call for making a request, so you'll need to specify all
        the raw details.

        .. note::

            More commonly, it's appropriate to use a convenience method
            such as :meth:`request`.

        .. note::

            `release_conn` will only behave as expected if
            `preload_content=False` because we want to make
            `preload_content=False` the default behaviour someday soon without
            breaking backwards compatibility.

        :param method:
            HTTP request method (such as GET, POST, PUT, etc.)
        :param url:
            The URL to perform the request on.

        :param body:
            Data to send in the request body, either :class:`str`, :class:`bytes`,
            an iterable of :class:`str`/:class:`bytes`, or a file-like object.

        :param headers:
            Dictionary of custom headers to send, such as User-Agent,
            If-None-Match, etc. If None, pool headers are used. If provided,
            these headers completely replace any pool-specific headers.

        :param retries:
            Configure the number of retries to allow before raising a
            :class:`~urllib3.exceptions.MaxRetryError` exception.

            If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a
            :class:`~urllib3.util.retry.Retry` object for fine-grained control
            over different types of retries.
            Pass an integer number to retry connection errors that many times,
            but no other types of errors. Pass zero to never retry.

            If ``False``, then retries are disabled and any exception is raised
            immediately. Also, instead of raising a MaxRetryError on redirects,
            the redirect response will be returned.

        :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.

        :param redirect:
            If True, automatically handle redirects (status codes 301, 302,
            303, 307, 308). Each redirect counts as a retry. Disabling retries
            will disable redirect, too.

        :param assert_same_host:
            If ``True``, will make sure that the host of the pool requests is
            consistent else will raise HostChangedError. When ``False``, you can
            use the pool on an HTTP proxy and request foreign hosts.

        :param timeout:
            If specified, overrides the default timeout for this one
            request. It may be a float (in seconds) or an instance of
            :class:`urllib3.util.Timeout`.

        :param pool_timeout:
            If set and the pool is set to block=True, then this method will
            block for ``pool_timeout`` seconds and raise EmptyPoolError if no
            connection is available within the time period.

        :param bool preload_content:
            If True, the response's body will be preloaded into memory.

        :param bool decode_content:
            If True, will attempt to decode the body based on the
            'content-encoding' header.

        :param release_conn:
            If False, then the urlopen call will not release the connection
            back into the pool once a response is received (but will release if
            you read the entire contents of the response such as when
            `preload_content=True`). This is useful if you're not preloading
            the response's content immediately. You will need to call
            ``r.release_conn()`` on the response ``r`` to return the connection
            back into the pool. If None, it takes the value of ``preload_content``
            which defaults to ``True``.

        :param bool chunked:
            If True, urllib3 will send the body using chunked transfer
            encoding. Otherwise, urllib3 will send the body using the standard
            content-length form. Defaults to False.
        :param int body_pos:
            Position to seek to in file-like body in the event of a retry or
            redirect. Typically this won't need to be set because urllib3 will
            auto-populate the value when needed.
        """
        parsed_url = parse_url(url)
        destination_scheme = parsed_url.scheme

        if headers is None:
            headers = self.headers

        if not isinstance(retries, Retry):
            retries = Retry.from_int(retries, redirect=redirect, default=self.retries)

        if release_conn is None:
            release_conn = preload_content

        # Check host
        if assert_same_host and not self.is_same_host(url):
            raise HostChangedError(self, url, retries)

        # Ensure that the URL we're connecting to is properly encoded
        if url.startswith("/"):
            url = to_str(_encode_target(url))
        else:
            url = to_str(parsed_url.url)

        conn = None

        # Track whether `conn` needs to be released before
        # returning/raising/recursing. Update this variable if necessary, and
        # leave `release_conn` constant throughout the function. That way, if
        # the function recurses, the original value of `release_conn` will be
        # passed down into the recursive call, and its value will be respected.
        #
        # See issue #651 [1] for details.
        #
        # [1]
        release_this_conn = release_conn

        http_tunnel_required = connection_requires_http_tunnel(
            self.proxy, self.proxy_config, destination_scheme
        )

        # Merge the proxy headers. Only done when not using HTTP CONNECT. We
        # have to copy the headers dict so we can safely change it without those
        # changes being reflected in anyone else's copy.
        if not http_tunnel_required:
            headers = headers.copy()  # type: ignore[attr-defined]
            headers.update(self.proxy_headers)  # type: ignore[union-attr]

        # Must keep the exception bound to a separate variable or else Python 3
        # complains about UnboundLocalError.
        err = None

        # Keep track of whether we cleanly exited the except block. This
        # ensures we do proper cleanup in finally.
        clean_exit = False

        # Rewind body position, if needed. Record current position
        # for future rewinds in the event of a redirect/retry.
        body_pos = set_file_position(body, body_pos)

        try:
            # Request a connection from the queue.
            timeout_obj = self._get_timeout(timeout)
            conn = self._get_conn(timeout=pool_timeout)

            conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]

            # Is this a closed/new connection that requires CONNECT tunnelling?
            if self.proxy is not None and http_tunnel_required and conn.is_closed:
                try:
                    self._prepare_proxy(conn)
                except (BaseSSLError, OSError, SocketTimeout) as e:
                    self._raise_timeout(
                        err=e, url=self.proxy.url, timeout_value=conn.timeout
                    )
                    raise

            # If we're going to release the connection in ``finally:``, then
            # the response doesn't need to know about the connection. Otherwise
            # it will also try to release it and we'll have a double-release
            # mess.
            response_conn = conn if not release_conn else None

            # Make the request on the HTTPConnection object
            response = self._make_request(
                conn,
                method,
                url,
                timeout=timeout_obj,
                body=body,
                headers=headers,
                chunked=chunked,
                retries=retries,
                response_conn=response_conn,
                preload_content=preload_content,
                decode_content=decode_content,
                **response_kw,
            )

            # Everything went great!
            clean_exit = True

        except EmptyPoolError:
            # Didn't get a connection from the pool, no need to clean up
            clean_exit = True
            release_this_conn = False
            raise

        except (
            TimeoutError,
            HTTPException,
            OSError,
            ProtocolError,
            BaseSSLError,
            SSLError,
            CertificateError,
            ProxyError,
        ) as e:
            # Discard the connection for these exceptions. It will be
            # replaced during the next _get_conn() call.
            clean_exit = False
            new_e: Exception = e
            if isinstance(e, (BaseSSLError, CertificateError)):
                new_e = SSLError(e)
            if isinstance(
                new_e,
                (
                    OSError,
                    NewConnectionError,
                    TimeoutError,
                    SSLError,
                    HTTPException,
                ),
            ) and (conn and conn.proxy and not conn.has_connected_to_proxy):
                new_e = _wrap_proxy_error(new_e, conn.proxy.scheme)
            elif isinstance(new_e, (OSError, HTTPException)):
                new_e = ProtocolError("Connection aborted.", new_e)

>           retries = retries.increment(
                method, url, error=new_e, _pool=self, _stacktrace=sys.exc_info()[2]
            )

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:841:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = Retry(total=0, connect=None, read=None, redirect=None, status=None)
method = 'POST'
url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
response = None
error = NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.c...a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")
_pool =
_stacktrace =

    def increment(
        self,
        method: str | None = None,
        url: str | None = None,
        response: BaseHTTPResponse | None = None,
        error: Exception | None = None,
        _pool: ConnectionPool | None = None,
        _stacktrace: TracebackType | None = None,
    ) -> Self:
        """Return a new Retry object with incremented retry counters.

        :param response: A response object, or None, if the server did not
            return a response.
        :type response: :class:`~urllib3.response.BaseHTTPResponse`
        :param Exception error: An error encountered during the request, or
            None if the response was received successfully.

        :return: A new ``Retry`` object.
        """
        if self.total is False and error:
            # Disabled, indicate to re-raise the error.
            raise reraise(type(error), error, _stacktrace)

        total = self.total
        if total is not None:
            total -= 1

        connect = self.connect
        read = self.read
        redirect = self.redirect
        status_count = self.status
        other = self.other
        cause = "unknown"
        status = None
        redirect_location = None

        if error and self._is_connection_error(error):
            # Connect retry?
            if connect is False:
                raise reraise(type(error), error, _stacktrace)
            elif connect is not None:
                connect -= 1

        elif error and self._is_read_error(error):
            # Read retry?
            if read is False or method is None or not self._is_method_retryable(method):
                raise reraise(type(error), error, _stacktrace)
            elif read is not None:
                read -= 1

        elif error:
            # Other retry?
            if other is not None:
                other -= 1

        elif response and response.get_redirect_location():
            # Redirect retry?
            if redirect is not None:
                redirect -= 1
            cause = "too many redirects"
            response_redirect_location = response.get_redirect_location()
            if response_redirect_location:
                redirect_location = response_redirect_location
            status = response.status

        else:
            # Incrementing because of a server error like a 500 in
            # status_forcelist and the given method is in the allowed_methods
            cause = ResponseError.GENERIC_ERROR
            if response and response.status:
                if status_count is not None:
                    status_count -= 1
                cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
                status = response.status

        history = self.history + (
            RequestHistory(method, url, error, status, redirect_location),
        )

        new_retry = self.new(
            total=total,
            connect=connect,
            read=read,
            redirect=redirect,
            status=status_count,
            other=other,
            history=history,
        )

        if new_retry.is_exhausted():
            reason = error or ResponseError(cause)
>           raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
E           urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Max retries exceeded with url: /apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices (Caused by NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)"))

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError
------------------------------ Captured log call -------------------------------
WARNING urllib3.connectionpool:connectionpool.py:868 Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")': /apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices
WARNING urllib3.connectionpool:connectionpool.py:868 Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")': /apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices
WARNING urllib3.connectionpool:connectionpool.py:868 Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")': /apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices
___________________________ test_xgboost_v2_mlserver ___________________________
[gw0] linux -- Python 3.11.13 /workspace/source/python/kserve/.venv/bin/python

self =

    def _new_conn(self) -> socket.socket:
        """Establish a socket connection and set nodelay settings on it.

        :return: New socket connection.
        """
        try:
>           sock = connection.create_connection(
                (self._dns_host, self.port),
                self.timeout,
                source_address=self.source_address,
                socket_options=self.socket_options,
            )

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connection.py:204:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

address = ('a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', 6443)
timeout = None, source_address = None, socket_options = [(6, 1, 1)]

def create_connection(
    address: tuple[str, int],
    timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
    source_address: tuple[str, int] | None = None,
    socket_options: _TYPE_SOCKET_OPTIONS | None = None,
) -> socket.socket:
    """Connect to *address* and return the socket object.

    Convenience function. Connect to *address* (a 2-tuple ``(host,
    port)``) and return the socket object. Passing the optional
    *timeout* parameter will set the timeout on the socket instance
    before attempting to connect. If no *timeout* is supplied, the
    global default timeout setting returned by :func:`socket.getdefaulttimeout`
    is used. If *source_address* is set it must be a tuple of (host, port)
    for the socket to bind as a source address before making the connection.
    An host of '' or port 0 tells the OS to use the default.
    """

    host, port = address
    if host.startswith("["):
        host = host.strip("[]")
    err = None

    # Using the value from allowed_gai_family() in the context of getaddrinfo lets
    # us select whether to work with IPv4 DNS records, IPv6 records, or both.
    # The original create_connection function always returns all records.
    family = allowed_gai_family()

    try:
        host.encode("idna")
    except UnicodeError:
        raise LocationParseError(f"'{host}', label empty or too long") from None

>   for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/util/connection.py:60:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

host = 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com'
port = 6443, family =
type = , proto = 0, flags = 0

def getaddrinfo(host, port, family=0, type=0, proto=0, flags=0):
    """Resolve host and port into list of address info entries.

    Translate the host/port argument into a sequence of 5-tuples that contain
    all the necessary arguments for creating a socket connected to that service.
    host is a domain name, a string representation of an IPv4/v6 address or
    None. port is a string service name such as 'http', a numeric port number or
    None. By passing None as the value of host and port, you can pass NULL to
    the underlying C API.

    The family, type and proto arguments can be optionally specified in order to
    narrow the list of addresses returned. Passing zero as a value for each of
    these arguments selects the full range of results.
    """
    # We override this function since we want to translate the numeric family
    # and socket type values to enum constants.
    addrlist = []
>   for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
E   socket.gaierror: [Errno -2] Name or service not known

/usr/lib64/python3.11/socket.py:974: gaierror

The above exception was the direct cause of the following exception:

self =
method = 'POST'
url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io...4Mi"}, "requests": {"cpu": "50m", "memory": "128Mi"}}, "storageUri": "gs://kfserving-examples/models/xgboost/iris"}}}}'
headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'})
retries = Retry(total=0, connect=None, read=None, redirect=None, status=None)
redirect = False, assert_same_host = False, timeout = None, pool_timeout = None
release_conn = True, chunked = False, body_pos = None, preload_content = True
decode_content = True, response_kw = {}
parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices', query=None, fragment=None)
destination_scheme = None, conn = None, release_this_conn = True
http_tunnel_required = False, err = None, clean_exit = False

    def urlopen(  # type: ignore[override]
        self,
        method: str,
        url: str,
        body: _TYPE_BODY | None = None,
        headers: typing.Mapping[str, str] | None = None,
        retries: Retry | bool | int | None = None,
        redirect: bool = True,
        assert_same_host: bool = True,
        timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
        pool_timeout: int | None = None,
        release_conn: bool | None = None,
        chunked: bool = False,
        body_pos: _TYPE_BODY_POSITION | None = None,
        preload_content: bool = True,
        decode_content: bool = True,
        **response_kw: typing.Any,
    ) -> BaseHTTPResponse:
        """
        Get a connection from the pool and perform an HTTP request. This is the
        lowest level call for making a request, so you'll need to specify all
        the raw details.

        .. note::

            More commonly, it's appropriate to use a convenience method
            such as :meth:`request`.

        .. note::

            `release_conn` will only behave as expected if
            `preload_content=False` because we want to make
            `preload_content=False` the default behaviour someday soon without
            breaking backwards compatibility.

        :param method:
            HTTP request method (such as GET, POST, PUT, etc.)

        :param url:
            The URL to perform the request on.

        :param body:
            Data to send in the request body, either :class:`str`, :class:`bytes`,
            an iterable of :class:`str`/:class:`bytes`, or a file-like object.

        :param headers:
            Dictionary of custom headers to send, such as User-Agent,
            If-None-Match, etc. If None, pool headers are used. If provided,
            these headers completely replace any pool-specific headers.

        :param retries:
            Configure the number of retries to allow before raising a
            :class:`~urllib3.exceptions.MaxRetryError` exception.

            If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a
            :class:`~urllib3.util.retry.Retry` object for fine-grained control
            over different types of retries.
            Pass an integer number to retry connection errors that many times,
            but no other types of errors. Pass zero to never retry.

            If ``False``, then retries are disabled and any exception is raised
            immediately. Also, instead of raising a MaxRetryError on redirects,
            the redirect response will be returned.

        :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.

        :param redirect:
            If True, automatically handle redirects (status codes 301, 302,
            303, 307, 308). Each redirect counts as a retry. Disabling retries
            will disable redirect, too.
        :param assert_same_host:
            If ``True``, will make sure that the host of the pool requests is
            consistent else will raise HostChangedError. When ``False``, you can
            use the pool on an HTTP proxy and request foreign hosts.

        :param timeout:
            If specified, overrides the default timeout for this one
            request. It may be a float (in seconds) or an instance of
            :class:`urllib3.util.Timeout`.

        :param pool_timeout:
            If set and the pool is set to block=True, then this method will
            block for ``pool_timeout`` seconds and raise EmptyPoolError if no
            connection is available within the time period.

        :param bool preload_content:
            If True, the response's body will be preloaded into memory.

        :param bool decode_content:
            If True, will attempt to decode the body based on the
            'content-encoding' header.

        :param release_conn:
            If False, then the urlopen call will not release the connection
            back into the pool once a response is received (but will release if
            you read the entire contents of the response such as when
            `preload_content=True`). This is useful if you're not preloading
            the response's content immediately. You will need to call
            ``r.release_conn()`` on the response ``r`` to return the connection
            back into the pool. If None, it takes the value of ``preload_content``
            which defaults to ``True``.

        :param bool chunked:
            If True, urllib3 will send the body using chunked transfer
            encoding. Otherwise, urllib3 will send the body using the standard
            content-length form. Defaults to False.

        :param int body_pos:
            Position to seek to in file-like body in the event of a retry or
            redirect. Typically this won't need to be set because urllib3 will
            auto-populate the value when needed.
        """
        parsed_url = parse_url(url)
        destination_scheme = parsed_url.scheme

        if headers is None:
            headers = self.headers

        if not isinstance(retries, Retry):
            retries = Retry.from_int(retries, redirect=redirect, default=self.retries)

        if release_conn is None:
            release_conn = preload_content

        # Check host
        if assert_same_host and not self.is_same_host(url):
            raise HostChangedError(self, url, retries)

        # Ensure that the URL we're connecting to is properly encoded
        if url.startswith("/"):
            url = to_str(_encode_target(url))
        else:
            url = to_str(parsed_url.url)

        conn = None

        # Track whether `conn` needs to be released before
        # returning/raising/recursing. Update this variable if necessary, and
        # leave `release_conn` constant throughout the function. That way, if
        # the function recurses, the original value of `release_conn` will be
        # passed down into the recursive call, and its value will be respected.
        #
        # See issue #651 [1] for details.
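The integer form of ``retries`` described in the docstring above (retry connection errors that many times, but no other types of errors; zero means never retry) can be sketched as a generic loop. This is an illustration of the documented semantics with a hypothetical helper name, not urllib3's actual implementation:

```python
def call_with_retries(fn, retries: int):
    """Call fn, retrying on ConnectionError up to `retries` extra attempts.

    Connection errors are retried; any other exception propagates
    immediately; retries=0 means a single attempt with no retry.
    """
    for attempt in range(retries + 1):
        try:
            return fn()
        except ConnectionError:
            # Out of attempts: re-raise the last connection error.
            if attempt == retries:
                raise
```

Passing a `Retry` object instead of an integer gives per-category budgets (connect, read, redirect, status), which is what the `Retry(total=0, ...)` seen later in the locals dump configures.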
        #
        # [1]
        release_this_conn = release_conn

        http_tunnel_required = connection_requires_http_tunnel(
            self.proxy, self.proxy_config, destination_scheme
        )

        # Merge the proxy headers. Only done when not using HTTP CONNECT. We
        # have to copy the headers dict so we can safely change it without those
        # changes being reflected in anyone else's copy.
        if not http_tunnel_required:
            headers = headers.copy()  # type: ignore[attr-defined]
            headers.update(self.proxy_headers)  # type: ignore[union-attr]

        # Must keep the exception bound to a separate variable or else Python 3
        # complains about UnboundLocalError.
        err = None

        # Keep track of whether we cleanly exited the except block. This
        # ensures we do proper cleanup in finally.
        clean_exit = False

        # Rewind body position, if needed. Record current position
        # for future rewinds in the event of a redirect/retry.
        body_pos = set_file_position(body, body_pos)

        try:
            # Request a connection from the queue.
            timeout_obj = self._get_timeout(timeout)
            conn = self._get_conn(timeout=pool_timeout)

            conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]

            # Is this a closed/new connection that requires CONNECT tunnelling?
            if self.proxy is not None and http_tunnel_required and conn.is_closed:
                try:
                    self._prepare_proxy(conn)
                except (BaseSSLError, OSError, SocketTimeout) as e:
                    self._raise_timeout(
                        err=e, url=self.proxy.url, timeout_value=conn.timeout
                    )
                    raise

            # If we're going to release the connection in ``finally:``, then
            # the response doesn't need to know about the connection. Otherwise
            # it will also try to release it and we'll have a double-release
            # mess.
            response_conn = conn if not release_conn else None

            # Make the request on the HTTPConnection object
>           response = self._make_request(
                conn,
                method,
                url,
                timeout=timeout_obj,
                body=body,
                headers=headers,
                chunked=chunked,
                retries=retries,
                response_conn=response_conn,
                preload_content=preload_content,
                decode_content=decode_content,
                **response_kw,
            )

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:787:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
conn =
method = 'POST'
url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io...4Mi"}, "requests": {"cpu": "50m", "memory": "128Mi"}}, "storageUri": "gs://kfserving-examples/models/xgboost/iris"}}}}'
headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'})
retries = Retry(total=0, connect=None, read=None, redirect=None, status=None)
timeout = Timeout(connect=None, read=None, total=None), chunked = False
response_conn = None, preload_content = True, decode_content = True
enforce_content_length = True

    def _make_request(
        self,
        conn: BaseHTTPConnection,
        method: str,
        url: str,
        body: _TYPE_BODY | None = None,
        headers: typing.Mapping[str, str] | None = None,
        retries: Retry | None = None,
        timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
        chunked: bool = False,
        response_conn: BaseHTTPConnection | None = None,
        preload_content: bool = True,
        decode_content: bool = True,
        enforce_content_length: bool = True,
    ) -> BaseHTTPResponse:
        """
        Perform a request on a given urllib connection object taken from our
        pool.

        :param conn:
            a connection from one of our connection pools

        :param method:
            HTTP request method (such as GET, POST, PUT, etc.)

        :param url:
            The URL to perform the request on.

        :param body:
            Data to send in the request body, either :class:`str`, :class:`bytes`,
            an iterable of :class:`str`/:class:`bytes`, or a file-like object.

        :param headers:
            Dictionary of custom headers to send, such as User-Agent,
            If-None-Match, etc. If None, pool headers are used. If provided,
            these headers completely replace any pool-specific headers.

        :param retries:
            Configure the number of retries to allow before raising a
            :class:`~urllib3.exceptions.MaxRetryError` exception.

            Pass ``None`` to retry until you receive a response. Pass a
            :class:`~urllib3.util.retry.Retry` object for fine-grained control
            over different types of retries.
            Pass an integer number to retry connection errors that many times,
            but no other types of errors. Pass zero to never retry.

            If ``False``, then retries are disabled and any exception is raised
            immediately. Also, instead of raising a MaxRetryError on redirects,
            the redirect response will be returned.

        :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.

        :param timeout:
            If specified, overrides the default timeout for this one
            request. It may be a float (in seconds) or an instance of
            :class:`urllib3.util.Timeout`.

        :param chunked:
            If True, urllib3 will send the body using chunked transfer
            encoding. Otherwise, urllib3 will send the body using the standard
            content-length form. Defaults to False.

        :param response_conn:
            Set this to ``None`` if you will handle releasing the connection or
            set the connection to have the response release it.

        :param preload_content:
            If True, the response's body will be preloaded during construction.
        :param decode_content:
            If True, will attempt to decode the body based on the
            'content-encoding' header.

        :param enforce_content_length:
            Enforce content length checking. Body returned by server must match
            value of Content-Length header, if present. Otherwise, raise error.
        """
        self.num_requests += 1

        timeout_obj = self._get_timeout(timeout)
        timeout_obj.start_connect()
        conn.timeout = Timeout.resolve_default_timeout(timeout_obj.connect_timeout)

        try:
            # Trigger any extra validation we need to do.
            try:
                self._validate_conn(conn)
            except (SocketTimeout, BaseSSLError) as e:
                self._raise_timeout(err=e, url=url, timeout_value=conn.timeout)
                raise

        # _validate_conn() starts the connection to an HTTPS proxy
        # so we need to wrap errors with 'ProxyError' here too.
        except (
            OSError,
            NewConnectionError,
            TimeoutError,
            BaseSSLError,
            CertificateError,
            SSLError,
        ) as e:
            new_e: Exception = e
            if isinstance(e, (BaseSSLError, CertificateError)):
                new_e = SSLError(e)
            # If the connection didn't successfully connect to it's proxy
            # then there
            if isinstance(
                new_e, (OSError, NewConnectionError, TimeoutError, SSLError)
            ) and (conn and conn.proxy and not conn.has_connected_to_proxy):
                new_e = _wrap_proxy_error(new_e, conn.proxy.scheme)
>           raise new_e

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:488:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
conn =
method = 'POST'
url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io...4Mi"}, "requests": {"cpu": "50m", "memory": "128Mi"}}, "storageUri": "gs://kfserving-examples/models/xgboost/iris"}}}}'
headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'})
retries = Retry(total=0, connect=None, read=None, redirect=None, status=None)
timeout = Timeout(connect=None, read=None, total=None), chunked = False
response_conn = None, preload_content = True, decode_content = True
enforce_content_length = True

    def _make_request(
        self,
        conn: BaseHTTPConnection,
        method: str,
        url: str,
        body: _TYPE_BODY | None = None,
        headers: typing.Mapping[str, str] | None = None,
        retries: Retry | None = None,
        timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
        chunked: bool = False,
        response_conn: BaseHTTPConnection | None = None,
        preload_content: bool = True,
        decode_content: bool = True,
        enforce_content_length: bool = True,
    ) -> BaseHTTPResponse:
        """
        Perform a request on a given urllib connection object taken from our
        pool.

        :param conn:
            a connection from one of our connection pools

        :param method:
            HTTP request method (such as GET, POST, PUT, etc.)

        :param url:
            The URL to perform the request on.

        :param body:
            Data to send in the request body, either :class:`str`, :class:`bytes`,
            an iterable of :class:`str`/:class:`bytes`, or a file-like object.

        :param headers:
            Dictionary of custom headers to send, such as User-Agent,
            If-None-Match, etc. If None, pool headers are used. If provided,
            these headers completely replace any pool-specific headers.

        :param retries:
            Configure the number of retries to allow before raising a
            :class:`~urllib3.exceptions.MaxRetryError` exception.

            Pass ``None`` to retry until you receive a response. Pass a
            :class:`~urllib3.util.retry.Retry` object for fine-grained control
            over different types of retries.
            Pass an integer number to retry connection errors that many times,
            but no other types of errors. Pass zero to never retry.

            If ``False``, then retries are disabled and any exception is raised
            immediately. Also, instead of raising a MaxRetryError on redirects,
            the redirect response will be returned.

        :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.

        :param timeout:
            If specified, overrides the default timeout for this one
            request. It may be a float (in seconds) or an instance of
            :class:`urllib3.util.Timeout`.

        :param chunked:
            If True, urllib3 will send the body using chunked transfer
            encoding. Otherwise, urllib3 will send the body using the standard
            content-length form. Defaults to False.

        :param response_conn:
            Set this to ``None`` if you will handle releasing the connection or
            set the connection to have the response release it.

        :param preload_content:
            If True, the response's body will be preloaded during construction.

        :param decode_content:
            If True, will attempt to decode the body based on the
            'content-encoding' header.

        :param enforce_content_length:
            Enforce content length checking. Body returned by server must match
            value of Content-Length header, if present. Otherwise, raise error.
[e2e-predictor] """ [e2e-predictor] self.num_requests += 1 [e2e-predictor] [e2e-predictor] timeout_obj = self._get_timeout(timeout) [e2e-predictor] timeout_obj.start_connect() [e2e-predictor] conn.timeout = Timeout.resolve_default_timeout(timeout_obj.connect_timeout) [e2e-predictor] [e2e-predictor] try: [e2e-predictor] # Trigger any extra validation we need to do. [e2e-predictor] try: [e2e-predictor] > self._validate_conn(conn) [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:464: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] conn = [e2e-predictor] [e2e-predictor] def _validate_conn(self, conn: BaseHTTPConnection) -> None: [e2e-predictor] """ [e2e-predictor] Called right before a request is made, after the socket is created. [e2e-predictor] """ [e2e-predictor] super()._validate_conn(conn) [e2e-predictor] [e2e-predictor] # Force connect early to allow us to validate the connection. [e2e-predictor] if conn.is_closed: [e2e-predictor] > conn.connect() [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:1093: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] [e2e-predictor] def connect(self) -> None: [e2e-predictor] # Today we don't need to be doing this step before the /actual/ socket [e2e-predictor] # connection, however in the future we'll need to decide whether to [e2e-predictor] # create a new socket or re-use an existing "shared" socket as a part [e2e-predictor] # of the HTTP/2 handshake dance. 
        if self._tunnel_host is not None and self._tunnel_port is not None:
            probe_http2_host = self._tunnel_host
            probe_http2_port = self._tunnel_port
        else:
            probe_http2_host = self.host
            probe_http2_port = self.port

        # Check if the target origin supports HTTP/2.
        # If the value comes back as 'None' it means that the current thread
        # is probing for HTTP/2 support. Otherwise, we're waiting for another
        # probe to complete, or we get a value right away.
        target_supports_http2: bool | None
        if "h2" in ssl_.ALPN_PROTOCOLS:
            target_supports_http2 = http2_probe.acquire_and_get(
                host=probe_http2_host, port=probe_http2_port
            )
        else:
            # If HTTP/2 isn't going to be offered it doesn't matter if
            # the target supports HTTP/2. Don't want to make a probe.
            target_supports_http2 = False

        if self._connect_callback is not None:
            self._connect_callback(
                "before connect",
                thread_id=threading.get_ident(),
                target_supports_http2=target_supports_http2,
            )

        try:
            sock: socket.socket | ssl.SSLSocket
>           self.sock = sock = self._new_conn()

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connection.py:759:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =

    def _new_conn(self) -> socket.socket:
        """Establish a socket connection and set nodelay settings on it.

        :return: New socket connection.
[e2e-predictor] """ [e2e-predictor] try: [e2e-predictor] sock = connection.create_connection( [e2e-predictor] (self._dns_host, self.port), [e2e-predictor] self.timeout, [e2e-predictor] source_address=self.source_address, [e2e-predictor] socket_options=self.socket_options, [e2e-predictor] ) [e2e-predictor] except socket.gaierror as e: [e2e-predictor] > raise NameResolutionError(self.host, self, e) from e [e2e-predictor] E urllib3.exceptions.NameResolutionError: HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known) [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connection.py:211: NameResolutionError [e2e-predictor] [e2e-predictor] The above exception was the direct cause of the following exception: [e2e-predictor] [e2e-predictor] rest_v2_client = [e2e-predictor] [e2e-predictor] @pytest.mark.predictor [e2e-predictor] @pytest.mark.path_based_routing [e2e-predictor] @pytest.mark.asyncio(scope="session") [e2e-predictor] async def test_xgboost_v2_mlserver(rest_v2_client): [e2e-predictor] service_name = "isvc-xgboost-v2-mlserver" [e2e-predictor] protocol_version = "v2" [e2e-predictor] [e2e-predictor] predictor = V1beta1PredictorSpec( [e2e-predictor] min_replicas=1, [e2e-predictor] xgboost=V1beta1XGBoostSpec( [e2e-predictor] storage_uri="gs://kfserving-examples/models/xgboost/iris", [e2e-predictor] env=[V1EnvVar(name="MLSERVER_MODEL_PARALLEL_WORKERS", value="0")], [e2e-predictor] protocol_version=protocol_version, [e2e-predictor] resources=V1ResourceRequirements( [e2e-predictor] requests={"cpu": "50m", "memory": "128Mi"}, [e2e-predictor] limits={"cpu": "100m", "memory": "1024Mi"}, [e2e-predictor] ), [e2e-predictor] readiness_probe=client.V1Probe( [e2e-predictor] http_get=client.V1HTTPGetAction( [e2e-predictor] 
path=f"/v2/models/{service_name}/ready", port=8080 [e2e-predictor] ), [e2e-predictor] initial_delay_seconds=30, [e2e-predictor] ), [e2e-predictor] ), [e2e-predictor] ) [e2e-predictor] [e2e-predictor] isvc = V1beta1InferenceService( [e2e-predictor] api_version=constants.KSERVE_V1BETA1, [e2e-predictor] kind=constants.KSERVE_KIND_INFERENCESERVICE, [e2e-predictor] metadata=client.V1ObjectMeta( [e2e-predictor] name=service_name, [e2e-predictor] namespace=KSERVE_TEST_NAMESPACE, [e2e-predictor] labels={ [e2e-predictor] constants.KSERVE_LABEL_NETWORKING_VISIBILITY: constants.KSERVE_LABEL_NETWORKING_VISIBILITY_EXPOSED, [e2e-predictor] }, [e2e-predictor] ), [e2e-predictor] spec=V1beta1InferenceServiceSpec(predictor=predictor), [e2e-predictor] ) [e2e-predictor] [e2e-predictor] kserve_client = KServeClient( [e2e-predictor] config_file=os.environ.get("KUBECONFIG", "~/.kube/config") [e2e-predictor] ) [e2e-predictor] > kserve_client.create(isvc) [e2e-predictor] [e2e-predictor] predictor/test_xgboost.py:117: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] inferenceservice = {'api_version': 'serving.kserve.io/v1beta1', [e2e-predictor] 'kind': 'InferenceService', [e2e-predictor] 'metadata': {'annotations': None, [e2e-predictor] ...nts': None, [e2e-predictor] 'working_dir': None}}, [e2e-predictor] 'transformer': None}, [e2e-predictor] 'status': None} [e2e-predictor] namespace = 'kserve-ci-e2e-test', watch = False, timeout_seconds = 600 [e2e-predictor] [e2e-predictor] def create( [e2e-predictor] self, inferenceservice, namespace=None, watch=False, timeout_seconds=600 [e2e-predictor] ): # pylint:disable=inconsistent-return-statements [e2e-predictor] """ [e2e-predictor] Create the inference service [e2e-predictor] :param inferenceservice: inference service object [e2e-predictor] :param namespace: defaults to current or default namespace [e2e-predictor] :param watch: True to watch the 
created service until timeout elapsed or status is ready [e2e-predictor] :param timeout_seconds: timeout seconds for watch, default to 600s [e2e-predictor] :return: created inference service [e2e-predictor] """ [e2e-predictor] [e2e-predictor] version = inferenceservice.api_version.split("/")[1] [e2e-predictor] [e2e-predictor] if namespace is None: [e2e-predictor] namespace = utils.get_isvc_namespace(inferenceservice) [e2e-predictor] [e2e-predictor] try: [e2e-predictor] > outputs = self.api_instance.create_namespaced_custom_object( [e2e-predictor] constants.KSERVE_GROUP, [e2e-predictor] version, [e2e-predictor] namespace, [e2e-predictor] constants.KSERVE_PLURAL_INFERENCESERVICE, [e2e-predictor] inferenceservice, [e2e-predictor] ) [e2e-predictor] [e2e-predictor] ../../python/kserve/kserve/api/kserve_client.py:145: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] group = 'serving.kserve.io', version = 'v1beta1' [e2e-predictor] namespace = 'kserve-ci-e2e-test', plural = 'inferenceservices' [e2e-predictor] body = {'api_version': 'serving.kserve.io/v1beta1', [e2e-predictor] 'kind': 'InferenceService', [e2e-predictor] 'metadata': {'annotations': None, [e2e-predictor] ...nts': None, [e2e-predictor] 'working_dir': None}}, [e2e-predictor] 'transformer': None}, [e2e-predictor] 'status': None} [e2e-predictor] kwargs = {'_return_http_data_only': True} [e2e-predictor] [e2e-predictor] def create_namespaced_custom_object(self, group, version, namespace, plural, body, **kwargs): # noqa: E501 [e2e-predictor] """create_namespaced_custom_object # noqa: E501 [e2e-predictor] [e2e-predictor] Creates a namespace scoped Custom object # noqa: E501 [e2e-predictor] This method makes a synchronous HTTP request by default. 
To make an [e2e-predictor] asynchronous HTTP request, please pass async_req=True [e2e-predictor] >>> thread = api.create_namespaced_custom_object(group, version, namespace, plural, body, async_req=True) [e2e-predictor] >>> result = thread.get() [e2e-predictor] [e2e-predictor] :param async_req bool: execute request asynchronously [e2e-predictor] :param str group: The custom resource's group name (required) [e2e-predictor] :param str version: The custom resource's version (required) [e2e-predictor] :param str namespace: The custom resource's namespace (required) [e2e-predictor] :param str plural: The custom resource's plural name. For TPRs this would be lowercase plural kind. (required) [e2e-predictor] :param object body: The JSON schema of the Resource to create. (required) [e2e-predictor] :param str pretty: If 'true', then the output is pretty printed. [e2e-predictor] :param str dry_run: When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed [e2e-predictor] :param str field_manager: fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint. [e2e-predictor] :param str field_validation: fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. 
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. (optional) [e2e-predictor] :param _preload_content: if False, the urllib3.HTTPResponse object will [e2e-predictor] be returned without reading/decoding response [e2e-predictor] data. Default is True. [e2e-predictor] :param _request_timeout: timeout setting for this request. If one [e2e-predictor] number provided, it will be total request [e2e-predictor] timeout. It can also be a pair (tuple) of [e2e-predictor] (connection, read) timeouts. [e2e-predictor] :return: object [e2e-predictor] If the method is called asynchronously, [e2e-predictor] returns the request thread. 
[e2e-predictor] """ [e2e-predictor] kwargs['_return_http_data_only'] = True [e2e-predictor] > return self.create_namespaced_custom_object_with_http_info(group, version, namespace, plural, body, **kwargs) # noqa: E501 [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api/custom_objects_api.py:231: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] group = 'serving.kserve.io', version = 'v1beta1' [e2e-predictor] namespace = 'kserve-ci-e2e-test', plural = 'inferenceservices' [e2e-predictor] body = {'api_version': 'serving.kserve.io/v1beta1', [e2e-predictor] 'kind': 'InferenceService', [e2e-predictor] 'metadata': {'annotations': None, [e2e-predictor] ...nts': None, [e2e-predictor] 'working_dir': None}}, [e2e-predictor] 'transformer': None}, [e2e-predictor] 'status': None} [e2e-predictor] kwargs = {'_return_http_data_only': True} [e2e-predictor] local_var_params = {'_return_http_data_only': True, 'all_params': ['group', 'version', 'namespace', 'plural', 'body', 'pretty', ...], 'au...None, [e2e-predictor] 'working_dir': None}}, [e2e-predictor] 'transformer': None}, [e2e-predictor] 'status': None}, ...} [e2e-predictor] all_params = ['group', 'version', 'namespace', 'plural', 'body', 'pretty', ...] 
key = '_return_http_data_only', val = True, collection_formats = {}
path_params = {'group': 'serving.kserve.io', 'namespace': 'kserve-ci-e2e-test', 'plural': 'inferenceservices', 'version': 'v1beta1'}
query_params = []

    def create_namespaced_custom_object_with_http_info(self, group, version, namespace, plural, body, **kwargs):  # noqa: E501
        """create_namespaced_custom_object  # noqa: E501

        Creates a namespace scoped Custom object  # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.create_namespaced_custom_object_with_http_info(group, version, namespace, plural, body, async_req=True)
        >>> result = thread.get()

        :param async_req bool: execute request asynchronously
        :param str group: The custom resource's group name (required)
        :param str version: The custom resource's version (required)
        :param str namespace: The custom resource's namespace (required)
        :param str plural: The custom resource's plural name. For TPRs this would be lowercase plural kind. (required)
        :param object body: The JSON schema of the Resource to create. (required)
        :param str pretty: If 'true', then the output is pretty printed.
        :param str dry_run: When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed
        :param str field_manager: fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint.
        :param str field_validation: fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. (optional)
        :param _return_http_data_only: response data without head status code
                                       and headers
        :param _preload_content: if False, the urllib3.HTTPResponse object will
                                 be returned without reading/decoding response
                                 data. Default is True.
        :param _request_timeout: timeout setting for this request. If one
                                 number provided, it will be total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
        :return: tuple(object, status_code(int), headers(HTTPHeaderDict))
                 If the method is called asynchronously,
                 returns the request thread.
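The required `group`/`version`/`namespace`/`plural` parameters above resolve to the collection endpoint template `/apis/{group}/{version}/namespaces/{namespace}/{plural}`. A minimal stdlib sketch of that substitution, using the values visible in this test run and percent-encoding each value as the client's path handling does:

```python
from urllib.parse import quote

def crd_collection_path(group, version, namespace, plural):
    """Fill the custom-resource collection path template, encoding each value."""
    template = "/apis/{group}/{version}/namespaces/{namespace}/{plural}"
    for name, value in [("group", group), ("version", version),
                        ("namespace", namespace), ("plural", plural)]:
        template = template.replace("{%s}" % name, quote(str(value)))
    return template

# values from this traceback's locals
path = crd_collection_path("serving.kserve.io", "v1beta1",
                           "kserve-ci-e2e-test", "inferenceservices")
```

The resolved string matches the `resource_path` that appears further down the traceback once the template has been filled in.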
        """

        local_var_params = locals()

        all_params = [
            'group',
            'version',
            'namespace',
            'plural',
            'body',
            'pretty',
            'dry_run',
            'field_manager',
            'field_validation'
        ]
        all_params.extend(
            [
                'async_req',
                '_return_http_data_only',
                '_preload_content',
                '_request_timeout'
            ]
        )

        for key, val in six.iteritems(local_var_params['kwargs']):
            if key not in all_params:
                raise ApiTypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method create_namespaced_custom_object" % key
                )
            local_var_params[key] = val
        del local_var_params['kwargs']
        # verify the required parameter 'group' is set
        if self.api_client.client_side_validation and ('group' not in local_var_params or  # noqa: E501
                                                       local_var_params['group'] is None):  # noqa: E501
            raise ApiValueError("Missing the required parameter `group` when calling `create_namespaced_custom_object`")  # noqa: E501
        # verify the required parameter 'version' is set
        if self.api_client.client_side_validation and ('version' not in local_var_params or  # noqa: E501
                                                       local_var_params['version'] is None):  # noqa: E501
            raise ApiValueError("Missing the required parameter `version` when calling `create_namespaced_custom_object`")  # noqa: E501
        # verify the required parameter 'namespace' is set
        if self.api_client.client_side_validation and ('namespace' not in local_var_params or  # noqa: E501
                                                       local_var_params['namespace'] is None):  # noqa: E501
            raise ApiValueError("Missing the required parameter `namespace` when calling `create_namespaced_custom_object`")  # noqa: E501
        # verify the required parameter 'plural' is set
        if self.api_client.client_side_validation and ('plural' not in local_var_params or  # noqa: E501
                                                       local_var_params['plural'] is None):  # noqa: E501
            raise ApiValueError("Missing the required parameter `plural` when calling `create_namespaced_custom_object`")  # noqa: E501
        # verify the required parameter 'body' is set
        if self.api_client.client_side_validation and ('body' not in local_var_params or  # noqa: E501
                                                       local_var_params['body'] is None):  # noqa: E501
            raise ApiValueError("Missing the required parameter `body` when calling `create_namespaced_custom_object`")  # noqa: E501

        collection_formats = {}

        path_params = {}
        if 'group' in local_var_params:
            path_params['group'] = local_var_params['group']  # noqa: E501
        if 'version' in local_var_params:
            path_params['version'] = local_var_params['version']  # noqa: E501
        if 'namespace' in local_var_params:
            path_params['namespace'] = local_var_params['namespace']  # noqa: E501
        if 'plural' in local_var_params:
            path_params['plural'] = local_var_params['plural']  # noqa: E501

        query_params = []
        if 'pretty' in local_var_params and local_var_params['pretty'] is not None:  # noqa: E501
            query_params.append(('pretty', local_var_params['pretty']))  # noqa: E501
        if 'dry_run' in local_var_params and local_var_params['dry_run'] is not None:  # noqa: E501
            query_params.append(('dryRun', local_var_params['dry_run']))  # noqa: E501
        if 'field_manager' in local_var_params and local_var_params['field_manager'] is not None:  # noqa: E501
            query_params.append(('fieldManager', local_var_params['field_manager']))  # noqa: E501
        if 'field_validation' in local_var_params and local_var_params['field_validation'] is not None:  # noqa: E501
            query_params.append(('fieldValidation', local_var_params['field_validation']))  # noqa: E501

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None
        if 'body' in local_var_params:
            body_params = local_var_params['body']
        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.select_header_accept(
            ['application/json'])  # noqa: E501

        # Authentication setting
        auth_settings = ['BearerToken']  # noqa: E501

>       return self.api_client.call_api(
            '/apis/{group}/{version}/namespaces/{namespace}/{plural}', 'POST',
            path_params,
            query_params,
            header_params,
            body=body_params,
            post_params=form_params,
            files=local_var_files,
            response_type='object',  # noqa: E501
            auth_settings=auth_settings,
            async_req=local_var_params.get('async_req'),
            _return_http_data_only=local_var_params.get('_return_http_data_only'),  # noqa: E501
            _preload_content=local_var_params.get('_preload_content', True),
            _request_timeout=local_var_params.get('_request_timeout'),
            collection_formats=collection_formats)
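The body above collects the snake_case keyword arguments into `(camelCase, value)` tuples, which are later appended to the URL with `urlencode`. A stdlib sketch of that assembly (the `dryRun`/`fieldValidation` values passed here are illustrative; in this failing run `query_params` was empty):

```python
from urllib.parse import urlencode

def build_query(pretty=None, dry_run=None, field_manager=None,
                field_validation=None):
    """Keep only the parameters actually supplied, renamed to camelCase."""
    mapping = [("pretty", pretty), ("dryRun", dry_run),
               ("fieldManager", field_manager),
               ("fieldValidation", field_validation)]
    return [(name, value) for name, value in mapping if value is not None]

params = build_query(dry_run="All", field_validation="Strict")
query = urlencode(params)
```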
../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api/custom_objects_api.py:354:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
resource_path = '/apis/{group}/{version}/namespaces/{namespace}/{plural}'
method = 'POST'
path_params = {'group': 'serving.kserve.io', 'namespace': 'kserve-ci-e2e-test', 'plural': 'inferenceservices', 'version': 'v1beta1'}
query_params = []
header_params = {'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
body = {'api_version': 'serving.kserve.io/v1beta1',
 'kind': 'InferenceService',
 'metadata': {'annotations': None,
 ...nts': None,
              'working_dir': None}},
 'transformer': None},
 'status': None}
post_params = [], files = {}, response_type = 'object'
auth_settings = ['BearerToken'], async_req = None, _return_http_data_only = True
collection_formats = {}, _preload_content = True, _request_timeout = None
_host = None

    def call_api(self, resource_path, method,
                 path_params=None, query_params=None, header_params=None,
                 body=None, post_params=None, files=None,
                 response_type=None, auth_settings=None, async_req=None,
                 _return_http_data_only=None, collection_formats=None,
                 _preload_content=True, _request_timeout=None, _host=None):
        """Makes the HTTP request (synchronous) and returns deserialized data.

        To make an async_req request, set the async_req parameter.

        :param resource_path: Path to method endpoint.
        :param method: Method to call.
        :param path_params: Path parameters in the url.
        :param query_params: Query parameters in the url.
        :param header_params: Header parameters to be
            placed in the request header.
        :param body: Request body.
        :param post_params dict: Request post form parameters,
            for `application/x-www-form-urlencoded`, `multipart/form-data`.
        :param auth_settings list: Auth Settings names for the request.
        :param response: Response data type.
        :param files dict: key -> filename, value -> filepath,
            for `multipart/form-data`.
        :param async_req bool: execute request asynchronously
        :param _return_http_data_only: response data without head status code
                                       and headers
        :param collection_formats: dict of collection formats for path, query,
            header, and post parameters.
        :param _preload_content: if False, the urllib3.HTTPResponse object will
                                 be returned without reading/decoding response
                                 data. Default is True.
        :param _request_timeout: timeout setting for this request. If one
                                 number provided, it will be total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
        :return:
            If async_req parameter is True,
            the request will be called asynchronously.
            The method will return the request thread.
            If parameter async_req is False or missing,
            then the method will return the response directly.
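The sync/async contract described in this docstring (return the response directly, or return a thread-like handle whose `.get()` yields it when `async_req=True`) can be sketched generically with `concurrent.futures`. This illustrates only the documented contract, not the generated client's own thread-pool implementation:

```python
from concurrent.futures import ThreadPoolExecutor

class _AsyncResult:
    """Minimal thread-like wrapper: .get() blocks until the result is ready."""
    def __init__(self, future):
        self._future = future

    def get(self, timeout=None):
        return self._future.result(timeout)

_pool = ThreadPoolExecutor(max_workers=2)

def call_api(fn, *args, async_req=False, **kwargs):
    """Run fn synchronously, or hand back a .get()-able handle when async_req=True."""
    if not async_req:
        return fn(*args, **kwargs)
    return _AsyncResult(_pool.submit(fn, *args, **kwargs))
```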
        """
        if not async_req:
>           return self.__call_api(resource_path, method,
                                   path_params, query_params, header_params,
                                   body, post_params, files,
                                   response_type, auth_settings,
                                   _return_http_data_only, collection_formats,
                                   _preload_content, _request_timeout, _host)

../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api_client.py:348:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
resource_path = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
method = 'POST'
path_params = [('group', 'serving.kserve.io'), ('version', 'v1beta1'), ('namespace', 'kserve-ci-e2e-test'), ('plural', 'inferenceservices')]
query_params = []
header_params = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
body = {'apiVersion': 'serving.kserve.io/v1beta1', 'kind': 'InferenceService', 'metadata': {'labels': {'networking.kserve.io/...': {'httpGet': {'path': '/v2/models/isvc-xgboost-v2-mlserver/ready', 'port': 8080}, 'initialDelaySeconds': 30}, ...}}}}
post_params = [], files = {}, response_type = 'object'
auth_settings = ['BearerToken'], _return_http_data_only = True
collection_formats = {}, _preload_content = True, _request_timeout = None
_host = None

    def __call_api(
            self, resource_path, method, path_params=None,
            query_params=None, header_params=None, body=None, post_params=None,
            files=None, response_type=None, auth_settings=None,
            _return_http_data_only=None, collection_formats=None,
            _preload_content=True, _request_timeout=None, _host=None):

        config = self.configuration

        # header parameters
        header_params = header_params or {}
        header_params.update(self.default_headers)
        if self.cookie:
            header_params['Cookie'] = self.cookie
        if header_params:
            header_params = self.sanitize_for_serialization(header_params)
            header_params = dict(self.parameters_to_tuples(header_params,
                                                           collection_formats))

        # path parameters
        if path_params:
            path_params = self.sanitize_for_serialization(path_params)
            path_params = self.parameters_to_tuples(path_params,
                                                    collection_formats)
            for k, v in path_params:
                # specified safe chars, encode everything
                resource_path = resource_path.replace(
                    '{%s}' % k,
                    quote(str(v), safe=config.safe_chars_for_path_param)
                )

        # query parameters
        if query_params:
            query_params = self.sanitize_for_serialization(query_params)
            query_params = self.parameters_to_tuples(query_params,
                                                     collection_formats)

        # post parameters
        if post_params or files:
            post_params = post_params if post_params else []
            post_params = self.sanitize_for_serialization(post_params)
            post_params = self.parameters_to_tuples(post_params,
                                                    collection_formats)
            post_params.extend(self.files_parameters(files))

        # auth setting
        self.update_params_for_auth(header_params, query_params, auth_settings)

        # body
        if body:
            body = self.sanitize_for_serialization(body)

        # request url
        if _host is None:
            url = self.configuration.host + resource_path
        else:
            # use server/host defined in path or operation instead
            url = _host + resource_path

        # perform request and return response
>       response_data = self.request(
            method, url, query_params=query_params, headers=header_params,
            post_params=post_params, body=body,
            _preload_content=_preload_content,
            _request_timeout=_request_timeout)

../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api_client.py:180:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
method = 'POST'
url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
query_params = []
headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
post_params = []
body = {'apiVersion': 'serving.kserve.io/v1beta1', 'kind': 'InferenceService', 'metadata': {'labels': {'networking.kserve.io/...': {'httpGet': {'path': '/v2/models/isvc-xgboost-v2-mlserver/ready', 'port': 8080}, 'initialDelaySeconds': 30}, ...}}}}
_preload_content = True, _request_timeout = None

    def request(self, method, url, query_params=None, headers=None,
                post_params=None, body=None, _preload_content=True,
                _request_timeout=None):
        """Makes the HTTP request using RESTClient."""
        if method == "GET":
            return self.rest_client.GET(url,
                                        query_params=query_params,
                                        _preload_content=_preload_content,
                                        _request_timeout=_request_timeout,
                                        headers=headers)
        elif method == "HEAD":
            return self.rest_client.HEAD(url,
                                         query_params=query_params,
                                         _preload_content=_preload_content,
                                         _request_timeout=_request_timeout,
                                         headers=headers)
        elif method == "OPTIONS":
            return self.rest_client.OPTIONS(url,
                                            query_params=query_params,
                                            headers=headers,
                                            _preload_content=_preload_content,
                                            _request_timeout=_request_timeout)
        elif method == "POST":
>           return self.rest_client.POST(url,
                                         query_params=query_params,
                                         headers=headers,
                                         post_params=post_params,
                                         _preload_content=_preload_content,
                                         _request_timeout=_request_timeout,

../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api_client.py:391:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
query_params = [], post_params = []
body = {'apiVersion': 'serving.kserve.io/v1beta1', 'kind': 'InferenceService', 'metadata': {'labels': {'networking.kserve.io/...': {'httpGet': {'path': '/v2/models/isvc-xgboost-v2-mlserver/ready', 'port': 8080}, 'initialDelaySeconds': 30}, ...}}}}
_preload_content = True, _request_timeout = None

    def POST(self, url, headers=None, query_params=None, post_params=None,
             body=None, _preload_content=True, _request_timeout=None):
>       return self.request("POST", url,
                            headers=headers,
                            query_params=query_params,
                            post_params=post_params,
                            _preload_content=_preload_content,
                            _request_timeout=_request_timeout,
                            body=body)

../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/rest.py:279:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
method = 'POST'
url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
query_params = []
headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
body = {'apiVersion': 'serving.kserve.io/v1beta1', 'kind': 'InferenceService', 'metadata': {'labels': {'networking.kserve.io/...': {'httpGet': {'path': '/v2/models/isvc-xgboost-v2-mlserver/ready', 'port': 8080}, 'initialDelaySeconds': 30}, ...}}}}
post_params = {}, _preload_content = True, _request_timeout = None

    def request(self, method, url, query_params=None, headers=None,
                body=None, post_params=None, _preload_content=True,
                _request_timeout=None):
        """Perform requests.

        :param method: http request method
        :param url: http request url
        :param query_params: query parameters in the url
        :param headers: http request headers
        :param body: request json body, for `application/json`
        :param post_params: request post parameters,
                            `application/x-www-form-urlencoded`
                            and `multipart/form-data`
        :param _preload_content: if False, the urllib3.HTTPResponse object will
                                 be returned without reading/decoding response
                                 data. Default is True.
        :param _request_timeout: timeout setting for this request. If one
                                 number provided, it will be total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
        """
        method = method.upper()
        assert method in ['GET', 'HEAD', 'DELETE', 'POST', 'PUT',
                          'PATCH', 'OPTIONS']

        if post_params and body:
            raise ApiValueError(
                "body parameter cannot be used with post_params parameter."
            )

        post_params = post_params or {}
        headers = headers or {}

        timeout = None
        if _request_timeout:
            if isinstance(_request_timeout, (int, ) if six.PY3 else (int, long)):  # noqa: E501,F821
                timeout = urllib3.Timeout(total=_request_timeout)
            elif (isinstance(_request_timeout, tuple) and
                  len(_request_timeout) == 2):
                timeout = urllib3.Timeout(
                    connect=_request_timeout[0], read=_request_timeout[1])

        if 'Content-Type' not in headers:
            headers['Content-Type'] = 'application/json'

        try:
            # For `POST`, `PUT`, `PATCH`, `OPTIONS`, `DELETE`
            if method in ['POST', 'PUT', 'PATCH', 'OPTIONS', 'DELETE']:
                if query_params:
                    url += '?' + urlencode(query_params)
                if (re.search('json', headers['Content-Type'], re.IGNORECASE) or
                        headers['Content-Type'] == 'application/apply-patch+yaml'):
                    if headers['Content-Type'] == 'application/json-patch+json':
                        if not isinstance(body, list):
                            headers['Content-Type'] = \
                                'application/strategic-merge-patch+json'
                    request_body = None
                    if body is not None:
                        request_body = json.dumps(body)
>                   r = self.pool_manager.request(
                        method, url,
                        body=request_body,
                        preload_content=_preload_content,
                        timeout=timeout,

../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/rest.py:172:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
method = 'POST'
url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io...4Mi"}, "requests": {"cpu": "50m", "memory": "128Mi"}}, "storageUri": "gs://kfserving-examples/models/xgboost/iris"}}}}'
fields = None
headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
json = None
urlopen_kw = {'body': '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.... "128Mi"}}, "storageUri": "gs://kfserving-examples/models/xgboost/iris"}}}}', 'preload_content': True, 'timeout': None}

    def request(
        self,
        method: str,
        url: str,
        body: _TYPE_BODY | None = None,
        fields: _TYPE_FIELDS | None = None,
        headers: typing.Mapping[str, str] | None = None,
        json: typing.Any | None = None,
        **urlopen_kw: typing.Any,
    ) -> BaseHTTPResponse:
        """
        Make a request using :meth:`urlopen` with the appropriate encoding of
        ``fields`` based on the ``method`` used.

        This is a convenience method that requires the least amount of manual
        effort. It can be used in most situations, while still having the
        option to drop down to more specific methods when necessary, such as
        :meth:`request_encode_url`, :meth:`request_encode_body`,
        or even the lowest level :meth:`urlopen`.

        :param method:
            HTTP request method (such as GET, POST, PUT, etc.)

        :param url:
            The URL to perform the request on.

        :param body:
            Data to send in the request body, either :class:`str`, :class:`bytes`,
            an iterable of :class:`str`/:class:`bytes`, or a file-like object.

        :param fields:
            Data to encode and send in the URL or request body, depending on ``method``.

        :param headers:
            Dictionary of custom headers to send, such as User-Agent,
            If-None-Match, etc. If None, pool headers are used. If provided,
            these headers completely replace any pool-specific headers.

        :param json:
            Data to encode and send as JSON with UTF-encoded in the request body.
            The ``"Content-Type"`` header will be set to ``"application/json"``
            unless specified otherwise.
        """
        method = method.upper()

        if json is not None and body is not None:
            raise TypeError(
                "request got values for both 'body' and 'json' parameters which are mutually exclusive"
            )

        if json is not None:
            if headers is None:
                headers = self.headers

            if not ("content-type" in map(str.lower, headers.keys())):
                headers = HTTPHeaderDict(headers)
                headers["Content-Type"] = "application/json"

            body = _json.dumps(json, separators=(",", ":"), ensure_ascii=False).encode(
                "utf-8"
            )

        if body is not None:
            urlopen_kw["body"] = body

        if method in self._encode_url_methods:
            return self.request_encode_url(
                method,
                url,
                fields=fields,  # type: ignore[arg-type]
                headers=headers,
                **urlopen_kw,
            )
        else:
>           return self.request_encode_body(
                method, url, fields=fields, headers=headers, **urlopen_kw
            )

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/_request_methods.py:143:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
method = 'POST'
url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
fields = None
headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
encode_multipart = True, multipart_boundary = None
urlopen_kw = {'body': '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.... "128Mi"}}, "storageUri": "gs://kfserving-examples/models/xgboost/iris"}}}}', 'preload_content': True, 'timeout': None}
extra_kw = {'body': '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking....nt': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'}), 'preload_content': True, 'timeout': None}

    def request_encode_body(
        self,
        method: str,
        url: str,
        fields: _TYPE_FIELDS | None = None,
        headers: typing.Mapping[str, str] | None = None,
        encode_multipart: bool = True,
        multipart_boundary: str | None = None,
        **urlopen_kw: str,
    ) -> BaseHTTPResponse:
        """
        Make a request using :meth:`urlopen` with the ``fields`` encoded in
        the body. This is useful for request methods like POST, PUT, PATCH, etc.

        When ``encode_multipart=True`` (default), then
        :func:`urllib3.encode_multipart_formdata` is used to encode
        the payload with the appropriate content type. Otherwise
        :func:`urllib.parse.urlencode` is used with the
        'application/x-www-form-urlencoded' content type.

        Multipart encoding must be used when posting files, and it's reasonably
        safe to use it in other times too. However, it may break request
        signing, such as with OAuth.

        Supports an optional ``fields`` parameter of key/value strings AND
        key/filetuple. A filetuple is a (filename, data, MIME type) tuple where
        the MIME type is optional.
For example:: [e2e-predictor] [e2e-predictor] fields = { [e2e-predictor] 'foo': 'bar', [e2e-predictor] 'fakefile': ('foofile.txt', 'contents of foofile'), [e2e-predictor] 'realfile': ('barfile.txt', open('realfile').read()), [e2e-predictor] 'typedfile': ('bazfile.bin', open('bazfile').read(), [e2e-predictor] 'image/jpeg'), [e2e-predictor] 'nonamefile': 'contents of nonamefile field', [e2e-predictor] } [e2e-predictor] [e2e-predictor] When uploading a file, providing a filename (the first parameter of the [e2e-predictor] tuple) is optional but recommended to best mimic behavior of browsers. [e2e-predictor] [e2e-predictor] Note that if ``headers`` are supplied, the 'Content-Type' header will [e2e-predictor] be overwritten because it depends on the dynamic random boundary string [e2e-predictor] which is used to compose the body of the request. The random boundary [e2e-predictor] string can be explicitly set with the ``multipart_boundary`` parameter. [e2e-predictor] [e2e-predictor] :param method: [e2e-predictor] HTTP request method (such as GET, POST, PUT, etc.) [e2e-predictor] [e2e-predictor] :param url: [e2e-predictor] The URL to perform the request on. [e2e-predictor] [e2e-predictor] :param fields: [e2e-predictor] Data to encode and send in the request body. [e2e-predictor] [e2e-predictor] :param headers: [e2e-predictor] Dictionary of custom headers to send, such as User-Agent, [e2e-predictor] If-None-Match, etc. If None, pool headers are used. If provided, [e2e-predictor] these headers completely replace any pool-specific headers. [e2e-predictor] [e2e-predictor] :param encode_multipart: [e2e-predictor] If True, encode the ``fields`` using the multipart/form-data MIME [e2e-predictor] format. [e2e-predictor] [e2e-predictor] :param multipart_boundary: [e2e-predictor] If not specified, then a random boundary will be generated using [e2e-predictor] :func:`urllib3.filepost.choose_boundary`. 
        """
        if headers is None:
            headers = self.headers

        extra_kw: dict[str, typing.Any] = {"headers": HTTPHeaderDict(headers)}
        body: bytes | str

        if fields:
            if "body" in urlopen_kw:
                raise TypeError(
                    "request got values for both 'fields' and 'body', can only specify one."
                )

            if encode_multipart:
                body, content_type = encode_multipart_formdata(
                    fields, boundary=multipart_boundary
                )
            else:
                body, content_type = (
                    urlencode(fields),  # type: ignore[arg-type]
                    "application/x-www-form-urlencoded",
                )

            extra_kw["body"] = body
            extra_kw["headers"].setdefault("Content-Type", content_type)

        extra_kw.update(urlopen_kw)

>       return self.urlopen(method, url, **extra_kw)

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/_request_methods.py:278:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <...>
method = 'POST'
url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
redirect = True
kw = {'assert_same_host': False, 'body': '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata...', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'}), 'preload_content': True, ...}
u = Url(scheme='https', auth=None, host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', p...443, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices', query=None, fragment=None)
conn = <...>

    def urlopen(  # type: ignore[override]
        self, method: str, url: str, redirect: bool = True, **kw: typing.Any
    ) -> BaseHTTPResponse:
        """
        Same as :meth:`urllib3.HTTPConnectionPool.urlopen`
        with custom cross-host redirect logic and only sends the request-uri
        portion of the ``url``.

        The given ``url`` parameter must be absolute, such that an appropriate
        :class:`urllib3.connectionpool.ConnectionPool` can be chosen for it.
        """
        u = parse_url(url)

        if u.scheme is None:
            warnings.warn(
                "URLs without a scheme (ie 'https://') are deprecated and will raise an error "
                "in a future version of urllib3. To avoid this DeprecationWarning ensure all URLs "
                "start with 'https://' or 'http://'. Read more in this issue: "
                "https://github.com/urllib3/urllib3/issues/2920",
                category=DeprecationWarning,
                stacklevel=2,
            )

        conn = self.connection_from_host(u.host, port=u.port, scheme=u.scheme)

        kw["assert_same_host"] = False
        kw["redirect"] = False

        if "headers" not in kw:
            kw["headers"] = self.headers

        if self._proxy_requires_url_absolute_form(u):
            response = conn.urlopen(method, url, **kw)
        else:
>           response = conn.urlopen(method, u.request_uri, **kw)

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/poolmanager.py:457:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <...>
method = 'POST'
url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io...4Mi"}, "requests": {"cpu": "50m", "memory": "128Mi"}}, "storageUri": "gs://kfserving-examples/models/xgboost/iris"}}}}'
headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'})
retries = Retry(total=2, connect=None, read=None, redirect=None, status=None)
redirect = False, assert_same_host = False, timeout = None, pool_timeout = None
release_conn = True, chunked = False, body_pos = None, preload_content = True
decode_content = True, response_kw = {}
parsed_url = Url(scheme=None, auth=None, host=None, port=None,
path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices', query=None, fragment=None)
destination_scheme = None, conn = None, release_this_conn = True
http_tunnel_required = False
err = NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.c...a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")
clean_exit = False

    def urlopen(  # type: ignore[override]
        self,
        method: str,
        url: str,
        body: _TYPE_BODY | None = None,
        headers: typing.Mapping[str, str] | None = None,
        retries: Retry | bool | int | None = None,
        redirect: bool = True,
        assert_same_host: bool = True,
        timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
        pool_timeout: int | None = None,
        release_conn: bool | None = None,
        chunked: bool = False,
        body_pos: _TYPE_BODY_POSITION | None = None,
        preload_content: bool = True,
        decode_content: bool = True,
        **response_kw: typing.Any,
    ) -> BaseHTTPResponse:
        """
        Get a connection from the pool and perform an HTTP request. This is the
        lowest level call for making a request, so you'll need to specify all
        the raw details.

        .. note::

           More commonly, it's appropriate to use a convenience method
           such as :meth:`request`.

        .. note::

           `release_conn` will only behave as expected if
           `preload_content=False` because we want to make
           `preload_content=False` the default behaviour someday soon without
           breaking backwards compatibility.

        :param method:
            HTTP request method (such as GET, POST, PUT, etc.)

        :param url:
            The URL to perform the request on.

        :param body:
            Data to send in the request body, either :class:`str`, :class:`bytes`,
            an iterable of :class:`str`/:class:`bytes`, or a file-like object.

        :param headers:
            Dictionary of custom headers to send, such as User-Agent,
            If-None-Match, etc. If None, pool headers are used. If provided,
            these headers completely replace any pool-specific headers.

        :param retries:
            Configure the number of retries to allow before raising a
            :class:`~urllib3.exceptions.MaxRetryError` exception.

            If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a
            :class:`~urllib3.util.retry.Retry` object for fine-grained control
            over different types of retries.
            Pass an integer number to retry connection errors that many times,
            but no other types of errors. Pass zero to never retry.

            If ``False``, then retries are disabled and any exception is raised
            immediately. Also, instead of raising a MaxRetryError on redirects,
            the redirect response will be returned.

        :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.

        :param redirect:
            If True, automatically handle redirects (status codes 301, 302,
            303, 307, 308). Each redirect counts as a retry. Disabling retries
            will disable redirect, too.

        :param assert_same_host:
            If ``True``, will make sure that the host of the pool requests is
            consistent else will raise HostChangedError. When ``False``, you can
            use the pool on an HTTP proxy and request foreign hosts.

        :param timeout:
            If specified, overrides the default timeout for this one
            request. It may be a float (in seconds) or an instance of
            :class:`urllib3.util.Timeout`.

        :param pool_timeout:
            If set and the pool is set to block=True, then this method will
            block for ``pool_timeout`` seconds and raise EmptyPoolError if no
            connection is available within the time period.

        :param bool preload_content:
            If True, the response's body will be preloaded into memory.

        :param bool decode_content:
            If True, will attempt to decode the body based on the
            'content-encoding' header.

        :param release_conn:
            If False, then the urlopen call will not release the connection
            back into the pool once a response is received (but will release if
            you read the entire contents of the response such as when
            `preload_content=True`). This is useful if you're not preloading
            the response's content immediately. You will need to call
            ``r.release_conn()`` on the response ``r`` to return the connection
            back into the pool. If None, it takes the value of ``preload_content``
            which defaults to ``True``.

        :param bool chunked:
            If True, urllib3 will send the body using chunked transfer
            encoding. Otherwise, urllib3 will send the body using the standard
            content-length form. Defaults to False.

        :param int body_pos:
            Position to seek to in file-like body in the event of a retry or
            redirect. Typically this won't need to be set because urllib3 will
            auto-populate the value when needed.
        """
        parsed_url = parse_url(url)
        destination_scheme = parsed_url.scheme

        if headers is None:
            headers = self.headers

        if not isinstance(retries, Retry):
            retries = Retry.from_int(retries, redirect=redirect, default=self.retries)

        if release_conn is None:
            release_conn = preload_content

        # Check host
        if assert_same_host and not self.is_same_host(url):
            raise HostChangedError(self, url, retries)

        # Ensure that the URL we're connecting to is properly encoded
        if url.startswith("/"):
            url = to_str(_encode_target(url))
        else:
            url = to_str(parsed_url.url)

        conn = None

        # Track whether `conn` needs to be released before
        # returning/raising/recursing. Update this variable if necessary, and
        # leave `release_conn` constant throughout the function. That way, if
        # the function recurses, the original value of `release_conn` will be
        # passed down into the recursive call, and its value will be respected.
        #
        # See issue #651 [1] for details.
        #
        # [1] <https://github.com/urllib3/urllib3/issues/651>
        release_this_conn = release_conn

        http_tunnel_required = connection_requires_http_tunnel(
            self.proxy, self.proxy_config, destination_scheme
        )

        # Merge the proxy headers. Only done when not using HTTP CONNECT. We
        # have to copy the headers dict so we can safely change it without those
        # changes being reflected in anyone else's copy.
        if not http_tunnel_required:
            headers = headers.copy()  # type: ignore[attr-defined]
            headers.update(self.proxy_headers)  # type: ignore[union-attr]

        # Must keep the exception bound to a separate variable or else Python 3
        # complains about UnboundLocalError.
        err = None

        # Keep track of whether we cleanly exited the except block. This
        # ensures we do proper cleanup in finally.
        clean_exit = False

        # Rewind body position, if needed. Record current position
        # for future rewinds in the event of a redirect/retry.
        body_pos = set_file_position(body, body_pos)

        try:
            # Request a connection from the queue.
            timeout_obj = self._get_timeout(timeout)
            conn = self._get_conn(timeout=pool_timeout)

            conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]

            # Is this a closed/new connection that requires CONNECT tunnelling?
            if self.proxy is not None and http_tunnel_required and conn.is_closed:
                try:
                    self._prepare_proxy(conn)
                except (BaseSSLError, OSError, SocketTimeout) as e:
                    self._raise_timeout(
                        err=e, url=self.proxy.url, timeout_value=conn.timeout
                    )
                    raise

            # If we're going to release the connection in ``finally:``, then
            # the response doesn't need to know about the connection. Otherwise
            # it will also try to release it and we'll have a double-release
            # mess.
            response_conn = conn if not release_conn else None

            # Make the request on the HTTPConnection object
            response = self._make_request(
                conn,
                method,
                url,
                timeout=timeout_obj,
                body=body,
                headers=headers,
                chunked=chunked,
                retries=retries,
                response_conn=response_conn,
                preload_content=preload_content,
                decode_content=decode_content,
                **response_kw,
            )

            # Everything went great!
            clean_exit = True

        except EmptyPoolError:
            # Didn't get a connection from the pool, no need to clean up
            clean_exit = True
            release_this_conn = False
            raise

        except (
            TimeoutError,
            HTTPException,
            OSError,
            ProtocolError,
            BaseSSLError,
            SSLError,
            CertificateError,
            ProxyError,
        ) as e:
            # Discard the connection for these exceptions. It will be
            # replaced during the next _get_conn() call.
            clean_exit = False
            new_e: Exception = e
            if isinstance(e, (BaseSSLError, CertificateError)):
                new_e = SSLError(e)
            if isinstance(
                new_e,
                (
                    OSError,
                    NewConnectionError,
                    TimeoutError,
                    SSLError,
                    HTTPException,
                ),
            ) and (conn and conn.proxy and not conn.has_connected_to_proxy):
                new_e = _wrap_proxy_error(new_e, conn.proxy.scheme)
            elif isinstance(new_e, (OSError, HTTPException)):
                new_e = ProtocolError("Connection aborted.", new_e)

            retries = retries.increment(
                method, url, error=new_e, _pool=self, _stacktrace=sys.exc_info()[2]
            )
            retries.sleep()

            # Keep track of the error for the retry warning.
            err = e

        finally:
            if not clean_exit:
                # We hit some kind of exception, handled or otherwise. We need
                # to throw the connection away unless explicitly told not to.
                # Close the connection, set the variable to None, and make sure
                # we put the None back in the pool to avoid leaking it.
                if conn:
                    conn.close()
                    conn = None
                release_this_conn = True

            if release_this_conn:
                # Put the connection back to be reused. If the connection is
                # expired then it will be None, which will get replaced with a
                # fresh connection during _get_conn.
                self._put_conn(conn)

        if not conn:
            # Try again
            log.warning(
                "Retrying (%r) after connection broken by '%r': %s", retries, err, url
            )
>           return self.urlopen(
                method,
                url,
                body,
                headers,
                retries,
                redirect,
                assert_same_host,
                timeout=timeout,
                pool_timeout=pool_timeout,
                release_conn=release_conn,
                chunked=chunked,
                body_pos=body_pos,
                preload_content=preload_content,
                decode_content=decode_content,
                **response_kw,
            )

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:871:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <...>
method = 'POST'
url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io...4Mi"}, "requests": {"cpu": "50m", "memory": "128Mi"}}, "storageUri": "gs://kfserving-examples/models/xgboost/iris"}}}}'
headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'})
retries = Retry(total=1, connect=None, read=None, redirect=None, status=None)
redirect = False, assert_same_host = False, timeout = None, pool_timeout = None
release_conn = True, chunked = False, body_pos = None, preload_content = True
decode_content = True, response_kw = {}
parsed_url = Url(scheme=None,
auth=None, host=None, port=None, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices', query=None, fragment=None)
destination_scheme = None, conn = None, release_this_conn = True
http_tunnel_required = False
err = NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.c...a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")
clean_exit = False

    [source of HTTPSConnectionPool.urlopen repeated verbatim, identical to the frame above; it recurses again at the same retry call]

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:871:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <...>
method = 'POST'
url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io...4Mi"}, "requests": {"cpu": "50m", "memory": "128Mi"}}, "storageUri": "gs://kfserving-examples/models/xgboost/iris"}}}}'
headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'})
retries = Retry(total=0, connect=None, read=None, redirect=None, status=None)
redirect = False, assert_same_host = False, timeout = None, pool_timeout = None
release_conn = True, chunked = False, body_pos = None, preload_content = True
decode_content = True, response_kw = {}
parsed_url = Url(scheme=None,
auth=None, host=None, port=None, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices', query=None, fragment=None) [e2e-predictor] destination_scheme = None, conn = None, release_this_conn = True [e2e-predictor] http_tunnel_required = False [e2e-predictor] err = NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.c...a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)") [e2e-predictor] clean_exit = False [e2e-predictor] [e2e-predictor] def urlopen( # type: ignore[override] [e2e-predictor] self, [e2e-predictor] method: str, [e2e-predictor] url: str, [e2e-predictor] body: _TYPE_BODY | None = None, [e2e-predictor] headers: typing.Mapping[str, str] | None = None, [e2e-predictor] retries: Retry | bool | int | None = None, [e2e-predictor] redirect: bool = True, [e2e-predictor] assert_same_host: bool = True, [e2e-predictor] timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, [e2e-predictor] pool_timeout: int | None = None, [e2e-predictor] release_conn: bool | None = None, [e2e-predictor] chunked: bool = False, [e2e-predictor] body_pos: _TYPE_BODY_POSITION | None = None, [e2e-predictor] preload_content: bool = True, [e2e-predictor] decode_content: bool = True, [e2e-predictor] **response_kw: typing.Any, [e2e-predictor] ) -> BaseHTTPResponse: [e2e-predictor] """ [e2e-predictor] Get a connection from the pool and perform an HTTP request. This is the [e2e-predictor] lowest level call for making a request, so you'll need to specify all [e2e-predictor] the raw details. [e2e-predictor] [e2e-predictor] .. note:: [e2e-predictor] [e2e-predictor] More commonly, it's appropriate to use a convenience method [e2e-predictor] such as :meth:`request`. [e2e-predictor] [e2e-predictor] .. 
note:: [e2e-predictor] [e2e-predictor] `release_conn` will only behave as expected if [e2e-predictor] `preload_content=False` because we want to make [e2e-predictor] `preload_content=False` the default behaviour someday soon without [e2e-predictor] breaking backwards compatibility. [e2e-predictor] [e2e-predictor] :param method: [e2e-predictor] HTTP request method (such as GET, POST, PUT, etc.) [e2e-predictor] [e2e-predictor] :param url: [e2e-predictor] The URL to perform the request on. [e2e-predictor] [e2e-predictor] :param body: [e2e-predictor] Data to send in the request body, either :class:`str`, :class:`bytes`, [e2e-predictor] an iterable of :class:`str`/:class:`bytes`, or a file-like object. [e2e-predictor] [e2e-predictor] :param headers: [e2e-predictor] Dictionary of custom headers to send, such as User-Agent, [e2e-predictor] If-None-Match, etc. If None, pool headers are used. If provided, [e2e-predictor] these headers completely replace any pool-specific headers. [e2e-predictor] [e2e-predictor] :param retries: [e2e-predictor] Configure the number of retries to allow before raising a [e2e-predictor] :class:`~urllib3.exceptions.MaxRetryError` exception. [e2e-predictor] [e2e-predictor] If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a [e2e-predictor] :class:`~urllib3.util.retry.Retry` object for fine-grained control [e2e-predictor] over different types of retries. [e2e-predictor] Pass an integer number to retry connection errors that many times, [e2e-predictor] but no other types of errors. Pass zero to never retry. [e2e-predictor] [e2e-predictor] If ``False``, then retries are disabled and any exception is raised [e2e-predictor] immediately. Also, instead of raising a MaxRetryError on redirects, [e2e-predictor] the redirect response will be returned. [e2e-predictor] [e2e-predictor] :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 
[e2e-predictor] [e2e-predictor] :param redirect: [e2e-predictor] If True, automatically handle redirects (status codes 301, 302, [e2e-predictor] 303, 307, 308). Each redirect counts as a retry. Disabling retries [e2e-predictor] will disable redirect, too. [e2e-predictor] [e2e-predictor] :param assert_same_host: [e2e-predictor] If ``True``, will make sure that the host of the pool requests is [e2e-predictor] consistent else will raise HostChangedError. When ``False``, you can [e2e-predictor] use the pool on an HTTP proxy and request foreign hosts. [e2e-predictor] [e2e-predictor] :param timeout: [e2e-predictor] If specified, overrides the default timeout for this one [e2e-predictor] request. It may be a float (in seconds) or an instance of [e2e-predictor] :class:`urllib3.util.Timeout`. [e2e-predictor] [e2e-predictor] :param pool_timeout: [e2e-predictor] If set and the pool is set to block=True, then this method will [e2e-predictor] block for ``pool_timeout`` seconds and raise EmptyPoolError if no [e2e-predictor] connection is available within the time period. [e2e-predictor] [e2e-predictor] :param bool preload_content: [e2e-predictor] If True, the response's body will be preloaded into memory. [e2e-predictor] [e2e-predictor] :param bool decode_content: [e2e-predictor] If True, will attempt to decode the body based on the [e2e-predictor] 'content-encoding' header. [e2e-predictor] [e2e-predictor] :param release_conn: [e2e-predictor] If False, then the urlopen call will not release the connection [e2e-predictor] back into the pool once a response is received (but will release if [e2e-predictor] you read the entire contents of the response such as when [e2e-predictor] `preload_content=True`). This is useful if you're not preloading [e2e-predictor] the response's content immediately. You will need to call [e2e-predictor] ``r.release_conn()`` on the response ``r`` to return the connection [e2e-predictor] back into the pool. 
If None, it takes the value of ``preload_content`` [e2e-predictor] which defaults to ``True``. [e2e-predictor] [e2e-predictor] :param bool chunked: [e2e-predictor] If True, urllib3 will send the body using chunked transfer [e2e-predictor] encoding. Otherwise, urllib3 will send the body using the standard [e2e-predictor] content-length form. Defaults to False. [e2e-predictor] [e2e-predictor] :param int body_pos: [e2e-predictor] Position to seek to in file-like body in the event of a retry or [e2e-predictor] redirect. Typically this won't need to be set because urllib3 will [e2e-predictor] auto-populate the value when needed. [e2e-predictor] """ [e2e-predictor] parsed_url = parse_url(url) [e2e-predictor] destination_scheme = parsed_url.scheme [e2e-predictor] [e2e-predictor] if headers is None: [e2e-predictor] headers = self.headers [e2e-predictor] [e2e-predictor] if not isinstance(retries, Retry): [e2e-predictor] retries = Retry.from_int(retries, redirect=redirect, default=self.retries) [e2e-predictor] [e2e-predictor] if release_conn is None: [e2e-predictor] release_conn = preload_content [e2e-predictor] [e2e-predictor] # Check host [e2e-predictor] if assert_same_host and not self.is_same_host(url): [e2e-predictor] raise HostChangedError(self, url, retries) [e2e-predictor] [e2e-predictor] # Ensure that the URL we're connecting to is properly encoded [e2e-predictor] if url.startswith("/"): [e2e-predictor] url = to_str(_encode_target(url)) [e2e-predictor] else: [e2e-predictor] url = to_str(parsed_url.url) [e2e-predictor] [e2e-predictor] conn = None [e2e-predictor] [e2e-predictor] # Track whether `conn` needs to be released before [e2e-predictor] # returning/raising/recursing. Update this variable if necessary, and [e2e-predictor] # leave `release_conn` constant throughout the function. That way, if [e2e-predictor] # the function recurses, the original value of `release_conn` will be [e2e-predictor] # passed down into the recursive call, and its value will be respected. 
[e2e-predictor] # [e2e-predictor] # See issue #651 [1] for details. [e2e-predictor] # [e2e-predictor] # [1] [e2e-predictor] release_this_conn = release_conn [e2e-predictor] [e2e-predictor] http_tunnel_required = connection_requires_http_tunnel( [e2e-predictor] self.proxy, self.proxy_config, destination_scheme [e2e-predictor] ) [e2e-predictor] [e2e-predictor] # Merge the proxy headers. Only done when not using HTTP CONNECT. We [e2e-predictor] # have to copy the headers dict so we can safely change it without those [e2e-predictor] # changes being reflected in anyone else's copy. [e2e-predictor] if not http_tunnel_required: [e2e-predictor] headers = headers.copy() # type: ignore[attr-defined] [e2e-predictor] headers.update(self.proxy_headers) # type: ignore[union-attr] [e2e-predictor] [e2e-predictor] # Must keep the exception bound to a separate variable or else Python 3 [e2e-predictor] # complains about UnboundLocalError. [e2e-predictor] err = None [e2e-predictor] [e2e-predictor] # Keep track of whether we cleanly exited the except block. This [e2e-predictor] # ensures we do proper cleanup in finally. [e2e-predictor] clean_exit = False [e2e-predictor] [e2e-predictor] # Rewind body position, if needed. Record current position [e2e-predictor] # for future rewinds in the event of a redirect/retry. [e2e-predictor] body_pos = set_file_position(body, body_pos) [e2e-predictor] [e2e-predictor] try: [e2e-predictor] # Request a connection from the queue. [e2e-predictor] timeout_obj = self._get_timeout(timeout) [e2e-predictor] conn = self._get_conn(timeout=pool_timeout) [e2e-predictor] [e2e-predictor] conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] [e2e-predictor] [e2e-predictor] # Is this a closed/new connection that requires CONNECT tunnelling? 
[e2e-predictor] if self.proxy is not None and http_tunnel_required and conn.is_closed: [e2e-predictor] try: [e2e-predictor] self._prepare_proxy(conn) [e2e-predictor] except (BaseSSLError, OSError, SocketTimeout) as e: [e2e-predictor] self._raise_timeout( [e2e-predictor] err=e, url=self.proxy.url, timeout_value=conn.timeout [e2e-predictor] ) [e2e-predictor] raise [e2e-predictor] [e2e-predictor] # If we're going to release the connection in ``finally:``, then [e2e-predictor] # the response doesn't need to know about the connection. Otherwise [e2e-predictor] # it will also try to release it and we'll have a double-release [e2e-predictor] # mess. [e2e-predictor] response_conn = conn if not release_conn else None [e2e-predictor] [e2e-predictor] # Make the request on the HTTPConnection object [e2e-predictor] response = self._make_request( [e2e-predictor] conn, [e2e-predictor] method, [e2e-predictor] url, [e2e-predictor] timeout=timeout_obj, [e2e-predictor] body=body, [e2e-predictor] headers=headers, [e2e-predictor] chunked=chunked, [e2e-predictor] retries=retries, [e2e-predictor] response_conn=response_conn, [e2e-predictor] preload_content=preload_content, [e2e-predictor] decode_content=decode_content, [e2e-predictor] **response_kw, [e2e-predictor] ) [e2e-predictor] [e2e-predictor] # Everything went great! [e2e-predictor] clean_exit = True [e2e-predictor] [e2e-predictor] except EmptyPoolError: [e2e-predictor] # Didn't get a connection from the pool, no need to clean up [e2e-predictor] clean_exit = True [e2e-predictor] release_this_conn = False [e2e-predictor] raise [e2e-predictor] [e2e-predictor] except ( [e2e-predictor] TimeoutError, [e2e-predictor] HTTPException, [e2e-predictor] OSError, [e2e-predictor] ProtocolError, [e2e-predictor] BaseSSLError, [e2e-predictor] SSLError, [e2e-predictor] CertificateError, [e2e-predictor] ProxyError, [e2e-predictor] ) as e: [e2e-predictor] # Discard the connection for these exceptions. 
It will be [e2e-predictor] # replaced during the next _get_conn() call. [e2e-predictor] clean_exit = False [e2e-predictor] new_e: Exception = e [e2e-predictor] if isinstance(e, (BaseSSLError, CertificateError)): [e2e-predictor] new_e = SSLError(e) [e2e-predictor] if isinstance( [e2e-predictor] new_e, [e2e-predictor] ( [e2e-predictor] OSError, [e2e-predictor] NewConnectionError, [e2e-predictor] TimeoutError, [e2e-predictor] SSLError, [e2e-predictor] HTTPException, [e2e-predictor] ), [e2e-predictor] ) and (conn and conn.proxy and not conn.has_connected_to_proxy): [e2e-predictor] new_e = _wrap_proxy_error(new_e, conn.proxy.scheme) [e2e-predictor] elif isinstance(new_e, (OSError, HTTPException)): [e2e-predictor] new_e = ProtocolError("Connection aborted.", new_e) [e2e-predictor] [e2e-predictor] retries = retries.increment( [e2e-predictor] method, url, error=new_e, _pool=self, _stacktrace=sys.exc_info()[2] [e2e-predictor] ) [e2e-predictor] retries.sleep() [e2e-predictor] [e2e-predictor] # Keep track of the error for the retry warning. [e2e-predictor] err = e [e2e-predictor] [e2e-predictor] finally: [e2e-predictor] if not clean_exit: [e2e-predictor] # We hit some kind of exception, handled or otherwise. We need [e2e-predictor] # to throw the connection away unless explicitly told not to. [e2e-predictor] # Close the connection, set the variable to None, and make sure [e2e-predictor] # we put the None back in the pool to avoid leaking it. [e2e-predictor] if conn: [e2e-predictor] conn.close() [e2e-predictor] conn = None [e2e-predictor] release_this_conn = True [e2e-predictor] [e2e-predictor] if release_this_conn: [e2e-predictor] # Put the connection back to be reused. If the connection is [e2e-predictor] # expired then it will be None, which will get replaced with a [e2e-predictor] # fresh connection during _get_conn. 
[e2e-predictor] self._put_conn(conn) [e2e-predictor] [e2e-predictor] if not conn: [e2e-predictor] # Try again [e2e-predictor] log.warning( [e2e-predictor] "Retrying (%r) after connection broken by '%r': %s", retries, err, url [e2e-predictor] ) [e2e-predictor] > return self.urlopen( [e2e-predictor] method, [e2e-predictor] url, [e2e-predictor] body, [e2e-predictor] headers, [e2e-predictor] retries, [e2e-predictor] redirect, [e2e-predictor] assert_same_host, [e2e-predictor] timeout=timeout, [e2e-predictor] pool_timeout=pool_timeout, [e2e-predictor] release_conn=release_conn, [e2e-predictor] chunked=chunked, [e2e-predictor] body_pos=body_pos, [e2e-predictor] preload_content=preload_content, [e2e-predictor] decode_content=decode_content, [e2e-predictor] **response_kw, [e2e-predictor] ) [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:871: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] method = 'POST' [e2e-predictor] url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices' [e2e-predictor] body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io...4Mi"}, "requests": {"cpu": "50m", "memory": "128Mi"}}, "storageUri": "gs://kfserving-examples/models/xgboost/iris"}}}}' [e2e-predictor] headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'}) [e2e-predictor] retries = Retry(total=0, connect=None, read=None, redirect=None, status=None) [e2e-predictor] redirect = False, assert_same_host = False, timeout = None, pool_timeout = None [e2e-predictor] release_conn = True, chunked = False, body_pos = None, preload_content = True [e2e-predictor] decode_content = True, response_kw = {} [e2e-predictor] parsed_url = Url(scheme=None, 
auth=None, host=None, port=None, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices', query=None, fragment=None) [e2e-predictor] destination_scheme = None, conn = None, release_this_conn = True [e2e-predictor] http_tunnel_required = False, err = None, clean_exit = False [e2e-predictor] [e2e-predictor] def urlopen( # type: ignore[override] [e2e-predictor] self, [e2e-predictor] method: str, [e2e-predictor] url: str, [e2e-predictor] body: _TYPE_BODY | None = None, [e2e-predictor] headers: typing.Mapping[str, str] | None = None, [e2e-predictor] retries: Retry | bool | int | None = None, [e2e-predictor] redirect: bool = True, [e2e-predictor] assert_same_host: bool = True, [e2e-predictor] timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, [e2e-predictor] pool_timeout: int | None = None, [e2e-predictor] release_conn: bool | None = None, [e2e-predictor] chunked: bool = False, [e2e-predictor] body_pos: _TYPE_BODY_POSITION | None = None, [e2e-predictor] preload_content: bool = True, [e2e-predictor] decode_content: bool = True, [e2e-predictor] **response_kw: typing.Any, [e2e-predictor] ) -> BaseHTTPResponse: [e2e-predictor] """ [e2e-predictor] Get a connection from the pool and perform an HTTP request. This is the [e2e-predictor] lowest level call for making a request, so you'll need to specify all [e2e-predictor] the raw details. [e2e-predictor] [e2e-predictor] .. note:: [e2e-predictor] [e2e-predictor] More commonly, it's appropriate to use a convenience method [e2e-predictor] such as :meth:`request`. [e2e-predictor] [e2e-predictor] .. note:: [e2e-predictor] [e2e-predictor] `release_conn` will only behave as expected if [e2e-predictor] `preload_content=False` because we want to make [e2e-predictor] `preload_content=False` the default behaviour someday soon without [e2e-predictor] breaking backwards compatibility. [e2e-predictor] [e2e-predictor] :param method: [e2e-predictor] HTTP request method (such as GET, POST, PUT, etc.) 
[e2e-predictor] [e2e-predictor] :param url: [e2e-predictor] The URL to perform the request on. [e2e-predictor] [e2e-predictor] :param body: [e2e-predictor] Data to send in the request body, either :class:`str`, :class:`bytes`, [e2e-predictor] an iterable of :class:`str`/:class:`bytes`, or a file-like object. [e2e-predictor] [e2e-predictor] :param headers: [e2e-predictor] Dictionary of custom headers to send, such as User-Agent, [e2e-predictor] If-None-Match, etc. If None, pool headers are used. If provided, [e2e-predictor] these headers completely replace any pool-specific headers. [e2e-predictor] [e2e-predictor] :param retries: [e2e-predictor] Configure the number of retries to allow before raising a [e2e-predictor] :class:`~urllib3.exceptions.MaxRetryError` exception. [e2e-predictor] [e2e-predictor] If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a [e2e-predictor] :class:`~urllib3.util.retry.Retry` object for fine-grained control [e2e-predictor] over different types of retries. [e2e-predictor] Pass an integer number to retry connection errors that many times, [e2e-predictor] but no other types of errors. Pass zero to never retry. [e2e-predictor] [e2e-predictor] If ``False``, then retries are disabled and any exception is raised [e2e-predictor] immediately. Also, instead of raising a MaxRetryError on redirects, [e2e-predictor] the redirect response will be returned. [e2e-predictor] [e2e-predictor] :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. [e2e-predictor] [e2e-predictor] :param redirect: [e2e-predictor] If True, automatically handle redirects (status codes 301, 302, [e2e-predictor] 303, 307, 308). Each redirect counts as a retry. Disabling retries [e2e-predictor] will disable redirect, too. [e2e-predictor] [e2e-predictor] :param assert_same_host: [e2e-predictor] If ``True``, will make sure that the host of the pool requests is [e2e-predictor] consistent else will raise HostChangedError. 
When ``False``, you can [e2e-predictor] use the pool on an HTTP proxy and request foreign hosts. [e2e-predictor] [e2e-predictor] :param timeout: [e2e-predictor] If specified, overrides the default timeout for this one [e2e-predictor] request. It may be a float (in seconds) or an instance of [e2e-predictor] :class:`urllib3.util.Timeout`. [e2e-predictor] [e2e-predictor] :param pool_timeout: [e2e-predictor] If set and the pool is set to block=True, then this method will [e2e-predictor] block for ``pool_timeout`` seconds and raise EmptyPoolError if no [e2e-predictor] connection is available within the time period. [e2e-predictor] [e2e-predictor] :param bool preload_content: [e2e-predictor] If True, the response's body will be preloaded into memory. [e2e-predictor] [e2e-predictor] :param bool decode_content: [e2e-predictor] If True, will attempt to decode the body based on the [e2e-predictor] 'content-encoding' header. [e2e-predictor] [e2e-predictor] :param release_conn: [e2e-predictor] If False, then the urlopen call will not release the connection [e2e-predictor] back into the pool once a response is received (but will release if [e2e-predictor] you read the entire contents of the response such as when [e2e-predictor] `preload_content=True`). This is useful if you're not preloading [e2e-predictor] the response's content immediately. You will need to call [e2e-predictor] ``r.release_conn()`` on the response ``r`` to return the connection [e2e-predictor] back into the pool. If None, it takes the value of ``preload_content`` [e2e-predictor] which defaults to ``True``. [e2e-predictor] [e2e-predictor] :param bool chunked: [e2e-predictor] If True, urllib3 will send the body using chunked transfer [e2e-predictor] encoding. Otherwise, urllib3 will send the body using the standard [e2e-predictor] content-length form. Defaults to False. 
[e2e-predictor] [e2e-predictor] :param int body_pos: [e2e-predictor] Position to seek to in file-like body in the event of a retry or [e2e-predictor] redirect. Typically this won't need to be set because urllib3 will [e2e-predictor] auto-populate the value when needed. [e2e-predictor] """ [e2e-predictor] parsed_url = parse_url(url) [e2e-predictor] destination_scheme = parsed_url.scheme [e2e-predictor] [e2e-predictor] if headers is None: [e2e-predictor] headers = self.headers [e2e-predictor] [e2e-predictor] if not isinstance(retries, Retry): [e2e-predictor] retries = Retry.from_int(retries, redirect=redirect, default=self.retries) [e2e-predictor] [e2e-predictor] if release_conn is None: [e2e-predictor] release_conn = preload_content [e2e-predictor] [e2e-predictor] # Check host [e2e-predictor] if assert_same_host and not self.is_same_host(url): [e2e-predictor] raise HostChangedError(self, url, retries) [e2e-predictor] [e2e-predictor] # Ensure that the URL we're connecting to is properly encoded [e2e-predictor] if url.startswith("/"): [e2e-predictor] url = to_str(_encode_target(url)) [e2e-predictor] else: [e2e-predictor] url = to_str(parsed_url.url) [e2e-predictor] [e2e-predictor] conn = None [e2e-predictor] [e2e-predictor] # Track whether `conn` needs to be released before [e2e-predictor] # returning/raising/recursing. Update this variable if necessary, and [e2e-predictor] # leave `release_conn` constant throughout the function. That way, if [e2e-predictor] # the function recurses, the original value of `release_conn` will be [e2e-predictor] # passed down into the recursive call, and its value will be respected. [e2e-predictor] # [e2e-predictor] # See issue #651 [1] for details. 
[e2e-predictor] # [e2e-predictor] # [1] [e2e-predictor] release_this_conn = release_conn [e2e-predictor] [e2e-predictor] http_tunnel_required = connection_requires_http_tunnel( [e2e-predictor] self.proxy, self.proxy_config, destination_scheme [e2e-predictor] ) [e2e-predictor] [e2e-predictor] # Merge the proxy headers. Only done when not using HTTP CONNECT. We [e2e-predictor] # have to copy the headers dict so we can safely change it without those [e2e-predictor] # changes being reflected in anyone else's copy. [e2e-predictor] if not http_tunnel_required: [e2e-predictor] headers = headers.copy() # type: ignore[attr-defined] [e2e-predictor] headers.update(self.proxy_headers) # type: ignore[union-attr] [e2e-predictor] [e2e-predictor] # Must keep the exception bound to a separate variable or else Python 3 [e2e-predictor] # complains about UnboundLocalError. [e2e-predictor] err = None [e2e-predictor] [e2e-predictor] # Keep track of whether we cleanly exited the except block. This [e2e-predictor] # ensures we do proper cleanup in finally. [e2e-predictor] clean_exit = False [e2e-predictor] [e2e-predictor] # Rewind body position, if needed. Record current position [e2e-predictor] # for future rewinds in the event of a redirect/retry. [e2e-predictor] body_pos = set_file_position(body, body_pos) [e2e-predictor] [e2e-predictor] try: [e2e-predictor] # Request a connection from the queue. [e2e-predictor] timeout_obj = self._get_timeout(timeout) [e2e-predictor] conn = self._get_conn(timeout=pool_timeout) [e2e-predictor] [e2e-predictor] conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] [e2e-predictor] [e2e-predictor] # Is this a closed/new connection that requires CONNECT tunnelling? 
[e2e-predictor] if self.proxy is not None and http_tunnel_required and conn.is_closed:
[e2e-predictor] try:
[e2e-predictor] self._prepare_proxy(conn)
[e2e-predictor] except (BaseSSLError, OSError, SocketTimeout) as e:
[e2e-predictor] self._raise_timeout(
[e2e-predictor] err=e, url=self.proxy.url, timeout_value=conn.timeout
[e2e-predictor] )
[e2e-predictor] raise
[e2e-predictor]
[e2e-predictor] # If we're going to release the connection in ``finally:``, then
[e2e-predictor] # the response doesn't need to know about the connection. Otherwise
[e2e-predictor] # it will also try to release it and we'll have a double-release
[e2e-predictor] # mess.
[e2e-predictor] response_conn = conn if not release_conn else None
[e2e-predictor]
[e2e-predictor] # Make the request on the HTTPConnection object
[e2e-predictor] response = self._make_request(
[e2e-predictor] conn,
[e2e-predictor] method,
[e2e-predictor] url,
[e2e-predictor] timeout=timeout_obj,
[e2e-predictor] body=body,
[e2e-predictor] headers=headers,
[e2e-predictor] chunked=chunked,
[e2e-predictor] retries=retries,
[e2e-predictor] response_conn=response_conn,
[e2e-predictor] preload_content=preload_content,
[e2e-predictor] decode_content=decode_content,
[e2e-predictor] **response_kw,
[e2e-predictor] )
[e2e-predictor]
[e2e-predictor] # Everything went great!
[e2e-predictor] clean_exit = True
[e2e-predictor]
[e2e-predictor] except EmptyPoolError:
[e2e-predictor] # Didn't get a connection from the pool, no need to clean up
[e2e-predictor] clean_exit = True
[e2e-predictor] release_this_conn = False
[e2e-predictor] raise
[e2e-predictor]
[e2e-predictor] except (
[e2e-predictor] TimeoutError,
[e2e-predictor] HTTPException,
[e2e-predictor] OSError,
[e2e-predictor] ProtocolError,
[e2e-predictor] BaseSSLError,
[e2e-predictor] SSLError,
[e2e-predictor] CertificateError,
[e2e-predictor] ProxyError,
[e2e-predictor] ) as e:
[e2e-predictor] # Discard the connection for these exceptions. It will be
[e2e-predictor] # replaced during the next _get_conn() call.
[e2e-predictor] clean_exit = False
[e2e-predictor] new_e: Exception = e
[e2e-predictor] if isinstance(e, (BaseSSLError, CertificateError)):
[e2e-predictor] new_e = SSLError(e)
[e2e-predictor] if isinstance(
[e2e-predictor] new_e,
[e2e-predictor] (
[e2e-predictor] OSError,
[e2e-predictor] NewConnectionError,
[e2e-predictor] TimeoutError,
[e2e-predictor] SSLError,
[e2e-predictor] HTTPException,
[e2e-predictor] ),
[e2e-predictor] ) and (conn and conn.proxy and not conn.has_connected_to_proxy):
[e2e-predictor] new_e = _wrap_proxy_error(new_e, conn.proxy.scheme)
[e2e-predictor] elif isinstance(new_e, (OSError, HTTPException)):
[e2e-predictor] new_e = ProtocolError("Connection aborted.", new_e)
[e2e-predictor]
[e2e-predictor] > retries = retries.increment(
[e2e-predictor] method, url, error=new_e, _pool=self, _stacktrace=sys.exc_info()[2]
[e2e-predictor] )
[e2e-predictor]
[e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:841:
[e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
[e2e-predictor]
[e2e-predictor] self = Retry(total=0, connect=None, read=None, redirect=None, status=None)
[e2e-predictor] method = 'POST'
[e2e-predictor] url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
[e2e-predictor] response = None
[e2e-predictor] error = NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.c...a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")
[e2e-predictor] _pool =
[e2e-predictor] _stacktrace =
[e2e-predictor]
[e2e-predictor] def increment(
[e2e-predictor] self,
[e2e-predictor] method: str | None = None,
[e2e-predictor] url: str | None = None,
[e2e-predictor] response: BaseHTTPResponse | None = None,
[e2e-predictor] error: Exception | None = None,
[e2e-predictor] _pool: ConnectionPool | None = None,
[e2e-predictor] _stacktrace: TracebackType | None = None,
[e2e-predictor] ) -> Self:
[e2e-predictor] """Return a new Retry object with incremented retry counters.
[e2e-predictor]
[e2e-predictor] :param response: A response object, or None, if the server did not
[e2e-predictor] return a response.
[e2e-predictor] :type response: :class:`~urllib3.response.BaseHTTPResponse`
[e2e-predictor] :param Exception error: An error encountered during the request, or
[e2e-predictor] None if the response was received successfully.
[e2e-predictor]
[e2e-predictor] :return: A new ``Retry`` object.
[e2e-predictor] """
[e2e-predictor] if self.total is False and error:
[e2e-predictor] # Disabled, indicate to re-raise the error.
[e2e-predictor] raise reraise(type(error), error, _stacktrace)
[e2e-predictor]
[e2e-predictor] total = self.total
[e2e-predictor] if total is not None:
[e2e-predictor] total -= 1
[e2e-predictor]
[e2e-predictor] connect = self.connect
[e2e-predictor] read = self.read
[e2e-predictor] redirect = self.redirect
[e2e-predictor] status_count = self.status
[e2e-predictor] other = self.other
[e2e-predictor] cause = "unknown"
[e2e-predictor] status = None
[e2e-predictor] redirect_location = None
[e2e-predictor]
[e2e-predictor] if error and self._is_connection_error(error):
[e2e-predictor] # Connect retry?
[e2e-predictor] if connect is False:
[e2e-predictor] raise reraise(type(error), error, _stacktrace)
[e2e-predictor] elif connect is not None:
[e2e-predictor] connect -= 1
[e2e-predictor]
[e2e-predictor] elif error and self._is_read_error(error):
[e2e-predictor] # Read retry?
[e2e-predictor] if read is False or method is None or not self._is_method_retryable(method):
[e2e-predictor] raise reraise(type(error), error, _stacktrace)
[e2e-predictor] elif read is not None:
[e2e-predictor] read -= 1
[e2e-predictor]
[e2e-predictor] elif error:
[e2e-predictor] # Other retry?
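Editor's aside: the `Retry.increment()` logic quoted in this traceback can be exercised offline. A minimal sketch (not part of this CI run; it uses only the public urllib3 API shown above) of how each `increment()` returns a new `Retry` with `total` decremented, until exhaustion raises `MaxRetryError` exactly as seen in this failure:

```python
# Sketch: Retry counter bookkeeping, mirroring the increment() source above.
from urllib3.util.retry import Retry
from urllib3.exceptions import MaxRetryError

retry = Retry(total=1, connect=None, read=None, redirect=None, status=None)
err = OSError("simulated connection failure")

# First failure: a new Retry is returned with total decremented 1 -> 0.
retry = retry.increment(method="POST", url="/apis/example", error=err)
print(retry.total)  # 0

# Second failure: counters are exhausted, so MaxRetryError wraps the cause.
try:
    retry.increment(method="POST", url="/apis/example", error=err)
except MaxRetryError as exc:
    print(type(exc).__name__)  # MaxRetryError
```

This matches the log: `Retry(total=2)`, `total=1`, `total=0` warnings, then `MaxRetryError` on the attempt after `total` reaches zero.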
[e2e-predictor] if other is not None:
[e2e-predictor] other -= 1
[e2e-predictor]
[e2e-predictor] elif response and response.get_redirect_location():
[e2e-predictor] # Redirect retry?
[e2e-predictor] if redirect is not None:
[e2e-predictor] redirect -= 1
[e2e-predictor] cause = "too many redirects"
[e2e-predictor] response_redirect_location = response.get_redirect_location()
[e2e-predictor] if response_redirect_location:
[e2e-predictor] redirect_location = response_redirect_location
[e2e-predictor] status = response.status
[e2e-predictor]
[e2e-predictor] else:
[e2e-predictor] # Incrementing because of a server error like a 500 in
[e2e-predictor] # status_forcelist and the given method is in the allowed_methods
[e2e-predictor] cause = ResponseError.GENERIC_ERROR
[e2e-predictor] if response and response.status:
[e2e-predictor] if status_count is not None:
[e2e-predictor] status_count -= 1
[e2e-predictor] cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
[e2e-predictor] status = response.status
[e2e-predictor]
[e2e-predictor] history = self.history + (
[e2e-predictor] RequestHistory(method, url, error, status, redirect_location),
[e2e-predictor] )
[e2e-predictor]
[e2e-predictor] new_retry = self.new(
[e2e-predictor] total=total,
[e2e-predictor] connect=connect,
[e2e-predictor] read=read,
[e2e-predictor] redirect=redirect,
[e2e-predictor] status=status_count,
[e2e-predictor] other=other,
[e2e-predictor] history=history,
[e2e-predictor] )
[e2e-predictor]
[e2e-predictor] if new_retry.is_exhausted():
[e2e-predictor] reason = error or ResponseError(cause)
[e2e-predictor] > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type]
[e2e-predictor] E urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Max retries exceeded with url: /apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices (Caused by NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)"))
[e2e-predictor]
[e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError
[e2e-predictor] ------------------------------ Captured log call -------------------------------
[e2e-predictor] WARNING urllib3.connectionpool:connectionpool.py:868 Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")': /apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices
[e2e-predictor] WARNING urllib3.connectionpool:connectionpool.py:868 Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")': /apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices
[e2e-predictor] WARNING urllib3.connectionpool:connectionpool.py:868 Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")': /apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices
[e2e-predictor] ________________________ test_xgboost_single_model_file ________________________
[e2e-predictor] [gw0] linux -- Python 3.11.13 /workspace/source/python/kserve/.venv/bin/python
[e2e-predictor]
[e2e-predictor] self =
[e2e-predictor]
[e2e-predictor] def _new_conn(self) -> socket.socket:
[e2e-predictor] """Establish a socket connection and set nodelay settings on it.
[e2e-predictor]
[e2e-predictor] :return: New socket connection.
[e2e-predictor] """
[e2e-predictor] try:
[e2e-predictor] > sock = connection.create_connection(
[e2e-predictor] (self._dns_host, self.port),
[e2e-predictor] self.timeout,
[e2e-predictor] source_address=self.source_address,
[e2e-predictor] socket_options=self.socket_options,
[e2e-predictor] )
[e2e-predictor]
[e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connection.py:204:
[e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
[e2e-predictor]
[e2e-predictor] address = ('a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', 6443)
[e2e-predictor] timeout = None, source_address = None, socket_options = [(6, 1, 1)]
[e2e-predictor]
[e2e-predictor] def create_connection(
[e2e-predictor] address: tuple[str, int],
[e2e-predictor] timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
[e2e-predictor] source_address: tuple[str, int] | None = None,
[e2e-predictor] socket_options: _TYPE_SOCKET_OPTIONS | None = None,
[e2e-predictor] ) -> socket.socket:
[e2e-predictor] """Connect to *address* and return the socket object.
[e2e-predictor]
[e2e-predictor] Convenience function. Connect to *address* (a 2-tuple ``(host,
[e2e-predictor] port)``) and return the socket object. Passing the optional
[e2e-predictor] *timeout* parameter will set the timeout on the socket instance
[e2e-predictor] before attempting to connect. If no *timeout* is supplied, the
[e2e-predictor] global default timeout setting returned by :func:`socket.getdefaulttimeout`
[e2e-predictor] is used. If *source_address* is set it must be a tuple of (host, port)
[e2e-predictor] for the socket to bind as a source address before making the connection.
[e2e-predictor] An host of '' or port 0 tells the OS to use the default.
[e2e-predictor] """
[e2e-predictor]
[e2e-predictor] host, port = address
[e2e-predictor] if host.startswith("["):
[e2e-predictor] host = host.strip("[]")
[e2e-predictor] err = None
[e2e-predictor]
[e2e-predictor] # Using the value from allowed_gai_family() in the context of getaddrinfo lets
[e2e-predictor] # us select whether to work with IPv4 DNS records, IPv6 records, or both.
[e2e-predictor] # The original create_connection function always returns all records.
[e2e-predictor] family = allowed_gai_family()
[e2e-predictor]
[e2e-predictor] try:
[e2e-predictor] host.encode("idna")
[e2e-predictor] except UnicodeError:
[e2e-predictor] raise LocationParseError(f"'{host}', label empty or too long") from None
[e2e-predictor]
[e2e-predictor] > for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
[e2e-predictor]
[e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/util/connection.py:60:
[e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
[e2e-predictor]
[e2e-predictor] host = 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com'
[e2e-predictor] port = 6443, family =
[e2e-predictor] type = , proto = 0, flags = 0
[e2e-predictor]
[e2e-predictor] def getaddrinfo(host, port, family=0, type=0, proto=0, flags=0):
[e2e-predictor] """Resolve host and port into list of address info entries.
[e2e-predictor]
[e2e-predictor] Translate the host/port argument into a sequence of 5-tuples that contain
[e2e-predictor] all the necessary arguments for creating a socket connected to that service.
[e2e-predictor] host is a domain name, a string representation of an IPv4/v6 address or
[e2e-predictor] None. port is a string service name such as 'http', a numeric port number or
[e2e-predictor] None. By passing None as the value of host and port, you can pass NULL to
[e2e-predictor] the underlying C API.
[e2e-predictor]
[e2e-predictor] The family, type and proto arguments can be optionally specified in order to
[e2e-predictor] narrow the list of addresses returned. Passing zero as a value for each of
[e2e-predictor] these arguments selects the full range of results.
[e2e-predictor] """
[e2e-predictor] # We override this function since we want to translate the numeric family
[e2e-predictor] # and socket type values to enum constants.
[e2e-predictor] addrlist = []
[e2e-predictor] > for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
[e2e-predictor] E socket.gaierror: [Errno -2] Name or service not known
[e2e-predictor]
[e2e-predictor] /usr/lib64/python3.11/socket.py:974: gaierror
[e2e-predictor]
[e2e-predictor] The above exception was the direct cause of the following exception:
[e2e-predictor]
[e2e-predictor] self =
[e2e-predictor] method = 'POST'
[e2e-predictor] url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
[e2e-predictor] body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io...quests": {"cpu": "50m", "memory": "128Mi"}}, "storageUri": "gs://kfserving-examples/models/xgboost/iris/model.bst"}}}}'
[e2e-predictor] headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'})
[e2e-predictor] retries = Retry(total=0, connect=None, read=None, redirect=None, status=None)
[e2e-predictor] redirect = False, assert_same_host = False, timeout = None, pool_timeout = None
[e2e-predictor] release_conn = True, chunked = False, body_pos = None, preload_content = True
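Editor's aside: the `socket.gaierror: [Errno -2] Name or service not known` shown above is a plain DNS-resolution failure for the API server's ELB hostname, not a KServe problem. A stdlib-only preflight sketch for telling a non-resolving name apart from a reachability problem (the `resolves` helper is hypothetical, not part of the test suite):

```python
# Sketch: DNS preflight check using the same getaddrinfo call that fails
# in the traceback above. gaierror here means the name does not resolve.
import socket

def resolves(host: str, port: int = 443) -> bool:
    """Return True if `host` resolves to at least one address."""
    try:
        return len(socket.getaddrinfo(host, port, type=socket.SOCK_STREAM)) > 0
    except socket.gaierror:
        return False

print(resolves("localhost"))                           # True
print(resolves("definitely-not-a-real-host.invalid"))  # False (.invalid never resolves)
```

Running this against the ELB hostname before the test suite starts would distinguish "cluster DNS record gone" (as here) from a later connectivity or TLS failure.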
[e2e-predictor] decode_content = True, response_kw = {}
[e2e-predictor] parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices', query=None, fragment=None)
[e2e-predictor] destination_scheme = None, conn = None, release_this_conn = True
[e2e-predictor] http_tunnel_required = False, err = None, clean_exit = False
[e2e-predictor]
[e2e-predictor] def urlopen( # type: ignore[override]
[e2e-predictor] self,
[e2e-predictor] method: str,
[e2e-predictor] url: str,
[e2e-predictor] body: _TYPE_BODY | None = None,
[e2e-predictor] headers: typing.Mapping[str, str] | None = None,
[e2e-predictor] retries: Retry | bool | int | None = None,
[e2e-predictor] redirect: bool = True,
[e2e-predictor] assert_same_host: bool = True,
[e2e-predictor] timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
[e2e-predictor] pool_timeout: int | None = None,
[e2e-predictor] release_conn: bool | None = None,
[e2e-predictor] chunked: bool = False,
[e2e-predictor] body_pos: _TYPE_BODY_POSITION | None = None,
[e2e-predictor] preload_content: bool = True,
[e2e-predictor] decode_content: bool = True,
[e2e-predictor] **response_kw: typing.Any,
[e2e-predictor] ) -> BaseHTTPResponse:
[e2e-predictor] """
[e2e-predictor] Get a connection from the pool and perform an HTTP request. This is the
[e2e-predictor] lowest level call for making a request, so you'll need to specify all
[e2e-predictor] the raw details.
[e2e-predictor]
[e2e-predictor] .. note::
[e2e-predictor]
[e2e-predictor] More commonly, it's appropriate to use a convenience method
[e2e-predictor] such as :meth:`request`.
[e2e-predictor]
[e2e-predictor] .. note::
[e2e-predictor]
[e2e-predictor] `release_conn` will only behave as expected if
[e2e-predictor] `preload_content=False` because we want to make
[e2e-predictor] `preload_content=False` the default behaviour someday soon without
[e2e-predictor] breaking backwards compatibility.
[e2e-predictor]
[e2e-predictor] :param method:
[e2e-predictor] HTTP request method (such as GET, POST, PUT, etc.)
[e2e-predictor]
[e2e-predictor] :param url:
[e2e-predictor] The URL to perform the request on.
[e2e-predictor]
[e2e-predictor] :param body:
[e2e-predictor] Data to send in the request body, either :class:`str`, :class:`bytes`,
[e2e-predictor] an iterable of :class:`str`/:class:`bytes`, or a file-like object.
[e2e-predictor]
[e2e-predictor] :param headers:
[e2e-predictor] Dictionary of custom headers to send, such as User-Agent,
[e2e-predictor] If-None-Match, etc. If None, pool headers are used. If provided,
[e2e-predictor] these headers completely replace any pool-specific headers.
[e2e-predictor]
[e2e-predictor] :param retries:
[e2e-predictor] Configure the number of retries to allow before raising a
[e2e-predictor] :class:`~urllib3.exceptions.MaxRetryError` exception.
[e2e-predictor]
[e2e-predictor] If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a
[e2e-predictor] :class:`~urllib3.util.retry.Retry` object for fine-grained control
[e2e-predictor] over different types of retries.
[e2e-predictor] Pass an integer number to retry connection errors that many times,
[e2e-predictor] but no other types of errors. Pass zero to never retry.
[e2e-predictor]
[e2e-predictor] If ``False``, then retries are disabled and any exception is raised
[e2e-predictor] immediately. Also, instead of raising a MaxRetryError on redirects,
[e2e-predictor] the redirect response will be returned.
[e2e-predictor]
[e2e-predictor] :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
[e2e-predictor]
[e2e-predictor] :param redirect:
[e2e-predictor] If True, automatically handle redirects (status codes 301, 302,
[e2e-predictor] 303, 307, 308). Each redirect counts as a retry. Disabling retries
[e2e-predictor] will disable redirect, too.
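Editor's aside: the `:param retries:` semantics quoted above can be confirmed offline. `urlopen` normalizes int/None/False values through `Retry.from_int()` (visible later in this same listing), and `retries=False` makes the original error re-raise immediately instead of being wrapped in `MaxRetryError`. A minimal sketch, not part of this CI run:

```python
# Sketch: how the `retries` argument is coerced, per the docstring above.
from urllib3.util.retry import Retry

# An integer becomes a Retry whose `total` counts connection errors.
r = Retry.from_int(2)
print(r.total)  # 2

# False disables retries entirely: total is literally False, and
# increment() re-raises the underlying error rather than counting it.
disabled = Retry.from_int(False)
print(disabled.total)  # False

try:
    disabled.increment(error=ValueError("boom"))
except ValueError as exc:
    print(exc)  # boom
```

This is why the test above made exactly four attempts (initial try plus `Retry(total=3)` decrements) before `MaxRetryError` surfaced.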
[e2e-predictor]
[e2e-predictor] :param assert_same_host:
[e2e-predictor] If ``True``, will make sure that the host of the pool requests is
[e2e-predictor] consistent else will raise HostChangedError. When ``False``, you can
[e2e-predictor] use the pool on an HTTP proxy and request foreign hosts.
[e2e-predictor]
[e2e-predictor] :param timeout:
[e2e-predictor] If specified, overrides the default timeout for this one
[e2e-predictor] request. It may be a float (in seconds) or an instance of
[e2e-predictor] :class:`urllib3.util.Timeout`.
[e2e-predictor]
[e2e-predictor] :param pool_timeout:
[e2e-predictor] If set and the pool is set to block=True, then this method will
[e2e-predictor] block for ``pool_timeout`` seconds and raise EmptyPoolError if no
[e2e-predictor] connection is available within the time period.
[e2e-predictor]
[e2e-predictor] :param bool preload_content:
[e2e-predictor] If True, the response's body will be preloaded into memory.
[e2e-predictor]
[e2e-predictor] :param bool decode_content:
[e2e-predictor] If True, will attempt to decode the body based on the
[e2e-predictor] 'content-encoding' header.
[e2e-predictor]
[e2e-predictor] :param release_conn:
[e2e-predictor] If False, then the urlopen call will not release the connection
[e2e-predictor] back into the pool once a response is received (but will release if
[e2e-predictor] you read the entire contents of the response such as when
[e2e-predictor] `preload_content=True`). This is useful if you're not preloading
[e2e-predictor] the response's content immediately. You will need to call
[e2e-predictor] ``r.release_conn()`` on the response ``r`` to return the connection
[e2e-predictor] back into the pool. If None, it takes the value of ``preload_content``
[e2e-predictor] which defaults to ``True``.
[e2e-predictor]
[e2e-predictor] :param bool chunked:
[e2e-predictor] If True, urllib3 will send the body using chunked transfer
[e2e-predictor] encoding. Otherwise, urllib3 will send the body using the standard
[e2e-predictor] content-length form. Defaults to False.
[e2e-predictor]
[e2e-predictor] :param int body_pos:
[e2e-predictor] Position to seek to in file-like body in the event of a retry or
[e2e-predictor] redirect. Typically this won't need to be set because urllib3 will
[e2e-predictor] auto-populate the value when needed.
[e2e-predictor] """
[e2e-predictor] parsed_url = parse_url(url)
[e2e-predictor] destination_scheme = parsed_url.scheme
[e2e-predictor]
[e2e-predictor] if headers is None:
[e2e-predictor] headers = self.headers
[e2e-predictor]
[e2e-predictor] if not isinstance(retries, Retry):
[e2e-predictor] retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
[e2e-predictor]
[e2e-predictor] if release_conn is None:
[e2e-predictor] release_conn = preload_content
[e2e-predictor]
[e2e-predictor] # Check host
[e2e-predictor] if assert_same_host and not self.is_same_host(url):
[e2e-predictor] raise HostChangedError(self, url, retries)
[e2e-predictor]
[e2e-predictor] # Ensure that the URL we're connecting to is properly encoded
[e2e-predictor] if url.startswith("/"):
[e2e-predictor] url = to_str(_encode_target(url))
[e2e-predictor] else:
[e2e-predictor] url = to_str(parsed_url.url)
[e2e-predictor]
[e2e-predictor] conn = None
[e2e-predictor]
[e2e-predictor] # Track whether `conn` needs to be released before
[e2e-predictor] # returning/raising/recursing. Update this variable if necessary, and
[e2e-predictor] # leave `release_conn` constant throughout the function. That way, if
[e2e-predictor] # the function recurses, the original value of `release_conn` will be
[e2e-predictor] # passed down into the recursive call, and its value will be respected.
[e2e-predictor] #
[e2e-predictor] # See issue #651 [1] for details.
[e2e-predictor] #
[e2e-predictor] # [1]
[e2e-predictor] release_this_conn = release_conn
[e2e-predictor]
[e2e-predictor] http_tunnel_required = connection_requires_http_tunnel(
[e2e-predictor] self.proxy, self.proxy_config, destination_scheme
[e2e-predictor] )
[e2e-predictor]
[e2e-predictor] # Merge the proxy headers. Only done when not using HTTP CONNECT. We
[e2e-predictor] # have to copy the headers dict so we can safely change it without those
[e2e-predictor] # changes being reflected in anyone else's copy.
[e2e-predictor] if not http_tunnel_required:
[e2e-predictor] headers = headers.copy() # type: ignore[attr-defined]
[e2e-predictor] headers.update(self.proxy_headers) # type: ignore[union-attr]
[e2e-predictor]
[e2e-predictor] # Must keep the exception bound to a separate variable or else Python 3
[e2e-predictor] # complains about UnboundLocalError.
[e2e-predictor] err = None
[e2e-predictor]
[e2e-predictor] # Keep track of whether we cleanly exited the except block. This
[e2e-predictor] # ensures we do proper cleanup in finally.
[e2e-predictor] clean_exit = False
[e2e-predictor]
[e2e-predictor] # Rewind body position, if needed. Record current position
[e2e-predictor] # for future rewinds in the event of a redirect/retry.
[e2e-predictor] body_pos = set_file_position(body, body_pos)
[e2e-predictor]
[e2e-predictor] try:
[e2e-predictor] # Request a connection from the queue.
[e2e-predictor] timeout_obj = self._get_timeout(timeout)
[e2e-predictor] conn = self._get_conn(timeout=pool_timeout)
[e2e-predictor]
[e2e-predictor] conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment]
[e2e-predictor]
[e2e-predictor] # Is this a closed/new connection that requires CONNECT tunnelling?
[e2e-predictor] if self.proxy is not None and http_tunnel_required and conn.is_closed:
[e2e-predictor] try:
[e2e-predictor] self._prepare_proxy(conn)
[e2e-predictor] except (BaseSSLError, OSError, SocketTimeout) as e:
[e2e-predictor] self._raise_timeout(
[e2e-predictor] err=e, url=self.proxy.url, timeout_value=conn.timeout
[e2e-predictor] )
[e2e-predictor] raise
[e2e-predictor]
[e2e-predictor] # If we're going to release the connection in ``finally:``, then
[e2e-predictor] # the response doesn't need to know about the connection. Otherwise
[e2e-predictor] # it will also try to release it and we'll have a double-release
[e2e-predictor] # mess.
[e2e-predictor] response_conn = conn if not release_conn else None
[e2e-predictor]
[e2e-predictor] # Make the request on the HTTPConnection object
[e2e-predictor] > response = self._make_request(
[e2e-predictor] conn,
[e2e-predictor] method,
[e2e-predictor] url,
[e2e-predictor] timeout=timeout_obj,
[e2e-predictor] body=body,
[e2e-predictor] headers=headers,
[e2e-predictor] chunked=chunked,
[e2e-predictor] retries=retries,
[e2e-predictor] response_conn=response_conn,
[e2e-predictor] preload_content=preload_content,
[e2e-predictor] decode_content=decode_content,
[e2e-predictor] **response_kw,
[e2e-predictor] )
[e2e-predictor]
[e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:787:
[e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
[e2e-predictor]
[e2e-predictor] self =
[e2e-predictor] conn =
[e2e-predictor] method = 'POST'
[e2e-predictor] url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
[e2e-predictor] body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io...quests": {"cpu": "50m", "memory": "128Mi"}}, "storageUri": "gs://kfserving-examples/models/xgboost/iris/model.bst"}}}}'
[e2e-predictor] headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'})
[e2e-predictor] retries = Retry(total=0, connect=None, read=None, redirect=None, status=None)
[e2e-predictor] timeout = Timeout(connect=None, read=None, total=None), chunked = False
[e2e-predictor] response_conn = None, preload_content = True, decode_content = True
[e2e-predictor] enforce_content_length = True
[e2e-predictor]
[e2e-predictor] def _make_request(
[e2e-predictor] self,
[e2e-predictor] conn: BaseHTTPConnection,
[e2e-predictor] method: str,
[e2e-predictor] url: str,
[e2e-predictor] body: _TYPE_BODY | None = None,
[e2e-predictor] headers: typing.Mapping[str, str] | None = None,
[e2e-predictor] retries: Retry | None = None,
[e2e-predictor] timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
[e2e-predictor] chunked: bool = False,
[e2e-predictor] response_conn: BaseHTTPConnection | None = None,
[e2e-predictor] preload_content: bool = True,
[e2e-predictor] decode_content: bool = True,
[e2e-predictor] enforce_content_length: bool = True,
[e2e-predictor] ) -> BaseHTTPResponse:
[e2e-predictor] """
[e2e-predictor] Perform a request on a given urllib connection object taken from our
[e2e-predictor] pool.
[e2e-predictor]
[e2e-predictor] :param conn:
[e2e-predictor] a connection from one of our connection pools
[e2e-predictor]
[e2e-predictor] :param method:
[e2e-predictor] HTTP request method (such as GET, POST, PUT, etc.)
[e2e-predictor]
[e2e-predictor] :param url:
[e2e-predictor] The URL to perform the request on.
[e2e-predictor]
[e2e-predictor] :param body:
[e2e-predictor] Data to send in the request body, either :class:`str`, :class:`bytes`,
[e2e-predictor] an iterable of :class:`str`/:class:`bytes`, or a file-like object.
[e2e-predictor]
[e2e-predictor] :param headers:
[e2e-predictor] Dictionary of custom headers to send, such as User-Agent,
[e2e-predictor] If-None-Match, etc. If None, pool headers are used. If provided,
[e2e-predictor] these headers completely replace any pool-specific headers.
[e2e-predictor]
[e2e-predictor] :param retries:
[e2e-predictor] Configure the number of retries to allow before raising a
[e2e-predictor] :class:`~urllib3.exceptions.MaxRetryError` exception.
[e2e-predictor]
[e2e-predictor] Pass ``None`` to retry until you receive a response. Pass a
[e2e-predictor] :class:`~urllib3.util.retry.Retry` object for fine-grained control
[e2e-predictor] over different types of retries.
[e2e-predictor] Pass an integer number to retry connection errors that many times,
[e2e-predictor] but no other types of errors. Pass zero to never retry.
[e2e-predictor]
[e2e-predictor] If ``False``, then retries are disabled and any exception is raised
[e2e-predictor] immediately. Also, instead of raising a MaxRetryError on redirects,
[e2e-predictor] the redirect response will be returned.
[e2e-predictor]
[e2e-predictor] :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
[e2e-predictor]
[e2e-predictor] :param timeout:
[e2e-predictor] If specified, overrides the default timeout for this one
[e2e-predictor] request. It may be a float (in seconds) or an instance of
[e2e-predictor] :class:`urllib3.util.Timeout`.
[e2e-predictor]
[e2e-predictor] :param chunked:
[e2e-predictor] If True, urllib3 will send the body using chunked transfer
[e2e-predictor] encoding. Otherwise, urllib3 will send the body using the standard
[e2e-predictor] content-length form. Defaults to False.
[e2e-predictor]
[e2e-predictor] :param response_conn:
[e2e-predictor] Set this to ``None`` if you will handle releasing the connection or
[e2e-predictor] set the connection to have the response release it.
[e2e-predictor]
[e2e-predictor] :param preload_content:
[e2e-predictor] If True, the response's body will be preloaded during construction.
[e2e-predictor]
[e2e-predictor] :param decode_content:
[e2e-predictor] If True, will attempt to decode the body based on the
[e2e-predictor] 'content-encoding' header.
[e2e-predictor]
[e2e-predictor] :param enforce_content_length:
[e2e-predictor] Enforce content length checking. Body returned by server must match
[e2e-predictor] value of Content-Length header, if present. Otherwise, raise error.
[e2e-predictor] """
[e2e-predictor] self.num_requests += 1
[e2e-predictor]
[e2e-predictor] timeout_obj = self._get_timeout(timeout)
[e2e-predictor] timeout_obj.start_connect()
[e2e-predictor] conn.timeout = Timeout.resolve_default_timeout(timeout_obj.connect_timeout)
[e2e-predictor]
[e2e-predictor] try:
[e2e-predictor] # Trigger any extra validation we need to do.
[e2e-predictor] try:
[e2e-predictor] self._validate_conn(conn)
[e2e-predictor] except (SocketTimeout, BaseSSLError) as e:
[e2e-predictor] self._raise_timeout(err=e, url=url, timeout_value=conn.timeout)
[e2e-predictor] raise
[e2e-predictor]
[e2e-predictor] # _validate_conn() starts the connection to an HTTPS proxy
[e2e-predictor] # so we need to wrap errors with 'ProxyError' here too.
[e2e-predictor] except (
[e2e-predictor] OSError,
[e2e-predictor] NewConnectionError,
[e2e-predictor] TimeoutError,
[e2e-predictor] BaseSSLError,
[e2e-predictor] CertificateError,
[e2e-predictor] SSLError,
[e2e-predictor] ) as e:
[e2e-predictor] new_e: Exception = e
[e2e-predictor] if isinstance(e, (BaseSSLError, CertificateError)):
[e2e-predictor] new_e = SSLError(e)
[e2e-predictor] # If the connection didn't successfully connect to it's proxy
[e2e-predictor] # then there
[e2e-predictor] if isinstance(
[e2e-predictor] new_e, (OSError, NewConnectionError, TimeoutError, SSLError)
[e2e-predictor] ) and (conn and conn.proxy and not conn.has_connected_to_proxy):
[e2e-predictor] new_e = _wrap_proxy_error(new_e, conn.proxy.scheme)
[e2e-predictor] > raise new_e
[e2e-predictor]
[e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:488:
[e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
[e2e-predictor]
[e2e-predictor] self =
[e2e-predictor] conn =
[e2e-predictor] method = 'POST'
[e2e-predictor] url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
[e2e-predictor] body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io...quests": {"cpu": "50m", "memory": "128Mi"}}, "storageUri": "gs://kfserving-examples/models/xgboost/iris/model.bst"}}}}'
[e2e-predictor] headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'})
[e2e-predictor] retries = Retry(total=0, connect=None, read=None, redirect=None, status=None)
[e2e-predictor] timeout = Timeout(connect=None, read=None, total=None), chunked = False
[e2e-predictor] response_conn = None, preload_content = True, decode_content = True
[e2e-predictor] enforce_content_length = True
[e2e-predictor]
[e2e-predictor] def _make_request(
[e2e-predictor] self,
[e2e-predictor] conn: BaseHTTPConnection,
[e2e-predictor] method: str,
[e2e-predictor] url: str,
[e2e-predictor] body: _TYPE_BODY | None = None,
[e2e-predictor] headers: typing.Mapping[str, str] | None = None,
[e2e-predictor] retries: Retry | None = None,
[e2e-predictor] timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
[e2e-predictor] chunked: bool = False,
[e2e-predictor] response_conn: BaseHTTPConnection | None = None,
[e2e-predictor] preload_content: bool = True,
[e2e-predictor] decode_content: bool = True,
[e2e-predictor] enforce_content_length: bool = True,
[e2e-predictor] ) -> BaseHTTPResponse:
[e2e-predictor] """
[e2e-predictor] Perform a request on a given urllib connection object taken from our
[e2e-predictor] pool.
[e2e-predictor]
[e2e-predictor] :param conn:
[e2e-predictor] a connection from one of our connection pools
[e2e-predictor]
[e2e-predictor] :param method:
[e2e-predictor] HTTP request method (such as GET, POST, PUT, etc.)
[e2e-predictor]
[e2e-predictor] :param url:
[e2e-predictor] The URL to perform the request on.
[e2e-predictor]
[e2e-predictor] :param body:
[e2e-predictor] Data to send in the request body, either :class:`str`, :class:`bytes`,
[e2e-predictor] an iterable of :class:`str`/:class:`bytes`, or a file-like object.
[e2e-predictor]
[e2e-predictor] :param headers:
[e2e-predictor] Dictionary of custom headers to send, such as User-Agent,
[e2e-predictor] If-None-Match, etc. If None, pool headers are used. If provided,
[e2e-predictor] these headers completely replace any pool-specific headers.
[e2e-predictor]
[e2e-predictor] :param retries:
[e2e-predictor] Configure the number of retries to allow before raising a
[e2e-predictor] :class:`~urllib3.exceptions.MaxRetryError` exception.
[e2e-predictor]
[e2e-predictor] Pass ``None`` to retry until you receive a response. Pass a
[e2e-predictor] :class:`~urllib3.util.retry.Retry` object for fine-grained control
[e2e-predictor] over different types of retries.
[e2e-predictor] Pass an integer number to retry connection errors that many times,
[e2e-predictor] but no other types of errors. Pass zero to never retry.
[e2e-predictor]
[e2e-predictor] If ``False``, then retries are disabled and any exception is raised
[e2e-predictor] immediately. Also, instead of raising a MaxRetryError on redirects,
[e2e-predictor] the redirect response will be returned.
[e2e-predictor]
[e2e-predictor] :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
[e2e-predictor]
[e2e-predictor] :param timeout:
[e2e-predictor] If specified, overrides the default timeout for this one
[e2e-predictor] request. It may be a float (in seconds) or an instance of
[e2e-predictor] :class:`urllib3.util.Timeout`.
[e2e-predictor]
[e2e-predictor] :param chunked:
[e2e-predictor] If True, urllib3 will send the body using chunked transfer
[e2e-predictor] encoding. Otherwise, urllib3 will send the body using the standard
[e2e-predictor] content-length form. Defaults to False.
[e2e-predictor]
[e2e-predictor] :param response_conn:
[e2e-predictor] Set this to ``None`` if you will handle releasing the connection or
[e2e-predictor] set the connection to have the response release it.
[e2e-predictor]
[e2e-predictor] :param preload_content:
[e2e-predictor] If True, the response's body will be preloaded during construction.
[e2e-predictor]
[e2e-predictor] :param decode_content:
[e2e-predictor] If True, will attempt to decode the body based on the
[e2e-predictor] 'content-encoding' header.
[e2e-predictor]
[e2e-predictor] :param enforce_content_length:
[e2e-predictor] Enforce content length checking. Body returned by server must match
[e2e-predictor] value of Content-Length header, if present. Otherwise, raise error.
[e2e-predictor] """
[e2e-predictor] self.num_requests += 1
[e2e-predictor]
[e2e-predictor] timeout_obj = self._get_timeout(timeout)
[e2e-predictor] timeout_obj.start_connect()
[e2e-predictor] conn.timeout = Timeout.resolve_default_timeout(timeout_obj.connect_timeout)
[e2e-predictor]
[e2e-predictor] try:
[e2e-predictor] # Trigger any extra validation we need to do.
[e2e-predictor] try:
[e2e-predictor] > self._validate_conn(conn)
[e2e-predictor]
[e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:464:
[e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
[e2e-predictor]
[e2e-predictor] self =
[e2e-predictor] conn =
[e2e-predictor]
[e2e-predictor] def _validate_conn(self, conn: BaseHTTPConnection) -> None:
[e2e-predictor] """
[e2e-predictor] Called right before a request is made, after the socket is created.
[e2e-predictor] """
[e2e-predictor] super()._validate_conn(conn)
[e2e-predictor]
[e2e-predictor] # Force connect early to allow us to validate the connection.
[e2e-predictor] if conn.is_closed:
[e2e-predictor] > conn.connect()
[e2e-predictor]
[e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:1093:
[e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
[e2e-predictor]
[e2e-predictor] self =
[e2e-predictor]
[e2e-predictor] def connect(self) -> None:
[e2e-predictor] # Today we don't need to be doing this step before the /actual/ socket
[e2e-predictor] # connection, however in the future we'll need to decide whether to
[e2e-predictor] # create a new socket or re-use an existing "shared" socket as a part
[e2e-predictor] # of the HTTP/2 handshake dance.
[e2e-predictor] if self._tunnel_host is not None and self._tunnel_port is not None: [e2e-predictor] probe_http2_host = self._tunnel_host [e2e-predictor] probe_http2_port = self._tunnel_port [e2e-predictor] else: [e2e-predictor] probe_http2_host = self.host [e2e-predictor] probe_http2_port = self.port [e2e-predictor] [e2e-predictor] # Check if the target origin supports HTTP/2. [e2e-predictor] # If the value comes back as 'None' it means that the current thread [e2e-predictor] # is probing for HTTP/2 support. Otherwise, we're waiting for another [e2e-predictor] # probe to complete, or we get a value right away. [e2e-predictor] target_supports_http2: bool | None [e2e-predictor] if "h2" in ssl_.ALPN_PROTOCOLS: [e2e-predictor] target_supports_http2 = http2_probe.acquire_and_get( [e2e-predictor] host=probe_http2_host, port=probe_http2_port [e2e-predictor] ) [e2e-predictor] else: [e2e-predictor] # If HTTP/2 isn't going to be offered it doesn't matter if [e2e-predictor] # the target supports HTTP/2. Don't want to make a probe. [e2e-predictor] target_supports_http2 = False [e2e-predictor] [e2e-predictor] if self._connect_callback is not None: [e2e-predictor] self._connect_callback( [e2e-predictor] "before connect", [e2e-predictor] thread_id=threading.get_ident(), [e2e-predictor] target_supports_http2=target_supports_http2, [e2e-predictor] ) [e2e-predictor] [e2e-predictor] try: [e2e-predictor] sock: socket.socket | ssl.SSLSocket [e2e-predictor] > self.sock = sock = self._new_conn() [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connection.py:759: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] [e2e-predictor] def _new_conn(self) -> socket.socket: [e2e-predictor] """Establish a socket connection and set nodelay settings on it. [e2e-predictor] [e2e-predictor] :return: New socket connection. 
[e2e-predictor] """ [e2e-predictor] try: [e2e-predictor] sock = connection.create_connection( [e2e-predictor] (self._dns_host, self.port), [e2e-predictor] self.timeout, [e2e-predictor] source_address=self.source_address, [e2e-predictor] socket_options=self.socket_options, [e2e-predictor] ) [e2e-predictor] except socket.gaierror as e: [e2e-predictor] > raise NameResolutionError(self.host, self, e) from e [e2e-predictor] E urllib3.exceptions.NameResolutionError: HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known) [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connection.py:211: NameResolutionError [e2e-predictor] [e2e-predictor] The above exception was the direct cause of the following exception: [e2e-predictor] [e2e-predictor] rest_v2_client = [e2e-predictor] [e2e-predictor] @pytest.mark.predictor [e2e-predictor] @pytest.mark.path_based_routing [e2e-predictor] @pytest.mark.asyncio(scope="session") [e2e-predictor] async def test_xgboost_single_model_file(rest_v2_client): [e2e-predictor] service_name = "xgboost-v2-mlserver" [e2e-predictor] protocol_version = "v2" [e2e-predictor] [e2e-predictor] predictor = V1beta1PredictorSpec( [e2e-predictor] min_replicas=1, [e2e-predictor] xgboost=V1beta1XGBoostSpec( [e2e-predictor] storage_uri="gs://kfserving-examples/models/xgboost/iris/model.bst", [e2e-predictor] env=[V1EnvVar(name="MLSERVER_MODEL_PARALLEL_WORKERS", value="0")], [e2e-predictor] protocol_version=protocol_version, [e2e-predictor] resources=V1ResourceRequirements( [e2e-predictor] requests={"cpu": "50m", "memory": "128Mi"}, [e2e-predictor] limits={"cpu": "100m", "memory": "1024Mi"}, [e2e-predictor] ), [e2e-predictor] readiness_probe=client.V1Probe( [e2e-predictor] http_get=client.V1HTTPGetAction( [e2e-predictor] 
                        path=f"/v2/models/{service_name}/ready", port=8080
                    ),
                    initial_delay_seconds=30,
                ),
            ),
        )

        isvc = V1beta1InferenceService(
            api_version=constants.KSERVE_V1BETA1,
            kind=constants.KSERVE_KIND_INFERENCESERVICE,
            metadata=client.V1ObjectMeta(
                name=service_name,
                namespace=KSERVE_TEST_NAMESPACE,
                labels={
                    constants.KSERVE_LABEL_NETWORKING_VISIBILITY: constants.KSERVE_LABEL_NETWORKING_VISIBILITY_EXPOSED,
                },
            ),
            spec=V1beta1InferenceServiceSpec(predictor=predictor),
        )

        kserve_client = KServeClient(
            config_file=os.environ.get("KUBECONFIG", "~/.kube/config")
        )
>       kserve_client.create(isvc)

predictor/test_xgboost.py:172:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
inferenceservice = {'api_version': 'serving.kserve.io/v1beta1',
 'kind': 'InferenceService',
 'metadata': {'annotations': None,
 ...nts': None,
              'working_dir': None}},
 'transformer': None},
 'status': None}
namespace = 'kserve-ci-e2e-test', watch = False, timeout_seconds = 600

    def create(
        self, inferenceservice, namespace=None, watch=False, timeout_seconds=600
    ):  # pylint:disable=inconsistent-return-statements
        """
        Create the inference service
        :param inferenceservice: inference service object
        :param namespace: defaults to current or default namespace
        :param watch: True to watch the created service until timeout elapsed or status is ready
        :param timeout_seconds: timeout seconds for watch, default to 600s
        :return: created inference service
        """

        version = inferenceservice.api_version.split("/")[1]

        if namespace is None:
            namespace = utils.get_isvc_namespace(inferenceservice)

        try:
>           outputs = self.api_instance.create_namespaced_custom_object(
                constants.KSERVE_GROUP,
                version,
                namespace,
                constants.KSERVE_PLURAL_INFERENCESERVICE,
                inferenceservice,
            )

../../python/kserve/kserve/api/kserve_client.py:145:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
group = 'serving.kserve.io', version = 'v1beta1'
namespace = 'kserve-ci-e2e-test', plural = 'inferenceservices'
body = {'api_version': 'serving.kserve.io/v1beta1',
 'kind': 'InferenceService',
 'metadata': {'annotations': None,
 ...nts': None,
              'working_dir': None}},
 'transformer': None},
 'status': None}
kwargs = {'_return_http_data_only': True}

    def create_namespaced_custom_object(self, group, version, namespace, plural, body, **kwargs):  # noqa: E501
        """create_namespaced_custom_object  # noqa: E501

        Creates a namespace scoped Custom object  # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.create_namespaced_custom_object(group, version, namespace, plural, body, async_req=True)
        >>> result = thread.get()

        :param async_req bool: execute request asynchronously
        :param str group: The custom resource's group name (required)
        :param str version: The custom resource's version (required)
        :param str namespace: The custom resource's namespace (required)
        :param str plural: The custom resource's plural name. For TPRs this would be lowercase plural kind. (required)
        :param object body: The JSON schema of the Resource to create. (required)
        :param str pretty: If 'true', then the output is pretty printed.
        :param str dry_run: When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed
        :param str field_manager: fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint.
        :param str field_validation: fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. (optional)
        :param _preload_content: if False, the urllib3.HTTPResponse object will
                                 be returned without reading/decoding response
                                 data. Default is True.
        :param _request_timeout: timeout setting for this request. If one
                                 number provided, it will be total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
        :return: object
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
>       return self.create_namespaced_custom_object_with_http_info(group, version, namespace, plural, body, **kwargs)  # noqa: E501

../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api/custom_objects_api.py:231:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
group = 'serving.kserve.io', version = 'v1beta1'
namespace = 'kserve-ci-e2e-test', plural = 'inferenceservices'
body = {'api_version': 'serving.kserve.io/v1beta1',
 'kind': 'InferenceService',
 'metadata': {'annotations': None,
 ...nts': None,
              'working_dir': None}},
 'transformer': None},
 'status': None}
kwargs = {'_return_http_data_only': True}
local_var_params = {'_return_http_data_only': True, 'all_params': ['group', 'version', 'namespace', 'plural', 'body', 'pretty', ...], 'au...None,
 'working_dir': None}},
 'transformer': None},
 'status': None}, ...}
all_params = ['group', 'version', 'namespace', 'plural', 'body', 'pretty', ...]
key = '_return_http_data_only', val = True, collection_formats = {}
path_params = {'group': 'serving.kserve.io', 'namespace': 'kserve-ci-e2e-test', 'plural': 'inferenceservices', 'version': 'v1beta1'}
query_params = []

    def create_namespaced_custom_object_with_http_info(self, group, version, namespace, plural, body, **kwargs):  # noqa: E501
        """create_namespaced_custom_object  # noqa: E501

        Creates a namespace scoped Custom object  # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.create_namespaced_custom_object_with_http_info(group, version, namespace, plural, body, async_req=True)
        >>> result = thread.get()

        :param async_req bool: execute request asynchronously
        :param str group: The custom resource's group name (required)
        :param str version: The custom resource's version (required)
        :param str namespace: The custom resource's namespace (required)
        :param str plural: The custom resource's plural name. For TPRs this would be lowercase plural kind. (required)
        :param object body: The JSON schema of the Resource to create. (required)
        :param str pretty: If 'true', then the output is pretty printed.
        :param str dry_run: When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed
        :param str field_manager: fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint.
        :param str field_validation: fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. (optional)
        :param _return_http_data_only: response data without head status code
                                       and headers
        :param _preload_content: if False, the urllib3.HTTPResponse object will
                                 be returned without reading/decoding response
                                 data. Default is True.
        :param _request_timeout: timeout setting for this request. If one
                                 number provided, it will be total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
        :return: tuple(object, status_code(int), headers(HTTPHeaderDict))
                 If the method is called asynchronously,
                 returns the request thread.
        """

        local_var_params = locals()

        all_params = [
            'group',
            'version',
            'namespace',
            'plural',
            'body',
            'pretty',
            'dry_run',
            'field_manager',
            'field_validation'
        ]
        all_params.extend(
            [
                'async_req',
                '_return_http_data_only',
                '_preload_content',
                '_request_timeout'
            ]
        )

        for key, val in six.iteritems(local_var_params['kwargs']):
            if key not in all_params:
                raise ApiTypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method create_namespaced_custom_object" % key
                )
            local_var_params[key] = val
        del local_var_params['kwargs']
        # verify the required parameter 'group' is set
        if self.api_client.client_side_validation and ('group' not in local_var_params or  # noqa: E501
                                                       local_var_params['group'] is None):  # noqa: E501
            raise ApiValueError("Missing the required parameter `group` when calling `create_namespaced_custom_object`")  # noqa: E501
        # verify the required parameter 'version' is set
        if self.api_client.client_side_validation and ('version' not in local_var_params or  # noqa: E501
                                                       local_var_params['version'] is None):  # noqa: E501
            raise ApiValueError("Missing the required parameter `version` when calling `create_namespaced_custom_object`")  # noqa: E501
        # verify the required parameter 'namespace' is set
        if self.api_client.client_side_validation and ('namespace' not in local_var_params or  # noqa: E501
                                                       local_var_params['namespace'] is None):  # noqa: E501
            raise ApiValueError("Missing the required parameter `namespace` when calling `create_namespaced_custom_object`")  # noqa: E501
        # verify the required parameter 'plural' is set
        if self.api_client.client_side_validation and ('plural' not in local_var_params or  # noqa: E501
                                                       local_var_params['plural'] is None):  # noqa: E501
            raise ApiValueError("Missing the required parameter `plural` when calling `create_namespaced_custom_object`")  # noqa: E501
        # verify the required parameter 'body' is set
        if self.api_client.client_side_validation and ('body' not in local_var_params or  # noqa: E501
                                                       local_var_params['body'] is None):  # noqa: E501
            raise ApiValueError("Missing the required parameter `body` when calling `create_namespaced_custom_object`")  # noqa: E501

        collection_formats = {}

        path_params = {}
        if 'group' in local_var_params:
            path_params['group'] = local_var_params['group']  # noqa: E501
        if 'version' in local_var_params:
            path_params['version'] = local_var_params['version']  # noqa: E501
        if 'namespace' in local_var_params:
            path_params['namespace'] = local_var_params['namespace']  # noqa: E501
        if 'plural' in local_var_params:
            path_params['plural'] = local_var_params['plural']  # noqa: E501

        query_params = []
        if 'pretty' in local_var_params and local_var_params['pretty'] is not None:  # noqa: E501
            query_params.append(('pretty', local_var_params['pretty']))  # noqa: E501
        if 'dry_run' in local_var_params and local_var_params['dry_run'] is not None:  # noqa: E501
            query_params.append(('dryRun', local_var_params['dry_run']))  # noqa: E501
        if 'field_manager' in local_var_params and local_var_params['field_manager'] is not None:  # noqa: E501
            query_params.append(('fieldManager', local_var_params['field_manager']))  # noqa: E501
        if 'field_validation' in local_var_params and local_var_params['field_validation'] is not None:  # noqa: E501
            query_params.append(('fieldValidation', local_var_params['field_validation']))  # noqa: E501

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None
        if 'body' in local_var_params:
            body_params = local_var_params['body']
        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.select_header_accept(
            ['application/json'])  # noqa: E501

        # Authentication setting
        auth_settings = ['BearerToken']  # noqa: E501

>       return self.api_client.call_api(
            '/apis/{group}/{version}/namespaces/{namespace}/{plural}', 'POST',
            path_params,
            query_params,
            header_params,
            body=body_params,
            post_params=form_params,
            files=local_var_files,
            response_type='object',  # noqa: E501
            auth_settings=auth_settings,
            async_req=local_var_params.get('async_req'),
            _return_http_data_only=local_var_params.get('_return_http_data_only'),  # noqa: E501
            _preload_content=local_var_params.get('_preload_content', True),
            _request_timeout=local_var_params.get('_request_timeout'),
            collection_formats=collection_formats)
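The `call_api` invocation in the frame above hands the kubernetes client a path template, `/apis/{group}/{version}/namespaces/{namespace}/{plural}`, plus the `path_params` dict, and `ApiClient.__call_api` fills each `{name}` placeholder with the percent-encoded value. A minimal sketch of that substitution step (the helper name `render_path` is illustrative, not part of the kubernetes client API):

```python
from urllib.parse import quote


def render_path(template: str, path_params: dict, safe: str = "") -> str:
    """Fill each {name} placeholder with its percent-encoded value,
    mirroring the path substitution done in ApiClient.__call_api."""
    for k, v in path_params.items():
        template = template.replace("{%s}" % k, quote(str(v), safe=safe))
    return template


url = render_path(
    "/apis/{group}/{version}/namespaces/{namespace}/{plural}",
    {
        "group": "serving.kserve.io",
        "version": "v1beta1",
        "namespace": "kserve-ci-e2e-test",
        "plural": "inferenceservices",
    },
)
print(url)  # /apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices
```

This rendered resource path is exactly the one visible in the later `__call_api` frame; the client then prepends `configuration.host` (the cluster's API server URL) before issuing the POST.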
../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api/custom_objects_api.py:354:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
resource_path = '/apis/{group}/{version}/namespaces/{namespace}/{plural}'
method = 'POST'
path_params = {'group': 'serving.kserve.io', 'namespace': 'kserve-ci-e2e-test', 'plural': 'inferenceservices', 'version': 'v1beta1'}
query_params = []
header_params = {'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
body = {'api_version': 'serving.kserve.io/v1beta1',
 'kind': 'InferenceService',
 'metadata': {'annotations': None,
 ...nts': None,
              'working_dir': None}},
 'transformer': None},
 'status': None}
post_params = [], files = {}, response_type = 'object'
auth_settings = ['BearerToken'], async_req = None, _return_http_data_only = True
collection_formats = {}, _preload_content = True, _request_timeout = None
_host = None

    def call_api(self, resource_path, method,
                 path_params=None, query_params=None, header_params=None,
                 body=None, post_params=None, files=None,
                 response_type=None, auth_settings=None, async_req=None,
                 _return_http_data_only=None, collection_formats=None,
                 _preload_content=True, _request_timeout=None, _host=None):
        """Makes the HTTP request (synchronous) and returns deserialized data.

        To make an async_req request, set the async_req parameter.

        :param resource_path: Path to method endpoint.
        :param method: Method to call.
        :param path_params: Path parameters in the url.
        :param query_params: Query parameters in the url.
        :param header_params: Header parameters to be
            placed in the request header.
        :param body: Request body.
        :param post_params dict: Request post form parameters,
            for `application/x-www-form-urlencoded`, `multipart/form-data`.
        :param auth_settings list: Auth Settings names for the request.
        :param response: Response data type.
        :param files dict: key -> filename, value -> filepath,
            for `multipart/form-data`.
        :param async_req bool: execute request asynchronously
        :param _return_http_data_only: response data without head status code
                                       and headers
        :param collection_formats: dict of collection formats for path, query,
            header, and post parameters.
        :param _preload_content: if False, the urllib3.HTTPResponse object will
                                 be returned without reading/decoding response
                                 data. Default is True.
        :param _request_timeout: timeout setting for this request. If one
                                 number provided, it will be total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
        :return:
            If async_req parameter is True,
            the request will be called asynchronously.
            The method will return the request thread.
            If parameter async_req is False or missing,
            then the method will return the response directly.
        """
        if not async_req:
>           return self.__call_api(resource_path, method,
                                   path_params, query_params, header_params,
                                   body, post_params, files,
                                   response_type, auth_settings,
                                   _return_http_data_only, collection_formats,
                                   _preload_content, _request_timeout, _host)

../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api_client.py:348:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
resource_path = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
method = 'POST'
path_params = [('group', 'serving.kserve.io'), ('version', 'v1beta1'), ('namespace', 'kserve-ci-e2e-test'), ('plural', 'inferenceservices')]
query_params = []
header_params = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
body = {'apiVersion': 'serving.kserve.io/v1beta1', 'kind': 'InferenceService', 'metadata': {'labels': {'networking.kserve.io/...Probe': {'httpGet': {'path': '/v2/models/xgboost-v2-mlserver/ready', 'port': 8080}, 'initialDelaySeconds': 30}, ...}}}}
post_params = [], files = {}, response_type = 'object'
auth_settings = ['BearerToken'], _return_http_data_only = True
collection_formats = {}, _preload_content = True, _request_timeout = None
_host = None

    def __call_api(
            self, resource_path, method, path_params=None,
            query_params=None, header_params=None, body=None, post_params=None,
            files=None, response_type=None, auth_settings=None,
            _return_http_data_only=None, collection_formats=None,
            _preload_content=True, _request_timeout=None, _host=None):

        config = self.configuration

        # header parameters
        header_params = header_params or {}
        header_params.update(self.default_headers)
        if self.cookie:
            header_params['Cookie'] = self.cookie
        if header_params:
            header_params = self.sanitize_for_serialization(header_params)
            header_params = dict(self.parameters_to_tuples(header_params,
                                                           collection_formats))

        # path parameters
        if path_params:
            path_params = self.sanitize_for_serialization(path_params)
            path_params = self.parameters_to_tuples(path_params,
                                                    collection_formats)
            for k, v in path_params:
                # specified safe chars, encode everything
                resource_path = resource_path.replace(
                    '{%s}' % k,
                    quote(str(v), safe=config.safe_chars_for_path_param)
                )

        # query parameters
        if query_params:
            query_params = self.sanitize_for_serialization(query_params)
            query_params = self.parameters_to_tuples(query_params,
                                                     collection_formats)

        # post parameters
        if post_params or files:
            post_params = post_params if post_params else []
            post_params = self.sanitize_for_serialization(post_params)
            post_params = self.parameters_to_tuples(post_params,
                                                    collection_formats)
            post_params.extend(self.files_parameters(files))

        # auth setting
        self.update_params_for_auth(header_params, query_params, auth_settings)

        # body
        if body:
            body = self.sanitize_for_serialization(body)

        # request url
        if _host is None:
            url = self.configuration.host + resource_path
        else:
            # use server/host defined in path or operation instead
            url = _host + resource_path

        # perform request and return response
>       response_data = self.request(
            method, url, query_params=query_params, headers=header_params,
            post_params=post_params, body=body,
            _preload_content=_preload_content,
            _request_timeout=_request_timeout)

../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api_client.py:180:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
method = 'POST'
url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
query_params = []
headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
post_params = []
body = {'apiVersion': 'serving.kserve.io/v1beta1', 'kind': 'InferenceService', 'metadata': {'labels': {'networking.kserve.io/...Probe': {'httpGet': {'path': '/v2/models/xgboost-v2-mlserver/ready', 'port': 8080}, 'initialDelaySeconds': 30}, ...}}}}
_preload_content = True, _request_timeout = None

    def request(self, method, url, query_params=None, headers=None,
                post_params=None, body=None, _preload_content=True,
                _request_timeout=None):
        """Makes the HTTP request using RESTClient."""
        if method == "GET":
            return self.rest_client.GET(url,
                                        query_params=query_params,
                                        _preload_content=_preload_content,
                                        _request_timeout=_request_timeout,
                                        headers=headers)
        elif method == "HEAD":
            return self.rest_client.HEAD(url,
                                         query_params=query_params,
                                         _preload_content=_preload_content,
                                         _request_timeout=_request_timeout,
                                         headers=headers)
        elif method == "OPTIONS":
            return self.rest_client.OPTIONS(url,
                                            query_params=query_params,
                                            headers=headers,
                                            _preload_content=_preload_content,
                                            _request_timeout=_request_timeout)
        elif method == "POST":
>           return self.rest_client.POST(url,
                                         query_params=query_params,
                                         headers=headers,
                                         post_params=post_params,
                                         _preload_content=_preload_content,
                                         _request_timeout=_request_timeout,

../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api_client.py:391:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
query_params = [], post_params = []
body = {'apiVersion': 'serving.kserve.io/v1beta1', 'kind': 'InferenceService', 'metadata': {'labels': {'networking.kserve.io/...Probe': {'httpGet': {'path': '/v2/models/xgboost-v2-mlserver/ready', 'port':
8080}, 'initialDelaySeconds': 30}, ...}}}} [e2e-predictor] _preload_content = True, _request_timeout = None [e2e-predictor] [e2e-predictor] def POST(self, url, headers=None, query_params=None, post_params=None, [e2e-predictor] body=None, _preload_content=True, _request_timeout=None): [e2e-predictor] > return self.request("POST", url, [e2e-predictor] headers=headers, [e2e-predictor] query_params=query_params, [e2e-predictor] post_params=post_params, [e2e-predictor] _preload_content=_preload_content, [e2e-predictor] _request_timeout=_request_timeout, [e2e-predictor] body=body) [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/rest.py:279: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] method = 'POST' [e2e-predictor] url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices' [e2e-predictor] query_params = [] [e2e-predictor] headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'} [e2e-predictor] body = {'apiVersion': 'serving.kserve.io/v1beta1', 'kind': 'InferenceService', 'metadata': {'labels': {'networking.kserve.io/...Probe': {'httpGet': {'path': '/v2/models/xgboost-v2-mlserver/ready', 'port': 8080}, 'initialDelaySeconds': 30}, ...}}}} [e2e-predictor] post_params = {}, _preload_content = True, _request_timeout = None [e2e-predictor] [e2e-predictor] def request(self, method, url, query_params=None, headers=None, [e2e-predictor] body=None, post_params=None, _preload_content=True, [e2e-predictor] _request_timeout=None): [e2e-predictor] """Perform requests. 
[e2e-predictor] [e2e-predictor] :param method: http request method [e2e-predictor] :param url: http request url [e2e-predictor] :param query_params: query parameters in the url [e2e-predictor] :param headers: http request headers [e2e-predictor] :param body: request json body, for `application/json` [e2e-predictor] :param post_params: request post parameters, [e2e-predictor] `application/x-www-form-urlencoded` [e2e-predictor] and `multipart/form-data` [e2e-predictor] :param _preload_content: if False, the urllib3.HTTPResponse object will [e2e-predictor] be returned without reading/decoding response [e2e-predictor] data. Default is True. [e2e-predictor] :param _request_timeout: timeout setting for this request. If one [e2e-predictor] number provided, it will be total request [e2e-predictor] timeout. It can also be a pair (tuple) of [e2e-predictor] (connection, read) timeouts. [e2e-predictor] """ [e2e-predictor] method = method.upper() [e2e-predictor] assert method in ['GET', 'HEAD', 'DELETE', 'POST', 'PUT', [e2e-predictor] 'PATCH', 'OPTIONS'] [e2e-predictor] [e2e-predictor] if post_params and body: [e2e-predictor] raise ApiValueError( [e2e-predictor] "body parameter cannot be used with post_params parameter." 
[e2e-predictor] ) [e2e-predictor] [e2e-predictor] post_params = post_params or {} [e2e-predictor] headers = headers or {} [e2e-predictor] [e2e-predictor] timeout = None [e2e-predictor] if _request_timeout: [e2e-predictor] if isinstance(_request_timeout, (int, ) if six.PY3 else (int, long)): # noqa: E501,F821 [e2e-predictor] timeout = urllib3.Timeout(total=_request_timeout) [e2e-predictor] elif (isinstance(_request_timeout, tuple) and [e2e-predictor] len(_request_timeout) == 2): [e2e-predictor] timeout = urllib3.Timeout( [e2e-predictor] connect=_request_timeout[0], read=_request_timeout[1]) [e2e-predictor] [e2e-predictor] if 'Content-Type' not in headers: [e2e-predictor] headers['Content-Type'] = 'application/json' [e2e-predictor] [e2e-predictor] try: [e2e-predictor] # For `POST`, `PUT`, `PATCH`, `OPTIONS`, `DELETE` [e2e-predictor] if method in ['POST', 'PUT', 'PATCH', 'OPTIONS', 'DELETE']: [e2e-predictor] if query_params: [e2e-predictor] url += '?' + urlencode(query_params) [e2e-predictor] if (re.search('json', headers['Content-Type'], re.IGNORECASE) or [e2e-predictor] headers['Content-Type'] == 'application/apply-patch+yaml'): [e2e-predictor] if headers['Content-Type'] == 'application/json-patch+json': [e2e-predictor] if not isinstance(body, list): [e2e-predictor] headers['Content-Type'] = \ [e2e-predictor] 'application/strategic-merge-patch+json' [e2e-predictor] request_body = None [e2e-predictor] if body is not None: [e2e-predictor] request_body = json.dumps(body) [e2e-predictor] > r = self.pool_manager.request( [e2e-predictor] method, url, [e2e-predictor] body=request_body, [e2e-predictor] preload_content=_preload_content, [e2e-predictor] timeout=timeout, [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/rest.py:172: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] method = 'POST' [e2e-predictor] url = 
'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices' [e2e-predictor] body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io...quests": {"cpu": "50m", "memory": "128Mi"}}, "storageUri": "gs://kfserving-examples/models/xgboost/iris/model.bst"}}}}' [e2e-predictor] fields = None [e2e-predictor] headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'} [e2e-predictor] json = None [e2e-predictor] urlopen_kw = {'body': '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking...., "storageUri": "gs://kfserving-examples/models/xgboost/iris/model.bst"}}}}', 'preload_content': True, 'timeout': None} [e2e-predictor] [e2e-predictor] def request( [e2e-predictor] self, [e2e-predictor] method: str, [e2e-predictor] url: str, [e2e-predictor] body: _TYPE_BODY | None = None, [e2e-predictor] fields: _TYPE_FIELDS | None = None, [e2e-predictor] headers: typing.Mapping[str, str] | None = None, [e2e-predictor] json: typing.Any | None = None, [e2e-predictor] **urlopen_kw: typing.Any, [e2e-predictor] ) -> BaseHTTPResponse: [e2e-predictor] """ [e2e-predictor] Make a request using :meth:`urlopen` with the appropriate encoding of [e2e-predictor] ``fields`` based on the ``method`` used. [e2e-predictor] [e2e-predictor] This is a convenience method that requires the least amount of manual [e2e-predictor] effort. It can be used in most situations, while still having the [e2e-predictor] option to drop down to more specific methods when necessary, such as [e2e-predictor] :meth:`request_encode_url`, :meth:`request_encode_body`, [e2e-predictor] or even the lowest level :meth:`urlopen`. [e2e-predictor] [e2e-predictor] :param method: [e2e-predictor] HTTP request method (such as GET, POST, PUT, etc.) 
[e2e-predictor] [e2e-predictor] :param url: [e2e-predictor] The URL to perform the request on. [e2e-predictor] [e2e-predictor] :param body: [e2e-predictor] Data to send in the request body, either :class:`str`, :class:`bytes`, [e2e-predictor] an iterable of :class:`str`/:class:`bytes`, or a file-like object. [e2e-predictor] [e2e-predictor] :param fields: [e2e-predictor] Data to encode and send in the URL or request body, depending on ``method``. [e2e-predictor] [e2e-predictor] :param headers: [e2e-predictor] Dictionary of custom headers to send, such as User-Agent, [e2e-predictor] If-None-Match, etc. If None, pool headers are used. If provided, [e2e-predictor] these headers completely replace any pool-specific headers. [e2e-predictor] [e2e-predictor] :param json: [e2e-predictor] Data to encode and send as JSON with UTF-encoded in the request body. [e2e-predictor] The ``"Content-Type"`` header will be set to ``"application/json"`` [e2e-predictor] unless specified otherwise. [e2e-predictor] """ [e2e-predictor] method = method.upper() [e2e-predictor] [e2e-predictor] if json is not None and body is not None: [e2e-predictor] raise TypeError( [e2e-predictor] "request got values for both 'body' and 'json' parameters which are mutually exclusive" [e2e-predictor] ) [e2e-predictor] [e2e-predictor] if json is not None: [e2e-predictor] if headers is None: [e2e-predictor] headers = self.headers [e2e-predictor] [e2e-predictor] if not ("content-type" in map(str.lower, headers.keys())): [e2e-predictor] headers = HTTPHeaderDict(headers) [e2e-predictor] headers["Content-Type"] = "application/json" [e2e-predictor] [e2e-predictor] body = _json.dumps(json, separators=(",", ":"), ensure_ascii=False).encode( [e2e-predictor] "utf-8" [e2e-predictor] ) [e2e-predictor] [e2e-predictor] if body is not None: [e2e-predictor] urlopen_kw["body"] = body [e2e-predictor] [e2e-predictor] if method in self._encode_url_methods: [e2e-predictor] return self.request_encode_url( [e2e-predictor] method, 
[e2e-predictor] url, [e2e-predictor] fields=fields, # type: ignore[arg-type] [e2e-predictor] headers=headers, [e2e-predictor] **urlopen_kw, [e2e-predictor] ) [e2e-predictor] else: [e2e-predictor] > return self.request_encode_body( [e2e-predictor] method, url, fields=fields, headers=headers, **urlopen_kw [e2e-predictor] ) [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/_request_methods.py:143: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] method = 'POST' [e2e-predictor] url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices' [e2e-predictor] fields = None [e2e-predictor] headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'} [e2e-predictor] encode_multipart = True, multipart_boundary = None [e2e-predictor] urlopen_kw = {'body': '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking...., "storageUri": "gs://kfserving-examples/models/xgboost/iris/model.bst"}}}}', 'preload_content': True, 'timeout': None} [e2e-predictor] extra_kw = {'body': '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking....nt': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'}), 'preload_content': True, 'timeout': None} [e2e-predictor] [e2e-predictor] def request_encode_body( [e2e-predictor] self, [e2e-predictor] method: str, [e2e-predictor] url: str, [e2e-predictor] fields: _TYPE_FIELDS | None = None, [e2e-predictor] headers: typing.Mapping[str, str] | None = None, [e2e-predictor] encode_multipart: bool = True, [e2e-predictor] multipart_boundary: str | None = None, [e2e-predictor] **urlopen_kw: str, [e2e-predictor] ) -> 
BaseHTTPResponse: [e2e-predictor] """ [e2e-predictor] Make a request using :meth:`urlopen` with the ``fields`` encoded in [e2e-predictor] the body. This is useful for request methods like POST, PUT, PATCH, etc. [e2e-predictor] [e2e-predictor] When ``encode_multipart=True`` (default), then [e2e-predictor] :func:`urllib3.encode_multipart_formdata` is used to encode [e2e-predictor] the payload with the appropriate content type. Otherwise [e2e-predictor] :func:`urllib.parse.urlencode` is used with the [e2e-predictor] 'application/x-www-form-urlencoded' content type. [e2e-predictor] [e2e-predictor] Multipart encoding must be used when posting files, and it's reasonably [e2e-predictor] safe to use it in other times too. However, it may break request [e2e-predictor] signing, such as with OAuth. [e2e-predictor] [e2e-predictor] Supports an optional ``fields`` parameter of key/value strings AND [e2e-predictor] key/filetuple. A filetuple is a (filename, data, MIME type) tuple where [e2e-predictor] the MIME type is optional. For example:: [e2e-predictor] [e2e-predictor] fields = { [e2e-predictor] 'foo': 'bar', [e2e-predictor] 'fakefile': ('foofile.txt', 'contents of foofile'), [e2e-predictor] 'realfile': ('barfile.txt', open('realfile').read()), [e2e-predictor] 'typedfile': ('bazfile.bin', open('bazfile').read(), [e2e-predictor] 'image/jpeg'), [e2e-predictor] 'nonamefile': 'contents of nonamefile field', [e2e-predictor] } [e2e-predictor] [e2e-predictor] When uploading a file, providing a filename (the first parameter of the [e2e-predictor] tuple) is optional but recommended to best mimic behavior of browsers. [e2e-predictor] [e2e-predictor] Note that if ``headers`` are supplied, the 'Content-Type' header will [e2e-predictor] be overwritten because it depends on the dynamic random boundary string [e2e-predictor] which is used to compose the body of the request. The random boundary [e2e-predictor] string can be explicitly set with the ``multipart_boundary`` parameter. 
[e2e-predictor] [e2e-predictor] :param method: [e2e-predictor] HTTP request method (such as GET, POST, PUT, etc.) [e2e-predictor] [e2e-predictor] :param url: [e2e-predictor] The URL to perform the request on. [e2e-predictor] [e2e-predictor] :param fields: [e2e-predictor] Data to encode and send in the request body. [e2e-predictor] [e2e-predictor] :param headers: [e2e-predictor] Dictionary of custom headers to send, such as User-Agent, [e2e-predictor] If-None-Match, etc. If None, pool headers are used. If provided, [e2e-predictor] these headers completely replace any pool-specific headers. [e2e-predictor] [e2e-predictor] :param encode_multipart: [e2e-predictor] If True, encode the ``fields`` using the multipart/form-data MIME [e2e-predictor] format. [e2e-predictor] [e2e-predictor] :param multipart_boundary: [e2e-predictor] If not specified, then a random boundary will be generated using [e2e-predictor] :func:`urllib3.filepost.choose_boundary`. [e2e-predictor] """ [e2e-predictor] if headers is None: [e2e-predictor] headers = self.headers [e2e-predictor] [e2e-predictor] extra_kw: dict[str, typing.Any] = {"headers": HTTPHeaderDict(headers)} [e2e-predictor] body: bytes | str [e2e-predictor] [e2e-predictor] if fields: [e2e-predictor] if "body" in urlopen_kw: [e2e-predictor] raise TypeError( [e2e-predictor] "request got values for both 'fields' and 'body', can only specify one." 
[e2e-predictor] ) [e2e-predictor] [e2e-predictor] if encode_multipart: [e2e-predictor] body, content_type = encode_multipart_formdata( [e2e-predictor] fields, boundary=multipart_boundary [e2e-predictor] ) [e2e-predictor] else: [e2e-predictor] body, content_type = ( [e2e-predictor] urlencode(fields), # type: ignore[arg-type] [e2e-predictor] "application/x-www-form-urlencoded", [e2e-predictor] ) [e2e-predictor] [e2e-predictor] extra_kw["body"] = body [e2e-predictor] extra_kw["headers"].setdefault("Content-Type", content_type) [e2e-predictor] [e2e-predictor] extra_kw.update(urlopen_kw) [e2e-predictor] [e2e-predictor] > return self.urlopen(method, url, **extra_kw) [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/_request_methods.py:278: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] method = 'POST' [e2e-predictor] url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices' [e2e-predictor] redirect = True [e2e-predictor] kw = {'assert_same_host': False, 'body': '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata...', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'}), 'preload_content': True, ...} [e2e-predictor] u = Url(scheme='https', auth=None, host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', p...443, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices', query=None, fragment=None) [e2e-predictor] conn = [e2e-predictor] [e2e-predictor] def urlopen( # type: ignore[override] [e2e-predictor] self, method: str, url: str, redirect: bool = True, **kw: typing.Any [e2e-predictor] ) -> BaseHTTPResponse: [e2e-predictor] """ [e2e-predictor] Same as 
:meth:`urllib3.HTTPConnectionPool.urlopen` [e2e-predictor] with custom cross-host redirect logic and only sends the request-uri [e2e-predictor] portion of the ``url``. [e2e-predictor] [e2e-predictor] The given ``url`` parameter must be absolute, such that an appropriate [e2e-predictor] :class:`urllib3.connectionpool.ConnectionPool` can be chosen for it. [e2e-predictor] """ [e2e-predictor] u = parse_url(url) [e2e-predictor] [e2e-predictor] if u.scheme is None: [e2e-predictor] warnings.warn( [e2e-predictor] "URLs without a scheme (ie 'https://') are deprecated and will raise an error " [e2e-predictor] "in a future version of urllib3. To avoid this DeprecationWarning ensure all URLs " [e2e-predictor] "start with 'https://' or 'http://'. Read more in this issue: " [e2e-predictor] "https://github.com/urllib3/urllib3/issues/2920", [e2e-predictor] category=DeprecationWarning, [e2e-predictor] stacklevel=2, [e2e-predictor] ) [e2e-predictor] [e2e-predictor] conn = self.connection_from_host(u.host, port=u.port, scheme=u.scheme) [e2e-predictor] [e2e-predictor] kw["assert_same_host"] = False [e2e-predictor] kw["redirect"] = False [e2e-predictor] [e2e-predictor] if "headers" not in kw: [e2e-predictor] kw["headers"] = self.headers [e2e-predictor] [e2e-predictor] if self._proxy_requires_url_absolute_form(u): [e2e-predictor] response = conn.urlopen(method, url, **kw) [e2e-predictor] else: [e2e-predictor] > response = conn.urlopen(method, u.request_uri, **kw) [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/poolmanager.py:457: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] method = 'POST' [e2e-predictor] url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices' [e2e-predictor] body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": 
{"networking.kserve.io...quests": {"cpu": "50m", "memory": "128Mi"}}, "storageUri": "gs://kfserving-examples/models/xgboost/iris/model.bst"}}}}' [e2e-predictor] headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'}) [e2e-predictor] retries = Retry(total=2, connect=None, read=None, redirect=None, status=None) [e2e-predictor] redirect = False, assert_same_host = False, timeout = None, pool_timeout = None [e2e-predictor] release_conn = True, chunked = False, body_pos = None, preload_content = True [e2e-predictor] decode_content = True, response_kw = {} [e2e-predictor] parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices', query=None, fragment=None) [e2e-predictor] destination_scheme = None, conn = None, release_this_conn = True [e2e-predictor] http_tunnel_required = False [e2e-predictor] err = NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.c...a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)") [e2e-predictor] clean_exit = False [e2e-predictor] [e2e-predictor] def urlopen( # type: ignore[override] [e2e-predictor] self, [e2e-predictor] method: str, [e2e-predictor] url: str, [e2e-predictor] body: _TYPE_BODY | None = None, [e2e-predictor] headers: typing.Mapping[str, str] | None = None, [e2e-predictor] retries: Retry | bool | int | None = None, [e2e-predictor] redirect: bool = True, [e2e-predictor] assert_same_host: bool = True, [e2e-predictor] timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, [e2e-predictor] pool_timeout: int | None = None, [e2e-predictor] release_conn: bool | None = None, [e2e-predictor] chunked: bool = False, [e2e-predictor] body_pos: _TYPE_BODY_POSITION | None = None, [e2e-predictor] preload_content: bool = True, [e2e-predictor] 
decode_content: bool = True, [e2e-predictor] **response_kw: typing.Any, [e2e-predictor] ) -> BaseHTTPResponse: [e2e-predictor] """ [e2e-predictor] Get a connection from the pool and perform an HTTP request. This is the [e2e-predictor] lowest level call for making a request, so you'll need to specify all [e2e-predictor] the raw details. [e2e-predictor] [e2e-predictor] .. note:: [e2e-predictor] [e2e-predictor] More commonly, it's appropriate to use a convenience method [e2e-predictor] such as :meth:`request`. [e2e-predictor] [e2e-predictor] .. note:: [e2e-predictor] [e2e-predictor] `release_conn` will only behave as expected if [e2e-predictor] `preload_content=False` because we want to make [e2e-predictor] `preload_content=False` the default behaviour someday soon without [e2e-predictor] breaking backwards compatibility. [e2e-predictor] [e2e-predictor] :param method: [e2e-predictor] HTTP request method (such as GET, POST, PUT, etc.) [e2e-predictor] [e2e-predictor] :param url: [e2e-predictor] The URL to perform the request on. [e2e-predictor] [e2e-predictor] :param body: [e2e-predictor] Data to send in the request body, either :class:`str`, :class:`bytes`, [e2e-predictor] an iterable of :class:`str`/:class:`bytes`, or a file-like object. [e2e-predictor] [e2e-predictor] :param headers: [e2e-predictor] Dictionary of custom headers to send, such as User-Agent, [e2e-predictor] If-None-Match, etc. If None, pool headers are used. If provided, [e2e-predictor] these headers completely replace any pool-specific headers. [e2e-predictor] [e2e-predictor] :param retries: [e2e-predictor] Configure the number of retries to allow before raising a [e2e-predictor] :class:`~urllib3.exceptions.MaxRetryError` exception. [e2e-predictor] [e2e-predictor] If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a [e2e-predictor] :class:`~urllib3.util.retry.Retry` object for fine-grained control [e2e-predictor] over different types of retries. 
[e2e-predictor] Pass an integer number to retry connection errors that many times, [e2e-predictor] but no other types of errors. Pass zero to never retry. [e2e-predictor] [e2e-predictor] If ``False``, then retries are disabled and any exception is raised [e2e-predictor] immediately. Also, instead of raising a MaxRetryError on redirects, [e2e-predictor] the redirect response will be returned. [e2e-predictor] [e2e-predictor] :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. [e2e-predictor] [e2e-predictor] :param redirect: [e2e-predictor] If True, automatically handle redirects (status codes 301, 302, [e2e-predictor] 303, 307, 308). Each redirect counts as a retry. Disabling retries [e2e-predictor] will disable redirect, too. [e2e-predictor] [e2e-predictor] :param assert_same_host: [e2e-predictor] If ``True``, will make sure that the host of the pool requests is [e2e-predictor] consistent else will raise HostChangedError. When ``False``, you can [e2e-predictor] use the pool on an HTTP proxy and request foreign hosts. [e2e-predictor] [e2e-predictor] :param timeout: [e2e-predictor] If specified, overrides the default timeout for this one [e2e-predictor] request. It may be a float (in seconds) or an instance of [e2e-predictor] :class:`urllib3.util.Timeout`. [e2e-predictor] [e2e-predictor] :param pool_timeout: [e2e-predictor] If set and the pool is set to block=True, then this method will [e2e-predictor] block for ``pool_timeout`` seconds and raise EmptyPoolError if no [e2e-predictor] connection is available within the time period. [e2e-predictor] [e2e-predictor] :param bool preload_content: [e2e-predictor] If True, the response's body will be preloaded into memory. [e2e-predictor] [e2e-predictor] :param bool decode_content: [e2e-predictor] If True, will attempt to decode the body based on the [e2e-predictor] 'content-encoding' header. 
[e2e-predictor] [e2e-predictor] :param release_conn: [e2e-predictor] If False, then the urlopen call will not release the connection [e2e-predictor] back into the pool once a response is received (but will release if [e2e-predictor] you read the entire contents of the response such as when [e2e-predictor] `preload_content=True`). This is useful if you're not preloading [e2e-predictor] the response's content immediately. You will need to call [e2e-predictor] ``r.release_conn()`` on the response ``r`` to return the connection [e2e-predictor] back into the pool. If None, it takes the value of ``preload_content`` [e2e-predictor] which defaults to ``True``. [e2e-predictor] [e2e-predictor] :param bool chunked: [e2e-predictor] If True, urllib3 will send the body using chunked transfer [e2e-predictor] encoding. Otherwise, urllib3 will send the body using the standard [e2e-predictor] content-length form. Defaults to False. [e2e-predictor] [e2e-predictor] :param int body_pos: [e2e-predictor] Position to seek to in file-like body in the event of a retry or [e2e-predictor] redirect. Typically this won't need to be set because urllib3 will [e2e-predictor] auto-populate the value when needed. 
[e2e-predictor]         """
[e2e-predictor]         parsed_url = parse_url(url)
[e2e-predictor]         destination_scheme = parsed_url.scheme
[e2e-predictor]
[e2e-predictor]         if headers is None:
[e2e-predictor]             headers = self.headers
[e2e-predictor]
[e2e-predictor]         if not isinstance(retries, Retry):
[e2e-predictor]             retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
[e2e-predictor]
[e2e-predictor]         if release_conn is None:
[e2e-predictor]             release_conn = preload_content
[e2e-predictor]
[e2e-predictor]         # Check host
[e2e-predictor]         if assert_same_host and not self.is_same_host(url):
[e2e-predictor]             raise HostChangedError(self, url, retries)
[e2e-predictor]
[e2e-predictor]         # Ensure that the URL we're connecting to is properly encoded
[e2e-predictor]         if url.startswith("/"):
[e2e-predictor]             url = to_str(_encode_target(url))
[e2e-predictor]         else:
[e2e-predictor]             url = to_str(parsed_url.url)
[e2e-predictor]
[e2e-predictor]         conn = None
[e2e-predictor]
[e2e-predictor]         # Track whether `conn` needs to be released before
[e2e-predictor]         # returning/raising/recursing. Update this variable if necessary, and
[e2e-predictor]         # leave `release_conn` constant throughout the function. That way, if
[e2e-predictor]         # the function recurses, the original value of `release_conn` will be
[e2e-predictor]         # passed down into the recursive call, and its value will be respected.
[e2e-predictor]         #
[e2e-predictor]         # See issue #651 [1] for details.
[e2e-predictor]         #
[e2e-predictor]         # [1]
[e2e-predictor]         release_this_conn = release_conn
[e2e-predictor]
[e2e-predictor]         http_tunnel_required = connection_requires_http_tunnel(
[e2e-predictor]             self.proxy, self.proxy_config, destination_scheme
[e2e-predictor]         )
[e2e-predictor]
[e2e-predictor]         # Merge the proxy headers. Only done when not using HTTP CONNECT. We
[e2e-predictor]         # have to copy the headers dict so we can safely change it without those
[e2e-predictor]         # changes being reflected in anyone else's copy.
[e2e-predictor]         if not http_tunnel_required:
[e2e-predictor]             headers = headers.copy()  # type: ignore[attr-defined]
[e2e-predictor]             headers.update(self.proxy_headers)  # type: ignore[union-attr]
[e2e-predictor]
[e2e-predictor]         # Must keep the exception bound to a separate variable or else Python 3
[e2e-predictor]         # complains about UnboundLocalError.
[e2e-predictor]         err = None
[e2e-predictor]
[e2e-predictor]         # Keep track of whether we cleanly exited the except block. This
[e2e-predictor]         # ensures we do proper cleanup in finally.
[e2e-predictor]         clean_exit = False
[e2e-predictor]
[e2e-predictor]         # Rewind body position, if needed. Record current position
[e2e-predictor]         # for future rewinds in the event of a redirect/retry.
[e2e-predictor]         body_pos = set_file_position(body, body_pos)
[e2e-predictor]
[e2e-predictor]         try:
[e2e-predictor]             # Request a connection from the queue.
[e2e-predictor]             timeout_obj = self._get_timeout(timeout)
[e2e-predictor]             conn = self._get_conn(timeout=pool_timeout)
[e2e-predictor]
[e2e-predictor]             conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]
[e2e-predictor]
[e2e-predictor]             # Is this a closed/new connection that requires CONNECT tunnelling?
[e2e-predictor]             if self.proxy is not None and http_tunnel_required and conn.is_closed:
[e2e-predictor]                 try:
[e2e-predictor]                     self._prepare_proxy(conn)
[e2e-predictor]                 except (BaseSSLError, OSError, SocketTimeout) as e:
[e2e-predictor]                     self._raise_timeout(
[e2e-predictor]                         err=e, url=self.proxy.url, timeout_value=conn.timeout
[e2e-predictor]                     )
[e2e-predictor]                     raise
[e2e-predictor]
[e2e-predictor]             # If we're going to release the connection in ``finally:``, then
[e2e-predictor]             # the response doesn't need to know about the connection. Otherwise
[e2e-predictor]             # it will also try to release it and we'll have a double-release
[e2e-predictor]             # mess.
[e2e-predictor]             response_conn = conn if not release_conn else None
[e2e-predictor]
[e2e-predictor]             # Make the request on the HTTPConnection object
[e2e-predictor]             response = self._make_request(
[e2e-predictor]                 conn,
[e2e-predictor]                 method,
[e2e-predictor]                 url,
[e2e-predictor]                 timeout=timeout_obj,
[e2e-predictor]                 body=body,
[e2e-predictor]                 headers=headers,
[e2e-predictor]                 chunked=chunked,
[e2e-predictor]                 retries=retries,
[e2e-predictor]                 response_conn=response_conn,
[e2e-predictor]                 preload_content=preload_content,
[e2e-predictor]                 decode_content=decode_content,
[e2e-predictor]                 **response_kw,
[e2e-predictor]             )
[e2e-predictor]
[e2e-predictor]             # Everything went great!
[e2e-predictor]             clean_exit = True
[e2e-predictor]
[e2e-predictor]         except EmptyPoolError:
[e2e-predictor]             # Didn't get a connection from the pool, no need to clean up
[e2e-predictor]             clean_exit = True
[e2e-predictor]             release_this_conn = False
[e2e-predictor]             raise
[e2e-predictor]
[e2e-predictor]         except (
[e2e-predictor]             TimeoutError,
[e2e-predictor]             HTTPException,
[e2e-predictor]             OSError,
[e2e-predictor]             ProtocolError,
[e2e-predictor]             BaseSSLError,
[e2e-predictor]             SSLError,
[e2e-predictor]             CertificateError,
[e2e-predictor]             ProxyError,
[e2e-predictor]         ) as e:
[e2e-predictor]             # Discard the connection for these exceptions. It will be
[e2e-predictor]             # replaced during the next _get_conn() call.
[e2e-predictor]             clean_exit = False
[e2e-predictor]             new_e: Exception = e
[e2e-predictor]             if isinstance(e, (BaseSSLError, CertificateError)):
[e2e-predictor]                 new_e = SSLError(e)
[e2e-predictor]             if isinstance(
[e2e-predictor]                 new_e,
[e2e-predictor]                 (
[e2e-predictor]                     OSError,
[e2e-predictor]                     NewConnectionError,
[e2e-predictor]                     TimeoutError,
[e2e-predictor]                     SSLError,
[e2e-predictor]                     HTTPException,
[e2e-predictor]                 ),
[e2e-predictor]             ) and (conn and conn.proxy and not conn.has_connected_to_proxy):
[e2e-predictor]                 new_e = _wrap_proxy_error(new_e, conn.proxy.scheme)
[e2e-predictor]             elif isinstance(new_e, (OSError, HTTPException)):
[e2e-predictor]                 new_e = ProtocolError("Connection aborted.", new_e)
[e2e-predictor]
[e2e-predictor]             retries = retries.increment(
[e2e-predictor]                 method, url, error=new_e, _pool=self, _stacktrace=sys.exc_info()[2]
[e2e-predictor]             )
[e2e-predictor]             retries.sleep()
[e2e-predictor]
[e2e-predictor]             # Keep track of the error for the retry warning.
[e2e-predictor]             err = e
[e2e-predictor]
[e2e-predictor]         finally:
[e2e-predictor]             if not clean_exit:
[e2e-predictor]                 # We hit some kind of exception, handled or otherwise. We need
[e2e-predictor]                 # to throw the connection away unless explicitly told not to.
[e2e-predictor]                 # Close the connection, set the variable to None, and make sure
[e2e-predictor]                 # we put the None back in the pool to avoid leaking it.
[e2e-predictor]                 if conn:
[e2e-predictor]                     conn.close()
[e2e-predictor]                     conn = None
[e2e-predictor]                 release_this_conn = True
[e2e-predictor]
[e2e-predictor]             if release_this_conn:
[e2e-predictor]                 # Put the connection back to be reused. If the connection is
[e2e-predictor]                 # expired then it will be None, which will get replaced with a
[e2e-predictor]                 # fresh connection during _get_conn.
[e2e-predictor]                 self._put_conn(conn)
[e2e-predictor]
[e2e-predictor]         if not conn:
[e2e-predictor]             # Try again
[e2e-predictor]             log.warning(
[e2e-predictor]                 "Retrying (%r) after connection broken by '%r': %s", retries, err, url
[e2e-predictor]             )
[e2e-predictor] >           return self.urlopen(
[e2e-predictor]                 method,
[e2e-predictor]                 url,
[e2e-predictor]                 body,
[e2e-predictor]                 headers,
[e2e-predictor]                 retries,
[e2e-predictor]                 redirect,
[e2e-predictor]                 assert_same_host,
[e2e-predictor]                 timeout=timeout,
[e2e-predictor]                 pool_timeout=pool_timeout,
[e2e-predictor]                 release_conn=release_conn,
[e2e-predictor]                 chunked=chunked,
[e2e-predictor]                 body_pos=body_pos,
[e2e-predictor]                 preload_content=preload_content,
[e2e-predictor]                 decode_content=decode_content,
[e2e-predictor]                 **response_kw,
[e2e-predictor]             )
[e2e-predictor]
[e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:871:
[e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
[e2e-predictor]
[e2e-predictor] self =
[e2e-predictor] method = 'POST'
[e2e-predictor] url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
[e2e-predictor] body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io...quests": {"cpu": "50m", "memory": "128Mi"}}, "storageUri": "gs://kfserving-examples/models/xgboost/iris/model.bst"}}}}'
[e2e-predictor] headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'})
[e2e-predictor] retries = Retry(total=1, connect=None, read=None, redirect=None, status=None)
[e2e-predictor] redirect = False, assert_same_host = False, timeout = None, pool_timeout = None
[e2e-predictor] release_conn = True, chunked = False, body_pos = None, preload_content = True
[e2e-predictor] decode_content = True, response_kw = {}
[e2e-predictor] parsed_url = Url(scheme=None,
auth=None, host=None, port=None, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices', query=None, fragment=None)
[e2e-predictor] destination_scheme = None, conn = None, release_this_conn = True
[e2e-predictor] http_tunnel_required = False
[e2e-predictor] err = NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.c...a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")
[e2e-predictor] clean_exit = False
[e2e-predictor]
[e2e-predictor] >           return self.urlopen(
[e2e-predictor]                 method,
[e2e-predictor]                 url,
[e2e-predictor]                 body,
[e2e-predictor]                 headers,
[e2e-predictor]                 retries,
[e2e-predictor]                 redirect,
[e2e-predictor]                 assert_same_host,
[e2e-predictor]                 timeout=timeout,
[e2e-predictor]                 pool_timeout=pool_timeout,
[e2e-predictor]                 release_conn=release_conn,
[e2e-predictor]                 chunked=chunked,
[e2e-predictor]                 body_pos=body_pos,
[e2e-predictor]                 preload_content=preload_content,
[e2e-predictor]                 decode_content=decode_content,
[e2e-predictor]                 **response_kw,
[e2e-predictor]             )
[e2e-predictor]
[e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:871:
[e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
[e2e-predictor]
[e2e-predictor] self =
[e2e-predictor] method = 'POST'
[e2e-predictor] url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
[e2e-predictor] body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io...quests": {"cpu": "50m", "memory": "128Mi"}}, "storageUri": "gs://kfserving-examples/models/xgboost/iris/model.bst"}}}}'
[e2e-predictor] headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'})
[e2e-predictor] retries = Retry(total=0, connect=None, read=None, redirect=None, status=None)
[e2e-predictor] redirect = False, assert_same_host = False, timeout = None, pool_timeout = None
[e2e-predictor] release_conn = True, chunked = False, body_pos = None, preload_content = True
[e2e-predictor] decode_content = True, response_kw = {}
[e2e-predictor] parsed_url = Url(scheme=None,
auth=None, host=None, port=None, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices', query=None, fragment=None) [e2e-predictor] destination_scheme = None, conn = None, release_this_conn = True [e2e-predictor] http_tunnel_required = False [e2e-predictor] err = NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.c...a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)") [e2e-predictor] clean_exit = False [e2e-predictor] [e2e-predictor] def urlopen( # type: ignore[override] [e2e-predictor] self, [e2e-predictor] method: str, [e2e-predictor] url: str, [e2e-predictor] body: _TYPE_BODY | None = None, [e2e-predictor] headers: typing.Mapping[str, str] | None = None, [e2e-predictor] retries: Retry | bool | int | None = None, [e2e-predictor] redirect: bool = True, [e2e-predictor] assert_same_host: bool = True, [e2e-predictor] timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, [e2e-predictor] pool_timeout: int | None = None, [e2e-predictor] release_conn: bool | None = None, [e2e-predictor] chunked: bool = False, [e2e-predictor] body_pos: _TYPE_BODY_POSITION | None = None, [e2e-predictor] preload_content: bool = True, [e2e-predictor] decode_content: bool = True, [e2e-predictor] **response_kw: typing.Any, [e2e-predictor] ) -> BaseHTTPResponse: [e2e-predictor] """ [e2e-predictor] Get a connection from the pool and perform an HTTP request. This is the [e2e-predictor] lowest level call for making a request, so you'll need to specify all [e2e-predictor] the raw details. [e2e-predictor] [e2e-predictor] .. note:: [e2e-predictor] [e2e-predictor] More commonly, it's appropriate to use a convenience method [e2e-predictor] such as :meth:`request`. [e2e-predictor] [e2e-predictor] .. 
note:: [e2e-predictor] [e2e-predictor] `release_conn` will only behave as expected if [e2e-predictor] `preload_content=False` because we want to make [e2e-predictor] `preload_content=False` the default behaviour someday soon without [e2e-predictor] breaking backwards compatibility. [e2e-predictor] [e2e-predictor] :param method: [e2e-predictor] HTTP request method (such as GET, POST, PUT, etc.) [e2e-predictor] [e2e-predictor] :param url: [e2e-predictor] The URL to perform the request on. [e2e-predictor] [e2e-predictor] :param body: [e2e-predictor] Data to send in the request body, either :class:`str`, :class:`bytes`, [e2e-predictor] an iterable of :class:`str`/:class:`bytes`, or a file-like object. [e2e-predictor] [e2e-predictor] :param headers: [e2e-predictor] Dictionary of custom headers to send, such as User-Agent, [e2e-predictor] If-None-Match, etc. If None, pool headers are used. If provided, [e2e-predictor] these headers completely replace any pool-specific headers. [e2e-predictor] [e2e-predictor] :param retries: [e2e-predictor] Configure the number of retries to allow before raising a [e2e-predictor] :class:`~urllib3.exceptions.MaxRetryError` exception. [e2e-predictor] [e2e-predictor] If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a [e2e-predictor] :class:`~urllib3.util.retry.Retry` object for fine-grained control [e2e-predictor] over different types of retries. [e2e-predictor] Pass an integer number to retry connection errors that many times, [e2e-predictor] but no other types of errors. Pass zero to never retry. [e2e-predictor] [e2e-predictor] If ``False``, then retries are disabled and any exception is raised [e2e-predictor] immediately. Also, instead of raising a MaxRetryError on redirects, [e2e-predictor] the redirect response will be returned. [e2e-predictor] [e2e-predictor] :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 
[e2e-predictor] [e2e-predictor] :param redirect: [e2e-predictor] If True, automatically handle redirects (status codes 301, 302, [e2e-predictor] 303, 307, 308). Each redirect counts as a retry. Disabling retries [e2e-predictor] will disable redirect, too. [e2e-predictor] [e2e-predictor] :param assert_same_host: [e2e-predictor] If ``True``, will make sure that the host of the pool requests is [e2e-predictor] consistent else will raise HostChangedError. When ``False``, you can [e2e-predictor] use the pool on an HTTP proxy and request foreign hosts. [e2e-predictor] [e2e-predictor] :param timeout: [e2e-predictor] If specified, overrides the default timeout for this one [e2e-predictor] request. It may be a float (in seconds) or an instance of [e2e-predictor] :class:`urllib3.util.Timeout`. [e2e-predictor] [e2e-predictor] :param pool_timeout: [e2e-predictor] If set and the pool is set to block=True, then this method will [e2e-predictor] block for ``pool_timeout`` seconds and raise EmptyPoolError if no [e2e-predictor] connection is available within the time period. [e2e-predictor] [e2e-predictor] :param bool preload_content: [e2e-predictor] If True, the response's body will be preloaded into memory. [e2e-predictor] [e2e-predictor] :param bool decode_content: [e2e-predictor] If True, will attempt to decode the body based on the [e2e-predictor] 'content-encoding' header. [e2e-predictor] [e2e-predictor] :param release_conn: [e2e-predictor] If False, then the urlopen call will not release the connection [e2e-predictor] back into the pool once a response is received (but will release if [e2e-predictor] you read the entire contents of the response such as when [e2e-predictor] `preload_content=True`). This is useful if you're not preloading [e2e-predictor] the response's content immediately. You will need to call [e2e-predictor] ``r.release_conn()`` on the response ``r`` to return the connection [e2e-predictor] back into the pool. 
If None, it takes the value of ``preload_content`` [e2e-predictor] which defaults to ``True``. [e2e-predictor] [e2e-predictor] :param bool chunked: [e2e-predictor] If True, urllib3 will send the body using chunked transfer [e2e-predictor] encoding. Otherwise, urllib3 will send the body using the standard [e2e-predictor] content-length form. Defaults to False. [e2e-predictor] [e2e-predictor] :param int body_pos: [e2e-predictor] Position to seek to in file-like body in the event of a retry or [e2e-predictor] redirect. Typically this won't need to be set because urllib3 will [e2e-predictor] auto-populate the value when needed. [e2e-predictor] """ [e2e-predictor] parsed_url = parse_url(url) [e2e-predictor] destination_scheme = parsed_url.scheme [e2e-predictor] [e2e-predictor] if headers is None: [e2e-predictor] headers = self.headers [e2e-predictor] [e2e-predictor] if not isinstance(retries, Retry): [e2e-predictor] retries = Retry.from_int(retries, redirect=redirect, default=self.retries) [e2e-predictor] [e2e-predictor] if release_conn is None: [e2e-predictor] release_conn = preload_content [e2e-predictor] [e2e-predictor] # Check host [e2e-predictor] if assert_same_host and not self.is_same_host(url): [e2e-predictor] raise HostChangedError(self, url, retries) [e2e-predictor] [e2e-predictor] # Ensure that the URL we're connecting to is properly encoded [e2e-predictor] if url.startswith("/"): [e2e-predictor] url = to_str(_encode_target(url)) [e2e-predictor] else: [e2e-predictor] url = to_str(parsed_url.url) [e2e-predictor] [e2e-predictor] conn = None [e2e-predictor] [e2e-predictor] # Track whether `conn` needs to be released before [e2e-predictor] # returning/raising/recursing. Update this variable if necessary, and [e2e-predictor] # leave `release_conn` constant throughout the function. That way, if [e2e-predictor] # the function recurses, the original value of `release_conn` will be [e2e-predictor] # passed down into the recursive call, and its value will be respected. 
[e2e-predictor] # [e2e-predictor] # See issue #651 [1] for details. [e2e-predictor] # [e2e-predictor] # [1] [e2e-predictor] release_this_conn = release_conn [e2e-predictor] [e2e-predictor] http_tunnel_required = connection_requires_http_tunnel( [e2e-predictor] self.proxy, self.proxy_config, destination_scheme [e2e-predictor] ) [e2e-predictor] [e2e-predictor] # Merge the proxy headers. Only done when not using HTTP CONNECT. We [e2e-predictor] # have to copy the headers dict so we can safely change it without those [e2e-predictor] # changes being reflected in anyone else's copy. [e2e-predictor] if not http_tunnel_required: [e2e-predictor] headers = headers.copy() # type: ignore[attr-defined] [e2e-predictor] headers.update(self.proxy_headers) # type: ignore[union-attr] [e2e-predictor] [e2e-predictor] # Must keep the exception bound to a separate variable or else Python 3 [e2e-predictor] # complains about UnboundLocalError. [e2e-predictor] err = None [e2e-predictor] [e2e-predictor] # Keep track of whether we cleanly exited the except block. This [e2e-predictor] # ensures we do proper cleanup in finally. [e2e-predictor] clean_exit = False [e2e-predictor] [e2e-predictor] # Rewind body position, if needed. Record current position [e2e-predictor] # for future rewinds in the event of a redirect/retry. [e2e-predictor] body_pos = set_file_position(body, body_pos) [e2e-predictor] [e2e-predictor] try: [e2e-predictor] # Request a connection from the queue. [e2e-predictor] timeout_obj = self._get_timeout(timeout) [e2e-predictor] conn = self._get_conn(timeout=pool_timeout) [e2e-predictor] [e2e-predictor] conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] [e2e-predictor] [e2e-predictor] # Is this a closed/new connection that requires CONNECT tunnelling? 
[e2e-predictor] if self.proxy is not None and http_tunnel_required and conn.is_closed: [e2e-predictor] try: [e2e-predictor] self._prepare_proxy(conn) [e2e-predictor] except (BaseSSLError, OSError, SocketTimeout) as e: [e2e-predictor] self._raise_timeout( [e2e-predictor] err=e, url=self.proxy.url, timeout_value=conn.timeout [e2e-predictor] ) [e2e-predictor] raise [e2e-predictor] [e2e-predictor] # If we're going to release the connection in ``finally:``, then [e2e-predictor] # the response doesn't need to know about the connection. Otherwise [e2e-predictor] # it will also try to release it and we'll have a double-release [e2e-predictor] # mess. [e2e-predictor] response_conn = conn if not release_conn else None [e2e-predictor] [e2e-predictor] # Make the request on the HTTPConnection object [e2e-predictor] response = self._make_request( [e2e-predictor] conn, [e2e-predictor] method, [e2e-predictor] url, [e2e-predictor] timeout=timeout_obj, [e2e-predictor] body=body, [e2e-predictor] headers=headers, [e2e-predictor] chunked=chunked, [e2e-predictor] retries=retries, [e2e-predictor] response_conn=response_conn, [e2e-predictor] preload_content=preload_content, [e2e-predictor] decode_content=decode_content, [e2e-predictor] **response_kw, [e2e-predictor] ) [e2e-predictor] [e2e-predictor] # Everything went great! [e2e-predictor] clean_exit = True [e2e-predictor] [e2e-predictor] except EmptyPoolError: [e2e-predictor] # Didn't get a connection from the pool, no need to clean up [e2e-predictor] clean_exit = True [e2e-predictor] release_this_conn = False [e2e-predictor] raise [e2e-predictor] [e2e-predictor] except ( [e2e-predictor] TimeoutError, [e2e-predictor] HTTPException, [e2e-predictor] OSError, [e2e-predictor] ProtocolError, [e2e-predictor] BaseSSLError, [e2e-predictor] SSLError, [e2e-predictor] CertificateError, [e2e-predictor] ProxyError, [e2e-predictor] ) as e: [e2e-predictor] # Discard the connection for these exceptions. 
It will be [e2e-predictor] # replaced during the next _get_conn() call. [e2e-predictor] clean_exit = False [e2e-predictor] new_e: Exception = e [e2e-predictor] if isinstance(e, (BaseSSLError, CertificateError)): [e2e-predictor] new_e = SSLError(e) [e2e-predictor] if isinstance( [e2e-predictor] new_e, [e2e-predictor] ( [e2e-predictor] OSError, [e2e-predictor] NewConnectionError, [e2e-predictor] TimeoutError, [e2e-predictor] SSLError, [e2e-predictor] HTTPException, [e2e-predictor] ), [e2e-predictor] ) and (conn and conn.proxy and not conn.has_connected_to_proxy): [e2e-predictor] new_e = _wrap_proxy_error(new_e, conn.proxy.scheme) [e2e-predictor] elif isinstance(new_e, (OSError, HTTPException)): [e2e-predictor] new_e = ProtocolError("Connection aborted.", new_e) [e2e-predictor] [e2e-predictor] retries = retries.increment( [e2e-predictor] method, url, error=new_e, _pool=self, _stacktrace=sys.exc_info()[2] [e2e-predictor] ) [e2e-predictor] retries.sleep() [e2e-predictor] [e2e-predictor] # Keep track of the error for the retry warning. [e2e-predictor] err = e [e2e-predictor] [e2e-predictor] finally: [e2e-predictor] if not clean_exit: [e2e-predictor] # We hit some kind of exception, handled or otherwise. We need [e2e-predictor] # to throw the connection away unless explicitly told not to. [e2e-predictor] # Close the connection, set the variable to None, and make sure [e2e-predictor] # we put the None back in the pool to avoid leaking it. [e2e-predictor] if conn: [e2e-predictor] conn.close() [e2e-predictor] conn = None [e2e-predictor] release_this_conn = True [e2e-predictor] [e2e-predictor] if release_this_conn: [e2e-predictor] # Put the connection back to be reused. If the connection is [e2e-predictor] # expired then it will be None, which will get replaced with a [e2e-predictor] # fresh connection during _get_conn. 
[e2e-predictor]                 self._put_conn(conn)
[e2e-predictor]
[e2e-predictor]         if not conn:
[e2e-predictor]             # Try again
[e2e-predictor]             log.warning(
[e2e-predictor]                 "Retrying (%r) after connection broken by '%r': %s", retries, err, url
[e2e-predictor]             )
[e2e-predictor] >           return self.urlopen(
[e2e-predictor]                 method,
[e2e-predictor]                 url,
[e2e-predictor]                 body,
[e2e-predictor]                 headers,
[e2e-predictor]                 retries,
[e2e-predictor]                 redirect,
[e2e-predictor]                 assert_same_host,
[e2e-predictor]                 timeout=timeout,
[e2e-predictor]                 pool_timeout=pool_timeout,
[e2e-predictor]                 release_conn=release_conn,
[e2e-predictor]                 chunked=chunked,
[e2e-predictor]                 body_pos=body_pos,
[e2e-predictor]                 preload_content=preload_content,
[e2e-predictor]                 decode_content=decode_content,
[e2e-predictor]                 **response_kw,
[e2e-predictor]             )
[e2e-predictor]
[e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:871:
[e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
[e2e-predictor]
[e2e-predictor] self = 
[e2e-predictor] method = 'POST'
[e2e-predictor] url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
[e2e-predictor] body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io...quests": {"cpu": "50m", "memory": "128Mi"}}, "storageUri": "gs://kfserving-examples/models/xgboost/iris/model.bst"}}}}'
[e2e-predictor] headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'})
[e2e-predictor] retries = Retry(total=0, connect=None, read=None, redirect=None, status=None)
[e2e-predictor] redirect = False, assert_same_host = False, timeout = None, pool_timeout = None
[e2e-predictor] release_conn = True, chunked = False, body_pos = None, preload_content = True
[e2e-predictor] decode_content = True, response_kw = {}
[e2e-predictor] parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices', query=None, fragment=None)
[e2e-predictor] destination_scheme = None, conn = None, release_this_conn = True
[e2e-predictor] http_tunnel_required = False, err = None, clean_exit = False
[e2e-predictor]
[e2e-predictor]     def urlopen(  # type: ignore[override]
[e2e-predictor]         self,
[e2e-predictor]         method: str,
[e2e-predictor]         url: str,
[e2e-predictor]         body: _TYPE_BODY | None = None,
[e2e-predictor]         headers: typing.Mapping[str, str] | None = None,
[e2e-predictor]         retries: Retry | bool | int | None = None,
[e2e-predictor]         redirect: bool = True,
[e2e-predictor]         assert_same_host: bool = True,
[e2e-predictor]         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
[e2e-predictor]         pool_timeout: int | None = None,
[e2e-predictor]         release_conn: bool | None = None,
[e2e-predictor]         chunked: bool = False,
[e2e-predictor]         body_pos: _TYPE_BODY_POSITION | None = None,
[e2e-predictor]         preload_content: bool = True,
[e2e-predictor]         decode_content: bool = True,
[e2e-predictor]         **response_kw: typing.Any,
[e2e-predictor]     ) -> BaseHTTPResponse:
[e2e-predictor]         """
[e2e-predictor]         Get a connection from the pool and perform an HTTP request. This is the
[e2e-predictor]         lowest level call for making a request, so you'll need to specify all
[e2e-predictor]         the raw details.
[e2e-predictor]
[e2e-predictor]         .. note::
[e2e-predictor]
[e2e-predictor]            More commonly, it's appropriate to use a convenience method
[e2e-predictor]            such as :meth:`request`.
[e2e-predictor]
[e2e-predictor]         .. note::
[e2e-predictor]
[e2e-predictor]            `release_conn` will only behave as expected if
[e2e-predictor]            `preload_content=False` because we want to make
[e2e-predictor]            `preload_content=False` the default behaviour someday soon without
[e2e-predictor]            breaking backwards compatibility.
[e2e-predictor]
[e2e-predictor]         :param method:
[e2e-predictor]             HTTP request method (such as GET, POST, PUT, etc.)
[e2e-predictor]
[e2e-predictor]         :param url:
[e2e-predictor]             The URL to perform the request on.
[e2e-predictor]
[e2e-predictor]         :param body:
[e2e-predictor]             Data to send in the request body, either :class:`str`, :class:`bytes`,
[e2e-predictor]             an iterable of :class:`str`/:class:`bytes`, or a file-like object.
[e2e-predictor]
[e2e-predictor]         :param headers:
[e2e-predictor]             Dictionary of custom headers to send, such as User-Agent,
[e2e-predictor]             If-None-Match, etc. If None, pool headers are used. If provided,
[e2e-predictor]             these headers completely replace any pool-specific headers.
[e2e-predictor]
[e2e-predictor]         :param retries:
[e2e-predictor]             Configure the number of retries to allow before raising a
[e2e-predictor]             :class:`~urllib3.exceptions.MaxRetryError` exception.
[e2e-predictor]
[e2e-predictor]             If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a
[e2e-predictor]             :class:`~urllib3.util.retry.Retry` object for fine-grained control
[e2e-predictor]             over different types of retries.
[e2e-predictor]             Pass an integer number to retry connection errors that many times,
[e2e-predictor]             but no other types of errors. Pass zero to never retry.
[e2e-predictor]
[e2e-predictor]             If ``False``, then retries are disabled and any exception is raised
[e2e-predictor]             immediately. Also, instead of raising a MaxRetryError on redirects,
[e2e-predictor]             the redirect response will be returned.
[e2e-predictor]
[e2e-predictor]         :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
[e2e-predictor]
[e2e-predictor]         :param redirect:
[e2e-predictor]             If True, automatically handle redirects (status codes 301, 302,
[e2e-predictor]             303, 307, 308). Each redirect counts as a retry. Disabling retries
[e2e-predictor]             will disable redirect, too.
[e2e-predictor]
[e2e-predictor]         :param assert_same_host:
[e2e-predictor]             If ``True``, will make sure that the host of the pool requests is
[e2e-predictor]             consistent else will raise HostChangedError. When ``False``, you can
[e2e-predictor]             use the pool on an HTTP proxy and request foreign hosts.
[e2e-predictor]
[e2e-predictor]         :param timeout:
[e2e-predictor]             If specified, overrides the default timeout for this one
[e2e-predictor]             request. It may be a float (in seconds) or an instance of
[e2e-predictor]             :class:`urllib3.util.Timeout`.
[e2e-predictor]
[e2e-predictor]         :param pool_timeout:
[e2e-predictor]             If set and the pool is set to block=True, then this method will
[e2e-predictor]             block for ``pool_timeout`` seconds and raise EmptyPoolError if no
[e2e-predictor]             connection is available within the time period.
[e2e-predictor]
[e2e-predictor]         :param bool preload_content:
[e2e-predictor]             If True, the response's body will be preloaded into memory.
[e2e-predictor]
[e2e-predictor]         :param bool decode_content:
[e2e-predictor]             If True, will attempt to decode the body based on the
[e2e-predictor]             'content-encoding' header.
[e2e-predictor]
[e2e-predictor]         :param release_conn:
[e2e-predictor]             If False, then the urlopen call will not release the connection
[e2e-predictor]             back into the pool once a response is received (but will release if
[e2e-predictor]             you read the entire contents of the response such as when
[e2e-predictor]             `preload_content=True`). This is useful if you're not preloading
[e2e-predictor]             the response's content immediately. You will need to call
[e2e-predictor]             ``r.release_conn()`` on the response ``r`` to return the connection
[e2e-predictor]             back into the pool. If None, it takes the value of ``preload_content``
[e2e-predictor]             which defaults to ``True``.
[e2e-predictor]
[e2e-predictor]         :param bool chunked:
[e2e-predictor]             If True, urllib3 will send the body using chunked transfer
[e2e-predictor]             encoding. Otherwise, urllib3 will send the body using the standard
[e2e-predictor]             content-length form. Defaults to False.
[e2e-predictor]
[e2e-predictor]         :param int body_pos:
[e2e-predictor]             Position to seek to in file-like body in the event of a retry or
[e2e-predictor]             redirect. Typically this won't need to be set because urllib3 will
[e2e-predictor]             auto-populate the value when needed.
[e2e-predictor]         """
[e2e-predictor]         parsed_url = parse_url(url)
[e2e-predictor]         destination_scheme = parsed_url.scheme
[e2e-predictor]
[e2e-predictor]         if headers is None:
[e2e-predictor]             headers = self.headers
[e2e-predictor]
[e2e-predictor]         if not isinstance(retries, Retry):
[e2e-predictor]             retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
[e2e-predictor]
[e2e-predictor]         if release_conn is None:
[e2e-predictor]             release_conn = preload_content
[e2e-predictor]
[e2e-predictor]         # Check host
[e2e-predictor]         if assert_same_host and not self.is_same_host(url):
[e2e-predictor]             raise HostChangedError(self, url, retries)
[e2e-predictor]
[e2e-predictor]         # Ensure that the URL we're connecting to is properly encoded
[e2e-predictor]         if url.startswith("/"):
[e2e-predictor]             url = to_str(_encode_target(url))
[e2e-predictor]         else:
[e2e-predictor]             url = to_str(parsed_url.url)
[e2e-predictor]
[e2e-predictor]         conn = None
[e2e-predictor]
[e2e-predictor]         # Track whether `conn` needs to be released before
[e2e-predictor]         # returning/raising/recursing. Update this variable if necessary, and
[e2e-predictor]         # leave `release_conn` constant throughout the function. That way, if
[e2e-predictor]         # the function recurses, the original value of `release_conn` will be
[e2e-predictor]         # passed down into the recursive call, and its value will be respected.
[e2e-predictor]         #
[e2e-predictor]         # See issue #651 [1] for details.
[e2e-predictor]         #
[e2e-predictor]         # [1]
[e2e-predictor]         release_this_conn = release_conn
[e2e-predictor]
[e2e-predictor]         http_tunnel_required = connection_requires_http_tunnel(
[e2e-predictor]             self.proxy, self.proxy_config, destination_scheme
[e2e-predictor]         )
[e2e-predictor]
[e2e-predictor]         # Merge the proxy headers. Only done when not using HTTP CONNECT. We
[e2e-predictor]         # have to copy the headers dict so we can safely change it without those
[e2e-predictor]         # changes being reflected in anyone else's copy.
[e2e-predictor]         if not http_tunnel_required:
[e2e-predictor]             headers = headers.copy()  # type: ignore[attr-defined]
[e2e-predictor]             headers.update(self.proxy_headers)  # type: ignore[union-attr]
[e2e-predictor]
[e2e-predictor]         # Must keep the exception bound to a separate variable or else Python 3
[e2e-predictor]         # complains about UnboundLocalError.
[e2e-predictor]         err = None
[e2e-predictor]
[e2e-predictor]         # Keep track of whether we cleanly exited the except block. This
[e2e-predictor]         # ensures we do proper cleanup in finally.
[e2e-predictor]         clean_exit = False
[e2e-predictor]
[e2e-predictor]         # Rewind body position, if needed. Record current position
[e2e-predictor]         # for future rewinds in the event of a redirect/retry.
[e2e-predictor]         body_pos = set_file_position(body, body_pos)
[e2e-predictor]
[e2e-predictor]         try:
[e2e-predictor]             # Request a connection from the queue.
[e2e-predictor]             timeout_obj = self._get_timeout(timeout)
[e2e-predictor]             conn = self._get_conn(timeout=pool_timeout)
[e2e-predictor]
[e2e-predictor]             conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]
[e2e-predictor]
[e2e-predictor]             # Is this a closed/new connection that requires CONNECT tunnelling?
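The trace then descends into urllib3's `Retry.increment`, whose counting logic decrements the `total` budget on every connection error and raises once it is exhausted; that is why the `retries = Retry(total=0, ...)` shown in the locals fails permanently on the next unresolvable lookup. A simplified stand-in for that budget logic (hypothetical `SimpleRetry`/`RetriesExhausted` names, not the real urllib3 API):

```python
class RetriesExhausted(Exception):
    """Raised when the retry budget is spent (stand-in for MaxRetryError)."""


class SimpleRetry:
    """Immutable retry budget, decremented on each connection error."""

    def __init__(self, total):
        self.total = total

    def increment(self, error):
        # Mirrors the shape of Retry.increment: spend one attempt, return a
        # new object, and raise once the budget has gone negative.
        new_total = self.total - 1
        if new_total < 0:
            raise RetriesExhausted(f"max retries exceeded: {error!r}")
        return SimpleRetry(new_total)
```

urllib3's real `Retry` additionally tracks per-category counters (`connect`, `read`, `status`, `other`), redirect handling, and a request history tuple; this sketch keeps only the total budget that exhausts in the log above.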
[e2e-predictor]             if self.proxy is not None and http_tunnel_required and conn.is_closed:
[e2e-predictor]                 try:
[e2e-predictor]                     self._prepare_proxy(conn)
[e2e-predictor]                 except (BaseSSLError, OSError, SocketTimeout) as e:
[e2e-predictor]                     self._raise_timeout(
[e2e-predictor]                         err=e, url=self.proxy.url, timeout_value=conn.timeout
[e2e-predictor]                     )
[e2e-predictor]                     raise
[e2e-predictor]
[e2e-predictor]             # If we're going to release the connection in ``finally:``, then
[e2e-predictor]             # the response doesn't need to know about the connection. Otherwise
[e2e-predictor]             # it will also try to release it and we'll have a double-release
[e2e-predictor]             # mess.
[e2e-predictor]             response_conn = conn if not release_conn else None
[e2e-predictor]
[e2e-predictor]             # Make the request on the HTTPConnection object
[e2e-predictor]             response = self._make_request(
[e2e-predictor]                 conn,
[e2e-predictor]                 method,
[e2e-predictor]                 url,
[e2e-predictor]                 timeout=timeout_obj,
[e2e-predictor]                 body=body,
[e2e-predictor]                 headers=headers,
[e2e-predictor]                 chunked=chunked,
[e2e-predictor]                 retries=retries,
[e2e-predictor]                 response_conn=response_conn,
[e2e-predictor]                 preload_content=preload_content,
[e2e-predictor]                 decode_content=decode_content,
[e2e-predictor]                 **response_kw,
[e2e-predictor]             )
[e2e-predictor]
[e2e-predictor]             # Everything went great!
[e2e-predictor]             clean_exit = True
[e2e-predictor]
[e2e-predictor]         except EmptyPoolError:
[e2e-predictor]             # Didn't get a connection from the pool, no need to clean up
[e2e-predictor]             clean_exit = True
[e2e-predictor]             release_this_conn = False
[e2e-predictor]             raise
[e2e-predictor]
[e2e-predictor]         except (
[e2e-predictor]             TimeoutError,
[e2e-predictor]             HTTPException,
[e2e-predictor]             OSError,
[e2e-predictor]             ProtocolError,
[e2e-predictor]             BaseSSLError,
[e2e-predictor]             SSLError,
[e2e-predictor]             CertificateError,
[e2e-predictor]             ProxyError,
[e2e-predictor]         ) as e:
[e2e-predictor]             # Discard the connection for these exceptions. It will be
[e2e-predictor]             # replaced during the next _get_conn() call.
[e2e-predictor]             clean_exit = False
[e2e-predictor]             new_e: Exception = e
[e2e-predictor]             if isinstance(e, (BaseSSLError, CertificateError)):
[e2e-predictor]                 new_e = SSLError(e)
[e2e-predictor]             if isinstance(
[e2e-predictor]                 new_e,
[e2e-predictor]                 (
[e2e-predictor]                     OSError,
[e2e-predictor]                     NewConnectionError,
[e2e-predictor]                     TimeoutError,
[e2e-predictor]                     SSLError,
[e2e-predictor]                     HTTPException,
[e2e-predictor]                 ),
[e2e-predictor]             ) and (conn and conn.proxy and not conn.has_connected_to_proxy):
[e2e-predictor]                 new_e = _wrap_proxy_error(new_e, conn.proxy.scheme)
[e2e-predictor]             elif isinstance(new_e, (OSError, HTTPException)):
[e2e-predictor]                 new_e = ProtocolError("Connection aborted.", new_e)
[e2e-predictor]
[e2e-predictor] >           retries = retries.increment(
[e2e-predictor]                 method, url, error=new_e, _pool=self, _stacktrace=sys.exc_info()[2]
[e2e-predictor]             )
[e2e-predictor]
[e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:841:
[e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
[e2e-predictor]
[e2e-predictor] self = Retry(total=0, connect=None, read=None, redirect=None, status=None)
[e2e-predictor] method = 'POST'
[e2e-predictor] url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
[e2e-predictor] response = None
[e2e-predictor] error = NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.c...a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")
[e2e-predictor] _pool = 
[e2e-predictor] _stacktrace = 
[e2e-predictor]
[e2e-predictor]     def increment(
[e2e-predictor]         self,
[e2e-predictor]         method: str | None = None,
[e2e-predictor]         url: str | None = None,
[e2e-predictor]         response: BaseHTTPResponse | None = None,
[e2e-predictor]         error: Exception | None = None,
[e2e-predictor]         _pool: ConnectionPool | None = None,
[e2e-predictor]         _stacktrace: TracebackType | None = None,
[e2e-predictor]     ) -> Self:
[e2e-predictor]         """Return a new Retry object with incremented retry counters.
[e2e-predictor]
[e2e-predictor]         :param response: A response object, or None, if the server did not
[e2e-predictor]             return a response.
[e2e-predictor]         :type response: :class:`~urllib3.response.BaseHTTPResponse`
[e2e-predictor]         :param Exception error: An error encountered during the request, or
[e2e-predictor]             None if the response was received successfully.
[e2e-predictor]
[e2e-predictor]         :return: A new ``Retry`` object.
[e2e-predictor]         """
[e2e-predictor]         if self.total is False and error:
[e2e-predictor]             # Disabled, indicate to re-raise the error.
[e2e-predictor]             raise reraise(type(error), error, _stacktrace)
[e2e-predictor]
[e2e-predictor]         total = self.total
[e2e-predictor]         if total is not None:
[e2e-predictor]             total -= 1
[e2e-predictor]
[e2e-predictor]         connect = self.connect
[e2e-predictor]         read = self.read
[e2e-predictor]         redirect = self.redirect
[e2e-predictor]         status_count = self.status
[e2e-predictor]         other = self.other
[e2e-predictor]         cause = "unknown"
[e2e-predictor]         status = None
[e2e-predictor]         redirect_location = None
[e2e-predictor]
[e2e-predictor]         if error and self._is_connection_error(error):
[e2e-predictor]             # Connect retry?
[e2e-predictor]             if connect is False:
[e2e-predictor]                 raise reraise(type(error), error, _stacktrace)
[e2e-predictor]             elif connect is not None:
[e2e-predictor]                 connect -= 1
[e2e-predictor]
[e2e-predictor]         elif error and self._is_read_error(error):
[e2e-predictor]             # Read retry?
[e2e-predictor]             if read is False or method is None or not self._is_method_retryable(method):
[e2e-predictor]                 raise reraise(type(error), error, _stacktrace)
[e2e-predictor]             elif read is not None:
[e2e-predictor]                 read -= 1
[e2e-predictor]
[e2e-predictor]         elif error:
[e2e-predictor]             # Other retry?
[e2e-predictor]             if other is not None:
[e2e-predictor]                 other -= 1
[e2e-predictor]
[e2e-predictor]         elif response and response.get_redirect_location():
[e2e-predictor]             # Redirect retry?
[e2e-predictor]             if redirect is not None:
[e2e-predictor]                 redirect -= 1
[e2e-predictor]             cause = "too many redirects"
[e2e-predictor]             response_redirect_location = response.get_redirect_location()
[e2e-predictor]             if response_redirect_location:
[e2e-predictor]                 redirect_location = response_redirect_location
[e2e-predictor]             status = response.status
[e2e-predictor]
[e2e-predictor]         else:
[e2e-predictor]             # Incrementing because of a server error like a 500 in
[e2e-predictor]             # status_forcelist and the given method is in the allowed_methods
[e2e-predictor]             cause = ResponseError.GENERIC_ERROR
[e2e-predictor]             if response and response.status:
[e2e-predictor]                 if status_count is not None:
[e2e-predictor]                     status_count -= 1
[e2e-predictor]                 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
[e2e-predictor]                 status = response.status
[e2e-predictor]
[e2e-predictor]         history = self.history + (
[e2e-predictor]             RequestHistory(method, url, error, status, redirect_location),
[e2e-predictor]         )
[e2e-predictor]
[e2e-predictor]         new_retry = self.new(
[e2e-predictor]             total=total,
[e2e-predictor]             connect=connect,
[e2e-predictor]             read=read,
[e2e-predictor]             redirect=redirect,
[e2e-predictor]             status=status_count,
[e2e-predictor]             other=other,
[e2e-predictor]             history=history,
[e2e-predictor]         )
[e2e-predictor]
[e2e-predictor]         if new_retry.is_exhausted():
[e2e-predictor]             reason = error or ResponseError(cause)
[e2e-predictor] >           raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
[e2e-predictor] E           urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Max retries exceeded with url: /apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices (Caused by NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)"))
[e2e-predictor]
[e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError
[e2e-predictor] ------------------------------ Captured log call -------------------------------
[e2e-predictor] WARNING  urllib3.connectionpool:connectionpool.py:868 Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")': /apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices
[e2e-predictor] WARNING  urllib3.connectionpool:connectionpool.py:868 Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")': /apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices
[e2e-predictor] WARNING  urllib3.connectionpool:connectionpool.py:868 Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")': /apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices
[e2e-predictor] _________________________ test_xgboost_runtime_kserve __________________________
[e2e-predictor] [gw0] linux -- Python 3.11.13 /workspace/source/python/kserve/.venv/bin/python
[e2e-predictor]
[e2e-predictor] self = 
[e2e-predictor]
[e2e-predictor]     def _new_conn(self) -> socket.socket:
[e2e-predictor]         """Establish a socket connection and set nodelay settings on it.
[e2e-predictor]
[e2e-predictor]         :return: New socket connection.
[e2e-predictor]         """
[e2e-predictor]         try:
[e2e-predictor] >           sock = connection.create_connection(
[e2e-predictor]                 (self._dns_host, self.port),
[e2e-predictor]                 self.timeout,
[e2e-predictor]                 source_address=self.source_address,
[e2e-predictor]                 socket_options=self.socket_options,
[e2e-predictor]             )
[e2e-predictor]
[e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connection.py:204:
[e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
[e2e-predictor]
[e2e-predictor] address = ('a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', 6443)
[e2e-predictor] timeout = None, source_address = None, socket_options = [(6, 1, 1)]
[e2e-predictor]
[e2e-predictor] def create_connection(
[e2e-predictor]     address: tuple[str, int],
[e2e-predictor]     timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
[e2e-predictor]     source_address: tuple[str, int] | None = None,
[e2e-predictor]     socket_options: _TYPE_SOCKET_OPTIONS | None = None,
[e2e-predictor] ) -> socket.socket:
[e2e-predictor]     """Connect to *address* and return the socket object.
[e2e-predictor]
[e2e-predictor]     Convenience function. Connect to *address* (a 2-tuple ``(host,
[e2e-predictor]     port)``) and return the socket object. Passing the optional
[e2e-predictor]     *timeout* parameter will set the timeout on the socket instance
[e2e-predictor]     before attempting to connect.
[e2e-predictor]     If no *timeout* is supplied, the
[e2e-predictor]     global default timeout setting returned by :func:`socket.getdefaulttimeout`
[e2e-predictor]     is used. If *source_address* is set it must be a tuple of (host, port)
[e2e-predictor]     for the socket to bind as a source address before making the connection.
[e2e-predictor]     An host of '' or port 0 tells the OS to use the default.
[e2e-predictor]     """
[e2e-predictor]
[e2e-predictor]     host, port = address
[e2e-predictor]     if host.startswith("["):
[e2e-predictor]         host = host.strip("[]")
[e2e-predictor]     err = None
[e2e-predictor]
[e2e-predictor]     # Using the value from allowed_gai_family() in the context of getaddrinfo lets
[e2e-predictor]     # us select whether to work with IPv4 DNS records, IPv6 records, or both.
[e2e-predictor]     # The original create_connection function always returns all records.
[e2e-predictor]     family = allowed_gai_family()
[e2e-predictor]
[e2e-predictor]     try:
[e2e-predictor]         host.encode("idna")
[e2e-predictor]     except UnicodeError:
[e2e-predictor]         raise LocationParseError(f"'{host}', label empty or too long") from None
[e2e-predictor]
[e2e-predictor] >   for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
[e2e-predictor]
[e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/util/connection.py:60:
[e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
[e2e-predictor]
[e2e-predictor] host = 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com'
[e2e-predictor] port = 6443, family = 
[e2e-predictor] type = , proto = 0, flags = 0
[e2e-predictor]
[e2e-predictor] def getaddrinfo(host, port, family=0, type=0, proto=0, flags=0):
[e2e-predictor]     """Resolve host and port into list of address info entries.
[e2e-predictor]
[e2e-predictor]     Translate the host/port argument into a sequence of 5-tuples that contain
[e2e-predictor]     all the necessary arguments for creating a socket connected to that service.
[e2e-predictor]
[e2e-predictor]     host is a domain name, a string representation of an IPv4/v6 address or
[e2e-predictor]     None. port is a string service name such as 'http', a numeric port number or
[e2e-predictor]     None. By passing None as the value of host and port, you can pass NULL to
[e2e-predictor]     the underlying C API.
[e2e-predictor]
[e2e-predictor]     The family, type and proto arguments can be optionally specified in order to
[e2e-predictor]     narrow the list of addresses returned. Passing zero as a value for each of
[e2e-predictor]     these arguments selects the full range of results.
[e2e-predictor]     """
[e2e-predictor]     # We override this function since we want to translate the numeric family
[e2e-predictor]     # and socket type values to enum constants.
[e2e-predictor]     addrlist = []
[e2e-predictor] >   for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
[e2e-predictor] E   socket.gaierror: [Errno -2] Name or service not known
[e2e-predictor]
[e2e-predictor] /usr/lib64/python3.11/socket.py:974: gaierror
[e2e-predictor]
[e2e-predictor] The above exception was the direct cause of the following exception:
[e2e-predictor]
[e2e-predictor] self = 
[e2e-predictor] method = 'POST'
[e2e-predictor] url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
[e2e-predictor] body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io..., "requests": {"cpu": "50m", "memory": "128Mi"}}, "storageUri": "gs://kfserving-examples/models/xgboost/1.5/model"}}}}'
[e2e-predictor] headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'})
[e2e-predictor] retries = Retry(total=0, connect=None, read=None, redirect=None, status=None)
[e2e-predictor] redirect = False, assert_same_host = False, timeout = None, pool_timeout = None
[e2e-predictor] release_conn = True, chunked = False, body_pos = None, preload_content = True
[e2e-predictor] decode_content = True, response_kw = {}
[e2e-predictor] parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices', query=None, fragment=None)
[e2e-predictor] destination_scheme = None, conn = None, release_this_conn = True
[e2e-predictor] http_tunnel_required = False, err = None, clean_exit = False
[e2e-predictor]
[e2e-predictor]     def urlopen(  # type: ignore[override]
[e2e-predictor]         self,
[e2e-predictor]         method: str,
[e2e-predictor]         url: str,
[e2e-predictor]         body: _TYPE_BODY | None = None,
[e2e-predictor]         headers: typing.Mapping[str, str] | None = None,
[e2e-predictor]         retries: Retry | bool | int | None = None,
[e2e-predictor]         redirect: bool = True,
[e2e-predictor]         assert_same_host: bool = True,
[e2e-predictor]         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
[e2e-predictor]         pool_timeout: int | None = None,
[e2e-predictor]         release_conn: bool | None = None,
[e2e-predictor]         chunked: bool = False,
[e2e-predictor]         body_pos: _TYPE_BODY_POSITION | None = None,
[e2e-predictor]         preload_content: bool = True,
[e2e-predictor]         decode_content: bool = True,
[e2e-predictor]         **response_kw: typing.Any,
[e2e-predictor]     ) -> BaseHTTPResponse:
[e2e-predictor]         """
[e2e-predictor]         Get a connection from the pool and perform an HTTP request. This is the
[e2e-predictor]         lowest level call for making a request, so you'll need to specify all
[e2e-predictor]         the raw details.
[e2e-predictor]
[e2e-predictor]         .. note::
[e2e-predictor]
[e2e-predictor]            More commonly, it's appropriate to use a convenience method
[e2e-predictor]            such as :meth:`request`.
[e2e-predictor]
[e2e-predictor]         .. note::
[e2e-predictor]
[e2e-predictor]            `release_conn` will only behave as expected if
[e2e-predictor]            `preload_content=False` because we want to make
[e2e-predictor]            `preload_content=False` the default behaviour someday soon without
[e2e-predictor]            breaking backwards compatibility.
[e2e-predictor]
[e2e-predictor]         :param method:
[e2e-predictor]             HTTP request method (such as GET, POST, PUT, etc.)
[e2e-predictor]
[e2e-predictor]         :param url:
[e2e-predictor]             The URL to perform the request on.
[e2e-predictor]
[e2e-predictor]         :param body:
[e2e-predictor]             Data to send in the request body, either :class:`str`, :class:`bytes`,
[e2e-predictor]             an iterable of :class:`str`/:class:`bytes`, or a file-like object.
[e2e-predictor]
[e2e-predictor]         :param headers:
[e2e-predictor]             Dictionary of custom headers to send, such as User-Agent,
[e2e-predictor]             If-None-Match, etc. If None, pool headers are used. If provided,
[e2e-predictor]             these headers completely replace any pool-specific headers.
[e2e-predictor]
[e2e-predictor]         :param retries:
[e2e-predictor]             Configure the number of retries to allow before raising a
[e2e-predictor]             :class:`~urllib3.exceptions.MaxRetryError` exception.
[e2e-predictor]
[e2e-predictor]             If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a
[e2e-predictor]             :class:`~urllib3.util.retry.Retry` object for fine-grained control
[e2e-predictor]             over different types of retries.
[e2e-predictor]             Pass an integer number to retry connection errors that many times,
[e2e-predictor]             but no other types of errors. Pass zero to never retry.
[e2e-predictor]
[e2e-predictor]             If ``False``, then retries are disabled and any exception is raised
[e2e-predictor]             immediately. Also, instead of raising a MaxRetryError on redirects,
[e2e-predictor]             the redirect response will be returned.
[e2e-predictor]
[e2e-predictor]         :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
[e2e-predictor]
[e2e-predictor]         :param redirect:
[e2e-predictor]             If True, automatically handle redirects (status codes 301, 302,
[e2e-predictor]             303, 307, 308). Each redirect counts as a retry. Disabling retries
[e2e-predictor]             will disable redirect, too.
[e2e-predictor]
[e2e-predictor]         :param assert_same_host:
[e2e-predictor]             If ``True``, will make sure that the host of the pool requests is
[e2e-predictor]             consistent else will raise HostChangedError. When ``False``, you can
[e2e-predictor]             use the pool on an HTTP proxy and request foreign hosts.
[e2e-predictor]
[e2e-predictor]         :param timeout:
[e2e-predictor]             If specified, overrides the default timeout for this one
[e2e-predictor]             request. It may be a float (in seconds) or an instance of
[e2e-predictor]             :class:`urllib3.util.Timeout`.
[e2e-predictor]
[e2e-predictor]         :param pool_timeout:
[e2e-predictor]             If set and the pool is set to block=True, then this method will
[e2e-predictor]             block for ``pool_timeout`` seconds and raise EmptyPoolError if no
[e2e-predictor]             connection is available within the time period.
[e2e-predictor]
[e2e-predictor]         :param bool preload_content:
[e2e-predictor]             If True, the response's body will be preloaded into memory.
[e2e-predictor]
[e2e-predictor]         :param bool decode_content:
[e2e-predictor]             If True, will attempt to decode the body based on the
[e2e-predictor]             'content-encoding' header.
[e2e-predictor]
[e2e-predictor]         :param release_conn:
[e2e-predictor]             If False, then the urlopen call will not release the connection
[e2e-predictor]             back into the pool once a response is received (but will release if
[e2e-predictor]             you read the entire contents of the response such as when
[e2e-predictor]             `preload_content=True`). This is useful if you're not preloading
[e2e-predictor]             the response's content immediately. You will need to call
[e2e-predictor]             ``r.release_conn()`` on the response ``r`` to return the connection
[e2e-predictor]             back into the pool. If None, it takes the value of ``preload_content``
[e2e-predictor]             which defaults to ``True``.
[e2e-predictor]
[e2e-predictor]         :param bool chunked:
[e2e-predictor]             If True, urllib3 will send the body using chunked transfer
[e2e-predictor]             encoding. Otherwise, urllib3 will send the body using the standard
[e2e-predictor]             content-length form. Defaults to False.
[e2e-predictor]
[e2e-predictor]         :param int body_pos:
[e2e-predictor]             Position to seek to in file-like body in the event of a retry or
[e2e-predictor]             redirect. Typically this won't need to be set because urllib3 will
[e2e-predictor]             auto-populate the value when needed.
[e2e-predictor]         """
[e2e-predictor]         parsed_url = parse_url(url)
[e2e-predictor]         destination_scheme = parsed_url.scheme
[e2e-predictor]
[e2e-predictor]         if headers is None:
[e2e-predictor]             headers = self.headers
[e2e-predictor]
[e2e-predictor]         if not isinstance(retries, Retry):
[e2e-predictor]             retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
[e2e-predictor]
[e2e-predictor]         if release_conn is None:
[e2e-predictor]             release_conn = preload_content
[e2e-predictor]
[e2e-predictor]         # Check host
[e2e-predictor]         if assert_same_host and not self.is_same_host(url):
[e2e-predictor]             raise HostChangedError(self, url, retries)
[e2e-predictor]
[e2e-predictor]         # Ensure that the URL we're connecting to is properly encoded
[e2e-predictor]         if url.startswith("/"):
[e2e-predictor]             url = to_str(_encode_target(url))
[e2e-predictor]         else:
[e2e-predictor]             url = to_str(parsed_url.url)
[e2e-predictor]
[e2e-predictor]         conn = None
[e2e-predictor]
[e2e-predictor]         # Track whether `conn` needs to be released before
[e2e-predictor]         # returning/raising/recursing. Update this variable if necessary, and
[e2e-predictor]         # leave `release_conn` constant throughout the function. That way, if
[e2e-predictor]         # the function recurses, the original value of `release_conn` will be
[e2e-predictor]         # passed down into the recursive call, and its value will be respected.
[e2e-predictor]         #
[e2e-predictor]         # See issue #651 [1] for details.
#
# [1]
release_this_conn = release_conn

http_tunnel_required = connection_requires_http_tunnel(
    self.proxy, self.proxy_config, destination_scheme
)

# Merge the proxy headers. Only done when not using HTTP CONNECT. We
# have to copy the headers dict so we can safely change it without those
# changes being reflected in anyone else's copy.
if not http_tunnel_required:
    headers = headers.copy()  # type: ignore[attr-defined]
    headers.update(self.proxy_headers)  # type: ignore[union-attr]

# Must keep the exception bound to a separate variable or else Python 3
# complains about UnboundLocalError.
err = None

# Keep track of whether we cleanly exited the except block. This
# ensures we do proper cleanup in finally.
clean_exit = False

# Rewind body position, if needed. Record current position
# for future rewinds in the event of a redirect/retry.
body_pos = set_file_position(body, body_pos)

try:
    # Request a connection from the queue.
    timeout_obj = self._get_timeout(timeout)
    conn = self._get_conn(timeout=pool_timeout)

    conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]

    # Is this a closed/new connection that requires CONNECT tunnelling?
    if self.proxy is not None and http_tunnel_required and conn.is_closed:
        try:
            self._prepare_proxy(conn)
        except (BaseSSLError, OSError, SocketTimeout) as e:
            self._raise_timeout(
                err=e, url=self.proxy.url, timeout_value=conn.timeout
            )
            raise

    # If we're going to release the connection in ``finally:``, then
    # the response doesn't need to know about the connection. Otherwise
    # it will also try to release it and we'll have a double-release
    # mess.
    response_conn = conn if not release_conn else None

    # Make the request on the HTTPConnection object
>   response = self._make_request(
        conn,
        method,
        url,
        timeout=timeout_obj,
        body=body,
        headers=headers,
        chunked=chunked,
        retries=retries,
        response_conn=response_conn,
        preload_content=preload_content,
        decode_content=decode_content,
        **response_kw,
    )

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:787:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
conn =
method = 'POST'
url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io..., "requests": {"cpu": "50m", "memory": "128Mi"}}, "storageUri": "gs://kfserving-examples/models/xgboost/1.5/model"}}}}'
headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'})
retries = Retry(total=0, connect=None, read=None, redirect=None, status=None)
timeout = Timeout(connect=None, read=None, total=None), chunked = False
response_conn = None, preload_content = True, decode_content = True
enforce_content_length = True

def _make_request(
    self,
    conn: BaseHTTPConnection,
    method: str,
    url: str,
    body: _TYPE_BODY | None = None,
    headers: typing.Mapping[str, str] | None = None,
    retries: Retry | None = None,
    timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
    chunked: bool = False,
    response_conn: BaseHTTPConnection | None = None,
    preload_content: bool = True,
    decode_content: bool = True,
    enforce_content_length: bool = True,
) -> BaseHTTPResponse:
    """
    Perform a request on a given urllib connection object taken from our
    pool.

    :param conn:
        a connection from one of our connection pools

    :param method:
        HTTP request method (such as GET, POST, PUT, etc.)

    :param url:
        The URL to perform the request on.

    :param body:
        Data to send in the request body, either :class:`str`, :class:`bytes`,
        an iterable of :class:`str`/:class:`bytes`, or a file-like object.

    :param headers:
        Dictionary of custom headers to send, such as User-Agent,
        If-None-Match, etc. If None, pool headers are used.
        If provided,
        these headers completely replace any pool-specific headers.

    :param retries:
        Configure the number of retries to allow before raising a
        :class:`~urllib3.exceptions.MaxRetryError` exception.

        Pass ``None`` to retry until you receive a response. Pass a
        :class:`~urllib3.util.retry.Retry` object for fine-grained control
        over different types of retries.
        Pass an integer number to retry connection errors that many times,
        but no other types of errors. Pass zero to never retry.

        If ``False``, then retries are disabled and any exception is raised
        immediately. Also, instead of raising a MaxRetryError on redirects,
        the redirect response will be returned.

    :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.

    :param timeout:
        If specified, overrides the default timeout for this one
        request. It may be a float (in seconds) or an instance of
        :class:`urllib3.util.Timeout`.

    :param chunked:
        If True, urllib3 will send the body using chunked transfer
        encoding. Otherwise, urllib3 will send the body using the standard
        content-length form. Defaults to False.

    :param response_conn:
        Set this to ``None`` if you will handle releasing the connection or
        set the connection to have the response release it.

    :param preload_content:
        If True, the response's body will be preloaded during construction.
    :param decode_content:
        If True, will attempt to decode the body based on the
        'content-encoding' header.

    :param enforce_content_length:
        Enforce content length checking. Body returned by server must match
        value of Content-Length header, if present. Otherwise, raise error.
    """
    self.num_requests += 1

    timeout_obj = self._get_timeout(timeout)
    timeout_obj.start_connect()
    conn.timeout = Timeout.resolve_default_timeout(timeout_obj.connect_timeout)

    try:
        # Trigger any extra validation we need to do.
        try:
            self._validate_conn(conn)
        except (SocketTimeout, BaseSSLError) as e:
            self._raise_timeout(err=e, url=url, timeout_value=conn.timeout)
            raise

        # _validate_conn() starts the connection to an HTTPS proxy
        # so we need to wrap errors with 'ProxyError' here too.
    except (
        OSError,
        NewConnectionError,
        TimeoutError,
        BaseSSLError,
        CertificateError,
        SSLError,
    ) as e:
        new_e: Exception = e
        if isinstance(e, (BaseSSLError, CertificateError)):
            new_e = SSLError(e)
        # If the connection didn't successfully connect to it's proxy
        # then there
        if isinstance(
            new_e, (OSError, NewConnectionError, TimeoutError, SSLError)
        ) and (conn and conn.proxy and not conn.has_connected_to_proxy):
            new_e = _wrap_proxy_error(new_e, conn.proxy.scheme)
>       raise new_e

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:488:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
conn =
method = 'POST'
url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io..., "requests": {"cpu": "50m", "memory": "128Mi"}}, "storageUri": "gs://kfserving-examples/models/xgboost/1.5/model"}}}}'
headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'})
retries = Retry(total=0, connect=None, read=None, redirect=None, status=None)
timeout = Timeout(connect=None, read=None, total=None), chunked = False
response_conn = None, preload_content = True, decode_content = True
enforce_content_length = True

def _make_request(
    self,
    conn: BaseHTTPConnection,
    method: str,
    url: str,
    body: _TYPE_BODY | None = None,
    headers: typing.Mapping[str, str] | None = None,
    retries: Retry | None = None,
    timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
    chunked: bool = False,
    response_conn: BaseHTTPConnection | None = None,
    preload_content: bool = True,
    decode_content: bool = True,
    enforce_content_length: bool = True,
) -> BaseHTTPResponse:
    """
    Perform a request on a given urllib connection object taken from our
    pool.

    :param conn:
        a connection from one of our connection pools

    :param method:
        HTTP request method (such as GET, POST, PUT, etc.)

    :param url:
        The URL to perform the request on.

    :param body:
        Data to send in the request body, either :class:`str`, :class:`bytes`,
        an iterable of :class:`str`/:class:`bytes`, or a file-like object.

    :param headers:
        Dictionary of custom headers to send, such as User-Agent,
        If-None-Match, etc. If None, pool headers are used. If provided,
        these headers completely replace any pool-specific headers.

    :param retries:
        Configure the number of retries to allow before raising a
        :class:`~urllib3.exceptions.MaxRetryError` exception.

        Pass ``None`` to retry until you receive a response. Pass a
        :class:`~urllib3.util.retry.Retry` object for fine-grained control
        over different types of retries.
        Pass an integer number to retry connection errors that many times,
        but no other types of errors. Pass zero to never retry.

        If ``False``, then retries are disabled and any exception is raised
        immediately. Also, instead of raising a MaxRetryError on redirects,
        the redirect response will be returned.

    :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.

    :param timeout:
        If specified, overrides the default timeout for this one
        request. It may be a float (in seconds) or an instance of
        :class:`urllib3.util.Timeout`.

    :param chunked:
        If True, urllib3 will send the body using chunked transfer
        encoding. Otherwise, urllib3 will send the body using the standard
        content-length form. Defaults to False.

    :param response_conn:
        Set this to ``None`` if you will handle releasing the connection or
        set the connection to have the response release it.

    :param preload_content:
        If True, the response's body will be preloaded during construction.

    :param decode_content:
        If True, will attempt to decode the body based on the
        'content-encoding' header.

    :param enforce_content_length:
        Enforce content length checking. Body returned by server must match
        value of Content-Length header, if present. Otherwise, raise error.
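A side note on the ``timeout`` parameter documented above: it accepts either a plain float or a ``urllib3.util.Timeout`` with separate connect/read budgets. A minimal sketch (values are illustrative; the failing request actually ran with all three set to ``None``, i.e. no client-side timeout):

```python
from urllib3.util.timeout import Timeout

# Separate budgets: 2s to establish the connection, 7s to read the
# response, and a 10s cap on the exchange overall.
t = Timeout(connect=2.0, read=7.0, total=10.0)
print(t.connect_timeout, t.read_timeout)
```

Before the connect phase starts, `connect_timeout` is the smaller of `connect` and `total`, and `read_timeout` is the configured read budget.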
    """
    self.num_requests += 1

    timeout_obj = self._get_timeout(timeout)
    timeout_obj.start_connect()
    conn.timeout = Timeout.resolve_default_timeout(timeout_obj.connect_timeout)

    try:
        # Trigger any extra validation we need to do.
        try:
>           self._validate_conn(conn)

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:464:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
conn =

def _validate_conn(self, conn: BaseHTTPConnection) -> None:
    """
    Called right before a request is made, after the socket is created.
    """
    super()._validate_conn(conn)

    # Force connect early to allow us to validate the connection.
    if conn.is_closed:
>       conn.connect()

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:1093:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =

def connect(self) -> None:
    # Today we don't need to be doing this step before the /actual/ socket
    # connection, however in the future we'll need to decide whether to
    # create a new socket or re-use an existing "shared" socket as a part
    # of the HTTP/2 handshake dance.
    if self._tunnel_host is not None and self._tunnel_port is not None:
        probe_http2_host = self._tunnel_host
        probe_http2_port = self._tunnel_port
    else:
        probe_http2_host = self.host
        probe_http2_port = self.port

    # Check if the target origin supports HTTP/2.
    # If the value comes back as 'None' it means that the current thread
    # is probing for HTTP/2 support. Otherwise, we're waiting for another
    # probe to complete, or we get a value right away.
    target_supports_http2: bool | None
    if "h2" in ssl_.ALPN_PROTOCOLS:
        target_supports_http2 = http2_probe.acquire_and_get(
            host=probe_http2_host, port=probe_http2_port
        )
    else:
        # If HTTP/2 isn't going to be offered it doesn't matter if
        # the target supports HTTP/2. Don't want to make a probe.
        target_supports_http2 = False

    if self._connect_callback is not None:
        self._connect_callback(
            "before connect",
            thread_id=threading.get_ident(),
            target_supports_http2=target_supports_http2,
        )

    try:
        sock: socket.socket | ssl.SSLSocket
>       self.sock = sock = self._new_conn()

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connection.py:759:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =

def _new_conn(self) -> socket.socket:
    """Establish a socket connection and set nodelay settings on it.

    :return: New socket connection.
    """
    try:
        sock = connection.create_connection(
            (self._dns_host, self.port),
            self.timeout,
            source_address=self.source_address,
            socket_options=self.socket_options,
        )
    except socket.gaierror as e:
>       raise NameResolutionError(self.host, self, e) from e
E       urllib3.exceptions.NameResolutionError: HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connection.py:211: NameResolutionError

The above exception was the direct cause of the following exception:

rest_v1_client =

@pytest.mark.predictor
@pytest.mark.path_based_routing
@pytest.mark.asyncio(scope="session")
async def test_xgboost_runtime_kserve(rest_v1_client):
    service_name = "isvc-xgboost-runtime"
    predictor = V1beta1PredictorSpec(
        min_replicas=1,
        model=V1beta1ModelSpec(
            model_format=V1beta1ModelFormat(
                name="xgboost",
            ),
            storage_uri="gs://kfserving-examples/models/xgboost/1.5/model",
            resources=V1ResourceRequirements(
                requests={"cpu": "50m", "memory": "128Mi"},
                limits={"cpu": "100m", "memory": "256Mi"},
            ),
        ),
    )

    isvc = V1beta1InferenceService(
        api_version=constants.KSERVE_V1BETA1,
        kind=constants.KSERVE_KIND_INFERENCESERVICE,
        metadata=client.V1ObjectMeta(
            name=service_name,
            namespace=KSERVE_TEST_NAMESPACE,
            labels={
                constants.KSERVE_LABEL_NETWORKING_VISIBILITY: constants.KSERVE_LABEL_NETWORKING_VISIBILITY_EXPOSED,
            },
        ),
        spec=V1beta1InferenceServiceSpec(predictor=predictor),
    )

    kserve_client = KServeClient(
        config_file=os.environ.get("KUBECONFIG", "~/.kube/config")
    )
>   kserve_client.create(isvc)

predictor/test_xgboost.py:220:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
inferenceservice = {'api_version': 'serving.kserve.io/v1beta1',
 'kind': 'InferenceService',
 'metadata': {'annotations': None,
 ... 'worker_spec': None,
 'xgboost': None},
 'transformer': None},
 'status': None}
namespace = 'kserve-ci-e2e-test', watch = False, timeout_seconds = 600

def create(
    self, inferenceservice, namespace=None, watch=False, timeout_seconds=600
):  # pylint:disable=inconsistent-return-statements
    """
    Create the inference service
    :param inferenceservice: inference service object
    :param namespace: defaults to current or default namespace
    :param watch: True to watch the created service until timeout elapsed or status is ready
    :param timeout_seconds: timeout seconds for watch, default to 600s
    :return: created inference service
    """

    version = inferenceservice.api_version.split("/")[1]

    if namespace is None:
        namespace = utils.get_isvc_namespace(inferenceservice)

    try:
>       outputs = self.api_instance.create_namespaced_custom_object(
            constants.KSERVE_GROUP,
            version,
            namespace,
            constants.KSERVE_PLURAL_INFERENCESERVICE,
            inferenceservice,
        )

../../python/kserve/kserve/api/kserve_client.py:145:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
group = 'serving.kserve.io', version = 'v1beta1'
namespace = 'kserve-ci-e2e-test', plural = 'inferenceservices'
body = {'api_version': 'serving.kserve.io/v1beta1',
 'kind': 'InferenceService',
 'metadata': {'annotations': None,
 ... 'worker_spec': None,
 'xgboost': None},
 'transformer': None},
 'status': None}
kwargs = {'_return_http_data_only': True}

def create_namespaced_custom_object(self, group, version, namespace, plural, body, **kwargs):  # noqa: E501
    """create_namespaced_custom_object  # noqa: E501

    Creates a namespace scoped Custom object  # noqa: E501
    This method makes a synchronous HTTP request by default.
    To make an
    asynchronous HTTP request, please pass async_req=True
    >>> thread = api.create_namespaced_custom_object(group, version, namespace, plural, body, async_req=True)
    >>> result = thread.get()

    :param async_req bool: execute request asynchronously
    :param str group: The custom resource's group name (required)
    :param str version: The custom resource's version (required)
    :param str namespace: The custom resource's namespace (required)
    :param str plural: The custom resource's plural name. For TPRs this would be lowercase plural kind. (required)
    :param object body: The JSON schema of the Resource to create. (required)
    :param str pretty: If 'true', then the output is pretty printed.
    :param str dry_run: When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed
    :param str field_manager: fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint.
    :param str field_validation: fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23.
    - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. (optional)
    :param _preload_content: if False, the urllib3.HTTPResponse object will
                             be returned without reading/decoding response
                             data. Default is True.
    :param _request_timeout: timeout setting for this request. If one
                             number provided, it will be total request
                             timeout. It can also be a pair (tuple) of
                             (connection, read) timeouts.
    :return: object
             If the method is called asynchronously,
             returns the request thread.
    """
    kwargs['_return_http_data_only'] = True
>   return self.create_namespaced_custom_object_with_http_info(group, version, namespace, plural, body, **kwargs)  # noqa: E501

../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api/custom_objects_api.py:231:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
group = 'serving.kserve.io', version = 'v1beta1'
namespace = 'kserve-ci-e2e-test', plural = 'inferenceservices'
body = {'api_version': 'serving.kserve.io/v1beta1',
 'kind': 'InferenceService',
 'metadata': {'annotations': None,
 ...
 'worker_spec': None,
 'xgboost': None},
 'transformer': None},
 'status': None}
kwargs = {'_return_http_data_only': True}
local_var_params = {'_return_http_data_only': True, 'all_params': ['group', 'version', 'namespace', 'plural', 'body', 'pretty', ...], 'au... 'worker_spec': None,
 'xgboost': None},
 'transformer': None},
 'status': None}, ...}
all_params = ['group', 'version', 'namespace', 'plural', 'body', 'pretty', ...]
key = '_return_http_data_only', val = True, collection_formats = {}
path_params = {'group': 'serving.kserve.io', 'namespace': 'kserve-ci-e2e-test', 'plural': 'inferenceservices', 'version': 'v1beta1'}
query_params = []

def create_namespaced_custom_object_with_http_info(self, group, version, namespace, plural, body, **kwargs):  # noqa: E501
    """create_namespaced_custom_object  # noqa: E501

    Creates a namespace scoped Custom object  # noqa: E501
    This method makes a synchronous HTTP request by default. To make an
    asynchronous HTTP request, please pass async_req=True
    >>> thread = api.create_namespaced_custom_object_with_http_info(group, version, namespace, plural, body, async_req=True)
    >>> result = thread.get()

    :param async_req bool: execute request asynchronously
    :param str group: The custom resource's group name (required)
    :param str version: The custom resource's version (required)
    :param str namespace: The custom resource's namespace (required)
    :param str plural: The custom resource's plural name. For TPRs this would be lowercase plural kind. (required)
    :param object body: The JSON schema of the Resource to create. (required)
    :param str pretty: If 'true', then the output is pretty printed.
    :param str dry_run: When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed
    :param str field_manager: fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint.
    :param str field_validation: fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. (optional)
    :param _return_http_data_only: response data without head status code
                                   and headers
    :param _preload_content: if False, the urllib3.HTTPResponse object will
                             be returned without reading/decoding response
                             data. Default is True.
[e2e-predictor] :param _request_timeout: timeout setting for this request. If one [e2e-predictor] number provided, it will be total request [e2e-predictor] timeout. It can also be a pair (tuple) of [e2e-predictor] (connection, read) timeouts. [e2e-predictor] :return: tuple(object, status_code(int), headers(HTTPHeaderDict)) [e2e-predictor] If the method is called asynchronously, [e2e-predictor] returns the request thread. [e2e-predictor] """ [e2e-predictor] [e2e-predictor] local_var_params = locals() [e2e-predictor] [e2e-predictor] all_params = [ [e2e-predictor] 'group', [e2e-predictor] 'version', [e2e-predictor] 'namespace', [e2e-predictor] 'plural', [e2e-predictor] 'body', [e2e-predictor] 'pretty', [e2e-predictor] 'dry_run', [e2e-predictor] 'field_manager', [e2e-predictor] 'field_validation' [e2e-predictor] ] [e2e-predictor] all_params.extend( [e2e-predictor] [ [e2e-predictor] 'async_req', [e2e-predictor] '_return_http_data_only', [e2e-predictor] '_preload_content', [e2e-predictor] '_request_timeout' [e2e-predictor] ] [e2e-predictor] ) [e2e-predictor] [e2e-predictor] for key, val in six.iteritems(local_var_params['kwargs']): [e2e-predictor] if key not in all_params: [e2e-predictor] raise ApiTypeError( [e2e-predictor] "Got an unexpected keyword argument '%s'" [e2e-predictor] " to method create_namespaced_custom_object" % key [e2e-predictor] ) [e2e-predictor] local_var_params[key] = val [e2e-predictor] del local_var_params['kwargs'] [e2e-predictor] # verify the required parameter 'group' is set [e2e-predictor] if self.api_client.client_side_validation and ('group' not in local_var_params or # noqa: E501 [e2e-predictor] local_var_params['group'] is None): # noqa: E501 [e2e-predictor] raise ApiValueError("Missing the required parameter `group` when calling `create_namespaced_custom_object`") # noqa: E501 [e2e-predictor] # verify the required parameter 'version' is set [e2e-predictor] if self.api_client.client_side_validation and ('version' not in local_var_params or # 
noqa: E501 [e2e-predictor] local_var_params['version'] is None): # noqa: E501 [e2e-predictor] raise ApiValueError("Missing the required parameter `version` when calling `create_namespaced_custom_object`") # noqa: E501 [e2e-predictor] # verify the required parameter 'namespace' is set [e2e-predictor] if self.api_client.client_side_validation and ('namespace' not in local_var_params or # noqa: E501 [e2e-predictor] local_var_params['namespace'] is None): # noqa: E501 [e2e-predictor] raise ApiValueError("Missing the required parameter `namespace` when calling `create_namespaced_custom_object`") # noqa: E501 [e2e-predictor] # verify the required parameter 'plural' is set [e2e-predictor] if self.api_client.client_side_validation and ('plural' not in local_var_params or # noqa: E501 [e2e-predictor] local_var_params['plural'] is None): # noqa: E501 [e2e-predictor] raise ApiValueError("Missing the required parameter `plural` when calling `create_namespaced_custom_object`") # noqa: E501 [e2e-predictor] # verify the required parameter 'body' is set [e2e-predictor] if self.api_client.client_side_validation and ('body' not in local_var_params or # noqa: E501 [e2e-predictor] local_var_params['body'] is None): # noqa: E501 [e2e-predictor] raise ApiValueError("Missing the required parameter `body` when calling `create_namespaced_custom_object`") # noqa: E501 [e2e-predictor] [e2e-predictor] collection_formats = {} [e2e-predictor] [e2e-predictor] path_params = {} [e2e-predictor] if 'group' in local_var_params: [e2e-predictor] path_params['group'] = local_var_params['group'] # noqa: E501 [e2e-predictor] if 'version' in local_var_params: [e2e-predictor] path_params['version'] = local_var_params['version'] # noqa: E501 [e2e-predictor] if 'namespace' in local_var_params: [e2e-predictor] path_params['namespace'] = local_var_params['namespace'] # noqa: E501 [e2e-predictor] if 'plural' in local_var_params: [e2e-predictor] path_params['plural'] = local_var_params['plural'] # noqa: E501 
[e2e-predictor] [e2e-predictor] query_params = [] [e2e-predictor] if 'pretty' in local_var_params and local_var_params['pretty'] is not None: # noqa: E501 [e2e-predictor] query_params.append(('pretty', local_var_params['pretty'])) # noqa: E501 [e2e-predictor] if 'dry_run' in local_var_params and local_var_params['dry_run'] is not None: # noqa: E501 [e2e-predictor] query_params.append(('dryRun', local_var_params['dry_run'])) # noqa: E501 [e2e-predictor] if 'field_manager' in local_var_params and local_var_params['field_manager'] is not None: # noqa: E501 [e2e-predictor] query_params.append(('fieldManager', local_var_params['field_manager'])) # noqa: E501 [e2e-predictor] if 'field_validation' in local_var_params and local_var_params['field_validation'] is not None: # noqa: E501 [e2e-predictor] query_params.append(('fieldValidation', local_var_params['field_validation'])) # noqa: E501 [e2e-predictor] [e2e-predictor] header_params = {} [e2e-predictor] [e2e-predictor] form_params = [] [e2e-predictor] local_var_files = {} [e2e-predictor] [e2e-predictor] body_params = None [e2e-predictor] if 'body' in local_var_params: [e2e-predictor] body_params = local_var_params['body'] [e2e-predictor] # HTTP header `Accept` [e2e-predictor] header_params['Accept'] = self.api_client.select_header_accept( [e2e-predictor] ['application/json']) # noqa: E501 [e2e-predictor] [e2e-predictor] # Authentication setting [e2e-predictor] auth_settings = ['BearerToken'] # noqa: E501 [e2e-predictor] [e2e-predictor] > return self.api_client.call_api( [e2e-predictor] '/apis/{group}/{version}/namespaces/{namespace}/{plural}', 'POST', [e2e-predictor] path_params, [e2e-predictor] query_params, [e2e-predictor] header_params, [e2e-predictor] body=body_params, [e2e-predictor] post_params=form_params, [e2e-predictor] files=local_var_files, [e2e-predictor] response_type='object', # noqa: E501 [e2e-predictor] auth_settings=auth_settings, [e2e-predictor] async_req=local_var_params.get('async_req'), 
[e2e-predictor] _return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501 [e2e-predictor] _preload_content=local_var_params.get('_preload_content', True), [e2e-predictor] _request_timeout=local_var_params.get('_request_timeout'), [e2e-predictor] collection_formats=collection_formats) [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api/custom_objects_api.py:354: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] resource_path = '/apis/{group}/{version}/namespaces/{namespace}/{plural}' [e2e-predictor] method = 'POST' [e2e-predictor] path_params = {'group': 'serving.kserve.io', 'namespace': 'kserve-ci-e2e-test', 'plural': 'inferenceservices', 'version': 'v1beta1'} [e2e-predictor] query_params = [] [e2e-predictor] header_params = {'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'} [e2e-predictor] body = {'api_version': 'serving.kserve.io/v1beta1', [e2e-predictor] 'kind': 'InferenceService', [e2e-predictor] 'metadata': {'annotations': None, [e2e-predictor] ... 
'worker_spec': None, [e2e-predictor] 'xgboost': None}, [e2e-predictor] 'transformer': None}, [e2e-predictor] 'status': None} [e2e-predictor] post_params = [], files = {}, response_type = 'object' [e2e-predictor] auth_settings = ['BearerToken'], async_req = None, _return_http_data_only = True [e2e-predictor] collection_formats = {}, _preload_content = True, _request_timeout = None [e2e-predictor] _host = None [e2e-predictor] [e2e-predictor] def call_api(self, resource_path, method, [e2e-predictor] path_params=None, query_params=None, header_params=None, [e2e-predictor] body=None, post_params=None, files=None, [e2e-predictor] response_type=None, auth_settings=None, async_req=None, [e2e-predictor] _return_http_data_only=None, collection_formats=None, [e2e-predictor] _preload_content=True, _request_timeout=None, _host=None): [e2e-predictor] """Makes the HTTP request (synchronous) and returns deserialized data. [e2e-predictor] [e2e-predictor] To make an async_req request, set the async_req parameter. [e2e-predictor] [e2e-predictor] :param resource_path: Path to method endpoint. [e2e-predictor] :param method: Method to call. [e2e-predictor] :param path_params: Path parameters in the url. [e2e-predictor] :param query_params: Query parameters in the url. [e2e-predictor] :param header_params: Header parameters to be [e2e-predictor] placed in the request header. [e2e-predictor] :param body: Request body. [e2e-predictor] :param post_params dict: Request post form parameters, [e2e-predictor] for `application/x-www-form-urlencoded`, `multipart/form-data`. [e2e-predictor] :param auth_settings list: Auth Settings names for the request. [e2e-predictor] :param response: Response data type. [e2e-predictor] :param files dict: key -> filename, value -> filepath, [e2e-predictor] for `multipart/form-data`. 
[e2e-predictor] :param async_req bool: execute request asynchronously [e2e-predictor] :param _return_http_data_only: response data without head status code [e2e-predictor] and headers [e2e-predictor] :param collection_formats: dict of collection formats for path, query, [e2e-predictor] header, and post parameters. [e2e-predictor] :param _preload_content: if False, the urllib3.HTTPResponse object will [e2e-predictor] be returned without reading/decoding response [e2e-predictor] data. Default is True. [e2e-predictor] :param _request_timeout: timeout setting for this request. If one [e2e-predictor] number provided, it will be total request [e2e-predictor] timeout. It can also be a pair (tuple) of [e2e-predictor] (connection, read) timeouts. [e2e-predictor] :return: [e2e-predictor] If async_req parameter is True, [e2e-predictor] the request will be called asynchronously. [e2e-predictor] The method will return the request thread. [e2e-predictor] If parameter async_req is False or missing, [e2e-predictor] then the method will return the response directly. 
[e2e-predictor] """ [e2e-predictor] if not async_req: [e2e-predictor] > return self.__call_api(resource_path, method, [e2e-predictor] path_params, query_params, header_params, [e2e-predictor] body, post_params, files, [e2e-predictor] response_type, auth_settings, [e2e-predictor] _return_http_data_only, collection_formats, [e2e-predictor] _preload_content, _request_timeout, _host) [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api_client.py:348: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] resource_path = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices' [e2e-predictor] method = 'POST' [e2e-predictor] path_params = [('group', 'serving.kserve.io'), ('version', 'v1beta1'), ('namespace', 'kserve-ci-e2e-test'), ('plural', 'inferenceservices')] [e2e-predictor] query_params = [] [e2e-predictor] header_params = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'} [e2e-predictor] body = {'apiVersion': 'serving.kserve.io/v1beta1', 'kind': 'InferenceService', 'metadata': {'labels': {'networking.kserve.io/...}, 'requests': {'cpu': '50m', 'memory': '128Mi'}}, 'storageUri': 'gs://kfserving-examples/models/xgboost/1.5/model'}}}} [e2e-predictor] post_params = [], files = {}, response_type = 'object' [e2e-predictor] auth_settings = ['BearerToken'], _return_http_data_only = True [e2e-predictor] collection_formats = {}, _preload_content = True, _request_timeout = None [e2e-predictor] _host = None [e2e-predictor] [e2e-predictor] def __call_api( [e2e-predictor] self, resource_path, method, path_params=None, [e2e-predictor] query_params=None, header_params=None, body=None, post_params=None, [e2e-predictor] files=None, response_type=None, auth_settings=None, [e2e-predictor] _return_http_data_only=None, collection_formats=None, 
[e2e-predictor] _preload_content=True, _request_timeout=None, _host=None): [e2e-predictor] [e2e-predictor] config = self.configuration [e2e-predictor] [e2e-predictor] # header parameters [e2e-predictor] header_params = header_params or {} [e2e-predictor] header_params.update(self.default_headers) [e2e-predictor] if self.cookie: [e2e-predictor] header_params['Cookie'] = self.cookie [e2e-predictor] if header_params: [e2e-predictor] header_params = self.sanitize_for_serialization(header_params) [e2e-predictor] header_params = dict(self.parameters_to_tuples(header_params, [e2e-predictor] collection_formats)) [e2e-predictor] [e2e-predictor] # path parameters [e2e-predictor] if path_params: [e2e-predictor] path_params = self.sanitize_for_serialization(path_params) [e2e-predictor] path_params = self.parameters_to_tuples(path_params, [e2e-predictor] collection_formats) [e2e-predictor] for k, v in path_params: [e2e-predictor] # specified safe chars, encode everything [e2e-predictor] resource_path = resource_path.replace( [e2e-predictor] '{%s}' % k, [e2e-predictor] quote(str(v), safe=config.safe_chars_for_path_param) [e2e-predictor] ) [e2e-predictor] [e2e-predictor] # query parameters [e2e-predictor] if query_params: [e2e-predictor] query_params = self.sanitize_for_serialization(query_params) [e2e-predictor] query_params = self.parameters_to_tuples(query_params, [e2e-predictor] collection_formats) [e2e-predictor] [e2e-predictor] # post parameters [e2e-predictor] if post_params or files: [e2e-predictor] post_params = post_params if post_params else [] [e2e-predictor] post_params = self.sanitize_for_serialization(post_params) [e2e-predictor] post_params = self.parameters_to_tuples(post_params, [e2e-predictor] collection_formats) [e2e-predictor] post_params.extend(self.files_parameters(files)) [e2e-predictor] [e2e-predictor] # auth setting [e2e-predictor] self.update_params_for_auth(header_params, query_params, auth_settings) [e2e-predictor] [e2e-predictor] # body 
[e2e-predictor] if body: [e2e-predictor] body = self.sanitize_for_serialization(body) [e2e-predictor] [e2e-predictor] # request url [e2e-predictor] if _host is None: [e2e-predictor] url = self.configuration.host + resource_path [e2e-predictor] else: [e2e-predictor] # use server/host defined in path or operation instead [e2e-predictor] url = _host + resource_path [e2e-predictor] [e2e-predictor] # perform request and return response [e2e-predictor] > response_data = self.request( [e2e-predictor] method, url, query_params=query_params, headers=header_params, [e2e-predictor] post_params=post_params, body=body, [e2e-predictor] _preload_content=_preload_content, [e2e-predictor] _request_timeout=_request_timeout) [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api_client.py:180: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] method = 'POST' [e2e-predictor] url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices' [e2e-predictor] query_params = [] [e2e-predictor] headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'} [e2e-predictor] post_params = [] [e2e-predictor] body = {'apiVersion': 'serving.kserve.io/v1beta1', 'kind': 'InferenceService', 'metadata': {'labels': {'networking.kserve.io/...}, 'requests': {'cpu': '50m', 'memory': '128Mi'}}, 'storageUri': 'gs://kfserving-examples/models/xgboost/1.5/model'}}}} [e2e-predictor] _preload_content = True, _request_timeout = None [e2e-predictor] [e2e-predictor] def request(self, method, url, query_params=None, headers=None, [e2e-predictor] post_params=None, body=None, _preload_content=True, [e2e-predictor] _request_timeout=None): [e2e-predictor] """Makes the HTTP request using 
RESTClient.""" [e2e-predictor] if method == "GET": [e2e-predictor] return self.rest_client.GET(url, [e2e-predictor] query_params=query_params, [e2e-predictor] _preload_content=_preload_content, [e2e-predictor] _request_timeout=_request_timeout, [e2e-predictor] headers=headers) [e2e-predictor] elif method == "HEAD": [e2e-predictor] return self.rest_client.HEAD(url, [e2e-predictor] query_params=query_params, [e2e-predictor] _preload_content=_preload_content, [e2e-predictor] _request_timeout=_request_timeout, [e2e-predictor] headers=headers) [e2e-predictor] elif method == "OPTIONS": [e2e-predictor] return self.rest_client.OPTIONS(url, [e2e-predictor] query_params=query_params, [e2e-predictor] headers=headers, [e2e-predictor] _preload_content=_preload_content, [e2e-predictor] _request_timeout=_request_timeout) [e2e-predictor] elif method == "POST": [e2e-predictor] > return self.rest_client.POST(url, [e2e-predictor] query_params=query_params, [e2e-predictor] headers=headers, [e2e-predictor] post_params=post_params, [e2e-predictor] _preload_content=_preload_content, [e2e-predictor] _request_timeout=_request_timeout, [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api_client.py:391: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices' [e2e-predictor] headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'} [e2e-predictor] query_params = [], post_params = [] [e2e-predictor] body = {'apiVersion': 'serving.kserve.io/v1beta1', 'kind': 'InferenceService', 'metadata': {'labels': {'networking.kserve.io/...}, 'requests': {'cpu': '50m', 'memory': '128Mi'}}, 'storageUri': 
'gs://kfserving-examples/models/xgboost/1.5/model'}}}} [e2e-predictor] _preload_content = True, _request_timeout = None [e2e-predictor] [e2e-predictor] def POST(self, url, headers=None, query_params=None, post_params=None, [e2e-predictor] body=None, _preload_content=True, _request_timeout=None): [e2e-predictor] > return self.request("POST", url, [e2e-predictor] headers=headers, [e2e-predictor] query_params=query_params, [e2e-predictor] post_params=post_params, [e2e-predictor] _preload_content=_preload_content, [e2e-predictor] _request_timeout=_request_timeout, [e2e-predictor] body=body) [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/rest.py:279: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] method = 'POST' [e2e-predictor] url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices' [e2e-predictor] query_params = [] [e2e-predictor] headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'} [e2e-predictor] body = {'apiVersion': 'serving.kserve.io/v1beta1', 'kind': 'InferenceService', 'metadata': {'labels': {'networking.kserve.io/...}, 'requests': {'cpu': '50m', 'memory': '128Mi'}}, 'storageUri': 'gs://kfserving-examples/models/xgboost/1.5/model'}}}} [e2e-predictor] post_params = {}, _preload_content = True, _request_timeout = None [e2e-predictor] [e2e-predictor] def request(self, method, url, query_params=None, headers=None, [e2e-predictor] body=None, post_params=None, _preload_content=True, [e2e-predictor] _request_timeout=None): [e2e-predictor] """Perform requests. 
[e2e-predictor] [e2e-predictor] :param method: http request method [e2e-predictor] :param url: http request url [e2e-predictor] :param query_params: query parameters in the url [e2e-predictor] :param headers: http request headers [e2e-predictor] :param body: request json body, for `application/json` [e2e-predictor] :param post_params: request post parameters, [e2e-predictor] `application/x-www-form-urlencoded` [e2e-predictor] and `multipart/form-data` [e2e-predictor] :param _preload_content: if False, the urllib3.HTTPResponse object will [e2e-predictor] be returned without reading/decoding response [e2e-predictor] data. Default is True. [e2e-predictor] :param _request_timeout: timeout setting for this request. If one [e2e-predictor] number provided, it will be total request [e2e-predictor] timeout. It can also be a pair (tuple) of [e2e-predictor] (connection, read) timeouts. [e2e-predictor] """ [e2e-predictor] method = method.upper() [e2e-predictor] assert method in ['GET', 'HEAD', 'DELETE', 'POST', 'PUT', [e2e-predictor] 'PATCH', 'OPTIONS'] [e2e-predictor] [e2e-predictor] if post_params and body: [e2e-predictor] raise ApiValueError( [e2e-predictor] "body parameter cannot be used with post_params parameter." 
[e2e-predictor] ) [e2e-predictor] [e2e-predictor] post_params = post_params or {} [e2e-predictor] headers = headers or {} [e2e-predictor] [e2e-predictor] timeout = None [e2e-predictor] if _request_timeout: [e2e-predictor] if isinstance(_request_timeout, (int, ) if six.PY3 else (int, long)): # noqa: E501,F821 [e2e-predictor] timeout = urllib3.Timeout(total=_request_timeout) [e2e-predictor] elif (isinstance(_request_timeout, tuple) and [e2e-predictor] len(_request_timeout) == 2): [e2e-predictor] timeout = urllib3.Timeout( [e2e-predictor] connect=_request_timeout[0], read=_request_timeout[1]) [e2e-predictor] [e2e-predictor] if 'Content-Type' not in headers: [e2e-predictor] headers['Content-Type'] = 'application/json' [e2e-predictor] [e2e-predictor] try: [e2e-predictor] # For `POST`, `PUT`, `PATCH`, `OPTIONS`, `DELETE` [e2e-predictor] if method in ['POST', 'PUT', 'PATCH', 'OPTIONS', 'DELETE']: [e2e-predictor] if query_params: [e2e-predictor] url += '?' + urlencode(query_params) [e2e-predictor] if (re.search('json', headers['Content-Type'], re.IGNORECASE) or [e2e-predictor] headers['Content-Type'] == 'application/apply-patch+yaml'): [e2e-predictor] if headers['Content-Type'] == 'application/json-patch+json': [e2e-predictor] if not isinstance(body, list): [e2e-predictor] headers['Content-Type'] = \ [e2e-predictor] 'application/strategic-merge-patch+json' [e2e-predictor] request_body = None [e2e-predictor] if body is not None: [e2e-predictor] request_body = json.dumps(body) [e2e-predictor] > r = self.pool_manager.request( [e2e-predictor] method, url, [e2e-predictor] body=request_body, [e2e-predictor] preload_content=_preload_content, [e2e-predictor] timeout=timeout, [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/rest.py:172: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] method = 'POST' [e2e-predictor] url = 
'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices' [e2e-predictor] body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io..., "requests": {"cpu": "50m", "memory": "128Mi"}}, "storageUri": "gs://kfserving-examples/models/xgboost/1.5/model"}}}}' [e2e-predictor] fields = None [e2e-predictor] headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'} [e2e-predictor] json = None [e2e-predictor] urlopen_kw = {'body': '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking....Mi"}}, "storageUri": "gs://kfserving-examples/models/xgboost/1.5/model"}}}}', 'preload_content': True, 'timeout': None} [e2e-predictor] [e2e-predictor] def request( [e2e-predictor] self, [e2e-predictor] method: str, [e2e-predictor] url: str, [e2e-predictor] body: _TYPE_BODY | None = None, [e2e-predictor] fields: _TYPE_FIELDS | None = None, [e2e-predictor] headers: typing.Mapping[str, str] | None = None, [e2e-predictor] json: typing.Any | None = None, [e2e-predictor] **urlopen_kw: typing.Any, [e2e-predictor] ) -> BaseHTTPResponse: [e2e-predictor] """ [e2e-predictor] Make a request using :meth:`urlopen` with the appropriate encoding of [e2e-predictor] ``fields`` based on the ``method`` used. [e2e-predictor] [e2e-predictor] This is a convenience method that requires the least amount of manual [e2e-predictor] effort. It can be used in most situations, while still having the [e2e-predictor] option to drop down to more specific methods when necessary, such as [e2e-predictor] :meth:`request_encode_url`, :meth:`request_encode_body`, [e2e-predictor] or even the lowest level :meth:`urlopen`. [e2e-predictor] [e2e-predictor] :param method: [e2e-predictor] HTTP request method (such as GET, POST, PUT, etc.) 
[e2e-predictor] [e2e-predictor] :param url: [e2e-predictor] The URL to perform the request on. [e2e-predictor] [e2e-predictor] :param body: [e2e-predictor] Data to send in the request body, either :class:`str`, :class:`bytes`, [e2e-predictor] an iterable of :class:`str`/:class:`bytes`, or a file-like object. [e2e-predictor] [e2e-predictor] :param fields: [e2e-predictor] Data to encode and send in the URL or request body, depending on ``method``. [e2e-predictor] [e2e-predictor] :param headers: [e2e-predictor] Dictionary of custom headers to send, such as User-Agent, [e2e-predictor] If-None-Match, etc. If None, pool headers are used. If provided, [e2e-predictor] these headers completely replace any pool-specific headers. [e2e-predictor] [e2e-predictor] :param json: [e2e-predictor] Data to encode and send as JSON with UTF-encoded in the request body. [e2e-predictor] The ``"Content-Type"`` header will be set to ``"application/json"`` [e2e-predictor] unless specified otherwise. [e2e-predictor] """ [e2e-predictor] method = method.upper() [e2e-predictor] [e2e-predictor] if json is not None and body is not None: [e2e-predictor] raise TypeError( [e2e-predictor] "request got values for both 'body' and 'json' parameters which are mutually exclusive" [e2e-predictor] ) [e2e-predictor] [e2e-predictor] if json is not None: [e2e-predictor] if headers is None: [e2e-predictor] headers = self.headers [e2e-predictor] [e2e-predictor] if not ("content-type" in map(str.lower, headers.keys())): [e2e-predictor] headers = HTTPHeaderDict(headers) [e2e-predictor] headers["Content-Type"] = "application/json" [e2e-predictor] [e2e-predictor] body = _json.dumps(json, separators=(",", ":"), ensure_ascii=False).encode( [e2e-predictor] "utf-8" [e2e-predictor] ) [e2e-predictor] [e2e-predictor] if body is not None: [e2e-predictor] urlopen_kw["body"] = body [e2e-predictor] [e2e-predictor] if method in self._encode_url_methods: [e2e-predictor] return self.request_encode_url( [e2e-predictor] method, 
[e2e-predictor] url, [e2e-predictor] fields=fields, # type: ignore[arg-type] [e2e-predictor] headers=headers, [e2e-predictor] **urlopen_kw, [e2e-predictor] ) [e2e-predictor] else: [e2e-predictor] > return self.request_encode_body( [e2e-predictor] method, url, fields=fields, headers=headers, **urlopen_kw [e2e-predictor] ) [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/_request_methods.py:143: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] method = 'POST' [e2e-predictor] url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices' [e2e-predictor] fields = None [e2e-predictor] headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'} [e2e-predictor] encode_multipart = True, multipart_boundary = None [e2e-predictor] urlopen_kw = {'body': '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking....Mi"}}, "storageUri": "gs://kfserving-examples/models/xgboost/1.5/model"}}}}', 'preload_content': True, 'timeout': None} [e2e-predictor] extra_kw = {'body': '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking....nt': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'}), 'preload_content': True, 'timeout': None} [e2e-predictor] [e2e-predictor] def request_encode_body( [e2e-predictor] self, [e2e-predictor] method: str, [e2e-predictor] url: str, [e2e-predictor] fields: _TYPE_FIELDS | None = None, [e2e-predictor] headers: typing.Mapping[str, str] | None = None, [e2e-predictor] encode_multipart: bool = True, [e2e-predictor] multipart_boundary: str | None = None, [e2e-predictor] **urlopen_kw: str, [e2e-predictor] ) -> 
BaseHTTPResponse: [e2e-predictor] """ [e2e-predictor] Make a request using :meth:`urlopen` with the ``fields`` encoded in [e2e-predictor] the body. This is useful for request methods like POST, PUT, PATCH, etc. [e2e-predictor] [e2e-predictor] When ``encode_multipart=True`` (default), then [e2e-predictor] :func:`urllib3.encode_multipart_formdata` is used to encode [e2e-predictor] the payload with the appropriate content type. Otherwise [e2e-predictor] :func:`urllib.parse.urlencode` is used with the [e2e-predictor] 'application/x-www-form-urlencoded' content type. [e2e-predictor] [e2e-predictor] Multipart encoding must be used when posting files, and it's reasonably [e2e-predictor] safe to use it in other times too. However, it may break request [e2e-predictor] signing, such as with OAuth. [e2e-predictor] [e2e-predictor] Supports an optional ``fields`` parameter of key/value strings AND [e2e-predictor] key/filetuple. A filetuple is a (filename, data, MIME type) tuple where [e2e-predictor] the MIME type is optional. For example:: [e2e-predictor] [e2e-predictor] fields = { [e2e-predictor] 'foo': 'bar', [e2e-predictor] 'fakefile': ('foofile.txt', 'contents of foofile'), [e2e-predictor] 'realfile': ('barfile.txt', open('realfile').read()), [e2e-predictor] 'typedfile': ('bazfile.bin', open('bazfile').read(), [e2e-predictor] 'image/jpeg'), [e2e-predictor] 'nonamefile': 'contents of nonamefile field', [e2e-predictor] } [e2e-predictor] [e2e-predictor] When uploading a file, providing a filename (the first parameter of the [e2e-predictor] tuple) is optional but recommended to best mimic behavior of browsers. [e2e-predictor] [e2e-predictor] Note that if ``headers`` are supplied, the 'Content-Type' header will [e2e-predictor] be overwritten because it depends on the dynamic random boundary string [e2e-predictor] which is used to compose the body of the request. The random boundary [e2e-predictor] string can be explicitly set with the ``multipart_boundary`` parameter. 
[e2e-predictor] [e2e-predictor] :param method: [e2e-predictor] HTTP request method (such as GET, POST, PUT, etc.) [e2e-predictor] [e2e-predictor] :param url: [e2e-predictor] The URL to perform the request on. [e2e-predictor] [e2e-predictor] :param fields: [e2e-predictor] Data to encode and send in the request body. [e2e-predictor] [e2e-predictor] :param headers: [e2e-predictor] Dictionary of custom headers to send, such as User-Agent, [e2e-predictor] If-None-Match, etc. If None, pool headers are used. If provided, [e2e-predictor] these headers completely replace any pool-specific headers. [e2e-predictor] [e2e-predictor] :param encode_multipart: [e2e-predictor] If True, encode the ``fields`` using the multipart/form-data MIME [e2e-predictor] format. [e2e-predictor] [e2e-predictor] :param multipart_boundary: [e2e-predictor] If not specified, then a random boundary will be generated using [e2e-predictor] :func:`urllib3.filepost.choose_boundary`. [e2e-predictor] """ [e2e-predictor] if headers is None: [e2e-predictor] headers = self.headers [e2e-predictor] [e2e-predictor] extra_kw: dict[str, typing.Any] = {"headers": HTTPHeaderDict(headers)} [e2e-predictor] body: bytes | str [e2e-predictor] [e2e-predictor] if fields: [e2e-predictor] if "body" in urlopen_kw: [e2e-predictor] raise TypeError( [e2e-predictor] "request got values for both 'fields' and 'body', can only specify one." 
            )

        if encode_multipart:
            body, content_type = encode_multipart_formdata(
                fields, boundary=multipart_boundary
            )
        else:
            body, content_type = (
                urlencode(fields),  # type: ignore[arg-type]
                "application/x-www-form-urlencoded",
            )

        extra_kw["body"] = body
        extra_kw["headers"].setdefault("Content-Type", content_type)

    extra_kw.update(urlopen_kw)

>   return self.urlopen(method, url, **extra_kw)

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/_request_methods.py:278:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
method = 'POST'
url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
redirect = True
kw = {'assert_same_host': False, 'body': '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata...', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'}), 'preload_content': True, ...}
u = Url(scheme='https', auth=None, host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', p...443, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices', query=None, fragment=None)
conn =

    def urlopen(  # type: ignore[override]
        self, method: str, url: str, redirect: bool = True, **kw: typing.Any
    ) -> BaseHTTPResponse:
        """
        Same as :meth:`urllib3.HTTPConnectionPool.urlopen`
        with custom cross-host redirect logic and only sends the request-uri
        portion of the ``url``.

        The given ``url`` parameter must be absolute, such that an appropriate
        :class:`urllib3.connectionpool.ConnectionPool` can be chosen for it.
        """
        u = parse_url(url)

        if u.scheme is None:
            warnings.warn(
                "URLs without a scheme (ie 'https://') are deprecated and will raise an error "
                "in a future version of urllib3. To avoid this DeprecationWarning ensure all URLs "
                "start with 'https://' or 'http://'. Read more in this issue: "
                "https://github.com/urllib3/urllib3/issues/2920",
                category=DeprecationWarning,
                stacklevel=2,
            )

        conn = self.connection_from_host(u.host, port=u.port, scheme=u.scheme)

        kw["assert_same_host"] = False
        kw["redirect"] = False

        if "headers" not in kw:
            kw["headers"] = self.headers

        if self._proxy_requires_url_absolute_form(u):
            response = conn.urlopen(method, url, **kw)
        else:
>           response = conn.urlopen(method, u.request_uri, **kw)

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/poolmanager.py:457:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
method = 'POST'
url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io..., "requests": {"cpu": "50m", "memory": "128Mi"}}, "storageUri": "gs://kfserving-examples/models/xgboost/1.5/model"}}}}'
headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'})
retries = Retry(total=2, connect=None, read=None, redirect=None, status=None)
redirect = False, assert_same_host = False, timeout = None, pool_timeout = None
release_conn = True, chunked = False, body_pos = None, preload_content = True
decode_content = True, response_kw = {}
parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices', query=None, fragment=None)
destination_scheme = None, conn = None, release_this_conn = True
http_tunnel_required = False
err = NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.c...a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")
clean_exit = False

    def urlopen(  # type: ignore[override]
        self,
        method: str,
        url: str,
        body: _TYPE_BODY | None = None,
        headers: typing.Mapping[str, str] | None = None,
        retries: Retry | bool | int | None = None,
        redirect: bool = True,
        assert_same_host: bool = True,
        timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
        pool_timeout: int | None = None,
        release_conn: bool | None = None,
        chunked: bool = False,
        body_pos: _TYPE_BODY_POSITION | None = None,
        preload_content: bool = True,
        decode_content: bool = True,
        **response_kw: typing.Any,
    ) -> BaseHTTPResponse:
        """
        Get a connection from the pool and perform an HTTP request. This is the
        lowest level call for making a request, so you'll need to specify all
        the raw details.

        .. note::

           More commonly, it's appropriate to use a convenience method
           such as :meth:`request`.

        .. note::

           `release_conn` will only behave as expected if
           `preload_content=False` because we want to make
           `preload_content=False` the default behaviour someday soon without
           breaking backwards compatibility.

        :param method:
            HTTP request method (such as GET, POST, PUT, etc.)

        :param url:
            The URL to perform the request on.

        :param body:
            Data to send in the request body, either :class:`str`, :class:`bytes`,
            an iterable of :class:`str`/:class:`bytes`, or a file-like object.

        :param headers:
            Dictionary of custom headers to send, such as User-Agent,
            If-None-Match, etc. If None, pool headers are used. If provided,
            these headers completely replace any pool-specific headers.

        :param retries:
            Configure the number of retries to allow before raising a
            :class:`~urllib3.exceptions.MaxRetryError` exception.

            If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a
            :class:`~urllib3.util.retry.Retry` object for fine-grained control
            over different types of retries.
            Pass an integer number to retry connection errors that many times,
            but no other types of errors. Pass zero to never retry.

            If ``False``, then retries are disabled and any exception is raised
            immediately. Also, instead of raising a MaxRetryError on redirects,
            the redirect response will be returned.

        :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.

        :param redirect:
            If True, automatically handle redirects (status codes 301, 302,
            303, 307, 308). Each redirect counts as a retry. Disabling retries
            will disable redirect, too.

        :param assert_same_host:
            If ``True``, will make sure that the host of the pool requests is
            consistent else will raise HostChangedError. When ``False``, you can
            use the pool on an HTTP proxy and request foreign hosts.

        :param timeout:
            If specified, overrides the default timeout for this one
            request. It may be a float (in seconds) or an instance of
            :class:`urllib3.util.Timeout`.

        :param pool_timeout:
            If set and the pool is set to block=True, then this method will
            block for ``pool_timeout`` seconds and raise EmptyPoolError if no
            connection is available within the time period.

        :param bool preload_content:
            If True, the response's body will be preloaded into memory.

        :param bool decode_content:
            If True, will attempt to decode the body based on the
            'content-encoding' header.
        :param release_conn:
            If False, then the urlopen call will not release the connection
            back into the pool once a response is received (but will release if
            you read the entire contents of the response such as when
            `preload_content=True`). This is useful if you're not preloading
            the response's content immediately. You will need to call
            ``r.release_conn()`` on the response ``r`` to return the connection
            back into the pool. If None, it takes the value of ``preload_content``
            which defaults to ``True``.

        :param bool chunked:
            If True, urllib3 will send the body using chunked transfer
            encoding. Otherwise, urllib3 will send the body using the standard
            content-length form. Defaults to False.

        :param int body_pos:
            Position to seek to in file-like body in the event of a retry or
            redirect. Typically this won't need to be set because urllib3 will
            auto-populate the value when needed.
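The retry accounting described above is exactly what this failure shows: the frames repeat with `Retry(total=2)`, then `total=1`, then `total=0`, each decrement caused by the same `NameResolutionError` ([Errno -2] Name or service not known) for the ELB hostname, until the budget is exhausted. A rough sketch of that countdown; the `None` connection argument and the shortened URL are placeholders for this illustration, not values from the run:

```python
from urllib3.util.retry import Retry
from urllib3.exceptions import MaxRetryError, NewConnectionError

# Same retry configuration as the failing request in the log.
retry = Retry(total=2, connect=None, read=None, redirect=None, status=None)

# Stand-in for the NameResolutionError seen in the log (it is a
# NewConnectionError subclass, so it counts as a connection error).
err = NewConnectionError(None, "[Errno -2] Name or service not known")

attempts = 0
try:
    while True:
        attempts += 1
        # increment() returns a new Retry with the budget reduced by one,
        # and raises MaxRetryError once the budget is exhausted.
        retry = retry.increment("POST", "/apis/serving.kserve.io/...", error=err)
except MaxRetryError:
    pass

print(attempts)  # 3 failed attempts: total goes 2 -> 1 -> 0 -> raise
```

This is why the traceback repeats the same `urlopen` frame three times before the test gives up: each DNS failure recurses via `self.urlopen(...)` with a decremented `Retry`.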
        """
        parsed_url = parse_url(url)
        destination_scheme = parsed_url.scheme

        if headers is None:
            headers = self.headers

        if not isinstance(retries, Retry):
            retries = Retry.from_int(retries, redirect=redirect, default=self.retries)

        if release_conn is None:
            release_conn = preload_content

        # Check host
        if assert_same_host and not self.is_same_host(url):
            raise HostChangedError(self, url, retries)

        # Ensure that the URL we're connecting to is properly encoded
        if url.startswith("/"):
            url = to_str(_encode_target(url))
        else:
            url = to_str(parsed_url.url)

        conn = None

        # Track whether `conn` needs to be released before
        # returning/raising/recursing. Update this variable if necessary, and
        # leave `release_conn` constant throughout the function. That way, if
        # the function recurses, the original value of `release_conn` will be
        # passed down into the recursive call, and its value will be respected.
        #
        # See issue #651 [1] for details.
        #
        # [1]
        release_this_conn = release_conn

        http_tunnel_required = connection_requires_http_tunnel(
            self.proxy, self.proxy_config, destination_scheme
        )

        # Merge the proxy headers. Only done when not using HTTP CONNECT. We
        # have to copy the headers dict so we can safely change it without those
        # changes being reflected in anyone else's copy.
        if not http_tunnel_required:
            headers = headers.copy()  # type: ignore[attr-defined]
            headers.update(self.proxy_headers)  # type: ignore[union-attr]

        # Must keep the exception bound to a separate variable or else Python 3
        # complains about UnboundLocalError.
        err = None

        # Keep track of whether we cleanly exited the except block. This
        # ensures we do proper cleanup in finally.
        clean_exit = False

        # Rewind body position, if needed. Record current position
        # for future rewinds in the event of a redirect/retry.
        body_pos = set_file_position(body, body_pos)

        try:
            # Request a connection from the queue.
            timeout_obj = self._get_timeout(timeout)
            conn = self._get_conn(timeout=pool_timeout)

            conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]

            # Is this a closed/new connection that requires CONNECT tunnelling?
            if self.proxy is not None and http_tunnel_required and conn.is_closed:
                try:
                    self._prepare_proxy(conn)
                except (BaseSSLError, OSError, SocketTimeout) as e:
                    self._raise_timeout(
                        err=e, url=self.proxy.url, timeout_value=conn.timeout
                    )
                    raise

            # If we're going to release the connection in ``finally:``, then
            # the response doesn't need to know about the connection. Otherwise
            # it will also try to release it and we'll have a double-release
            # mess.
            response_conn = conn if not release_conn else None

            # Make the request on the HTTPConnection object
            response = self._make_request(
                conn,
                method,
                url,
                timeout=timeout_obj,
                body=body,
                headers=headers,
                chunked=chunked,
                retries=retries,
                response_conn=response_conn,
                preload_content=preload_content,
                decode_content=decode_content,
                **response_kw,
            )

            # Everything went great!
            clean_exit = True

        except EmptyPoolError:
            # Didn't get a connection from the pool, no need to clean up
            clean_exit = True
            release_this_conn = False
            raise

        except (
            TimeoutError,
            HTTPException,
            OSError,
            ProtocolError,
            BaseSSLError,
            SSLError,
            CertificateError,
            ProxyError,
        ) as e:
            # Discard the connection for these exceptions. It will be
            # replaced during the next _get_conn() call.
            clean_exit = False
            new_e: Exception = e
            if isinstance(e, (BaseSSLError, CertificateError)):
                new_e = SSLError(e)
            if isinstance(
                new_e,
                (
                    OSError,
                    NewConnectionError,
                    TimeoutError,
                    SSLError,
                    HTTPException,
                ),
            ) and (conn and conn.proxy and not conn.has_connected_to_proxy):
                new_e = _wrap_proxy_error(new_e, conn.proxy.scheme)
            elif isinstance(new_e, (OSError, HTTPException)):
                new_e = ProtocolError("Connection aborted.", new_e)

            retries = retries.increment(
                method, url, error=new_e, _pool=self, _stacktrace=sys.exc_info()[2]
            )
            retries.sleep()

            # Keep track of the error for the retry warning.
            err = e

        finally:
            if not clean_exit:
                # We hit some kind of exception, handled or otherwise. We need
                # to throw the connection away unless explicitly told not to.
                # Close the connection, set the variable to None, and make sure
                # we put the None back in the pool to avoid leaking it.
                if conn:
                    conn.close()
                    conn = None
                release_this_conn = True

            if release_this_conn:
                # Put the connection back to be reused. If the connection is
                # expired then it will be None, which will get replaced with a
                # fresh connection during _get_conn.
                self._put_conn(conn)

        if not conn:
            # Try again
            log.warning(
                "Retrying (%r) after connection broken by '%r': %s", retries, err, url
            )
>           return self.urlopen(
                method,
                url,
                body,
                headers,
                retries,
                redirect,
                assert_same_host,
                timeout=timeout,
                pool_timeout=pool_timeout,
                release_conn=release_conn,
                chunked=chunked,
                body_pos=body_pos,
                preload_content=preload_content,
                decode_content=decode_content,
                **response_kw,
            )

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:871:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
method = 'POST'
url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io..., "requests": {"cpu": "50m", "memory": "128Mi"}}, "storageUri": "gs://kfserving-examples/models/xgboost/1.5/model"}}}}'
headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'})
retries = Retry(total=1, connect=None, read=None, redirect=None, status=None)
redirect = False, assert_same_host = False, timeout = None, pool_timeout = None
release_conn = True, chunked = False, body_pos = None, preload_content = True
decode_content = True, response_kw = {}
parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices', query=None, fragment=None)
destination_scheme = None, conn = None, release_this_conn = True
http_tunnel_required = False
err = NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.c...a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")
clean_exit = False

    def urlopen(  # type: ignore[override]
        self,
        method: str,
        url: str,
        body: _TYPE_BODY | None = None,
        headers: typing.Mapping[str, str] | None = None,
        retries: Retry | bool | int | None = None,
        redirect: bool = True,
        assert_same_host: bool = True,
        timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
        pool_timeout: int | None = None,
        release_conn: bool | None = None,
        chunked: bool = False,
        body_pos: _TYPE_BODY_POSITION | None = None,
        preload_content: bool = True,
        decode_content: bool = True,
        **response_kw: typing.Any,
    ) -> BaseHTTPResponse:
        """
        Get a connection from the pool and perform an HTTP request. This is the
        lowest level call for making a request, so you'll need to specify all
        the raw details.

        .. note::

           More commonly, it's appropriate to use a convenience method
           such as :meth:`request`.

        .. note::

           `release_conn` will only behave as expected if
           `preload_content=False` because we want to make
           `preload_content=False` the default behaviour someday soon without
           breaking backwards compatibility.

        :param method:
            HTTP request method (such as GET, POST, PUT, etc.)

        :param url:
            The URL to perform the request on.

        :param body:
            Data to send in the request body, either :class:`str`, :class:`bytes`,
            an iterable of :class:`str`/:class:`bytes`, or a file-like object.

        :param headers:
            Dictionary of custom headers to send, such as User-Agent,
            If-None-Match, etc. If None, pool headers are used. If provided,
            these headers completely replace any pool-specific headers.

        :param retries:
            Configure the number of retries to allow before raising a
            :class:`~urllib3.exceptions.MaxRetryError` exception.

            If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a
            :class:`~urllib3.util.retry.Retry` object for fine-grained control
            over different types of retries.
            Pass an integer number to retry connection errors that many times,
            but no other types of errors. Pass zero to never retry.

            If ``False``, then retries are disabled and any exception is raised
            immediately. Also, instead of raising a MaxRetryError on redirects,
            the redirect response will be returned.

        :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
        :param redirect:
            If True, automatically handle redirects (status codes 301, 302,
            303, 307, 308). Each redirect counts as a retry. Disabling retries
            will disable redirect, too.

        :param assert_same_host:
            If ``True``, will make sure that the host of the pool requests is
            consistent else will raise HostChangedError. When ``False``, you can
            use the pool on an HTTP proxy and request foreign hosts.

        :param timeout:
            If specified, overrides the default timeout for this one
            request. It may be a float (in seconds) or an instance of
            :class:`urllib3.util.Timeout`.

        :param pool_timeout:
            If set and the pool is set to block=True, then this method will
            block for ``pool_timeout`` seconds and raise EmptyPoolError if no
            connection is available within the time period.

        :param bool preload_content:
            If True, the response's body will be preloaded into memory.

        :param bool decode_content:
            If True, will attempt to decode the body based on the
            'content-encoding' header.

        :param release_conn:
            If False, then the urlopen call will not release the connection
            back into the pool once a response is received (but will release if
            you read the entire contents of the response such as when
            `preload_content=True`). This is useful if you're not preloading
            the response's content immediately. You will need to call
            ``r.release_conn()`` on the response ``r`` to return the connection
            back into the pool. If None, it takes the value of ``preload_content``
            which defaults to ``True``.

        :param bool chunked:
            If True, urllib3 will send the body using chunked transfer
            encoding. Otherwise, urllib3 will send the body using the standard
            content-length form. Defaults to False.

        :param int body_pos:
            Position to seek to in file-like body in the event of a retry or
            redirect. Typically this won't need to be set because urllib3 will
            auto-populate the value when needed.
        """
        parsed_url = parse_url(url)
        destination_scheme = parsed_url.scheme

        if headers is None:
            headers = self.headers

        if not isinstance(retries, Retry):
            retries = Retry.from_int(retries, redirect=redirect, default=self.retries)

        if release_conn is None:
            release_conn = preload_content

        # Check host
        if assert_same_host and not self.is_same_host(url):
            raise HostChangedError(self, url, retries)

        # Ensure that the URL we're connecting to is properly encoded
        if url.startswith("/"):
            url = to_str(_encode_target(url))
        else:
            url = to_str(parsed_url.url)

        conn = None

        # Track whether `conn` needs to be released before
        # returning/raising/recursing. Update this variable if necessary, and
        # leave `release_conn` constant throughout the function. That way, if
        # the function recurses, the original value of `release_conn` will be
        # passed down into the recursive call, and its value will be respected.
        #
        # See issue #651 [1] for details.
        #
        # [1]
        release_this_conn = release_conn

        http_tunnel_required = connection_requires_http_tunnel(
            self.proxy, self.proxy_config, destination_scheme
        )

        # Merge the proxy headers. Only done when not using HTTP CONNECT. We
        # have to copy the headers dict so we can safely change it without those
        # changes being reflected in anyone else's copy.
        if not http_tunnel_required:
            headers = headers.copy()  # type: ignore[attr-defined]
            headers.update(self.proxy_headers)  # type: ignore[union-attr]

        # Must keep the exception bound to a separate variable or else Python 3
        # complains about UnboundLocalError.
        err = None

        # Keep track of whether we cleanly exited the except block. This
        # ensures we do proper cleanup in finally.
        clean_exit = False

        # Rewind body position, if needed. Record current position
        # for future rewinds in the event of a redirect/retry.
        body_pos = set_file_position(body, body_pos)

        try:
            # Request a connection from the queue.
            timeout_obj = self._get_timeout(timeout)
            conn = self._get_conn(timeout=pool_timeout)

            conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]

            # Is this a closed/new connection that requires CONNECT tunnelling?
            if self.proxy is not None and http_tunnel_required and conn.is_closed:
                try:
                    self._prepare_proxy(conn)
                except (BaseSSLError, OSError, SocketTimeout) as e:
                    self._raise_timeout(
                        err=e, url=self.proxy.url, timeout_value=conn.timeout
                    )
                    raise

            # If we're going to release the connection in ``finally:``, then
            # the response doesn't need to know about the connection. Otherwise
            # it will also try to release it and we'll have a double-release
            # mess.
            response_conn = conn if not release_conn else None

            # Make the request on the HTTPConnection object
            response = self._make_request(
                conn,
                method,
                url,
                timeout=timeout_obj,
                body=body,
                headers=headers,
                chunked=chunked,
                retries=retries,
                response_conn=response_conn,
                preload_content=preload_content,
                decode_content=decode_content,
                **response_kw,
            )

            # Everything went great!
            clean_exit = True

        except EmptyPoolError:
            # Didn't get a connection from the pool, no need to clean up
            clean_exit = True
            release_this_conn = False
            raise

        except (
            TimeoutError,
            HTTPException,
            OSError,
            ProtocolError,
            BaseSSLError,
            SSLError,
            CertificateError,
            ProxyError,
        ) as e:
            # Discard the connection for these exceptions. It will be
            # replaced during the next _get_conn() call.
            clean_exit = False
            new_e: Exception = e
            if isinstance(e, (BaseSSLError, CertificateError)):
                new_e = SSLError(e)
            if isinstance(
                new_e,
                (
                    OSError,
                    NewConnectionError,
                    TimeoutError,
                    SSLError,
                    HTTPException,
                ),
            ) and (conn and conn.proxy and not conn.has_connected_to_proxy):
                new_e = _wrap_proxy_error(new_e, conn.proxy.scheme)
            elif isinstance(new_e, (OSError, HTTPException)):
                new_e = ProtocolError("Connection aborted.", new_e)

            retries = retries.increment(
                method, url, error=new_e, _pool=self, _stacktrace=sys.exc_info()[2]
            )
            retries.sleep()

            # Keep track of the error for the retry warning.
            err = e

        finally:
            if not clean_exit:
                # We hit some kind of exception, handled or otherwise. We need
                # to throw the connection away unless explicitly told not to.
                # Close the connection, set the variable to None, and make sure
                # we put the None back in the pool to avoid leaking it.
                if conn:
                    conn.close()
                    conn = None
                release_this_conn = True

            if release_this_conn:
                # Put the connection back to be reused. If the connection is
                # expired then it will be None, which will get replaced with a
                # fresh connection during _get_conn.
                self._put_conn(conn)

        if not conn:
            # Try again
            log.warning(
                "Retrying (%r) after connection broken by '%r': %s", retries, err, url
            )
>           return self.urlopen(
                method,
                url,
                body,
                headers,
                retries,
                redirect,
                assert_same_host,
                timeout=timeout,
                pool_timeout=pool_timeout,
                release_conn=release_conn,
                chunked=chunked,
                body_pos=body_pos,
                preload_content=preload_content,
                decode_content=decode_content,
                **response_kw,
            )

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:871:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
method = 'POST'
url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io..., "requests": {"cpu": "50m", "memory": "128Mi"}}, "storageUri": "gs://kfserving-examples/models/xgboost/1.5/model"}}}}'
headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'})
retries = Retry(total=0, connect=None, read=None, redirect=None, status=None)
redirect = False, assert_same_host = False, timeout = None, pool_timeout = None
release_conn = True, chunked = False, body_pos = None, preload_content = True
decode_content = True, response_kw = {}
parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices', query=None, fragment=None)
destination_scheme = None, conn = None, release_this_conn = True
http_tunnel_required = False
err = NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.c...a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")
clean_exit = False

    def urlopen(  # type: ignore[override]
        self,
        method: str,
        url: str,
        body: _TYPE_BODY | None = None,
        headers: typing.Mapping[str, str] | None = None,
        retries: Retry | bool | int | None = None,
        redirect: bool = True,
        assert_same_host: bool = True,
        timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
        pool_timeout: int | None = None,
        release_conn: bool | None = None,
        chunked: bool = False,
        body_pos: _TYPE_BODY_POSITION | None = None,
        preload_content: bool = True,
        decode_content: bool = True,
        **response_kw: typing.Any,
    ) -> BaseHTTPResponse:
        """
        Get a connection from the pool and perform an HTTP request. This is the
        lowest level call for making a request, so you'll need to specify all
        the raw details.

        .. note::

           More commonly, it's appropriate to use a convenience method
           such as :meth:`request`.

        .. note::

           `release_conn` will only behave as expected if
           `preload_content=False` because we want to make
           `preload_content=False` the default behaviour someday soon without
           breaking backwards compatibility.

        :param method:
            HTTP request method (such as GET, POST, PUT, etc.)

        :param url:
            The URL to perform the request on.

        :param body:
            Data to send in the request body, either :class:`str`, :class:`bytes`,
            an iterable of :class:`str`/:class:`bytes`, or a file-like object.

        :param headers:
            Dictionary of custom headers to send, such as User-Agent,
            If-None-Match, etc. If None, pool headers are used. If provided,
            these headers completely replace any pool-specific headers.

        :param retries:
            Configure the number of retries to allow before raising a
            :class:`~urllib3.exceptions.MaxRetryError` exception.

            If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a
            :class:`~urllib3.util.retry.Retry` object for fine-grained control
            over different types of retries.
            Pass an integer number to retry connection errors that many times,
            but no other types of errors. Pass zero to never retry.

            If ``False``, then retries are disabled and any exception is raised
            immediately. Also, instead of raising a MaxRetryError on redirects,
            the redirect response will be returned.

        :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.

        :param redirect:
            If True, automatically handle redirects (status codes 301, 302,
            303, 307, 308). Each redirect counts as a retry. Disabling retries
            will disable redirect, too.

        :param assert_same_host:
            If ``True``, will make sure that the host of the pool requests is
            consistent else will raise HostChangedError. When ``False``, you can
            use the pool on an HTTP proxy and request foreign hosts.

        :param timeout:
            If specified, overrides the default timeout for this one
            request. It may be a float (in seconds) or an instance of
            :class:`urllib3.util.Timeout`.

        :param pool_timeout:
            If set and the pool is set to block=True, then this method will
            block for ``pool_timeout`` seconds and raise EmptyPoolError if no
            connection is available within the time period.

        :param bool preload_content:
            If True, the response's body will be preloaded into memory.

        :param bool decode_content:
            If True, will attempt to decode the body based on the
            'content-encoding' header.

        :param release_conn:
            If False, then the urlopen call will not release the connection
            back into the pool once a response is received (but will release if
            you read the entire contents of the response such as when
            `preload_content=True`). This is useful if you're not preloading
            the response's content immediately. You will need to call
            ``r.release_conn()`` on the response ``r`` to return the connection
            back into the pool. If None, it takes the value of ``preload_content``
            which defaults to ``True``.

        :param bool chunked:
            If True, urllib3 will send the body using chunked transfer
            encoding. Otherwise, urllib3 will send the body using the standard
            content-length form. Defaults to False.

        :param int body_pos:
            Position to seek to in file-like body in the event of a retry or
            redirect. Typically this won't need to be set because urllib3 will
            auto-populate the value when needed.
        """
        parsed_url = parse_url(url)
        destination_scheme = parsed_url.scheme

        if headers is None:
            headers = self.headers

        if not isinstance(retries, Retry):
            retries = Retry.from_int(retries, redirect=redirect, default=self.retries)

        if release_conn is None:
            release_conn = preload_content

        # Check host
        if assert_same_host and not self.is_same_host(url):
            raise HostChangedError(self, url, retries)

        # Ensure that the URL we're connecting to is properly encoded
        if url.startswith("/"):
            url = to_str(_encode_target(url))
        else:
            url = to_str(parsed_url.url)

        conn = None

        # Track whether `conn` needs to be released before
        # returning/raising/recursing. Update this variable if necessary, and
        # leave `release_conn` constant throughout the function. That way, if
        # the function recurses, the original value of `release_conn` will be
        # passed down into the recursive call, and its value will be respected.
        #
        # See issue #651 [1] for details.
        #
        # [1]
        release_this_conn = release_conn

        http_tunnel_required = connection_requires_http_tunnel(
            self.proxy, self.proxy_config, destination_scheme
        )

        # Merge the proxy headers. Only done when not using HTTP CONNECT. We
        # have to copy the headers dict so we can safely change it without those
        # changes being reflected in anyone else's copy.
        if not http_tunnel_required:
            headers = headers.copy()  # type: ignore[attr-defined]
            headers.update(self.proxy_headers)  # type: ignore[union-attr]

        # Must keep the exception bound to a separate variable or else Python 3
        # complains about UnboundLocalError.
        err = None

        # Keep track of whether we cleanly exited the except block. This
        # ensures we do proper cleanup in finally.
        clean_exit = False

        # Rewind body position, if needed. Record current position
        # for future rewinds in the event of a redirect/retry.
        body_pos = set_file_position(body, body_pos)

        try:
            # Request a connection from the queue.
            timeout_obj = self._get_timeout(timeout)
            conn = self._get_conn(timeout=pool_timeout)

            conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]

            # Is this a closed/new connection that requires CONNECT tunnelling?
            if self.proxy is not None and http_tunnel_required and conn.is_closed:
                try:
                    self._prepare_proxy(conn)
                except (BaseSSLError, OSError, SocketTimeout) as e:
                    self._raise_timeout(
                        err=e, url=self.proxy.url, timeout_value=conn.timeout
                    )
                    raise

            # If we're going to release the connection in ``finally:``, then
            # the response doesn't need to know about the connection. Otherwise
            # it will also try to release it and we'll have a double-release
            # mess.
            response_conn = conn if not release_conn else None

            # Make the request on the HTTPConnection object
            response = self._make_request(
                conn,
                method,
                url,
                timeout=timeout_obj,
                body=body,
                headers=headers,
                chunked=chunked,
                retries=retries,
                response_conn=response_conn,
                preload_content=preload_content,
                decode_content=decode_content,
                **response_kw,
            )

            # Everything went great!
            clean_exit = True

        except EmptyPoolError:
            # Didn't get a connection from the pool, no need to clean up
            clean_exit = True
            release_this_conn = False
            raise

        except (
            TimeoutError,
            HTTPException,
            OSError,
            ProtocolError,
            BaseSSLError,
            SSLError,
            CertificateError,
            ProxyError,
        ) as e:
            # Discard the connection for these exceptions. It will be
            # replaced during the next _get_conn() call.
            clean_exit = False
            new_e: Exception = e
            if isinstance(e, (BaseSSLError, CertificateError)):
                new_e = SSLError(e)
            if isinstance(
                new_e,
                (
                    OSError,
                    NewConnectionError,
                    TimeoutError,
                    SSLError,
                    HTTPException,
                ),
            ) and (conn and conn.proxy and not conn.has_connected_to_proxy):
                new_e = _wrap_proxy_error(new_e, conn.proxy.scheme)
            elif isinstance(new_e, (OSError, HTTPException)):
                new_e = ProtocolError("Connection aborted.", new_e)

            retries = retries.increment(
                method, url, error=new_e, _pool=self, _stacktrace=sys.exc_info()[2]
            )
            retries.sleep()

            # Keep track of the error for the retry warning.
            err = e

        finally:
            if not clean_exit:
                # We hit some kind of exception, handled or otherwise. We need
                # to throw the connection away unless explicitly told not to.
                # Close the connection, set the variable to None, and make sure
                # we put the None back in the pool to avoid leaking it.
                if conn:
                    conn.close()
                    conn = None
                release_this_conn = True

            if release_this_conn:
                # Put the connection back to be reused. If the connection is
                # expired then it will be None, which will get replaced with a
                # fresh connection during _get_conn.
                self._put_conn(conn)

        if not conn:
            # Try again
            log.warning(
                "Retrying (%r) after connection broken by '%r': %s", retries, err, url
            )
>           return self.urlopen(
                method,
                url,
                body,
                headers,
                retries,
                redirect,
                assert_same_host,
                timeout=timeout,
                pool_timeout=pool_timeout,
                release_conn=release_conn,
                chunked=chunked,
                body_pos=body_pos,
                preload_content=preload_content,
                decode_content=decode_content,
                **response_kw,
            )

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:871:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
method = 'POST'
url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io..., "requests": {"cpu": "50m", "memory": "128Mi"}}, "storageUri": "gs://kfserving-examples/models/xgboost/1.5/model"}}}}'
headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'})
retries = Retry(total=0, connect=None, read=None, redirect=None, status=None)
redirect = False, assert_same_host = False, timeout = None, pool_timeout = None
release_conn = True, chunked = False, body_pos = None, preload_content = True
decode_content = True, response_kw = {}
parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices', query=None, fragment=None)
destination_scheme = None, conn = None, release_this_conn = True
http_tunnel_required = False, err = None, clean_exit = False

    def urlopen(  # type: ignore[override]
        self,
        method: str,
        url: str,
        body: _TYPE_BODY | None = None,
        headers: typing.Mapping[str, str] | None = None,
        retries: Retry | bool | int | None = None,
        redirect: bool = True,
        assert_same_host: bool = True,
        timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
        pool_timeout: int | None = None,
        release_conn: bool | None = None,
        chunked: bool = False,
        body_pos: _TYPE_BODY_POSITION | None = None,
        preload_content: bool = True,
        decode_content: bool = True,
        **response_kw: typing.Any,
    ) -> BaseHTTPResponse:
        """
        Get a connection from the pool and perform an HTTP request. This is the
        lowest level call for making a request, so you'll need to specify all
        the raw details.

        .. note::

           More commonly, it's appropriate to use a convenience method
           such as :meth:`request`.

        .. note::

           `release_conn` will only behave as expected if
           `preload_content=False` because we want to make
           `preload_content=False` the default behaviour someday soon without
           breaking backwards compatibility.

        :param method:
            HTTP request method (such as GET, POST, PUT, etc.)

        :param url:
            The URL to perform the request on.

        :param body:
            Data to send in the request body, either :class:`str`, :class:`bytes`,
            an iterable of :class:`str`/:class:`bytes`, or a file-like object.

        :param headers:
            Dictionary of custom headers to send, such as User-Agent,
            If-None-Match, etc. If None, pool headers are used. If provided,
            these headers completely replace any pool-specific headers.

        :param retries:
            Configure the number of retries to allow before raising a
            :class:`~urllib3.exceptions.MaxRetryError` exception.

            If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a
            :class:`~urllib3.util.retry.Retry` object for fine-grained control
            over different types of retries.
            Pass an integer number to retry connection errors that many times,
            but no other types of errors. Pass zero to never retry.

            If ``False``, then retries are disabled and any exception is raised
            immediately. Also, instead of raising a MaxRetryError on redirects,
            the redirect response will be returned.

        :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.

        :param redirect:
            If True, automatically handle redirects (status codes 301, 302,
            303, 307, 308). Each redirect counts as a retry. Disabling retries
            will disable redirect, too.

        :param assert_same_host:
            If ``True``, will make sure that the host of the pool requests is
            consistent else will raise HostChangedError. When ``False``, you can
            use the pool on an HTTP proxy and request foreign hosts.

        :param timeout:
            If specified, overrides the default timeout for this one
            request. It may be a float (in seconds) or an instance of
            :class:`urllib3.util.Timeout`.

        :param pool_timeout:
            If set and the pool is set to block=True, then this method will
            block for ``pool_timeout`` seconds and raise EmptyPoolError if no
            connection is available within the time period.

        :param bool preload_content:
            If True, the response's body will be preloaded into memory.

        :param bool decode_content:
            If True, will attempt to decode the body based on the
            'content-encoding' header.

        :param release_conn:
            If False, then the urlopen call will not release the connection
            back into the pool once a response is received (but will release if
            you read the entire contents of the response such as when
            `preload_content=True`). This is useful if you're not preloading
            the response's content immediately. You will need to call
            ``r.release_conn()`` on the response ``r`` to return the connection
            back into the pool. If None, it takes the value of ``preload_content``
            which defaults to ``True``.

        :param bool chunked:
            If True, urllib3 will send the body using chunked transfer
            encoding. Otherwise, urllib3 will send the body using the standard
            content-length form. Defaults to False.

        :param int body_pos:
            Position to seek to in file-like body in the event of a retry or
            redirect. Typically this won't need to be set because urllib3 will
            auto-populate the value when needed.
        """
        parsed_url = parse_url(url)
        destination_scheme = parsed_url.scheme

        if headers is None:
            headers = self.headers

        if not isinstance(retries, Retry):
            retries = Retry.from_int(retries, redirect=redirect, default=self.retries)

        if release_conn is None:
            release_conn = preload_content

        # Check host
        if assert_same_host and not self.is_same_host(url):
            raise HostChangedError(self, url, retries)

        # Ensure that the URL we're connecting to is properly encoded
        if url.startswith("/"):
            url = to_str(_encode_target(url))
        else:
            url = to_str(parsed_url.url)

        conn = None

        # Track whether `conn` needs to be released before
        # returning/raising/recursing. Update this variable if necessary, and
        # leave `release_conn` constant throughout the function. That way, if
        # the function recurses, the original value of `release_conn` will be
        # passed down into the recursive call, and its value will be respected.
        #
        # See issue #651 [1] for details.
        #
        # [1]
        release_this_conn = release_conn

        http_tunnel_required = connection_requires_http_tunnel(
            self.proxy, self.proxy_config, destination_scheme
        )

        # Merge the proxy headers. Only done when not using HTTP CONNECT. We
        # have to copy the headers dict so we can safely change it without those
        # changes being reflected in anyone else's copy.
        if not http_tunnel_required:
            headers = headers.copy()  # type: ignore[attr-defined]
            headers.update(self.proxy_headers)  # type: ignore[union-attr]

        # Must keep the exception bound to a separate variable or else Python 3
        # complains about UnboundLocalError.
        err = None

        # Keep track of whether we cleanly exited the except block. This
        # ensures we do proper cleanup in finally.
        clean_exit = False

        # Rewind body position, if needed. Record current position
        # for future rewinds in the event of a redirect/retry.
        body_pos = set_file_position(body, body_pos)

        try:
            # Request a connection from the queue.
            timeout_obj = self._get_timeout(timeout)
            conn = self._get_conn(timeout=pool_timeout)

            conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]

            # Is this a closed/new connection that requires CONNECT tunnelling?
[e2e-predictor] if self.proxy is not None and http_tunnel_required and conn.is_closed: [e2e-predictor] try: [e2e-predictor] self._prepare_proxy(conn) [e2e-predictor] except (BaseSSLError, OSError, SocketTimeout) as e: [e2e-predictor] self._raise_timeout( [e2e-predictor] err=e, url=self.proxy.url, timeout_value=conn.timeout [e2e-predictor] ) [e2e-predictor] raise [e2e-predictor] [e2e-predictor] # If we're going to release the connection in ``finally:``, then [e2e-predictor] # the response doesn't need to know about the connection. Otherwise [e2e-predictor] # it will also try to release it and we'll have a double-release [e2e-predictor] # mess. [e2e-predictor] response_conn = conn if not release_conn else None [e2e-predictor] [e2e-predictor] # Make the request on the HTTPConnection object [e2e-predictor] response = self._make_request( [e2e-predictor] conn, [e2e-predictor] method, [e2e-predictor] url, [e2e-predictor] timeout=timeout_obj, [e2e-predictor] body=body, [e2e-predictor] headers=headers, [e2e-predictor] chunked=chunked, [e2e-predictor] retries=retries, [e2e-predictor] response_conn=response_conn, [e2e-predictor] preload_content=preload_content, [e2e-predictor] decode_content=decode_content, [e2e-predictor] **response_kw, [e2e-predictor] ) [e2e-predictor] [e2e-predictor] # Everything went great! [e2e-predictor] clean_exit = True [e2e-predictor] [e2e-predictor] except EmptyPoolError: [e2e-predictor] # Didn't get a connection from the pool, no need to clean up [e2e-predictor] clean_exit = True [e2e-predictor] release_this_conn = False [e2e-predictor] raise [e2e-predictor] [e2e-predictor] except ( [e2e-predictor] TimeoutError, [e2e-predictor] HTTPException, [e2e-predictor] OSError, [e2e-predictor] ProtocolError, [e2e-predictor] BaseSSLError, [e2e-predictor] SSLError, [e2e-predictor] CertificateError, [e2e-predictor] ProxyError, [e2e-predictor] ) as e: [e2e-predictor] # Discard the connection for these exceptions. 
It will be [e2e-predictor] # replaced during the next _get_conn() call. [e2e-predictor] clean_exit = False [e2e-predictor] new_e: Exception = e [e2e-predictor] if isinstance(e, (BaseSSLError, CertificateError)): [e2e-predictor] new_e = SSLError(e) [e2e-predictor] if isinstance( [e2e-predictor] new_e, [e2e-predictor] ( [e2e-predictor] OSError, [e2e-predictor] NewConnectionError, [e2e-predictor] TimeoutError, [e2e-predictor] SSLError, [e2e-predictor] HTTPException, [e2e-predictor] ), [e2e-predictor] ) and (conn and conn.proxy and not conn.has_connected_to_proxy): [e2e-predictor] new_e = _wrap_proxy_error(new_e, conn.proxy.scheme) [e2e-predictor] elif isinstance(new_e, (OSError, HTTPException)): [e2e-predictor] new_e = ProtocolError("Connection aborted.", new_e) [e2e-predictor] [e2e-predictor] > retries = retries.increment( [e2e-predictor] method, url, error=new_e, _pool=self, _stacktrace=sys.exc_info()[2] [e2e-predictor] ) [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:841: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = Retry(total=0, connect=None, read=None, redirect=None, status=None) [e2e-predictor] method = 'POST' [e2e-predictor] url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices' [e2e-predictor] response = None [e2e-predictor] error = NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.c...a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)") [e2e-predictor] _pool = [e2e-predictor] _stacktrace = [e2e-predictor] [e2e-predictor] def increment( [e2e-predictor] self, [e2e-predictor] method: str | None = None, [e2e-predictor] url: str | None = None, [e2e-predictor] response: BaseHTTPResponse | None = None, [e2e-predictor] error: Exception | None = 
None, [e2e-predictor] _pool: ConnectionPool | None = None, [e2e-predictor] _stacktrace: TracebackType | None = None, [e2e-predictor] ) -> Self: [e2e-predictor] """Return a new Retry object with incremented retry counters. [e2e-predictor] [e2e-predictor] :param response: A response object, or None, if the server did not [e2e-predictor] return a response. [e2e-predictor] :type response: :class:`~urllib3.response.BaseHTTPResponse` [e2e-predictor] :param Exception error: An error encountered during the request, or [e2e-predictor] None if the response was received successfully. [e2e-predictor] [e2e-predictor] :return: A new ``Retry`` object. [e2e-predictor] """ [e2e-predictor] if self.total is False and error: [e2e-predictor] # Disabled, indicate to re-raise the error. [e2e-predictor] raise reraise(type(error), error, _stacktrace) [e2e-predictor] [e2e-predictor] total = self.total [e2e-predictor] if total is not None: [e2e-predictor] total -= 1 [e2e-predictor] [e2e-predictor] connect = self.connect [e2e-predictor] read = self.read [e2e-predictor] redirect = self.redirect [e2e-predictor] status_count = self.status [e2e-predictor] other = self.other [e2e-predictor] cause = "unknown" [e2e-predictor] status = None [e2e-predictor] redirect_location = None [e2e-predictor] [e2e-predictor] if error and self._is_connection_error(error): [e2e-predictor] # Connect retry? [e2e-predictor] if connect is False: [e2e-predictor] raise reraise(type(error), error, _stacktrace) [e2e-predictor] elif connect is not None: [e2e-predictor] connect -= 1 [e2e-predictor] [e2e-predictor] elif error and self._is_read_error(error): [e2e-predictor] # Read retry? [e2e-predictor] if read is False or method is None or not self._is_method_retryable(method): [e2e-predictor] raise reraise(type(error), error, _stacktrace) [e2e-predictor] elif read is not None: [e2e-predictor] read -= 1 [e2e-predictor] [e2e-predictor] elif error: [e2e-predictor] # Other retry? 
[e2e-predictor]         if other is not None:
[e2e-predictor]             other -= 1
[e2e-predictor]
[e2e-predictor]     elif response and response.get_redirect_location():
[e2e-predictor]         # Redirect retry?
[e2e-predictor]         if redirect is not None:
[e2e-predictor]             redirect -= 1
[e2e-predictor]         cause = "too many redirects"
[e2e-predictor]         response_redirect_location = response.get_redirect_location()
[e2e-predictor]         if response_redirect_location:
[e2e-predictor]             redirect_location = response_redirect_location
[e2e-predictor]         status = response.status
[e2e-predictor]
[e2e-predictor]     else:
[e2e-predictor]         # Incrementing because of a server error like a 500 in
[e2e-predictor]         # status_forcelist and the given method is in the allowed_methods
[e2e-predictor]         cause = ResponseError.GENERIC_ERROR
[e2e-predictor]         if response and response.status:
[e2e-predictor]             if status_count is not None:
[e2e-predictor]                 status_count -= 1
[e2e-predictor]             cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
[e2e-predictor]             status = response.status
[e2e-predictor]
[e2e-predictor]     history = self.history + (
[e2e-predictor]         RequestHistory(method, url, error, status, redirect_location),
[e2e-predictor]     )
[e2e-predictor]
[e2e-predictor]     new_retry = self.new(
[e2e-predictor]         total=total,
[e2e-predictor]         connect=connect,
[e2e-predictor]         read=read,
[e2e-predictor]         redirect=redirect,
[e2e-predictor]         status=status_count,
[e2e-predictor]         other=other,
[e2e-predictor]         history=history,
[e2e-predictor]     )
[e2e-predictor]
[e2e-predictor]     if new_retry.is_exhausted():
[e2e-predictor]         reason = error or ResponseError(cause)
[e2e-predictor] >       raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
[e2e-predictor] E   urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Max retries exceeded with url: /apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices (Caused by NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)"))
[e2e-predictor]
[e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError
[e2e-predictor] ------------------------------ Captured log call -------------------------------
[e2e-predictor] WARNING urllib3.connectionpool:connectionpool.py:868 Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")': /apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices
[e2e-predictor] WARNING urllib3.connectionpool:connectionpool.py:868 Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")': /apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices
[e2e-predictor] WARNING urllib3.connectionpool:connectionpool.py:868 Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")': /apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices
[e2e-predictor] _______________________ test_xgboost_v2_runtime_mlserver _______________________
[e2e-predictor] [gw0] linux -- Python 3.11.13 /workspace/source/python/kserve/.venv/bin/python
[e2e-predictor]
[e2e-predictor] self =
[e2e-predictor]
[e2e-predictor] def _new_conn(self) -> socket.socket:
[e2e-predictor]     """Establish a socket connection and set nodelay settings on it.
[e2e-predictor]
[e2e-predictor]     :return: New socket connection.
[e2e-predictor]     """
[e2e-predictor]     try:
[e2e-predictor] >       sock = connection.create_connection(
[e2e-predictor]             (self._dns_host, self.port),
[e2e-predictor]             self.timeout,
[e2e-predictor]             source_address=self.source_address,
[e2e-predictor]             socket_options=self.socket_options,
[e2e-predictor]         )
[e2e-predictor]
[e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connection.py:204:
[e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
[e2e-predictor]
[e2e-predictor] address = ('a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', 6443)
[e2e-predictor] timeout = None, source_address = None, socket_options = [(6, 1, 1)]
[e2e-predictor]
[e2e-predictor] def create_connection(
[e2e-predictor]     address: tuple[str, int],
[e2e-predictor]     timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
[e2e-predictor]     source_address: tuple[str, int] | None = None,
[e2e-predictor]     socket_options: _TYPE_SOCKET_OPTIONS | None = None,
[e2e-predictor] ) -> socket.socket:
[e2e-predictor]     """Connect to *address* and return the socket object.
[e2e-predictor]
[e2e-predictor]     Convenience function. Connect to *address* (a 2-tuple ``(host,
[e2e-predictor]     port)``) and return the socket object. Passing the optional
[e2e-predictor]     *timeout* parameter will set the timeout on the socket instance
[e2e-predictor]     before attempting to connect. If no *timeout* is supplied, the
[e2e-predictor]     global default timeout setting returned by :func:`socket.getdefaulttimeout`
[e2e-predictor]     is used. If *source_address* is set it must be a tuple of (host, port)
[e2e-predictor]     for the socket to bind as a source address before making the connection.
[e2e-predictor]     An host of '' or port 0 tells the OS to use the default.
[e2e-predictor]     """
[e2e-predictor]
[e2e-predictor]     host, port = address
[e2e-predictor]     if host.startswith("["):
[e2e-predictor]         host = host.strip("[]")
[e2e-predictor]     err = None
[e2e-predictor]
[e2e-predictor]     # Using the value from allowed_gai_family() in the context of getaddrinfo lets
[e2e-predictor]     # us select whether to work with IPv4 DNS records, IPv6 records, or both.
[e2e-predictor]     # The original create_connection function always returns all records.
[e2e-predictor]     family = allowed_gai_family()
[e2e-predictor]
[e2e-predictor]     try:
[e2e-predictor]         host.encode("idna")
[e2e-predictor]     except UnicodeError:
[e2e-predictor]         raise LocationParseError(f"'{host}', label empty or too long") from None
[e2e-predictor]
[e2e-predictor] >   for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
[e2e-predictor]
[e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/util/connection.py:60:
[e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
[e2e-predictor]
[e2e-predictor] host = 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com'
[e2e-predictor] port = 6443, family =
[e2e-predictor] type = , proto = 0, flags = 0
[e2e-predictor]
[e2e-predictor] def getaddrinfo(host, port, family=0, type=0, proto=0, flags=0):
[e2e-predictor]     """Resolve host and port into list of address info entries.
[e2e-predictor]
[e2e-predictor]     Translate the host/port argument into a sequence of 5-tuples that contain
[e2e-predictor]     all the necessary arguments for creating a socket connected to that service.
[e2e-predictor]     host is a domain name, a string representation of an IPv4/v6 address or
[e2e-predictor]     None. port is a string service name such as 'http', a numeric port number or
[e2e-predictor]     None. By passing None as the value of host and port, you can pass NULL to
[e2e-predictor]     the underlying C API.
[e2e-predictor]
[e2e-predictor]     The family, type and proto arguments can be optionally specified in order to
[e2e-predictor]     narrow the list of addresses returned. Passing zero as a value for each of
[e2e-predictor]     these arguments selects the full range of results.
[e2e-predictor]     """
[e2e-predictor]     # We override this function since we want to translate the numeric family
[e2e-predictor]     # and socket type values to enum constants.
[e2e-predictor]     addrlist = []
[e2e-predictor] >   for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
[e2e-predictor] E   socket.gaierror: [Errno -2] Name or service not known
[e2e-predictor]
[e2e-predictor] /usr/lib64/python3.11/socket.py:974: gaierror
[e2e-predictor]
[e2e-predictor] The above exception was the direct cause of the following exception:
[e2e-predictor]
[e2e-predictor] self =
[e2e-predictor] method = 'POST'
[e2e-predictor] url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
[e2e-predictor] body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io...m", "memory": "128Mi"}}, "runtime": "kserve-mlserver", "storageUri": "gs://kfserving-examples/models/xgboost/iris"}}}}'
[e2e-predictor] headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'})
[e2e-predictor] retries = Retry(total=0, connect=None, read=None, redirect=None, status=None)
[e2e-predictor] redirect = False, assert_same_host = False, timeout = None, pool_timeout = None
[e2e-predictor] release_conn = True, chunked = False, body_pos = None, preload_content = True
[e2e-predictor] decode_content = True, response_kw = {}
[e2e-predictor] parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices', query=None, fragment=None)
[e2e-predictor] destination_scheme = None, conn = None, release_this_conn = True
[e2e-predictor] http_tunnel_required = False, err = None, clean_exit = False
[e2e-predictor]
[e2e-predictor] def urlopen(  # type: ignore[override]
[e2e-predictor]     self,
[e2e-predictor]     method: str,
[e2e-predictor]     url: str,
[e2e-predictor]     body: _TYPE_BODY | None = None,
[e2e-predictor]     headers: typing.Mapping[str, str] | None = None,
[e2e-predictor]     retries: Retry | bool | int | None = None,
[e2e-predictor]     redirect: bool = True,
[e2e-predictor]     assert_same_host: bool = True,
[e2e-predictor]     timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
[e2e-predictor]     pool_timeout: int | None = None,
[e2e-predictor]     release_conn: bool | None = None,
[e2e-predictor]     chunked: bool = False,
[e2e-predictor]     body_pos: _TYPE_BODY_POSITION | None = None,
[e2e-predictor]     preload_content: bool = True,
[e2e-predictor]     decode_content: bool = True,
[e2e-predictor]     **response_kw: typing.Any,
[e2e-predictor] ) -> BaseHTTPResponse:
[e2e-predictor]     """
[e2e-predictor]     Get a connection from the pool and perform an HTTP request. This is the
[e2e-predictor]     lowest level call for making a request, so you'll need to specify all
[e2e-predictor]     the raw details.
[e2e-predictor]
[e2e-predictor]     .. note::
[e2e-predictor]
[e2e-predictor]        More commonly, it's appropriate to use a convenience method
[e2e-predictor]        such as :meth:`request`.
[e2e-predictor]
[e2e-predictor]     .. note::
[e2e-predictor]
[e2e-predictor]        `release_conn` will only behave as expected if
[e2e-predictor]        `preload_content=False` because we want to make
[e2e-predictor]        `preload_content=False` the default behaviour someday soon without
[e2e-predictor]        breaking backwards compatibility.
[e2e-predictor]
[e2e-predictor]     :param method:
[e2e-predictor]         HTTP request method (such as GET, POST, PUT, etc.)
[e2e-predictor]
[e2e-predictor]     :param url:
[e2e-predictor]         The URL to perform the request on.
[e2e-predictor]
[e2e-predictor]     :param body:
[e2e-predictor]         Data to send in the request body, either :class:`str`, :class:`bytes`,
[e2e-predictor]         an iterable of :class:`str`/:class:`bytes`, or a file-like object.
[e2e-predictor]
[e2e-predictor]     :param headers:
[e2e-predictor]         Dictionary of custom headers to send, such as User-Agent,
[e2e-predictor]         If-None-Match, etc. If None, pool headers are used. If provided,
[e2e-predictor]         these headers completely replace any pool-specific headers.
[e2e-predictor]
[e2e-predictor]     :param retries:
[e2e-predictor]         Configure the number of retries to allow before raising a
[e2e-predictor]         :class:`~urllib3.exceptions.MaxRetryError` exception.
[e2e-predictor]
[e2e-predictor]         If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a
[e2e-predictor]         :class:`~urllib3.util.retry.Retry` object for fine-grained control
[e2e-predictor]         over different types of retries.
[e2e-predictor]         Pass an integer number to retry connection errors that many times,
[e2e-predictor]         but no other types of errors. Pass zero to never retry.
[e2e-predictor]
[e2e-predictor]         If ``False``, then retries are disabled and any exception is raised
[e2e-predictor]         immediately. Also, instead of raising a MaxRetryError on redirects,
[e2e-predictor]         the redirect response will be returned.
[e2e-predictor]
[e2e-predictor]     :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
[e2e-predictor]
[e2e-predictor]     :param redirect:
[e2e-predictor]         If True, automatically handle redirects (status codes 301, 302,
[e2e-predictor]         303, 307, 308). Each redirect counts as a retry. Disabling retries
[e2e-predictor]         will disable redirect, too.
[e2e-predictor]
[e2e-predictor]     :param assert_same_host:
[e2e-predictor]         If ``True``, will make sure that the host of the pool requests is
[e2e-predictor]         consistent else will raise HostChangedError. When ``False``, you can
[e2e-predictor]         use the pool on an HTTP proxy and request foreign hosts.
[e2e-predictor]
[e2e-predictor]     :param timeout:
[e2e-predictor]         If specified, overrides the default timeout for this one
[e2e-predictor]         request. It may be a float (in seconds) or an instance of
[e2e-predictor]         :class:`urllib3.util.Timeout`.
[e2e-predictor]
[e2e-predictor]     :param pool_timeout:
[e2e-predictor]         If set and the pool is set to block=True, then this method will
[e2e-predictor]         block for ``pool_timeout`` seconds and raise EmptyPoolError if no
[e2e-predictor]         connection is available within the time period.
[e2e-predictor]
[e2e-predictor]     :param bool preload_content:
[e2e-predictor]         If True, the response's body will be preloaded into memory.
[e2e-predictor]
[e2e-predictor]     :param bool decode_content:
[e2e-predictor]         If True, will attempt to decode the body based on the
[e2e-predictor]         'content-encoding' header.
[e2e-predictor]
[e2e-predictor]     :param release_conn:
[e2e-predictor]         If False, then the urlopen call will not release the connection
[e2e-predictor]         back into the pool once a response is received (but will release if
[e2e-predictor]         you read the entire contents of the response such as when
[e2e-predictor]         `preload_content=True`). This is useful if you're not preloading
[e2e-predictor]         the response's content immediately. You will need to call
[e2e-predictor]         ``r.release_conn()`` on the response ``r`` to return the connection
[e2e-predictor]         back into the pool. If None, it takes the value of ``preload_content``
[e2e-predictor]         which defaults to ``True``.
[e2e-predictor]
[e2e-predictor]     :param bool chunked:
[e2e-predictor]         If True, urllib3 will send the body using chunked transfer
[e2e-predictor]         encoding. Otherwise, urllib3 will send the body using the standard
[e2e-predictor]         content-length form. Defaults to False.
[e2e-predictor]
[e2e-predictor]     :param int body_pos:
[e2e-predictor]         Position to seek to in file-like body in the event of a retry or
[e2e-predictor]         redirect. Typically this won't need to be set because urllib3 will
[e2e-predictor]         auto-populate the value when needed.
[e2e-predictor]     """
[e2e-predictor]     parsed_url = parse_url(url)
[e2e-predictor]     destination_scheme = parsed_url.scheme
[e2e-predictor]
[e2e-predictor]     if headers is None:
[e2e-predictor]         headers = self.headers
[e2e-predictor]
[e2e-predictor]     if not isinstance(retries, Retry):
[e2e-predictor]         retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
[e2e-predictor]
[e2e-predictor]     if release_conn is None:
[e2e-predictor]         release_conn = preload_content
[e2e-predictor]
[e2e-predictor]     # Check host
[e2e-predictor]     if assert_same_host and not self.is_same_host(url):
[e2e-predictor]         raise HostChangedError(self, url, retries)
[e2e-predictor]
[e2e-predictor]     # Ensure that the URL we're connecting to is properly encoded
[e2e-predictor]     if url.startswith("/"):
[e2e-predictor]         url = to_str(_encode_target(url))
[e2e-predictor]     else:
[e2e-predictor]         url = to_str(parsed_url.url)
[e2e-predictor]
[e2e-predictor]     conn = None
[e2e-predictor]
[e2e-predictor]     # Track whether `conn` needs to be released before
[e2e-predictor]     # returning/raising/recursing. Update this variable if necessary, and
[e2e-predictor]     # leave `release_conn` constant throughout the function. That way, if
[e2e-predictor]     # the function recurses, the original value of `release_conn` will be
[e2e-predictor]     # passed down into the recursive call, and its value will be respected.
[e2e-predictor]     #
[e2e-predictor]     # See issue #651 [1] for details.
[e2e-predictor]     #
[e2e-predictor]     # [1]
[e2e-predictor]     release_this_conn = release_conn
[e2e-predictor]
[e2e-predictor]     http_tunnel_required = connection_requires_http_tunnel(
[e2e-predictor]         self.proxy, self.proxy_config, destination_scheme
[e2e-predictor]     )
[e2e-predictor]
[e2e-predictor]     # Merge the proxy headers. Only done when not using HTTP CONNECT. We
[e2e-predictor]     # have to copy the headers dict so we can safely change it without those
[e2e-predictor]     # changes being reflected in anyone else's copy.
[e2e-predictor]     if not http_tunnel_required:
[e2e-predictor]         headers = headers.copy()  # type: ignore[attr-defined]
[e2e-predictor]         headers.update(self.proxy_headers)  # type: ignore[union-attr]
[e2e-predictor]
[e2e-predictor]     # Must keep the exception bound to a separate variable or else Python 3
[e2e-predictor]     # complains about UnboundLocalError.
[e2e-predictor]     err = None
[e2e-predictor]
[e2e-predictor]     # Keep track of whether we cleanly exited the except block. This
[e2e-predictor]     # ensures we do proper cleanup in finally.
[e2e-predictor]     clean_exit = False
[e2e-predictor]
[e2e-predictor]     # Rewind body position, if needed. Record current position
[e2e-predictor]     # for future rewinds in the event of a redirect/retry.
[e2e-predictor]     body_pos = set_file_position(body, body_pos)
[e2e-predictor]
[e2e-predictor]     try:
[e2e-predictor]         # Request a connection from the queue.
[e2e-predictor]         timeout_obj = self._get_timeout(timeout)
[e2e-predictor]         conn = self._get_conn(timeout=pool_timeout)
[e2e-predictor]
[e2e-predictor]         conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]
[e2e-predictor]
[e2e-predictor]         # Is this a closed/new connection that requires CONNECT tunnelling?
[e2e-predictor]         if self.proxy is not None and http_tunnel_required and conn.is_closed:
[e2e-predictor]             try:
[e2e-predictor]                 self._prepare_proxy(conn)
[e2e-predictor]             except (BaseSSLError, OSError, SocketTimeout) as e:
[e2e-predictor]                 self._raise_timeout(
[e2e-predictor]                     err=e, url=self.proxy.url, timeout_value=conn.timeout
[e2e-predictor]                 )
[e2e-predictor]                 raise
[e2e-predictor]
[e2e-predictor]         # If we're going to release the connection in ``finally:``, then
[e2e-predictor]         # the response doesn't need to know about the connection. Otherwise
[e2e-predictor]         # it will also try to release it and we'll have a double-release
[e2e-predictor]         # mess.
[e2e-predictor]         response_conn = conn if not release_conn else None
[e2e-predictor]
[e2e-predictor]         # Make the request on the HTTPConnection object
[e2e-predictor] >       response = self._make_request(
[e2e-predictor]             conn,
[e2e-predictor]             method,
[e2e-predictor]             url,
[e2e-predictor]             timeout=timeout_obj,
[e2e-predictor]             body=body,
[e2e-predictor]             headers=headers,
[e2e-predictor]             chunked=chunked,
[e2e-predictor]             retries=retries,
[e2e-predictor]             response_conn=response_conn,
[e2e-predictor]             preload_content=preload_content,
[e2e-predictor]             decode_content=decode_content,
[e2e-predictor]             **response_kw,
[e2e-predictor]         )
[e2e-predictor]
[e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:787:
[e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
[e2e-predictor]
[e2e-predictor] self =
[e2e-predictor] conn =
[e2e-predictor] method = 'POST'
[e2e-predictor] url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
[e2e-predictor] body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io...m", "memory": "128Mi"}}, "runtime": "kserve-mlserver", "storageUri": "gs://kfserving-examples/models/xgboost/iris"}}}}'
[e2e-predictor] headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'})
[e2e-predictor] retries = Retry(total=0, connect=None, read=None, redirect=None, status=None)
[e2e-predictor] timeout = Timeout(connect=None, read=None, total=None), chunked = False
[e2e-predictor] response_conn = None, preload_content = True, decode_content = True
[e2e-predictor] enforce_content_length = True
[e2e-predictor]
[e2e-predictor] def _make_request(
[e2e-predictor]     self,
[e2e-predictor]     conn: BaseHTTPConnection,
[e2e-predictor]     method: str,
[e2e-predictor]     url: str,
[e2e-predictor]     body: _TYPE_BODY | None = None,
[e2e-predictor]     headers: typing.Mapping[str, str] | None = None,
[e2e-predictor]     retries: Retry | None = None,
[e2e-predictor]     timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
[e2e-predictor]     chunked: bool = False,
[e2e-predictor]     response_conn: BaseHTTPConnection | None = None,
[e2e-predictor]     preload_content: bool = True,
[e2e-predictor]     decode_content: bool = True,
[e2e-predictor]     enforce_content_length: bool = True,
[e2e-predictor] ) -> BaseHTTPResponse:
[e2e-predictor]     """
[e2e-predictor]     Perform a request on a given urllib connection object taken from our
[e2e-predictor]     pool.
[e2e-predictor]
[e2e-predictor]     :param conn:
[e2e-predictor]         a connection from one of our connection pools
[e2e-predictor]
[e2e-predictor]     :param method:
[e2e-predictor]         HTTP request method (such as GET, POST, PUT, etc.)
[e2e-predictor]
[e2e-predictor]     :param url:
[e2e-predictor]         The URL to perform the request on.
[e2e-predictor]
[e2e-predictor]     :param body:
[e2e-predictor]         Data to send in the request body, either :class:`str`, :class:`bytes`,
[e2e-predictor]         an iterable of :class:`str`/:class:`bytes`, or a file-like object.
[e2e-predictor]
[e2e-predictor]     :param headers:
[e2e-predictor]         Dictionary of custom headers to send, such as User-Agent,
[e2e-predictor]         If-None-Match, etc. If None, pool headers are used. If provided,
[e2e-predictor]         these headers completely replace any pool-specific headers.
[e2e-predictor]
[e2e-predictor]     :param retries:
[e2e-predictor]         Configure the number of retries to allow before raising a
[e2e-predictor]         :class:`~urllib3.exceptions.MaxRetryError` exception.
[e2e-predictor]
[e2e-predictor]         Pass ``None`` to retry until you receive a response. Pass a
[e2e-predictor]         :class:`~urllib3.util.retry.Retry` object for fine-grained control
[e2e-predictor]         over different types of retries.
[e2e-predictor]         Pass an integer number to retry connection errors that many times,
[e2e-predictor]         but no other types of errors. Pass zero to never retry.
[e2e-predictor]
[e2e-predictor]         If ``False``, then retries are disabled and any exception is raised
[e2e-predictor]         immediately. Also, instead of raising a MaxRetryError on redirects,
[e2e-predictor]         the redirect response will be returned.
[e2e-predictor]
[e2e-predictor]     :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
[e2e-predictor]
[e2e-predictor]     :param timeout:
[e2e-predictor]         If specified, overrides the default timeout for this one
[e2e-predictor]         request. It may be a float (in seconds) or an instance of
[e2e-predictor]         :class:`urllib3.util.Timeout`.
[e2e-predictor]
[e2e-predictor]     :param chunked:
[e2e-predictor]         If True, urllib3 will send the body using chunked transfer
[e2e-predictor]         encoding. Otherwise, urllib3 will send the body using the standard
[e2e-predictor]         content-length form. Defaults to False.
[e2e-predictor]
[e2e-predictor]     :param response_conn:
[e2e-predictor]         Set this to ``None`` if you will handle releasing the connection or
[e2e-predictor]         set the connection to have the response release it.
[e2e-predictor]
[e2e-predictor]     :param preload_content:
[e2e-predictor]         If True, the response's body will be preloaded during construction.
[e2e-predictor]
[e2e-predictor]     :param decode_content:
[e2e-predictor]         If True, will attempt to decode the body based on the
[e2e-predictor]         'content-encoding' header.
[e2e-predictor]
[e2e-predictor]     :param enforce_content_length:
[e2e-predictor]         Enforce content length checking. Body returned by server must match
[e2e-predictor]         value of Content-Length header, if present. Otherwise, raise error.
[e2e-predictor]     """
[e2e-predictor]     self.num_requests += 1
[e2e-predictor]
[e2e-predictor]     timeout_obj = self._get_timeout(timeout)
[e2e-predictor]     timeout_obj.start_connect()
[e2e-predictor]     conn.timeout = Timeout.resolve_default_timeout(timeout_obj.connect_timeout)
[e2e-predictor]
[e2e-predictor]     try:
[e2e-predictor]         # Trigger any extra validation we need to do.
[e2e-predictor]         try:
[e2e-predictor]             self._validate_conn(conn)
[e2e-predictor]         except (SocketTimeout, BaseSSLError) as e:
[e2e-predictor]             self._raise_timeout(err=e, url=url, timeout_value=conn.timeout)
[e2e-predictor]             raise
[e2e-predictor]
[e2e-predictor]     # _validate_conn() starts the connection to an HTTPS proxy
[e2e-predictor]     # so we need to wrap errors with 'ProxyError' here too.
[e2e-predictor]     except (
[e2e-predictor]         OSError,
[e2e-predictor]         NewConnectionError,
[e2e-predictor]         TimeoutError,
[e2e-predictor]         BaseSSLError,
[e2e-predictor]         CertificateError,
[e2e-predictor]         SSLError,
[e2e-predictor]     ) as e:
[e2e-predictor]         new_e: Exception = e
[e2e-predictor]         if isinstance(e, (BaseSSLError, CertificateError)):
[e2e-predictor]             new_e = SSLError(e)
[e2e-predictor]         # If the connection didn't successfully connect to it's proxy
[e2e-predictor]         # then there
[e2e-predictor]         if isinstance(
[e2e-predictor]             new_e, (OSError, NewConnectionError, TimeoutError, SSLError)
[e2e-predictor]         ) and (conn and conn.proxy and not conn.has_connected_to_proxy):
[e2e-predictor]             new_e = _wrap_proxy_error(new_e, conn.proxy.scheme)
[e2e-predictor] >       raise new_e
[e2e-predictor]
[e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:488:
[e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
[e2e-predictor]
[e2e-predictor] self =
[e2e-predictor] conn =
[e2e-predictor] method = 'POST'
[e2e-predictor] url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
[e2e-predictor] body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io...m", "memory": "128Mi"}}, "runtime": "kserve-mlserver", "storageUri": "gs://kfserving-examples/models/xgboost/iris"}}}}'
[e2e-predictor] headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'})
[e2e-predictor] retries = Retry(total=0, connect=None, read=None, redirect=None, status=None)
[e2e-predictor] timeout = Timeout(connect=None, read=None, total=None), chunked = False
[e2e-predictor] response_conn = None, preload_content = True, decode_content = True
[e2e-predictor] enforce_content_length = True
[e2e-predictor]
[e2e-predictor] def _make_request(
[e2e-predictor]     self,
[e2e-predictor]     conn: BaseHTTPConnection,
[e2e-predictor]     method: str,
[e2e-predictor]     url: str,
[e2e-predictor]     body: _TYPE_BODY | None = None,
[e2e-predictor]     headers: typing.Mapping[str, str] | None = None,
[e2e-predictor]     retries: Retry | None = None,
[e2e-predictor]     timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
[e2e-predictor]     chunked: bool = False,
[e2e-predictor]     response_conn: BaseHTTPConnection | None = None,
[e2e-predictor]     preload_content: bool = True,
[e2e-predictor]     decode_content: bool = True,
[e2e-predictor]     enforce_content_length: bool = True,
[e2e-predictor] ) -> BaseHTTPResponse:
[e2e-predictor]     """
[e2e-predictor]     Perform a request on a given urllib connection object taken from our
[e2e-predictor]     pool.
[e2e-predictor]
[e2e-predictor]     :param conn:
[e2e-predictor]         a connection from one of our connection pools
[e2e-predictor]
[e2e-predictor]     :param method:
[e2e-predictor]         HTTP request method (such as GET, POST, PUT, etc.)
[e2e-predictor]
[e2e-predictor]     :param url:
[e2e-predictor]         The URL to perform the request on.
[e2e-predictor]
[e2e-predictor]     :param body:
[e2e-predictor]         Data to send in the request body, either :class:`str`, :class:`bytes`,
[e2e-predictor]         an iterable of :class:`str`/:class:`bytes`, or a file-like object.
[e2e-predictor]
[e2e-predictor]     :param headers:
[e2e-predictor]         Dictionary of custom headers to send, such as User-Agent,
[e2e-predictor]         If-None-Match, etc. If None, pool headers are used. If provided,
[e2e-predictor]         these headers completely replace any pool-specific headers.
[e2e-predictor]
[e2e-predictor]     :param retries:
[e2e-predictor]         Configure the number of retries to allow before raising a
[e2e-predictor]         :class:`~urllib3.exceptions.MaxRetryError` exception.
[e2e-predictor]
[e2e-predictor]         Pass ``None`` to retry until you receive a response. Pass a
[e2e-predictor]         :class:`~urllib3.util.retry.Retry` object for fine-grained control
[e2e-predictor]         over different types of retries.
[e2e-predictor]         Pass an integer number to retry connection errors that many times,
[e2e-predictor]         but no other types of errors. Pass zero to never retry.
[e2e-predictor]
[e2e-predictor]         If ``False``, then retries are disabled and any exception is raised
[e2e-predictor]         immediately. Also, instead of raising a MaxRetryError on redirects,
[e2e-predictor]         the redirect response will be returned.
[e2e-predictor]
[e2e-predictor]     :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
[e2e-predictor]
[e2e-predictor]     :param timeout:
[e2e-predictor]         If specified, overrides the default timeout for this one
[e2e-predictor]         request. It may be a float (in seconds) or an instance of
[e2e-predictor]         :class:`urllib3.util.Timeout`.
[e2e-predictor]
[e2e-predictor]     :param chunked:
[e2e-predictor]         If True, urllib3 will send the body using chunked transfer
[e2e-predictor]         encoding. Otherwise, urllib3 will send the body using the standard
[e2e-predictor]         content-length form. Defaults to False.
[e2e-predictor]
[e2e-predictor]     :param response_conn:
[e2e-predictor]         Set this to ``None`` if you will handle releasing the connection or
[e2e-predictor]         set the connection to have the response release it.
[e2e-predictor]
[e2e-predictor]     :param preload_content:
[e2e-predictor]         If True, the response's body will be preloaded during construction.
[e2e-predictor]
[e2e-predictor]     :param decode_content:
[e2e-predictor]         If True, will attempt to decode the body based on the
[e2e-predictor]         'content-encoding' header.
[e2e-predictor]
[e2e-predictor]     :param enforce_content_length:
[e2e-predictor]         Enforce content length checking. Body returned by server must match
[e2e-predictor]         value of Content-Length header, if present. Otherwise, raise error.
[e2e-predictor]     """
[e2e-predictor]     self.num_requests += 1
[e2e-predictor]
[e2e-predictor]     timeout_obj = self._get_timeout(timeout)
[e2e-predictor]     timeout_obj.start_connect()
[e2e-predictor]     conn.timeout = Timeout.resolve_default_timeout(timeout_obj.connect_timeout)
[e2e-predictor]
[e2e-predictor]     try:
[e2e-predictor]         # Trigger any extra validation we need to do.
[e2e-predictor]         try:
[e2e-predictor] >           self._validate_conn(conn)
[e2e-predictor]
[e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:464:
[e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
[e2e-predictor]
[e2e-predictor] self =
[e2e-predictor] conn =
[e2e-predictor]
[e2e-predictor] def _validate_conn(self, conn: BaseHTTPConnection) -> None:
[e2e-predictor]     """
[e2e-predictor]     Called right before a request is made, after the socket is created.
[e2e-predictor]     """
[e2e-predictor]     super()._validate_conn(conn)
[e2e-predictor]
[e2e-predictor]     # Force connect early to allow us to validate the connection.
[e2e-predictor]     if conn.is_closed:
[e2e-predictor] >       conn.connect()
[e2e-predictor]
[e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:1093:
[e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
[e2e-predictor]
[e2e-predictor] self =
[e2e-predictor]
[e2e-predictor] def connect(self) -> None:
[e2e-predictor]     # Today we don't need to be doing this step before the /actual/ socket
[e2e-predictor]     # connection, however in the future we'll need to decide whether to
[e2e-predictor]     # create a new socket or re-use an existing "shared" socket as a part
[e2e-predictor]     # of the HTTP/2 handshake dance.
[e2e-predictor] if self._tunnel_host is not None and self._tunnel_port is not None: [e2e-predictor] probe_http2_host = self._tunnel_host [e2e-predictor] probe_http2_port = self._tunnel_port [e2e-predictor] else: [e2e-predictor] probe_http2_host = self.host [e2e-predictor] probe_http2_port = self.port [e2e-predictor] [e2e-predictor] # Check if the target origin supports HTTP/2. [e2e-predictor] # If the value comes back as 'None' it means that the current thread [e2e-predictor] # is probing for HTTP/2 support. Otherwise, we're waiting for another [e2e-predictor] # probe to complete, or we get a value right away. [e2e-predictor] target_supports_http2: bool | None [e2e-predictor] if "h2" in ssl_.ALPN_PROTOCOLS: [e2e-predictor] target_supports_http2 = http2_probe.acquire_and_get( [e2e-predictor] host=probe_http2_host, port=probe_http2_port [e2e-predictor] ) [e2e-predictor] else: [e2e-predictor] # If HTTP/2 isn't going to be offered it doesn't matter if [e2e-predictor] # the target supports HTTP/2. Don't want to make a probe. [e2e-predictor] target_supports_http2 = False [e2e-predictor] [e2e-predictor] if self._connect_callback is not None: [e2e-predictor] self._connect_callback( [e2e-predictor] "before connect", [e2e-predictor] thread_id=threading.get_ident(), [e2e-predictor] target_supports_http2=target_supports_http2, [e2e-predictor] ) [e2e-predictor] [e2e-predictor] try: [e2e-predictor] sock: socket.socket | ssl.SSLSocket [e2e-predictor] > self.sock = sock = self._new_conn() [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connection.py:759: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] [e2e-predictor] def _new_conn(self) -> socket.socket: [e2e-predictor] """Establish a socket connection and set nodelay settings on it. [e2e-predictor] [e2e-predictor] :return: New socket connection. 
        """
        try:
            sock = connection.create_connection(
                (self._dns_host, self.port),
                self.timeout,
                source_address=self.source_address,
                socket_options=self.socket_options,
            )
        except socket.gaierror as e:
>           raise NameResolutionError(self.host, self, e) from e
E           urllib3.exceptions.NameResolutionError: HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connection.py:211: NameResolutionError

The above exception was the direct cause of the following exception:

rest_v2_client =

    @pytest.mark.predictor
    @pytest.mark.path_based_routing
    @pytest.mark.asyncio(scope="session")
    async def test_xgboost_v2_runtime_mlserver(rest_v2_client):
        service_name = "isvc-xgboost-v2-runtime"
        protocol_version = "v2"

        predictor = V1beta1PredictorSpec(
            min_replicas=1,
            model=V1beta1ModelSpec(
                model_format=V1beta1ModelFormat(
                    name="xgboost",
                ),
                runtime="kserve-mlserver",
                storage_uri="gs://kfserving-examples/models/xgboost/iris",
                protocol_version=protocol_version,
                resources=V1ResourceRequirements(
                    requests={"cpu": "50m", "memory": "128Mi"},
                    limits={"cpu": "100m", "memory": "1024Mi"},
                ),
                readiness_probe=client.V1Probe(
                    http_get=client.V1HTTPGetAction(
                        path=f"/v2/models/{service_name}/ready", port=8080
                    ),
                    initial_delay_seconds=30,
                ),
            ),
        )

        isvc = V1beta1InferenceService(
            api_version=constants.KSERVE_V1BETA1,
            kind=constants.KSERVE_KIND_INFERENCESERVICE,
            metadata=client.V1ObjectMeta(
                name=service_name,
                namespace=KSERVE_TEST_NAMESPACE,
                labels={
                    constants.KSERVE_LABEL_NETWORKING_VISIBILITY: constants.KSERVE_LABEL_NETWORKING_VISIBILITY_EXPOSED,
                },
            ),
            spec=V1beta1InferenceServiceSpec(predictor=predictor),
        )

        kserve_client = KServeClient(
            config_file=os.environ.get("KUBECONFIG", "~/.kube/config")
        )
>       kserve_client.create(isvc)

predictor/test_xgboost.py:272:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
inferenceservice = {'api_version': 'serving.kserve.io/v1beta1',
 'kind': 'InferenceService',
 'metadata': {'annotations': None,
 ...
namespace = 'kserve-ci-e2e-test', watch = False, timeout_seconds = 600

    def create(
        self, inferenceservice, namespace=None, watch=False, timeout_seconds=600
    ):  # pylint:disable=inconsistent-return-statements
        """
        Create the inference service
        :param inferenceservice: inference service object
        :param namespace: defaults to current or default namespace
        :param watch: True to watch the created service until timeout elapsed or status is ready
        :param timeout_seconds: timeout seconds for watch, default to 600s
        :return: created inference service
        """

        version = inferenceservice.api_version.split("/")[1]

        if namespace is None:
            namespace = utils.get_isvc_namespace(inferenceservice)

        try:
>           outputs = self.api_instance.create_namespaced_custom_object(
                constants.KSERVE_GROUP,
                version,
                namespace,
                constants.KSERVE_PLURAL_INFERENCESERVICE,
                inferenceservice,
            )

../../python/kserve/kserve/api/kserve_client.py:145:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
group = 'serving.kserve.io', version = 'v1beta1'
namespace = 'kserve-ci-e2e-test', plural = 'inferenceservices'
body = {'api_version': 'serving.kserve.io/v1beta1',
 'kind': 'InferenceService',
 'metadata': {'annotations': None,
 ...
 'worker_spec': None,
 'xgboost': None},
 'transformer': None},
 'status': None}
kwargs = {'_return_http_data_only': True}

    def create_namespaced_custom_object(self, group, version, namespace, plural, body, **kwargs):  # noqa: E501
        """create_namespaced_custom_object  # noqa: E501

        Creates a namespace scoped Custom object  # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.create_namespaced_custom_object(group, version, namespace, plural, body, async_req=True)
        >>> result = thread.get()

        :param async_req bool: execute request asynchronously
        :param str group: The custom resource's group name (required)
        :param str version: The custom resource's version (required)
        :param str namespace: The custom resource's namespace (required)
        :param str plural: The custom resource's plural name. For TPRs this would be lowercase plural kind. (required)
        :param object body: The JSON schema of the Resource to create. (required)
        :param str pretty: If 'true', then the output is pretty printed.
        :param str dry_run: When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed
        :param str field_manager: fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint.
        :param str field_validation: fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. (optional)
        :param _preload_content: if False, the urllib3.HTTPResponse object will
                                 be returned without reading/decoding response
                                 data. Default is True.
        :param _request_timeout: timeout setting for this request. If one
                                 number provided, it will be total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
        :return: object
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
>       return self.create_namespaced_custom_object_with_http_info(group, version, namespace, plural, body, **kwargs)  # noqa: E501

../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api/custom_objects_api.py:231:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
group = 'serving.kserve.io', version = 'v1beta1'
namespace = 'kserve-ci-e2e-test', plural = 'inferenceservices'
body = {'api_version': 'serving.kserve.io/v1beta1',
 'kind': 'InferenceService',
 'metadata': {'annotations': None,
 ... 'worker_spec': None,
 'xgboost': None},
 'transformer': None},
 'status': None}
kwargs = {'_return_http_data_only': True}
local_var_params = {'_return_http_data_only': True, 'all_params': ['group', 'version', 'namespace', 'plural', 'body', 'pretty', ...], 'au... 'worker_spec': None,
 'xgboost': None},
 'transformer': None},
 'status': None}, ...}
all_params = ['group', 'version', 'namespace', 'plural', 'body', 'pretty', ...]
key = '_return_http_data_only', val = True, collection_formats = {}
path_params = {'group': 'serving.kserve.io', 'namespace': 'kserve-ci-e2e-test', 'plural': 'inferenceservices', 'version': 'v1beta1'}
query_params = []

    def create_namespaced_custom_object_with_http_info(self, group, version, namespace, plural, body, **kwargs):  # noqa: E501
        """create_namespaced_custom_object  # noqa: E501

        Creates a namespace scoped Custom object  # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.create_namespaced_custom_object_with_http_info(group, version, namespace, plural, body, async_req=True)
        >>> result = thread.get()

        :param async_req bool: execute request asynchronously
        :param str group: The custom resource's group name (required)
        :param str version: The custom resource's version (required)
        :param str namespace: The custom resource's namespace (required)
        :param str plural: The custom resource's plural name. For TPRs this would be lowercase plural kind. (required)
        :param object body: The JSON schema of the Resource to create. (required)
        :param str pretty: If 'true', then the output is pretty printed.
        :param str dry_run: When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed
        :param str field_manager: fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint.
        :param str field_validation: fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. (optional)
        :param _return_http_data_only: response data without head status code
                                       and headers
        :param _preload_content: if False, the urllib3.HTTPResponse object will
                                 be returned without reading/decoding response
                                 data. Default is True.
        :param _request_timeout: timeout setting for this request. If one
                                 number provided, it will be total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
        :return: tuple(object, status_code(int), headers(HTTPHeaderDict))
                 If the method is called asynchronously,
                 returns the request thread.
        """

        local_var_params = locals()

        all_params = [
            'group',
            'version',
            'namespace',
            'plural',
            'body',
            'pretty',
            'dry_run',
            'field_manager',
            'field_validation'
        ]
        all_params.extend(
            [
                'async_req',
                '_return_http_data_only',
                '_preload_content',
                '_request_timeout'
            ]
        )

        for key, val in six.iteritems(local_var_params['kwargs']):
            if key not in all_params:
                raise ApiTypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method create_namespaced_custom_object" % key
                )
            local_var_params[key] = val
        del local_var_params['kwargs']
        # verify the required parameter 'group' is set
        if self.api_client.client_side_validation and ('group' not in local_var_params or  # noqa: E501
                                                       local_var_params['group'] is None):  # noqa: E501
            raise ApiValueError("Missing the required parameter `group` when calling `create_namespaced_custom_object`")  # noqa: E501
        # verify the required parameter 'version' is set
        if self.api_client.client_side_validation and ('version' not in local_var_params or  # noqa: E501
                                                       local_var_params['version'] is None):  # noqa: E501
            raise ApiValueError("Missing the required parameter `version` when calling `create_namespaced_custom_object`")  # noqa: E501
        # verify the required parameter 'namespace' is set
        if self.api_client.client_side_validation and ('namespace' not in local_var_params or  # noqa: E501
                                                       local_var_params['namespace'] is None):  # noqa: E501
            raise ApiValueError("Missing the required parameter `namespace` when calling `create_namespaced_custom_object`")  # noqa: E501
        # verify the required parameter 'plural' is set
        if self.api_client.client_side_validation and ('plural' not in local_var_params or  # noqa: E501
                                                       local_var_params['plural'] is None):  # noqa: E501
            raise ApiValueError("Missing the required parameter `plural` when calling `create_namespaced_custom_object`")  # noqa: E501
        # verify the required parameter 'body' is set
        if self.api_client.client_side_validation and ('body' not in local_var_params or  # noqa: E501
                                                       local_var_params['body'] is None):  # noqa: E501
            raise ApiValueError("Missing the required parameter `body` when calling `create_namespaced_custom_object`")  # noqa: E501

        collection_formats = {}

        path_params = {}
        if 'group' in local_var_params:
            path_params['group'] = local_var_params['group']  # noqa: E501
        if 'version' in local_var_params:
            path_params['version'] = local_var_params['version']  # noqa: E501
        if 'namespace' in local_var_params:
            path_params['namespace'] = local_var_params['namespace']  # noqa: E501
        if 'plural' in local_var_params:
            path_params['plural'] = local_var_params['plural']  # noqa: E501

        query_params = []
        if 'pretty' in local_var_params and local_var_params['pretty'] is not None:  # noqa: E501
            query_params.append(('pretty', local_var_params['pretty']))  # noqa: E501
        if 'dry_run' in local_var_params and local_var_params['dry_run'] is not None:  # noqa: E501
            query_params.append(('dryRun', local_var_params['dry_run']))  # noqa: E501
        if 'field_manager' in local_var_params and local_var_params['field_manager'] is not None:  # noqa: E501
            query_params.append(('fieldManager', local_var_params['field_manager']))  # noqa: E501
        if 'field_validation' in local_var_params and local_var_params['field_validation'] is not None:  # noqa: E501
            query_params.append(('fieldValidation', local_var_params['field_validation']))  # noqa: E501

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None
        if 'body' in local_var_params:
            body_params = local_var_params['body']
        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.select_header_accept(
            ['application/json'])  # noqa: E501

        # Authentication setting
        auth_settings = ['BearerToken']  # noqa: E501

>       return self.api_client.call_api(
            '/apis/{group}/{version}/namespaces/{namespace}/{plural}', 'POST',
            path_params,
            query_params,
            header_params,
            body=body_params,
            post_params=form_params,
            files=local_var_files,
            response_type='object',  # noqa: E501
            auth_settings=auth_settings,
            async_req=local_var_params.get('async_req'),
            _return_http_data_only=local_var_params.get('_return_http_data_only'),  # noqa: E501
            _preload_content=local_var_params.get('_preload_content', True),
            _request_timeout=local_var_params.get('_request_timeout'),
            collection_formats=collection_formats)
../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api/custom_objects_api.py:354:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
resource_path = '/apis/{group}/{version}/namespaces/{namespace}/{plural}'
method = 'POST'
path_params = {'group': 'serving.kserve.io', 'namespace': 'kserve-ci-e2e-test', 'plural': 'inferenceservices', 'version': 'v1beta1'}
query_params = []
header_params = {'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
body = {'api_version': 'serving.kserve.io/v1beta1',
 'kind': 'InferenceService',
 'metadata': {'annotations': None,
 ... 'worker_spec': None,
 'xgboost': None},
 'transformer': None},
 'status': None}
post_params = [], files = {}, response_type = 'object'
auth_settings = ['BearerToken'], async_req = None, _return_http_data_only = True
collection_formats = {}, _preload_content = True, _request_timeout = None
_host = None

    def call_api(self, resource_path, method,
                 path_params=None, query_params=None, header_params=None,
                 body=None, post_params=None, files=None,
                 response_type=None, auth_settings=None, async_req=None,
                 _return_http_data_only=None, collection_formats=None,
                 _preload_content=True, _request_timeout=None, _host=None):
        """Makes the HTTP request (synchronous) and returns deserialized data.

        To make an async_req request, set the async_req parameter.

        :param resource_path: Path to method endpoint.
        :param method: Method to call.
        :param path_params: Path parameters in the url.
        :param query_params: Query parameters in the url.
        :param header_params: Header parameters to be
            placed in the request header.
        :param body: Request body.
        :param post_params dict: Request post form parameters,
            for `application/x-www-form-urlencoded`, `multipart/form-data`.
        :param auth_settings list: Auth Settings names for the request.
        :param response: Response data type.
        :param files dict: key -> filename, value -> filepath,
            for `multipart/form-data`.
        :param async_req bool: execute request asynchronously
        :param _return_http_data_only: response data without head status code
                                       and headers
        :param collection_formats: dict of collection formats for path, query,
            header, and post parameters.
        :param _preload_content: if False, the urllib3.HTTPResponse object will
                                 be returned without reading/decoding response
                                 data. Default is True.
        :param _request_timeout: timeout setting for this request. If one
                                 number provided, it will be total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
        :return:
            If async_req parameter is True,
            the request will be called asynchronously.
            The method will return the request thread.
            If parameter async_req is False or missing,
            then the method will return the response directly.
        """
        if not async_req:
>           return self.__call_api(resource_path, method,
                                   path_params, query_params, header_params,
                                   body, post_params, files,
                                   response_type, auth_settings,
                                   _return_http_data_only, collection_formats,
                                   _preload_content, _request_timeout, _host)

../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api_client.py:348:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
resource_path = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
method = 'POST'
path_params = [('group', 'serving.kserve.io'), ('version', 'v1beta1'), ('namespace', 'kserve-ci-e2e-test'), ('plural', 'inferenceservices')]
query_params = []
header_params = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
body = {'apiVersion': 'serving.kserve.io/v1beta1', 'kind': 'InferenceService', 'metadata': {'labels': {'networking.kserve.io/...e': {'httpGet': {'path': '/v2/models/isvc-xgboost-v2-runtime/ready', 'port': 8080}, 'initialDelaySeconds': 30}, ...}}}}
post_params = [], files = {}, response_type = 'object'
auth_settings = ['BearerToken'], _return_http_data_only = True
collection_formats = {}, _preload_content = True, _request_timeout = None
_host = None

    def __call_api(
            self, resource_path, method, path_params=None,
            query_params=None, header_params=None, body=None, post_params=None,
            files=None, response_type=None, auth_settings=None,
            _return_http_data_only=None, collection_formats=None,
            _preload_content=True, _request_timeout=None, _host=None):

        config = self.configuration

        # header parameters
        header_params = header_params or {}
        header_params.update(self.default_headers)
        if self.cookie:
            header_params['Cookie'] = self.cookie
        if header_params:
            header_params = self.sanitize_for_serialization(header_params)
            header_params = dict(self.parameters_to_tuples(header_params,
                                                           collection_formats))

        # path parameters
        if path_params:
            path_params = self.sanitize_for_serialization(path_params)
            path_params = self.parameters_to_tuples(path_params,
                                                    collection_formats)
            for k, v in path_params:
                # specified safe chars, encode everything
                resource_path = resource_path.replace(
                    '{%s}' % k,
                    quote(str(v), safe=config.safe_chars_for_path_param)
                )

        # query parameters
        if query_params:
            query_params = self.sanitize_for_serialization(query_params)
            query_params = self.parameters_to_tuples(query_params,
                                                     collection_formats)

        # post parameters
        if post_params or files:
            post_params = post_params if post_params else []
            post_params = self.sanitize_for_serialization(post_params)
            post_params = self.parameters_to_tuples(post_params,
                                                    collection_formats)
            post_params.extend(self.files_parameters(files))

        # auth setting
        self.update_params_for_auth(header_params, query_params, auth_settings)

        # body
        if body:
            body = self.sanitize_for_serialization(body)

        # request url
        if _host is None:
            url = self.configuration.host + resource_path
        else:
            # use server/host defined in path or operation instead
            url = _host + resource_path

        # perform request and return response
>       response_data = self.request(
            method, url, query_params=query_params, headers=header_params,
            post_params=post_params, body=body,
            _preload_content=_preload_content,
            _request_timeout=_request_timeout)

../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api_client.py:180:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
method = 'POST'
url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
query_params = []
headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
post_params = []
body = {'apiVersion': 'serving.kserve.io/v1beta1', 'kind': 'InferenceService', 'metadata': {'labels': {'networking.kserve.io/...e': {'httpGet': {'path': '/v2/models/isvc-xgboost-v2-runtime/ready', 'port': 8080}, 'initialDelaySeconds': 30}, ...}}}}
_preload_content = True, _request_timeout = None

    def request(self, method, url, query_params=None, headers=None,
                post_params=None, body=None, _preload_content=True,
                _request_timeout=None):
        """Makes the HTTP request using RESTClient."""
        if method == "GET":
            return self.rest_client.GET(url,
                                        query_params=query_params,
                                        _preload_content=_preload_content,
                                        _request_timeout=_request_timeout,
                                        headers=headers)
        elif method == "HEAD":
            return self.rest_client.HEAD(url,
                                         query_params=query_params,
                                         _preload_content=_preload_content,
                                         _request_timeout=_request_timeout,
                                         headers=headers)
        elif method == "OPTIONS":
            return self.rest_client.OPTIONS(url,
                                            query_params=query_params,
                                            headers=headers,
                                            _preload_content=_preload_content,
                                            _request_timeout=_request_timeout)
        elif method == "POST":
>           return self.rest_client.POST(url,
                                         query_params=query_params,
                                         headers=headers,
                                         post_params=post_params,
                                         _preload_content=_preload_content,
                                         _request_timeout=_request_timeout,

../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api_client.py:391:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
query_params = [], post_params = []
body = {'apiVersion': 'serving.kserve.io/v1beta1', 'kind': 'InferenceService', 'metadata': {'labels': {'networking.kserve.io/...e': {'httpGet': {'path': '/v2/models/isvc-xgboost-v2-runtime/ready', 'port': 8080}, 'initialDelaySeconds': 30}, ...}}}}
_preload_content = True, _request_timeout = None

    def POST(self, url, headers=None, query_params=None, post_params=None,
             body=None, _preload_content=True, _request_timeout=None):
>       return self.request("POST", url,
                            headers=headers,
                            query_params=query_params,
                            post_params=post_params,
                            _preload_content=_preload_content,
                            _request_timeout=_request_timeout,
                            body=body)

../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/rest.py:279:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
method = 'POST'
url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
query_params = []
headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
body = {'apiVersion': 'serving.kserve.io/v1beta1', 'kind': 'InferenceService', 'metadata': {'labels': {'networking.kserve.io/...e': {'httpGet': {'path': '/v2/models/isvc-xgboost-v2-runtime/ready', 'port': 8080}, 'initialDelaySeconds': 30}, ...}}}}
post_params = {}, _preload_content = True, _request_timeout = None

    def request(self, method, url, query_params=None, headers=None,
                body=None, post_params=None, _preload_content=True,
                _request_timeout=None):
        """Perform requests.
[e2e-predictor] [e2e-predictor] :param method: http request method [e2e-predictor] :param url: http request url [e2e-predictor] :param query_params: query parameters in the url [e2e-predictor] :param headers: http request headers [e2e-predictor] :param body: request json body, for `application/json` [e2e-predictor] :param post_params: request post parameters, [e2e-predictor] `application/x-www-form-urlencoded` [e2e-predictor] and `multipart/form-data` [e2e-predictor] :param _preload_content: if False, the urllib3.HTTPResponse object will [e2e-predictor] be returned without reading/decoding response [e2e-predictor] data. Default is True. [e2e-predictor] :param _request_timeout: timeout setting for this request. If one [e2e-predictor] number provided, it will be total request [e2e-predictor] timeout. It can also be a pair (tuple) of [e2e-predictor] (connection, read) timeouts. [e2e-predictor] """ [e2e-predictor] method = method.upper() [e2e-predictor] assert method in ['GET', 'HEAD', 'DELETE', 'POST', 'PUT', [e2e-predictor] 'PATCH', 'OPTIONS'] [e2e-predictor] [e2e-predictor] if post_params and body: [e2e-predictor] raise ApiValueError( [e2e-predictor] "body parameter cannot be used with post_params parameter." 
[e2e-predictor] ) [e2e-predictor] [e2e-predictor] post_params = post_params or {} [e2e-predictor] headers = headers or {} [e2e-predictor] [e2e-predictor] timeout = None [e2e-predictor] if _request_timeout: [e2e-predictor] if isinstance(_request_timeout, (int, ) if six.PY3 else (int, long)): # noqa: E501,F821 [e2e-predictor] timeout = urllib3.Timeout(total=_request_timeout) [e2e-predictor] elif (isinstance(_request_timeout, tuple) and [e2e-predictor] len(_request_timeout) == 2): [e2e-predictor] timeout = urllib3.Timeout( [e2e-predictor] connect=_request_timeout[0], read=_request_timeout[1]) [e2e-predictor] [e2e-predictor] if 'Content-Type' not in headers: [e2e-predictor] headers['Content-Type'] = 'application/json' [e2e-predictor] [e2e-predictor] try: [e2e-predictor] # For `POST`, `PUT`, `PATCH`, `OPTIONS`, `DELETE` [e2e-predictor] if method in ['POST', 'PUT', 'PATCH', 'OPTIONS', 'DELETE']: [e2e-predictor] if query_params: [e2e-predictor] url += '?' + urlencode(query_params) [e2e-predictor] if (re.search('json', headers['Content-Type'], re.IGNORECASE) or [e2e-predictor] headers['Content-Type'] == 'application/apply-patch+yaml'): [e2e-predictor] if headers['Content-Type'] == 'application/json-patch+json': [e2e-predictor] if not isinstance(body, list): [e2e-predictor] headers['Content-Type'] = \ [e2e-predictor] 'application/strategic-merge-patch+json' [e2e-predictor] request_body = None [e2e-predictor] if body is not None: [e2e-predictor] request_body = json.dumps(body) [e2e-predictor] > r = self.pool_manager.request( [e2e-predictor] method, url, [e2e-predictor] body=request_body, [e2e-predictor] preload_content=_preload_content, [e2e-predictor] timeout=timeout, [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/rest.py:172: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] method = 'POST' [e2e-predictor] url = 
'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices' [e2e-predictor] body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io...m", "memory": "128Mi"}}, "runtime": "kserve-mlserver", "storageUri": "gs://kfserving-examples/models/xgboost/iris"}}}}' [e2e-predictor] fields = None [e2e-predictor] headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'} [e2e-predictor] json = None [e2e-predictor] urlopen_kw = {'body': '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking....-mlserver", "storageUri": "gs://kfserving-examples/models/xgboost/iris"}}}}', 'preload_content': True, 'timeout': None} [e2e-predictor] [e2e-predictor] def request( [e2e-predictor] self, [e2e-predictor] method: str, [e2e-predictor] url: str, [e2e-predictor] body: _TYPE_BODY | None = None, [e2e-predictor] fields: _TYPE_FIELDS | None = None, [e2e-predictor] headers: typing.Mapping[str, str] | None = None, [e2e-predictor] json: typing.Any | None = None, [e2e-predictor] **urlopen_kw: typing.Any, [e2e-predictor] ) -> BaseHTTPResponse: [e2e-predictor] """ [e2e-predictor] Make a request using :meth:`urlopen` with the appropriate encoding of [e2e-predictor] ``fields`` based on the ``method`` used. [e2e-predictor] [e2e-predictor] This is a convenience method that requires the least amount of manual [e2e-predictor] effort. It can be used in most situations, while still having the [e2e-predictor] option to drop down to more specific methods when necessary, such as [e2e-predictor] :meth:`request_encode_url`, :meth:`request_encode_body`, [e2e-predictor] or even the lowest level :meth:`urlopen`. [e2e-predictor] [e2e-predictor] :param method: [e2e-predictor] HTTP request method (such as GET, POST, PUT, etc.) 
[e2e-predictor] [e2e-predictor] :param url: [e2e-predictor] The URL to perform the request on. [e2e-predictor] [e2e-predictor] :param body: [e2e-predictor] Data to send in the request body, either :class:`str`, :class:`bytes`, [e2e-predictor] an iterable of :class:`str`/:class:`bytes`, or a file-like object. [e2e-predictor] [e2e-predictor] :param fields: [e2e-predictor] Data to encode and send in the URL or request body, depending on ``method``. [e2e-predictor] [e2e-predictor] :param headers: [e2e-predictor] Dictionary of custom headers to send, such as User-Agent, [e2e-predictor] If-None-Match, etc. If None, pool headers are used. If provided, [e2e-predictor] these headers completely replace any pool-specific headers. [e2e-predictor] [e2e-predictor] :param json: [e2e-predictor] Data to encode and send as JSON with UTF-encoded in the request body. [e2e-predictor] The ``"Content-Type"`` header will be set to ``"application/json"`` [e2e-predictor] unless specified otherwise. [e2e-predictor] """ [e2e-predictor] method = method.upper() [e2e-predictor] [e2e-predictor] if json is not None and body is not None: [e2e-predictor] raise TypeError( [e2e-predictor] "request got values for both 'body' and 'json' parameters which are mutually exclusive" [e2e-predictor] ) [e2e-predictor] [e2e-predictor] if json is not None: [e2e-predictor] if headers is None: [e2e-predictor] headers = self.headers [e2e-predictor] [e2e-predictor] if not ("content-type" in map(str.lower, headers.keys())): [e2e-predictor] headers = HTTPHeaderDict(headers) [e2e-predictor] headers["Content-Type"] = "application/json" [e2e-predictor] [e2e-predictor] body = _json.dumps(json, separators=(",", ":"), ensure_ascii=False).encode( [e2e-predictor] "utf-8" [e2e-predictor] ) [e2e-predictor] [e2e-predictor] if body is not None: [e2e-predictor] urlopen_kw["body"] = body [e2e-predictor] [e2e-predictor] if method in self._encode_url_methods: [e2e-predictor] return self.request_encode_url( [e2e-predictor] method, 
[e2e-predictor] url, [e2e-predictor] fields=fields, # type: ignore[arg-type] [e2e-predictor] headers=headers, [e2e-predictor] **urlopen_kw, [e2e-predictor] ) [e2e-predictor] else: [e2e-predictor] > return self.request_encode_body( [e2e-predictor] method, url, fields=fields, headers=headers, **urlopen_kw [e2e-predictor] ) [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/_request_methods.py:143: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] method = 'POST' [e2e-predictor] url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices' [e2e-predictor] fields = None [e2e-predictor] headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'} [e2e-predictor] encode_multipart = True, multipart_boundary = None [e2e-predictor] urlopen_kw = {'body': '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking....-mlserver", "storageUri": "gs://kfserving-examples/models/xgboost/iris"}}}}', 'preload_content': True, 'timeout': None} [e2e-predictor] extra_kw = {'body': '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking....nt': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'}), 'preload_content': True, 'timeout': None} [e2e-predictor] [e2e-predictor] def request_encode_body( [e2e-predictor] self, [e2e-predictor] method: str, [e2e-predictor] url: str, [e2e-predictor] fields: _TYPE_FIELDS | None = None, [e2e-predictor] headers: typing.Mapping[str, str] | None = None, [e2e-predictor] encode_multipart: bool = True, [e2e-predictor] multipart_boundary: str | None = None, [e2e-predictor] **urlopen_kw: str, [e2e-predictor] ) -> 
BaseHTTPResponse: [e2e-predictor] """ [e2e-predictor] Make a request using :meth:`urlopen` with the ``fields`` encoded in [e2e-predictor] the body. This is useful for request methods like POST, PUT, PATCH, etc. [e2e-predictor] [e2e-predictor] When ``encode_multipart=True`` (default), then [e2e-predictor] :func:`urllib3.encode_multipart_formdata` is used to encode [e2e-predictor] the payload with the appropriate content type. Otherwise [e2e-predictor] :func:`urllib.parse.urlencode` is used with the [e2e-predictor] 'application/x-www-form-urlencoded' content type. [e2e-predictor] [e2e-predictor] Multipart encoding must be used when posting files, and it's reasonably [e2e-predictor] safe to use it in other times too. However, it may break request [e2e-predictor] signing, such as with OAuth. [e2e-predictor] [e2e-predictor] Supports an optional ``fields`` parameter of key/value strings AND [e2e-predictor] key/filetuple. A filetuple is a (filename, data, MIME type) tuple where [e2e-predictor] the MIME type is optional. For example:: [e2e-predictor] [e2e-predictor] fields = { [e2e-predictor] 'foo': 'bar', [e2e-predictor] 'fakefile': ('foofile.txt', 'contents of foofile'), [e2e-predictor] 'realfile': ('barfile.txt', open('realfile').read()), [e2e-predictor] 'typedfile': ('bazfile.bin', open('bazfile').read(), [e2e-predictor] 'image/jpeg'), [e2e-predictor] 'nonamefile': 'contents of nonamefile field', [e2e-predictor] } [e2e-predictor] [e2e-predictor] When uploading a file, providing a filename (the first parameter of the [e2e-predictor] tuple) is optional but recommended to best mimic behavior of browsers. [e2e-predictor] [e2e-predictor] Note that if ``headers`` are supplied, the 'Content-Type' header will [e2e-predictor] be overwritten because it depends on the dynamic random boundary string [e2e-predictor] which is used to compose the body of the request. The random boundary [e2e-predictor] string can be explicitly set with the ``multipart_boundary`` parameter. 
[e2e-predictor] [e2e-predictor] :param method: [e2e-predictor] HTTP request method (such as GET, POST, PUT, etc.) [e2e-predictor] [e2e-predictor] :param url: [e2e-predictor] The URL to perform the request on. [e2e-predictor] [e2e-predictor] :param fields: [e2e-predictor] Data to encode and send in the request body. [e2e-predictor] [e2e-predictor] :param headers: [e2e-predictor] Dictionary of custom headers to send, such as User-Agent, [e2e-predictor] If-None-Match, etc. If None, pool headers are used. If provided, [e2e-predictor] these headers completely replace any pool-specific headers. [e2e-predictor] [e2e-predictor] :param encode_multipart: [e2e-predictor] If True, encode the ``fields`` using the multipart/form-data MIME [e2e-predictor] format. [e2e-predictor] [e2e-predictor] :param multipart_boundary: [e2e-predictor] If not specified, then a random boundary will be generated using [e2e-predictor] :func:`urllib3.filepost.choose_boundary`. [e2e-predictor] """ [e2e-predictor] if headers is None: [e2e-predictor] headers = self.headers [e2e-predictor] [e2e-predictor] extra_kw: dict[str, typing.Any] = {"headers": HTTPHeaderDict(headers)} [e2e-predictor] body: bytes | str [e2e-predictor] [e2e-predictor] if fields: [e2e-predictor] if "body" in urlopen_kw: [e2e-predictor] raise TypeError( [e2e-predictor] "request got values for both 'fields' and 'body', can only specify one." 
[e2e-predictor] ) [e2e-predictor] [e2e-predictor] if encode_multipart: [e2e-predictor] body, content_type = encode_multipart_formdata( [e2e-predictor] fields, boundary=multipart_boundary [e2e-predictor] ) [e2e-predictor] else: [e2e-predictor] body, content_type = ( [e2e-predictor] urlencode(fields), # type: ignore[arg-type] [e2e-predictor] "application/x-www-form-urlencoded", [e2e-predictor] ) [e2e-predictor] [e2e-predictor] extra_kw["body"] = body [e2e-predictor] extra_kw["headers"].setdefault("Content-Type", content_type) [e2e-predictor] [e2e-predictor] extra_kw.update(urlopen_kw) [e2e-predictor] [e2e-predictor] > return self.urlopen(method, url, **extra_kw) [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/_request_methods.py:278: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] method = 'POST' [e2e-predictor] url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices' [e2e-predictor] redirect = True [e2e-predictor] kw = {'assert_same_host': False, 'body': '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata...', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'}), 'preload_content': True, ...} [e2e-predictor] u = Url(scheme='https', auth=None, host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', p...443, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices', query=None, fragment=None) [e2e-predictor] conn = [e2e-predictor] [e2e-predictor] def urlopen( # type: ignore[override] [e2e-predictor] self, method: str, url: str, redirect: bool = True, **kw: typing.Any [e2e-predictor] ) -> BaseHTTPResponse: [e2e-predictor] """ [e2e-predictor] Same as 
:meth:`urllib3.HTTPConnectionPool.urlopen` [e2e-predictor] with custom cross-host redirect logic and only sends the request-uri [e2e-predictor] portion of the ``url``. [e2e-predictor] [e2e-predictor] The given ``url`` parameter must be absolute, such that an appropriate [e2e-predictor] :class:`urllib3.connectionpool.ConnectionPool` can be chosen for it. [e2e-predictor] """ [e2e-predictor] u = parse_url(url) [e2e-predictor] [e2e-predictor] if u.scheme is None: [e2e-predictor] warnings.warn( [e2e-predictor] "URLs without a scheme (ie 'https://') are deprecated and will raise an error " [e2e-predictor] "in a future version of urllib3. To avoid this DeprecationWarning ensure all URLs " [e2e-predictor] "start with 'https://' or 'http://'. Read more in this issue: " [e2e-predictor] "https://github.com/urllib3/urllib3/issues/2920", [e2e-predictor] category=DeprecationWarning, [e2e-predictor] stacklevel=2, [e2e-predictor] ) [e2e-predictor] [e2e-predictor] conn = self.connection_from_host(u.host, port=u.port, scheme=u.scheme) [e2e-predictor] [e2e-predictor] kw["assert_same_host"] = False [e2e-predictor] kw["redirect"] = False [e2e-predictor] [e2e-predictor] if "headers" not in kw: [e2e-predictor] kw["headers"] = self.headers [e2e-predictor] [e2e-predictor] if self._proxy_requires_url_absolute_form(u): [e2e-predictor] response = conn.urlopen(method, url, **kw) [e2e-predictor] else: [e2e-predictor] > response = conn.urlopen(method, u.request_uri, **kw) [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/poolmanager.py:457: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] method = 'POST' [e2e-predictor] url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices' [e2e-predictor] body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": 
{"networking.kserve.io...m", "memory": "128Mi"}}, "runtime": "kserve-mlserver", "storageUri": "gs://kfserving-examples/models/xgboost/iris"}}}}' [e2e-predictor] headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'}) [e2e-predictor] retries = Retry(total=2, connect=None, read=None, redirect=None, status=None) [e2e-predictor] redirect = False, assert_same_host = False, timeout = None, pool_timeout = None [e2e-predictor] release_conn = True, chunked = False, body_pos = None, preload_content = True [e2e-predictor] decode_content = True, response_kw = {} [e2e-predictor] parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices', query=None, fragment=None) [e2e-predictor] destination_scheme = None, conn = None, release_this_conn = True [e2e-predictor] http_tunnel_required = False [e2e-predictor] err = NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.c...a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)") [e2e-predictor] clean_exit = False [e2e-predictor] [e2e-predictor] def urlopen( # type: ignore[override] [e2e-predictor] self, [e2e-predictor] method: str, [e2e-predictor] url: str, [e2e-predictor] body: _TYPE_BODY | None = None, [e2e-predictor] headers: typing.Mapping[str, str] | None = None, [e2e-predictor] retries: Retry | bool | int | None = None, [e2e-predictor] redirect: bool = True, [e2e-predictor] assert_same_host: bool = True, [e2e-predictor] timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, [e2e-predictor] pool_timeout: int | None = None, [e2e-predictor] release_conn: bool | None = None, [e2e-predictor] chunked: bool = False, [e2e-predictor] body_pos: _TYPE_BODY_POSITION | None = None, [e2e-predictor] preload_content: bool = True, [e2e-predictor] 
decode_content: bool = True, [e2e-predictor] **response_kw: typing.Any, [e2e-predictor] ) -> BaseHTTPResponse: [e2e-predictor] """ [e2e-predictor] Get a connection from the pool and perform an HTTP request. This is the [e2e-predictor] lowest level call for making a request, so you'll need to specify all [e2e-predictor] the raw details. [e2e-predictor] [e2e-predictor] .. note:: [e2e-predictor] [e2e-predictor] More commonly, it's appropriate to use a convenience method [e2e-predictor] such as :meth:`request`. [e2e-predictor] [e2e-predictor] .. note:: [e2e-predictor] [e2e-predictor] `release_conn` will only behave as expected if [e2e-predictor] `preload_content=False` because we want to make [e2e-predictor] `preload_content=False` the default behaviour someday soon without [e2e-predictor] breaking backwards compatibility. [e2e-predictor] [e2e-predictor] :param method: [e2e-predictor] HTTP request method (such as GET, POST, PUT, etc.) [e2e-predictor] [e2e-predictor] :param url: [e2e-predictor] The URL to perform the request on. [e2e-predictor] [e2e-predictor] :param body: [e2e-predictor] Data to send in the request body, either :class:`str`, :class:`bytes`, [e2e-predictor] an iterable of :class:`str`/:class:`bytes`, or a file-like object. [e2e-predictor] [e2e-predictor] :param headers: [e2e-predictor] Dictionary of custom headers to send, such as User-Agent, [e2e-predictor] If-None-Match, etc. If None, pool headers are used. If provided, [e2e-predictor] these headers completely replace any pool-specific headers. [e2e-predictor] [e2e-predictor] :param retries: [e2e-predictor] Configure the number of retries to allow before raising a [e2e-predictor] :class:`~urllib3.exceptions.MaxRetryError` exception. [e2e-predictor] [e2e-predictor] If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a [e2e-predictor] :class:`~urllib3.util.retry.Retry` object for fine-grained control [e2e-predictor] over different types of retries. 
[e2e-predictor] Pass an integer number to retry connection errors that many times, [e2e-predictor] but no other types of errors. Pass zero to never retry. [e2e-predictor] [e2e-predictor] If ``False``, then retries are disabled and any exception is raised [e2e-predictor] immediately. Also, instead of raising a MaxRetryError on redirects, [e2e-predictor] the redirect response will be returned. [e2e-predictor] [e2e-predictor] :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. [e2e-predictor] [e2e-predictor] :param redirect: [e2e-predictor] If True, automatically handle redirects (status codes 301, 302, [e2e-predictor] 303, 307, 308). Each redirect counts as a retry. Disabling retries [e2e-predictor] will disable redirect, too. [e2e-predictor] [e2e-predictor] :param assert_same_host: [e2e-predictor] If ``True``, will make sure that the host of the pool requests is [e2e-predictor] consistent else will raise HostChangedError. When ``False``, you can [e2e-predictor] use the pool on an HTTP proxy and request foreign hosts. [e2e-predictor] [e2e-predictor] :param timeout: [e2e-predictor] If specified, overrides the default timeout for this one [e2e-predictor] request. It may be a float (in seconds) or an instance of [e2e-predictor] :class:`urllib3.util.Timeout`. [e2e-predictor] [e2e-predictor] :param pool_timeout: [e2e-predictor] If set and the pool is set to block=True, then this method will [e2e-predictor] block for ``pool_timeout`` seconds and raise EmptyPoolError if no [e2e-predictor] connection is available within the time period. [e2e-predictor] [e2e-predictor] :param bool preload_content: [e2e-predictor] If True, the response's body will be preloaded into memory. [e2e-predictor] [e2e-predictor] :param bool decode_content: [e2e-predictor] If True, will attempt to decode the body based on the [e2e-predictor] 'content-encoding' header. 
[e2e-predictor] [e2e-predictor] :param release_conn: [e2e-predictor] If False, then the urlopen call will not release the connection [e2e-predictor] back into the pool once a response is received (but will release if [e2e-predictor] you read the entire contents of the response such as when [e2e-predictor] `preload_content=True`). This is useful if you're not preloading [e2e-predictor] the response's content immediately. You will need to call [e2e-predictor] ``r.release_conn()`` on the response ``r`` to return the connection [e2e-predictor] back into the pool. If None, it takes the value of ``preload_content`` [e2e-predictor] which defaults to ``True``. [e2e-predictor] [e2e-predictor] :param bool chunked: [e2e-predictor] If True, urllib3 will send the body using chunked transfer [e2e-predictor] encoding. Otherwise, urllib3 will send the body using the standard [e2e-predictor] content-length form. Defaults to False. [e2e-predictor] [e2e-predictor] :param int body_pos: [e2e-predictor] Position to seek to in file-like body in the event of a retry or [e2e-predictor] redirect. Typically this won't need to be set because urllib3 will [e2e-predictor] auto-populate the value when needed. 
[e2e-predictor] """ [e2e-predictor] parsed_url = parse_url(url) [e2e-predictor] destination_scheme = parsed_url.scheme [e2e-predictor] [e2e-predictor] if headers is None: [e2e-predictor] headers = self.headers [e2e-predictor] [e2e-predictor] if not isinstance(retries, Retry): [e2e-predictor] retries = Retry.from_int(retries, redirect=redirect, default=self.retries) [e2e-predictor] [e2e-predictor] if release_conn is None: [e2e-predictor] release_conn = preload_content [e2e-predictor] [e2e-predictor] # Check host [e2e-predictor] if assert_same_host and not self.is_same_host(url): [e2e-predictor] raise HostChangedError(self, url, retries) [e2e-predictor] [e2e-predictor] # Ensure that the URL we're connecting to is properly encoded [e2e-predictor] if url.startswith("/"): [e2e-predictor] url = to_str(_encode_target(url)) [e2e-predictor] else: [e2e-predictor] url = to_str(parsed_url.url) [e2e-predictor] [e2e-predictor] conn = None [e2e-predictor] [e2e-predictor] # Track whether `conn` needs to be released before [e2e-predictor] # returning/raising/recursing. Update this variable if necessary, and [e2e-predictor] # leave `release_conn` constant throughout the function. That way, if [e2e-predictor] # the function recurses, the original value of `release_conn` will be [e2e-predictor] # passed down into the recursive call, and its value will be respected. [e2e-predictor] # [e2e-predictor] # See issue #651 [1] for details. [e2e-predictor] # [e2e-predictor] # [1] [e2e-predictor] release_this_conn = release_conn [e2e-predictor] [e2e-predictor] http_tunnel_required = connection_requires_http_tunnel( [e2e-predictor] self.proxy, self.proxy_config, destination_scheme [e2e-predictor] ) [e2e-predictor] [e2e-predictor] # Merge the proxy headers. Only done when not using HTTP CONNECT. We [e2e-predictor] # have to copy the headers dict so we can safely change it without those [e2e-predictor] # changes being reflected in anyone else's copy. 
[e2e-predictor] if not http_tunnel_required: [e2e-predictor] headers = headers.copy() # type: ignore[attr-defined] [e2e-predictor] headers.update(self.proxy_headers) # type: ignore[union-attr] [e2e-predictor] [e2e-predictor] # Must keep the exception bound to a separate variable or else Python 3 [e2e-predictor] # complains about UnboundLocalError. [e2e-predictor] err = None [e2e-predictor] [e2e-predictor] # Keep track of whether we cleanly exited the except block. This [e2e-predictor] # ensures we do proper cleanup in finally. [e2e-predictor] clean_exit = False [e2e-predictor] [e2e-predictor] # Rewind body position, if needed. Record current position [e2e-predictor] # for future rewinds in the event of a redirect/retry. [e2e-predictor] body_pos = set_file_position(body, body_pos) [e2e-predictor] [e2e-predictor] try: [e2e-predictor] # Request a connection from the queue. [e2e-predictor] timeout_obj = self._get_timeout(timeout) [e2e-predictor] conn = self._get_conn(timeout=pool_timeout) [e2e-predictor] [e2e-predictor] conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] [e2e-predictor] [e2e-predictor] # Is this a closed/new connection that requires CONNECT tunnelling? [e2e-predictor] if self.proxy is not None and http_tunnel_required and conn.is_closed: [e2e-predictor] try: [e2e-predictor] self._prepare_proxy(conn) [e2e-predictor] except (BaseSSLError, OSError, SocketTimeout) as e: [e2e-predictor] self._raise_timeout( [e2e-predictor] err=e, url=self.proxy.url, timeout_value=conn.timeout [e2e-predictor] ) [e2e-predictor] raise [e2e-predictor] [e2e-predictor] # If we're going to release the connection in ``finally:``, then [e2e-predictor] # the response doesn't need to know about the connection. Otherwise [e2e-predictor] # it will also try to release it and we'll have a double-release [e2e-predictor] # mess. 
[e2e-predictor]             response_conn = conn if not release_conn else None
[e2e-predictor]
[e2e-predictor]             # Make the request on the HTTPConnection object
[e2e-predictor]             response = self._make_request(
[e2e-predictor]                 conn,
[e2e-predictor]                 method,
[e2e-predictor]                 url,
[e2e-predictor]                 timeout=timeout_obj,
[e2e-predictor]                 body=body,
[e2e-predictor]                 headers=headers,
[e2e-predictor]                 chunked=chunked,
[e2e-predictor]                 retries=retries,
[e2e-predictor]                 response_conn=response_conn,
[e2e-predictor]                 preload_content=preload_content,
[e2e-predictor]                 decode_content=decode_content,
[e2e-predictor]                 **response_kw,
[e2e-predictor]             )
[e2e-predictor]
[e2e-predictor]             # Everything went great!
[e2e-predictor]             clean_exit = True
[e2e-predictor]
[e2e-predictor]         except EmptyPoolError:
[e2e-predictor]             # Didn't get a connection from the pool, no need to clean up
[e2e-predictor]             clean_exit = True
[e2e-predictor]             release_this_conn = False
[e2e-predictor]             raise
[e2e-predictor]
[e2e-predictor]         except (
[e2e-predictor]             TimeoutError,
[e2e-predictor]             HTTPException,
[e2e-predictor]             OSError,
[e2e-predictor]             ProtocolError,
[e2e-predictor]             BaseSSLError,
[e2e-predictor]             SSLError,
[e2e-predictor]             CertificateError,
[e2e-predictor]             ProxyError,
[e2e-predictor]         ) as e:
[e2e-predictor]             # Discard the connection for these exceptions. It will be
[e2e-predictor]             # replaced during the next _get_conn() call.
[e2e-predictor]             clean_exit = False
[e2e-predictor]             new_e: Exception = e
[e2e-predictor]             if isinstance(e, (BaseSSLError, CertificateError)):
[e2e-predictor]                 new_e = SSLError(e)
[e2e-predictor]             if isinstance(
[e2e-predictor]                 new_e,
[e2e-predictor]                 (
[e2e-predictor]                     OSError,
[e2e-predictor]                     NewConnectionError,
[e2e-predictor]                     TimeoutError,
[e2e-predictor]                     SSLError,
[e2e-predictor]                     HTTPException,
[e2e-predictor]                 ),
[e2e-predictor]             ) and (conn and conn.proxy and not conn.has_connected_to_proxy):
[e2e-predictor]                 new_e = _wrap_proxy_error(new_e, conn.proxy.scheme)
[e2e-predictor]             elif isinstance(new_e, (OSError, HTTPException)):
[e2e-predictor]                 new_e = ProtocolError("Connection aborted.", new_e)
[e2e-predictor]
[e2e-predictor]             retries = retries.increment(
[e2e-predictor]                 method, url, error=new_e, _pool=self, _stacktrace=sys.exc_info()[2]
[e2e-predictor]             )
[e2e-predictor]             retries.sleep()
[e2e-predictor]
[e2e-predictor]             # Keep track of the error for the retry warning.
[e2e-predictor]             err = e
[e2e-predictor]
[e2e-predictor]         finally:
[e2e-predictor]             if not clean_exit:
[e2e-predictor]                 # We hit some kind of exception, handled or otherwise. We need
[e2e-predictor]                 # to throw the connection away unless explicitly told not to.
[e2e-predictor]                 # Close the connection, set the variable to None, and make sure
[e2e-predictor]                 # we put the None back in the pool to avoid leaking it.
[e2e-predictor]                 if conn:
[e2e-predictor]                     conn.close()
[e2e-predictor]                     conn = None
[e2e-predictor]                 release_this_conn = True
[e2e-predictor]
[e2e-predictor]             if release_this_conn:
[e2e-predictor]                 # Put the connection back to be reused. If the connection is
[e2e-predictor]                 # expired then it will be None, which will get replaced with a
[e2e-predictor]                 # fresh connection during _get_conn.
[e2e-predictor]                 self._put_conn(conn)
[e2e-predictor]
[e2e-predictor]         if not conn:
[e2e-predictor]             # Try again
[e2e-predictor]             log.warning(
[e2e-predictor]                 "Retrying (%r) after connection broken by '%r': %s", retries, err, url
[e2e-predictor]             )
[e2e-predictor] >           return self.urlopen(
[e2e-predictor]                 method,
[e2e-predictor]                 url,
[e2e-predictor]                 body,
[e2e-predictor]                 headers,
[e2e-predictor]                 retries,
[e2e-predictor]                 redirect,
[e2e-predictor]                 assert_same_host,
[e2e-predictor]                 timeout=timeout,
[e2e-predictor]                 pool_timeout=pool_timeout,
[e2e-predictor]                 release_conn=release_conn,
[e2e-predictor]                 chunked=chunked,
[e2e-predictor]                 body_pos=body_pos,
[e2e-predictor]                 preload_content=preload_content,
[e2e-predictor]                 decode_content=decode_content,
[e2e-predictor]                 **response_kw,
[e2e-predictor]             )
[e2e-predictor]
[e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:871:
[e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
[e2e-predictor]
[e2e-predictor] self =
[e2e-predictor] method = 'POST'
[e2e-predictor] url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
[e2e-predictor] body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io...m", "memory": "128Mi"}}, "runtime": "kserve-mlserver", "storageUri": "gs://kfserving-examples/models/xgboost/iris"}}}}'
[e2e-predictor] headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'})
[e2e-predictor] retries = Retry(total=1, connect=None, read=None, redirect=None, status=None)
[e2e-predictor] redirect = False, assert_same_host = False, timeout = None, pool_timeout = None
[e2e-predictor] release_conn = True, chunked = False, body_pos = None, preload_content = True
[e2e-predictor] decode_content = True, response_kw = {}
[e2e-predictor] parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices', query=None, fragment=None)
[e2e-predictor] destination_scheme = None, conn = None, release_this_conn = True
[e2e-predictor] http_tunnel_required = False
[e2e-predictor] err = NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.c...a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")
[e2e-predictor] clean_exit = False
[e2e-predictor]
[e2e-predictor]     def urlopen(  # type: ignore[override]
[e2e-predictor]         self,
[e2e-predictor]         method: str,
[e2e-predictor]         url: str,
[e2e-predictor]         body: _TYPE_BODY | None = None,
[e2e-predictor]         headers: typing.Mapping[str, str] | None = None,
[e2e-predictor]         retries: Retry | bool | int | None = None,
[e2e-predictor]         redirect: bool = True,
[e2e-predictor]         assert_same_host: bool = True,
[e2e-predictor]         timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
[e2e-predictor]         pool_timeout: int | None = None,
[e2e-predictor]         release_conn: bool | None = None,
[e2e-predictor]         chunked: bool = False,
[e2e-predictor]         body_pos: _TYPE_BODY_POSITION | None = None,
[e2e-predictor]         preload_content: bool = True,
[e2e-predictor]         decode_content: bool = True,
[e2e-predictor]         **response_kw: typing.Any,
[e2e-predictor]     ) -> BaseHTTPResponse:
[e2e-predictor]         """
[e2e-predictor]         Get a connection from the pool and perform an HTTP request. This is the
[e2e-predictor]         lowest level call for making a request, so you'll need to specify all
[e2e-predictor]         the raw details.
[e2e-predictor]
[e2e-predictor]         .. note::
[e2e-predictor]
[e2e-predictor]            More commonly, it's appropriate to use a convenience method
[e2e-predictor]            such as :meth:`request`.
[e2e-predictor]
[e2e-predictor]         .. note::
[e2e-predictor]
[e2e-predictor]            `release_conn` will only behave as expected if
[e2e-predictor]            `preload_content=False` because we want to make
[e2e-predictor]            `preload_content=False` the default behaviour someday soon without
[e2e-predictor]            breaking backwards compatibility.
[e2e-predictor]
[e2e-predictor]         :param method:
[e2e-predictor]             HTTP request method (such as GET, POST, PUT, etc.)
[e2e-predictor]
[e2e-predictor]         :param url:
[e2e-predictor]             The URL to perform the request on.
[e2e-predictor]
[e2e-predictor]         :param body:
[e2e-predictor]             Data to send in the request body, either :class:`str`, :class:`bytes`,
[e2e-predictor]             an iterable of :class:`str`/:class:`bytes`, or a file-like object.
[e2e-predictor]
[e2e-predictor]         :param headers:
[e2e-predictor]             Dictionary of custom headers to send, such as User-Agent,
[e2e-predictor]             If-None-Match, etc. If None, pool headers are used. If provided,
[e2e-predictor]             these headers completely replace any pool-specific headers.
[e2e-predictor]
[e2e-predictor]         :param retries:
[e2e-predictor]             Configure the number of retries to allow before raising a
[e2e-predictor]             :class:`~urllib3.exceptions.MaxRetryError` exception.
[e2e-predictor]
[e2e-predictor]             If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a
[e2e-predictor]             :class:`~urllib3.util.retry.Retry` object for fine-grained control
[e2e-predictor]             over different types of retries.
[e2e-predictor]             Pass an integer number to retry connection errors that many times,
[e2e-predictor]             but no other types of errors. Pass zero to never retry.
[e2e-predictor]
[e2e-predictor]             If ``False``, then retries are disabled and any exception is raised
[e2e-predictor]             immediately. Also, instead of raising a MaxRetryError on redirects,
[e2e-predictor]             the redirect response will be returned.
[e2e-predictor]
[e2e-predictor]         :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
[e2e-predictor]
[e2e-predictor]         :param redirect:
[e2e-predictor]             If True, automatically handle redirects (status codes 301, 302,
[e2e-predictor]             303, 307, 308). Each redirect counts as a retry. Disabling retries
[e2e-predictor]             will disable redirect, too.
[e2e-predictor]
[e2e-predictor]         :param assert_same_host:
[e2e-predictor]             If ``True``, will make sure that the host of the pool requests is
[e2e-predictor]             consistent else will raise HostChangedError. When ``False``, you can
[e2e-predictor]             use the pool on an HTTP proxy and request foreign hosts.
[e2e-predictor]
[e2e-predictor]         :param timeout:
[e2e-predictor]             If specified, overrides the default timeout for this one
[e2e-predictor]             request. It may be a float (in seconds) or an instance of
[e2e-predictor]             :class:`urllib3.util.Timeout`.
[e2e-predictor]
[e2e-predictor]         :param pool_timeout:
[e2e-predictor]             If set and the pool is set to block=True, then this method will
[e2e-predictor]             block for ``pool_timeout`` seconds and raise EmptyPoolError if no
[e2e-predictor]             connection is available within the time period.
[e2e-predictor]
[e2e-predictor]         :param bool preload_content:
[e2e-predictor]             If True, the response's body will be preloaded into memory.
[e2e-predictor]
[e2e-predictor]         :param bool decode_content:
[e2e-predictor]             If True, will attempt to decode the body based on the
[e2e-predictor]             'content-encoding' header.
[e2e-predictor]
[e2e-predictor]         :param release_conn:
[e2e-predictor]             If False, then the urlopen call will not release the connection
[e2e-predictor]             back into the pool once a response is received (but will release if
[e2e-predictor]             you read the entire contents of the response such as when
[e2e-predictor]             `preload_content=True`). This is useful if you're not preloading
[e2e-predictor]             the response's content immediately. You will need to call
[e2e-predictor]             ``r.release_conn()`` on the response ``r`` to return the connection
[e2e-predictor]             back into the pool. If None, it takes the value of ``preload_content``
[e2e-predictor]             which defaults to ``True``.
[e2e-predictor]
[e2e-predictor]         :param bool chunked:
[e2e-predictor]             If True, urllib3 will send the body using chunked transfer
[e2e-predictor]             encoding. Otherwise, urllib3 will send the body using the standard
[e2e-predictor]             content-length form. Defaults to False.
[e2e-predictor]
[e2e-predictor]         :param int body_pos:
[e2e-predictor]             Position to seek to in file-like body in the event of a retry or
[e2e-predictor]             redirect. Typically this won't need to be set because urllib3 will
[e2e-predictor]             auto-populate the value when needed.
[e2e-predictor]         """
[e2e-predictor]         parsed_url = parse_url(url)
[e2e-predictor]         destination_scheme = parsed_url.scheme
[e2e-predictor]
[e2e-predictor]         if headers is None:
[e2e-predictor]             headers = self.headers
[e2e-predictor]
[e2e-predictor]         if not isinstance(retries, Retry):
[e2e-predictor]             retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
[e2e-predictor]
[e2e-predictor]         if release_conn is None:
[e2e-predictor]             release_conn = preload_content
[e2e-predictor]
[e2e-predictor]         # Check host
[e2e-predictor]         if assert_same_host and not self.is_same_host(url):
[e2e-predictor]             raise HostChangedError(self, url, retries)
[e2e-predictor]
[e2e-predictor]         # Ensure that the URL we're connecting to is properly encoded
[e2e-predictor]         if url.startswith("/"):
[e2e-predictor]             url = to_str(_encode_target(url))
[e2e-predictor]         else:
[e2e-predictor]             url = to_str(parsed_url.url)
[e2e-predictor]
[e2e-predictor]         conn = None
[e2e-predictor]
[e2e-predictor]         # Track whether `conn` needs to be released before
[e2e-predictor]         # returning/raising/recursing. Update this variable if necessary, and
[e2e-predictor]         # leave `release_conn` constant throughout the function. That way, if
[e2e-predictor]         # the function recurses, the original value of `release_conn` will be
[e2e-predictor]         # passed down into the recursive call, and its value will be respected.
[e2e-predictor]         #
[e2e-predictor]         # See issue #651 [1] for details.
[e2e-predictor]         #
[e2e-predictor]         # [1]
[e2e-predictor]         release_this_conn = release_conn
[e2e-predictor]
[e2e-predictor]         http_tunnel_required = connection_requires_http_tunnel(
[e2e-predictor]             self.proxy, self.proxy_config, destination_scheme
[e2e-predictor]         )
[e2e-predictor]
[e2e-predictor]         # Merge the proxy headers. Only done when not using HTTP CONNECT. We
[e2e-predictor]         # have to copy the headers dict so we can safely change it without those
[e2e-predictor]         # changes being reflected in anyone else's copy.
[e2e-predictor]         if not http_tunnel_required:
[e2e-predictor]             headers = headers.copy()  # type: ignore[attr-defined]
[e2e-predictor]             headers.update(self.proxy_headers)  # type: ignore[union-attr]
[e2e-predictor]
[e2e-predictor]         # Must keep the exception bound to a separate variable or else Python 3
[e2e-predictor]         # complains about UnboundLocalError.
[e2e-predictor]         err = None
[e2e-predictor]
[e2e-predictor]         # Keep track of whether we cleanly exited the except block. This
[e2e-predictor]         # ensures we do proper cleanup in finally.
[e2e-predictor]         clean_exit = False
[e2e-predictor]
[e2e-predictor]         # Rewind body position, if needed. Record current position
[e2e-predictor]         # for future rewinds in the event of a redirect/retry.
[e2e-predictor]         body_pos = set_file_position(body, body_pos)
[e2e-predictor]
[e2e-predictor]         try:
[e2e-predictor]             # Request a connection from the queue.
[e2e-predictor]             timeout_obj = self._get_timeout(timeout)
[e2e-predictor]             conn = self._get_conn(timeout=pool_timeout)
[e2e-predictor]
[e2e-predictor]             conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]
[e2e-predictor]
[e2e-predictor]             # Is this a closed/new connection that requires CONNECT tunnelling?
[e2e-predictor]             if self.proxy is not None and http_tunnel_required and conn.is_closed:
[e2e-predictor]                 try:
[e2e-predictor]                     self._prepare_proxy(conn)
[e2e-predictor]                 except (BaseSSLError, OSError, SocketTimeout) as e:
[e2e-predictor]                     self._raise_timeout(
[e2e-predictor]                         err=e, url=self.proxy.url, timeout_value=conn.timeout
[e2e-predictor]                     )
[e2e-predictor]                     raise
[e2e-predictor]
[e2e-predictor]             # If we're going to release the connection in ``finally:``, then
[e2e-predictor]             # the response doesn't need to know about the connection. Otherwise
[e2e-predictor]             # it will also try to release it and we'll have a double-release
[e2e-predictor]             # mess.
[e2e-predictor]             response_conn = conn if not release_conn else None
[e2e-predictor]
[e2e-predictor]             # Make the request on the HTTPConnection object
[e2e-predictor]             response = self._make_request(
[e2e-predictor]                 conn,
[e2e-predictor]                 method,
[e2e-predictor]                 url,
[e2e-predictor]                 timeout=timeout_obj,
[e2e-predictor]                 body=body,
[e2e-predictor]                 headers=headers,
[e2e-predictor]                 chunked=chunked,
[e2e-predictor]                 retries=retries,
[e2e-predictor]                 response_conn=response_conn,
[e2e-predictor]                 preload_content=preload_content,
[e2e-predictor]                 decode_content=decode_content,
[e2e-predictor]                 **response_kw,
[e2e-predictor]             )
[e2e-predictor]
[e2e-predictor]             # Everything went great!
[e2e-predictor]             clean_exit = True
[e2e-predictor]
[e2e-predictor]         except EmptyPoolError:
[e2e-predictor]             # Didn't get a connection from the pool, no need to clean up
[e2e-predictor]             clean_exit = True
[e2e-predictor]             release_this_conn = False
[e2e-predictor]             raise
[e2e-predictor]
[e2e-predictor]         except (
[e2e-predictor]             TimeoutError,
[e2e-predictor]             HTTPException,
[e2e-predictor]             OSError,
[e2e-predictor]             ProtocolError,
[e2e-predictor]             BaseSSLError,
[e2e-predictor]             SSLError,
[e2e-predictor]             CertificateError,
[e2e-predictor]             ProxyError,
[e2e-predictor]         ) as e:
[e2e-predictor]             # Discard the connection for these exceptions. It will be
[e2e-predictor]             # replaced during the next _get_conn() call.
[e2e-predictor]             clean_exit = False
[e2e-predictor]             new_e: Exception = e
[e2e-predictor]             if isinstance(e, (BaseSSLError, CertificateError)):
[e2e-predictor]                 new_e = SSLError(e)
[e2e-predictor]             if isinstance(
[e2e-predictor]                 new_e,
[e2e-predictor]                 (
[e2e-predictor]                     OSError,
[e2e-predictor]                     NewConnectionError,
[e2e-predictor]                     TimeoutError,
[e2e-predictor]                     SSLError,
[e2e-predictor]                     HTTPException,
[e2e-predictor]                 ),
[e2e-predictor]             ) and (conn and conn.proxy and not conn.has_connected_to_proxy):
[e2e-predictor]                 new_e = _wrap_proxy_error(new_e, conn.proxy.scheme)
[e2e-predictor]             elif isinstance(new_e, (OSError, HTTPException)):
[e2e-predictor]                 new_e = ProtocolError("Connection aborted.", new_e)
[e2e-predictor]
[e2e-predictor]             retries = retries.increment(
[e2e-predictor]                 method, url, error=new_e, _pool=self, _stacktrace=sys.exc_info()[2]
[e2e-predictor]             )
[e2e-predictor]             retries.sleep()
[e2e-predictor]
[e2e-predictor]             # Keep track of the error for the retry warning.
[e2e-predictor]             err = e
[e2e-predictor]
[e2e-predictor]         finally:
[e2e-predictor]             if not clean_exit:
[e2e-predictor]                 # We hit some kind of exception, handled or otherwise. We need
[e2e-predictor]                 # to throw the connection away unless explicitly told not to.
[e2e-predictor]                 # Close the connection, set the variable to None, and make sure
[e2e-predictor]                 # we put the None back in the pool to avoid leaking it.
[e2e-predictor]                 if conn:
[e2e-predictor]                     conn.close()
[e2e-predictor]                     conn = None
[e2e-predictor]                 release_this_conn = True
[e2e-predictor]
[e2e-predictor]             if release_this_conn:
[e2e-predictor]                 # Put the connection back to be reused. If the connection is
[e2e-predictor]                 # expired then it will be None, which will get replaced with a
[e2e-predictor]                 # fresh connection during _get_conn.
[e2e-predictor] self._put_conn(conn) [e2e-predictor] [e2e-predictor] if not conn: [e2e-predictor] # Try again [e2e-predictor] log.warning( [e2e-predictor] "Retrying (%r) after connection broken by '%r': %s", retries, err, url [e2e-predictor] ) [e2e-predictor] > return self.urlopen( [e2e-predictor] method, [e2e-predictor] url, [e2e-predictor] body, [e2e-predictor] headers, [e2e-predictor] retries, [e2e-predictor] redirect, [e2e-predictor] assert_same_host, [e2e-predictor] timeout=timeout, [e2e-predictor] pool_timeout=pool_timeout, [e2e-predictor] release_conn=release_conn, [e2e-predictor] chunked=chunked, [e2e-predictor] body_pos=body_pos, [e2e-predictor] preload_content=preload_content, [e2e-predictor] decode_content=decode_content, [e2e-predictor] **response_kw, [e2e-predictor] ) [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:871: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] method = 'POST' [e2e-predictor] url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices' [e2e-predictor] body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io...m", "memory": "128Mi"}}, "runtime": "kserve-mlserver", "storageUri": "gs://kfserving-examples/models/xgboost/iris"}}}}' [e2e-predictor] headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'}) [e2e-predictor] retries = Retry(total=0, connect=None, read=None, redirect=None, status=None) [e2e-predictor] redirect = False, assert_same_host = False, timeout = None, pool_timeout = None [e2e-predictor] release_conn = True, chunked = False, body_pos = None, preload_content = True [e2e-predictor] decode_content = True, response_kw = {} [e2e-predictor] parsed_url = Url(scheme=None, 
auth=None, host=None, port=None, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices', query=None, fragment=None) [e2e-predictor] destination_scheme = None, conn = None, release_this_conn = True [e2e-predictor] http_tunnel_required = False [e2e-predictor] err = NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.c...a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)") [e2e-predictor] clean_exit = False [e2e-predictor] [e2e-predictor] def urlopen( # type: ignore[override] [e2e-predictor] self, [e2e-predictor] method: str, [e2e-predictor] url: str, [e2e-predictor] body: _TYPE_BODY | None = None, [e2e-predictor] headers: typing.Mapping[str, str] | None = None, [e2e-predictor] retries: Retry | bool | int | None = None, [e2e-predictor] redirect: bool = True, [e2e-predictor] assert_same_host: bool = True, [e2e-predictor] timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, [e2e-predictor] pool_timeout: int | None = None, [e2e-predictor] release_conn: bool | None = None, [e2e-predictor] chunked: bool = False, [e2e-predictor] body_pos: _TYPE_BODY_POSITION | None = None, [e2e-predictor] preload_content: bool = True, [e2e-predictor] decode_content: bool = True, [e2e-predictor] **response_kw: typing.Any, [e2e-predictor] ) -> BaseHTTPResponse: [e2e-predictor] """ [e2e-predictor] Get a connection from the pool and perform an HTTP request. This is the [e2e-predictor] lowest level call for making a request, so you'll need to specify all [e2e-predictor] the raw details. [e2e-predictor] [e2e-predictor] .. note:: [e2e-predictor] [e2e-predictor] More commonly, it's appropriate to use a convenience method [e2e-predictor] such as :meth:`request`. [e2e-predictor] [e2e-predictor] .. 
note:: [e2e-predictor] [e2e-predictor] `release_conn` will only behave as expected if [e2e-predictor] `preload_content=False` because we want to make [e2e-predictor] `preload_content=False` the default behaviour someday soon without [e2e-predictor] breaking backwards compatibility. [e2e-predictor] [e2e-predictor] :param method: [e2e-predictor] HTTP request method (such as GET, POST, PUT, etc.) [e2e-predictor] [e2e-predictor] :param url: [e2e-predictor] The URL to perform the request on. [e2e-predictor] [e2e-predictor] :param body: [e2e-predictor] Data to send in the request body, either :class:`str`, :class:`bytes`, [e2e-predictor] an iterable of :class:`str`/:class:`bytes`, or a file-like object. [e2e-predictor] [e2e-predictor] :param headers: [e2e-predictor] Dictionary of custom headers to send, such as User-Agent, [e2e-predictor] If-None-Match, etc. If None, pool headers are used. If provided, [e2e-predictor] these headers completely replace any pool-specific headers. [e2e-predictor] [e2e-predictor] :param retries: [e2e-predictor] Configure the number of retries to allow before raising a [e2e-predictor] :class:`~urllib3.exceptions.MaxRetryError` exception. [e2e-predictor] [e2e-predictor] If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a [e2e-predictor] :class:`~urllib3.util.retry.Retry` object for fine-grained control [e2e-predictor] over different types of retries. [e2e-predictor] Pass an integer number to retry connection errors that many times, [e2e-predictor] but no other types of errors. Pass zero to never retry. [e2e-predictor] [e2e-predictor] If ``False``, then retries are disabled and any exception is raised [e2e-predictor] immediately. Also, instead of raising a MaxRetryError on redirects, [e2e-predictor] the redirect response will be returned. [e2e-predictor] [e2e-predictor] :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 
[e2e-predictor] [e2e-predictor] :param redirect: [e2e-predictor] If True, automatically handle redirects (status codes 301, 302, [e2e-predictor] 303, 307, 308). Each redirect counts as a retry. Disabling retries [e2e-predictor] will disable redirect, too. [e2e-predictor] [e2e-predictor] :param assert_same_host: [e2e-predictor] If ``True``, will make sure that the host of the pool requests is [e2e-predictor] consistent else will raise HostChangedError. When ``False``, you can [e2e-predictor] use the pool on an HTTP proxy and request foreign hosts. [e2e-predictor] [e2e-predictor] :param timeout: [e2e-predictor] If specified, overrides the default timeout for this one [e2e-predictor] request. It may be a float (in seconds) or an instance of [e2e-predictor] :class:`urllib3.util.Timeout`. [e2e-predictor] [e2e-predictor] :param pool_timeout: [e2e-predictor] If set and the pool is set to block=True, then this method will [e2e-predictor] block for ``pool_timeout`` seconds and raise EmptyPoolError if no [e2e-predictor] connection is available within the time period. [e2e-predictor] [e2e-predictor] :param bool preload_content: [e2e-predictor] If True, the response's body will be preloaded into memory. [e2e-predictor] [e2e-predictor] :param bool decode_content: [e2e-predictor] If True, will attempt to decode the body based on the [e2e-predictor] 'content-encoding' header. [e2e-predictor] [e2e-predictor] :param release_conn: [e2e-predictor] If False, then the urlopen call will not release the connection [e2e-predictor] back into the pool once a response is received (but will release if [e2e-predictor] you read the entire contents of the response such as when [e2e-predictor] `preload_content=True`). This is useful if you're not preloading [e2e-predictor] the response's content immediately. You will need to call [e2e-predictor] ``r.release_conn()`` on the response ``r`` to return the connection [e2e-predictor] back into the pool. 
If None, it takes the value of ``preload_content`` [e2e-predictor] which defaults to ``True``. [e2e-predictor] [e2e-predictor] :param bool chunked: [e2e-predictor] If True, urllib3 will send the body using chunked transfer [e2e-predictor] encoding. Otherwise, urllib3 will send the body using the standard [e2e-predictor] content-length form. Defaults to False. [e2e-predictor] [e2e-predictor] :param int body_pos: [e2e-predictor] Position to seek to in file-like body in the event of a retry or [e2e-predictor] redirect. Typically this won't need to be set because urllib3 will [e2e-predictor] auto-populate the value when needed. [e2e-predictor] """ [e2e-predictor] parsed_url = parse_url(url) [e2e-predictor] destination_scheme = parsed_url.scheme [e2e-predictor] [e2e-predictor] if headers is None: [e2e-predictor] headers = self.headers [e2e-predictor] [e2e-predictor] if not isinstance(retries, Retry): [e2e-predictor] retries = Retry.from_int(retries, redirect=redirect, default=self.retries) [e2e-predictor] [e2e-predictor] if release_conn is None: [e2e-predictor] release_conn = preload_content [e2e-predictor] [e2e-predictor] # Check host [e2e-predictor] if assert_same_host and not self.is_same_host(url): [e2e-predictor] raise HostChangedError(self, url, retries) [e2e-predictor] [e2e-predictor] # Ensure that the URL we're connecting to is properly encoded [e2e-predictor] if url.startswith("/"): [e2e-predictor] url = to_str(_encode_target(url)) [e2e-predictor] else: [e2e-predictor] url = to_str(parsed_url.url) [e2e-predictor] [e2e-predictor] conn = None [e2e-predictor] [e2e-predictor] # Track whether `conn` needs to be released before [e2e-predictor] # returning/raising/recursing. Update this variable if necessary, and [e2e-predictor] # leave `release_conn` constant throughout the function. That way, if [e2e-predictor] # the function recurses, the original value of `release_conn` will be [e2e-predictor] # passed down into the recursive call, and its value will be respected. 
[e2e-predictor] # [e2e-predictor] # See issue #651 [1] for details. [e2e-predictor] # [e2e-predictor] # [1] [e2e-predictor] release_this_conn = release_conn [e2e-predictor] [e2e-predictor] http_tunnel_required = connection_requires_http_tunnel( [e2e-predictor] self.proxy, self.proxy_config, destination_scheme [e2e-predictor] ) [e2e-predictor] [e2e-predictor] # Merge the proxy headers. Only done when not using HTTP CONNECT. We [e2e-predictor] # have to copy the headers dict so we can safely change it without those [e2e-predictor] # changes being reflected in anyone else's copy. [e2e-predictor] if not http_tunnel_required: [e2e-predictor] headers = headers.copy() # type: ignore[attr-defined] [e2e-predictor] headers.update(self.proxy_headers) # type: ignore[union-attr] [e2e-predictor] [e2e-predictor] # Must keep the exception bound to a separate variable or else Python 3 [e2e-predictor] # complains about UnboundLocalError. [e2e-predictor] err = None [e2e-predictor] [e2e-predictor] # Keep track of whether we cleanly exited the except block. This [e2e-predictor] # ensures we do proper cleanup in finally. [e2e-predictor] clean_exit = False [e2e-predictor] [e2e-predictor] # Rewind body position, if needed. Record current position [e2e-predictor] # for future rewinds in the event of a redirect/retry. [e2e-predictor] body_pos = set_file_position(body, body_pos) [e2e-predictor] [e2e-predictor] try: [e2e-predictor] # Request a connection from the queue. [e2e-predictor] timeout_obj = self._get_timeout(timeout) [e2e-predictor] conn = self._get_conn(timeout=pool_timeout) [e2e-predictor] [e2e-predictor] conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] [e2e-predictor] [e2e-predictor] # Is this a closed/new connection that requires CONNECT tunnelling? 
[e2e-predictor] if self.proxy is not None and http_tunnel_required and conn.is_closed: [e2e-predictor] try: [e2e-predictor] self._prepare_proxy(conn) [e2e-predictor] except (BaseSSLError, OSError, SocketTimeout) as e: [e2e-predictor] self._raise_timeout( [e2e-predictor] err=e, url=self.proxy.url, timeout_value=conn.timeout [e2e-predictor] ) [e2e-predictor] raise [e2e-predictor] [e2e-predictor] # If we're going to release the connection in ``finally:``, then [e2e-predictor] # the response doesn't need to know about the connection. Otherwise [e2e-predictor] # it will also try to release it and we'll have a double-release [e2e-predictor] # mess. [e2e-predictor] response_conn = conn if not release_conn else None [e2e-predictor] [e2e-predictor] # Make the request on the HTTPConnection object [e2e-predictor] response = self._make_request( [e2e-predictor] conn, [e2e-predictor] method, [e2e-predictor] url, [e2e-predictor] timeout=timeout_obj, [e2e-predictor] body=body, [e2e-predictor] headers=headers, [e2e-predictor] chunked=chunked, [e2e-predictor] retries=retries, [e2e-predictor] response_conn=response_conn, [e2e-predictor] preload_content=preload_content, [e2e-predictor] decode_content=decode_content, [e2e-predictor] **response_kw, [e2e-predictor] ) [e2e-predictor] [e2e-predictor] # Everything went great! [e2e-predictor] clean_exit = True [e2e-predictor] [e2e-predictor] except EmptyPoolError: [e2e-predictor] # Didn't get a connection from the pool, no need to clean up [e2e-predictor] clean_exit = True [e2e-predictor] release_this_conn = False [e2e-predictor] raise [e2e-predictor] [e2e-predictor] except ( [e2e-predictor] TimeoutError, [e2e-predictor] HTTPException, [e2e-predictor] OSError, [e2e-predictor] ProtocolError, [e2e-predictor] BaseSSLError, [e2e-predictor] SSLError, [e2e-predictor] CertificateError, [e2e-predictor] ProxyError, [e2e-predictor] ) as e: [e2e-predictor] # Discard the connection for these exceptions. 
It will be [e2e-predictor] # replaced during the next _get_conn() call. [e2e-predictor] clean_exit = False [e2e-predictor] new_e: Exception = e [e2e-predictor] if isinstance(e, (BaseSSLError, CertificateError)): [e2e-predictor] new_e = SSLError(e) [e2e-predictor] if isinstance( [e2e-predictor] new_e, [e2e-predictor] ( [e2e-predictor] OSError, [e2e-predictor] NewConnectionError, [e2e-predictor] TimeoutError, [e2e-predictor] SSLError, [e2e-predictor] HTTPException, [e2e-predictor] ), [e2e-predictor] ) and (conn and conn.proxy and not conn.has_connected_to_proxy): [e2e-predictor] new_e = _wrap_proxy_error(new_e, conn.proxy.scheme) [e2e-predictor] elif isinstance(new_e, (OSError, HTTPException)): [e2e-predictor] new_e = ProtocolError("Connection aborted.", new_e) [e2e-predictor] [e2e-predictor] retries = retries.increment( [e2e-predictor] method, url, error=new_e, _pool=self, _stacktrace=sys.exc_info()[2] [e2e-predictor] ) [e2e-predictor] retries.sleep() [e2e-predictor] [e2e-predictor] # Keep track of the error for the retry warning. [e2e-predictor] err = e [e2e-predictor] [e2e-predictor] finally: [e2e-predictor] if not clean_exit: [e2e-predictor] # We hit some kind of exception, handled or otherwise. We need [e2e-predictor] # to throw the connection away unless explicitly told not to. [e2e-predictor] # Close the connection, set the variable to None, and make sure [e2e-predictor] # we put the None back in the pool to avoid leaking it. [e2e-predictor] if conn: [e2e-predictor] conn.close() [e2e-predictor] conn = None [e2e-predictor] release_this_conn = True [e2e-predictor] [e2e-predictor] if release_this_conn: [e2e-predictor] # Put the connection back to be reused. If the connection is [e2e-predictor] # expired then it will be None, which will get replaced with a [e2e-predictor] # fresh connection during _get_conn. 
            self._put_conn(conn)

    if not conn:
        # Try again
        log.warning(
            "Retrying (%r) after connection broken by '%r': %s", retries, err, url
        )
>       return self.urlopen(
            method,
            url,
            body,
            headers,
            retries,
            redirect,
            assert_same_host,
            timeout=timeout,
            pool_timeout=pool_timeout,
            release_conn=release_conn,
            chunked=chunked,
            body_pos=body_pos,
            preload_content=preload_content,
            decode_content=decode_content,
            **response_kw,
        )

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:871:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
method = 'POST'
url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io...m", "memory": "128Mi"}}, "runtime": "kserve-mlserver", "storageUri": "gs://kfserving-examples/models/xgboost/iris"}}}}'
headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'})
retries = Retry(total=0, connect=None, read=None, redirect=None, status=None)
redirect = False, assert_same_host = False, timeout = None, pool_timeout = None
release_conn = True, chunked = False, body_pos = None, preload_content = True
decode_content = True, response_kw = {}
parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices', query=None, fragment=None)
destination_scheme = None, conn = None, release_this_conn = True
http_tunnel_required = False, err = None, clean_exit = False

def urlopen(  # type: ignore[override]
    self,
    method: str,
    url: str,
    body: _TYPE_BODY | None = None,
    headers: typing.Mapping[str, str] | None = None,
    retries: Retry | bool | int | None = None,
    redirect: bool = True,
    assert_same_host: bool = True,
    timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
    pool_timeout: int | None = None,
    release_conn: bool | None = None,
    chunked: bool = False,
    body_pos: _TYPE_BODY_POSITION | None = None,
    preload_content: bool = True,
    decode_content: bool = True,
    **response_kw: typing.Any,
) -> BaseHTTPResponse:
    """
    Get a connection from the pool and perform an HTTP request. This is the
    lowest level call for making a request, so you'll need to specify all
    the raw details.

    .. note::

        More commonly, it's appropriate to use a convenience method
        such as :meth:`request`.

    .. note::

        `release_conn` will only behave as expected if
        `preload_content=False` because we want to make
        `preload_content=False` the default behaviour someday soon without
        breaking backwards compatibility.

    :param method:
        HTTP request method (such as GET, POST, PUT, etc.)
    :param url:
        The URL to perform the request on.

    :param body:
        Data to send in the request body, either :class:`str`, :class:`bytes`,
        an iterable of :class:`str`/:class:`bytes`, or a file-like object.

    :param headers:
        Dictionary of custom headers to send, such as User-Agent,
        If-None-Match, etc. If None, pool headers are used. If provided,
        these headers completely replace any pool-specific headers.

    :param retries:
        Configure the number of retries to allow before raising a
        :class:`~urllib3.exceptions.MaxRetryError` exception.

        If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a
        :class:`~urllib3.util.retry.Retry` object for fine-grained control
        over different types of retries.
        Pass an integer number to retry connection errors that many times,
        but no other types of errors. Pass zero to never retry.

        If ``False``, then retries are disabled and any exception is raised
        immediately. Also, instead of raising a MaxRetryError on redirects,
        the redirect response will be returned.

    :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.

    :param redirect:
        If True, automatically handle redirects (status codes 301, 302,
        303, 307, 308). Each redirect counts as a retry. Disabling retries
        will disable redirect, too.

    :param assert_same_host:
        If ``True``, will make sure that the host of the pool requests is
        consistent else will raise HostChangedError. When ``False``, you can
        use the pool on an HTTP proxy and request foreign hosts.

    :param timeout:
        If specified, overrides the default timeout for this one
        request. It may be a float (in seconds) or an instance of
        :class:`urllib3.util.Timeout`.

    :param pool_timeout:
        If set and the pool is set to block=True, then this method will
        block for ``pool_timeout`` seconds and raise EmptyPoolError if no
        connection is available within the time period.

    :param bool preload_content:
        If True, the response's body will be preloaded into memory.

    :param bool decode_content:
        If True, will attempt to decode the body based on the
        'content-encoding' header.

    :param release_conn:
        If False, then the urlopen call will not release the connection
        back into the pool once a response is received (but will release if
        you read the entire contents of the response such as when
        `preload_content=True`). This is useful if you're not preloading
        the response's content immediately. You will need to call
        ``r.release_conn()`` on the response ``r`` to return the connection
        back into the pool. If None, it takes the value of ``preload_content``
        which defaults to ``True``.

    :param bool chunked:
        If True, urllib3 will send the body using chunked transfer
        encoding. Otherwise, urllib3 will send the body using the standard
        content-length form. Defaults to False.
    :param int body_pos:
        Position to seek to in file-like body in the event of a retry or
        redirect. Typically this won't need to be set because urllib3 will
        auto-populate the value when needed.
    """
    parsed_url = parse_url(url)
    destination_scheme = parsed_url.scheme

    if headers is None:
        headers = self.headers

    if not isinstance(retries, Retry):
        retries = Retry.from_int(retries, redirect=redirect, default=self.retries)

    if release_conn is None:
        release_conn = preload_content

    # Check host
    if assert_same_host and not self.is_same_host(url):
        raise HostChangedError(self, url, retries)

    # Ensure that the URL we're connecting to is properly encoded
    if url.startswith("/"):
        url = to_str(_encode_target(url))
    else:
        url = to_str(parsed_url.url)

    conn = None

    # Track whether `conn` needs to be released before
    # returning/raising/recursing. Update this variable if necessary, and
    # leave `release_conn` constant throughout the function. That way, if
    # the function recurses, the original value of `release_conn` will be
    # passed down into the recursive call, and its value will be respected.
    #
    # See issue #651 [1] for details.
    #
    # [1]
    release_this_conn = release_conn

    http_tunnel_required = connection_requires_http_tunnel(
        self.proxy, self.proxy_config, destination_scheme
    )

    # Merge the proxy headers. Only done when not using HTTP CONNECT. We
    # have to copy the headers dict so we can safely change it without those
    # changes being reflected in anyone else's copy.
    if not http_tunnel_required:
        headers = headers.copy()  # type: ignore[attr-defined]
        headers.update(self.proxy_headers)  # type: ignore[union-attr]

    # Must keep the exception bound to a separate variable or else Python 3
    # complains about UnboundLocalError.
    err = None

    # Keep track of whether we cleanly exited the except block. This
    # ensures we do proper cleanup in finally.
    clean_exit = False

    # Rewind body position, if needed. Record current position
    # for future rewinds in the event of a redirect/retry.
    body_pos = set_file_position(body, body_pos)

    try:
        # Request a connection from the queue.
        timeout_obj = self._get_timeout(timeout)
        conn = self._get_conn(timeout=pool_timeout)

        conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]

        # Is this a closed/new connection that requires CONNECT tunnelling?
        if self.proxy is not None and http_tunnel_required and conn.is_closed:
            try:
                self._prepare_proxy(conn)
            except (BaseSSLError, OSError, SocketTimeout) as e:
                self._raise_timeout(
                    err=e, url=self.proxy.url, timeout_value=conn.timeout
                )
                raise

        # If we're going to release the connection in ``finally:``, then
        # the response doesn't need to know about the connection. Otherwise
        # it will also try to release it and we'll have a double-release
        # mess.
        response_conn = conn if not release_conn else None

        # Make the request on the HTTPConnection object
        response = self._make_request(
            conn,
            method,
            url,
            timeout=timeout_obj,
            body=body,
            headers=headers,
            chunked=chunked,
            retries=retries,
            response_conn=response_conn,
            preload_content=preload_content,
            decode_content=decode_content,
            **response_kw,
        )

        # Everything went great!
        clean_exit = True

    except EmptyPoolError:
        # Didn't get a connection from the pool, no need to clean up
        clean_exit = True
        release_this_conn = False
        raise

    except (
        TimeoutError,
        HTTPException,
        OSError,
        ProtocolError,
        BaseSSLError,
        SSLError,
        CertificateError,
        ProxyError,
    ) as e:
        # Discard the connection for these exceptions. It will be
        # replaced during the next _get_conn() call.
        clean_exit = False
        new_e: Exception = e
        if isinstance(e, (BaseSSLError, CertificateError)):
            new_e = SSLError(e)
        if isinstance(
            new_e,
            (
                OSError,
                NewConnectionError,
                TimeoutError,
                SSLError,
                HTTPException,
            ),
        ) and (conn and conn.proxy and not conn.has_connected_to_proxy):
            new_e = _wrap_proxy_error(new_e, conn.proxy.scheme)
        elif isinstance(new_e, (OSError, HTTPException)):
            new_e = ProtocolError("Connection aborted.", new_e)

>       retries = retries.increment(
            method, url, error=new_e, _pool=self, _stacktrace=sys.exc_info()[2]
        )

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:841:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = Retry(total=0, connect=None, read=None, redirect=None, status=None)
method = 'POST'
url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
response = None
error = NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.c...a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")
_pool =
_stacktrace =

def increment(
    self,
    method: str | None = None,
    url: str | None = None,
    response: BaseHTTPResponse | None = None,
    error: Exception | None = None,
    _pool: ConnectionPool | None = None,
    _stacktrace: TracebackType | None = None,
) -> Self:
    """Return a new Retry object with incremented retry counters.

    :param response: A response object, or None, if the server did not
        return a response.
    :type response: :class:`~urllib3.response.BaseHTTPResponse`
    :param Exception error: An error encountered during the request, or
        None if the response was received successfully.

    :return: A new ``Retry`` object.
    """
    if self.total is False and error:
        # Disabled, indicate to re-raise the error.
        raise reraise(type(error), error, _stacktrace)

    total = self.total
    if total is not None:
        total -= 1

    connect = self.connect
    read = self.read
    redirect = self.redirect
    status_count = self.status
    other = self.other
    cause = "unknown"
    status = None
    redirect_location = None

    if error and self._is_connection_error(error):
        # Connect retry?
        if connect is False:
            raise reraise(type(error), error, _stacktrace)
        elif connect is not None:
            connect -= 1

    elif error and self._is_read_error(error):
        # Read retry?
        if read is False or method is None or not self._is_method_retryable(method):
            raise reraise(type(error), error, _stacktrace)
        elif read is not None:
            read -= 1

    elif error:
        # Other retry?
        if other is not None:
            other -= 1

    elif response and response.get_redirect_location():
        # Redirect retry?
        if redirect is not None:
            redirect -= 1
        cause = "too many redirects"
        response_redirect_location = response.get_redirect_location()
        if response_redirect_location:
            redirect_location = response_redirect_location
        status = response.status

    else:
        # Incrementing because of a server error like a 500 in
        # status_forcelist and the given method is in the allowed_methods
        cause = ResponseError.GENERIC_ERROR
        if response and response.status:
            if status_count is not None:
                status_count -= 1
            cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
            status = response.status

    history = self.history + (
        RequestHistory(method, url, error, status, redirect_location),
    )

    new_retry = self.new(
        total=total,
        connect=connect,
        read=read,
        redirect=redirect,
        status=status_count,
        other=other,
        history=history,
    )

    if new_retry.is_exhausted():
        reason = error or ResponseError(cause)
>       raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
E       urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Max retries exceeded with url: /apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices (Caused by NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)"))

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError
------------------------------ Captured log call -------------------------------
WARNING urllib3.connectionpool:connectionpool.py:868 Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")': /apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices
WARNING urllib3.connectionpool:connectionpool.py:868 Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")': /apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices
WARNING urllib3.connectionpool:connectionpool.py:868 Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")': /apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices
_______________________________ test_xgboost_v2 ________________________________
[gw0] linux -- Python 3.11.13 /workspace/source/python/kserve/.venv/bin/python

self =

def _new_conn(self) -> socket.socket:
    """Establish a socket connection and set nodelay settings on it.

    :return: New socket connection.
    """
    try:
>       sock = connection.create_connection(
            (self._dns_host, self.port),
            self.timeout,
            source_address=self.source_address,
            socket_options=self.socket_options,
        )

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connection.py:204:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

address = ('a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', 6443)
timeout = None, source_address = None, socket_options = [(6, 1, 1)]

def create_connection(
    address: tuple[str, int],
    timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
    source_address: tuple[str, int] | None = None,
    socket_options: _TYPE_SOCKET_OPTIONS | None = None,
) -> socket.socket:
    """Connect to *address* and return the socket object.

    Convenience function. Connect to *address* (a 2-tuple ``(host,
    port)``) and return the socket object. Passing the optional
    *timeout* parameter will set the timeout on the socket instance
    before attempting to connect. If no *timeout* is supplied, the
    global default timeout setting returned by :func:`socket.getdefaulttimeout`
    is used. If *source_address* is set it must be a tuple of (host, port)
    for the socket to bind as a source address before making the connection.
    An host of '' or port 0 tells the OS to use the default.
    """

    host, port = address
    if host.startswith("["):
        host = host.strip("[]")
    err = None

    # Using the value from allowed_gai_family() in the context of getaddrinfo lets
    # us select whether to work with IPv4 DNS records, IPv6 records, or both.
    # The original create_connection function always returns all records.
    family = allowed_gai_family()

    try:
        host.encode("idna")
    except UnicodeError:
        raise LocationParseError(f"'{host}', label empty or too long") from None

>   for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/util/connection.py:60:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

host = 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com'
port = 6443, family =
type = , proto = 0, flags = 0

def getaddrinfo(host, port, family=0, type=0, proto=0, flags=0):
    """Resolve host and port into list of address info entries.

    Translate the host/port argument into a sequence of 5-tuples that contain
    all the necessary arguments for creating a socket connected to that service.
    host is a domain name, a string representation of an IPv4/v6 address or
    None. port is a string service name such as 'http', a numeric port number or
    None. By passing None as the value of host and port, you can pass NULL to
    the underlying C API.

    The family, type and proto arguments can be optionally specified in order to
    narrow the list of addresses returned. Passing zero as a value for each of
    these arguments selects the full range of results.
    """
    # We override this function since we want to translate the numeric family
    # and socket type values to enum constants.
    addrlist = []
>   for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
E   socket.gaierror: [Errno -2] Name or service not known

/usr/lib64/python3.11/socket.py:974: gaierror

The above exception was the direct cause of the following exception:

self =
method = 'POST'
url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io...", "memory": "128Mi"}}, "runtime": "kserve-xgbserver", "storageUri": "gs://kfserving-examples/models/xgboost/iris"}}}}'
headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'})
retries = Retry(total=0, connect=None, read=None, redirect=None, status=None)
redirect = False, assert_same_host = False, timeout = None, pool_timeout = None
release_conn = True, chunked = False, body_pos = None, preload_content = True
decode_content = True, response_kw = {}
parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices', query=None, fragment=None)
destination_scheme = None, conn = None, release_this_conn = True
http_tunnel_required = False, err = None, clean_exit = False

def urlopen(  # type: ignore[override]
    self,
    method: str,
    url: str,
    body: _TYPE_BODY | None = None,
    headers: typing.Mapping[str, str] | None = None,
    retries: Retry | bool | int | None = None,
    redirect: bool = True,
    assert_same_host: bool = True,
    timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
    pool_timeout: int | None = None,
    release_conn: bool | None = None,
    chunked: bool = False,
    body_pos: _TYPE_BODY_POSITION | None = None,
    preload_content: bool = True,
    decode_content: bool = True,
    **response_kw: typing.Any,
) -> BaseHTTPResponse:
    """
    Get a connection from the pool and perform an HTTP request. This is the
    lowest level call for making a request, so you'll need to specify all
    the raw details.

    .. note::

        More commonly, it's appropriate to use a convenience method
        such as :meth:`request`.

    .. note::

        `release_conn` will only behave as expected if
        `preload_content=False` because we want to make
        `preload_content=False` the default behaviour someday soon without
        breaking backwards compatibility.
    :param method:
        HTTP request method (such as GET, POST, PUT, etc.)

    :param url:
        The URL to perform the request on.

    :param body:
        Data to send in the request body, either :class:`str`, :class:`bytes`,
        an iterable of :class:`str`/:class:`bytes`, or a file-like object.

    :param headers:
        Dictionary of custom headers to send, such as User-Agent,
        If-None-Match, etc. If None, pool headers are used. If provided,
        these headers completely replace any pool-specific headers.

    :param retries:
        Configure the number of retries to allow before raising a
        :class:`~urllib3.exceptions.MaxRetryError` exception.

        If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a
        :class:`~urllib3.util.retry.Retry` object for fine-grained control
        over different types of retries.
        Pass an integer number to retry connection errors that many times,
        but no other types of errors. Pass zero to never retry.

        If ``False``, then retries are disabled and any exception is raised
        immediately. Also, instead of raising a MaxRetryError on redirects,
        the redirect response will be returned.

    :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.

    :param redirect:
        If True, automatically handle redirects (status codes 301, 302,
        303, 307, 308). Each redirect counts as a retry. Disabling retries
        will disable redirect, too.
    :param assert_same_host:
        If ``True``, will make sure that the host of the pool requests is
        consistent else will raise HostChangedError. When ``False``, you can
        use the pool on an HTTP proxy and request foreign hosts.

    :param timeout:
        If specified, overrides the default timeout for this one
        request. It may be a float (in seconds) or an instance of
        :class:`urllib3.util.Timeout`.

    :param pool_timeout:
        If set and the pool is set to block=True, then this method will
        block for ``pool_timeout`` seconds and raise EmptyPoolError if no
        connection is available within the time period.

    :param bool preload_content:
        If True, the response's body will be preloaded into memory.

    :param bool decode_content:
        If True, will attempt to decode the body based on the
        'content-encoding' header.

    :param release_conn:
        If False, then the urlopen call will not release the connection
        back into the pool once a response is received (but will release if
        you read the entire contents of the response such as when
        `preload_content=True`). This is useful if you're not preloading
        the response's content immediately. You will need to call
        ``r.release_conn()`` on the response ``r`` to return the connection
        back into the pool. If None, it takes the value of ``preload_content``
        which defaults to ``True``.

    :param bool chunked:
        If True, urllib3 will send the body using chunked transfer
        encoding. Otherwise, urllib3 will send the body using the standard
        content-length form. Defaults to False.

    :param int body_pos:
        Position to seek to in file-like body in the event of a retry or
        redirect. Typically this won't need to be set because urllib3 will
        auto-populate the value when needed.
    """
    parsed_url = parse_url(url)
    destination_scheme = parsed_url.scheme

    if headers is None:
        headers = self.headers

    if not isinstance(retries, Retry):
        retries = Retry.from_int(retries, redirect=redirect, default=self.retries)

    if release_conn is None:
        release_conn = preload_content

    # Check host
    if assert_same_host and not self.is_same_host(url):
        raise HostChangedError(self, url, retries)

    # Ensure that the URL we're connecting to is properly encoded
    if url.startswith("/"):
        url = to_str(_encode_target(url))
    else:
        url = to_str(parsed_url.url)

    conn = None

    # Track whether `conn` needs to be released before
    # returning/raising/recursing. Update this variable if necessary, and
    # leave `release_conn` constant throughout the function. That way, if
    # the function recurses, the original value of `release_conn` will be
    # passed down into the recursive call, and its value will be respected.
    #
    # See issue #651 [1] for details.
    #
    # [1]
    release_this_conn = release_conn

    http_tunnel_required = connection_requires_http_tunnel(
        self.proxy, self.proxy_config, destination_scheme
    )

    # Merge the proxy headers. Only done when not using HTTP CONNECT. We
    # have to copy the headers dict so we can safely change it without those
    # changes being reflected in anyone else's copy.
    if not http_tunnel_required:
        headers = headers.copy()  # type: ignore[attr-defined]
        headers.update(self.proxy_headers)  # type: ignore[union-attr]

    # Must keep the exception bound to a separate variable or else Python 3
    # complains about UnboundLocalError.
    err = None

    # Keep track of whether we cleanly exited the except block. This
    # ensures we do proper cleanup in finally.
    clean_exit = False

    # Rewind body position, if needed. Record current position
    # for future rewinds in the event of a redirect/retry.
    body_pos = set_file_position(body, body_pos)

    try:
        # Request a connection from the queue.
        timeout_obj = self._get_timeout(timeout)
        conn = self._get_conn(timeout=pool_timeout)

        conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]

        # Is this a closed/new connection that requires CONNECT tunnelling?
        if self.proxy is not None and http_tunnel_required and conn.is_closed:
            try:
                self._prepare_proxy(conn)
            except (BaseSSLError, OSError, SocketTimeout) as e:
                self._raise_timeout(
                    err=e, url=self.proxy.url, timeout_value=conn.timeout
                )
                raise

        # If we're going to release the connection in ``finally:``, then
        # the response doesn't need to know about the connection. Otherwise
        # it will also try to release it and we'll have a double-release
        # mess.
        response_conn = conn if not release_conn else None

        # Make the request on the HTTPConnection object
>       response = self._make_request(
            conn,
            method,
            url,
            timeout=timeout_obj,
            body=body,
            headers=headers,
            chunked=chunked,
            retries=retries,
            response_conn=response_conn,
            preload_content=preload_content,
            decode_content=decode_content,
            **response_kw,
        )

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:787:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
conn =
method = 'POST'
url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io...", "memory": "128Mi"}}, "runtime": "kserve-xgbserver", "storageUri": "gs://kfserving-examples/models/xgboost/iris"}}}}'
headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'})
retries = Retry(total=0, connect=None, read=None, redirect=None, status=None)
timeout = Timeout(connect=None, read=None, total=None), chunked = False
response_conn = None, preload_content = True, decode_content = True
enforce_content_length = True

def _make_request(
    self,
    conn: BaseHTTPConnection,
    method: str,
    url: str,
    body: _TYPE_BODY | None = None,
    headers: typing.Mapping[str, str] | None = None,
    retries: Retry | None = None,
    timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
    chunked: bool = False,
    response_conn: BaseHTTPConnection | None = None,
    preload_content: bool = True,
    decode_content: bool = True,
    enforce_content_length: bool = True,
) -> BaseHTTPResponse:
    """
    Perform a request on a given urllib connection object taken from our
    pool.

    :param conn:
        a connection from one of our connection pools

    :param method:
        HTTP request method (such as GET, POST, PUT, etc.)

    :param url:
        The URL to perform the request on.

    :param body:
        Data to send in the request body, either :class:`str`, :class:`bytes`,
        an iterable of :class:`str`/:class:`bytes`, or a file-like object.

    :param headers:
        Dictionary of custom headers to send, such as User-Agent,
        If-None-Match, etc. If None, pool headers are used. If provided,
        these headers completely replace any pool-specific headers.

    :param retries:
        Configure the number of retries to allow before raising a
        :class:`~urllib3.exceptions.MaxRetryError` exception.

        Pass ``None`` to retry until you receive a response. Pass a
        :class:`~urllib3.util.retry.Retry` object for fine-grained control
        over different types of retries.
        Pass an integer number to retry connection errors that many times,
        but no other types of errors. Pass zero to never retry.

        If ``False``, then retries are disabled and any exception is raised
        immediately. Also, instead of raising a MaxRetryError on redirects,
        the redirect response will be returned.

    :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.

    :param timeout:
        If specified, overrides the default timeout for this one
        request. It may be a float (in seconds) or an instance of
        :class:`urllib3.util.Timeout`.

    :param chunked:
        If True, urllib3 will send the body using chunked transfer
        encoding. Otherwise, urllib3 will send the body using the standard
        content-length form. Defaults to False.

    :param response_conn:
        Set this to ``None`` if you will handle releasing the connection or
        set the connection to have the response release it.

    :param preload_content:
        If True, the response's body will be preloaded during construction.

    :param decode_content:
        If True, will attempt to decode the body based on the
        'content-encoding' header.

    :param enforce_content_length:
        Enforce content length checking. Body returned by server must match
        value of Content-Length header, if present. Otherwise, raise error.
    """
    self.num_requests += 1

    timeout_obj = self._get_timeout(timeout)
    timeout_obj.start_connect()
    conn.timeout = Timeout.resolve_default_timeout(timeout_obj.connect_timeout)

    try:
        # Trigger any extra validation we need to do.
        try:
            self._validate_conn(conn)
        except (SocketTimeout, BaseSSLError) as e:
            self._raise_timeout(err=e, url=url, timeout_value=conn.timeout)
            raise

        # _validate_conn() starts the connection to an HTTPS proxy
        # so we need to wrap errors with 'ProxyError' here too.
    except (
        OSError,
        NewConnectionError,
        TimeoutError,
        BaseSSLError,
        CertificateError,
        SSLError,
    ) as e:
        new_e: Exception = e
        if isinstance(e, (BaseSSLError, CertificateError)):
            new_e = SSLError(e)
        # If the connection didn't successfully connect to it's proxy
        # then there
        if isinstance(
            new_e, (OSError, NewConnectionError, TimeoutError, SSLError)
        ) and (conn and conn.proxy and not conn.has_connected_to_proxy):
            new_e = _wrap_proxy_error(new_e, conn.proxy.scheme)
>       raise new_e

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:488:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
conn =
method = 'POST'
url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io...", "memory": "128Mi"}}, "runtime": "kserve-xgbserver", "storageUri": "gs://kfserving-examples/models/xgboost/iris"}}}}'
headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'})
retries = Retry(total=0, connect=None, read=None, redirect=None, status=None)
timeout = Timeout(connect=None, read=None, total=None), chunked = False
response_conn = None, preload_content = True, decode_content = True
enforce_content_length = True

def _make_request(
    self,
    conn: BaseHTTPConnection,
    method: str,
    url: str,
    body: _TYPE_BODY | None = None,
    headers: typing.Mapping[str, str] | None = None,
    retries: Retry | None = None,
    timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
    chunked: bool = False,
    response_conn: BaseHTTPConnection | None = None,
    preload_content: bool = True,
    decode_content: bool = True,
    enforce_content_length: bool = True,
) -> BaseHTTPResponse:
    """
    Perform a request on a given urllib connection object taken from our
    pool.

    :param conn:
        a connection from one of our connection pools

    :param method:
        HTTP request method (such as GET, POST, PUT, etc.)

    :param url:
        The URL to perform the request on.

    :param body:
        Data to send in the request body, either :class:`str`, :class:`bytes`,
        an iterable of :class:`str`/:class:`bytes`, or a file-like object.

    :param headers:
        Dictionary of custom headers to send, such as User-Agent,
        If-None-Match, etc. If None, pool headers are used. If provided,
        these headers completely replace any pool-specific headers.

    :param retries:
        Configure the number of retries to allow before raising a
        :class:`~urllib3.exceptions.MaxRetryError` exception.

        Pass ``None`` to retry until you receive a response. Pass a
        :class:`~urllib3.util.retry.Retry` object for fine-grained control
        over different types of retries.
        Pass an integer number to retry connection errors that many times,
        but no other types of errors. Pass zero to never retry.

        If ``False``, then retries are disabled and any exception is raised
        immediately. Also, instead of raising a MaxRetryError on redirects,
        the redirect response will be returned.

    :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.

    :param timeout:
        If specified, overrides the default timeout for this one
        request. It may be a float (in seconds) or an instance of
        :class:`urllib3.util.Timeout`.

    :param chunked:
        If True, urllib3 will send the body using chunked transfer
        encoding. Otherwise, urllib3 will send the body using the standard
        content-length form. Defaults to False.

    :param response_conn:
        Set this to ``None`` if you will handle releasing the connection or
        set the connection to have the response release it.

    :param preload_content:
        If True, the response's body will be preloaded during construction.

    :param decode_content:
        If True, will attempt to decode the body based on the
        'content-encoding' header.

    :param enforce_content_length:
        Enforce content length checking. Body returned by server must match
        value of Content-Length header, if present. Otherwise, raise error.
    """
    self.num_requests += 1

    timeout_obj = self._get_timeout(timeout)
    timeout_obj.start_connect()
    conn.timeout = Timeout.resolve_default_timeout(timeout_obj.connect_timeout)

    try:
        # Trigger any extra validation we need to do.
        try:
>           self._validate_conn(conn)

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:464:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
conn =

def _validate_conn(self, conn: BaseHTTPConnection) -> None:
    """
    Called right before a request is made, after the socket is created.
    """
    super()._validate_conn(conn)

    # Force connect early to allow us to validate the connection.
    if conn.is_closed:
>       conn.connect()

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:1093:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =

def connect(self) -> None:
    # Today we don't need to be doing this step before the /actual/ socket
    # connection, however in the future we'll need to decide whether to
    # create a new socket or re-use an existing "shared" socket as a part
    # of the HTTP/2 handshake dance.
    if self._tunnel_host is not None and self._tunnel_port is not None:
        probe_http2_host = self._tunnel_host
        probe_http2_port = self._tunnel_port
    else:
        probe_http2_host = self.host
        probe_http2_port = self.port

    # Check if the target origin supports HTTP/2.
    # If the value comes back as 'None' it means that the current thread
    # is probing for HTTP/2 support. Otherwise, we're waiting for another
    # probe to complete, or we get a value right away.
    target_supports_http2: bool | None
    if "h2" in ssl_.ALPN_PROTOCOLS:
        target_supports_http2 = http2_probe.acquire_and_get(
            host=probe_http2_host, port=probe_http2_port
        )
    else:
        # If HTTP/2 isn't going to be offered it doesn't matter if
        # the target supports HTTP/2. Don't want to make a probe.
        target_supports_http2 = False

    if self._connect_callback is not None:
        self._connect_callback(
            "before connect",
            thread_id=threading.get_ident(),
            target_supports_http2=target_supports_http2,
        )

    try:
        sock: socket.socket | ssl.SSLSocket
>       self.sock = sock = self._new_conn()

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connection.py:759:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =

def _new_conn(self) -> socket.socket:
    """Establish a socket connection and set nodelay settings on it.

    :return: New socket connection.
    """
    try:
        sock = connection.create_connection(
            (self._dns_host, self.port),
            self.timeout,
            source_address=self.source_address,
            socket_options=self.socket_options,
        )
    except socket.gaierror as e:
>       raise NameResolutionError(self.host, self, e) from e
E       urllib3.exceptions.NameResolutionError: HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connection.py:211: NameResolutionError

The above exception was the direct cause of the following exception:

rest_v2_client =

@pytest.mark.predictor
@pytest.mark.path_based_routing
@pytest.mark.asyncio(scope="session")
async def test_xgboost_v2(rest_v2_client):
    service_name = "isvc-xgboost-v2"
    predictor = V1beta1PredictorSpec(
        min_replicas=1,
        model=V1beta1ModelSpec(
            model_format=V1beta1ModelFormat(
                name="xgboost",
            ),
            runtime="kserve-xgbserver",
            storage_uri="gs://kfserving-examples/models/xgboost/iris",
            resources=V1ResourceRequirements(
                requests={"cpu": "50m", "memory": "128Mi"},
                limits={"cpu": "100m", "memory": "1024Mi"},
            ),
        ),
    )

    isvc = V1beta1InferenceService(
        api_version=constants.KSERVE_V1BETA1,
        kind=constants.KSERVE_KIND_INFERENCESERVICE,
        metadata=client.V1ObjectMeta(
            name=service_name,
            namespace=KSERVE_TEST_NAMESPACE,
            labels={
                constants.KSERVE_LABEL_NETWORKING_VISIBILITY: constants.KSERVE_LABEL_NETWORKING_VISIBILITY_EXPOSED,
            },
        ),
        spec=V1beta1InferenceServiceSpec(predictor=predictor),
    )

    kserve_client = KServeClient(
        config_file=os.environ.get("KUBECONFIG", "~/.kube/config")
    )
>   kserve_client.create(isvc)

predictor/test_xgboost.py:321:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
inferenceservice = {'api_version': 'serving.kserve.io/v1beta1',
 'kind': 'InferenceService',
 'metadata': {'annotations': None,
 ...
 'worker_spec': None,
 'xgboost': None},
 'transformer': None},
 'status': None}
namespace = 'kserve-ci-e2e-test', watch = False, timeout_seconds = 600

def create(
    self, inferenceservice, namespace=None, watch=False, timeout_seconds=600
):  # pylint:disable=inconsistent-return-statements
    """
    Create the inference service
    :param inferenceservice: inference service object
    :param namespace: defaults to current or default namespace
    :param watch: True to watch the created service until timeout elapsed or status is ready
    :param timeout_seconds: timeout seconds for watch, default to 600s
    :return: created inference service
    """

    version = inferenceservice.api_version.split("/")[1]

    if namespace is None:
        namespace = utils.get_isvc_namespace(inferenceservice)

    try:
>       outputs = self.api_instance.create_namespaced_custom_object(
            constants.KSERVE_GROUP,
            version,
            namespace,
            constants.KSERVE_PLURAL_INFERENCESERVICE,
            inferenceservice,
        )

../../python/kserve/kserve/api/kserve_client.py:145:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
group = 'serving.kserve.io', version = 'v1beta1'
namespace = 'kserve-ci-e2e-test', plural = 'inferenceservices'
body = {'api_version': 'serving.kserve.io/v1beta1',
 'kind': 'InferenceService',
 'metadata': {'annotations': None,
 ...
 'worker_spec': None,
 'xgboost': None},
 'transformer': None},
 'status': None}
kwargs = {'_return_http_data_only': True}

def create_namespaced_custom_object(self, group, version, namespace, plural, body, **kwargs):  # noqa: E501
    """create_namespaced_custom_object  # noqa: E501

    Creates a namespace scoped Custom object  # noqa: E501
    This method makes a synchronous HTTP request by default. To make an
    asynchronous HTTP request, please pass async_req=True
    >>> thread = api.create_namespaced_custom_object(group, version, namespace, plural, body, async_req=True)
    >>> result = thread.get()

    :param async_req bool: execute request asynchronously
    :param str group: The custom resource's group name (required)
    :param str version: The custom resource's version (required)
    :param str namespace: The custom resource's namespace (required)
    :param str plural: The custom resource's plural name. For TPRs this would be lowercase plural kind. (required)
    :param object body: The JSON schema of the Resource to create. (required)
    :param str pretty: If 'true', then the output is pretty printed.
    :param str dry_run: When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed
    :param str field_manager: fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint.
    :param str field_validation: fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. (optional)
    :param _preload_content: if False, the urllib3.HTTPResponse object will
                             be returned without reading/decoding response
                             data. Default is True.
    :param _request_timeout: timeout setting for this request. If one
                             number provided, it will be total request
                             timeout. It can also be a pair (tuple) of
                             (connection, read) timeouts.
    :return: object
             If the method is called asynchronously,
             returns the request thread.
    """
    kwargs['_return_http_data_only'] = True
>   return self.create_namespaced_custom_object_with_http_info(group, version, namespace, plural, body, **kwargs)  # noqa: E501

../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api/custom_objects_api.py:231:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
group = 'serving.kserve.io', version = 'v1beta1'
namespace = 'kserve-ci-e2e-test', plural = 'inferenceservices'
body = {'api_version': 'serving.kserve.io/v1beta1',
 'kind': 'InferenceService',
 'metadata': {'annotations': None,
 ...
 'worker_spec': None,
 'xgboost': None},
 'transformer': None},
 'status': None}
kwargs = {'_return_http_data_only': True}
local_var_params = {'_return_http_data_only': True, 'all_params': ['group', 'version', 'namespace', 'plural', 'body', 'pretty', ...], 'au...
 'worker_spec': None,
 'xgboost': None},
 'transformer': None},
 'status': None}, ...}
all_params = ['group', 'version', 'namespace', 'plural', 'body', 'pretty', ...]
key = '_return_http_data_only', val = True, collection_formats = {}
path_params = {'group': 'serving.kserve.io', 'namespace': 'kserve-ci-e2e-test', 'plural': 'inferenceservices', 'version': 'v1beta1'}
query_params = []

def create_namespaced_custom_object_with_http_info(self, group, version, namespace, plural, body, **kwargs):  # noqa: E501
    """create_namespaced_custom_object  # noqa: E501

    Creates a namespace scoped Custom object  # noqa: E501
    This method makes a synchronous HTTP request by default. To make an
    asynchronous HTTP request, please pass async_req=True
    >>> thread = api.create_namespaced_custom_object_with_http_info(group, version, namespace, plural, body, async_req=True)
    >>> result = thread.get()

    :param async_req bool: execute request asynchronously
    :param str group: The custom resource's group name (required)
    :param str version: The custom resource's version (required)
    :param str namespace: The custom resource's namespace (required)
    :param str plural: The custom resource's plural name. For TPRs this would be lowercase plural kind. (required)
    :param object body: The JSON schema of the Resource to create. (required)
    :param str pretty: If 'true', then the output is pretty printed.
    :param str dry_run: When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed
    :param str field_manager: fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint.
    :param str field_validation: fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. (optional)
    :param _return_http_data_only: response data without head status code
                                   and headers
    :param _preload_content: if False, the urllib3.HTTPResponse object will
                             be returned without reading/decoding response
                             data. Default is True.
    :param _request_timeout: timeout setting for this request. If one
                             number provided, it will be total request
                             timeout. It can also be a pair (tuple) of
                             (connection, read) timeouts.
    :return: tuple(object, status_code(int), headers(HTTPHeaderDict))
             If the method is called asynchronously,
             returns the request thread.
        """

        local_var_params = locals()

        all_params = [
            'group',
            'version',
            'namespace',
            'plural',
            'body',
            'pretty',
            'dry_run',
            'field_manager',
            'field_validation'
        ]
        all_params.extend(
            [
                'async_req',
                '_return_http_data_only',
                '_preload_content',
                '_request_timeout'
            ]
        )

        for key, val in six.iteritems(local_var_params['kwargs']):
            if key not in all_params:
                raise ApiTypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method create_namespaced_custom_object" % key
                )
            local_var_params[key] = val
        del local_var_params['kwargs']
        # verify the required parameter 'group' is set
        if self.api_client.client_side_validation and ('group' not in local_var_params or  # noqa: E501
                                                       local_var_params['group'] is None):  # noqa: E501
            raise ApiValueError("Missing the required parameter `group` when calling `create_namespaced_custom_object`")  # noqa: E501
        # verify the required parameter 'version' is set
        if self.api_client.client_side_validation and ('version' not in local_var_params or  # noqa: E501
                                                       local_var_params['version'] is None):  # noqa: E501
            raise ApiValueError("Missing the required parameter `version` when calling `create_namespaced_custom_object`")  # noqa: E501
        # verify the required parameter 'namespace' is set
        if self.api_client.client_side_validation and ('namespace' not in local_var_params or  # noqa: E501
                                                       local_var_params['namespace'] is None):  # noqa: E501
            raise ApiValueError("Missing the required parameter `namespace` when calling `create_namespaced_custom_object`")  # noqa: E501
        # verify the required parameter 'plural' is set
        if self.api_client.client_side_validation and ('plural' not in local_var_params or  # noqa: E501
                                                       local_var_params['plural'] is None):  # noqa: E501
            raise ApiValueError("Missing the required parameter `plural` when calling `create_namespaced_custom_object`")  # noqa: E501
        # verify the required parameter 'body' is set
        if self.api_client.client_side_validation and ('body' not in local_var_params or  # noqa: E501
                                                       local_var_params['body'] is None):  # noqa: E501
            raise ApiValueError("Missing the required parameter `body` when calling `create_namespaced_custom_object`")  # noqa: E501

        collection_formats = {}

        path_params = {}
        if 'group' in local_var_params:
            path_params['group'] = local_var_params['group']  # noqa: E501
        if 'version' in local_var_params:
            path_params['version'] = local_var_params['version']  # noqa: E501
        if 'namespace' in local_var_params:
            path_params['namespace'] = local_var_params['namespace']  # noqa: E501
        if 'plural' in local_var_params:
            path_params['plural'] = local_var_params['plural']  # noqa: E501

        query_params = []
        if 'pretty' in local_var_params and local_var_params['pretty'] is not None:  # noqa: E501
            query_params.append(('pretty', local_var_params['pretty']))  # noqa: E501
        if 'dry_run' in local_var_params and local_var_params['dry_run'] is not None:  # noqa: E501
            query_params.append(('dryRun', local_var_params['dry_run']))  # noqa: E501
        if 'field_manager' in local_var_params and local_var_params['field_manager'] is not None:  # noqa: E501
            query_params.append(('fieldManager', local_var_params['field_manager']))  # noqa: E501
        if 'field_validation' in local_var_params and local_var_params['field_validation'] is not None:  # noqa: E501
            query_params.append(('fieldValidation', local_var_params['field_validation']))  # noqa: E501

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None
        if 'body' in local_var_params:
            body_params = local_var_params['body']
        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.select_header_accept(
            ['application/json'])  # noqa: E501

        # Authentication setting
        auth_settings = ['BearerToken']  # noqa: E501

>       return self.api_client.call_api(
            '/apis/{group}/{version}/namespaces/{namespace}/{plural}', 'POST',
            path_params,
            query_params,
            header_params,
            body=body_params,
            post_params=form_params,
            files=local_var_files,
            response_type='object',  # noqa: E501
            auth_settings=auth_settings,
            async_req=local_var_params.get('async_req'),
            _return_http_data_only=local_var_params.get('_return_http_data_only'),  # noqa: E501
            _preload_content=local_var_params.get('_preload_content', True),
            _request_timeout=local_var_params.get('_request_timeout'),
            collection_formats=collection_formats)
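The generated client frame above assembles the request from a path template (`/apis/{group}/{version}/namespaces/{namespace}/{plural}`) plus query parameters before handing off to `call_api`. As a rough stdlib-only sketch of what that URL construction amounts to (the helper name and the simplified quoting are assumptions for illustration, not the actual kubernetes client code):

```python
from urllib.parse import quote, urlencode

def build_custom_object_url(group, version, namespace, plural, query_params):
    # Hypothetical helper mirroring the generated client's behaviour:
    # substitute each {placeholder} in the path template, URL-quoting the
    # values, then append the encoded query string (dryRun, fieldManager, ...).
    path = "/apis/{group}/{version}/namespaces/{namespace}/{plural}"
    for key, value in [("group", group), ("version", version),
                       ("namespace", namespace), ("plural", plural)]:
        path = path.replace("{%s}" % key, quote(str(value), safe=""))
    if query_params:
        path += "?" + urlencode(query_params)
    return path

url = build_custom_object_url(
    "serving.kserve.io", "v1beta1", "kserve-ci-e2e-test", "inferenceservices",
    [("fieldManager", "e2e-test")],
)
```

With the values from the traceback locals this yields the same resource path the client POSTs to, `/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices` (the `fieldManager=e2e-test` query parameter here is an invented example value).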
../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api/custom_objects_api.py:354:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
resource_path = '/apis/{group}/{version}/namespaces/{namespace}/{plural}'
method = 'POST'
path_params = {'group': 'serving.kserve.io', 'namespace': 'kserve-ci-e2e-test', 'plural': 'inferenceservices', 'version': 'v1beta1'}
query_params = []
header_params = {'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
body = {'api_version': 'serving.kserve.io/v1beta1',
 'kind': 'InferenceService',
 'metadata': {'annotations': None,
 ...
 'worker_spec': None,
 'xgboost': None},
 'transformer': None},
 'status': None}
post_params = [], files = {}, response_type = 'object'
auth_settings = ['BearerToken'], async_req = None, _return_http_data_only = True
collection_formats = {}, _preload_content = True, _request_timeout = None
_host = None

    def call_api(self, resource_path, method,
                 path_params=None, query_params=None, header_params=None,
                 body=None, post_params=None, files=None,
                 response_type=None, auth_settings=None, async_req=None,
                 _return_http_data_only=None, collection_formats=None,
                 _preload_content=True, _request_timeout=None, _host=None):
        """Makes the HTTP request (synchronous) and returns deserialized data.

        To make an async_req request, set the async_req parameter.

        :param resource_path: Path to method endpoint.
        :param method: Method to call.
        :param path_params: Path parameters in the url.
        :param query_params: Query parameters in the url.
        :param header_params: Header parameters to be
            placed in the request header.
        :param body: Request body.
        :param post_params dict: Request post form parameters,
            for `application/x-www-form-urlencoded`, `multipart/form-data`.
        :param auth_settings list: Auth Settings names for the request.
        :param response: Response data type.
        :param files dict: key -> filename, value -> filepath,
            for `multipart/form-data`.
        :param async_req bool: execute request asynchronously
        :param _return_http_data_only: response data without head status code
                                       and headers
        :param collection_formats: dict of collection formats for path, query,
            header, and post parameters.
        :param _preload_content: if False, the urllib3.HTTPResponse object will
                                 be returned without reading/decoding response
                                 data. Default is True.
        :param _request_timeout: timeout setting for this request. If one
                                 number provided, it will be total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
        :return:
            If async_req parameter is True,
            the request will be called asynchronously.
            The method will return the request thread.
            If parameter async_req is False or missing,
            then the method will return the response directly.
        """
        if not async_req:
>           return self.__call_api(resource_path, method,
                                   path_params, query_params, header_params,
                                   body, post_params, files,
                                   response_type, auth_settings,
                                   _return_http_data_only, collection_formats,
                                   _preload_content, _request_timeout, _host)

../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api_client.py:348:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
resource_path = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
method = 'POST'
path_params = [('group', 'serving.kserve.io'), ('version', 'v1beta1'), ('namespace', 'kserve-ci-e2e-test'), ('plural', 'inferenceservices')]
query_params = []
header_params = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
body = {'apiVersion': 'serving.kserve.io/v1beta1', 'kind': 'InferenceService', 'metadata': {'labels': {'networking.kserve.io/...u': '100m', 'memory': '1024Mi'}, 'requests': {'cpu': '50m', 'memory': '128Mi'}}, 'runtime': 'kserve-xgbserver', ...}}}}
post_params = [], files = {}, response_type = 'object'
auth_settings = ['BearerToken'], _return_http_data_only = True
collection_formats = {}, _preload_content = True, _request_timeout = None
_host = None

    def __call_api(
            self, resource_path, method, path_params=None,
            query_params=None, header_params=None, body=None, post_params=None,
            files=None, response_type=None, auth_settings=None,
            _return_http_data_only=None, collection_formats=None,
            _preload_content=True, _request_timeout=None, _host=None):

        config = self.configuration

        # header parameters
        header_params = header_params or {}
        header_params.update(self.default_headers)
        if self.cookie:
            header_params['Cookie'] = self.cookie
        if header_params:
            header_params = self.sanitize_for_serialization(header_params)
            header_params = dict(self.parameters_to_tuples(header_params,
                                                           collection_formats))

        # path parameters
        if path_params:
            path_params = self.sanitize_for_serialization(path_params)
            path_params = self.parameters_to_tuples(path_params,
                                                    collection_formats)
            for k, v in path_params:
                # specified safe chars, encode everything
                resource_path = resource_path.replace(
                    '{%s}' % k,
                    quote(str(v), safe=config.safe_chars_for_path_param)
                )

        # query parameters
        if query_params:
            query_params = self.sanitize_for_serialization(query_params)
            query_params = self.parameters_to_tuples(query_params,
                                                     collection_formats)

        # post parameters
        if post_params or files:
            post_params = post_params if post_params else []
            post_params = self.sanitize_for_serialization(post_params)
            post_params = self.parameters_to_tuples(post_params,
                                                    collection_formats)
            post_params.extend(self.files_parameters(files))

        # auth setting
        self.update_params_for_auth(header_params, query_params, auth_settings)

        # body
        if body:
            body = self.sanitize_for_serialization(body)

        # request url
        if _host is None:
            url = self.configuration.host + resource_path
        else:
            # use server/host defined in path or operation instead
            url = _host + resource_path

        # perform request and return response
>       response_data = self.request(
            method, url, query_params=query_params, headers=header_params,
            post_params=post_params, body=body,
            _preload_content=_preload_content,
            _request_timeout=_request_timeout)

../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api_client.py:180:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
method = 'POST'
url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
query_params = []
headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
post_params = []
body = {'apiVersion': 'serving.kserve.io/v1beta1', 'kind': 'InferenceService', 'metadata': {'labels': {'networking.kserve.io/...u': '100m', 'memory': '1024Mi'}, 'requests': {'cpu': '50m', 'memory': '128Mi'}}, 'runtime': 'kserve-xgbserver', ...}}}}
_preload_content = True, _request_timeout = None

    def request(self, method, url, query_params=None, headers=None,
                post_params=None, body=None, _preload_content=True,
                _request_timeout=None):
        """Makes the HTTP request using
        RESTClient."""
        if method == "GET":
            return self.rest_client.GET(url,
                                        query_params=query_params,
                                        _preload_content=_preload_content,
                                        _request_timeout=_request_timeout,
                                        headers=headers)
        elif method == "HEAD":
            return self.rest_client.HEAD(url,
                                         query_params=query_params,
                                         _preload_content=_preload_content,
                                         _request_timeout=_request_timeout,
                                         headers=headers)
        elif method == "OPTIONS":
            return self.rest_client.OPTIONS(url,
                                            query_params=query_params,
                                            headers=headers,
                                            _preload_content=_preload_content,
                                            _request_timeout=_request_timeout)
        elif method == "POST":
>           return self.rest_client.POST(url,
                                         query_params=query_params,
                                         headers=headers,
                                         post_params=post_params,
                                         _preload_content=_preload_content,
                                         _request_timeout=_request_timeout,

../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api_client.py:391:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
query_params = [], post_params = []
body = {'apiVersion': 'serving.kserve.io/v1beta1', 'kind': 'InferenceService', 'metadata': {'labels': {'networking.kserve.io/...u': '100m', 'memory': '1024Mi'}, 'requests': {'cpu': '50m', 'memory': '128Mi'}}, 'runtime': 'kserve-xgbserver', ...}}}}
_preload_content = True, _request_timeout = None

    def POST(self, url, headers=None, query_params=None, post_params=None,
             body=None, _preload_content=True, _request_timeout=None):
>       return self.request("POST", url,
                            headers=headers,
                            query_params=query_params,
                            post_params=post_params,
                            _preload_content=_preload_content,
                            _request_timeout=_request_timeout,
                            body=body)

../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/rest.py:279:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
method = 'POST'
url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
query_params = []
headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
body = {'apiVersion': 'serving.kserve.io/v1beta1', 'kind': 'InferenceService', 'metadata': {'labels': {'networking.kserve.io/...u': '100m', 'memory': '1024Mi'}, 'requests': {'cpu': '50m', 'memory': '128Mi'}}, 'runtime': 'kserve-xgbserver', ...}}}}
post_params = {}, _preload_content = True, _request_timeout = None

    def request(self, method, url, query_params=None, headers=None,
                body=None, post_params=None, _preload_content=True,
                _request_timeout=None):
        """Perform requests.

        :param method: http request method
        :param url: http request url
        :param query_params: query parameters in the url
        :param headers: http request headers
        :param body: request json body, for `application/json`
        :param post_params: request post parameters,
                            `application/x-www-form-urlencoded`
                            and `multipart/form-data`
        :param _preload_content: if False, the urllib3.HTTPResponse object will
                                 be returned without reading/decoding response
                                 data. Default is True.
        :param _request_timeout: timeout setting for this request. If one
                                 number provided, it will be total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
        """
        method = method.upper()
        assert method in ['GET', 'HEAD', 'DELETE', 'POST', 'PUT',
                          'PATCH', 'OPTIONS']

        if post_params and body:
            raise ApiValueError(
                "body parameter cannot be used with post_params parameter."
            )

        post_params = post_params or {}
        headers = headers or {}

        timeout = None
        if _request_timeout:
            if isinstance(_request_timeout, (int, ) if six.PY3 else (int, long)):  # noqa: E501,F821
                timeout = urllib3.Timeout(total=_request_timeout)
            elif (isinstance(_request_timeout, tuple) and
                  len(_request_timeout) == 2):
                timeout = urllib3.Timeout(
                    connect=_request_timeout[0], read=_request_timeout[1])

        if 'Content-Type' not in headers:
            headers['Content-Type'] = 'application/json'

        try:
            # For `POST`, `PUT`, `PATCH`, `OPTIONS`, `DELETE`
            if method in ['POST', 'PUT', 'PATCH', 'OPTIONS', 'DELETE']:
                if query_params:
                    url += '?' + urlencode(query_params)
                if (re.search('json', headers['Content-Type'], re.IGNORECASE) or
                        headers['Content-Type'] == 'application/apply-patch+yaml'):
                    if headers['Content-Type'] == 'application/json-patch+json':
                        if not isinstance(body, list):
                            headers['Content-Type'] = \
                                'application/strategic-merge-patch+json'
                    request_body = None
                    if body is not None:
                        request_body = json.dumps(body)
>                   r = self.pool_manager.request(
                        method, url,
                        body=request_body,
                        preload_content=_preload_content,
                        timeout=timeout,

../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/rest.py:172:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
method = 'POST'
url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io...", "memory": "128Mi"}}, "runtime": "kserve-xgbserver", "storageUri": "gs://kfserving-examples/models/xgboost/iris"}}}}'
fields = None
headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
json = None
urlopen_kw = {'body': '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking....xgbserver", "storageUri": "gs://kfserving-examples/models/xgboost/iris"}}}}', 'preload_content': True, 'timeout': None}

    def request(
        self,
        method: str,
        url: str,
        body: _TYPE_BODY | None = None,
        fields: _TYPE_FIELDS | None = None,
        headers: typing.Mapping[str, str] | None = None,
        json: typing.Any | None = None,
        **urlopen_kw: typing.Any,
    ) -> BaseHTTPResponse:
        """
        Make a request using :meth:`urlopen` with the appropriate encoding of
        ``fields`` based on the ``method`` used.

        This is a convenience method that requires the least amount of manual
        effort. It can be used in most situations, while still having the
        option to drop down to more specific methods when necessary, such as
        :meth:`request_encode_url`, :meth:`request_encode_body`,
        or even the lowest level :meth:`urlopen`.

        :param method:
            HTTP request method (such as GET, POST, PUT, etc.)

        :param url:
            The URL to perform the request on.

        :param body:
            Data to send in the request body, either :class:`str`, :class:`bytes`,
            an iterable of :class:`str`/:class:`bytes`, or a file-like object.

        :param fields:
            Data to encode and send in the URL or request body, depending on ``method``.

        :param headers:
            Dictionary of custom headers to send, such as User-Agent,
            If-None-Match, etc. If None, pool headers are used. If provided,
            these headers completely replace any pool-specific headers.

        :param json:
            Data to encode and send as JSON with UTF-encoded in the request body.
            The ``"Content-Type"`` header will be set to ``"application/json"``
            unless specified otherwise.
        """
        method = method.upper()

        if json is not None and body is not None:
            raise TypeError(
                "request got values for both 'body' and 'json' parameters which are mutually exclusive"
            )

        if json is not None:
            if headers is None:
                headers = self.headers

            if not ("content-type" in map(str.lower, headers.keys())):
                headers = HTTPHeaderDict(headers)
                headers["Content-Type"] = "application/json"

            body = _json.dumps(json, separators=(",", ":"), ensure_ascii=False).encode(
                "utf-8"
            )

        if body is not None:
            urlopen_kw["body"] = body

        if method in self._encode_url_methods:
            return self.request_encode_url(
                method,
                url,
                fields=fields,  # type: ignore[arg-type]
                headers=headers,
                **urlopen_kw,
            )
        else:
>           return self.request_encode_body(
                method, url, fields=fields, headers=headers, **urlopen_kw
            )

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/_request_methods.py:143:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
method = 'POST'
url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
fields = None
headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
encode_multipart = True, multipart_boundary = None
urlopen_kw = {'body': '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking....xgbserver", "storageUri": "gs://kfserving-examples/models/xgboost/iris"}}}}', 'preload_content': True, 'timeout': None}
extra_kw = {'body': '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking....nt': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'}), 'preload_content': True, 'timeout': None}

    def request_encode_body(
        self,
        method: str,
        url: str,
        fields: _TYPE_FIELDS | None = None,
        headers: typing.Mapping[str, str] | None = None,
        encode_multipart: bool = True,
        multipart_boundary: str | None = None,
        **urlopen_kw: str,
    ) -> BaseHTTPResponse:
        """
        Make a request using :meth:`urlopen` with the ``fields`` encoded in
        the body. This is useful for request methods like POST, PUT, PATCH, etc.

        When ``encode_multipart=True`` (default), then
        :func:`urllib3.encode_multipart_formdata` is used to encode
        the payload with the appropriate content type. Otherwise
        :func:`urllib.parse.urlencode` is used with the
        'application/x-www-form-urlencoded' content type.

        Multipart encoding must be used when posting files, and it's reasonably
        safe to use it in other times too. However, it may break request
        signing, such as with OAuth.

        Supports an optional ``fields`` parameter of key/value strings AND
        key/filetuple. A filetuple is a (filename, data, MIME type) tuple where
        the MIME type is optional. For example::

            fields = {
                'foo': 'bar',
                'fakefile': ('foofile.txt', 'contents of foofile'),
                'realfile': ('barfile.txt', open('realfile').read()),
                'typedfile': ('bazfile.bin', open('bazfile').read(),
                              'image/jpeg'),
                'nonamefile': 'contents of nonamefile field',
            }

        When uploading a file, providing a filename (the first parameter of the
        tuple) is optional but recommended to best mimic behavior of browsers.

        Note that if ``headers`` are supplied, the 'Content-Type' header will
        be overwritten because it depends on the dynamic random boundary string
        which is used to compose the body of the request. The random boundary
        string can be explicitly set with the ``multipart_boundary`` parameter.

        :param method:
            HTTP request method (such as GET, POST, PUT, etc.)

        :param url:
            The URL to perform the request on.

        :param fields:
            Data to encode and send in the request body.

        :param headers:
            Dictionary of custom headers to send, such as User-Agent,
            If-None-Match, etc. If None, pool headers are used. If provided,
            these headers completely replace any pool-specific headers.

        :param encode_multipart:
            If True, encode the ``fields`` using the multipart/form-data MIME
            format.

        :param multipart_boundary:
            If not specified, then a random boundary will be generated using
            :func:`urllib3.filepost.choose_boundary`.
        """
        if headers is None:
            headers = self.headers

        extra_kw: dict[str, typing.Any] = {"headers": HTTPHeaderDict(headers)}
        body: bytes | str

        if fields:
            if "body" in urlopen_kw:
                raise TypeError(
                    "request got values for both 'fields' and 'body', can only specify one."
                )

            if encode_multipart:
                body, content_type = encode_multipart_formdata(
                    fields, boundary=multipart_boundary
                )
            else:
                body, content_type = (
                    urlencode(fields),  # type: ignore[arg-type]
                    "application/x-www-form-urlencoded",
                )

            extra_kw["body"] = body
            extra_kw["headers"].setdefault("Content-Type", content_type)

        extra_kw.update(urlopen_kw)

>       return self.urlopen(method, url, **extra_kw)

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/_request_methods.py:278:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
method = 'POST'
url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
redirect = True
kw = {'assert_same_host': False, 'body': '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata...', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'}), 'preload_content': True, ...}
u = Url(scheme='https', auth=None, host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', p...443, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices', query=None, fragment=None)
conn =

    def urlopen(  # type: ignore[override]
        self, method: str, url: str, redirect: bool = True, **kw: typing.Any
    ) -> BaseHTTPResponse:
        """
        Same as
:meth:`urllib3.HTTPConnectionPool.urlopen` [e2e-predictor] with custom cross-host redirect logic and only sends the request-uri [e2e-predictor] portion of the ``url``. [e2e-predictor] [e2e-predictor] The given ``url`` parameter must be absolute, such that an appropriate [e2e-predictor] :class:`urllib3.connectionpool.ConnectionPool` can be chosen for it. [e2e-predictor] """ [e2e-predictor] u = parse_url(url) [e2e-predictor] [e2e-predictor] if u.scheme is None: [e2e-predictor] warnings.warn( [e2e-predictor] "URLs without a scheme (ie 'https://') are deprecated and will raise an error " [e2e-predictor] "in a future version of urllib3. To avoid this DeprecationWarning ensure all URLs " [e2e-predictor] "start with 'https://' or 'http://'. Read more in this issue: " [e2e-predictor] "https://github.com/urllib3/urllib3/issues/2920", [e2e-predictor] category=DeprecationWarning, [e2e-predictor] stacklevel=2, [e2e-predictor] ) [e2e-predictor] [e2e-predictor] conn = self.connection_from_host(u.host, port=u.port, scheme=u.scheme) [e2e-predictor] [e2e-predictor] kw["assert_same_host"] = False [e2e-predictor] kw["redirect"] = False [e2e-predictor] [e2e-predictor] if "headers" not in kw: [e2e-predictor] kw["headers"] = self.headers [e2e-predictor] [e2e-predictor] if self._proxy_requires_url_absolute_form(u): [e2e-predictor] response = conn.urlopen(method, url, **kw) [e2e-predictor] else: [e2e-predictor] > response = conn.urlopen(method, u.request_uri, **kw) [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/poolmanager.py:457: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] method = 'POST' [e2e-predictor] url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices' [e2e-predictor] body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": 
{"networking.kserve.io...", "memory": "128Mi"}}, "runtime": "kserve-xgbserver", "storageUri": "gs://kfserving-examples/models/xgboost/iris"}}}}' [e2e-predictor] headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'}) [e2e-predictor] retries = Retry(total=2, connect=None, read=None, redirect=None, status=None) [e2e-predictor] redirect = False, assert_same_host = False, timeout = None, pool_timeout = None [e2e-predictor] release_conn = True, chunked = False, body_pos = None, preload_content = True [e2e-predictor] decode_content = True, response_kw = {} [e2e-predictor] parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices', query=None, fragment=None) [e2e-predictor] destination_scheme = None, conn = None, release_this_conn = True [e2e-predictor] http_tunnel_required = False [e2e-predictor] err = NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.c...a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)") [e2e-predictor] clean_exit = False [e2e-predictor] [e2e-predictor] def urlopen( # type: ignore[override] [e2e-predictor] self, [e2e-predictor] method: str, [e2e-predictor] url: str, [e2e-predictor] body: _TYPE_BODY | None = None, [e2e-predictor] headers: typing.Mapping[str, str] | None = None, [e2e-predictor] retries: Retry | bool | int | None = None, [e2e-predictor] redirect: bool = True, [e2e-predictor] assert_same_host: bool = True, [e2e-predictor] timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, [e2e-predictor] pool_timeout: int | None = None, [e2e-predictor] release_conn: bool | None = None, [e2e-predictor] chunked: bool = False, [e2e-predictor] body_pos: _TYPE_BODY_POSITION | None = None, [e2e-predictor] preload_content: bool = True, [e2e-predictor] 
        decode_content: bool = True,
        **response_kw: typing.Any,
    ) -> BaseHTTPResponse:
        """
        Get a connection from the pool and perform an HTTP request. This is the
        lowest level call for making a request, so you'll need to specify all
        the raw details.

        .. note::

           More commonly, it's appropriate to use a convenience method
           such as :meth:`request`.

        .. note::

           `release_conn` will only behave as expected if
           `preload_content=False` because we want to make
           `preload_content=False` the default behaviour someday soon without
           breaking backwards compatibility.

        :param method:
            HTTP request method (such as GET, POST, PUT, etc.)

        :param url:
            The URL to perform the request on.

        :param body:
            Data to send in the request body, either :class:`str`, :class:`bytes`,
            an iterable of :class:`str`/:class:`bytes`, or a file-like object.

        :param headers:
            Dictionary of custom headers to send, such as User-Agent,
            If-None-Match, etc. If None, pool headers are used. If provided,
            these headers completely replace any pool-specific headers.

        :param retries:
            Configure the number of retries to allow before raising a
            :class:`~urllib3.exceptions.MaxRetryError` exception.

            If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a
            :class:`~urllib3.util.retry.Retry` object for fine-grained control
            over different types of retries.
            Pass an integer number to retry connection errors that many times,
            but no other types of errors. Pass zero to never retry.

            If ``False``, then retries are disabled and any exception is raised
            immediately. Also, instead of raising a MaxRetryError on redirects,
            the redirect response will be returned.

        :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.

        :param redirect:
            If True, automatically handle redirects (status codes 301, 302,
            303, 307, 308). Each redirect counts as a retry. Disabling retries
            will disable redirect, too.

        :param assert_same_host:
            If ``True``, will make sure that the host of the pool requests is
            consistent else will raise HostChangedError. When ``False``, you can
            use the pool on an HTTP proxy and request foreign hosts.

        :param timeout:
            If specified, overrides the default timeout for this one
            request. It may be a float (in seconds) or an instance of
            :class:`urllib3.util.Timeout`.

        :param pool_timeout:
            If set and the pool is set to block=True, then this method will
            block for ``pool_timeout`` seconds and raise EmptyPoolError if no
            connection is available within the time period.

        :param bool preload_content:
            If True, the response's body will be preloaded into memory.

        :param bool decode_content:
            If True, will attempt to decode the body based on the
            'content-encoding' header.

        :param release_conn:
            If False, then the urlopen call will not release the connection
            back into the pool once a response is received (but will release if
            you read the entire contents of the response such as when
            `preload_content=True`). This is useful if you're not preloading
            the response's content immediately. You will need to call
            ``r.release_conn()`` on the response ``r`` to return the connection
            back into the pool. If None, it takes the value of ``preload_content``
            which defaults to ``True``.

        :param bool chunked:
            If True, urllib3 will send the body using chunked transfer
            encoding. Otherwise, urllib3 will send the body using the standard
            content-length form. Defaults to False.

        :param int body_pos:
            Position to seek to in file-like body in the event of a retry or
            redirect. Typically this won't need to be set because urllib3 will
            auto-populate the value when needed.
        """
        parsed_url = parse_url(url)
        destination_scheme = parsed_url.scheme

        if headers is None:
            headers = self.headers

        if not isinstance(retries, Retry):
            retries = Retry.from_int(retries, redirect=redirect, default=self.retries)

        if release_conn is None:
            release_conn = preload_content

        # Check host
        if assert_same_host and not self.is_same_host(url):
            raise HostChangedError(self, url, retries)

        # Ensure that the URL we're connecting to is properly encoded
        if url.startswith("/"):
            url = to_str(_encode_target(url))
        else:
            url = to_str(parsed_url.url)

        conn = None

        # Track whether `conn` needs to be released before
        # returning/raising/recursing. Update this variable if necessary, and
        # leave `release_conn` constant throughout the function. That way, if
        # the function recurses, the original value of `release_conn` will be
        # passed down into the recursive call, and its value will be respected.
        #
        # See issue #651 [1] for details.
        #
        # [1]
        release_this_conn = release_conn

        http_tunnel_required = connection_requires_http_tunnel(
            self.proxy, self.proxy_config, destination_scheme
        )

        # Merge the proxy headers. Only done when not using HTTP CONNECT. We
        # have to copy the headers dict so we can safely change it without those
        # changes being reflected in anyone else's copy.
        if not http_tunnel_required:
            headers = headers.copy()  # type: ignore[attr-defined]
            headers.update(self.proxy_headers)  # type: ignore[union-attr]

        # Must keep the exception bound to a separate variable or else Python 3
        # complains about UnboundLocalError.
        err = None

        # Keep track of whether we cleanly exited the except block. This
        # ensures we do proper cleanup in finally.
        clean_exit = False

        # Rewind body position, if needed. Record current position
        # for future rewinds in the event of a redirect/retry.
        body_pos = set_file_position(body, body_pos)

        try:
            # Request a connection from the queue.
            timeout_obj = self._get_timeout(timeout)
            conn = self._get_conn(timeout=pool_timeout)

            conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]

            # Is this a closed/new connection that requires CONNECT tunnelling?
            if self.proxy is not None and http_tunnel_required and conn.is_closed:
                try:
                    self._prepare_proxy(conn)
                except (BaseSSLError, OSError, SocketTimeout) as e:
                    self._raise_timeout(
                        err=e, url=self.proxy.url, timeout_value=conn.timeout
                    )
                    raise

            # If we're going to release the connection in ``finally:``, then
            # the response doesn't need to know about the connection. Otherwise
            # it will also try to release it and we'll have a double-release
            # mess.
            response_conn = conn if not release_conn else None

            # Make the request on the HTTPConnection object
            response = self._make_request(
                conn,
                method,
                url,
                timeout=timeout_obj,
                body=body,
                headers=headers,
                chunked=chunked,
                retries=retries,
                response_conn=response_conn,
                preload_content=preload_content,
                decode_content=decode_content,
                **response_kw,
            )

            # Everything went great!
            clean_exit = True

        except EmptyPoolError:
            # Didn't get a connection from the pool, no need to clean up
            clean_exit = True
            release_this_conn = False
            raise

        except (
            TimeoutError,
            HTTPException,
            OSError,
            ProtocolError,
            BaseSSLError,
            SSLError,
            CertificateError,
            ProxyError,
        ) as e:
            # Discard the connection for these exceptions. It will be
            # replaced during the next _get_conn() call.
            clean_exit = False
            new_e: Exception = e
            if isinstance(e, (BaseSSLError, CertificateError)):
                new_e = SSLError(e)
            if isinstance(
                new_e,
                (
                    OSError,
                    NewConnectionError,
                    TimeoutError,
                    SSLError,
                    HTTPException,
                ),
            ) and (conn and conn.proxy and not conn.has_connected_to_proxy):
                new_e = _wrap_proxy_error(new_e, conn.proxy.scheme)
            elif isinstance(new_e, (OSError, HTTPException)):
                new_e = ProtocolError("Connection aborted.", new_e)

            retries = retries.increment(
                method, url, error=new_e, _pool=self, _stacktrace=sys.exc_info()[2]
            )
            retries.sleep()

            # Keep track of the error for the retry warning.
            err = e

        finally:
            if not clean_exit:
                # We hit some kind of exception, handled or otherwise. We need
                # to throw the connection away unless explicitly told not to.
                # Close the connection, set the variable to None, and make sure
                # we put the None back in the pool to avoid leaking it.
                if conn:
                    conn.close()
                    conn = None
                release_this_conn = True

            if release_this_conn:
                # Put the connection back to be reused. If the connection is
                # expired then it will be None, which will get replaced with a
                # fresh connection during _get_conn.
                self._put_conn(conn)

        if not conn:
            # Try again
            log.warning(
                "Retrying (%r) after connection broken by '%r': %s", retries, err, url
            )
>           return self.urlopen(
                method,
                url,
                body,
                headers,
                retries,
                redirect,
                assert_same_host,
                timeout=timeout,
                pool_timeout=pool_timeout,
                release_conn=release_conn,
                chunked=chunked,
                body_pos=body_pos,
                preload_content=preload_content,
                decode_content=decode_content,
                **response_kw,
            )

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:871:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
method = 'POST'
url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io...", "memory": "128Mi"}}, "runtime": "kserve-xgbserver", "storageUri": "gs://kfserving-examples/models/xgboost/iris"}}}}'
headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'})
retries = Retry(total=1, connect=None, read=None, redirect=None, status=None)
redirect = False, assert_same_host = False, timeout = None, pool_timeout = None
release_conn = True, chunked = False, body_pos = None, preload_content = True
decode_content = True, response_kw = {}
parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices', query=None, fragment=None)
destination_scheme = None, conn = None, release_this_conn = True
http_tunnel_required = False
err = NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.c...a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")
clean_exit = False
[e2e-predictor] self._put_conn(conn) [e2e-predictor] [e2e-predictor] if not conn: [e2e-predictor] # Try again [e2e-predictor] log.warning( [e2e-predictor] "Retrying (%r) after connection broken by '%r': %s", retries, err, url [e2e-predictor] ) [e2e-predictor] > return self.urlopen( [e2e-predictor] method, [e2e-predictor] url, [e2e-predictor] body, [e2e-predictor] headers, [e2e-predictor] retries, [e2e-predictor] redirect, [e2e-predictor] assert_same_host, [e2e-predictor] timeout=timeout, [e2e-predictor] pool_timeout=pool_timeout, [e2e-predictor] release_conn=release_conn, [e2e-predictor] chunked=chunked, [e2e-predictor] body_pos=body_pos, [e2e-predictor] preload_content=preload_content, [e2e-predictor] decode_content=decode_content, [e2e-predictor] **response_kw, [e2e-predictor] ) [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:871: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] method = 'POST' [e2e-predictor] url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices' [e2e-predictor] body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io...", "memory": "128Mi"}}, "runtime": "kserve-xgbserver", "storageUri": "gs://kfserving-examples/models/xgboost/iris"}}}}' [e2e-predictor] headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'}) [e2e-predictor] retries = Retry(total=0, connect=None, read=None, redirect=None, status=None) [e2e-predictor] redirect = False, assert_same_host = False, timeout = None, pool_timeout = None [e2e-predictor] release_conn = True, chunked = False, body_pos = None, preload_content = True [e2e-predictor] decode_content = True, response_kw = {} [e2e-predictor] parsed_url = Url(scheme=None, 
auth=None, host=None, port=None, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices', query=None, fragment=None) [e2e-predictor] destination_scheme = None, conn = None, release_this_conn = True [e2e-predictor] http_tunnel_required = False [e2e-predictor] err = NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.c...a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)") [e2e-predictor] clean_exit = False [e2e-predictor] [e2e-predictor] def urlopen( # type: ignore[override] [e2e-predictor] self, [e2e-predictor] method: str, [e2e-predictor] url: str, [e2e-predictor] body: _TYPE_BODY | None = None, [e2e-predictor] headers: typing.Mapping[str, str] | None = None, [e2e-predictor] retries: Retry | bool | int | None = None, [e2e-predictor] redirect: bool = True, [e2e-predictor] assert_same_host: bool = True, [e2e-predictor] timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, [e2e-predictor] pool_timeout: int | None = None, [e2e-predictor] release_conn: bool | None = None, [e2e-predictor] chunked: bool = False, [e2e-predictor] body_pos: _TYPE_BODY_POSITION | None = None, [e2e-predictor] preload_content: bool = True, [e2e-predictor] decode_content: bool = True, [e2e-predictor] **response_kw: typing.Any, [e2e-predictor] ) -> BaseHTTPResponse: [e2e-predictor] """ [e2e-predictor] Get a connection from the pool and perform an HTTP request. This is the [e2e-predictor] lowest level call for making a request, so you'll need to specify all [e2e-predictor] the raw details. [e2e-predictor] [e2e-predictor] .. note:: [e2e-predictor] [e2e-predictor] More commonly, it's appropriate to use a convenience method [e2e-predictor] such as :meth:`request`. [e2e-predictor] [e2e-predictor] .. 
note:: [e2e-predictor] [e2e-predictor] `release_conn` will only behave as expected if [e2e-predictor] `preload_content=False` because we want to make [e2e-predictor] `preload_content=False` the default behaviour someday soon without [e2e-predictor] breaking backwards compatibility. [e2e-predictor] [e2e-predictor] :param method: [e2e-predictor] HTTP request method (such as GET, POST, PUT, etc.) [e2e-predictor] [e2e-predictor] :param url: [e2e-predictor] The URL to perform the request on. [e2e-predictor] [e2e-predictor] :param body: [e2e-predictor] Data to send in the request body, either :class:`str`, :class:`bytes`, [e2e-predictor] an iterable of :class:`str`/:class:`bytes`, or a file-like object. [e2e-predictor] [e2e-predictor] :param headers: [e2e-predictor] Dictionary of custom headers to send, such as User-Agent, [e2e-predictor] If-None-Match, etc. If None, pool headers are used. If provided, [e2e-predictor] these headers completely replace any pool-specific headers. [e2e-predictor] [e2e-predictor] :param retries: [e2e-predictor] Configure the number of retries to allow before raising a [e2e-predictor] :class:`~urllib3.exceptions.MaxRetryError` exception. [e2e-predictor] [e2e-predictor] If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a [e2e-predictor] :class:`~urllib3.util.retry.Retry` object for fine-grained control [e2e-predictor] over different types of retries. [e2e-predictor] Pass an integer number to retry connection errors that many times, [e2e-predictor] but no other types of errors. Pass zero to never retry. [e2e-predictor] [e2e-predictor] If ``False``, then retries are disabled and any exception is raised [e2e-predictor] immediately. Also, instead of raising a MaxRetryError on redirects, [e2e-predictor] the redirect response will be returned. [e2e-predictor] [e2e-predictor] :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 
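The retry budget documented above is what produced the three "Retrying..." warnings later in this log: with the default of 3 retries, urllib3 makes the original attempt plus three more, then raises `MaxRetryError`. That accounting can be sketched with a small, stdlib-only toy model. This is an illustration of the documented behaviour only, not urllib3's actual `Retry` class; the names `request_with_retries` and `MaxRetriesExceeded` are hypothetical.

```python
class MaxRetriesExceeded(Exception):
    """Raised once the retry budget is exhausted (like urllib3's MaxRetryError)."""


def request_with_retries(attempt, total=3):
    """Call attempt() until it succeeds or `total` retries are consumed.

    total=3 (the documented default) means up to 4 calls overall;
    total=0 means a single call and no retries.
    """
    remaining = total
    while True:
        try:
            return attempt()
        except OSError as exc:
            if remaining == 0:
                # Budget exhausted: re-raise, chaining the last error as the cause,
                # analogous to "Max retries exceeded ... (Caused by ...)".
                raise MaxRetriesExceeded("max retries exceeded") from exc
            remaining -= 1  # each failed attempt consumes one retry


calls = []


def always_failing_lookup():
    calls.append(1)
    raise OSError("[Errno -2] Name or service not known")


try:
    request_with_retries(always_failing_lookup, total=3)
except MaxRetriesExceeded:
    pass

# The failing call was attempted 4 times: the original call plus 3 retries,
# matching the three Retrying(total=2/1/0) warnings before MaxRetryError.
```

When every attempt fails the same way, as with the unresolvable hostname in this log, retrying only delays the inevitable `MaxRetryError`.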
    :param redirect:
        If True, automatically handle redirects (status codes 301, 302,
        303, 307, 308). Each redirect counts as a retry. Disabling retries
        will disable redirect, too.

    :param assert_same_host:
        If ``True``, will make sure that the host of the pool requests is
        consistent else will raise HostChangedError. When ``False``, you can
        use the pool on an HTTP proxy and request foreign hosts.

    :param timeout:
        If specified, overrides the default timeout for this one
        request. It may be a float (in seconds) or an instance of
        :class:`urllib3.util.Timeout`.

    :param pool_timeout:
        If set and the pool is set to block=True, then this method will
        block for ``pool_timeout`` seconds and raise EmptyPoolError if no
        connection is available within the time period.

    :param bool preload_content:
        If True, the response's body will be preloaded into memory.

    :param bool decode_content:
        If True, will attempt to decode the body based on the
        'content-encoding' header.

    :param release_conn:
        If False, then the urlopen call will not release the connection
        back into the pool once a response is received (but will release if
        you read the entire contents of the response such as when
        `preload_content=True`). This is useful if you're not preloading
        the response's content immediately. You will need to call
        ``r.release_conn()`` on the response ``r`` to return the connection
        back into the pool. If None, it takes the value of ``preload_content``
        which defaults to ``True``.

    :param bool chunked:
        If True, urllib3 will send the body using chunked transfer
        encoding. Otherwise, urllib3 will send the body using the standard
        content-length form. Defaults to False.

    :param int body_pos:
        Position to seek to in file-like body in the event of a retry or
        redirect. Typically this won't need to be set because urllib3 will
        auto-populate the value when needed.
    """
    parsed_url = parse_url(url)
    destination_scheme = parsed_url.scheme

    if headers is None:
        headers = self.headers

    if not isinstance(retries, Retry):
        retries = Retry.from_int(retries, redirect=redirect, default=self.retries)

    if release_conn is None:
        release_conn = preload_content

    # Check host
    if assert_same_host and not self.is_same_host(url):
        raise HostChangedError(self, url, retries)

    # Ensure that the URL we're connecting to is properly encoded
    if url.startswith("/"):
        url = to_str(_encode_target(url))
    else:
        url = to_str(parsed_url.url)

    conn = None

    # Track whether `conn` needs to be released before
    # returning/raising/recursing. Update this variable if necessary, and
    # leave `release_conn` constant throughout the function. That way, if
    # the function recurses, the original value of `release_conn` will be
    # passed down into the recursive call, and its value will be respected.
    #
    # See issue #651 [1] for details.
    #
    # [1]
    release_this_conn = release_conn

    http_tunnel_required = connection_requires_http_tunnel(
        self.proxy, self.proxy_config, destination_scheme
    )

    # Merge the proxy headers. Only done when not using HTTP CONNECT. We
    # have to copy the headers dict so we can safely change it without those
    # changes being reflected in anyone else's copy.
    if not http_tunnel_required:
        headers = headers.copy()  # type: ignore[attr-defined]
        headers.update(self.proxy_headers)  # type: ignore[union-attr]

    # Must keep the exception bound to a separate variable or else Python 3
    # complains about UnboundLocalError.
    err = None

    # Keep track of whether we cleanly exited the except block. This
    # ensures we do proper cleanup in finally.
    clean_exit = False

    # Rewind body position, if needed. Record current position
    # for future rewinds in the event of a redirect/retry.
    body_pos = set_file_position(body, body_pos)

    try:
        # Request a connection from the queue.
        timeout_obj = self._get_timeout(timeout)
        conn = self._get_conn(timeout=pool_timeout)

        conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]

        # Is this a closed/new connection that requires CONNECT tunnelling?
        if self.proxy is not None and http_tunnel_required and conn.is_closed:
            try:
                self._prepare_proxy(conn)
            except (BaseSSLError, OSError, SocketTimeout) as e:
                self._raise_timeout(
                    err=e, url=self.proxy.url, timeout_value=conn.timeout
                )
                raise

        # If we're going to release the connection in ``finally:``, then
        # the response doesn't need to know about the connection. Otherwise
        # it will also try to release it and we'll have a double-release
        # mess.
        response_conn = conn if not release_conn else None

        # Make the request on the HTTPConnection object
        response = self._make_request(
            conn,
            method,
            url,
            timeout=timeout_obj,
            body=body,
            headers=headers,
            chunked=chunked,
            retries=retries,
            response_conn=response_conn,
            preload_content=preload_content,
            decode_content=decode_content,
            **response_kw,
        )

        # Everything went great!
        clean_exit = True

    except EmptyPoolError:
        # Didn't get a connection from the pool, no need to clean up
        clean_exit = True
        release_this_conn = False
        raise

    except (
        TimeoutError,
        HTTPException,
        OSError,
        ProtocolError,
        BaseSSLError,
        SSLError,
        CertificateError,
        ProxyError,
    ) as e:
        # Discard the connection for these exceptions. It will be
        # replaced during the next _get_conn() call.
        clean_exit = False
        new_e: Exception = e
        if isinstance(e, (BaseSSLError, CertificateError)):
            new_e = SSLError(e)
        if isinstance(
            new_e,
            (
                OSError,
                NewConnectionError,
                TimeoutError,
                SSLError,
                HTTPException,
            ),
        ) and (conn and conn.proxy and not conn.has_connected_to_proxy):
            new_e = _wrap_proxy_error(new_e, conn.proxy.scheme)
        elif isinstance(new_e, (OSError, HTTPException)):
            new_e = ProtocolError("Connection aborted.", new_e)

        retries = retries.increment(
            method, url, error=new_e, _pool=self, _stacktrace=sys.exc_info()[2]
        )
        retries.sleep()

        # Keep track of the error for the retry warning.
        err = e

    finally:
        if not clean_exit:
            # We hit some kind of exception, handled or otherwise. We need
            # to throw the connection away unless explicitly told not to.
            # Close the connection, set the variable to None, and make sure
            # we put the None back in the pool to avoid leaking it.
            if conn:
                conn.close()
                conn = None
            release_this_conn = True

        if release_this_conn:
            # Put the connection back to be reused. If the connection is
            # expired then it will be None, which will get replaced with a
            # fresh connection during _get_conn.
            self._put_conn(conn)

    if not conn:
        # Try again
        log.warning(
            "Retrying (%r) after connection broken by '%r': %s", retries, err, url
        )
>       return self.urlopen(
            method,
            url,
            body,
            headers,
            retries,
            redirect,
            assert_same_host,
            timeout=timeout,
            pool_timeout=pool_timeout,
            release_conn=release_conn,
            chunked=chunked,
            body_pos=body_pos,
            preload_content=preload_content,
            decode_content=decode_content,
            **response_kw,
        )

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:871:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
method = 'POST'
url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io...", "memory": "128Mi"}}, "runtime": "kserve-xgbserver", "storageUri": "gs://kfserving-examples/models/xgboost/iris"}}}}'
headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'})
retries = Retry(total=0, connect=None, read=None, redirect=None, status=None)
redirect = False, assert_same_host = False, timeout = None, pool_timeout = None
release_conn = True, chunked = False, body_pos = None, preload_content = True
decode_content = True, response_kw = {}
parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices', query=None, fragment=None)
destination_scheme = None, conn = None, release_this_conn = True
http_tunnel_required = False, err = None, clean_exit = False

def urlopen(  # type: ignore[override]
    self,
    method: str,
    url: str,
    body: _TYPE_BODY | None = None,
    headers: typing.Mapping[str, str] | None = None,
    retries: Retry | bool | int | None = None,
    redirect: bool = True,
    assert_same_host: bool = True,
    timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
    pool_timeout: int | None = None,
    release_conn: bool | None = None,
    chunked: bool = False,
    body_pos: _TYPE_BODY_POSITION | None = None,
    preload_content: bool = True,
    decode_content: bool = True,
    **response_kw: typing.Any,
) -> BaseHTTPResponse:
    """
    Get a connection from the pool and perform an HTTP request. This is the
    lowest level call for making a request, so you'll need to specify all
    the raw details.

    .. note::

       More commonly, it's appropriate to use a convenience method
       such as :meth:`request`.

    .. note::

       `release_conn` will only behave as expected if
       `preload_content=False` because we want to make
       `preload_content=False` the default behaviour someday soon without
       breaking backwards compatibility.

    :param method:
        HTTP request method (such as GET, POST, PUT, etc.)
    :param url:
        The URL to perform the request on.

    :param body:
        Data to send in the request body, either :class:`str`, :class:`bytes`,
        an iterable of :class:`str`/:class:`bytes`, or a file-like object.

    :param headers:
        Dictionary of custom headers to send, such as User-Agent,
        If-None-Match, etc. If None, pool headers are used. If provided,
        these headers completely replace any pool-specific headers.

    :param retries:
        Configure the number of retries to allow before raising a
        :class:`~urllib3.exceptions.MaxRetryError` exception.

        If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a
        :class:`~urllib3.util.retry.Retry` object for fine-grained control
        over different types of retries.
        Pass an integer number to retry connection errors that many times,
        but no other types of errors. Pass zero to never retry.

        If ``False``, then retries are disabled and any exception is raised
        immediately. Also, instead of raising a MaxRetryError on redirects,
        the redirect response will be returned.

    :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.

    :param redirect:
        If True, automatically handle redirects (status codes 301, 302,
        303, 307, 308). Each redirect counts as a retry. Disabling retries
        will disable redirect, too.

    :param assert_same_host:
        If ``True``, will make sure that the host of the pool requests is
        consistent else will raise HostChangedError. When ``False``, you can
        use the pool on an HTTP proxy and request foreign hosts.

    :param timeout:
        If specified, overrides the default timeout for this one
        request. It may be a float (in seconds) or an instance of
        :class:`urllib3.util.Timeout`.

    :param pool_timeout:
        If set and the pool is set to block=True, then this method will
        block for ``pool_timeout`` seconds and raise EmptyPoolError if no
        connection is available within the time period.

    :param bool preload_content:
        If True, the response's body will be preloaded into memory.

    :param bool decode_content:
        If True, will attempt to decode the body based on the
        'content-encoding' header.

    :param release_conn:
        If False, then the urlopen call will not release the connection
        back into the pool once a response is received (but will release if
        you read the entire contents of the response such as when
        `preload_content=True`). This is useful if you're not preloading
        the response's content immediately. You will need to call
        ``r.release_conn()`` on the response ``r`` to return the connection
        back into the pool. If None, it takes the value of ``preload_content``
        which defaults to ``True``.

    :param bool chunked:
        If True, urllib3 will send the body using chunked transfer
        encoding. Otherwise, urllib3 will send the body using the standard
        content-length form. Defaults to False.
    :param int body_pos:
        Position to seek to in file-like body in the event of a retry or
        redirect. Typically this won't need to be set because urllib3 will
        auto-populate the value when needed.
    """
    parsed_url = parse_url(url)
    destination_scheme = parsed_url.scheme

    if headers is None:
        headers = self.headers

    if not isinstance(retries, Retry):
        retries = Retry.from_int(retries, redirect=redirect, default=self.retries)

    if release_conn is None:
        release_conn = preload_content

    # Check host
    if assert_same_host and not self.is_same_host(url):
        raise HostChangedError(self, url, retries)

    # Ensure that the URL we're connecting to is properly encoded
    if url.startswith("/"):
        url = to_str(_encode_target(url))
    else:
        url = to_str(parsed_url.url)

    conn = None

    # Track whether `conn` needs to be released before
    # returning/raising/recursing. Update this variable if necessary, and
    # leave `release_conn` constant throughout the function. That way, if
    # the function recurses, the original value of `release_conn` will be
    # passed down into the recursive call, and its value will be respected.
    #
    # See issue #651 [1] for details.
    #
    # [1]
    release_this_conn = release_conn

    http_tunnel_required = connection_requires_http_tunnel(
        self.proxy, self.proxy_config, destination_scheme
    )

    # Merge the proxy headers. Only done when not using HTTP CONNECT. We
    # have to copy the headers dict so we can safely change it without those
    # changes being reflected in anyone else's copy.
    if not http_tunnel_required:
        headers = headers.copy()  # type: ignore[attr-defined]
        headers.update(self.proxy_headers)  # type: ignore[union-attr]

    # Must keep the exception bound to a separate variable or else Python 3
    # complains about UnboundLocalError.
    err = None

    # Keep track of whether we cleanly exited the except block. This
    # ensures we do proper cleanup in finally.
    clean_exit = False

    # Rewind body position, if needed. Record current position
    # for future rewinds in the event of a redirect/retry.
    body_pos = set_file_position(body, body_pos)

    try:
        # Request a connection from the queue.
        timeout_obj = self._get_timeout(timeout)
        conn = self._get_conn(timeout=pool_timeout)

        conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]

        # Is this a closed/new connection that requires CONNECT tunnelling?
        if self.proxy is not None and http_tunnel_required and conn.is_closed:
            try:
                self._prepare_proxy(conn)
            except (BaseSSLError, OSError, SocketTimeout) as e:
                self._raise_timeout(
                    err=e, url=self.proxy.url, timeout_value=conn.timeout
                )
                raise

        # If we're going to release the connection in ``finally:``, then
        # the response doesn't need to know about the connection. Otherwise
        # it will also try to release it and we'll have a double-release
        # mess.
        response_conn = conn if not release_conn else None

        # Make the request on the HTTPConnection object
        response = self._make_request(
            conn,
            method,
            url,
            timeout=timeout_obj,
            body=body,
            headers=headers,
            chunked=chunked,
            retries=retries,
            response_conn=response_conn,
            preload_content=preload_content,
            decode_content=decode_content,
            **response_kw,
        )

        # Everything went great!
        clean_exit = True

    except EmptyPoolError:
        # Didn't get a connection from the pool, no need to clean up
        clean_exit = True
        release_this_conn = False
        raise

    except (
        TimeoutError,
        HTTPException,
        OSError,
        ProtocolError,
        BaseSSLError,
        SSLError,
        CertificateError,
        ProxyError,
    ) as e:
        # Discard the connection for these exceptions. It will be
        # replaced during the next _get_conn() call.
        clean_exit = False
        new_e: Exception = e
        if isinstance(e, (BaseSSLError, CertificateError)):
            new_e = SSLError(e)
        if isinstance(
            new_e,
            (
                OSError,
                NewConnectionError,
                TimeoutError,
                SSLError,
                HTTPException,
            ),
        ) and (conn and conn.proxy and not conn.has_connected_to_proxy):
            new_e = _wrap_proxy_error(new_e, conn.proxy.scheme)
        elif isinstance(new_e, (OSError, HTTPException)):
            new_e = ProtocolError("Connection aborted.", new_e)

>       retries = retries.increment(
            method, url, error=new_e, _pool=self, _stacktrace=sys.exc_info()[2]
        )

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:841:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = Retry(total=0, connect=None, read=None, redirect=None, status=None)
method = 'POST'
url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
response = None
error = NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.c...a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")
_pool =
_stacktrace =

def increment(
    self,
    method: str | None = None,
    url: str | None = None,
    response: BaseHTTPResponse | None = None,
    error: Exception | None = None,
    _pool: ConnectionPool | None = None,
    _stacktrace: TracebackType | None = None,
) -> Self:
    """Return a new Retry object with incremented retry counters.

    :param response: A response object, or None, if the server did not
        return a response.
    :type response: :class:`~urllib3.response.BaseHTTPResponse`
    :param Exception error: An error encountered during the request, or
        None if the response was received successfully.

    :return: A new ``Retry`` object.
    """
    if self.total is False and error:
        # Disabled, indicate to re-raise the error.
        raise reraise(type(error), error, _stacktrace)

    total = self.total
    if total is not None:
        total -= 1

    connect = self.connect
    read = self.read
    redirect = self.redirect
    status_count = self.status
    other = self.other
    cause = "unknown"
    status = None
    redirect_location = None

    if error and self._is_connection_error(error):
        # Connect retry?
        if connect is False:
            raise reraise(type(error), error, _stacktrace)
        elif connect is not None:
            connect -= 1

    elif error and self._is_read_error(error):
        # Read retry?
        if read is False or method is None or not self._is_method_retryable(method):
            raise reraise(type(error), error, _stacktrace)
        elif read is not None:
            read -= 1

    elif error:
        # Other retry?
        if other is not None:
            other -= 1

    elif response and response.get_redirect_location():
        # Redirect retry?
        if redirect is not None:
            redirect -= 1
        cause = "too many redirects"
        response_redirect_location = response.get_redirect_location()
        if response_redirect_location:
            redirect_location = response_redirect_location
        status = response.status

    else:
        # Incrementing because of a server error like a 500 in
        # status_forcelist and the given method is in the allowed_methods
        cause = ResponseError.GENERIC_ERROR
        if response and response.status:
            if status_count is not None:
                status_count -= 1
            cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
            status = response.status

    history = self.history + (
        RequestHistory(method, url, error, status, redirect_location),
    )

    new_retry = self.new(
        total=total,
        connect=connect,
        read=read,
        redirect=redirect,
        status=status_count,
        other=other,
        history=history,
    )

    if new_retry.is_exhausted():
        reason = error or ResponseError(cause)
>       raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
E       urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Max retries exceeded with url: /apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices (Caused by NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)"))

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError
------------------------------ Captured log call -------------------------------
WARNING  urllib3.connectionpool:connectionpool.py:868 Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")': /apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices
WARNING  urllib3.connectionpool:connectionpool.py:868 Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")': /apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices
WARNING  urllib3.connectionpool:connectionpool.py:868 Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")': /apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices
______________________ test_sklearn_s3_storagespec_kserve ______________________
[gw0] linux -- Python 3.11.13 /workspace/source/python/kserve/.venv/bin/python

self =

def _new_conn(self) -> socket.socket:
    """Establish a socket connection and set nodelay settings on it.

    :return: New socket connection.
    """
    try:
>       sock = connection.create_connection(
            (self._dns_host, self.port),
            self.timeout,
            source_address=self.source_address,
            socket_options=self.socket_options,
        )

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connection.py:204:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

address = ('a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', 6443)
timeout = None, source_address = None, socket_options = [(6, 1, 1)]

def create_connection(
    address: tuple[str, int],
    timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
    source_address: tuple[str, int] | None = None,
    socket_options: _TYPE_SOCKET_OPTIONS | None = None,
) -> socket.socket:
    """Connect to *address* and return the socket object.

    Convenience function. Connect to *address* (a 2-tuple ``(host,
    port)``) and return the socket object. Passing the optional
    *timeout* parameter will set the timeout on the socket instance
    before attempting to connect.
        If no *timeout* is supplied, the
        global default timeout setting returned by :func:`socket.getdefaulttimeout`
        is used. If *source_address* is set it must be a tuple of (host, port)
        for the socket to bind as a source address before making the connection.
        An host of '' or port 0 tells the OS to use the default.
        """

        host, port = address
        if host.startswith("["):
            host = host.strip("[]")
        err = None

        # Using the value from allowed_gai_family() in the context of getaddrinfo lets
        # us select whether to work with IPv4 DNS records, IPv6 records, or both.
        # The original create_connection function always returns all records.
        family = allowed_gai_family()

        try:
            host.encode("idna")
        except UnicodeError:
            raise LocationParseError(f"'{host}', label empty or too long") from None

>       for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/util/connection.py:60:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

host = 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com'
port = 6443, family = 
type = , proto = 0, flags = 0

    def getaddrinfo(host, port, family=0, type=0, proto=0, flags=0):
        """Resolve host and port into list of address info entries.

        Translate the host/port argument into a sequence of 5-tuples that contain
        all the necessary arguments for creating a socket connected to that service.

        host is a domain name, a string representation of an IPv4/v6 address or
        None. port is a string service name such as 'http', a numeric port number or
        None. By passing None as the value of host and port, you can pass NULL to
        the underlying C API.

        The family, type and proto arguments can be optionally specified in order to
        narrow the list of addresses returned. Passing zero as a value for each of
        these arguments selects the full range of results.
        """
        # We override this function since we want to translate the numeric family
        # and socket type values to enum constants.
        addrlist = []
>       for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
E       socket.gaierror: [Errno -2] Name or service not known

/usr/lib64/python3.11/socket.py:974: gaierror

The above exception was the direct cause of the following exception:

self = 
method = 'POST'
url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io..."memory": "128Mi"}}, "storage": {"key": "localS3", "parameters": {"bucket": "example-models"}, "path": "sklearn"}}}}}'
headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'})
retries = Retry(total=0, connect=None, read=None, redirect=None, status=None)
redirect = False, assert_same_host = False, timeout = None, pool_timeout = None
release_conn = True, chunked = False, body_pos = None, preload_content = True
decode_content = True, response_kw = {}
parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices', query=None, fragment=None)
destination_scheme = None, conn = None, release_this_conn = True
http_tunnel_required = False, err = None, clean_exit = False

    def urlopen(  # type: ignore[override]
        self,
        method: str,
        url: str,
        body: _TYPE_BODY | None = None,
        headers: typing.Mapping[str, str] | None = None,
        retries: Retry | bool | int | None = None,
        redirect: bool = True,
        assert_same_host: bool = True,
        timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
        pool_timeout: int | None = None,
        release_conn: bool | None = None,
        chunked: bool = False,
        body_pos: _TYPE_BODY_POSITION | None = None,
        preload_content: bool = True,
        decode_content: bool = True,
        **response_kw: typing.Any,
    ) -> BaseHTTPResponse:
        """
        Get a connection from the pool and perform an HTTP request. This is the
        lowest level call for making a request, so you'll need to specify all
        the raw details.

        .. note::

           More commonly, it's appropriate to use a convenience method
           such as :meth:`request`.

        .. note::

           `release_conn` will only behave as expected if
           `preload_content=False` because we want to make
           `preload_content=False` the default behaviour someday soon without
           breaking backwards compatibility.

        :param method:
            HTTP request method (such as GET, POST, PUT, etc.)

        :param url:
            The URL to perform the request on.

        :param body:
            Data to send in the request body, either :class:`str`, :class:`bytes`,
            an iterable of :class:`str`/:class:`bytes`, or a file-like object.

        :param headers:
            Dictionary of custom headers to send, such as User-Agent,
            If-None-Match, etc. If None, pool headers are used. If provided,
            these headers completely replace any pool-specific headers.

        :param retries:
            Configure the number of retries to allow before raising a
            :class:`~urllib3.exceptions.MaxRetryError` exception.

            If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a
            :class:`~urllib3.util.retry.Retry` object for fine-grained control
            over different types of retries.
            Pass an integer number to retry connection errors that many times,
            but no other types of errors. Pass zero to never retry.

            If ``False``, then retries are disabled and any exception is raised
            immediately. Also, instead of raising a MaxRetryError on redirects,
            the redirect response will be returned.

        :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.

        :param redirect:
            If True, automatically handle redirects (status codes 301, 302,
            303, 307, 308). Each redirect counts as a retry. Disabling retries
            will disable redirect, too.

        :param assert_same_host:
            If ``True``, will make sure that the host of the pool requests is
            consistent else will raise HostChangedError. When ``False``, you can
            use the pool on an HTTP proxy and request foreign hosts.

        :param timeout:
            If specified, overrides the default timeout for this one
            request. It may be a float (in seconds) or an instance of
            :class:`urllib3.util.Timeout`.

        :param pool_timeout:
            If set and the pool is set to block=True, then this method will
            block for ``pool_timeout`` seconds and raise EmptyPoolError if no
            connection is available within the time period.

        :param bool preload_content:
            If True, the response's body will be preloaded into memory.

        :param bool decode_content:
            If True, will attempt to decode the body based on the
            'content-encoding' header.

        :param release_conn:
            If False, then the urlopen call will not release the connection
            back into the pool once a response is received (but will release if
            you read the entire contents of the response such as when
            `preload_content=True`). This is useful if you're not preloading
            the response's content immediately. You will need to call
            ``r.release_conn()`` on the response ``r`` to return the connection
            back into the pool. If None, it takes the value of ``preload_content``
            which defaults to ``True``.

        :param bool chunked:
            If True, urllib3 will send the body using chunked transfer
            encoding. Otherwise, urllib3 will send the body using the standard
            content-length form. Defaults to False.

        :param int body_pos:
            Position to seek to in file-like body in the event of a retry or
            redirect. Typically this won't need to be set because urllib3 will
            auto-populate the value when needed.
        """
        parsed_url = parse_url(url)
        destination_scheme = parsed_url.scheme

        if headers is None:
            headers = self.headers

        if not isinstance(retries, Retry):
            retries = Retry.from_int(retries, redirect=redirect, default=self.retries)

        if release_conn is None:
            release_conn = preload_content

        # Check host
        if assert_same_host and not self.is_same_host(url):
            raise HostChangedError(self, url, retries)

        # Ensure that the URL we're connecting to is properly encoded
        if url.startswith("/"):
            url = to_str(_encode_target(url))
        else:
            url = to_str(parsed_url.url)

        conn = None

        # Track whether `conn` needs to be released before
        # returning/raising/recursing. Update this variable if necessary, and
        # leave `release_conn` constant throughout the function. That way, if
        # the function recurses, the original value of `release_conn` will be
        # passed down into the recursive call, and its value will be respected.
        #
        # See issue #651 [1] for details.
        #
        # [1]
        release_this_conn = release_conn

        http_tunnel_required = connection_requires_http_tunnel(
            self.proxy, self.proxy_config, destination_scheme
        )

        # Merge the proxy headers. Only done when not using HTTP CONNECT. We
        # have to copy the headers dict so we can safely change it without those
        # changes being reflected in anyone else's copy.
        if not http_tunnel_required:
            headers = headers.copy()  # type: ignore[attr-defined]
            headers.update(self.proxy_headers)  # type: ignore[union-attr]

        # Must keep the exception bound to a separate variable or else Python 3
        # complains about UnboundLocalError.
        err = None

        # Keep track of whether we cleanly exited the except block. This
        # ensures we do proper cleanup in finally.
        clean_exit = False

        # Rewind body position, if needed. Record current position
        # for future rewinds in the event of a redirect/retry.
        body_pos = set_file_position(body, body_pos)

        try:
            # Request a connection from the queue.
            timeout_obj = self._get_timeout(timeout)
            conn = self._get_conn(timeout=pool_timeout)

            conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]

            # Is this a closed/new connection that requires CONNECT tunnelling?
            if self.proxy is not None and http_tunnel_required and conn.is_closed:
                try:
                    self._prepare_proxy(conn)
                except (BaseSSLError, OSError, SocketTimeout) as e:
                    self._raise_timeout(
                        err=e, url=self.proxy.url, timeout_value=conn.timeout
                    )
                    raise

            # If we're going to release the connection in ``finally:``, then
            # the response doesn't need to know about the connection. Otherwise
            # it will also try to release it and we'll have a double-release
            # mess.
            response_conn = conn if not release_conn else None

            # Make the request on the HTTPConnection object
>           response = self._make_request(
                conn,
                method,
                url,
                timeout=timeout_obj,
                body=body,
                headers=headers,
                chunked=chunked,
                retries=retries,
                response_conn=response_conn,
                preload_content=preload_content,
                decode_content=decode_content,
                **response_kw,
            )

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:787:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = 
conn = 
method = 'POST'
url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io..."memory": "128Mi"}}, "storage": {"key": "localS3", "parameters": {"bucket": "example-models"}, "path": "sklearn"}}}}}'
headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'})
retries = Retry(total=0, connect=None, read=None, redirect=None, status=None)
timeout = Timeout(connect=None, read=None, total=None), chunked = False
response_conn = None, preload_content = True, decode_content = True
enforce_content_length = True

    def _make_request(
        self,
        conn: BaseHTTPConnection,
        method: str,
        url: str,
        body: _TYPE_BODY | None = None,
        headers: typing.Mapping[str, str] | None = None,
        retries: Retry | None = None,
        timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
        chunked: bool = False,
        response_conn: BaseHTTPConnection | None = None,
        preload_content: bool = True,
        decode_content: bool = True,
        enforce_content_length: bool = True,
    ) -> BaseHTTPResponse:
        """
        Perform a request on a given urllib connection object taken from our
        pool.

        :param conn:
            a connection from one of our connection pools

        :param method:
            HTTP request method (such as GET, POST, PUT, etc.)

        :param url:
            The URL to perform the request on.

        :param body:
            Data to send in the request body, either :class:`str`, :class:`bytes`,
            an iterable of :class:`str`/:class:`bytes`, or a file-like object.
        :param headers:
            Dictionary of custom headers to send, such as User-Agent,
            If-None-Match, etc. If None, pool headers are used. If provided,
            these headers completely replace any pool-specific headers.

        :param retries:
            Configure the number of retries to allow before raising a
            :class:`~urllib3.exceptions.MaxRetryError` exception.

            Pass ``None`` to retry until you receive a response. Pass a
            :class:`~urllib3.util.retry.Retry` object for fine-grained control
            over different types of retries.
            Pass an integer number to retry connection errors that many times,
            but no other types of errors. Pass zero to never retry.

            If ``False``, then retries are disabled and any exception is raised
            immediately. Also, instead of raising a MaxRetryError on redirects,
            the redirect response will be returned.

        :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.

        :param timeout:
            If specified, overrides the default timeout for this one
            request. It may be a float (in seconds) or an instance of
            :class:`urllib3.util.Timeout`.

        :param chunked:
            If True, urllib3 will send the body using chunked transfer
            encoding. Otherwise, urllib3 will send the body using the standard
            content-length form. Defaults to False.

        :param response_conn:
            Set this to ``None`` if you will handle releasing the connection or
            set the connection to have the response release it.

        :param preload_content:
            If True, the response's body will be preloaded during construction.

        :param decode_content:
            If True, will attempt to decode the body based on the
            'content-encoding' header.

        :param enforce_content_length:
            Enforce content length checking. Body returned by server must match
            value of Content-Length header, if present. Otherwise, raise error.
        """
        self.num_requests += 1

        timeout_obj = self._get_timeout(timeout)
        timeout_obj.start_connect()
        conn.timeout = Timeout.resolve_default_timeout(timeout_obj.connect_timeout)

        try:
            # Trigger any extra validation we need to do.
            try:
                self._validate_conn(conn)
            except (SocketTimeout, BaseSSLError) as e:
                self._raise_timeout(err=e, url=url, timeout_value=conn.timeout)
                raise

        # _validate_conn() starts the connection to an HTTPS proxy
        # so we need to wrap errors with 'ProxyError' here too.
        except (
            OSError,
            NewConnectionError,
            TimeoutError,
            BaseSSLError,
            CertificateError,
            SSLError,
        ) as e:
            new_e: Exception = e
            if isinstance(e, (BaseSSLError, CertificateError)):
                new_e = SSLError(e)
            # If the connection didn't successfully connect to it's proxy
            # then there
            if isinstance(
                new_e, (OSError, NewConnectionError, TimeoutError, SSLError)
            ) and (conn and conn.proxy and not conn.has_connected_to_proxy):
                new_e = _wrap_proxy_error(new_e, conn.proxy.scheme)
>           raise new_e

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:488:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = 
conn = 
method = 'POST'
url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io..."memory": "128Mi"}}, "storage": {"key": "localS3", "parameters": {"bucket": "example-models"}, "path": "sklearn"}}}}}'
headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'})
retries = Retry(total=0, connect=None, read=None, redirect=None, status=None)
timeout = Timeout(connect=None, read=None, total=None), chunked = False
response_conn = None, preload_content = True, decode_content = True
enforce_content_length = True

    def _make_request(
        self,
        conn: BaseHTTPConnection,
        method: str,
        url: str,
        body: _TYPE_BODY | None = None,
        headers: typing.Mapping[str, str] | None = None,
        retries: Retry | None = None,
        timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
        chunked: bool = False,
        response_conn: BaseHTTPConnection | None = None,
        preload_content: bool = True,
        decode_content: bool = True,
        enforce_content_length: bool = True,
    ) -> BaseHTTPResponse:
        """
        Perform a request on a given urllib connection object taken from our
        pool.

        :param conn:
            a connection from one of our connection pools

        :param method:
            HTTP request method (such as GET, POST, PUT, etc.)

        :param url:
            The URL to perform the request on.

        :param body:
            Data to send in the request body, either :class:`str`, :class:`bytes`,
            an iterable of :class:`str`/:class:`bytes`, or a file-like object.
        :param headers:
            Dictionary of custom headers to send, such as User-Agent,
            If-None-Match, etc. If None, pool headers are used. If provided,
            these headers completely replace any pool-specific headers.

        :param retries:
            Configure the number of retries to allow before raising a
            :class:`~urllib3.exceptions.MaxRetryError` exception.

            Pass ``None`` to retry until you receive a response. Pass a
            :class:`~urllib3.util.retry.Retry` object for fine-grained control
            over different types of retries.
            Pass an integer number to retry connection errors that many times,
            but no other types of errors. Pass zero to never retry.

            If ``False``, then retries are disabled and any exception is raised
            immediately. Also, instead of raising a MaxRetryError on redirects,
            the redirect response will be returned.

        :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.

        :param timeout:
            If specified, overrides the default timeout for this one
            request. It may be a float (in seconds) or an instance of
            :class:`urllib3.util.Timeout`.

        :param chunked:
            If True, urllib3 will send the body using chunked transfer
            encoding. Otherwise, urllib3 will send the body using the standard
            content-length form. Defaults to False.

        :param response_conn:
            Set this to ``None`` if you will handle releasing the connection or
            set the connection to have the response release it.

        :param preload_content:
            If True, the response's body will be preloaded during construction.

        :param decode_content:
            If True, will attempt to decode the body based on the
            'content-encoding' header.

        :param enforce_content_length:
            Enforce content length checking. Body returned by server must match
            value of Content-Length header, if present. Otherwise, raise error.
        """
        self.num_requests += 1

        timeout_obj = self._get_timeout(timeout)
        timeout_obj.start_connect()
        conn.timeout = Timeout.resolve_default_timeout(timeout_obj.connect_timeout)

        try:
            # Trigger any extra validation we need to do.
            try:
>               self._validate_conn(conn)

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:464:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = 
conn = 

    def _validate_conn(self, conn: BaseHTTPConnection) -> None:
        """
        Called right before a request is made, after the socket is created.
        """
        super()._validate_conn(conn)

        # Force connect early to allow us to validate the connection.
        if conn.is_closed:
>           conn.connect()

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:1093:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = 

    def connect(self) -> None:
        # Today we don't need to be doing this step before the /actual/ socket
        # connection, however in the future we'll need to decide whether to
        # create a new socket or re-use an existing "shared" socket as a part
        # of the HTTP/2 handshake dance.
        if self._tunnel_host is not None and self._tunnel_port is not None:
            probe_http2_host = self._tunnel_host
            probe_http2_port = self._tunnel_port
        else:
            probe_http2_host = self.host
            probe_http2_port = self.port

        # Check if the target origin supports HTTP/2.
        # If the value comes back as 'None' it means that the current thread
        # is probing for HTTP/2 support. Otherwise, we're waiting for another
        # probe to complete, or we get a value right away.
        target_supports_http2: bool | None
        if "h2" in ssl_.ALPN_PROTOCOLS:
            target_supports_http2 = http2_probe.acquire_and_get(
                host=probe_http2_host, port=probe_http2_port
            )
        else:
            # If HTTP/2 isn't going to be offered it doesn't matter if
            # the target supports HTTP/2. Don't want to make a probe.
            target_supports_http2 = False

        if self._connect_callback is not None:
            self._connect_callback(
                "before connect",
                thread_id=threading.get_ident(),
                target_supports_http2=target_supports_http2,
            )

        try:
            sock: socket.socket | ssl.SSLSocket
>           self.sock = sock = self._new_conn()

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connection.py:759:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = 

    def _new_conn(self) -> socket.socket:
        """Establish a socket connection and set nodelay settings on it.

        :return: New socket connection.
        """
        try:
            sock = connection.create_connection(
                (self._dns_host, self.port),
                self.timeout,
                source_address=self.source_address,
                socket_options=self.socket_options,
            )
        except socket.gaierror as e:
>           raise NameResolutionError(self.host, self, e) from e
E           urllib3.exceptions.NameResolutionError: HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connection.py:211: NameResolutionError

The above exception was the direct cause of the following exception:

rest_v1_client = 

    @pytest.mark.predictor
@pytest.mark.path_based_routing
@pytest.mark.asyncio(scope="session")
async def test_sklearn_s3_storagespec_kserve(rest_v1_client):
    service_name = "isvc-sklearn-s3"
    predictor = V1beta1PredictorSpec(
        min_replicas=1,
        sklearn=V1beta1SKLearnSpec(
            storage=V1beta1StorageSpec(
                key="localS3",
                path="sklearn",
                parameters={"bucket": "example-models"},
            ),
            resources=V1ResourceRequirements(
                requests={"cpu": "50m", "memory": "128Mi"},
                limits={"cpu": "100m", "memory": "256Mi"},
            ),
        ),
    )

    isvc = V1beta1InferenceService(
        api_version=constants.KSERVE_V1BETA1,
        kind=constants.KSERVE_KIND_INFERENCESERVICE,
        metadata=client.V1ObjectMeta(
            name=service_name,
            namespace=KSERVE_TEST_NAMESPACE,
            labels={
                constants.KSERVE_LABEL_NETWORKING_VISIBILITY: constants.KSERVE_LABEL_NETWORKING_VISIBILITY_EXPOSED,
            },
        ),
        spec=V1beta1InferenceServiceSpec(predictor=predictor),
    )

    kserve_client = KServeClient(
        config_file=os.environ.get("KUBECONFIG", "~/.kube/config")
    )
>   kserve_client.create(isvc)

storagespec/test_s3_storagespec.py:68:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
inferenceservice = {'api_version': 'serving.kserve.io/v1beta1',
 'kind': 'InferenceService',
 'metadata': {'annotations': None,
 ...
 'worker_spec': None,
 'xgboost': None},
 'transformer': None},
 'status': None}
namespace = 'kserve-ci-e2e-test', watch = False, timeout_seconds = 600

    def create(
        self, inferenceservice, namespace=None, watch=False, timeout_seconds=600
    ):  # pylint:disable=inconsistent-return-statements
        """
        Create the inference service
        :param inferenceservice: inference service object
        :param namespace: defaults to current or default namespace
        :param watch: True to watch the created service until timeout elapsed or status is ready
        :param timeout_seconds: timeout seconds for watch, default to 600s
        :return: created inference service
        """

        version = inferenceservice.api_version.split("/")[1]

        if namespace is None:
            namespace = utils.get_isvc_namespace(inferenceservice)

        try:
>           outputs = self.api_instance.create_namespaced_custom_object(
                constants.KSERVE_GROUP,
                version,
                namespace,
                constants.KSERVE_PLURAL_INFERENCESERVICE,
                inferenceservice,
            )

../../python/kserve/kserve/api/kserve_client.py:145:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
group = 'serving.kserve.io', version = 'v1beta1'
namespace = 'kserve-ci-e2e-test', plural = 'inferenceservices'
body = {'api_version': 'serving.kserve.io/v1beta1',
 'kind': 'InferenceService',
 'metadata': {'annotations': None,
 ...
 'worker_spec': None,
 'xgboost': None},
 'transformer': None},
 'status': None}
kwargs = {'_return_http_data_only': True}

    def create_namespaced_custom_object(self, group, version, namespace, plural, body, **kwargs):  # noqa: E501
        """create_namespaced_custom_object  # noqa: E501

        Creates a namespace scoped Custom object  # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.create_namespaced_custom_object(group, version, namespace, plural, body, async_req=True)
        >>> result = thread.get()

        :param async_req bool: execute request asynchronously
        :param str group: The custom resource's group name (required)
        :param str version: The custom resource's version (required)
        :param str namespace: The custom resource's namespace (required)
        :param str plural: The custom resource's plural name. For TPRs this would be lowercase plural kind. (required)
        :param object body: The JSON schema of the Resource to create. (required)
        :param str pretty: If 'true', then the output is pretty printed.
        :param str dry_run: When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed
        :param str field_manager: fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint.
        :param str field_validation: fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. (optional)
        :param _preload_content: if False, the urllib3.HTTPResponse object will
                                 be returned without reading/decoding response
                                 data. Default is True.
        :param _request_timeout: timeout setting for this request. If one
                                 number provided, it will be total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
        :return: object
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
>       return self.create_namespaced_custom_object_with_http_info(group, version, namespace, plural, body, **kwargs)  # noqa: E501

../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api/custom_objects_api.py:231:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
group = 'serving.kserve.io', version = 'v1beta1'
namespace = 'kserve-ci-e2e-test', plural = 'inferenceservices'
body = {'api_version': 'serving.kserve.io/v1beta1',
 'kind': 'InferenceService',
 'metadata': {'annotations': None,
 ... 'worker_spec': None,
 'xgboost': None},
 'transformer': None},
 'status': None}
kwargs = {'_return_http_data_only': True}
local_var_params = {'_return_http_data_only': True, 'all_params': ['group', 'version', 'namespace', 'plural', 'body', 'pretty', ...], 'au... 'worker_spec': None,
 'xgboost': None},
 'transformer': None},
 'status': None}, ...}
all_params = ['group', 'version', 'namespace', 'plural', 'body', 'pretty', ...]
key = '_return_http_data_only', val = True, collection_formats = {}
path_params = {'group': 'serving.kserve.io', 'namespace': 'kserve-ci-e2e-test', 'plural': 'inferenceservices', 'version': 'v1beta1'}
query_params = []

    def create_namespaced_custom_object_with_http_info(self, group, version, namespace, plural, body, **kwargs):  # noqa: E501
        """create_namespaced_custom_object  # noqa: E501

        Creates a namespace scoped Custom object  # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.create_namespaced_custom_object_with_http_info(group, version, namespace, plural, body, async_req=True)
        >>> result = thread.get()

        :param async_req bool: execute request asynchronously
        :param str group: The custom resource's group name (required)
        :param str version: The custom resource's version (required)
        :param str namespace: The custom resource's namespace (required)
        :param str plural: The custom resource's plural name. For TPRs this would be lowercase plural kind. (required)
        :param object body: The JSON schema of the Resource to create. (required)
        :param str pretty: If 'true', then the output is pretty printed.
        :param str dry_run: When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed
        :param str field_manager: fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint.
        :param str field_validation: fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. (optional)
        :param _return_http_data_only: response data without head status code
                                       and headers
        :param _preload_content: if False, the urllib3.HTTPResponse object will
                                 be returned without reading/decoding response
                                 data. Default is True.
        :param _request_timeout: timeout setting for this request. If one
                                 number provided, it will be total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
        :return: tuple(object, status_code(int), headers(HTTPHeaderDict))
                 If the method is called asynchronously,
                 returns the request thread.
        """

        local_var_params = locals()

        all_params = [
            'group',
            'version',
            'namespace',
            'plural',
            'body',
            'pretty',
            'dry_run',
            'field_manager',
            'field_validation'
        ]
        all_params.extend(
            [
                'async_req',
                '_return_http_data_only',
                '_preload_content',
                '_request_timeout'
            ]
        )

        for key, val in six.iteritems(local_var_params['kwargs']):
            if key not in all_params:
                raise ApiTypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method create_namespaced_custom_object" % key
                )
            local_var_params[key] = val
        del local_var_params['kwargs']
        # verify the required parameter 'group' is set
        if self.api_client.client_side_validation and ('group' not in local_var_params or  # noqa: E501
                                                       local_var_params['group'] is None):  # noqa: E501
            raise ApiValueError("Missing the required parameter `group` when calling `create_namespaced_custom_object`")  # noqa: E501
        # verify the required parameter 'version' is set
        if self.api_client.client_side_validation and ('version' not in local_var_params or  # noqa: E501
                                                       local_var_params['version'] is None):  # noqa: E501
            raise ApiValueError("Missing the required parameter `version` when calling `create_namespaced_custom_object`")  # noqa: E501
        # verify the required parameter 'namespace' is set
        if self.api_client.client_side_validation and ('namespace' not in local_var_params or  # noqa: E501
                                                       local_var_params['namespace'] is None):  # noqa: E501
            raise ApiValueError("Missing the required parameter `namespace` when calling `create_namespaced_custom_object`")  # noqa: E501
        # verify the required parameter 'plural' is set
        if self.api_client.client_side_validation and ('plural' not in local_var_params or  # noqa: E501
                                                       local_var_params['plural'] is None):  # noqa: E501
            raise ApiValueError("Missing the required parameter `plural` when calling `create_namespaced_custom_object`")  # noqa: E501
        # verify the required parameter 'body' is set
        if self.api_client.client_side_validation and ('body' not in local_var_params or  # noqa: E501
                                                       local_var_params['body'] is None):  # noqa: E501
            raise ApiValueError("Missing the required parameter `body` when calling `create_namespaced_custom_object`")  # noqa: E501

        collection_formats = {}

        path_params = {}
        if 'group' in local_var_params:
            path_params['group'] = local_var_params['group']  # noqa: E501
        if 'version' in local_var_params:
            path_params['version'] = local_var_params['version']  # noqa: E501
        if 'namespace' in local_var_params:
            path_params['namespace'] = local_var_params['namespace']  # noqa: E501
        if 'plural' in local_var_params:
            path_params['plural'] = local_var_params['plural']  # noqa: E501

        query_params = []
        if 'pretty' in local_var_params and local_var_params['pretty'] is not None:  # noqa: E501
            query_params.append(('pretty', local_var_params['pretty']))  # noqa: E501
        if 'dry_run' in local_var_params and local_var_params['dry_run'] is not None:  # noqa: E501
            query_params.append(('dryRun', local_var_params['dry_run']))  # noqa: E501
        if 'field_manager' in local_var_params and local_var_params['field_manager'] is not None:  # noqa: E501
            query_params.append(('fieldManager', local_var_params['field_manager']))  # noqa: E501
        if 'field_validation' in local_var_params and local_var_params['field_validation'] is not None:  # noqa: E501
            query_params.append(('fieldValidation', local_var_params['field_validation']))  # noqa: E501

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None
        if 'body' in local_var_params:
            body_params = local_var_params['body']
        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.select_header_accept(
            ['application/json'])  # noqa: E501

        # Authentication setting
        auth_settings = ['BearerToken']  # noqa: E501

>       return self.api_client.call_api(
            '/apis/{group}/{version}/namespaces/{namespace}/{plural}', 'POST',
            path_params,
            query_params,
            header_params,
            body=body_params,
            post_params=form_params,
            files=local_var_files,
            response_type='object',  # noqa: E501
            auth_settings=auth_settings,
            async_req=local_var_params.get('async_req'),
            _return_http_data_only=local_var_params.get('_return_http_data_only'),  # noqa: E501
            _preload_content=local_var_params.get('_preload_content', True),
            _request_timeout=local_var_params.get('_request_timeout'),
            collection_formats=collection_formats)
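The generated method above maps its snake_case keyword arguments onto the Kubernetes API's camelCase query parameters (`dry_run` → `dryRun`, `field_manager` → `fieldManager`, `field_validation` → `fieldValidation`), dropping any that are `None`. A minimal sketch of that mapping, using a hypothetical helper name (`build_query_params` is not part of the kubernetes client API):

```python
# Sketch of the kwarg -> query-parameter mapping done by the generated
# create_namespaced_custom_object_with_http_info method above.
# `build_query_params` and KWARG_TO_QUERY are illustrative names only.
KWARG_TO_QUERY = {
    "pretty": "pretty",
    "dry_run": "dryRun",
    "field_manager": "fieldManager",
    "field_validation": "fieldValidation",
}

def build_query_params(**kwargs):
    """Return (apiName, value) pairs for recognized kwargs that are set."""
    return [
        (KWARG_TO_QUERY[key], value)
        for key, value in kwargs.items()
        if key in KWARG_TO_QUERY and value is not None
    ]
```

For example, `build_query_params(dry_run="All", field_validation=None)` yields `[("dryRun", "All")]`, mirroring the chain of `query_params.append(...)` guards in the generated code.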
../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api/custom_objects_api.py:354:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
resource_path = '/apis/{group}/{version}/namespaces/{namespace}/{plural}'
method = 'POST'
path_params = {'group': 'serving.kserve.io', 'namespace': 'kserve-ci-e2e-test', 'plural': 'inferenceservices', 'version': 'v1beta1'}
query_params = []
header_params = {'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
body = {'api_version': 'serving.kserve.io/v1beta1',
 'kind': 'InferenceService',
 'metadata': {'annotations': None,
 ... 'worker_spec': None,
 'xgboost': None},
 'transformer': None},
 'status': None}
post_params = [], files = {}, response_type = 'object'
auth_settings = ['BearerToken'], async_req = None, _return_http_data_only = True
collection_formats = {}, _preload_content = True, _request_timeout = None
_host = None

    def call_api(self, resource_path, method,
                 path_params=None, query_params=None, header_params=None,
                 body=None, post_params=None, files=None,
                 response_type=None, auth_settings=None, async_req=None,
                 _return_http_data_only=None, collection_formats=None,
                 _preload_content=True, _request_timeout=None, _host=None):
        """Makes the HTTP request (synchronous) and returns deserialized data.

        To make an async_req request, set the async_req parameter.

        :param resource_path: Path to method endpoint.
        :param method: Method to call.
        :param path_params: Path parameters in the url.
        :param query_params: Query parameters in the url.
        :param header_params: Header parameters to be
            placed in the request header.
        :param body: Request body.
        :param post_params dict: Request post form parameters,
            for `application/x-www-form-urlencoded`, `multipart/form-data`.
        :param auth_settings list: Auth Settings names for the request.
        :param response: Response data type.
        :param files dict: key -> filename, value -> filepath,
            for `multipart/form-data`.
        :param async_req bool: execute request asynchronously
        :param _return_http_data_only: response data without head status code
                                       and headers
        :param collection_formats: dict of collection formats for path, query,
            header, and post parameters.
        :param _preload_content: if False, the urllib3.HTTPResponse object will
                                 be returned without reading/decoding response
                                 data. Default is True.
        :param _request_timeout: timeout setting for this request. If one
                                 number provided, it will be total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
        :return:
            If async_req parameter is True,
            the request will be called asynchronously.
            The method will return the request thread.
            If parameter async_req is False or missing,
            then the method will return the response directly.
        """
        if not async_req:
>           return self.__call_api(resource_path, method,
                                   path_params, query_params, header_params,
                                   body, post_params, files,
                                   response_type, auth_settings,
                                   _return_http_data_only, collection_formats,
                                   _preload_content, _request_timeout, _host)

../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api_client.py:348:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
resource_path = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
method = 'POST'
path_params = [('group', 'serving.kserve.io'), ('version', 'v1beta1'), ('namespace', 'kserve-ci-e2e-test'), ('plural', 'inferenceservices')]
query_params = []
header_params = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
body = {'apiVersion': 'serving.kserve.io/v1beta1', 'kind': 'InferenceService', 'metadata': {'labels': {'networking.kserve.io/..., 'memory': '128Mi'}}, 'storage': {'key': 'localS3', 'parameters': {'bucket': 'example-models'}, 'path': 'sklearn'}}}}}
post_params = [], files = {}, response_type = 'object'
auth_settings = ['BearerToken'], _return_http_data_only = True
collection_formats = {}, _preload_content = True, _request_timeout = None
_host = None

    def __call_api(
            self, resource_path, method, path_params=None,
            query_params=None, header_params=None, body=None, post_params=None,
            files=None, response_type=None, auth_settings=None,
            _return_http_data_only=None, collection_formats=None,
[e2e-predictor] _preload_content=True, _request_timeout=None, _host=None): [e2e-predictor] [e2e-predictor] config = self.configuration [e2e-predictor] [e2e-predictor] # header parameters [e2e-predictor] header_params = header_params or {} [e2e-predictor] header_params.update(self.default_headers) [e2e-predictor] if self.cookie: [e2e-predictor] header_params['Cookie'] = self.cookie [e2e-predictor] if header_params: [e2e-predictor] header_params = self.sanitize_for_serialization(header_params) [e2e-predictor] header_params = dict(self.parameters_to_tuples(header_params, [e2e-predictor] collection_formats)) [e2e-predictor] [e2e-predictor] # path parameters [e2e-predictor] if path_params: [e2e-predictor] path_params = self.sanitize_for_serialization(path_params) [e2e-predictor] path_params = self.parameters_to_tuples(path_params, [e2e-predictor] collection_formats) [e2e-predictor] for k, v in path_params: [e2e-predictor] # specified safe chars, encode everything [e2e-predictor] resource_path = resource_path.replace( [e2e-predictor] '{%s}' % k, [e2e-predictor] quote(str(v), safe=config.safe_chars_for_path_param) [e2e-predictor] ) [e2e-predictor] [e2e-predictor] # query parameters [e2e-predictor] if query_params: [e2e-predictor] query_params = self.sanitize_for_serialization(query_params) [e2e-predictor] query_params = self.parameters_to_tuples(query_params, [e2e-predictor] collection_formats) [e2e-predictor] [e2e-predictor] # post parameters [e2e-predictor] if post_params or files: [e2e-predictor] post_params = post_params if post_params else [] [e2e-predictor] post_params = self.sanitize_for_serialization(post_params) [e2e-predictor] post_params = self.parameters_to_tuples(post_params, [e2e-predictor] collection_formats) [e2e-predictor] post_params.extend(self.files_parameters(files)) [e2e-predictor] [e2e-predictor] # auth setting [e2e-predictor] self.update_params_for_auth(header_params, query_params, auth_settings) [e2e-predictor] [e2e-predictor] # body 
[e2e-predictor] if body: [e2e-predictor] body = self.sanitize_for_serialization(body) [e2e-predictor] [e2e-predictor] # request url [e2e-predictor] if _host is None: [e2e-predictor] url = self.configuration.host + resource_path [e2e-predictor] else: [e2e-predictor] # use server/host defined in path or operation instead [e2e-predictor] url = _host + resource_path [e2e-predictor] [e2e-predictor] # perform request and return response [e2e-predictor] > response_data = self.request( [e2e-predictor] method, url, query_params=query_params, headers=header_params, [e2e-predictor] post_params=post_params, body=body, [e2e-predictor] _preload_content=_preload_content, [e2e-predictor] _request_timeout=_request_timeout) [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api_client.py:180: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] method = 'POST' [e2e-predictor] url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices' [e2e-predictor] query_params = [] [e2e-predictor] headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'} [e2e-predictor] post_params = [] [e2e-predictor] body = {'apiVersion': 'serving.kserve.io/v1beta1', 'kind': 'InferenceService', 'metadata': {'labels': {'networking.kserve.io/..., 'memory': '128Mi'}}, 'storage': {'key': 'localS3', 'parameters': {'bucket': 'example-models'}, 'path': 'sklearn'}}}}} [e2e-predictor] _preload_content = True, _request_timeout = None [e2e-predictor] [e2e-predictor] def request(self, method, url, query_params=None, headers=None, [e2e-predictor] post_params=None, body=None, _preload_content=True, [e2e-predictor] _request_timeout=None): [e2e-predictor] """Makes the HTTP request using 
RESTClient."""
        if method == "GET":
            return self.rest_client.GET(url,
                                        query_params=query_params,
                                        _preload_content=_preload_content,
                                        _request_timeout=_request_timeout,
                                        headers=headers)
        elif method == "HEAD":
            return self.rest_client.HEAD(url,
                                         query_params=query_params,
                                         _preload_content=_preload_content,
                                         _request_timeout=_request_timeout,
                                         headers=headers)
        elif method == "OPTIONS":
            return self.rest_client.OPTIONS(url,
                                            query_params=query_params,
                                            headers=headers,
                                            _preload_content=_preload_content,
                                            _request_timeout=_request_timeout)
        elif method == "POST":
>           return self.rest_client.POST(url,
                                         query_params=query_params,
                                         headers=headers,
                                         post_params=post_params,
                                         _preload_content=_preload_content,
                                         _request_timeout=_request_timeout,

../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api_client.py:391:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
query_params = [], post_params = []
body = {'apiVersion': 'serving.kserve.io/v1beta1', 'kind': 'InferenceService', 'metadata': {'labels': {'networking.kserve.io/..., 'memory': '128Mi'}}, 'storage': {'key': 'localS3', 'parameters': {'bucket': 'example-models'}, 'path': 'sklearn'}}}}}
_preload_content = True, _request_timeout = None

    def POST(self, url, headers=None, query_params=None, post_params=None,
             body=None, _preload_content=True, _request_timeout=None):
>       return self.request("POST", url,
                            headers=headers,
                            query_params=query_params,
                            post_params=post_params,
                            _preload_content=_preload_content,
                            _request_timeout=_request_timeout,
                            body=body)

../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/rest.py:279:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
method = 'POST'
url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
query_params = []
headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
body = {'apiVersion': 'serving.kserve.io/v1beta1', 'kind': 'InferenceService', 'metadata': {'labels': {'networking.kserve.io/..., 'memory': '128Mi'}}, 'storage': {'key': 'localS3', 'parameters': {'bucket': 'example-models'}, 'path': 'sklearn'}}}}}
post_params = {}, _preload_content = True, _request_timeout = None

    def request(self, method, url, query_params=None, headers=None,
                body=None, post_params=None, _preload_content=True,
                _request_timeout=None):
        """Perform requests.
        :param method: http request method
        :param url: http request url
        :param query_params: query parameters in the url
        :param headers: http request headers
        :param body: request json body, for `application/json`
        :param post_params: request post parameters,
                            `application/x-www-form-urlencoded`
                            and `multipart/form-data`
        :param _preload_content: if False, the urllib3.HTTPResponse object will
                                 be returned without reading/decoding response
                                 data. Default is True.
        :param _request_timeout: timeout setting for this request. If one
                                 number provided, it will be total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
        """
        method = method.upper()
        assert method in ['GET', 'HEAD', 'DELETE', 'POST', 'PUT',
                          'PATCH', 'OPTIONS']

        if post_params and body:
            raise ApiValueError(
                "body parameter cannot be used with post_params parameter."
            )

        post_params = post_params or {}
        headers = headers or {}

        timeout = None
        if _request_timeout:
            if isinstance(_request_timeout, (int, ) if six.PY3 else (int, long)):  # noqa: E501,F821
                timeout = urllib3.Timeout(total=_request_timeout)
            elif (isinstance(_request_timeout, tuple) and
                    len(_request_timeout) == 2):
                timeout = urllib3.Timeout(
                    connect=_request_timeout[0], read=_request_timeout[1])

        if 'Content-Type' not in headers:
            headers['Content-Type'] = 'application/json'

        try:
            # For `POST`, `PUT`, `PATCH`, `OPTIONS`, `DELETE`
            if method in ['POST', 'PUT', 'PATCH', 'OPTIONS', 'DELETE']:
                if query_params:
                    url += '?' + urlencode(query_params)
                if (re.search('json', headers['Content-Type'], re.IGNORECASE) or
                        headers['Content-Type'] == 'application/apply-patch+yaml'):
                    if headers['Content-Type'] == 'application/json-patch+json':
                        if not isinstance(body, list):
                            headers['Content-Type'] = \
                                'application/strategic-merge-patch+json'
                    request_body = None
                    if body is not None:
                        request_body = json.dumps(body)
>                   r = self.pool_manager.request(
                        method, url,
                        body=request_body,
                        preload_content=_preload_content,
                        timeout=timeout,

../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/rest.py:172:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
method = 'POST'
url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io... "memory": "128Mi"}}, "storage": {"key": "localS3", "parameters": {"bucket": "example-models"}, "path": "sklearn"}}}}}'
fields = None
headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
json = None
urlopen_kw = {'body': '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking....ocalS3", "parameters": {"bucket": "example-models"}, "path": "sklearn"}}}}}', 'preload_content': True, 'timeout': None}

    def request(
        self,
        method: str,
        url: str,
        body: _TYPE_BODY | None = None,
        fields: _TYPE_FIELDS | None = None,
        headers: typing.Mapping[str, str] | None = None,
        json: typing.Any | None = None,
        **urlopen_kw: typing.Any,
    ) -> BaseHTTPResponse:
        """
        Make a request using :meth:`urlopen` with the appropriate encoding of
        ``fields`` based on the ``method`` used.

        This is a convenience method that requires the least amount of manual
        effort. It can be used in most situations, while still having the
        option to drop down to more specific methods when necessary, such as
        :meth:`request_encode_url`, :meth:`request_encode_body`,
        or even the lowest level :meth:`urlopen`.

        :param method:
            HTTP request method (such as GET, POST, PUT, etc.)
        :param url:
            The URL to perform the request on.

        :param body:
            Data to send in the request body, either :class:`str`, :class:`bytes`,
            an iterable of :class:`str`/:class:`bytes`, or a file-like object.

        :param fields:
            Data to encode and send in the URL or request body, depending on ``method``.

        :param headers:
            Dictionary of custom headers to send, such as User-Agent,
            If-None-Match, etc. If None, pool headers are used. If provided,
            these headers completely replace any pool-specific headers.

        :param json:
            Data to encode and send as JSON with UTF-encoded in the request body.
            The ``"Content-Type"`` header will be set to ``"application/json"``
            unless specified otherwise.
        """
        method = method.upper()

        if json is not None and body is not None:
            raise TypeError(
                "request got values for both 'body' and 'json' parameters which are mutually exclusive"
            )

        if json is not None:
            if headers is None:
                headers = self.headers

            if not ("content-type" in map(str.lower, headers.keys())):
                headers = HTTPHeaderDict(headers)
                headers["Content-Type"] = "application/json"

            body = _json.dumps(json, separators=(",", ":"), ensure_ascii=False).encode(
                "utf-8"
            )

        if body is not None:
            urlopen_kw["body"] = body

        if method in self._encode_url_methods:
            return self.request_encode_url(
                method,
                url,
                fields=fields,  # type: ignore[arg-type]
                headers=headers,
                **urlopen_kw,
            )
        else:
>           return self.request_encode_body(
                method, url, fields=fields, headers=headers, **urlopen_kw
            )

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/_request_methods.py:143:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
method = 'POST'
url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
fields = None
headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
encode_multipart = True, multipart_boundary = None
urlopen_kw = {'body': '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking....ocalS3", "parameters": {"bucket": "example-models"}, "path": "sklearn"}}}}}', 'preload_content': True, 'timeout': None}
extra_kw = {'body': '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking....nt': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'}), 'preload_content': True, 'timeout': None}

    def request_encode_body(
        self,
        method: str,
        url: str,
        fields: _TYPE_FIELDS | None = None,
        headers: typing.Mapping[str, str] | None = None,
        encode_multipart: bool = True,
        multipart_boundary: str | None = None,
        **urlopen_kw: str,
    ) -> BaseHTTPResponse:
        """
        Make a request using :meth:`urlopen` with the ``fields`` encoded in
        the body. This is useful for request methods like POST, PUT, PATCH, etc.

        When ``encode_multipart=True`` (default), then
        :func:`urllib3.encode_multipart_formdata` is used to encode
        the payload with the appropriate content type. Otherwise
        :func:`urllib.parse.urlencode` is used with the
        'application/x-www-form-urlencoded' content type.

        Multipart encoding must be used when posting files, and it's reasonably
        safe to use it in other times too. However, it may break request
        signing, such as with OAuth.

        Supports an optional ``fields`` parameter of key/value strings AND
        key/filetuple. A filetuple is a (filename, data, MIME type) tuple where
        the MIME type is optional. For example::

            fields = {
                'foo': 'bar',
                'fakefile': ('foofile.txt', 'contents of foofile'),
                'realfile': ('barfile.txt', open('realfile').read()),
                'typedfile': ('bazfile.bin', open('bazfile').read(),
                              'image/jpeg'),
                'nonamefile': 'contents of nonamefile field',
            }

        When uploading a file, providing a filename (the first parameter of the
        tuple) is optional but recommended to best mimic behavior of browsers.

        Note that if ``headers`` are supplied, the 'Content-Type' header will
        be overwritten because it depends on the dynamic random boundary string
        which is used to compose the body of the request. The random boundary
        string can be explicitly set with the ``multipart_boundary`` parameter.
        :param method:
            HTTP request method (such as GET, POST, PUT, etc.)

        :param url:
            The URL to perform the request on.

        :param fields:
            Data to encode and send in the request body.

        :param headers:
            Dictionary of custom headers to send, such as User-Agent,
            If-None-Match, etc. If None, pool headers are used. If provided,
            these headers completely replace any pool-specific headers.

        :param encode_multipart:
            If True, encode the ``fields`` using the multipart/form-data MIME
            format.

        :param multipart_boundary:
            If not specified, then a random boundary will be generated using
            :func:`urllib3.filepost.choose_boundary`.
        """
        if headers is None:
            headers = self.headers

        extra_kw: dict[str, typing.Any] = {"headers": HTTPHeaderDict(headers)}
        body: bytes | str

        if fields:
            if "body" in urlopen_kw:
                raise TypeError(
                    "request got values for both 'fields' and 'body', can only specify one."
                )

            if encode_multipart:
                body, content_type = encode_multipart_formdata(
                    fields, boundary=multipart_boundary
                )
            else:
                body, content_type = (
                    urlencode(fields),  # type: ignore[arg-type]
                    "application/x-www-form-urlencoded",
                )

            extra_kw["body"] = body
            extra_kw["headers"].setdefault("Content-Type", content_type)

        extra_kw.update(urlopen_kw)

>       return self.urlopen(method, url, **extra_kw)

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/_request_methods.py:278:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
method = 'POST'
url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
redirect = True
kw = {'assert_same_host': False, 'body': '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata...', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'}), 'preload_content': True, ...}
u = Url(scheme='https', auth=None, host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', p...443, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices', query=None, fragment=None)
conn =

    def urlopen(  # type: ignore[override]
        self, method: str, url: str, redirect: bool = True, **kw: typing.Any
    ) -> BaseHTTPResponse:
        """
        Same as :meth:`urllib3.HTTPConnectionPool.urlopen`
        with custom cross-host redirect logic and only sends the request-uri
        portion of the ``url``.

        The given ``url`` parameter must be absolute, such that an appropriate
        :class:`urllib3.connectionpool.ConnectionPool` can be chosen for it.
        """
        u = parse_url(url)

        if u.scheme is None:
            warnings.warn(
                "URLs without a scheme (ie 'https://') are deprecated and will raise an error "
                "in a future version of urllib3. To avoid this DeprecationWarning ensure all URLs "
                "start with 'https://' or 'http://'. Read more in this issue: "
                "https://github.com/urllib3/urllib3/issues/2920",
                category=DeprecationWarning,
                stacklevel=2,
            )

        conn = self.connection_from_host(u.host, port=u.port, scheme=u.scheme)

        kw["assert_same_host"] = False
        kw["redirect"] = False

        if "headers" not in kw:
            kw["headers"] = self.headers

        if self._proxy_requires_url_absolute_form(u):
            response = conn.urlopen(method, url, **kw)
        else:
>           response = conn.urlopen(method, u.request_uri, **kw)

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/poolmanager.py:457:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
method = 'POST'
url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io... "memory": "128Mi"}}, "storage": {"key": "localS3", "parameters": {"bucket": "example-models"}, "path": "sklearn"}}}}}'
headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'})
retries = Retry(total=2, connect=None, read=None, redirect=None, status=None)
redirect = False, assert_same_host = False, timeout = None, pool_timeout = None
release_conn = True, chunked = False, body_pos = None, preload_content = True
decode_content = True, response_kw = {}
parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices', query=None, fragment=None)
destination_scheme = None, conn = None, release_this_conn = True
http_tunnel_required = False
err = NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.c...a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")
clean_exit = False

    def urlopen(  # type: ignore[override]
        self,
        method: str,
        url: str,
        body: _TYPE_BODY | None = None,
        headers: typing.Mapping[str, str] | None = None,
        retries: Retry | bool | int | None = None,
        redirect: bool = True,
        assert_same_host: bool = True,
        timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
        pool_timeout: int | None = None,
        release_conn: bool | None = None,
        chunked: bool = False,
        body_pos: _TYPE_BODY_POSITION | None = None,
        preload_content: bool = True,
        decode_content: bool = True,
        **response_kw: typing.Any,
    ) -> BaseHTTPResponse:
        """
        Get a connection from the pool and perform an HTTP request. This is the
        lowest level call for making a request, so you'll need to specify all
        the raw details.

        .. note::

            More commonly, it's appropriate to use a convenience method
            such as :meth:`request`.

        .. note::

            `release_conn` will only behave as expected if
            `preload_content=False` because we want to make
            `preload_content=False` the default behaviour someday soon without
            breaking backwards compatibility.

        :param method:
            HTTP request method (such as GET, POST, PUT, etc.)

        :param url:
            The URL to perform the request on.

        :param body:
            Data to send in the request body, either :class:`str`, :class:`bytes`,
            an iterable of :class:`str`/:class:`bytes`, or a file-like object.

        :param headers:
            Dictionary of custom headers to send, such as User-Agent,
            If-None-Match, etc. If None, pool headers are used. If provided,
            these headers completely replace any pool-specific headers.

        :param retries:
            Configure the number of retries to allow before raising a
            :class:`~urllib3.exceptions.MaxRetryError` exception.

            If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a
            :class:`~urllib3.util.retry.Retry` object for fine-grained control
            over different types of retries.
            Pass an integer number to retry connection errors that many times,
            but no other types of errors. Pass zero to never retry.

            If ``False``, then retries are disabled and any exception is raised
            immediately. Also, instead of raising a MaxRetryError on redirects,
            the redirect response will be returned.

        :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.

        :param redirect:
            If True, automatically handle redirects (status codes 301, 302,
            303, 307, 308). Each redirect counts as a retry. Disabling retries
            will disable redirect, too.

        :param assert_same_host:
            If ``True``, will make sure that the host of the pool requests is
            consistent else will raise HostChangedError. When ``False``, you can
            use the pool on an HTTP proxy and request foreign hosts.

        :param timeout:
            If specified, overrides the default timeout for this one
            request. It may be a float (in seconds) or an instance of
            :class:`urllib3.util.Timeout`.

        :param pool_timeout:
            If set and the pool is set to block=True, then this method will
            block for ``pool_timeout`` seconds and raise EmptyPoolError if no
            connection is available within the time period.

        :param bool preload_content:
            If True, the response's body will be preloaded into memory.

        :param bool decode_content:
            If True, will attempt to decode the body based on the
            'content-encoding' header.

        :param release_conn:
            If False, then the urlopen call will not release the connection
            back into the pool once a response is received (but will release if
            you read the entire contents of the response such as when
            `preload_content=True`). This is useful if you're not preloading
            the response's content immediately. You will need to call
            ``r.release_conn()`` on the response ``r`` to return the connection
            back into the pool. If None, it takes the value of ``preload_content``
            which defaults to ``True``.

        :param bool chunked:
            If True, urllib3 will send the body using chunked transfer
            encoding. Otherwise, urllib3 will send the body using the standard
            content-length form. Defaults to False.

        :param int body_pos:
            Position to seek to in file-like body in the event of a retry or
            redirect. Typically this won't need to be set because urllib3 will
            auto-populate the value when needed.
        """
        parsed_url = parse_url(url)
        destination_scheme = parsed_url.scheme

        if headers is None:
            headers = self.headers

        if not isinstance(retries, Retry):
            retries = Retry.from_int(retries, redirect=redirect, default=self.retries)

        if release_conn is None:
            release_conn = preload_content

        # Check host
        if assert_same_host and not self.is_same_host(url):
            raise HostChangedError(self, url, retries)

        # Ensure that the URL we're connecting to is properly encoded
        if url.startswith("/"):
            url = to_str(_encode_target(url))
        else:
            url = to_str(parsed_url.url)

        conn = None

        # Track whether `conn` needs to be released before
        # returning/raising/recursing. Update this variable if necessary, and
        # leave `release_conn` constant throughout the function. That way, if
        # the function recurses, the original value of `release_conn` will be
        # passed down into the recursive call, and its value will be respected.
        #
        # See issue #651 [1] for details.
        #
        # [1]
        release_this_conn = release_conn

        http_tunnel_required = connection_requires_http_tunnel(
            self.proxy, self.proxy_config, destination_scheme
        )

        # Merge the proxy headers. Only done when not using HTTP CONNECT. We
        # have to copy the headers dict so we can safely change it without those
        # changes being reflected in anyone else's copy.
        if not http_tunnel_required:
            headers = headers.copy()  # type: ignore[attr-defined]
            headers.update(self.proxy_headers)  # type: ignore[union-attr]

        # Must keep the exception bound to a separate variable or else Python 3
        # complains about UnboundLocalError.
        err = None

        # Keep track of whether we cleanly exited the except block. This
        # ensures we do proper cleanup in finally.
        clean_exit = False

        # Rewind body position, if needed. Record current position
        # for future rewinds in the event of a redirect/retry.
        body_pos = set_file_position(body, body_pos)

        try:
            # Request a connection from the queue.
            timeout_obj = self._get_timeout(timeout)
            conn = self._get_conn(timeout=pool_timeout)

            conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]

            # Is this a closed/new connection that requires CONNECT tunnelling?
            if self.proxy is not None and http_tunnel_required and conn.is_closed:
                try:
                    self._prepare_proxy(conn)
                except (BaseSSLError, OSError, SocketTimeout) as e:
                    self._raise_timeout(
                        err=e, url=self.proxy.url, timeout_value=conn.timeout
                    )
                    raise

            # If we're going to release the connection in ``finally:``, then
            # the response doesn't need to know about the connection. Otherwise
            # it will also try to release it and we'll have a double-release
            # mess.
            response_conn = conn if not release_conn else None

            # Make the request on the HTTPConnection object
            response = self._make_request(
                conn,
                method,
                url,
                timeout=timeout_obj,
                body=body,
                headers=headers,
                chunked=chunked,
                retries=retries,
                response_conn=response_conn,
                preload_content=preload_content,
                decode_content=decode_content,
                **response_kw,
            )

            # Everything went great!
            clean_exit = True

        except EmptyPoolError:
            # Didn't get a connection from the pool, no need to clean up
            clean_exit = True
            release_this_conn = False
            raise

        except (
            TimeoutError,
            HTTPException,
            OSError,
            ProtocolError,
            BaseSSLError,
            SSLError,
            CertificateError,
            ProxyError,
        ) as e:
            # Discard the connection for these exceptions. It will be
            # replaced during the next _get_conn() call.
            clean_exit = False
            new_e: Exception = e
            if isinstance(e, (BaseSSLError, CertificateError)):
                new_e = SSLError(e)
            if isinstance(
                new_e,
                (
                    OSError,
                    NewConnectionError,
                    TimeoutError,
                    SSLError,
                    HTTPException,
                ),
            ) and (conn and conn.proxy and not conn.has_connected_to_proxy):
                new_e = _wrap_proxy_error(new_e, conn.proxy.scheme)
            elif isinstance(new_e, (OSError, HTTPException)):
                new_e = ProtocolError("Connection aborted.", new_e)

            retries = retries.increment(
                method, url, error=new_e, _pool=self, _stacktrace=sys.exc_info()[2]
            )
            retries.sleep()

            # Keep track of the error for the retry warning.
            err = e

        finally:
            if not clean_exit:
                # We hit some kind of exception, handled or otherwise. We need
                # to throw the connection away unless explicitly told not to.
                # Close the connection, set the variable to None, and make sure
                # we put the None back in the pool to avoid leaking it.
                if conn:
                    conn.close()
                    conn = None
                release_this_conn = True

            if release_this_conn:
                # Put the connection back to be reused. If the connection is
                # expired then it will be None, which will get replaced with a
                # fresh connection during _get_conn.
[e2e-predictor] self._put_conn(conn) [e2e-predictor] [e2e-predictor] if not conn: [e2e-predictor] # Try again [e2e-predictor] log.warning( [e2e-predictor] "Retrying (%r) after connection broken by '%r': %s", retries, err, url [e2e-predictor] ) [e2e-predictor] > return self.urlopen( [e2e-predictor] method, [e2e-predictor] url, [e2e-predictor] body, [e2e-predictor] headers, [e2e-predictor] retries, [e2e-predictor] redirect, [e2e-predictor] assert_same_host, [e2e-predictor] timeout=timeout, [e2e-predictor] pool_timeout=pool_timeout, [e2e-predictor] release_conn=release_conn, [e2e-predictor] chunked=chunked, [e2e-predictor] body_pos=body_pos, [e2e-predictor] preload_content=preload_content, [e2e-predictor] decode_content=decode_content, [e2e-predictor] **response_kw, [e2e-predictor] ) [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:871: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] method = 'POST' [e2e-predictor] url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices' [e2e-predictor] body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io... 
"memory": "128Mi"}}, "storage": {"key": "localS3", "parameters": {"bucket": "example-models"}, "path": "sklearn"}}}}}' [e2e-predictor] headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'}) [e2e-predictor] retries = Retry(total=1, connect=None, read=None, redirect=None, status=None) [e2e-predictor] redirect = False, assert_same_host = False, timeout = None, pool_timeout = None [e2e-predictor] release_conn = True, chunked = False, body_pos = None, preload_content = True [e2e-predictor] decode_content = True, response_kw = {} [e2e-predictor] parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices', query=None, fragment=None) [e2e-predictor] destination_scheme = None, conn = None, release_this_conn = True [e2e-predictor] http_tunnel_required = False [e2e-predictor] err = NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.c...a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)") [e2e-predictor] clean_exit = False [e2e-predictor] [e2e-predictor] def urlopen( # type: ignore[override] [e2e-predictor] self, [e2e-predictor] method: str, [e2e-predictor] url: str, [e2e-predictor] body: _TYPE_BODY | None = None, [e2e-predictor] headers: typing.Mapping[str, str] | None = None, [e2e-predictor] retries: Retry | bool | int | None = None, [e2e-predictor] redirect: bool = True, [e2e-predictor] assert_same_host: bool = True, [e2e-predictor] timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, [e2e-predictor] pool_timeout: int | None = None, [e2e-predictor] release_conn: bool | None = None, [e2e-predictor] chunked: bool = False, [e2e-predictor] body_pos: _TYPE_BODY_POSITION | None = None, [e2e-predictor] preload_content: bool = True, [e2e-predictor] decode_content: bool = True, 
        **response_kw: typing.Any,
    ) -> BaseHTTPResponse:
        """
        Get a connection from the pool and perform an HTTP request. This is the
        lowest level call for making a request, so you'll need to specify all
        the raw details.

        .. note::

           More commonly, it's appropriate to use a convenience method
           such as :meth:`request`.

        .. note::

           `release_conn` will only behave as expected if
           `preload_content=False` because we want to make
           `preload_content=False` the default behaviour someday soon without
           breaking backwards compatibility.

        :param method:
            HTTP request method (such as GET, POST, PUT, etc.)

        :param url:
            The URL to perform the request on.

        :param body:
            Data to send in the request body, either :class:`str`, :class:`bytes`,
            an iterable of :class:`str`/:class:`bytes`, or a file-like object.

        :param headers:
            Dictionary of custom headers to send, such as User-Agent,
            If-None-Match, etc. If None, pool headers are used. If provided,
            these headers completely replace any pool-specific headers.

        :param retries:
            Configure the number of retries to allow before raising a
            :class:`~urllib3.exceptions.MaxRetryError` exception.

            If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a
            :class:`~urllib3.util.retry.Retry` object for fine-grained control
            over different types of retries.
            Pass an integer number to retry connection errors that many times,
            but no other types of errors. Pass zero to never retry.

            If ``False``, then retries are disabled and any exception is raised
            immediately. Also, instead of raising a MaxRetryError on redirects,
            the redirect response will be returned.

        :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.

        :param redirect:
            If True, automatically handle redirects (status codes 301, 302,
            303, 307, 308). Each redirect counts as a retry. Disabling retries
            will disable redirect, too.

        :param assert_same_host:
            If ``True``, will make sure that the host of the pool requests is
            consistent else will raise HostChangedError. When ``False``, you can
            use the pool on an HTTP proxy and request foreign hosts.

        :param timeout:
            If specified, overrides the default timeout for this one
            request. It may be a float (in seconds) or an instance of
            :class:`urllib3.util.Timeout`.

        :param pool_timeout:
            If set and the pool is set to block=True, then this method will
            block for ``pool_timeout`` seconds and raise EmptyPoolError if no
            connection is available within the time period.

        :param bool preload_content:
            If True, the response's body will be preloaded into memory.

        :param bool decode_content:
            If True, will attempt to decode the body based on the
            'content-encoding' header.

        :param release_conn:
            If False, then the urlopen call will not release the connection
            back into the pool once a response is received (but will release if
            you read the entire contents of the response such as when
            `preload_content=True`). This is useful if you're not preloading
            the response's content immediately. You will need to call
            ``r.release_conn()`` on the response ``r`` to return the connection
            back into the pool. If None, it takes the value of ``preload_content``
            which defaults to ``True``.

        :param bool chunked:
            If True, urllib3 will send the body using chunked transfer
            encoding. Otherwise, urllib3 will send the body using the standard
            content-length form. Defaults to False.

        :param int body_pos:
            Position to seek to in file-like body in the event of a retry or
            redirect. Typically this won't need to be set because urllib3 will
            auto-populate the value when needed.
        """
        parsed_url = parse_url(url)
        destination_scheme = parsed_url.scheme

        if headers is None:
            headers = self.headers

        if not isinstance(retries, Retry):
            retries = Retry.from_int(retries, redirect=redirect, default=self.retries)

        if release_conn is None:
            release_conn = preload_content

        # Check host
        if assert_same_host and not self.is_same_host(url):
            raise HostChangedError(self, url, retries)

        # Ensure that the URL we're connecting to is properly encoded
        if url.startswith("/"):
            url = to_str(_encode_target(url))
        else:
            url = to_str(parsed_url.url)

        conn = None

        # Track whether `conn` needs to be released before
        # returning/raising/recursing. Update this variable if necessary, and
        # leave `release_conn` constant throughout the function. That way, if
        # the function recurses, the original value of `release_conn` will be
        # passed down into the recursive call, and its value will be respected.
        #
        # See issue #651 [1] for details.
        #
        # [1]
        release_this_conn = release_conn

        http_tunnel_required = connection_requires_http_tunnel(
            self.proxy, self.proxy_config, destination_scheme
        )

        # Merge the proxy headers. Only done when not using HTTP CONNECT. We
        # have to copy the headers dict so we can safely change it without those
        # changes being reflected in anyone else's copy.
        if not http_tunnel_required:
            headers = headers.copy()  # type: ignore[attr-defined]
            headers.update(self.proxy_headers)  # type: ignore[union-attr]

        # Must keep the exception bound to a separate variable or else Python 3
        # complains about UnboundLocalError.
        err = None

        # Keep track of whether we cleanly exited the except block. This
        # ensures we do proper cleanup in finally.
        clean_exit = False

        # Rewind body position, if needed. Record current position
        # for future rewinds in the event of a redirect/retry.
        body_pos = set_file_position(body, body_pos)

        try:
            # Request a connection from the queue.
            timeout_obj = self._get_timeout(timeout)
            conn = self._get_conn(timeout=pool_timeout)

            conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]

            # Is this a closed/new connection that requires CONNECT tunnelling?
            if self.proxy is not None and http_tunnel_required and conn.is_closed:
                try:
                    self._prepare_proxy(conn)
                except (BaseSSLError, OSError, SocketTimeout) as e:
                    self._raise_timeout(
                        err=e, url=self.proxy.url, timeout_value=conn.timeout
                    )
                    raise

            # If we're going to release the connection in ``finally:``, then
            # the response doesn't need to know about the connection. Otherwise
            # it will also try to release it and we'll have a double-release
            # mess.
[e2e-predictor] response_conn = conn if not release_conn else None [e2e-predictor] [e2e-predictor] # Make the request on the HTTPConnection object [e2e-predictor] response = self._make_request( [e2e-predictor] conn, [e2e-predictor] method, [e2e-predictor] url, [e2e-predictor] timeout=timeout_obj, [e2e-predictor] body=body, [e2e-predictor] headers=headers, [e2e-predictor] chunked=chunked, [e2e-predictor] retries=retries, [e2e-predictor] response_conn=response_conn, [e2e-predictor] preload_content=preload_content, [e2e-predictor] decode_content=decode_content, [e2e-predictor] **response_kw, [e2e-predictor] ) [e2e-predictor] [e2e-predictor] # Everything went great! [e2e-predictor] clean_exit = True [e2e-predictor] [e2e-predictor] except EmptyPoolError: [e2e-predictor] # Didn't get a connection from the pool, no need to clean up [e2e-predictor] clean_exit = True [e2e-predictor] release_this_conn = False [e2e-predictor] raise [e2e-predictor] [e2e-predictor] except ( [e2e-predictor] TimeoutError, [e2e-predictor] HTTPException, [e2e-predictor] OSError, [e2e-predictor] ProtocolError, [e2e-predictor] BaseSSLError, [e2e-predictor] SSLError, [e2e-predictor] CertificateError, [e2e-predictor] ProxyError, [e2e-predictor] ) as e: [e2e-predictor] # Discard the connection for these exceptions. It will be [e2e-predictor] # replaced during the next _get_conn() call. 
[e2e-predictor] clean_exit = False [e2e-predictor] new_e: Exception = e [e2e-predictor] if isinstance(e, (BaseSSLError, CertificateError)): [e2e-predictor] new_e = SSLError(e) [e2e-predictor] if isinstance( [e2e-predictor] new_e, [e2e-predictor] ( [e2e-predictor] OSError, [e2e-predictor] NewConnectionError, [e2e-predictor] TimeoutError, [e2e-predictor] SSLError, [e2e-predictor] HTTPException, [e2e-predictor] ), [e2e-predictor] ) and (conn and conn.proxy and not conn.has_connected_to_proxy): [e2e-predictor] new_e = _wrap_proxy_error(new_e, conn.proxy.scheme) [e2e-predictor] elif isinstance(new_e, (OSError, HTTPException)): [e2e-predictor] new_e = ProtocolError("Connection aborted.", new_e) [e2e-predictor] [e2e-predictor] retries = retries.increment( [e2e-predictor] method, url, error=new_e, _pool=self, _stacktrace=sys.exc_info()[2] [e2e-predictor] ) [e2e-predictor] retries.sleep() [e2e-predictor] [e2e-predictor] # Keep track of the error for the retry warning. [e2e-predictor] err = e [e2e-predictor] [e2e-predictor] finally: [e2e-predictor] if not clean_exit: [e2e-predictor] # We hit some kind of exception, handled or otherwise. We need [e2e-predictor] # to throw the connection away unless explicitly told not to. [e2e-predictor] # Close the connection, set the variable to None, and make sure [e2e-predictor] # we put the None back in the pool to avoid leaking it. [e2e-predictor] if conn: [e2e-predictor] conn.close() [e2e-predictor] conn = None [e2e-predictor] release_this_conn = True [e2e-predictor] [e2e-predictor] if release_this_conn: [e2e-predictor] # Put the connection back to be reused. If the connection is [e2e-predictor] # expired then it will be None, which will get replaced with a [e2e-predictor] # fresh connection during _get_conn. 
[e2e-predictor] self._put_conn(conn) [e2e-predictor] [e2e-predictor] if not conn: [e2e-predictor] # Try again [e2e-predictor] log.warning( [e2e-predictor] "Retrying (%r) after connection broken by '%r': %s", retries, err, url [e2e-predictor] ) [e2e-predictor] > return self.urlopen( [e2e-predictor] method, [e2e-predictor] url, [e2e-predictor] body, [e2e-predictor] headers, [e2e-predictor] retries, [e2e-predictor] redirect, [e2e-predictor] assert_same_host, [e2e-predictor] timeout=timeout, [e2e-predictor] pool_timeout=pool_timeout, [e2e-predictor] release_conn=release_conn, [e2e-predictor] chunked=chunked, [e2e-predictor] body_pos=body_pos, [e2e-predictor] preload_content=preload_content, [e2e-predictor] decode_content=decode_content, [e2e-predictor] **response_kw, [e2e-predictor] ) [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:871: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] method = 'POST' [e2e-predictor] url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices' [e2e-predictor] body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io... 
"memory": "128Mi"}}, "storage": {"key": "localS3", "parameters": {"bucket": "example-models"}, "path": "sklearn"}}}}}' [e2e-predictor] headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'}) [e2e-predictor] retries = Retry(total=0, connect=None, read=None, redirect=None, status=None) [e2e-predictor] redirect = False, assert_same_host = False, timeout = None, pool_timeout = None [e2e-predictor] release_conn = True, chunked = False, body_pos = None, preload_content = True [e2e-predictor] decode_content = True, response_kw = {} [e2e-predictor] parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices', query=None, fragment=None) [e2e-predictor] destination_scheme = None, conn = None, release_this_conn = True [e2e-predictor] http_tunnel_required = False [e2e-predictor] err = NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.c...a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)") [e2e-predictor] clean_exit = False [e2e-predictor] [e2e-predictor] def urlopen( # type: ignore[override] [e2e-predictor] self, [e2e-predictor] method: str, [e2e-predictor] url: str, [e2e-predictor] body: _TYPE_BODY | None = None, [e2e-predictor] headers: typing.Mapping[str, str] | None = None, [e2e-predictor] retries: Retry | bool | int | None = None, [e2e-predictor] redirect: bool = True, [e2e-predictor] assert_same_host: bool = True, [e2e-predictor] timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, [e2e-predictor] pool_timeout: int | None = None, [e2e-predictor] release_conn: bool | None = None, [e2e-predictor] chunked: bool = False, [e2e-predictor] body_pos: _TYPE_BODY_POSITION | None = None, [e2e-predictor] preload_content: bool = True, [e2e-predictor] decode_content: bool = True, 
[e2e-predictor] **response_kw: typing.Any, [e2e-predictor] ) -> BaseHTTPResponse: [e2e-predictor] """ [e2e-predictor] Get a connection from the pool and perform an HTTP request. This is the [e2e-predictor] lowest level call for making a request, so you'll need to specify all [e2e-predictor] the raw details. [e2e-predictor] [e2e-predictor] .. note:: [e2e-predictor] [e2e-predictor] More commonly, it's appropriate to use a convenience method [e2e-predictor] such as :meth:`request`. [e2e-predictor] [e2e-predictor] .. note:: [e2e-predictor] [e2e-predictor] `release_conn` will only behave as expected if [e2e-predictor] `preload_content=False` because we want to make [e2e-predictor] `preload_content=False` the default behaviour someday soon without [e2e-predictor] breaking backwards compatibility. [e2e-predictor] [e2e-predictor] :param method: [e2e-predictor] HTTP request method (such as GET, POST, PUT, etc.) [e2e-predictor] [e2e-predictor] :param url: [e2e-predictor] The URL to perform the request on. [e2e-predictor] [e2e-predictor] :param body: [e2e-predictor] Data to send in the request body, either :class:`str`, :class:`bytes`, [e2e-predictor] an iterable of :class:`str`/:class:`bytes`, or a file-like object. [e2e-predictor] [e2e-predictor] :param headers: [e2e-predictor] Dictionary of custom headers to send, such as User-Agent, [e2e-predictor] If-None-Match, etc. If None, pool headers are used. If provided, [e2e-predictor] these headers completely replace any pool-specific headers. [e2e-predictor] [e2e-predictor] :param retries: [e2e-predictor] Configure the number of retries to allow before raising a [e2e-predictor] :class:`~urllib3.exceptions.MaxRetryError` exception. [e2e-predictor] [e2e-predictor] If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a [e2e-predictor] :class:`~urllib3.util.retry.Retry` object for fine-grained control [e2e-predictor] over different types of retries. 
[e2e-predictor] Pass an integer number to retry connection errors that many times, [e2e-predictor] but no other types of errors. Pass zero to never retry. [e2e-predictor] [e2e-predictor] If ``False``, then retries are disabled and any exception is raised [e2e-predictor] immediately. Also, instead of raising a MaxRetryError on redirects, [e2e-predictor] the redirect response will be returned. [e2e-predictor] [e2e-predictor] :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. [e2e-predictor] [e2e-predictor] :param redirect: [e2e-predictor] If True, automatically handle redirects (status codes 301, 302, [e2e-predictor] 303, 307, 308). Each redirect counts as a retry. Disabling retries [e2e-predictor] will disable redirect, too. [e2e-predictor] [e2e-predictor] :param assert_same_host: [e2e-predictor] If ``True``, will make sure that the host of the pool requests is [e2e-predictor] consistent else will raise HostChangedError. When ``False``, you can [e2e-predictor] use the pool on an HTTP proxy and request foreign hosts. [e2e-predictor] [e2e-predictor] :param timeout: [e2e-predictor] If specified, overrides the default timeout for this one [e2e-predictor] request. It may be a float (in seconds) or an instance of [e2e-predictor] :class:`urllib3.util.Timeout`. [e2e-predictor] [e2e-predictor] :param pool_timeout: [e2e-predictor] If set and the pool is set to block=True, then this method will [e2e-predictor] block for ``pool_timeout`` seconds and raise EmptyPoolError if no [e2e-predictor] connection is available within the time period. [e2e-predictor] [e2e-predictor] :param bool preload_content: [e2e-predictor] If True, the response's body will be preloaded into memory. [e2e-predictor] [e2e-predictor] :param bool decode_content: [e2e-predictor] If True, will attempt to decode the body based on the [e2e-predictor] 'content-encoding' header. 
[e2e-predictor] [e2e-predictor] :param release_conn: [e2e-predictor] If False, then the urlopen call will not release the connection [e2e-predictor] back into the pool once a response is received (but will release if [e2e-predictor] you read the entire contents of the response such as when [e2e-predictor] `preload_content=True`). This is useful if you're not preloading [e2e-predictor] the response's content immediately. You will need to call [e2e-predictor] ``r.release_conn()`` on the response ``r`` to return the connection [e2e-predictor] back into the pool. If None, it takes the value of ``preload_content`` [e2e-predictor] which defaults to ``True``. [e2e-predictor] [e2e-predictor] :param bool chunked: [e2e-predictor] If True, urllib3 will send the body using chunked transfer [e2e-predictor] encoding. Otherwise, urllib3 will send the body using the standard [e2e-predictor] content-length form. Defaults to False. [e2e-predictor] [e2e-predictor] :param int body_pos: [e2e-predictor] Position to seek to in file-like body in the event of a retry or [e2e-predictor] redirect. Typically this won't need to be set because urllib3 will [e2e-predictor] auto-populate the value when needed. 
[e2e-predictor] """ [e2e-predictor] parsed_url = parse_url(url) [e2e-predictor] destination_scheme = parsed_url.scheme [e2e-predictor] [e2e-predictor] if headers is None: [e2e-predictor] headers = self.headers [e2e-predictor] [e2e-predictor] if not isinstance(retries, Retry): [e2e-predictor] retries = Retry.from_int(retries, redirect=redirect, default=self.retries) [e2e-predictor] [e2e-predictor] if release_conn is None: [e2e-predictor] release_conn = preload_content [e2e-predictor] [e2e-predictor] # Check host [e2e-predictor] if assert_same_host and not self.is_same_host(url): [e2e-predictor] raise HostChangedError(self, url, retries) [e2e-predictor] [e2e-predictor] # Ensure that the URL we're connecting to is properly encoded [e2e-predictor] if url.startswith("/"): [e2e-predictor] url = to_str(_encode_target(url)) [e2e-predictor] else: [e2e-predictor] url = to_str(parsed_url.url) [e2e-predictor] [e2e-predictor] conn = None [e2e-predictor] [e2e-predictor] # Track whether `conn` needs to be released before [e2e-predictor] # returning/raising/recursing. Update this variable if necessary, and [e2e-predictor] # leave `release_conn` constant throughout the function. That way, if [e2e-predictor] # the function recurses, the original value of `release_conn` will be [e2e-predictor] # passed down into the recursive call, and its value will be respected. [e2e-predictor] # [e2e-predictor] # See issue #651 [1] for details. [e2e-predictor] # [e2e-predictor] # [1] [e2e-predictor] release_this_conn = release_conn [e2e-predictor] [e2e-predictor] http_tunnel_required = connection_requires_http_tunnel( [e2e-predictor] self.proxy, self.proxy_config, destination_scheme [e2e-predictor] ) [e2e-predictor] [e2e-predictor] # Merge the proxy headers. Only done when not using HTTP CONNECT. We [e2e-predictor] # have to copy the headers dict so we can safely change it without those [e2e-predictor] # changes being reflected in anyone else's copy. 
[e2e-predictor] if not http_tunnel_required: [e2e-predictor] headers = headers.copy() # type: ignore[attr-defined] [e2e-predictor] headers.update(self.proxy_headers) # type: ignore[union-attr] [e2e-predictor] [e2e-predictor] # Must keep the exception bound to a separate variable or else Python 3 [e2e-predictor] # complains about UnboundLocalError. [e2e-predictor] err = None [e2e-predictor] [e2e-predictor] # Keep track of whether we cleanly exited the except block. This [e2e-predictor] # ensures we do proper cleanup in finally. [e2e-predictor] clean_exit = False [e2e-predictor] [e2e-predictor] # Rewind body position, if needed. Record current position [e2e-predictor] # for future rewinds in the event of a redirect/retry. [e2e-predictor] body_pos = set_file_position(body, body_pos) [e2e-predictor] [e2e-predictor] try: [e2e-predictor] # Request a connection from the queue. [e2e-predictor] timeout_obj = self._get_timeout(timeout) [e2e-predictor] conn = self._get_conn(timeout=pool_timeout) [e2e-predictor] [e2e-predictor] conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] [e2e-predictor] [e2e-predictor] # Is this a closed/new connection that requires CONNECT tunnelling? [e2e-predictor] if self.proxy is not None and http_tunnel_required and conn.is_closed: [e2e-predictor] try: [e2e-predictor] self._prepare_proxy(conn) [e2e-predictor] except (BaseSSLError, OSError, SocketTimeout) as e: [e2e-predictor] self._raise_timeout( [e2e-predictor] err=e, url=self.proxy.url, timeout_value=conn.timeout [e2e-predictor] ) [e2e-predictor] raise [e2e-predictor] [e2e-predictor] # If we're going to release the connection in ``finally:``, then [e2e-predictor] # the response doesn't need to know about the connection. Otherwise [e2e-predictor] # it will also try to release it and we'll have a double-release [e2e-predictor] # mess. 
[e2e-predictor] response_conn = conn if not release_conn else None [e2e-predictor] [e2e-predictor] # Make the request on the HTTPConnection object [e2e-predictor] response = self._make_request( [e2e-predictor] conn, [e2e-predictor] method, [e2e-predictor] url, [e2e-predictor] timeout=timeout_obj, [e2e-predictor] body=body, [e2e-predictor] headers=headers, [e2e-predictor] chunked=chunked, [e2e-predictor] retries=retries, [e2e-predictor] response_conn=response_conn, [e2e-predictor] preload_content=preload_content, [e2e-predictor] decode_content=decode_content, [e2e-predictor] **response_kw, [e2e-predictor] ) [e2e-predictor] [e2e-predictor] # Everything went great! [e2e-predictor] clean_exit = True [e2e-predictor] [e2e-predictor] except EmptyPoolError: [e2e-predictor] # Didn't get a connection from the pool, no need to clean up [e2e-predictor] clean_exit = True [e2e-predictor] release_this_conn = False [e2e-predictor] raise [e2e-predictor] [e2e-predictor] except ( [e2e-predictor] TimeoutError, [e2e-predictor] HTTPException, [e2e-predictor] OSError, [e2e-predictor] ProtocolError, [e2e-predictor] BaseSSLError, [e2e-predictor] SSLError, [e2e-predictor] CertificateError, [e2e-predictor] ProxyError, [e2e-predictor] ) as e: [e2e-predictor] # Discard the connection for these exceptions. It will be [e2e-predictor] # replaced during the next _get_conn() call. 
[e2e-predictor] clean_exit = False [e2e-predictor] new_e: Exception = e [e2e-predictor] if isinstance(e, (BaseSSLError, CertificateError)): [e2e-predictor] new_e = SSLError(e) [e2e-predictor] if isinstance( [e2e-predictor] new_e, [e2e-predictor] ( [e2e-predictor] OSError, [e2e-predictor] NewConnectionError, [e2e-predictor] TimeoutError, [e2e-predictor] SSLError, [e2e-predictor] HTTPException, [e2e-predictor] ), [e2e-predictor] ) and (conn and conn.proxy and not conn.has_connected_to_proxy): [e2e-predictor] new_e = _wrap_proxy_error(new_e, conn.proxy.scheme) [e2e-predictor] elif isinstance(new_e, (OSError, HTTPException)): [e2e-predictor] new_e = ProtocolError("Connection aborted.", new_e) [e2e-predictor] [e2e-predictor] retries = retries.increment( [e2e-predictor] method, url, error=new_e, _pool=self, _stacktrace=sys.exc_info()[2] [e2e-predictor] ) [e2e-predictor] retries.sleep() [e2e-predictor] [e2e-predictor] # Keep track of the error for the retry warning. [e2e-predictor] err = e [e2e-predictor] [e2e-predictor] finally: [e2e-predictor] if not clean_exit: [e2e-predictor] # We hit some kind of exception, handled or otherwise. We need [e2e-predictor] # to throw the connection away unless explicitly told not to. [e2e-predictor] # Close the connection, set the variable to None, and make sure [e2e-predictor] # we put the None back in the pool to avoid leaking it. [e2e-predictor] if conn: [e2e-predictor] conn.close() [e2e-predictor] conn = None [e2e-predictor] release_this_conn = True [e2e-predictor] [e2e-predictor] if release_this_conn: [e2e-predictor] # Put the connection back to be reused. If the connection is [e2e-predictor] # expired then it will be None, which will get replaced with a [e2e-predictor] # fresh connection during _get_conn. 
            self._put_conn(conn)

        if not conn:
            # Try again
            log.warning(
                "Retrying (%r) after connection broken by '%r': %s", retries, err, url
            )
>           return self.urlopen(
                method,
                url,
                body,
                headers,
                retries,
                redirect,
                assert_same_host,
                timeout=timeout,
                pool_timeout=pool_timeout,
                release_conn=release_conn,
                chunked=chunked,
                body_pos=body_pos,
                preload_content=preload_content,
                decode_content=decode_content,
                **response_kw,
            )

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:871:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
method = 'POST'
url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
body = '{"apiVersion": "serving.kserve.io/v1beta1", "kind": "InferenceService", "metadata": {"labels": {"networking.kserve.io..."memory": "128Mi"}}, "storage": {"key": "localS3", "parameters": {"bucket": "example-models"}, "path": "sklearn"}}}}}'
headers = HTTPHeaderDict({'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python', 'Content-Type': 'application/json'})
retries = Retry(total=0, connect=None, read=None, redirect=None, status=None)
redirect = False, assert_same_host = False, timeout = None, pool_timeout = None
release_conn = True, chunked = False, body_pos = None, preload_content = True
decode_content = True, response_kw = {}
parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices', query=None, fragment=None)
destination_scheme = None, conn = None, release_this_conn = True
http_tunnel_required = False, err = None, clean_exit = False

    def urlopen(  # type: ignore[override]
        self,
        method: str,
        url: str,
        body: _TYPE_BODY | None = None,
        headers: typing.Mapping[str, str] | None = None,
        retries: Retry | bool | int | None = None,
        redirect: bool = True,
        assert_same_host: bool = True,
        timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
        pool_timeout: int | None = None,
        release_conn: bool | None = None,
        chunked: bool = False,
        body_pos: _TYPE_BODY_POSITION | None = None,
        preload_content: bool = True,
        decode_content: bool = True,
        **response_kw: typing.Any,
    ) -> BaseHTTPResponse:
        """
        Get a connection from the pool and perform an HTTP request. This is the
        lowest level call for making a request, so you'll need to specify all
        the raw details.

        .. note::

           More commonly, it's appropriate to use a convenience method
           such as :meth:`request`.

        .. note::

           `release_conn` will only behave as expected if
           `preload_content=False` because we want to make
           `preload_content=False` the default behaviour someday soon without
           breaking backwards compatibility.

        :param method:
            HTTP request method (such as GET, POST, PUT, etc.)

        :param url:
            The URL to perform the request on.

        :param body:
            Data to send in the request body, either :class:`str`, :class:`bytes`,
            an iterable of :class:`str`/:class:`bytes`, or a file-like object.

        :param headers:
            Dictionary of custom headers to send, such as User-Agent,
            If-None-Match, etc. If None, pool headers are used. If provided,
            these headers completely replace any pool-specific headers.

        :param retries:
            Configure the number of retries to allow before raising a
            :class:`~urllib3.exceptions.MaxRetryError` exception.

            If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a
            :class:`~urllib3.util.retry.Retry` object for fine-grained control
            over different types of retries.
            Pass an integer number to retry connection errors that many times,
            but no other types of errors. Pass zero to never retry.
            If ``False``, then retries are disabled and any exception is raised
            immediately. Also, instead of raising a MaxRetryError on redirects,
            the redirect response will be returned.

        :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.

        :param redirect:
            If True, automatically handle redirects (status codes 301, 302,
            303, 307, 308). Each redirect counts as a retry. Disabling retries
            will disable redirect, too.

        :param assert_same_host:
            If ``True``, will make sure that the host of the pool requests is
            consistent else will raise HostChangedError. When ``False``, you can
            use the pool on an HTTP proxy and request foreign hosts.

        :param timeout:
            If specified, overrides the default timeout for this one
            request. It may be a float (in seconds) or an instance of
            :class:`urllib3.util.Timeout`.

        :param pool_timeout:
            If set and the pool is set to block=True, then this method will
            block for ``pool_timeout`` seconds and raise EmptyPoolError if no
            connection is available within the time period.

        :param bool preload_content:
            If True, the response's body will be preloaded into memory.

        :param bool decode_content:
            If True, will attempt to decode the body based on the
            'content-encoding' header.
        :param release_conn:
            If False, then the urlopen call will not release the connection
            back into the pool once a response is received (but will release if
            you read the entire contents of the response such as when
            `preload_content=True`). This is useful if you're not preloading
            the response's content immediately. You will need to call
            ``r.release_conn()`` on the response ``r`` to return the connection
            back into the pool. If None, it takes the value of ``preload_content``
            which defaults to ``True``.

        :param bool chunked:
            If True, urllib3 will send the body using chunked transfer
            encoding. Otherwise, urllib3 will send the body using the standard
            content-length form. Defaults to False.

        :param int body_pos:
            Position to seek to in file-like body in the event of a retry or
            redirect. Typically this won't need to be set because urllib3 will
            auto-populate the value when needed.
        """
        parsed_url = parse_url(url)
        destination_scheme = parsed_url.scheme

        if headers is None:
            headers = self.headers

        if not isinstance(retries, Retry):
            retries = Retry.from_int(retries, redirect=redirect, default=self.retries)

        if release_conn is None:
            release_conn = preload_content

        # Check host
        if assert_same_host and not self.is_same_host(url):
            raise HostChangedError(self, url, retries)

        # Ensure that the URL we're connecting to is properly encoded
        if url.startswith("/"):
            url = to_str(_encode_target(url))
        else:
            url = to_str(parsed_url.url)

        conn = None

        # Track whether `conn` needs to be released before
        # returning/raising/recursing. Update this variable if necessary, and
        # leave `release_conn` constant throughout the function. That way, if
        # the function recurses, the original value of `release_conn` will be
        # passed down into the recursive call, and its value will be respected.
        #
        # See issue #651 [1] for details.
        #
        # [1]
        release_this_conn = release_conn

        http_tunnel_required = connection_requires_http_tunnel(
            self.proxy, self.proxy_config, destination_scheme
        )

        # Merge the proxy headers. Only done when not using HTTP CONNECT. We
        # have to copy the headers dict so we can safely change it without those
        # changes being reflected in anyone else's copy.
        if not http_tunnel_required:
            headers = headers.copy()  # type: ignore[attr-defined]
            headers.update(self.proxy_headers)  # type: ignore[union-attr]

        # Must keep the exception bound to a separate variable or else Python 3
        # complains about UnboundLocalError.
        err = None

        # Keep track of whether we cleanly exited the except block. This
        # ensures we do proper cleanup in finally.
        clean_exit = False

        # Rewind body position, if needed. Record current position
        # for future rewinds in the event of a redirect/retry.
        body_pos = set_file_position(body, body_pos)

        try:
            # Request a connection from the queue.
            timeout_obj = self._get_timeout(timeout)
            conn = self._get_conn(timeout=pool_timeout)

            conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]

            # Is this a closed/new connection that requires CONNECT tunnelling?
            if self.proxy is not None and http_tunnel_required and conn.is_closed:
                try:
                    self._prepare_proxy(conn)
                except (BaseSSLError, OSError, SocketTimeout) as e:
                    self._raise_timeout(
                        err=e, url=self.proxy.url, timeout_value=conn.timeout
                    )
                    raise

            # If we're going to release the connection in ``finally:``, then
            # the response doesn't need to know about the connection. Otherwise
            # it will also try to release it and we'll have a double-release
            # mess.
            response_conn = conn if not release_conn else None

            # Make the request on the HTTPConnection object
            response = self._make_request(
                conn,
                method,
                url,
                timeout=timeout_obj,
                body=body,
                headers=headers,
                chunked=chunked,
                retries=retries,
                response_conn=response_conn,
                preload_content=preload_content,
                decode_content=decode_content,
                **response_kw,
            )

            # Everything went great!
            clean_exit = True

        except EmptyPoolError:
            # Didn't get a connection from the pool, no need to clean up
            clean_exit = True
            release_this_conn = False
            raise

        except (
            TimeoutError,
            HTTPException,
            OSError,
            ProtocolError,
            BaseSSLError,
            SSLError,
            CertificateError,
            ProxyError,
        ) as e:
            # Discard the connection for these exceptions. It will be
            # replaced during the next _get_conn() call.
            clean_exit = False
            new_e: Exception = e
            if isinstance(e, (BaseSSLError, CertificateError)):
                new_e = SSLError(e)
            if isinstance(
                new_e,
                (
                    OSError,
                    NewConnectionError,
                    TimeoutError,
                    SSLError,
                    HTTPException,
                ),
            ) and (conn and conn.proxy and not conn.has_connected_to_proxy):
                new_e = _wrap_proxy_error(new_e, conn.proxy.scheme)
            elif isinstance(new_e, (OSError, HTTPException)):
                new_e = ProtocolError("Connection aborted.", new_e)

>           retries = retries.increment(
                method, url, error=new_e, _pool=self, _stacktrace=sys.exc_info()[2]
            )

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:841:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = Retry(total=0, connect=None, read=None, redirect=None, status=None)
method = 'POST'
url = '/apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices'
response = None
error = NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.c...a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")
_pool =
_stacktrace =

    def increment(
        self,
        method: str | None = None,
        url: str | None = None,
        response: BaseHTTPResponse | None = None,
        error: Exception | None = None,
        _pool: ConnectionPool | None = None,
        _stacktrace: TracebackType | None = None,
    ) -> Self:
        """Return a new Retry object with incremented retry counters.

        :param response: A response object, or None, if the server did not
            return a response.
        :type response: :class:`~urllib3.response.BaseHTTPResponse`
        :param Exception error: An error encountered during the request, or
            None if the response was received successfully.

        :return: A new ``Retry`` object.
        """
        if self.total is False and error:
            # Disabled, indicate to re-raise the error.
            raise reraise(type(error), error, _stacktrace)

        total = self.total
        if total is not None:
            total -= 1

        connect = self.connect
        read = self.read
        redirect = self.redirect
        status_count = self.status
        other = self.other
        cause = "unknown"
        status = None
        redirect_location = None

        if error and self._is_connection_error(error):
            # Connect retry?
            if connect is False:
                raise reraise(type(error), error, _stacktrace)
            elif connect is not None:
                connect -= 1

        elif error and self._is_read_error(error):
            # Read retry?
            if read is False or method is None or not self._is_method_retryable(method):
                raise reraise(type(error), error, _stacktrace)
            elif read is not None:
                read -= 1

        elif error:
            # Other retry?
            if other is not None:
                other -= 1

        elif response and response.get_redirect_location():
            # Redirect retry?
            if redirect is not None:
                redirect -= 1
            cause = "too many redirects"
            response_redirect_location = response.get_redirect_location()
            if response_redirect_location:
                redirect_location = response_redirect_location
            status = response.status

        else:
            # Incrementing because of a server error like a 500 in
            # status_forcelist and the given method is in the allowed_methods
            cause = ResponseError.GENERIC_ERROR
            if response and response.status:
                if status_count is not None:
                    status_count -= 1
                cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
                status = response.status

        history = self.history + (
            RequestHistory(method, url, error, status, redirect_location),
        )

        new_retry = self.new(
            total=total,
            connect=connect,
            read=read,
            redirect=redirect,
            status=status_count,
            other=other,
            history=history,
        )

        if new_retry.is_exhausted():
            reason = error or ResponseError(cause)
>           raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
E           urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Max retries exceeded with url: /apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices (Caused by NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)"))

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError
------------------------------ Captured log call -------------------------------
WARNING  urllib3.connectionpool:connectionpool.py:868 Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")': /apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices
WARNING  urllib3.connectionpool:connectionpool.py:868 Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")': /apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices
WARNING  urllib3.connectionpool:connectionpool.py:868 Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")': /apis/serving.kserve.io/v1beta1/namespaces/kserve-ci-e2e-test/inferenceservices
_________________ test_s3_tls_serving_cert_storagespec_kserve __________________
[gw0] linux -- Python 3.11.13 /workspace/source/python/kserve/.venv/bin/python

self =

    def _new_conn(self) -> socket.socket:
        """Establish a socket connection and set nodelay settings on it.

        :return: New socket connection.
        """
        try:
>           sock = connection.create_connection(
                (self._dns_host, self.port),
                self.timeout,
                source_address=self.source_address,
                socket_options=self.socket_options,
            )

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connection.py:204:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

address = ('a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', 6443)
timeout = None, source_address = None, socket_options = [(6, 1, 1)]

    def create_connection(
        address: tuple[str, int],
        timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
        source_address: tuple[str, int] | None = None,
        socket_options: _TYPE_SOCKET_OPTIONS | None = None,
    ) -> socket.socket:
        """Connect to *address* and return the socket object.

        Convenience function. Connect to *address* (a 2-tuple ``(host,
        port)``) and return the socket object. Passing the optional
        *timeout* parameter will set the timeout on the socket instance
        before attempting to connect. If no *timeout* is supplied, the
        global default timeout setting returned by :func:`socket.getdefaulttimeout`
        is used. If *source_address* is set it must be a tuple of (host, port)
        for the socket to bind as a source address before making the connection.
        An host of '' or port 0 tells the OS to use the default.
        """

        host, port = address
        if host.startswith("["):
            host = host.strip("[]")
        err = None

        # Using the value from allowed_gai_family() in the context of getaddrinfo lets
        # us select whether to work with IPv4 DNS records, IPv6 records, or both.
        # The original create_connection function always returns all records.
        family = allowed_gai_family()

        try:
            host.encode("idna")
        except UnicodeError:
            raise LocationParseError(f"'{host}', label empty or too long") from None

>       for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/util/connection.py:60:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

host = 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com'
port = 6443, family =
type = , proto = 0, flags = 0

    def getaddrinfo(host, port, family=0, type=0, proto=0, flags=0):
        """Resolve host and port into list of address info entries.

        Translate the host/port argument into a sequence of 5-tuples that contain
        all the necessary arguments for creating a socket connected to that service.
        host is a domain name, a string representation of an IPv4/v6 address or
        None. port is a string service name such as 'http', a numeric port number or
        None. By passing None as the value of host and port, you can pass NULL to
        the underlying C API.

        The family, type and proto arguments can be optionally specified in order to
        narrow the list of addresses returned. Passing zero as a value for each of
        these arguments selects the full range of results.
        """
        # We override this function since we want to translate the numeric family
        # and socket type values to enum constants.
        addrlist = []
>       for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
E       socket.gaierror: [Errno -2] Name or service not known

/usr/lib64/python3.11/socket.py:974: gaierror

The above exception was the direct cause of the following exception:

self =
method = 'PATCH'
url = '/api/v1/namespaces/kserve-ci-e2e-test/secrets/storage-config'
body = '{"data": {"localTLSS3Serving": "eyJ0eXBlIjogInMzIiwgImFjY2Vzc19rZXlfaWQiOiAiczNhZG1pbiIsICJzZWNyZXRfYWNjZXNzX2tleSI6I...XMtc291dGgiLCAiYW5vbnltb3VzIjogIkZhbHNlIiwgImNhYnVuZGxlX2NvbmZpZ21hcCI6ICJvZGgta3NlcnZlLWN1c3RvbS1jYS1idW5kbGUifQ=="}}'
headers = HTTPHeaderDict({'Accept': 'application/json', 'Content-Type': 'application/strategic-merge-patch+json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'})
retries = Retry(total=0, connect=None, read=None, redirect=None, status=None)
redirect = False, assert_same_host = False, timeout = None, pool_timeout = None
release_conn = True, chunked = False, body_pos = None, preload_content = True
decode_content = True, response_kw = {}
parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/api/v1/namespaces/kserve-ci-e2e-test/secrets/storage-config', query=None, fragment=None)
destination_scheme = None, conn = None, release_this_conn = True
http_tunnel_required = False, err = None, clean_exit = False

    def urlopen(  # type: ignore[override]
        self,
        method: str,
        url: str,
        body: _TYPE_BODY | None = None,
        headers: typing.Mapping[str, str] | None = None,
        retries: Retry | bool | int | None = None,
        redirect: bool = True,
        assert_same_host: bool = True,
        timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
        pool_timeout: int | None = None,
        release_conn: bool | None = None,
        chunked: bool = False,
        body_pos: _TYPE_BODY_POSITION | None = None,
        preload_content: bool = True,
        decode_content: bool = True,
        **response_kw: typing.Any,
    ) -> BaseHTTPResponse:
        """
        Get a connection from the pool and perform an HTTP request. This is the
        lowest level call for making a request, so you'll need to specify all
        the raw details.

        .. note::

           More commonly, it's appropriate to use a convenience method
           such as :meth:`request`.

        .. note::

           `release_conn` will only behave as expected if
           `preload_content=False` because we want to make
           `preload_content=False` the default behaviour someday soon without
           breaking backwards compatibility.
        :param method:
            HTTP request method (such as GET, POST, PUT, etc.)

        :param url:
            The URL to perform the request on.

        :param body:
            Data to send in the request body, either :class:`str`, :class:`bytes`,
            an iterable of :class:`str`/:class:`bytes`, or a file-like object.

        :param headers:
            Dictionary of custom headers to send, such as User-Agent,
            If-None-Match, etc. If None, pool headers are used. If provided,
            these headers completely replace any pool-specific headers.

        :param retries:
            Configure the number of retries to allow before raising a
            :class:`~urllib3.exceptions.MaxRetryError` exception.

            If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a
            :class:`~urllib3.util.retry.Retry` object for fine-grained control
            over different types of retries.
            Pass an integer number to retry connection errors that many times,
            but no other types of errors. Pass zero to never retry.

            If ``False``, then retries are disabled and any exception is raised
            immediately. Also, instead of raising a MaxRetryError on redirects,
            the redirect response will be returned.

        :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.

        :param redirect:
            If True, automatically handle redirects (status codes 301, 302,
            303, 307, 308). Each redirect counts as a retry. Disabling retries
            will disable redirect, too.
        :param assert_same_host:
            If ``True``, will make sure that the host of the pool requests is
            consistent else will raise HostChangedError. When ``False``, you can
            use the pool on an HTTP proxy and request foreign hosts.

        :param timeout:
            If specified, overrides the default timeout for this one
            request. It may be a float (in seconds) or an instance of
            :class:`urllib3.util.Timeout`.

        :param pool_timeout:
            If set and the pool is set to block=True, then this method will
            block for ``pool_timeout`` seconds and raise EmptyPoolError if no
            connection is available within the time period.

        :param bool preload_content:
            If True, the response's body will be preloaded into memory.

        :param bool decode_content:
            If True, will attempt to decode the body based on the
            'content-encoding' header.

        :param release_conn:
            If False, then the urlopen call will not release the connection
            back into the pool once a response is received (but will release if
            you read the entire contents of the response such as when
            `preload_content=True`). This is useful if you're not preloading
            the response's content immediately. You will need to call
            ``r.release_conn()`` on the response ``r`` to return the connection
            back into the pool. If None, it takes the value of ``preload_content``
            which defaults to ``True``.

        :param bool chunked:
            If True, urllib3 will send the body using chunked transfer
            encoding. Otherwise, urllib3 will send the body using the standard
            content-length form. Defaults to False.

        :param int body_pos:
            Position to seek to in file-like body in the event of a retry or
            redirect. Typically this won't need to be set because urllib3 will
            auto-populate the value when needed.
        """
        parsed_url = parse_url(url)
        destination_scheme = parsed_url.scheme

        if headers is None:
            headers = self.headers

        if not isinstance(retries, Retry):
            retries = Retry.from_int(retries, redirect=redirect, default=self.retries)

        if release_conn is None:
            release_conn = preload_content

        # Check host
        if assert_same_host and not self.is_same_host(url):
            raise HostChangedError(self, url, retries)

        # Ensure that the URL we're connecting to is properly encoded
        if url.startswith("/"):
            url = to_str(_encode_target(url))
        else:
            url = to_str(parsed_url.url)

        conn = None

        # Track whether `conn` needs to be released before
        # returning/raising/recursing. Update this variable if necessary, and
        # leave `release_conn` constant throughout the function. That way, if
        # the function recurses, the original value of `release_conn` will be
        # passed down into the recursive call, and its value will be respected.
        #
        # See issue #651 [1] for details.
        #
        # [1]
        release_this_conn = release_conn

        http_tunnel_required = connection_requires_http_tunnel(
            self.proxy, self.proxy_config, destination_scheme
        )

        # Merge the proxy headers. Only done when not using HTTP CONNECT. We
        # have to copy the headers dict so we can safely change it without those
        # changes being reflected in anyone else's copy.
        if not http_tunnel_required:
            headers = headers.copy()  # type: ignore[attr-defined]
            headers.update(self.proxy_headers)  # type: ignore[union-attr]

        # Must keep the exception bound to a separate variable or else Python 3
        # complains about UnboundLocalError.
        err = None

        # Keep track of whether we cleanly exited the except block. This
        # ensures we do proper cleanup in finally.
        clean_exit = False

        # Rewind body position, if needed. Record current position
        # for future rewinds in the event of a redirect/retry.
        body_pos = set_file_position(body, body_pos)

        try:
            # Request a connection from the queue.
            timeout_obj = self._get_timeout(timeout)
            conn = self._get_conn(timeout=pool_timeout)

            conn.timeout = timeout_obj.connect_timeout  # type: ignore[assignment]

            # Is this a closed/new connection that requires CONNECT tunnelling?
[e2e-predictor] if self.proxy is not None and http_tunnel_required and conn.is_closed: [e2e-predictor] try: [e2e-predictor] self._prepare_proxy(conn) [e2e-predictor] except (BaseSSLError, OSError, SocketTimeout) as e: [e2e-predictor] self._raise_timeout( [e2e-predictor] err=e, url=self.proxy.url, timeout_value=conn.timeout [e2e-predictor] ) [e2e-predictor] raise [e2e-predictor] [e2e-predictor] # If we're going to release the connection in ``finally:``, then [e2e-predictor] # the response doesn't need to know about the connection. Otherwise [e2e-predictor] # it will also try to release it and we'll have a double-release [e2e-predictor] # mess. [e2e-predictor] response_conn = conn if not release_conn else None [e2e-predictor] [e2e-predictor] # Make the request on the HTTPConnection object [e2e-predictor] > response = self._make_request( [e2e-predictor] conn, [e2e-predictor] method, [e2e-predictor] url, [e2e-predictor] timeout=timeout_obj, [e2e-predictor] body=body, [e2e-predictor] headers=headers, [e2e-predictor] chunked=chunked, [e2e-predictor] retries=retries, [e2e-predictor] response_conn=response_conn, [e2e-predictor] preload_content=preload_content, [e2e-predictor] decode_content=decode_content, [e2e-predictor] **response_kw, [e2e-predictor] ) [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:787: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] conn = [e2e-predictor] method = 'PATCH' [e2e-predictor] url = '/api/v1/namespaces/kserve-ci-e2e-test/secrets/storage-config' [e2e-predictor] body = '{"data": {"localTLSS3Serving": "eyJ0eXBlIjogInMzIiwgImFjY2Vzc19rZXlfaWQiOiAiczNhZG1pbiIsICJzZWNyZXRfYWNjZXNzX2tleSI6I...XMtc291dGgiLCAiYW5vbnltb3VzIjogIkZhbHNlIiwgImNhYnVuZGxlX2NvbmZpZ21hcCI6ICJvZGgta3NlcnZlLWN1c3RvbS1jYS1idW5kbGUifQ=="}}' [e2e-predictor] headers = HTTPHeaderDict({'Accept': 'application/json', 
'Content-Type': 'application/strategic-merge-patch+json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'})
retries = Retry(total=0, connect=None, read=None, redirect=None, status=None)
timeout = Timeout(connect=None, read=None, total=None), chunked = False
response_conn = None, preload_content = True, decode_content = True
enforce_content_length = True

    def _make_request(
        self,
        conn: BaseHTTPConnection,
        method: str,
        url: str,
        body: _TYPE_BODY | None = None,
        headers: typing.Mapping[str, str] | None = None,
        retries: Retry | None = None,
        timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
        chunked: bool = False,
        response_conn: BaseHTTPConnection | None = None,
        preload_content: bool = True,
        decode_content: bool = True,
        enforce_content_length: bool = True,
    ) -> BaseHTTPResponse:
        """
        Perform a request on a given urllib connection object taken from our
        pool.

        :param conn:
            a connection from one of our connection pools

        :param method:
            HTTP request method (such as GET, POST, PUT, etc.)

        :param url:
            The URL to perform the request on.

        :param body:
            Data to send in the request body, either :class:`str`, :class:`bytes`,
            an iterable of :class:`str`/:class:`bytes`, or a file-like object.

        :param headers:
            Dictionary of custom headers to send, such as User-Agent,
            If-None-Match, etc. If None, pool headers are used. If provided,
            these headers completely replace any pool-specific headers.

        :param retries:
            Configure the number of retries to allow before raising a
            :class:`~urllib3.exceptions.MaxRetryError` exception.

            Pass ``None`` to retry until you receive a response. Pass a
            :class:`~urllib3.util.retry.Retry` object for fine-grained control
            over different types of retries.
            Pass an integer number to retry connection errors that many times,
            but no other types of errors. Pass zero to never retry.

            If ``False``, then retries are disabled and any exception is raised
            immediately. Also, instead of raising a MaxRetryError on redirects,
            the redirect response will be returned.

        :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.

        :param timeout:
            If specified, overrides the default timeout for this one
            request. It may be a float (in seconds) or an instance of
            :class:`urllib3.util.Timeout`.

        :param chunked:
            If True, urllib3 will send the body using chunked transfer
            encoding. Otherwise, urllib3 will send the body using the standard
            content-length form. Defaults to False.

        :param response_conn:
            Set this to ``None`` if you will handle releasing the connection or
            set the connection to have the response release it.

        :param preload_content:
            If True, the response's body will be preloaded during construction.

        :param decode_content:
            If True, will attempt to decode the body based on the
            'content-encoding' header.

        :param enforce_content_length:
            Enforce content length checking. Body returned by server must match
            value of Content-Length header, if present. Otherwise, raise error.
        """
        self.num_requests += 1

        timeout_obj = self._get_timeout(timeout)
        timeout_obj.start_connect()
        conn.timeout = Timeout.resolve_default_timeout(timeout_obj.connect_timeout)

        try:
            # Trigger any extra validation we need to do.
            try:
                self._validate_conn(conn)
            except (SocketTimeout, BaseSSLError) as e:
                self._raise_timeout(err=e, url=url, timeout_value=conn.timeout)
                raise

            # _validate_conn() starts the connection to an HTTPS proxy
            # so we need to wrap errors with 'ProxyError' here too.
        except (
            OSError,
            NewConnectionError,
            TimeoutError,
            BaseSSLError,
            CertificateError,
            SSLError,
        ) as e:
            new_e: Exception = e
            if isinstance(e, (BaseSSLError, CertificateError)):
                new_e = SSLError(e)
            # If the connection didn't successfully connect to it's proxy
            # then there
            if isinstance(
                new_e, (OSError, NewConnectionError, TimeoutError, SSLError)
            ) and (conn and conn.proxy and not conn.has_connected_to_proxy):
                new_e = _wrap_proxy_error(new_e, conn.proxy.scheme)
>           raise new_e

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:488:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
conn =
method = 'PATCH'
url = '/api/v1/namespaces/kserve-ci-e2e-test/secrets/storage-config'
body = '{"data": {"localTLSS3Serving": "eyJ0eXBlIjogInMzIiwgImFjY2Vzc19rZXlfaWQiOiAiczNhZG1pbiIsICJzZWNyZXRfYWNjZXNzX2tleSI6I...XMtc291dGgiLCAiYW5vbnltb3VzIjogIkZhbHNlIiwgImNhYnVuZGxlX2NvbmZpZ21hcCI6ICJvZGgta3NlcnZlLWN1c3RvbS1jYS1idW5kbGUifQ=="}}'
headers = HTTPHeaderDict({'Accept': 'application/json', 'Content-Type': 'application/strategic-merge-patch+json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'})
retries = Retry(total=0, connect=None, read=None, redirect=None, status=None)
timeout = Timeout(connect=None, read=None, total=None), chunked = False
response_conn = None, preload_content = True, decode_content = True
enforce_content_length = True

    def _make_request(
        self,
        conn: BaseHTTPConnection,
        method: str,
        url: str,
        body: _TYPE_BODY | None = None,
        headers: typing.Mapping[str, str] | None = None,
        retries: Retry | None = None,
        timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
        chunked: bool = False,
        response_conn: BaseHTTPConnection | None = None,
        preload_content: bool = True,
        decode_content: bool = True,
        enforce_content_length: bool = True,
    ) -> BaseHTTPResponse:
        """
        Perform a request on a given urllib connection object taken from our
        pool.

        :param conn:
            a connection from one of our connection pools

        :param method:
            HTTP request method (such as GET, POST, PUT, etc.)

        :param url:
            The URL to perform the request on.

        :param body:
            Data to send in the request body, either :class:`str`, :class:`bytes`,
            an iterable of :class:`str`/:class:`bytes`, or a file-like object.

        :param headers:
            Dictionary of custom headers to send, such as User-Agent,
            If-None-Match, etc. If None, pool headers are used. If provided,
            these headers completely replace any pool-specific headers.

        :param retries:
            Configure the number of retries to allow before raising a
            :class:`~urllib3.exceptions.MaxRetryError` exception.

            Pass ``None`` to retry until you receive a response. Pass a
            :class:`~urllib3.util.retry.Retry` object for fine-grained control
            over different types of retries.
            Pass an integer number to retry connection errors that many times,
            but no other types of errors. Pass zero to never retry.

            If ``False``, then retries are disabled and any exception is raised
            immediately. Also, instead of raising a MaxRetryError on redirects,
            the redirect response will be returned.

        :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.

        :param timeout:
            If specified, overrides the default timeout for this one
            request. It may be a float (in seconds) or an instance of
            :class:`urllib3.util.Timeout`.

        :param chunked:
            If True, urllib3 will send the body using chunked transfer
            encoding. Otherwise, urllib3 will send the body using the standard
            content-length form. Defaults to False.

        :param response_conn:
            Set this to ``None`` if you will handle releasing the connection or
            set the connection to have the response release it.

        :param preload_content:
            If True, the response's body will be preloaded during construction.

        :param decode_content:
            If True, will attempt to decode the body based on the
            'content-encoding' header.

        :param enforce_content_length:
            Enforce content length checking. Body returned by server must match
            value of Content-Length header, if present. Otherwise, raise error.
        """
        self.num_requests += 1

        timeout_obj = self._get_timeout(timeout)
        timeout_obj.start_connect()
        conn.timeout = Timeout.resolve_default_timeout(timeout_obj.connect_timeout)

        try:
            # Trigger any extra validation we need to do.
            try:
>               self._validate_conn(conn)

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:464:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
conn =

    def _validate_conn(self, conn: BaseHTTPConnection) -> None:
        """
        Called right before a request is made, after the socket is created.
        """
        super()._validate_conn(conn)

        # Force connect early to allow us to validate the connection.
        if conn.is_closed:
>           conn.connect()

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:1093:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =

    def connect(self) -> None:
        # Today we don't need to be doing this step before the /actual/ socket
        # connection, however in the future we'll need to decide whether to
        # create a new socket or re-use an existing "shared" socket as a part
        # of the HTTP/2 handshake dance.
        if self._tunnel_host is not None and self._tunnel_port is not None:
            probe_http2_host = self._tunnel_host
            probe_http2_port = self._tunnel_port
        else:
            probe_http2_host = self.host
            probe_http2_port = self.port

        # Check if the target origin supports HTTP/2.
        # If the value comes back as 'None' it means that the current thread
        # is probing for HTTP/2 support. Otherwise, we're waiting for another
        # probe to complete, or we get a value right away.
        target_supports_http2: bool | None
        if "h2" in ssl_.ALPN_PROTOCOLS:
            target_supports_http2 = http2_probe.acquire_and_get(
                host=probe_http2_host, port=probe_http2_port
            )
        else:
            # If HTTP/2 isn't going to be offered it doesn't matter if
            # the target supports HTTP/2. Don't want to make a probe.
            target_supports_http2 = False

        if self._connect_callback is not None:
            self._connect_callback(
                "before connect",
                thread_id=threading.get_ident(),
                target_supports_http2=target_supports_http2,
            )

        try:
            sock: socket.socket | ssl.SSLSocket
>           self.sock = sock = self._new_conn()

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connection.py:759:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =

    def _new_conn(self) -> socket.socket:
        """Establish a socket connection and set nodelay settings on it.

        :return: New socket connection.
        """
        try:
            sock = connection.create_connection(
                (self._dns_host, self.port),
                self.timeout,
                source_address=self.source_address,
                socket_options=self.socket_options,
            )
        except socket.gaierror as e:
>           raise NameResolutionError(self.host, self, e) from e
E           urllib3.exceptions.NameResolutionError: HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connection.py:211: NameResolutionError

The above exception was the direct cause of the following exception:

kserve_client =

    @pytest.mark.kserve_on_openshift
    def test_s3_tls_serving_cert_storagespec_kserve(kserve_client):
        # Validate that the model is successfully loaded when the serving cert is valid
        pass_storage_config = create_storage_config_json(
            "seaweedfs-tls-serving-service", "odh-kserve-custom-ca-bundle"
        )
        pass_service_name = "isvc-sklearn-s3-tls-serving-pass"
        pass_isvc = create_isvc_resource(pass_service_name, storage_key="localTLSS3Serving")
>       with managed_storage_config_key(
            kserve_client, "localTLSS3Serving", pass_storage_config
        ):

storagespec/test_s3_tls_storagespec.py:286:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =

    def __enter__(self):
        # do not keep args and kwds alive unnecessarily
        # they are only needed for recreation, which is not possible anymore
        del self.args, self.kwds, self.func
        try:
>           return next(self.gen)

/usr/lib64/python3.11/contextlib.py:137:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

kserve_client =
storage_key = 'localTLSS3Serving'
storage_config = {'access_key_id': 's3admin', 'anonymous': 'False', 'bucket': 'mlpipeline', 'cabundle_configmap': 'odh-kserve-custom-ca-bundle', ...}
namespace = 'kserve-ci-e2e-test'

    @contextmanager
    def managed_storage_config_key(
        kserve_client: KServeClient,
        storage_key: str,
        storage_config: dict[str, Any],
        namespace: str = KSERVE_TEST_NAMESPACE,
    ):
        secret_name = "storage-config"
        encoded_value = b64encode(json.dumps(storage_config).encode()).decode()
        # Patch to ADD the key (preserves other keys)
>       kserve_client.core_api.patch_namespaced_secret(
            secret_name,
            namespace=namespace,
            body={"data": {storage_key: encoded_value}},
        )

storagespec/test_s3_tls_storagespec.py:128:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
name = 'storage-config', namespace = 'kserve-ci-e2e-test'
body = {'data': {'localTLSS3Serving': 'eyJ0eXBlIjogInMzIiwgImFjY2Vzc19rZXlfaWQiOiAiczNhZG1pbiIsICJzZWNyZXRfYWNjZXNzX2tleSI6IC...dXMtc291dGgiLCAiYW5vbnltb3VzIjogIkZhbHNlIiwgImNhYnVuZGxlX2NvbmZpZ21hcCI6ICJvZGgta3NlcnZlLWN1c3RvbS1jYS1idW5kbGUifQ=='}}
kwargs = {'_return_http_data_only': True}

    def patch_namespaced_secret(self, name, namespace, body, **kwargs):  # noqa: E501
        """patch_namespaced_secret  # noqa: E501

        partially update the specified Secret  # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.patch_namespaced_secret(name, namespace, body, async_req=True)
        >>> result = thread.get()

        :param async_req bool: execute request asynchronously
        :param str name: name of the Secret (required)
        :param str namespace: object name and auth scope, such as for teams and projects (required)
        :param object body: (required)
        :param str pretty: If 'true', then the output is pretty printed. Defaults to 'false' unless the user-agent indicates a browser or command-line HTTP tool (curl and wget).
        :param str dry_run: When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed
        :param str field_manager: fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint. This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch).
        :param str field_validation: fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered.
        :param bool force: Force is going to \"force\" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests.
        :param _preload_content: if False, the urllib3.HTTPResponse object will
                                 be returned without reading/decoding response
                                 data. Default is True.
        :param _request_timeout: timeout setting for this request. If one
                                 number provided, it will be total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
        :return: V1Secret
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
>       return self.patch_namespaced_secret_with_http_info(name, namespace, body, **kwargs)  # noqa: E501

../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api/core_v1_api.py:21627:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
name = 'storage-config', namespace = 'kserve-ci-e2e-test'
body = {'data': {'localTLSS3Serving': 'eyJ0eXBlIjogInMzIiwgImFjY2Vzc19rZXlfaWQiOiAiczNhZG1pbiIsICJzZWNyZXRfYWNjZXNzX2tleSI6IC...dXMtc291dGgiLCAiYW5vbnltb3VzIjogIkZhbHNlIiwgImNhYnVuZGxlX2NvbmZpZ21hcCI6ICJvZGgta3NlcnZlLWN1c3RvbS1jYS1idW5kbGUifQ=='}}
kwargs = {'_return_http_data_only': True}
local_var_params = {'_return_http_data_only': True, 'all_params': ['name', 'namespace', 'body', 'pretty', 'dry_run', 'field_manager', ......91dGgiLCAiYW5vbnltb3VzIjogIkZhbHNlIiwgImNhYnVuZGxlX2NvbmZpZ21hcCI6ICJvZGgta3NlcnZlLWN1c3RvbS1jYS1idW5kbGUifQ=='}}, ...}
all_params = ['name', 'namespace', 'body', 'pretty', 'dry_run', 'field_manager', ...]
key = '_return_http_data_only', val = True, collection_formats = {}
path_params = {'name': 'storage-config', 'namespace': 'kserve-ci-e2e-test'}
query_params = []

    def patch_namespaced_secret_with_http_info(self, name, namespace, body, **kwargs):  # noqa: E501
        """patch_namespaced_secret  # noqa: E501

        partially update the specified Secret  # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.patch_namespaced_secret_with_http_info(name, namespace, body, async_req=True)
        >>> result = thread.get()

        :param async_req bool: execute request asynchronously
        :param str name: name of the Secret (required)
        :param str namespace: object name and auth scope, such as for teams and projects (required)
        :param object body: (required)
        :param str pretty: If 'true', then the output is pretty printed. Defaults to 'false' unless the user-agent indicates a browser or command-line HTTP tool (curl and wget).
        :param str dry_run: When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed
        :param str field_manager: fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint. This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch).
        :param str field_validation: fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered.
        :param bool force: Force is going to \"force\" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests.
        :param _return_http_data_only: response data without head status code
                                       and headers
        :param _preload_content: if False, the urllib3.HTTPResponse object will
                                 be returned without reading/decoding response
                                 data. Default is True.
        :param _request_timeout: timeout setting for this request. If one
                                 number provided, it will be total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
        :return: tuple(V1Secret, status_code(int), headers(HTTPHeaderDict))
                 If the method is called asynchronously,
                 returns the request thread.
        """

        local_var_params = locals()

        all_params = [
            'name',
            'namespace',
            'body',
            'pretty',
            'dry_run',
            'field_manager',
            'field_validation',
            'force'
        ]
        all_params.extend(
            [
                'async_req',
                '_return_http_data_only',
                '_preload_content',
                '_request_timeout'
            ]
        )

        for key, val in six.iteritems(local_var_params['kwargs']):
            if key not in all_params:
                raise ApiTypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method patch_namespaced_secret" % key
                )
            local_var_params[key] = val
        del local_var_params['kwargs']
        # verify the required parameter 'name' is set
        if self.api_client.client_side_validation and ('name' not in local_var_params or  # noqa: E501
                                                        local_var_params['name'] is None):  # noqa: E501
            raise ApiValueError("Missing the required parameter `name` when calling `patch_namespaced_secret`")  # noqa: E501
        # verify the required parameter 'namespace' is set
        if self.api_client.client_side_validation and ('namespace' not in local_var_params or  # noqa: E501
                                                        local_var_params['namespace'] is None):  # noqa: E501
            raise ApiValueError("Missing the required parameter `namespace` when calling `patch_namespaced_secret`")  # noqa: E501
        # verify the required parameter 'body' is set
        if self.api_client.client_side_validation and ('body' not in local_var_params or  # noqa: E501
                                                        local_var_params['body'] is None):  # noqa: E501
            raise ApiValueError("Missing the required parameter `body` when calling `patch_namespaced_secret`")  # noqa: E501

        collection_formats = {}

        path_params = {}
        if 'name' in local_var_params:
            path_params['name'] = local_var_params['name']  # noqa: E501
        if 'namespace' in local_var_params:
            path_params['namespace'] = local_var_params['namespace']  # noqa: E501

        query_params = []
        if 'pretty' in local_var_params and local_var_params['pretty'] is not None:  # noqa: E501
            query_params.append(('pretty', local_var_params['pretty']))  # noqa: E501
        if 'dry_run' in local_var_params and local_var_params['dry_run'] is not None:  # noqa: E501
            query_params.append(('dryRun', local_var_params['dry_run']))  # noqa: E501
        if 'field_manager' in local_var_params and local_var_params['field_manager'] is not None:  # noqa: E501
            query_params.append(('fieldManager', local_var_params['field_manager']))  # noqa: E501
        if 'field_validation' in local_var_params and local_var_params['field_validation'] is not None:  # noqa: E501
            query_params.append(('fieldValidation', local_var_params['field_validation']))  # noqa: E501
        if 'force' in local_var_params and local_var_params['force'] is not None:  # noqa: E501
            query_params.append(('force', local_var_params['force']))  # noqa: E501

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None
        if 'body' in local_var_params:
            body_params = local_var_params['body']
        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.select_header_accept(
            ['application/json', 'application/yaml', 'application/vnd.kubernetes.protobuf', 'application/cbor'])  # noqa: E501

        # HTTP header `Content-Type`
        header_params['Content-Type'] = self.api_client.select_header_content_type(  # noqa: E501
            ['application/json-patch+json', 'application/merge-patch+json', 'application/strategic-merge-patch+json', 'application/apply-patch+yaml', 'application/apply-patch+cbor'])  # noqa: E501

        # Authentication setting
        auth_settings = ['BearerToken']  # noqa: E501

>       return self.api_client.call_api(
            '/api/v1/namespaces/{namespace}/secrets/{name}', 'PATCH',
            path_params,
            query_params,
            header_params,
            body=body_params,
            post_params=form_params,
            files=local_var_files,
            response_type='V1Secret',  # noqa: E501
            auth_settings=auth_settings,
            async_req=local_var_params.get('async_req'),
            _return_http_data_only=local_var_params.get('_return_http_data_only'),  # noqa: E501
            _preload_content=local_var_params.get('_preload_content', True),
            _request_timeout=local_var_params.get('_request_timeout'),
            collection_formats=collection_formats)

../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api/core_v1_api.py:21742:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
resource_path = '/api/v1/namespaces/{namespace}/secrets/{name}'
method = 'PATCH'
path_params = {'name': 'storage-config', 'namespace': 'kserve-ci-e2e-test'}
query_params = []
header_params = {'Accept': 'application/json',
'Content-Type': 'application/json-patch+json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'} [e2e-predictor] body = {'data': {'localTLSS3Serving': 'eyJ0eXBlIjogInMzIiwgImFjY2Vzc19rZXlfaWQiOiAiczNhZG1pbiIsICJzZWNyZXRfYWNjZXNzX2tleSI6IC...dXMtc291dGgiLCAiYW5vbnltb3VzIjogIkZhbHNlIiwgImNhYnVuZGxlX2NvbmZpZ21hcCI6ICJvZGgta3NlcnZlLWN1c3RvbS1jYS1idW5kbGUifQ=='}} [e2e-predictor] post_params = [], files = {}, response_type = 'V1Secret' [e2e-predictor] auth_settings = ['BearerToken'], async_req = None, _return_http_data_only = True [e2e-predictor] collection_formats = {}, _preload_content = True, _request_timeout = None [e2e-predictor] _host = None [e2e-predictor] [e2e-predictor] def call_api(self, resource_path, method, [e2e-predictor] path_params=None, query_params=None, header_params=None, [e2e-predictor] body=None, post_params=None, files=None, [e2e-predictor] response_type=None, auth_settings=None, async_req=None, [e2e-predictor] _return_http_data_only=None, collection_formats=None, [e2e-predictor] _preload_content=True, _request_timeout=None, _host=None): [e2e-predictor] """Makes the HTTP request (synchronous) and returns deserialized data. [e2e-predictor] [e2e-predictor] To make an async_req request, set the async_req parameter. [e2e-predictor] [e2e-predictor] :param resource_path: Path to method endpoint. [e2e-predictor] :param method: Method to call. [e2e-predictor] :param path_params: Path parameters in the url. [e2e-predictor] :param query_params: Query parameters in the url. [e2e-predictor] :param header_params: Header parameters to be [e2e-predictor] placed in the request header. [e2e-predictor] :param body: Request body. [e2e-predictor] :param post_params dict: Request post form parameters, [e2e-predictor] for `application/x-www-form-urlencoded`, `multipart/form-data`. [e2e-predictor] :param auth_settings list: Auth Settings names for the request. [e2e-predictor] :param response: Response data type. 
[e2e-predictor] :param files dict: key -> filename, value -> filepath, [e2e-predictor] for `multipart/form-data`. [e2e-predictor] :param async_req bool: execute request asynchronously [e2e-predictor] :param _return_http_data_only: response data without head status code [e2e-predictor] and headers [e2e-predictor] :param collection_formats: dict of collection formats for path, query, [e2e-predictor] header, and post parameters. [e2e-predictor] :param _preload_content: if False, the urllib3.HTTPResponse object will [e2e-predictor] be returned without reading/decoding response [e2e-predictor] data. Default is True. [e2e-predictor] :param _request_timeout: timeout setting for this request. If one [e2e-predictor] number provided, it will be total request [e2e-predictor] timeout. It can also be a pair (tuple) of [e2e-predictor] (connection, read) timeouts. [e2e-predictor] :return: [e2e-predictor] If async_req parameter is True, [e2e-predictor] the request will be called asynchronously. [e2e-predictor] The method will return the request thread. [e2e-predictor] If parameter async_req is False or missing, [e2e-predictor] then the method will return the response directly. 
[e2e-predictor] """ [e2e-predictor] if not async_req: [e2e-predictor] > return self.__call_api(resource_path, method, [e2e-predictor] path_params, query_params, header_params, [e2e-predictor] body, post_params, files, [e2e-predictor] response_type, auth_settings, [e2e-predictor] _return_http_data_only, collection_formats, [e2e-predictor] _preload_content, _request_timeout, _host) [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api_client.py:348: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] resource_path = '/api/v1/namespaces/kserve-ci-e2e-test/secrets/storage-config' [e2e-predictor] method = 'PATCH' [e2e-predictor] path_params = [('name', 'storage-config'), ('namespace', 'kserve-ci-e2e-test')] [e2e-predictor] query_params = [] [e2e-predictor] header_params = {'Accept': 'application/json', 'Content-Type': 'application/strategic-merge-patch+json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'} [e2e-predictor] body = {'data': {'localTLSS3Serving': 'eyJ0eXBlIjogInMzIiwgImFjY2Vzc19rZXlfaWQiOiAiczNhZG1pbiIsICJzZWNyZXRfYWNjZXNzX2tleSI6IC...dXMtc291dGgiLCAiYW5vbnltb3VzIjogIkZhbHNlIiwgImNhYnVuZGxlX2NvbmZpZ21hcCI6ICJvZGgta3NlcnZlLWN1c3RvbS1jYS1idW5kbGUifQ=='}} [e2e-predictor] post_params = [], files = {}, response_type = 'V1Secret' [e2e-predictor] auth_settings = ['BearerToken'], _return_http_data_only = True [e2e-predictor] collection_formats = {}, _preload_content = True, _request_timeout = None [e2e-predictor] _host = None [e2e-predictor] [e2e-predictor] def __call_api( [e2e-predictor] self, resource_path, method, path_params=None, [e2e-predictor] query_params=None, header_params=None, body=None, post_params=None, [e2e-predictor] files=None, response_type=None, auth_settings=None, [e2e-predictor] _return_http_data_only=None, collection_formats=None, [e2e-predictor] _preload_content=True, 
_request_timeout=None, _host=None): [e2e-predictor] [e2e-predictor] config = self.configuration [e2e-predictor] [e2e-predictor] # header parameters [e2e-predictor] header_params = header_params or {} [e2e-predictor] header_params.update(self.default_headers) [e2e-predictor] if self.cookie: [e2e-predictor] header_params['Cookie'] = self.cookie [e2e-predictor] if header_params: [e2e-predictor] header_params = self.sanitize_for_serialization(header_params) [e2e-predictor] header_params = dict(self.parameters_to_tuples(header_params, [e2e-predictor] collection_formats)) [e2e-predictor] [e2e-predictor] # path parameters [e2e-predictor] if path_params: [e2e-predictor] path_params = self.sanitize_for_serialization(path_params) [e2e-predictor] path_params = self.parameters_to_tuples(path_params, [e2e-predictor] collection_formats) [e2e-predictor] for k, v in path_params: [e2e-predictor] # specified safe chars, encode everything [e2e-predictor] resource_path = resource_path.replace( [e2e-predictor] '{%s}' % k, [e2e-predictor] quote(str(v), safe=config.safe_chars_for_path_param) [e2e-predictor] ) [e2e-predictor] [e2e-predictor] # query parameters [e2e-predictor] if query_params: [e2e-predictor] query_params = self.sanitize_for_serialization(query_params) [e2e-predictor] query_params = self.parameters_to_tuples(query_params, [e2e-predictor] collection_formats) [e2e-predictor] [e2e-predictor] # post parameters [e2e-predictor] if post_params or files: [e2e-predictor] post_params = post_params if post_params else [] [e2e-predictor] post_params = self.sanitize_for_serialization(post_params) [e2e-predictor] post_params = self.parameters_to_tuples(post_params, [e2e-predictor] collection_formats) [e2e-predictor] post_params.extend(self.files_parameters(files)) [e2e-predictor] [e2e-predictor] # auth setting [e2e-predictor] self.update_params_for_auth(header_params, query_params, auth_settings) [e2e-predictor] [e2e-predictor] # body [e2e-predictor] if body: [e2e-predictor] body = 
self.sanitize_for_serialization(body) [e2e-predictor] [e2e-predictor] # request url [e2e-predictor] if _host is None: [e2e-predictor] url = self.configuration.host + resource_path [e2e-predictor] else: [e2e-predictor] # use server/host defined in path or operation instead [e2e-predictor] url = _host + resource_path [e2e-predictor] [e2e-predictor] # perform request and return response [e2e-predictor] > response_data = self.request( [e2e-predictor] method, url, query_params=query_params, headers=header_params, [e2e-predictor] post_params=post_params, body=body, [e2e-predictor] _preload_content=_preload_content, [e2e-predictor] _request_timeout=_request_timeout) [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api_client.py:180: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] method = 'PATCH' [e2e-predictor] url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/api/v1/namespaces/kserve-ci-e2e-test/secrets/storage-config' [e2e-predictor] query_params = [] [e2e-predictor] headers = {'Accept': 'application/json', 'Content-Type': 'application/strategic-merge-patch+json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'} [e2e-predictor] post_params = [] [e2e-predictor] body = {'data': {'localTLSS3Serving': 'eyJ0eXBlIjogInMzIiwgImFjY2Vzc19rZXlfaWQiOiAiczNhZG1pbiIsICJzZWNyZXRfYWNjZXNzX2tleSI6IC...dXMtc291dGgiLCAiYW5vbnltb3VzIjogIkZhbHNlIiwgImNhYnVuZGxlX2NvbmZpZ21hcCI6ICJvZGgta3NlcnZlLWN1c3RvbS1jYS1idW5kbGUifQ=='}} [e2e-predictor] _preload_content = True, _request_timeout = None [e2e-predictor] [e2e-predictor] def request(self, method, url, query_params=None, headers=None, [e2e-predictor] post_params=None, body=None, _preload_content=True, [e2e-predictor] _request_timeout=None): [e2e-predictor] """Makes the HTTP request using RESTClient.""" [e2e-predictor] if method == "GET": 
[e2e-predictor] return self.rest_client.GET(url, [e2e-predictor] query_params=query_params, [e2e-predictor] _preload_content=_preload_content, [e2e-predictor] _request_timeout=_request_timeout, [e2e-predictor] headers=headers) [e2e-predictor] elif method == "HEAD": [e2e-predictor] return self.rest_client.HEAD(url, [e2e-predictor] query_params=query_params, [e2e-predictor] _preload_content=_preload_content, [e2e-predictor] _request_timeout=_request_timeout, [e2e-predictor] headers=headers) [e2e-predictor] elif method == "OPTIONS": [e2e-predictor] return self.rest_client.OPTIONS(url, [e2e-predictor] query_params=query_params, [e2e-predictor] headers=headers, [e2e-predictor] _preload_content=_preload_content, [e2e-predictor] _request_timeout=_request_timeout) [e2e-predictor] elif method == "POST": [e2e-predictor] return self.rest_client.POST(url, [e2e-predictor] query_params=query_params, [e2e-predictor] headers=headers, [e2e-predictor] post_params=post_params, [e2e-predictor] _preload_content=_preload_content, [e2e-predictor] _request_timeout=_request_timeout, [e2e-predictor] body=body) [e2e-predictor] elif method == "PUT": [e2e-predictor] return self.rest_client.PUT(url, [e2e-predictor] query_params=query_params, [e2e-predictor] headers=headers, [e2e-predictor] post_params=post_params, [e2e-predictor] _preload_content=_preload_content, [e2e-predictor] _request_timeout=_request_timeout, [e2e-predictor] body=body) [e2e-predictor] elif method == "PATCH": [e2e-predictor] > return self.rest_client.PATCH(url, [e2e-predictor] query_params=query_params, [e2e-predictor] headers=headers, [e2e-predictor] post_params=post_params, [e2e-predictor] _preload_content=_preload_content, [e2e-predictor] _request_timeout=_request_timeout, [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api_client.py:407: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] 
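Note that the traceback's locals show `Content-Type: application/json-patch+json` when the call leaves `patch_namespaced_secret`, but `application/strategic-merge-patch+json` by the time it reaches the REST layer. That downgrade is visible in the `kubernetes/client/rest.py` frame further down: a JSON-Patch body must be a list of RFC 6902 operations, so a dict body is re-labelled as a strategic merge patch before serialization. A minimal standalone sketch of that rule (the function name is illustrative, not the client's API):

```python
import json

def select_patch_content_type(body, requested="application/json-patch+json"):
    """Mimic the kubernetes client's fallback: a JSON-Patch must be a
    list of operations, so a dict body is sent as a strategic merge patch."""
    if requested == "application/json-patch+json" and not isinstance(body, list):
        return "application/strategic-merge-patch+json"
    return requested

# The storage-config patch in this log is a dict, so it is re-labelled
# and then serialized with json.dumps, as the rest.py frame shows:
patch = {"data": {"localTLSS3Serving": "base64-payload-here"}}
content_type = select_patch_content_type(patch)   # strategic-merge-patch+json
request_body = json.dumps(patch)
```

This is why the header flip between frames is expected behavior and not related to the failure itself.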
self =
url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/api/v1/namespaces/kserve-ci-e2e-test/secrets/storage-config'
headers = {'Accept': 'application/json', 'Content-Type': 'application/strategic-merge-patch+json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
query_params = [], post_params = []
body = {'data': {'localTLSS3Serving': 'eyJ0eXBlIjogInMzIiwgImFjY2Vzc19rZXlfaWQiOiAiczNhZG1pbiIsICJzZWNyZXRfYWNjZXNzX2tleSI6IC...dXMtc291dGgiLCAiYW5vbnltb3VzIjogIkZhbHNlIiwgImNhYnVuZGxlX2NvbmZpZ21hcCI6ICJvZGgta3NlcnZlLWN1c3RvbS1jYS1idW5kbGUifQ=='}}
_preload_content = True, _request_timeout = None

    def PATCH(self, url, headers=None, query_params=None, post_params=None,
              body=None, _preload_content=True, _request_timeout=None):
>       return self.request("PATCH", url,
                            headers=headers,
                            query_params=query_params,
                            post_params=post_params,
                            _preload_content=_preload_content,
                            _request_timeout=_request_timeout,
                            body=body)

../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/rest.py:299:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
method = 'PATCH'
url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/api/v1/namespaces/kserve-ci-e2e-test/secrets/storage-config'
query_params = []
headers = {'Accept': 'application/json', 'Content-Type': 'application/strategic-merge-patch+json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
body = {'data': {'localTLSS3Serving': 'eyJ0eXBlIjogInMzIiwgImFjY2Vzc19rZXlfaWQiOiAiczNhZG1pbiIsICJzZWNyZXRfYWNjZXNzX2tleSI6IC...dXMtc291dGgiLCAiYW5vbnltb3VzIjogIkZhbHNlIiwgImNhYnVuZGxlX2NvbmZpZ21hcCI6ICJvZGgta3NlcnZlLWN1c3RvbS1jYS1idW5kbGUifQ=='}}
post_params = {}, _preload_content = True, _request_timeout = None

    def request(self, method, url, query_params=None, headers=None,
                body=None, post_params=None, _preload_content=True,
                _request_timeout=None):
        """Perform requests.

        :param method: http request method
        :param url: http request url
        :param query_params: query parameters in the url
        :param headers: http request headers
        :param body: request json body, for `application/json`
        :param post_params: request post parameters,
                            `application/x-www-form-urlencoded`
                            and `multipart/form-data`
        :param _preload_content: if False, the urllib3.HTTPResponse object will
                                 be returned without reading/decoding response
                                 data. Default is True.
        :param _request_timeout: timeout setting for this request. If one
                                 number provided, it will be total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
        """
        method = method.upper()
        assert method in ['GET', 'HEAD', 'DELETE', 'POST', 'PUT',
                          'PATCH', 'OPTIONS']

        if post_params and body:
            raise ApiValueError(
                "body parameter cannot be used with post_params parameter."
            )

        post_params = post_params or {}
        headers = headers or {}

        timeout = None
        if _request_timeout:
            if isinstance(_request_timeout, (int, ) if six.PY3 else (int, long)):  # noqa: E501,F821
                timeout = urllib3.Timeout(total=_request_timeout)
            elif (isinstance(_request_timeout, tuple) and
                  len(_request_timeout) == 2):
                timeout = urllib3.Timeout(
                    connect=_request_timeout[0], read=_request_timeout[1])

        if 'Content-Type' not in headers:
            headers['Content-Type'] = 'application/json'

        try:
            # For `POST`, `PUT`, `PATCH`, `OPTIONS`, `DELETE`
            if method in ['POST', 'PUT', 'PATCH', 'OPTIONS', 'DELETE']:
                if query_params:
                    url += '?' + urlencode(query_params)
                if (re.search('json', headers['Content-Type'], re.IGNORECASE) or
                        headers['Content-Type'] == 'application/apply-patch+yaml'):
                    if headers['Content-Type'] == 'application/json-patch+json':
                        if not isinstance(body, list):
                            headers['Content-Type'] = \
                                'application/strategic-merge-patch+json'
                    request_body = None
                    if body is not None:
                        request_body = json.dumps(body)
>                   r = self.pool_manager.request(
                        method, url,
                        body=request_body,
                        preload_content=_preload_content,
                        timeout=timeout,

../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/rest.py:172:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
method = 'PATCH'
url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/api/v1/namespaces/kserve-ci-e2e-test/secrets/storage-config'
body = '{"data": {"localTLSS3Serving": "eyJ0eXBlIjogInMzIiwgImFjY2Vzc19rZXlfaWQiOiAiczNhZG1pbiIsICJzZWNyZXRfYWNjZXNzX2tleSI6I...XMtc291dGgiLCAiYW5vbnltb3VzIjogIkZhbHNlIiwgImNhYnVuZGxlX2NvbmZpZ21hcCI6ICJvZGgta3NlcnZlLWN1c3RvbS1jYS1idW5kbGUifQ=="}}'
fields = None
headers = {'Accept': 'application/json', 'Content-Type': 'application/strategic-merge-patch+json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
json = None
urlopen_kw = {'body': '{"data": {"localTLSS3Serving": "eyJ0eXBlIjogInMzIiwgImFjY2Vzc19rZXlfaWQiOiAiczNhZG1pbiIsICJzZWNyZXRfYWNjZXNz...ImNhYnVuZGxlX2NvbmZpZ21hcCI6ICJvZGgta3NlcnZlLWN1c3RvbS1jYS1idW5kbGUifQ=="}}', 'preload_content': True, 'timeout': None}

    def request(
        self,
        method: str,
        url: str,
        body: _TYPE_BODY | None = None,
        fields: _TYPE_FIELDS | None = None,
        headers: typing.Mapping[str, str] | None = None,
        json: typing.Any | None = None,
        **urlopen_kw: typing.Any,
    ) -> BaseHTTPResponse:
        """
        Make a request using :meth:`urlopen` with the appropriate encoding of
        ``fields`` based on the ``method`` used.

        This is a convenience method that requires the least amount of manual
        effort. It can be used in most situations, while still having the
        option to drop down to more specific methods when necessary, such as
        :meth:`request_encode_url`, :meth:`request_encode_body`,
        or even the lowest level :meth:`urlopen`.

        :param method:
            HTTP request method (such as GET, POST, PUT, etc.)

        :param url:
            The URL to perform the request on.

        :param body:
            Data to send in the request body, either :class:`str`, :class:`bytes`,
            an iterable of :class:`str`/:class:`bytes`, or a file-like object.

        :param fields:
            Data to encode and send in the URL or request body, depending on ``method``.

        :param headers:
            Dictionary of custom headers to send, such as User-Agent,
            If-None-Match, etc. If None, pool headers are used. If provided,
            these headers completely replace any pool-specific headers.

        :param json:
            Data to encode and send as JSON with UTF-encoded in the request body.
            The ``"Content-Type"`` header will be set to ``"application/json"``
            unless specified otherwise.
        """
        method = method.upper()

        if json is not None and body is not None:
            raise TypeError(
                "request got values for both 'body' and 'json' parameters which are mutually exclusive"
            )

        if json is not None:
            if headers is None:
                headers = self.headers

            if not ("content-type" in map(str.lower, headers.keys())):
                headers = HTTPHeaderDict(headers)
                headers["Content-Type"] = "application/json"

            body = _json.dumps(json, separators=(",", ":"), ensure_ascii=False).encode(
                "utf-8"
            )

        if body is not None:
            urlopen_kw["body"] = body

        if method in self._encode_url_methods:
            return self.request_encode_url(
                method,
                url,
                fields=fields,  # type: ignore[arg-type]
                headers=headers,
                **urlopen_kw,
            )
        else:
>           return self.request_encode_body(
                method, url, fields=fields, headers=headers, **urlopen_kw
            )

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/_request_methods.py:143:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
method = 'PATCH'
url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/api/v1/namespaces/kserve-ci-e2e-test/secrets/storage-config'
fields = None
headers = {'Accept': 'application/json', 'Content-Type': 'application/strategic-merge-patch+json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
encode_multipart = True, multipart_boundary = None
urlopen_kw = {'body': '{"data": {"localTLSS3Serving": "eyJ0eXBlIjogInMzIiwgImFjY2Vzc19rZXlfaWQiOiAiczNhZG1pbiIsICJzZWNyZXRfYWNjZXNz...ImNhYnVuZGxlX2NvbmZpZ21hcCI6ICJvZGgta3NlcnZlLWN1c3RvbS1jYS1idW5kbGUifQ=="}}', 'preload_content': True, 'timeout': None}
extra_kw = {'body': '{"data": {"localTLSS3Serving": "eyJ0eXBlIjogInMzIiwgImFjY2Vzc19rZXlfaWQiOiAiczNhZG1pbiIsICJzZWNyZXRfYWNjZXNz...rategic-merge-patch+json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}), 'preload_content': True, 'timeout': None}

    def request_encode_body(
        self,
        method: str,
        url: str,
        fields: _TYPE_FIELDS | None = None,
        headers: typing.Mapping[str, str] | None = None,
        encode_multipart: bool = True,
        multipart_boundary: str | None = None,
        **urlopen_kw: str,
    ) -> BaseHTTPResponse:
        """
        Make a request using :meth:`urlopen` with the ``fields`` encoded in
        the body. This is useful for request methods like POST, PUT, PATCH, etc.

        When ``encode_multipart=True`` (default), then
        :func:`urllib3.encode_multipart_formdata` is used to encode
        the payload with the appropriate content type. Otherwise
        :func:`urllib.parse.urlencode` is used with the
        'application/x-www-form-urlencoded' content type.

        Multipart encoding must be used when posting files, and it's reasonably
        safe to use it in other times too. However, it may break request
        signing, such as with OAuth.

        Supports an optional ``fields`` parameter of key/value strings AND
        key/filetuple. A filetuple is a (filename, data, MIME type) tuple where
        the MIME type is optional. For example::

            fields = {
                'foo': 'bar',
                'fakefile': ('foofile.txt', 'contents of foofile'),
                'realfile': ('barfile.txt', open('realfile').read()),
                'typedfile': ('bazfile.bin', open('bazfile').read(),
                              'image/jpeg'),
                'nonamefile': 'contents of nonamefile field',
            }

        When uploading a file, providing a filename (the first parameter of the
        tuple) is optional but recommended to best mimic behavior of browsers.

        Note that if ``headers`` are supplied, the 'Content-Type' header will
        be overwritten because it depends on the dynamic random boundary string
        which is used to compose the body of the request. The random boundary
        string can be explicitly set with the ``multipart_boundary`` parameter.

        :param method:
            HTTP request method (such as GET, POST, PUT, etc.)

        :param url:
            The URL to perform the request on.

        :param fields:
            Data to encode and send in the request body.

        :param headers:
            Dictionary of custom headers to send, such as User-Agent,
            If-None-Match, etc. If None, pool headers are used. If provided,
            these headers completely replace any pool-specific headers.

        :param encode_multipart:
            If True, encode the ``fields`` using the multipart/form-data MIME
            format.

        :param multipart_boundary:
            If not specified, then a random boundary will be generated using
            :func:`urllib3.filepost.choose_boundary`.
        """
        if headers is None:
            headers = self.headers

        extra_kw: dict[str, typing.Any] = {"headers": HTTPHeaderDict(headers)}
        body: bytes | str

        if fields:
            if "body" in urlopen_kw:
                raise TypeError(
                    "request got values for both 'fields' and 'body', can only specify one."
                )

            if encode_multipart:
                body, content_type = encode_multipart_formdata(
                    fields, boundary=multipart_boundary
                )
            else:
                body, content_type = (
                    urlencode(fields),  # type: ignore[arg-type]
                    "application/x-www-form-urlencoded",
                )

            extra_kw["body"] = body
            extra_kw["headers"].setdefault("Content-Type", content_type)

        extra_kw.update(urlopen_kw)

>       return self.urlopen(method, url, **extra_kw)

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/_request_methods.py:278:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
method = 'PATCH'
url = 'https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/api/v1/namespaces/kserve-ci-e2e-test/secrets/storage-config'
redirect = True
kw = {'assert_same_host': False, 'body': '{"data": {"localTLSS3Serving": "eyJ0eXBlIjogInMzIiwgImFjY2Vzc19rZXlfaWQiOiAiczNhZ...plication/strategic-merge-patch+json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}), 'preload_content': True, ...}
u = Url(scheme='https', auth=None, host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443, path='/api/v1/namespaces/kserve-ci-e2e-test/secrets/storage-config', query=None, fragment=None)
conn =

    def urlopen(  # type: ignore[override]
        self, method: str, url: str, redirect: bool = True, **kw: typing.Any
    ) -> BaseHTTPResponse:
        """
        Same as :meth:`urllib3.HTTPConnectionPool.urlopen`
        with custom cross-host redirect logic and only sends the request-uri
        portion of the ``url``.

        The given ``url`` parameter must be absolute, such that an appropriate
        :class:`urllib3.connectionpool.ConnectionPool` can be chosen for it.
        """
        u = parse_url(url)

        if u.scheme is None:
            warnings.warn(
                "URLs without a scheme (ie 'https://') are deprecated and will raise an error "
                "in a future version of urllib3. To avoid this DeprecationWarning ensure all URLs "
                "start with 'https://' or 'http://'. Read more in this issue: "
                "https://github.com/urllib3/urllib3/issues/2920",
                category=DeprecationWarning,
                stacklevel=2,
            )

        conn = self.connection_from_host(u.host, port=u.port, scheme=u.scheme)

        kw["assert_same_host"] = False
        kw["redirect"] = False

        if "headers" not in kw:
            kw["headers"] = self.headers

        if self._proxy_requires_url_absolute_form(u):
            response = conn.urlopen(method, url, **kw)
        else:
>           response = conn.urlopen(method, u.request_uri, **kw)

../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/poolmanager.py:457:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
method = 'PATCH'
url = '/api/v1/namespaces/kserve-ci-e2e-test/secrets/storage-config'
body = '{"data": {"localTLSS3Serving": "eyJ0eXBlIjogInMzIiwgImFjY2Vzc19rZXlfaWQiOiAiczNhZG1pbiIsICJzZWNyZXRfYWNjZXNzX2tleSI6I...XMtc291dGgiLCAiYW5vbnltb3VzIjogIkZhbHNlIiwgImNhYnVuZGxlX2NvbmZpZ21hcCI6ICJvZGgta3NlcnZlLWN1c3RvbS1jYS1idW5kbGUifQ=="}}'
headers = HTTPHeaderDict({'Accept': 'application/json', 'Content-Type': 'application/strategic-merge-patch+json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'})
retries = Retry(total=2, connect=None, read=None, redirect=None, status=None)
redirect = False, assert_same_host = False, timeout = None, pool_timeout = None
release_conn = True, chunked = False, body_pos = None, preload_content = True
decode_content = True, response_kw = {}
parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/api/v1/namespaces/kserve-ci-e2e-test/secrets/storage-config', query=None, fragment=None)
destination_scheme = None, conn = None, release_this_conn = True
http_tunnel_required = False
err = NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.c...a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")
clean_exit = False

    def urlopen(  # type: ignore[override]
        self,
        method: str,
        url: str,
        body: _TYPE_BODY | None = None,
        headers: typing.Mapping[str, str] | None = None,
        retries: Retry | bool | int | None = None,
        redirect: bool = True,
        assert_same_host: bool = True,
        timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
        pool_timeout: int | None = None,
        release_conn: bool | None = None,
        chunked: bool = False,
        body_pos: _TYPE_BODY_POSITION | None = None,
        preload_content: bool = True,
        decode_content: bool = True,
        **response_kw: typing.Any,
    ) -> BaseHTTPResponse:
        """
        Get a connection from the pool and perform an HTTP request. This is the
        lowest level call for making a request, so you'll need to specify all
        the raw details.

        .. note::

            More commonly, it's appropriate to use a convenience method
            such as :meth:`request`.

        .. note::

            `release_conn` will only behave as expected if
            `preload_content=False` because we want to make
            `preload_content=False` the default behaviour someday soon without
            breaking backwards compatibility.

        :param method:
            HTTP request method (such as GET, POST, PUT, etc.)

        :param url:
            The URL to perform the request on.

        :param body:
            Data to send in the request body, either :class:`str`, :class:`bytes`,
            an iterable of :class:`str`/:class:`bytes`, or a file-like object.

        :param headers:
            Dictionary of custom headers to send, such as User-Agent,
            If-None-Match, etc. If None, pool headers are used. If provided,
            these headers completely replace any pool-specific headers.

        :param retries:
            Configure the number of retries to allow before raising a
            :class:`~urllib3.exceptions.MaxRetryError` exception.

            If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a
            :class:`~urllib3.util.retry.Retry` object for fine-grained control
            over different types of retries.
            Pass an integer number to retry connection errors that many times,
            but no other types of errors. Pass zero to never retry.

            If ``False``, then retries are disabled and any exception is raised
            immediately. Also, instead of raising a MaxRetryError on redirects,
            the redirect response will be returned.

        :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
[e2e-predictor] [e2e-predictor] :param redirect: [e2e-predictor] If True, automatically handle redirects (status codes 301, 302, [e2e-predictor] 303, 307, 308). Each redirect counts as a retry. Disabling retries [e2e-predictor] will disable redirect, too. [e2e-predictor] [e2e-predictor] :param assert_same_host: [e2e-predictor] If ``True``, will make sure that the host of the pool requests is [e2e-predictor] consistent else will raise HostChangedError. When ``False``, you can [e2e-predictor] use the pool on an HTTP proxy and request foreign hosts. [e2e-predictor] [e2e-predictor] :param timeout: [e2e-predictor] If specified, overrides the default timeout for this one [e2e-predictor] request. It may be a float (in seconds) or an instance of [e2e-predictor] :class:`urllib3.util.Timeout`. [e2e-predictor] [e2e-predictor] :param pool_timeout: [e2e-predictor] If set and the pool is set to block=True, then this method will [e2e-predictor] block for ``pool_timeout`` seconds and raise EmptyPoolError if no [e2e-predictor] connection is available within the time period. [e2e-predictor] [e2e-predictor] :param bool preload_content: [e2e-predictor] If True, the response's body will be preloaded into memory. [e2e-predictor] [e2e-predictor] :param bool decode_content: [e2e-predictor] If True, will attempt to decode the body based on the [e2e-predictor] 'content-encoding' header. [e2e-predictor] [e2e-predictor] :param release_conn: [e2e-predictor] If False, then the urlopen call will not release the connection [e2e-predictor] back into the pool once a response is received (but will release if [e2e-predictor] you read the entire contents of the response such as when [e2e-predictor] `preload_content=True`). This is useful if you're not preloading [e2e-predictor] the response's content immediately. You will need to call [e2e-predictor] ``r.release_conn()`` on the response ``r`` to return the connection [e2e-predictor] back into the pool. 
If None, it takes the value of ``preload_content`` [e2e-predictor] which defaults to ``True``. [e2e-predictor] [e2e-predictor] :param bool chunked: [e2e-predictor] If True, urllib3 will send the body using chunked transfer [e2e-predictor] encoding. Otherwise, urllib3 will send the body using the standard [e2e-predictor] content-length form. Defaults to False. [e2e-predictor] [e2e-predictor] :param int body_pos: [e2e-predictor] Position to seek to in file-like body in the event of a retry or [e2e-predictor] redirect. Typically this won't need to be set because urllib3 will [e2e-predictor] auto-populate the value when needed. [e2e-predictor] """ [e2e-predictor] parsed_url = parse_url(url) [e2e-predictor] destination_scheme = parsed_url.scheme [e2e-predictor] [e2e-predictor] if headers is None: [e2e-predictor] headers = self.headers [e2e-predictor] [e2e-predictor] if not isinstance(retries, Retry): [e2e-predictor] retries = Retry.from_int(retries, redirect=redirect, default=self.retries) [e2e-predictor] [e2e-predictor] if release_conn is None: [e2e-predictor] release_conn = preload_content [e2e-predictor] [e2e-predictor] # Check host [e2e-predictor] if assert_same_host and not self.is_same_host(url): [e2e-predictor] raise HostChangedError(self, url, retries) [e2e-predictor] [e2e-predictor] # Ensure that the URL we're connecting to is properly encoded [e2e-predictor] if url.startswith("/"): [e2e-predictor] url = to_str(_encode_target(url)) [e2e-predictor] else: [e2e-predictor] url = to_str(parsed_url.url) [e2e-predictor] [e2e-predictor] conn = None [e2e-predictor] [e2e-predictor] # Track whether `conn` needs to be released before [e2e-predictor] # returning/raising/recursing. Update this variable if necessary, and [e2e-predictor] # leave `release_conn` constant throughout the function. That way, if [e2e-predictor] # the function recurses, the original value of `release_conn` will be [e2e-predictor] # passed down into the recursive call, and its value will be respected. 
[e2e-predictor] # [e2e-predictor] # See issue #651 [1] for details. [e2e-predictor] # [e2e-predictor] # [1] [e2e-predictor] release_this_conn = release_conn [e2e-predictor] [e2e-predictor] http_tunnel_required = connection_requires_http_tunnel( [e2e-predictor] self.proxy, self.proxy_config, destination_scheme [e2e-predictor] ) [e2e-predictor] [e2e-predictor] # Merge the proxy headers. Only done when not using HTTP CONNECT. We [e2e-predictor] # have to copy the headers dict so we can safely change it without those [e2e-predictor] # changes being reflected in anyone else's copy. [e2e-predictor] if not http_tunnel_required: [e2e-predictor] headers = headers.copy() # type: ignore[attr-defined] [e2e-predictor] headers.update(self.proxy_headers) # type: ignore[union-attr] [e2e-predictor] [e2e-predictor] # Must keep the exception bound to a separate variable or else Python 3 [e2e-predictor] # complains about UnboundLocalError. [e2e-predictor] err = None [e2e-predictor] [e2e-predictor] # Keep track of whether we cleanly exited the except block. This [e2e-predictor] # ensures we do proper cleanup in finally. [e2e-predictor] clean_exit = False [e2e-predictor] [e2e-predictor] # Rewind body position, if needed. Record current position [e2e-predictor] # for future rewinds in the event of a redirect/retry. [e2e-predictor] body_pos = set_file_position(body, body_pos) [e2e-predictor] [e2e-predictor] try: [e2e-predictor] # Request a connection from the queue. [e2e-predictor] timeout_obj = self._get_timeout(timeout) [e2e-predictor] conn = self._get_conn(timeout=pool_timeout) [e2e-predictor] [e2e-predictor] conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] [e2e-predictor] [e2e-predictor] # Is this a closed/new connection that requires CONNECT tunnelling? 
[e2e-predictor] if self.proxy is not None and http_tunnel_required and conn.is_closed: [e2e-predictor] try: [e2e-predictor] self._prepare_proxy(conn) [e2e-predictor] except (BaseSSLError, OSError, SocketTimeout) as e: [e2e-predictor] self._raise_timeout( [e2e-predictor] err=e, url=self.proxy.url, timeout_value=conn.timeout [e2e-predictor] ) [e2e-predictor] raise [e2e-predictor] [e2e-predictor] # If we're going to release the connection in ``finally:``, then [e2e-predictor] # the response doesn't need to know about the connection. Otherwise [e2e-predictor] # it will also try to release it and we'll have a double-release [e2e-predictor] # mess. [e2e-predictor] response_conn = conn if not release_conn else None [e2e-predictor] [e2e-predictor] # Make the request on the HTTPConnection object [e2e-predictor] response = self._make_request( [e2e-predictor] conn, [e2e-predictor] method, [e2e-predictor] url, [e2e-predictor] timeout=timeout_obj, [e2e-predictor] body=body, [e2e-predictor] headers=headers, [e2e-predictor] chunked=chunked, [e2e-predictor] retries=retries, [e2e-predictor] response_conn=response_conn, [e2e-predictor] preload_content=preload_content, [e2e-predictor] decode_content=decode_content, [e2e-predictor] **response_kw, [e2e-predictor] ) [e2e-predictor] [e2e-predictor] # Everything went great! [e2e-predictor] clean_exit = True [e2e-predictor] [e2e-predictor] except EmptyPoolError: [e2e-predictor] # Didn't get a connection from the pool, no need to clean up [e2e-predictor] clean_exit = True [e2e-predictor] release_this_conn = False [e2e-predictor] raise [e2e-predictor] [e2e-predictor] except ( [e2e-predictor] TimeoutError, [e2e-predictor] HTTPException, [e2e-predictor] OSError, [e2e-predictor] ProtocolError, [e2e-predictor] BaseSSLError, [e2e-predictor] SSLError, [e2e-predictor] CertificateError, [e2e-predictor] ProxyError, [e2e-predictor] ) as e: [e2e-predictor] # Discard the connection for these exceptions. 
It will be [e2e-predictor] # replaced during the next _get_conn() call. [e2e-predictor] clean_exit = False [e2e-predictor] new_e: Exception = e [e2e-predictor] if isinstance(e, (BaseSSLError, CertificateError)): [e2e-predictor] new_e = SSLError(e) [e2e-predictor] if isinstance( [e2e-predictor] new_e, [e2e-predictor] ( [e2e-predictor] OSError, [e2e-predictor] NewConnectionError, [e2e-predictor] TimeoutError, [e2e-predictor] SSLError, [e2e-predictor] HTTPException, [e2e-predictor] ), [e2e-predictor] ) and (conn and conn.proxy and not conn.has_connected_to_proxy): [e2e-predictor] new_e = _wrap_proxy_error(new_e, conn.proxy.scheme) [e2e-predictor] elif isinstance(new_e, (OSError, HTTPException)): [e2e-predictor] new_e = ProtocolError("Connection aborted.", new_e) [e2e-predictor] [e2e-predictor] retries = retries.increment( [e2e-predictor] method, url, error=new_e, _pool=self, _stacktrace=sys.exc_info()[2] [e2e-predictor] ) [e2e-predictor] retries.sleep() [e2e-predictor] [e2e-predictor] # Keep track of the error for the retry warning. [e2e-predictor] err = e [e2e-predictor] [e2e-predictor] finally: [e2e-predictor] if not clean_exit: [e2e-predictor] # We hit some kind of exception, handled or otherwise. We need [e2e-predictor] # to throw the connection away unless explicitly told not to. [e2e-predictor] # Close the connection, set the variable to None, and make sure [e2e-predictor] # we put the None back in the pool to avoid leaking it. [e2e-predictor] if conn: [e2e-predictor] conn.close() [e2e-predictor] conn = None [e2e-predictor] release_this_conn = True [e2e-predictor] [e2e-predictor] if release_this_conn: [e2e-predictor] # Put the connection back to be reused. If the connection is [e2e-predictor] # expired then it will be None, which will get replaced with a [e2e-predictor] # fresh connection during _get_conn. 
[e2e-predictor] self._put_conn(conn) [e2e-predictor] [e2e-predictor] if not conn: [e2e-predictor] # Try again [e2e-predictor] log.warning( [e2e-predictor] "Retrying (%r) after connection broken by '%r': %s", retries, err, url [e2e-predictor] ) [e2e-predictor] > return self.urlopen( [e2e-predictor] method, [e2e-predictor] url, [e2e-predictor] body, [e2e-predictor] headers, [e2e-predictor] retries, [e2e-predictor] redirect, [e2e-predictor] assert_same_host, [e2e-predictor] timeout=timeout, [e2e-predictor] pool_timeout=pool_timeout, [e2e-predictor] release_conn=release_conn, [e2e-predictor] chunked=chunked, [e2e-predictor] body_pos=body_pos, [e2e-predictor] preload_content=preload_content, [e2e-predictor] decode_content=decode_content, [e2e-predictor] **response_kw, [e2e-predictor] ) [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:871: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] method = 'PATCH' [e2e-predictor] url = '/api/v1/namespaces/kserve-ci-e2e-test/secrets/storage-config' [e2e-predictor] body = '{"data": {"localTLSS3Serving": "eyJ0eXBlIjogInMzIiwgImFjY2Vzc19rZXlfaWQiOiAiczNhZG1pbiIsICJzZWNyZXRfYWNjZXNzX2tleSI6I...XMtc291dGgiLCAiYW5vbnltb3VzIjogIkZhbHNlIiwgImNhYnVuZGxlX2NvbmZpZ21hcCI6ICJvZGgta3NlcnZlLWN1c3RvbS1jYS1idW5kbGUifQ=="}}' [e2e-predictor] headers = HTTPHeaderDict({'Accept': 'application/json', 'Content-Type': 'application/strategic-merge-patch+json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}) [e2e-predictor] retries = Retry(total=1, connect=None, read=None, redirect=None, status=None) [e2e-predictor] redirect = False, assert_same_host = False, timeout = None, pool_timeout = None [e2e-predictor] release_conn = True, chunked = False, body_pos = None, preload_content = True [e2e-predictor] decode_content = True, response_kw = {} [e2e-predictor] parsed_url = Url(scheme=None, 
auth=None, host=None, port=None, path='/api/v1/namespaces/kserve-ci-e2e-test/secrets/storage-config', query=None, fragment=None) [e2e-predictor] destination_scheme = None, conn = None, release_this_conn = True [e2e-predictor] http_tunnel_required = False [e2e-predictor] err = NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.c...a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)") [e2e-predictor] clean_exit = False [e2e-predictor] [e2e-predictor] def urlopen( # type: ignore[override] [e2e-predictor] self, [e2e-predictor] method: str, [e2e-predictor] url: str, [e2e-predictor] body: _TYPE_BODY | None = None, [e2e-predictor] headers: typing.Mapping[str, str] | None = None, [e2e-predictor] retries: Retry | bool | int | None = None, [e2e-predictor] redirect: bool = True, [e2e-predictor] assert_same_host: bool = True, [e2e-predictor] timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, [e2e-predictor] pool_timeout: int | None = None, [e2e-predictor] release_conn: bool | None = None, [e2e-predictor] chunked: bool = False, [e2e-predictor] body_pos: _TYPE_BODY_POSITION | None = None, [e2e-predictor] preload_content: bool = True, [e2e-predictor] decode_content: bool = True, [e2e-predictor] **response_kw: typing.Any, [e2e-predictor] ) -> BaseHTTPResponse: [e2e-predictor] """ [e2e-predictor] Get a connection from the pool and perform an HTTP request. This is the [e2e-predictor] lowest level call for making a request, so you'll need to specify all [e2e-predictor] the raw details. [e2e-predictor] [e2e-predictor] .. note:: [e2e-predictor] [e2e-predictor] More commonly, it's appropriate to use a convenience method [e2e-predictor] such as :meth:`request`. [e2e-predictor] [e2e-predictor] .. 
note:: [e2e-predictor] [e2e-predictor] `release_conn` will only behave as expected if [e2e-predictor] `preload_content=False` because we want to make [e2e-predictor] `preload_content=False` the default behaviour someday soon without [e2e-predictor] breaking backwards compatibility. [e2e-predictor] [e2e-predictor] :param method: [e2e-predictor] HTTP request method (such as GET, POST, PUT, etc.) [e2e-predictor] [e2e-predictor] :param url: [e2e-predictor] The URL to perform the request on. [e2e-predictor] [e2e-predictor] :param body: [e2e-predictor] Data to send in the request body, either :class:`str`, :class:`bytes`, [e2e-predictor] an iterable of :class:`str`/:class:`bytes`, or a file-like object. [e2e-predictor] [e2e-predictor] :param headers: [e2e-predictor] Dictionary of custom headers to send, such as User-Agent, [e2e-predictor] If-None-Match, etc. If None, pool headers are used. If provided, [e2e-predictor] these headers completely replace any pool-specific headers. [e2e-predictor] [e2e-predictor] :param retries: [e2e-predictor] Configure the number of retries to allow before raising a [e2e-predictor] :class:`~urllib3.exceptions.MaxRetryError` exception. [e2e-predictor] [e2e-predictor] If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a [e2e-predictor] :class:`~urllib3.util.retry.Retry` object for fine-grained control [e2e-predictor] over different types of retries. [e2e-predictor] Pass an integer number to retry connection errors that many times, [e2e-predictor] but no other types of errors. Pass zero to never retry. [e2e-predictor] [e2e-predictor] If ``False``, then retries are disabled and any exception is raised [e2e-predictor] immediately. Also, instead of raising a MaxRetryError on redirects, [e2e-predictor] the redirect response will be returned. [e2e-predictor] [e2e-predictor] :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 
[e2e-predictor] [e2e-predictor] :param redirect: [e2e-predictor] If True, automatically handle redirects (status codes 301, 302, [e2e-predictor] 303, 307, 308). Each redirect counts as a retry. Disabling retries [e2e-predictor] will disable redirect, too. [e2e-predictor] [e2e-predictor] :param assert_same_host: [e2e-predictor] If ``True``, will make sure that the host of the pool requests is [e2e-predictor] consistent else will raise HostChangedError. When ``False``, you can [e2e-predictor] use the pool on an HTTP proxy and request foreign hosts. [e2e-predictor] [e2e-predictor] :param timeout: [e2e-predictor] If specified, overrides the default timeout for this one [e2e-predictor] request. It may be a float (in seconds) or an instance of [e2e-predictor] :class:`urllib3.util.Timeout`. [e2e-predictor] [e2e-predictor] :param pool_timeout: [e2e-predictor] If set and the pool is set to block=True, then this method will [e2e-predictor] block for ``pool_timeout`` seconds and raise EmptyPoolError if no [e2e-predictor] connection is available within the time period. [e2e-predictor] [e2e-predictor] :param bool preload_content: [e2e-predictor] If True, the response's body will be preloaded into memory. [e2e-predictor] [e2e-predictor] :param bool decode_content: [e2e-predictor] If True, will attempt to decode the body based on the [e2e-predictor] 'content-encoding' header. [e2e-predictor] [e2e-predictor] :param release_conn: [e2e-predictor] If False, then the urlopen call will not release the connection [e2e-predictor] back into the pool once a response is received (but will release if [e2e-predictor] you read the entire contents of the response such as when [e2e-predictor] `preload_content=True`). This is useful if you're not preloading [e2e-predictor] the response's content immediately. You will need to call [e2e-predictor] ``r.release_conn()`` on the response ``r`` to return the connection [e2e-predictor] back into the pool. 
If None, it takes the value of ``preload_content`` [e2e-predictor] which defaults to ``True``. [e2e-predictor] [e2e-predictor] :param bool chunked: [e2e-predictor] If True, urllib3 will send the body using chunked transfer [e2e-predictor] encoding. Otherwise, urllib3 will send the body using the standard [e2e-predictor] content-length form. Defaults to False. [e2e-predictor] [e2e-predictor] :param int body_pos: [e2e-predictor] Position to seek to in file-like body in the event of a retry or [e2e-predictor] redirect. Typically this won't need to be set because urllib3 will [e2e-predictor] auto-populate the value when needed. [e2e-predictor] """ [e2e-predictor] parsed_url = parse_url(url) [e2e-predictor] destination_scheme = parsed_url.scheme [e2e-predictor] [e2e-predictor] if headers is None: [e2e-predictor] headers = self.headers [e2e-predictor] [e2e-predictor] if not isinstance(retries, Retry): [e2e-predictor] retries = Retry.from_int(retries, redirect=redirect, default=self.retries) [e2e-predictor] [e2e-predictor] if release_conn is None: [e2e-predictor] release_conn = preload_content [e2e-predictor] [e2e-predictor] # Check host [e2e-predictor] if assert_same_host and not self.is_same_host(url): [e2e-predictor] raise HostChangedError(self, url, retries) [e2e-predictor] [e2e-predictor] # Ensure that the URL we're connecting to is properly encoded [e2e-predictor] if url.startswith("/"): [e2e-predictor] url = to_str(_encode_target(url)) [e2e-predictor] else: [e2e-predictor] url = to_str(parsed_url.url) [e2e-predictor] [e2e-predictor] conn = None [e2e-predictor] [e2e-predictor] # Track whether `conn` needs to be released before [e2e-predictor] # returning/raising/recursing. Update this variable if necessary, and [e2e-predictor] # leave `release_conn` constant throughout the function. That way, if [e2e-predictor] # the function recurses, the original value of `release_conn` will be [e2e-predictor] # passed down into the recursive call, and its value will be respected. 
[e2e-predictor] # [e2e-predictor] # See issue #651 [1] for details. [e2e-predictor] # [e2e-predictor] # [1] [e2e-predictor] release_this_conn = release_conn [e2e-predictor] [e2e-predictor] http_tunnel_required = connection_requires_http_tunnel( [e2e-predictor] self.proxy, self.proxy_config, destination_scheme [e2e-predictor] ) [e2e-predictor] [e2e-predictor] # Merge the proxy headers. Only done when not using HTTP CONNECT. We [e2e-predictor] # have to copy the headers dict so we can safely change it without those [e2e-predictor] # changes being reflected in anyone else's copy. [e2e-predictor] if not http_tunnel_required: [e2e-predictor] headers = headers.copy() # type: ignore[attr-defined] [e2e-predictor] headers.update(self.proxy_headers) # type: ignore[union-attr] [e2e-predictor] [e2e-predictor] # Must keep the exception bound to a separate variable or else Python 3 [e2e-predictor] # complains about UnboundLocalError. [e2e-predictor] err = None [e2e-predictor] [e2e-predictor] # Keep track of whether we cleanly exited the except block. This [e2e-predictor] # ensures we do proper cleanup in finally. [e2e-predictor] clean_exit = False [e2e-predictor] [e2e-predictor] # Rewind body position, if needed. Record current position [e2e-predictor] # for future rewinds in the event of a redirect/retry. [e2e-predictor] body_pos = set_file_position(body, body_pos) [e2e-predictor] [e2e-predictor] try: [e2e-predictor] # Request a connection from the queue. [e2e-predictor] timeout_obj = self._get_timeout(timeout) [e2e-predictor] conn = self._get_conn(timeout=pool_timeout) [e2e-predictor] [e2e-predictor] conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] [e2e-predictor] [e2e-predictor] # Is this a closed/new connection that requires CONNECT tunnelling? 
[e2e-predictor] if self.proxy is not None and http_tunnel_required and conn.is_closed: [e2e-predictor] try: [e2e-predictor] self._prepare_proxy(conn) [e2e-predictor] except (BaseSSLError, OSError, SocketTimeout) as e: [e2e-predictor] self._raise_timeout( [e2e-predictor] err=e, url=self.proxy.url, timeout_value=conn.timeout [e2e-predictor] ) [e2e-predictor] raise [e2e-predictor] [e2e-predictor] # If we're going to release the connection in ``finally:``, then [e2e-predictor] # the response doesn't need to know about the connection. Otherwise [e2e-predictor] # it will also try to release it and we'll have a double-release [e2e-predictor] # mess. [e2e-predictor] response_conn = conn if not release_conn else None [e2e-predictor] [e2e-predictor] # Make the request on the HTTPConnection object [e2e-predictor] response = self._make_request( [e2e-predictor] conn, [e2e-predictor] method, [e2e-predictor] url, [e2e-predictor] timeout=timeout_obj, [e2e-predictor] body=body, [e2e-predictor] headers=headers, [e2e-predictor] chunked=chunked, [e2e-predictor] retries=retries, [e2e-predictor] response_conn=response_conn, [e2e-predictor] preload_content=preload_content, [e2e-predictor] decode_content=decode_content, [e2e-predictor] **response_kw, [e2e-predictor] ) [e2e-predictor] [e2e-predictor] # Everything went great! [e2e-predictor] clean_exit = True [e2e-predictor] [e2e-predictor] except EmptyPoolError: [e2e-predictor] # Didn't get a connection from the pool, no need to clean up [e2e-predictor] clean_exit = True [e2e-predictor] release_this_conn = False [e2e-predictor] raise [e2e-predictor] [e2e-predictor] except ( [e2e-predictor] TimeoutError, [e2e-predictor] HTTPException, [e2e-predictor] OSError, [e2e-predictor] ProtocolError, [e2e-predictor] BaseSSLError, [e2e-predictor] SSLError, [e2e-predictor] CertificateError, [e2e-predictor] ProxyError, [e2e-predictor] ) as e: [e2e-predictor] # Discard the connection for these exceptions. 
It will be [e2e-predictor] # replaced during the next _get_conn() call. [e2e-predictor] clean_exit = False [e2e-predictor] new_e: Exception = e [e2e-predictor] if isinstance(e, (BaseSSLError, CertificateError)): [e2e-predictor] new_e = SSLError(e) [e2e-predictor] if isinstance( [e2e-predictor] new_e, [e2e-predictor] ( [e2e-predictor] OSError, [e2e-predictor] NewConnectionError, [e2e-predictor] TimeoutError, [e2e-predictor] SSLError, [e2e-predictor] HTTPException, [e2e-predictor] ), [e2e-predictor] ) and (conn and conn.proxy and not conn.has_connected_to_proxy): [e2e-predictor] new_e = _wrap_proxy_error(new_e, conn.proxy.scheme) [e2e-predictor] elif isinstance(new_e, (OSError, HTTPException)): [e2e-predictor] new_e = ProtocolError("Connection aborted.", new_e) [e2e-predictor] [e2e-predictor] retries = retries.increment( [e2e-predictor] method, url, error=new_e, _pool=self, _stacktrace=sys.exc_info()[2] [e2e-predictor] ) [e2e-predictor] retries.sleep() [e2e-predictor] [e2e-predictor] # Keep track of the error for the retry warning. [e2e-predictor] err = e [e2e-predictor] [e2e-predictor] finally: [e2e-predictor] if not clean_exit: [e2e-predictor] # We hit some kind of exception, handled or otherwise. We need [e2e-predictor] # to throw the connection away unless explicitly told not to. [e2e-predictor] # Close the connection, set the variable to None, and make sure [e2e-predictor] # we put the None back in the pool to avoid leaking it. [e2e-predictor] if conn: [e2e-predictor] conn.close() [e2e-predictor] conn = None [e2e-predictor] release_this_conn = True [e2e-predictor] [e2e-predictor] if release_this_conn: [e2e-predictor] # Put the connection back to be reused. If the connection is [e2e-predictor] # expired then it will be None, which will get replaced with a [e2e-predictor] # fresh connection during _get_conn. 
[e2e-predictor] self._put_conn(conn) [e2e-predictor] [e2e-predictor] if not conn: [e2e-predictor] # Try again [e2e-predictor] log.warning( [e2e-predictor] "Retrying (%r) after connection broken by '%r': %s", retries, err, url [e2e-predictor] ) [e2e-predictor] > return self.urlopen( [e2e-predictor] method, [e2e-predictor] url, [e2e-predictor] body, [e2e-predictor] headers, [e2e-predictor] retries, [e2e-predictor] redirect, [e2e-predictor] assert_same_host, [e2e-predictor] timeout=timeout, [e2e-predictor] pool_timeout=pool_timeout, [e2e-predictor] release_conn=release_conn, [e2e-predictor] chunked=chunked, [e2e-predictor] body_pos=body_pos, [e2e-predictor] preload_content=preload_content, [e2e-predictor] decode_content=decode_content, [e2e-predictor] **response_kw, [e2e-predictor] ) [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:871: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] method = 'PATCH' [e2e-predictor] url = '/api/v1/namespaces/kserve-ci-e2e-test/secrets/storage-config' [e2e-predictor] body = '{"data": {"localTLSS3Serving": "eyJ0eXBlIjogInMzIiwgImFjY2Vzc19rZXlfaWQiOiAiczNhZG1pbiIsICJzZWNyZXRfYWNjZXNzX2tleSI6I...XMtc291dGgiLCAiYW5vbnltb3VzIjogIkZhbHNlIiwgImNhYnVuZGxlX2NvbmZpZ21hcCI6ICJvZGgta3NlcnZlLWN1c3RvbS1jYS1idW5kbGUifQ=="}}' [e2e-predictor] headers = HTTPHeaderDict({'Accept': 'application/json', 'Content-Type': 'application/strategic-merge-patch+json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}) [e2e-predictor] retries = Retry(total=0, connect=None, read=None, redirect=None, status=None) [e2e-predictor] redirect = False, assert_same_host = False, timeout = None, pool_timeout = None [e2e-predictor] release_conn = True, chunked = False, body_pos = None, preload_content = True [e2e-predictor] decode_content = True, response_kw = {} [e2e-predictor] parsed_url = Url(scheme=None, 
auth=None, host=None, port=None, path='/api/v1/namespaces/kserve-ci-e2e-test/secrets/storage-config', query=None, fragment=None) [e2e-predictor] destination_scheme = None, conn = None, release_this_conn = True [e2e-predictor] http_tunnel_required = False [e2e-predictor] err = NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.c...a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)") [e2e-predictor] clean_exit = False [e2e-predictor] [e2e-predictor] def urlopen( # type: ignore[override] [e2e-predictor] self, [e2e-predictor] method: str, [e2e-predictor] url: str, [e2e-predictor] body: _TYPE_BODY | None = None, [e2e-predictor] headers: typing.Mapping[str, str] | None = None, [e2e-predictor] retries: Retry | bool | int | None = None, [e2e-predictor] redirect: bool = True, [e2e-predictor] assert_same_host: bool = True, [e2e-predictor] timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, [e2e-predictor] pool_timeout: int | None = None, [e2e-predictor] release_conn: bool | None = None, [e2e-predictor] chunked: bool = False, [e2e-predictor] body_pos: _TYPE_BODY_POSITION | None = None, [e2e-predictor] preload_content: bool = True, [e2e-predictor] decode_content: bool = True, [e2e-predictor] **response_kw: typing.Any, [e2e-predictor] ) -> BaseHTTPResponse: [e2e-predictor] """ [e2e-predictor] Get a connection from the pool and perform an HTTP request. This is the [e2e-predictor] lowest level call for making a request, so you'll need to specify all [e2e-predictor] the raw details. [e2e-predictor] [e2e-predictor] .. note:: [e2e-predictor] [e2e-predictor] More commonly, it's appropriate to use a convenience method [e2e-predictor] such as :meth:`request`. [e2e-predictor] [e2e-predictor] .. 
note:: [e2e-predictor] [e2e-predictor] `release_conn` will only behave as expected if [e2e-predictor] `preload_content=False` because we want to make [e2e-predictor] `preload_content=False` the default behaviour someday soon without [e2e-predictor] breaking backwards compatibility. [e2e-predictor] [e2e-predictor] :param method: [e2e-predictor] HTTP request method (such as GET, POST, PUT, etc.) [e2e-predictor] [e2e-predictor] :param url: [e2e-predictor] The URL to perform the request on. [e2e-predictor] [e2e-predictor] :param body: [e2e-predictor] Data to send in the request body, either :class:`str`, :class:`bytes`, [e2e-predictor] an iterable of :class:`str`/:class:`bytes`, or a file-like object. [e2e-predictor] [e2e-predictor] :param headers: [e2e-predictor] Dictionary of custom headers to send, such as User-Agent, [e2e-predictor] If-None-Match, etc. If None, pool headers are used. If provided, [e2e-predictor] these headers completely replace any pool-specific headers. [e2e-predictor] [e2e-predictor] :param retries: [e2e-predictor] Configure the number of retries to allow before raising a [e2e-predictor] :class:`~urllib3.exceptions.MaxRetryError` exception. [e2e-predictor] [e2e-predictor] If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a [e2e-predictor] :class:`~urllib3.util.retry.Retry` object for fine-grained control [e2e-predictor] over different types of retries. [e2e-predictor] Pass an integer number to retry connection errors that many times, [e2e-predictor] but no other types of errors. Pass zero to never retry. [e2e-predictor] [e2e-predictor] If ``False``, then retries are disabled and any exception is raised [e2e-predictor] immediately. Also, instead of raising a MaxRetryError on redirects, [e2e-predictor] the redirect response will be returned. [e2e-predictor] [e2e-predictor] :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 
[e2e-predictor] [e2e-predictor] :param redirect: [e2e-predictor] If True, automatically handle redirects (status codes 301, 302, [e2e-predictor] 303, 307, 308). Each redirect counts as a retry. Disabling retries [e2e-predictor] will disable redirect, too. [e2e-predictor] [e2e-predictor] :param assert_same_host: [e2e-predictor] If ``True``, will make sure that the host of the pool requests is [e2e-predictor] consistent else will raise HostChangedError. When ``False``, you can [e2e-predictor] use the pool on an HTTP proxy and request foreign hosts. [e2e-predictor] [e2e-predictor] :param timeout: [e2e-predictor] If specified, overrides the default timeout for this one [e2e-predictor] request. It may be a float (in seconds) or an instance of [e2e-predictor] :class:`urllib3.util.Timeout`. [e2e-predictor] [e2e-predictor] :param pool_timeout: [e2e-predictor] If set and the pool is set to block=True, then this method will [e2e-predictor] block for ``pool_timeout`` seconds and raise EmptyPoolError if no [e2e-predictor] connection is available within the time period. [e2e-predictor] [e2e-predictor] :param bool preload_content: [e2e-predictor] If True, the response's body will be preloaded into memory. [e2e-predictor] [e2e-predictor] :param bool decode_content: [e2e-predictor] If True, will attempt to decode the body based on the [e2e-predictor] 'content-encoding' header. [e2e-predictor] [e2e-predictor] :param release_conn: [e2e-predictor] If False, then the urlopen call will not release the connection [e2e-predictor] back into the pool once a response is received (but will release if [e2e-predictor] you read the entire contents of the response such as when [e2e-predictor] `preload_content=True`). This is useful if you're not preloading [e2e-predictor] the response's content immediately. You will need to call [e2e-predictor] ``r.release_conn()`` on the response ``r`` to return the connection [e2e-predictor] back into the pool. 
If None, it takes the value of ``preload_content`` [e2e-predictor] which defaults to ``True``. [e2e-predictor] [e2e-predictor] :param bool chunked: [e2e-predictor] If True, urllib3 will send the body using chunked transfer [e2e-predictor] encoding. Otherwise, urllib3 will send the body using the standard [e2e-predictor] content-length form. Defaults to False. [e2e-predictor] [e2e-predictor] :param int body_pos: [e2e-predictor] Position to seek to in file-like body in the event of a retry or [e2e-predictor] redirect. Typically this won't need to be set because urllib3 will [e2e-predictor] auto-populate the value when needed. [e2e-predictor] """ [e2e-predictor] parsed_url = parse_url(url) [e2e-predictor] destination_scheme = parsed_url.scheme [e2e-predictor] [e2e-predictor] if headers is None: [e2e-predictor] headers = self.headers [e2e-predictor] [e2e-predictor] if not isinstance(retries, Retry): [e2e-predictor] retries = Retry.from_int(retries, redirect=redirect, default=self.retries) [e2e-predictor] [e2e-predictor] if release_conn is None: [e2e-predictor] release_conn = preload_content [e2e-predictor] [e2e-predictor] # Check host [e2e-predictor] if assert_same_host and not self.is_same_host(url): [e2e-predictor] raise HostChangedError(self, url, retries) [e2e-predictor] [e2e-predictor] # Ensure that the URL we're connecting to is properly encoded [e2e-predictor] if url.startswith("/"): [e2e-predictor] url = to_str(_encode_target(url)) [e2e-predictor] else: [e2e-predictor] url = to_str(parsed_url.url) [e2e-predictor] [e2e-predictor] conn = None [e2e-predictor] [e2e-predictor] # Track whether `conn` needs to be released before [e2e-predictor] # returning/raising/recursing. Update this variable if necessary, and [e2e-predictor] # leave `release_conn` constant throughout the function. That way, if [e2e-predictor] # the function recurses, the original value of `release_conn` will be [e2e-predictor] # passed down into the recursive call, and its value will be respected. 
[e2e-predictor] # [e2e-predictor] # See issue #651 [1] for details. [e2e-predictor] # [e2e-predictor] # [1] [e2e-predictor] release_this_conn = release_conn [e2e-predictor] [e2e-predictor] http_tunnel_required = connection_requires_http_tunnel( [e2e-predictor] self.proxy, self.proxy_config, destination_scheme [e2e-predictor] ) [e2e-predictor] [e2e-predictor] # Merge the proxy headers. Only done when not using HTTP CONNECT. We [e2e-predictor] # have to copy the headers dict so we can safely change it without those [e2e-predictor] # changes being reflected in anyone else's copy. [e2e-predictor] if not http_tunnel_required: [e2e-predictor] headers = headers.copy() # type: ignore[attr-defined] [e2e-predictor] headers.update(self.proxy_headers) # type: ignore[union-attr] [e2e-predictor] [e2e-predictor] # Must keep the exception bound to a separate variable or else Python 3 [e2e-predictor] # complains about UnboundLocalError. [e2e-predictor] err = None [e2e-predictor] [e2e-predictor] # Keep track of whether we cleanly exited the except block. This [e2e-predictor] # ensures we do proper cleanup in finally. [e2e-predictor] clean_exit = False [e2e-predictor] [e2e-predictor] # Rewind body position, if needed. Record current position [e2e-predictor] # for future rewinds in the event of a redirect/retry. [e2e-predictor] body_pos = set_file_position(body, body_pos) [e2e-predictor] [e2e-predictor] try: [e2e-predictor] # Request a connection from the queue. [e2e-predictor] timeout_obj = self._get_timeout(timeout) [e2e-predictor] conn = self._get_conn(timeout=pool_timeout) [e2e-predictor] [e2e-predictor] conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] [e2e-predictor] [e2e-predictor] # Is this a closed/new connection that requires CONNECT tunnelling? 
[e2e-predictor] if self.proxy is not None and http_tunnel_required and conn.is_closed: [e2e-predictor] try: [e2e-predictor] self._prepare_proxy(conn) [e2e-predictor] except (BaseSSLError, OSError, SocketTimeout) as e: [e2e-predictor] self._raise_timeout( [e2e-predictor] err=e, url=self.proxy.url, timeout_value=conn.timeout [e2e-predictor] ) [e2e-predictor] raise [e2e-predictor] [e2e-predictor] # If we're going to release the connection in ``finally:``, then [e2e-predictor] # the response doesn't need to know about the connection. Otherwise [e2e-predictor] # it will also try to release it and we'll have a double-release [e2e-predictor] # mess. [e2e-predictor] response_conn = conn if not release_conn else None [e2e-predictor] [e2e-predictor] # Make the request on the HTTPConnection object [e2e-predictor] response = self._make_request( [e2e-predictor] conn, [e2e-predictor] method, [e2e-predictor] url, [e2e-predictor] timeout=timeout_obj, [e2e-predictor] body=body, [e2e-predictor] headers=headers, [e2e-predictor] chunked=chunked, [e2e-predictor] retries=retries, [e2e-predictor] response_conn=response_conn, [e2e-predictor] preload_content=preload_content, [e2e-predictor] decode_content=decode_content, [e2e-predictor] **response_kw, [e2e-predictor] ) [e2e-predictor] [e2e-predictor] # Everything went great! [e2e-predictor] clean_exit = True [e2e-predictor] [e2e-predictor] except EmptyPoolError: [e2e-predictor] # Didn't get a connection from the pool, no need to clean up [e2e-predictor] clean_exit = True [e2e-predictor] release_this_conn = False [e2e-predictor] raise [e2e-predictor] [e2e-predictor] except ( [e2e-predictor] TimeoutError, [e2e-predictor] HTTPException, [e2e-predictor] OSError, [e2e-predictor] ProtocolError, [e2e-predictor] BaseSSLError, [e2e-predictor] SSLError, [e2e-predictor] CertificateError, [e2e-predictor] ProxyError, [e2e-predictor] ) as e: [e2e-predictor] # Discard the connection for these exceptions. 
It will be [e2e-predictor] # replaced during the next _get_conn() call. [e2e-predictor] clean_exit = False [e2e-predictor] new_e: Exception = e [e2e-predictor] if isinstance(e, (BaseSSLError, CertificateError)): [e2e-predictor] new_e = SSLError(e) [e2e-predictor] if isinstance( [e2e-predictor] new_e, [e2e-predictor] ( [e2e-predictor] OSError, [e2e-predictor] NewConnectionError, [e2e-predictor] TimeoutError, [e2e-predictor] SSLError, [e2e-predictor] HTTPException, [e2e-predictor] ), [e2e-predictor] ) and (conn and conn.proxy and not conn.has_connected_to_proxy): [e2e-predictor] new_e = _wrap_proxy_error(new_e, conn.proxy.scheme) [e2e-predictor] elif isinstance(new_e, (OSError, HTTPException)): [e2e-predictor] new_e = ProtocolError("Connection aborted.", new_e) [e2e-predictor] [e2e-predictor] retries = retries.increment( [e2e-predictor] method, url, error=new_e, _pool=self, _stacktrace=sys.exc_info()[2] [e2e-predictor] ) [e2e-predictor] retries.sleep() [e2e-predictor] [e2e-predictor] # Keep track of the error for the retry warning. [e2e-predictor] err = e [e2e-predictor] [e2e-predictor] finally: [e2e-predictor] if not clean_exit: [e2e-predictor] # We hit some kind of exception, handled or otherwise. We need [e2e-predictor] # to throw the connection away unless explicitly told not to. [e2e-predictor] # Close the connection, set the variable to None, and make sure [e2e-predictor] # we put the None back in the pool to avoid leaking it. [e2e-predictor] if conn: [e2e-predictor] conn.close() [e2e-predictor] conn = None [e2e-predictor] release_this_conn = True [e2e-predictor] [e2e-predictor] if release_this_conn: [e2e-predictor] # Put the connection back to be reused. If the connection is [e2e-predictor] # expired then it will be None, which will get replaced with a [e2e-predictor] # fresh connection during _get_conn. 
[e2e-predictor] self._put_conn(conn) [e2e-predictor] [e2e-predictor] if not conn: [e2e-predictor] # Try again [e2e-predictor] log.warning( [e2e-predictor] "Retrying (%r) after connection broken by '%r': %s", retries, err, url [e2e-predictor] ) [e2e-predictor] > return self.urlopen( [e2e-predictor] method, [e2e-predictor] url, [e2e-predictor] body, [e2e-predictor] headers, [e2e-predictor] retries, [e2e-predictor] redirect, [e2e-predictor] assert_same_host, [e2e-predictor] timeout=timeout, [e2e-predictor] pool_timeout=pool_timeout, [e2e-predictor] release_conn=release_conn, [e2e-predictor] chunked=chunked, [e2e-predictor] body_pos=body_pos, [e2e-predictor] preload_content=preload_content, [e2e-predictor] decode_content=decode_content, [e2e-predictor] **response_kw, [e2e-predictor] ) [e2e-predictor] [e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:871: [e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-predictor] [e2e-predictor] self = [e2e-predictor] method = 'PATCH' [e2e-predictor] url = '/api/v1/namespaces/kserve-ci-e2e-test/secrets/storage-config' [e2e-predictor] body = '{"data": {"localTLSS3Serving": "eyJ0eXBlIjogInMzIiwgImFjY2Vzc19rZXlfaWQiOiAiczNhZG1pbiIsICJzZWNyZXRfYWNjZXNzX2tleSI6I...XMtc291dGgiLCAiYW5vbnltb3VzIjogIkZhbHNlIiwgImNhYnVuZGxlX2NvbmZpZ21hcCI6ICJvZGgta3NlcnZlLWN1c3RvbS1jYS1idW5kbGUifQ=="}}' [e2e-predictor] headers = HTTPHeaderDict({'Accept': 'application/json', 'Content-Type': 'application/strategic-merge-patch+json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}) [e2e-predictor] retries = Retry(total=0, connect=None, read=None, redirect=None, status=None) [e2e-predictor] redirect = False, assert_same_host = False, timeout = None, pool_timeout = None [e2e-predictor] release_conn = True, chunked = False, body_pos = None, preload_content = True [e2e-predictor] decode_content = True, response_kw = {} [e2e-predictor] parsed_url = Url(scheme=None, 
auth=None, host=None, port=None, path='/api/v1/namespaces/kserve-ci-e2e-test/secrets/storage-config', query=None, fragment=None) [e2e-predictor] destination_scheme = None, conn = None, release_this_conn = True [e2e-predictor] http_tunnel_required = False, err = None, clean_exit = False [e2e-predictor] [e2e-predictor] def urlopen( # type: ignore[override] [e2e-predictor] self, [e2e-predictor] method: str, [e2e-predictor] url: str, [e2e-predictor] body: _TYPE_BODY | None = None, [e2e-predictor] headers: typing.Mapping[str, str] | None = None, [e2e-predictor] retries: Retry | bool | int | None = None, [e2e-predictor] redirect: bool = True, [e2e-predictor] assert_same_host: bool = True, [e2e-predictor] timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, [e2e-predictor] pool_timeout: int | None = None, [e2e-predictor] release_conn: bool | None = None, [e2e-predictor] chunked: bool = False, [e2e-predictor] body_pos: _TYPE_BODY_POSITION | None = None, [e2e-predictor] preload_content: bool = True, [e2e-predictor] decode_content: bool = True, [e2e-predictor] **response_kw: typing.Any, [e2e-predictor] ) -> BaseHTTPResponse: [e2e-predictor] """ [e2e-predictor] Get a connection from the pool and perform an HTTP request. This is the [e2e-predictor] lowest level call for making a request, so you'll need to specify all [e2e-predictor] the raw details. [e2e-predictor] [e2e-predictor] .. note:: [e2e-predictor] [e2e-predictor] More commonly, it's appropriate to use a convenience method [e2e-predictor] such as :meth:`request`. [e2e-predictor] [e2e-predictor] .. note:: [e2e-predictor] [e2e-predictor] `release_conn` will only behave as expected if [e2e-predictor] `preload_content=False` because we want to make [e2e-predictor] `preload_content=False` the default behaviour someday soon without [e2e-predictor] breaking backwards compatibility. [e2e-predictor] [e2e-predictor] :param method: [e2e-predictor] HTTP request method (such as GET, POST, PUT, etc.) 
[e2e-predictor] [e2e-predictor] :param url: [e2e-predictor] The URL to perform the request on. [e2e-predictor] [e2e-predictor] :param body: [e2e-predictor] Data to send in the request body, either :class:`str`, :class:`bytes`, [e2e-predictor] an iterable of :class:`str`/:class:`bytes`, or a file-like object. [e2e-predictor] [e2e-predictor] :param headers: [e2e-predictor] Dictionary of custom headers to send, such as User-Agent, [e2e-predictor] If-None-Match, etc. If None, pool headers are used. If provided, [e2e-predictor] these headers completely replace any pool-specific headers. [e2e-predictor] [e2e-predictor] :param retries: [e2e-predictor] Configure the number of retries to allow before raising a [e2e-predictor] :class:`~urllib3.exceptions.MaxRetryError` exception. [e2e-predictor] [e2e-predictor] If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a [e2e-predictor] :class:`~urllib3.util.retry.Retry` object for fine-grained control [e2e-predictor] over different types of retries. [e2e-predictor] Pass an integer number to retry connection errors that many times, [e2e-predictor] but no other types of errors. Pass zero to never retry. [e2e-predictor] [e2e-predictor] If ``False``, then retries are disabled and any exception is raised [e2e-predictor] immediately. Also, instead of raising a MaxRetryError on redirects, [e2e-predictor] the redirect response will be returned. [e2e-predictor] [e2e-predictor] :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. [e2e-predictor] [e2e-predictor] :param redirect: [e2e-predictor] If True, automatically handle redirects (status codes 301, 302, [e2e-predictor] 303, 307, 308). Each redirect counts as a retry. Disabling retries [e2e-predictor] will disable redirect, too. [e2e-predictor] [e2e-predictor] :param assert_same_host: [e2e-predictor] If ``True``, will make sure that the host of the pool requests is [e2e-predictor] consistent else will raise HostChangedError. 
When ``False``, you can [e2e-predictor] use the pool on an HTTP proxy and request foreign hosts. [e2e-predictor] [e2e-predictor] :param timeout: [e2e-predictor] If specified, overrides the default timeout for this one [e2e-predictor] request. It may be a float (in seconds) or an instance of [e2e-predictor] :class:`urllib3.util.Timeout`. [e2e-predictor] [e2e-predictor] :param pool_timeout: [e2e-predictor] If set and the pool is set to block=True, then this method will [e2e-predictor] block for ``pool_timeout`` seconds and raise EmptyPoolError if no [e2e-predictor] connection is available within the time period. [e2e-predictor] [e2e-predictor] :param bool preload_content: [e2e-predictor] If True, the response's body will be preloaded into memory. [e2e-predictor] [e2e-predictor] :param bool decode_content: [e2e-predictor] If True, will attempt to decode the body based on the [e2e-predictor] 'content-encoding' header. [e2e-predictor] [e2e-predictor] :param release_conn: [e2e-predictor] If False, then the urlopen call will not release the connection [e2e-predictor] back into the pool once a response is received (but will release if [e2e-predictor] you read the entire contents of the response such as when [e2e-predictor] `preload_content=True`). This is useful if you're not preloading [e2e-predictor] the response's content immediately. You will need to call [e2e-predictor] ``r.release_conn()`` on the response ``r`` to return the connection [e2e-predictor] back into the pool. If None, it takes the value of ``preload_content`` [e2e-predictor] which defaults to ``True``. [e2e-predictor] [e2e-predictor] :param bool chunked: [e2e-predictor] If True, urllib3 will send the body using chunked transfer [e2e-predictor] encoding. Otherwise, urllib3 will send the body using the standard [e2e-predictor] content-length form. Defaults to False. 
[e2e-predictor] [e2e-predictor] :param int body_pos: [e2e-predictor] Position to seek to in file-like body in the event of a retry or [e2e-predictor] redirect. Typically this won't need to be set because urllib3 will [e2e-predictor] auto-populate the value when needed. [e2e-predictor] """ [e2e-predictor] parsed_url = parse_url(url) [e2e-predictor] destination_scheme = parsed_url.scheme [e2e-predictor] [e2e-predictor] if headers is None: [e2e-predictor] headers = self.headers [e2e-predictor] [e2e-predictor] if not isinstance(retries, Retry): [e2e-predictor] retries = Retry.from_int(retries, redirect=redirect, default=self.retries) [e2e-predictor] [e2e-predictor] if release_conn is None: [e2e-predictor] release_conn = preload_content [e2e-predictor] [e2e-predictor] # Check host [e2e-predictor] if assert_same_host and not self.is_same_host(url): [e2e-predictor] raise HostChangedError(self, url, retries) [e2e-predictor] [e2e-predictor] # Ensure that the URL we're connecting to is properly encoded [e2e-predictor] if url.startswith("/"): [e2e-predictor] url = to_str(_encode_target(url)) [e2e-predictor] else: [e2e-predictor] url = to_str(parsed_url.url) [e2e-predictor] [e2e-predictor] conn = None [e2e-predictor] [e2e-predictor] # Track whether `conn` needs to be released before [e2e-predictor] # returning/raising/recursing. Update this variable if necessary, and [e2e-predictor] # leave `release_conn` constant throughout the function. That way, if [e2e-predictor] # the function recurses, the original value of `release_conn` will be [e2e-predictor] # passed down into the recursive call, and its value will be respected. [e2e-predictor] # [e2e-predictor] # See issue #651 [1] for details. 
[e2e-predictor] # [e2e-predictor] # [1] [e2e-predictor] release_this_conn = release_conn [e2e-predictor] [e2e-predictor] http_tunnel_required = connection_requires_http_tunnel( [e2e-predictor] self.proxy, self.proxy_config, destination_scheme [e2e-predictor] ) [e2e-predictor] [e2e-predictor] # Merge the proxy headers. Only done when not using HTTP CONNECT. We [e2e-predictor] # have to copy the headers dict so we can safely change it without those [e2e-predictor] # changes being reflected in anyone else's copy. [e2e-predictor] if not http_tunnel_required: [e2e-predictor] headers = headers.copy() # type: ignore[attr-defined] [e2e-predictor] headers.update(self.proxy_headers) # type: ignore[union-attr] [e2e-predictor] [e2e-predictor] # Must keep the exception bound to a separate variable or else Python 3 [e2e-predictor] # complains about UnboundLocalError. [e2e-predictor] err = None [e2e-predictor] [e2e-predictor] # Keep track of whether we cleanly exited the except block. This [e2e-predictor] # ensures we do proper cleanup in finally. [e2e-predictor] clean_exit = False [e2e-predictor] [e2e-predictor] # Rewind body position, if needed. Record current position [e2e-predictor] # for future rewinds in the event of a redirect/retry. [e2e-predictor] body_pos = set_file_position(body, body_pos) [e2e-predictor] [e2e-predictor] try: [e2e-predictor] # Request a connection from the queue. [e2e-predictor] timeout_obj = self._get_timeout(timeout) [e2e-predictor] conn = self._get_conn(timeout=pool_timeout) [e2e-predictor] [e2e-predictor] conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] [e2e-predictor] [e2e-predictor] # Is this a closed/new connection that requires CONNECT tunnelling? 
[e2e-predictor] if self.proxy is not None and http_tunnel_required and conn.is_closed: [e2e-predictor] try: [e2e-predictor] self._prepare_proxy(conn) [e2e-predictor] except (BaseSSLError, OSError, SocketTimeout) as e: [e2e-predictor] self._raise_timeout( [e2e-predictor] err=e, url=self.proxy.url, timeout_value=conn.timeout [e2e-predictor] ) [e2e-predictor] raise [e2e-predictor] [e2e-predictor] # If we're going to release the connection in ``finally:``, then [e2e-predictor] # the response doesn't need to know about the connection. Otherwise [e2e-predictor] # it will also try to release it and we'll have a double-release [e2e-predictor] # mess. [e2e-predictor] response_conn = conn if not release_conn else None [e2e-predictor] [e2e-predictor] # Make the request on the HTTPConnection object [e2e-predictor] response = self._make_request( [e2e-predictor] conn, [e2e-predictor] method, [e2e-predictor] url, [e2e-predictor] timeout=timeout_obj, [e2e-predictor] body=body, [e2e-predictor] headers=headers, [e2e-predictor] chunked=chunked, [e2e-predictor] retries=retries, [e2e-predictor] response_conn=response_conn, [e2e-predictor] preload_content=preload_content, [e2e-predictor] decode_content=decode_content, [e2e-predictor] **response_kw, [e2e-predictor] ) [e2e-predictor] [e2e-predictor] # Everything went great! [e2e-predictor] clean_exit = True [e2e-predictor] [e2e-predictor] except EmptyPoolError: [e2e-predictor] # Didn't get a connection from the pool, no need to clean up [e2e-predictor] clean_exit = True [e2e-predictor] release_this_conn = False [e2e-predictor] raise [e2e-predictor] [e2e-predictor] except ( [e2e-predictor] TimeoutError, [e2e-predictor] HTTPException, [e2e-predictor] OSError, [e2e-predictor] ProtocolError, [e2e-predictor] BaseSSLError, [e2e-predictor] SSLError, [e2e-predictor] CertificateError, [e2e-predictor] ProxyError, [e2e-predictor] ) as e: [e2e-predictor] # Discard the connection for these exceptions. 
It will be
[e2e-predictor]     # replaced during the next _get_conn() call.
[e2e-predictor]     clean_exit = False
[e2e-predictor]     new_e: Exception = e
[e2e-predictor]     if isinstance(e, (BaseSSLError, CertificateError)):
[e2e-predictor]         new_e = SSLError(e)
[e2e-predictor]     if isinstance(
[e2e-predictor]         new_e,
[e2e-predictor]         (
[e2e-predictor]             OSError,
[e2e-predictor]             NewConnectionError,
[e2e-predictor]             TimeoutError,
[e2e-predictor]             SSLError,
[e2e-predictor]             HTTPException,
[e2e-predictor]         ),
[e2e-predictor]     ) and (conn and conn.proxy and not conn.has_connected_to_proxy):
[e2e-predictor]         new_e = _wrap_proxy_error(new_e, conn.proxy.scheme)
[e2e-predictor]     elif isinstance(new_e, (OSError, HTTPException)):
[e2e-predictor]         new_e = ProtocolError("Connection aborted.", new_e)
[e2e-predictor]
[e2e-predictor] >   retries = retries.increment(
[e2e-predictor]         method, url, error=new_e, _pool=self, _stacktrace=sys.exc_info()[2]
[e2e-predictor]     )
[e2e-predictor]
[e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/connectionpool.py:841:
[e2e-predictor] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
[e2e-predictor]
[e2e-predictor] self = Retry(total=0, connect=None, read=None, redirect=None, status=None)
[e2e-predictor] method = 'PATCH'
[e2e-predictor] url = '/api/v1/namespaces/kserve-ci-e2e-test/secrets/storage-config'
[e2e-predictor] response = None
[e2e-predictor] error = NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.c...a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")
[e2e-predictor] _pool = 
[e2e-predictor] _stacktrace = 
[e2e-predictor]
[e2e-predictor] def increment(
[e2e-predictor]     self,
[e2e-predictor]     method: str | None = None,
[e2e-predictor]     url: str | None = None,
[e2e-predictor]     response: BaseHTTPResponse | None = None,
[e2e-predictor]     error: Exception | None = None,
[e2e-predictor]     _pool: ConnectionPool | None = None,
[e2e-predictor]     _stacktrace: TracebackType | None = None,
[e2e-predictor] ) -> Self:
[e2e-predictor]     """Return a new Retry object with incremented retry counters.
[e2e-predictor]
[e2e-predictor]     :param response: A response object, or None, if the server did not
[e2e-predictor]         return a response.
[e2e-predictor]     :type response: :class:`~urllib3.response.BaseHTTPResponse`
[e2e-predictor]     :param Exception error: An error encountered during the request, or
[e2e-predictor]         None if the response was received successfully.
[e2e-predictor]
[e2e-predictor]     :return: A new ``Retry`` object.
[e2e-predictor]     """
[e2e-predictor]     if self.total is False and error:
[e2e-predictor]         # Disabled, indicate to re-raise the error.
[e2e-predictor]         raise reraise(type(error), error, _stacktrace)
[e2e-predictor]
[e2e-predictor]     total = self.total
[e2e-predictor]     if total is not None:
[e2e-predictor]         total -= 1
[e2e-predictor]
[e2e-predictor]     connect = self.connect
[e2e-predictor]     read = self.read
[e2e-predictor]     redirect = self.redirect
[e2e-predictor]     status_count = self.status
[e2e-predictor]     other = self.other
[e2e-predictor]     cause = "unknown"
[e2e-predictor]     status = None
[e2e-predictor]     redirect_location = None
[e2e-predictor]
[e2e-predictor]     if error and self._is_connection_error(error):
[e2e-predictor]         # Connect retry?
[e2e-predictor]         if connect is False:
[e2e-predictor]             raise reraise(type(error), error, _stacktrace)
[e2e-predictor]         elif connect is not None:
[e2e-predictor]             connect -= 1
[e2e-predictor]
[e2e-predictor]     elif error and self._is_read_error(error):
[e2e-predictor]         # Read retry?
[e2e-predictor]         if read is False or method is None or not self._is_method_retryable(method):
[e2e-predictor]             raise reraise(type(error), error, _stacktrace)
[e2e-predictor]         elif read is not None:
[e2e-predictor]             read -= 1
[e2e-predictor]
[e2e-predictor]     elif error:
[e2e-predictor]         # Other retry?
[e2e-predictor]         if other is not None:
[e2e-predictor]             other -= 1
[e2e-predictor]
[e2e-predictor]     elif response and response.get_redirect_location():
[e2e-predictor]         # Redirect retry?
[e2e-predictor]         if redirect is not None:
[e2e-predictor]             redirect -= 1
[e2e-predictor]         cause = "too many redirects"
[e2e-predictor]         response_redirect_location = response.get_redirect_location()
[e2e-predictor]         if response_redirect_location:
[e2e-predictor]             redirect_location = response_redirect_location
[e2e-predictor]         status = response.status
[e2e-predictor]
[e2e-predictor]     else:
[e2e-predictor]         # Incrementing because of a server error like a 500 in
[e2e-predictor]         # status_forcelist and the given method is in the allowed_methods
[e2e-predictor]         cause = ResponseError.GENERIC_ERROR
[e2e-predictor]         if response and response.status:
[e2e-predictor]             if status_count is not None:
[e2e-predictor]                 status_count -= 1
[e2e-predictor]             cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
[e2e-predictor]             status = response.status
[e2e-predictor]
[e2e-predictor]     history = self.history + (
[e2e-predictor]         RequestHistory(method, url, error, status, redirect_location),
[e2e-predictor]     )
[e2e-predictor]
[e2e-predictor]     new_retry = self.new(
[e2e-predictor]         total=total,
[e2e-predictor]         connect=connect,
[e2e-predictor]         read=read,
[e2e-predictor]         redirect=redirect,
[e2e-predictor]         status=status_count,
[e2e-predictor]         other=other,
[e2e-predictor]         history=history,
[e2e-predictor]     )
[e2e-predictor]
[e2e-predictor]     if new_retry.is_exhausted():
[e2e-predictor]         reason = error or ResponseError(cause)
[e2e-predictor] >       raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
[e2e-predictor] E   urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Max retries exceeded with url: /api/v1/namespaces/kserve-ci-e2e-test/secrets/storage-config (Caused by NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)"))
[e2e-predictor]
[e2e-predictor] ../../python/kserve/.venv/lib64/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError
[e2e-predictor] ------------------------------ Captured log call -------------------------------
[e2e-predictor] WARNING  urllib3.connectionpool:connectionpool.py:868 Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")': /api/v1/namespaces/kserve-ci-e2e-test/secrets/storage-config
[e2e-predictor] WARNING  urllib3.connectionpool:connectionpool.py:868 Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")': /api/v1/namespaces/kserve-ci-e2e-test/secrets/storage-config
[e2e-predictor] WARNING  urllib3.connectionpool:connectionpool.py:868 Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NameResolutionError("HTTPSConnection(host='a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com', port=6443): Failed to resolve 'a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com' ([Errno -2] Name or service not known)")': /api/v1/namespaces/kserve-ci-e2e-test/secrets/storage-config
[e2e-predictor] =============================== warnings summary ===============================
[e2e-predictor] llmisvc/test_llm_inference_service.py:151
[e2e-predictor]   /workspace/source/test/e2e/llmisvc/test_llm_inference_service.py:151: PytestUnknownMarkWarning: Unknown pytest.mark.custom_gateway - is this a typo? You can register custom marks to avoid this warning - for details, see https://docs.pytest.org/en/stable/how-to/mark.html
[e2e-predictor]     pytest.mark.custom_gateway,
[e2e-predictor]
[e2e-predictor] llmisvc/test_llm_inference_service.py:200
[e2e-predictor]   /workspace/source/test/e2e/llmisvc/test_llm_inference_service.py:200: PytestUnknownMarkWarning: Unknown pytest.mark.custom_gateway - is this a typo? You can register custom marks to avoid this warning - for details, see https://docs.pytest.org/en/stable/how-to/mark.html
[e2e-predictor]     pytest.mark.custom_gateway,
[e2e-predictor]
[e2e-predictor] llmisvc/test_llm_inference_service.py:252
[e2e-predictor]   /workspace/source/test/e2e/llmisvc/test_llm_inference_service.py:252: PytestUnknownMarkWarning: Unknown pytest.mark.custom_gateway - is this a typo? You can register custom marks to avoid this warning - for details, see https://docs.pytest.org/en/stable/how-to/mark.html
[e2e-predictor]     pytest.mark.custom_gateway,
[e2e-predictor]
[e2e-predictor] llmisvc/test_llm_inference_service.py:299
[e2e-predictor]   /workspace/source/test/e2e/llmisvc/test_llm_inference_service.py:299: PytestUnknownMarkWarning: Unknown pytest.mark.cluster_gpu - is this a typo? You can register custom marks to avoid this warning - for details, see https://docs.pytest.org/en/stable/how-to/mark.html
[e2e-predictor]     pytest.mark.cluster_gpu,
[e2e-predictor]
[e2e-predictor] llmisvc/test_llm_inference_service.py:316
[e2e-predictor]   /workspace/source/test/e2e/llmisvc/test_llm_inference_service.py:316: PytestUnknownMarkWarning: Unknown pytest.mark.no_scheduler - is this a typo? You can register custom marks to avoid this warning - for details, see https://docs.pytest.org/en/stable/how-to/mark.html
[e2e-predictor]     pytest.mark.no_scheduler,
[e2e-predictor]
[e2e-predictor] llmisvc/test_llm_inference_service.py:329
[e2e-predictor]   /workspace/source/test/e2e/llmisvc/test_llm_inference_service.py:329: PytestUnknownMarkWarning: Unknown pytest.mark.cluster_multi_node - is this a typo? You can register custom marks to avoid this warning - for details, see https://docs.pytest.org/en/stable/how-to/mark.html
[e2e-predictor]     marks=[pytest.mark.cluster_cpu, pytest.mark.cluster_multi_node],
[e2e-predictor]
[e2e-predictor] -- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
[e2e-predictor] =========================== short test summary info ============================
[e2e-predictor] FAILED batcher/test_batcher.py::test_batcher - kubernetes.client.exceptions.A...
[e2e-predictor] FAILED batcher/test_batcher_custom_port.py::test_batcher_custom_port - kubern...
[e2e-predictor] FAILED logger/test_logger.py::test_kserve_logger - RuntimeError: Timeout to s...
[e2e-predictor] FAILED predictor/test_lightgbm.py::test_lightgbm_kserve - RuntimeError: Timeo...
[e2e-predictor] FAILED predictor/test_lightgbm.py::test_lightgbm_runtime_kserve - RuntimeErro...
[e2e-predictor] FAILED predictor/test_lightgbm.py::test_lightgbm_v2_runtime_mlserver - Runtim...
[e2e-predictor] FAILED predictor/test_lightgbm.py::test_lightgbm_v2_kserve - RuntimeError: Ti...
[e2e-predictor] FAILED predictor/test_mlflow.py::test_mlflow_v2_runtime_kserve - RuntimeError...
[e2e-predictor] FAILED predictor/test_multi_container_probing.py::test_multi_container_probing
[e2e-predictor] FAILED predictor/test_paddle.py::test_paddle - RuntimeError: Timeout to start...
[e2e-predictor] FAILED predictor/test_paddle.py::test_paddle_runtime - urllib3.exceptions.Max...
[e2e-predictor] FAILED predictor/test_paddle.py::test_paddle_v2_kserve - urllib3.exceptions.M...
[e2e-predictor] FAILED predictor/test_pmml.py::test_pmml_kserve - urllib3.exceptions.MaxRetry...
[e2e-predictor] FAILED predictor/test_pmml.py::test_pmml_runtime_kserve - urllib3.exceptions....
[e2e-predictor] FAILED predictor/test_pmml.py::test_pmml_v2_kserve - urllib3.exceptions.MaxRe...
[e2e-predictor] FAILED predictor/test_pod_watch.py::test_event_storm_prevention_init_container_isolation
[e2e-predictor] FAILED predictor/test_pod_watch.py::test_quick_reconciliation_on_init_container_failure
[e2e-predictor] FAILED predictor/test_predictive.py::test_predictive_sklearn_v1 - urllib3.exc...
[e2e-predictor] FAILED predictor/test_predictive.py::test_predictive_xgboost_v1 - urllib3.exc...
[e2e-predictor] FAILED predictor/test_predictive.py::test_predictive_lightgbm_v1 - urllib3.ex...
[e2e-predictor] FAILED predictor/test_predictive.py::test_predictive_sklearn_v2 - urllib3.exc...
[e2e-predictor] FAILED predictor/test_predictive.py::test_predictive_xgboost_v2 - urllib3.exc...
[e2e-predictor] FAILED predictor/test_predictive.py::test_predictive_lightgbm_v2 - urllib3.ex...
[e2e-predictor] FAILED predictor/test_scheduler_name.py::test_scheduler_name - urllib3.except...
[e2e-predictor] FAILED predictor/test_sklearn.py::test_sklearn_kserve - urllib3.exceptions.Ma...
[e2e-predictor] FAILED predictor/test_sklearn.py::test_sklearn_v2_mlserver - urllib3.exceptio...
[e2e-predictor] FAILED predictor/test_sklearn.py::test_sklearn_runtime_kserve - urllib3.excep...
[e2e-predictor] FAILED predictor/test_sklearn.py::test_sklearn_v2_runtime_mlserver - urllib3....
[e2e-predictor] FAILED predictor/test_sklearn.py::test_sklearn_v2 - urllib3.exceptions.MaxRet...
[e2e-predictor] FAILED predictor/test_sklearn.py::test_sklearn_v2_mixed - urllib3.exceptions....
[e2e-predictor] FAILED predictor/test_tensorflow.py::test_tensorflow_kserve - urllib3.excepti...
[e2e-predictor] FAILED predictor/test_tensorflow.py::test_tensorflow_runtime_kserve - urllib3...
[e2e-predictor] FAILED predictor/test_triton.py::test_triton - urllib3.exceptions.MaxRetryErr...
[e2e-predictor] FAILED predictor/test_xgboost.py::test_xgboost_kserve - urllib3.exceptions.Ma...
[e2e-predictor] FAILED predictor/test_xgboost.py::test_xgboost_v2_mlserver - urllib3.exceptio...
[e2e-predictor] FAILED predictor/test_xgboost.py::test_xgboost_single_model_file - urllib3.ex...
[e2e-predictor] FAILED predictor/test_xgboost.py::test_xgboost_runtime_kserve - urllib3.excep...
[e2e-predictor] FAILED predictor/test_xgboost.py::test_xgboost_v2_runtime_mlserver - urllib3....
[e2e-predictor] FAILED predictor/test_xgboost.py::test_xgboost_v2 - urllib3.exceptions.MaxRet...
[e2e-predictor] FAILED storagespec/test_s3_storagespec.py::test_sklearn_s3_storagespec_kserve
[e2e-predictor] FAILED storagespec/test_s3_tls_storagespec.py::test_s3_tls_serving_cert_storagespec_kserve
[e2e-predictor] ERROR storagespec/test_s3_tls_storagespec.py::test_s3_tls_global_custom_cert_storagespec_kserve
[e2e-predictor] ERROR storagespec/test_s3_tls_storagespec.py::test_s3_tls_custom_cert_storagespec_kserve
[e2e-predictor] ====== 41 failed, 15 skipped, 6 warnings, 2 errors in 6798.19s (1:53:18) =======
[must-gather] [must-gather ] OUT 2026-04-22T20:40:27.159729145Z Using must-gather plug-in image: quay.io/modh/must-gather:rhoai-2.24
[must-gather] When opening a support case, bugzilla, or issue please include the following summary data along with any other requested information:
[must-gather] error getting cluster version: Get "https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/config.openshift.io/v1/clusterversions/version": dial tcp: lookup a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com on 172.30.0.10:53: no such host
[must-gather] ClusterID:
[must-gather] ClientVersion: 4.21.5
[must-gather] ClusterVersion: Installing "" for :
[must-gather] error getting cluster operators: Get "https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/config.openshift.io/v1/clusteroperators": dial tcp: lookup a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com on 172.30.0.10:53: no such host
[must-gather] ClusterOperators:
[must-gather] clusteroperators are missing
[must-gather] 
[must-gather] 
[must-gather] 
[must-gather] 
[must-gather] Error running must-gather collection:
[must-gather] creating temp namespace: Post "https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/api/v1/namespaces": dial tcp: lookup a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com on 172.30.0.10:53: no such host
[must-gather] 
[must-gather] Falling back to `oc adm inspect clusterversion.v1.config.openshift.io,clusteroperators.v1.config.openshift.io` to collect basic cluster types.
[must-gather] E0422 20:40:27.185622 23 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/api?timeout=32s\": dial tcp: lookup a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com on 172.30.0.10:53: no such host"
[must-gather] E0422 20:40:27.190151 23 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/api?timeout=32s\": dial tcp: lookup a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com on 172.30.0.10:53: no such host"
[must-gather] E0422 20:40:27.199042 23 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/api?timeout=32s\": dial tcp: lookup a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com on 172.30.0.10:53: no such host"
[must-gather] E0422 20:40:27.205867 23 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/api?timeout=32s\": dial tcp: lookup a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com on 172.30.0.10:53: no such host"
[must-gather] E0422 20:40:27.212106 23 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/api?timeout=32s\": dial tcp: lookup a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com on 172.30.0.10:53: no such host"
[must-gather] E0422 20:40:27.218354 23 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/api?timeout=32s\": dial tcp: lookup a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com on 172.30.0.10:53: no such host"
[must-gather] E0422 20:40:27.224163 23 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/api?timeout=32s\": dial tcp: lookup a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com on 172.30.0.10:53: no such host"
[must-gather] E0422 20:40:27.228767 23 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/api?timeout=32s\": dial tcp: lookup a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com on 172.30.0.10:53: no such host"
[must-gather] error completing cluster type inspection: error running backup collection: Get "https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/api?timeout=32s": dial tcp: lookup a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com on 172.30.0.10:53: no such host
[must-gather] Falling back to `oc adm inspect namespace/openshift-cluster-version` to collect basic cluster named resources.
[must-gather] E0422 20:40:27.235534 23 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/api?timeout=32s\": dial tcp: lookup a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com on 172.30.0.10:53: no such host"
[must-gather] E0422 20:40:27.241673 23 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/api?timeout=32s\": dial tcp: lookup a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com on 172.30.0.10:53: no such host"
[must-gather] E0422 20:40:27.246614 23 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/api?timeout=32s\": dial tcp: lookup a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com on 172.30.0.10:53: no such host"
[must-gather] E0422 20:40:27.251773 23 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/api?timeout=32s\": dial tcp: lookup a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com on 172.30.0.10:53: no such host"
[must-gather] error completing cluster named resource inspection: error running backup collection: Get "https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/api?timeout=32s": dial tcp: lookup a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com on 172.30.0.10:53: no such host
[must-gather] 
[must-gather] 
[must-gather] Reprinting Cluster State:
[must-gather] When opening a support case, bugzilla, or issue please include the following summary data along with any other requested information:
[must-gather] error getting cluster version: Get "https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/config.openshift.io/v1/clusterversions/version": dial tcp: lookup a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com on 172.30.0.10:53: no such host
[must-gather] ClusterID:
[must-gather] ClientVersion: 4.21.5
[must-gather] ClusterVersion: Installing "" for :
[must-gather] error getting cluster operators: Get "https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/apis/config.openshift.io/v1/clusteroperators": dial tcp: lookup a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com on 172.30.0.10:53: no such host
[must-gather] ClusterOperators:
[must-gather] clusteroperators are missing
[must-gather] 
[must-gather] 
[must-gather] error: creating temp namespace: Post "https://a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com:6443/api/v1/namespaces": dial tcp: lookup a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com on 172.30.0.10:53: no such host
[git-push-artifacts] WORK_DIR: /workspace/odh-ci-artifacts
[git-push-artifacts] REPO_PATH: opendatahub-io/odh-build-metadata
[git-push-artifacts] REPO_BRANCH: ci-artifacts
[git-push-artifacts] SPARSE_FILE_PATH: test-artifacts/docs
[git-push-artifacts] SOURCE_PATH: /workspace/artifacts-dir
[git-push-artifacts] DEST_PATH: test-artifacts/kserve-group-test-kw2kn
[git-push-artifacts] ALWAYS_PASS: false
[git-push-artifacts] configuring gh token
[git-push-artifacts] taking github token from Konflux bot
[git-push-artifacts] Initialized empty Git repository in /workspace/odh-ci-artifacts/.git/
[git-push-artifacts] Using partial fetch with sparse checkout for: test-artifacts/docs
[git-push-artifacts] From https://github.com/opendatahub-io/odh-build-metadata
[git-push-artifacts]  * branch            ci-artifacts -> FETCH_HEAD
[git-push-artifacts]  * [new branch]      ci-artifacts -> origin/ci-artifacts
[git-push-artifacts] Already on 'ci-artifacts'
[git-push-artifacts] branch 'ci-artifacts' set up to track 'origin/ci-artifacts'.
[git-push-artifacts] TASK_NAME=kserve-group-test-kw2kn-e2e-predictor
[git-push-artifacts] PIPELINERUN_NAME=kserve-group-test-kw2kn
[git-push-artifacts] From https://github.com/opendatahub-io/odh-build-metadata
[git-push-artifacts]  * branch            ci-artifacts -> FETCH_HEAD
[git-push-artifacts] Already up to date.
[git-push-artifacts] -rw-r--r--. 1 root 1001540000 838 Apr 22 20:40 /workspace/odh-ci-artifacts/test-artifacts/kserve-group-test-kw2kn/e2e-predictor.tar.gz
[git-push-artifacts] [ci-artifacts acece40] Updating CI Artifacts in e2e-predictor
[git-push-artifacts]  1 file changed, 0 insertions(+), 0 deletions(-)
[git-push-artifacts]  create mode 100644 test-artifacts/kserve-group-test-kw2kn/e2e-predictor.tar.gz
[git-push-artifacts] From https://github.com/opendatahub-io/odh-build-metadata
[git-push-artifacts]  * branch            ci-artifacts -> FETCH_HEAD
[git-push-artifacts] Already up to date.
[git-push-artifacts] To https://github.com/opendatahub-io/odh-build-metadata.git
[git-push-artifacts]    a937eae..acece40  ci-artifacts -> ci-artifacts
[fail-if-needed] Failing pipeline because deploy-and-e2e step failed
container step-fail-if-needed has failed : [{"key":"StartedAt","value":"2026-04-22T20:40:29.988Z","type":3}]
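Triage note: every `MaxRetryError`, `Retrying (...)` warning, and must-gather `no such host` error above shares one root cause: the API-server ELB hostname stopped resolving mid-run (`[Errno -2] Name or service not known`), which typically means the ephemeral cluster or its load balancer was deprovisioned before the tests finished. A minimal sketch of the check, assuming nothing beyond the Python standard library (the `resolves` helper is hypothetical, not part of the test suite):

```python
import socket

def resolves(host: str, port: int = 6443) -> bool:
    """Return True if `host` has a DNS record from this environment, else False."""
    try:
        socket.getaddrinfo(host, port)
        return True
    except socket.gaierror:
        # gaierror with [Errno -2] "Name or service not known" is exactly
        # the failure urllib3 wraps as NameResolutionError in the log above.
        return False

# The API-server hostname taken from the log; once the cluster's ELB is
# torn down, this lookup fails the same way the e2e tests did.
api_host = "a26447624add44c5ea85e8f759399a3a-fed6f05dddc983fc.elb.us-east-1.amazonaws.com"
print(resolves(api_host))
```

Running such a check before (and periodically during) the e2e step would distinguish "cluster unreachable" infrastructure failures from genuine test failures in the summary.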