Time | Namespace | Component | RelatedObject | Reason | Message
- | kserve-ci-e2e-test | - | isvc-sklearn-mcp-predictor-5f8b5bfcd6-nw5gq | Scheduled | Successfully assigned kserve-ci-e2e-test/isvc-sklearn-mcp-predictor-5f8b5bfcd6-nw5gq to ip-10-0-139-128.ec2.internal
- | kserve-ci-e2e-test | - | isvc-paddle-runtime-predictor-7f4d4f9dc8-9thsb | Scheduled | Successfully assigned kserve-ci-e2e-test/isvc-paddle-runtime-predictor-7f4d4f9dc8-9thsb to ip-10-0-139-128.ec2.internal
- | kserve-ci-e2e-test | - | isvc-sklearn-v2-runtime-predictor-6d84c876f4-8ctdr | Scheduled | Successfully assigned kserve-ci-e2e-test/isvc-sklearn-v2-runtime-predictor-6d84c876f4-8ctdr to ip-10-0-139-128.ec2.internal
- | kserve-ci-e2e-test | - | isvc-predictive-sklearn-v2-predictor-b5d4f6b79-5l4f9 | Scheduled | Successfully assigned kserve-ci-e2e-test/isvc-predictive-sklearn-v2-predictor-b5d4f6b79-5l4f9 to ip-10-0-143-206.ec2.internal
- | kserve-ci-e2e-test | - | isvc-sklearn-v2-mixed-predictor-86d7579fd6-87x9p | Scheduled | Successfully assigned kserve-ci-e2e-test/isvc-sklearn-v2-mixed-predictor-86d7579fd6-87x9p to ip-10-0-139-128.ec2.internal
- | kserve-ci-e2e-test | - | isvc-pmml-runtime-predictor-67bc544947-2zztw | Scheduled | Successfully assigned kserve-ci-e2e-test/isvc-pmml-runtime-predictor-67bc544947-2zztw to ip-10-0-143-206.ec2.internal
- | kserve-ci-e2e-test | - | isvc-sklearn-batcher-custom-predictor-7c59ff5d-hd5lz | Scheduled | Successfully assigned kserve-ci-e2e-test/isvc-sklearn-batcher-custom-predictor-7c59ff5d-hd5lz to ip-10-0-139-128.ec2.internal
- | kserve-ci-e2e-test | - | isvc-secondary-77d500-predictor-7cbd677c59-wpc97 | Scheduled | Successfully assigned kserve-ci-e2e-test/isvc-secondary-77d500-predictor-7cbd677c59-wpc97 to ip-10-0-139-128.ec2.internal
- | kserve-ci-e2e-test | - | isvc-primary-77d500-predictor-8d9ffc784-l4s82 | Scheduled | Successfully assigned kserve-ci-e2e-test/isvc-primary-77d500-predictor-8d9ffc784-l4s82 to ip-10-0-139-128.ec2.internal
- | kserve-ci-e2e-test | - | isvc-sklearn-v2-predictor-64fcb8589f-wgmk5 | Scheduled | Successfully assigned kserve-ci-e2e-test/isvc-sklearn-v2-predictor-64fcb8589f-wgmk5 to ip-10-0-139-128.ec2.internal
- | kserve-ci-e2e-test | - | isvc-lightgbm-v2-kserve-predictor-559bf6989-t4pcr | Scheduled | Successfully assigned kserve-ci-e2e-test/isvc-lightgbm-v2-kserve-predictor-559bf6989-t4pcr to ip-10-0-139-128.ec2.internal
- | kserve-ci-e2e-test | - | isvc-lightgbm-predictor-bdf964bd-ltxg9 | Scheduled | Successfully assigned kserve-ci-e2e-test/isvc-lightgbm-predictor-bdf964bd-ltxg9 to ip-10-0-139-128.ec2.internal
- | kserve-ci-e2e-test | - | isvc-predictive-sklearn-predictor-cd7c759c9-54b7w | Scheduled | Successfully assigned kserve-ci-e2e-test/isvc-predictive-sklearn-predictor-cd7c759c9-54b7w to ip-10-0-143-206.ec2.internal
- | kserve-ci-e2e-test | - | isvc-init-fail-19fe47-predictor-c5688fb5c-bh8pm | Scheduled | Successfully assigned kserve-ci-e2e-test/isvc-init-fail-19fe47-predictor-c5688fb5c-bh8pm to ip-10-0-139-128.ec2.internal
- | kserve-ci-e2e-test | - | isvc-mlflow-v2-runtime-predictor-5fdb47d546-g52mx | Scheduled | Successfully assigned kserve-ci-e2e-test/isvc-mlflow-v2-runtime-predictor-5fdb47d546-g52mx to ip-10-0-139-128.ec2.internal
- | kserve-ci-e2e-test | - | isvc-paddle-predictor-6b8b7cfb4b-ts72q | Scheduled | Successfully assigned kserve-ci-e2e-test/isvc-paddle-predictor-6b8b7cfb4b-ts72q to ip-10-0-139-128.ec2.internal
- | kserve-ci-e2e-test | - | isvc-predictive-xgboost-predictor-7ff98fd74d-x6gsl | Scheduled | Successfully assigned kserve-ci-e2e-test/isvc-predictive-xgboost-predictor-7ff98fd74d-x6gsl to ip-10-0-143-206.ec2.internal
- | kserve-ci-e2e-test | - | isvc-pmml-predictor-8bb578669-zzb69 | Scheduled | Successfully assigned kserve-ci-e2e-test/isvc-pmml-predictor-8bb578669-zzb69 to ip-10-0-139-128.ec2.internal
- | kserve-ci-e2e-test | - | isvc-pmml-v2-kserve-predictor-6578f8ffc7-cx7b9 | Scheduled | Successfully assigned kserve-ci-e2e-test/isvc-pmml-v2-kserve-predictor-6578f8ffc7-cx7b9 to ip-10-0-143-206.ec2.internal
- | kserve-ci-e2e-test | - | isvc-logger-predictor-7d4db54646-dlmkf | Scheduled | Successfully assigned kserve-ci-e2e-test/isvc-logger-predictor-7d4db54646-dlmkf to ip-10-0-139-128.ec2.internal
- | kserve-ci-e2e-test | - | message-dumper-predictor-c7d86bcbd-x8c6j | Scheduled | Successfully assigned kserve-ci-e2e-test/message-dumper-predictor-c7d86bcbd-x8c6j to ip-10-0-139-128.ec2.internal
- | kserve-ci-e2e-test | - | isvc-xgboost-v2-runtime-predictor-6dc5954dc-sp5z2 | Scheduled | Successfully assigned kserve-ci-e2e-test/isvc-xgboost-v2-runtime-predictor-6dc5954dc-sp5z2 to ip-10-0-139-128.ec2.internal
- | kserve-ci-e2e-test | - | isvc-xgboost-v2-predictor-6fcdd6977c-k9kkk | Scheduled | Successfully assigned kserve-ci-e2e-test/isvc-xgboost-v2-predictor-6fcdd6977c-k9kkk to ip-10-0-143-206.ec2.internal
- | kserve-ci-e2e-test | - | isvc-paddle-v2-kserve-predictor-7dbd59854-jphz7 | Scheduled | Successfully assigned kserve-ci-e2e-test/isvc-paddle-v2-kserve-predictor-7dbd59854-jphz7 to ip-10-0-139-128.ec2.internal
- | kserve-ci-e2e-test | - | isvc-xgboost-v2-mlserver-predictor-67d4bc6646-cq8m2 | Scheduled | Successfully assigned kserve-ci-e2e-test/isvc-xgboost-v2-mlserver-predictor-67d4bc6646-cq8m2 to ip-10-0-139-128.ec2.internal
- | kserve-ci-e2e-test | - | isvc-xgboost-runtime-predictor-779db84d9-w5ldr | Scheduled | Successfully assigned kserve-ci-e2e-test/isvc-xgboost-runtime-predictor-779db84d9-w5ldr to ip-10-0-143-206.ec2.internal
- | kserve-ci-e2e-test | - | isvc-xgboost-predictor-8689c4cfcc-w8f78 | Scheduled | Successfully assigned kserve-ci-e2e-test/isvc-xgboost-predictor-8689c4cfcc-w8f78 to ip-10-0-143-206.ec2.internal
- | kserve-ci-e2e-test | - | isvc-triton-predictor-84bb65d94b-jvtld | Scheduled | Successfully assigned kserve-ci-e2e-test/isvc-triton-predictor-84bb65d94b-jvtld to ip-10-0-143-206.ec2.internal
- | kserve-ci-e2e-test | - | isvc-tensorflow-runtime-predictor-8699d78cf-fh7l5 | Scheduled | Successfully assigned kserve-ci-e2e-test/isvc-tensorflow-runtime-predictor-8699d78cf-fh7l5 to ip-10-0-143-206.ec2.internal
- | kserve-ci-e2e-test | - | isvc-tensorflow-predictor-6756f669d7-wpjsw | Scheduled | Successfully assigned kserve-ci-e2e-test/isvc-tensorflow-predictor-6756f669d7-wpjsw to ip-10-0-143-206.ec2.internal
- | kserve-ci-e2e-test | - | sklearn-v2-mlserver-predictor-65d8664766-mw5mr | Scheduled | Successfully assigned kserve-ci-e2e-test/sklearn-v2-mlserver-predictor-65d8664766-mw5mr to ip-10-0-139-128.ec2.internal
- | kserve-ci-e2e-test | - | isvc-predictive-xgboost-v2-predictor-6577c65fd8-svdg6 | Scheduled | Successfully assigned kserve-ci-e2e-test/isvc-predictive-xgboost-v2-predictor-6577c65fd8-svdg6 to ip-10-0-143-206.ec2.internal
- | kserve-ci-e2e-test | - | isvc-sklearn-batcher-predictor-57fcff47c9-zrm52 | Scheduled | Successfully assigned kserve-ci-e2e-test/isvc-sklearn-batcher-predictor-57fcff47c9-zrm52 to ip-10-0-139-128.ec2.internal
- | kserve-ci-e2e-test | - | isvc-lightgbm-runtime-predictor-749c4f6d58-67zf9 | Scheduled | Successfully assigned kserve-ci-e2e-test/isvc-lightgbm-runtime-predictor-749c4f6d58-67zf9 to ip-10-0-139-128.ec2.internal
- | kserve-ci-e2e-test | - | isvc-sklearn-s3-tls-serving-pass-predictor-545f7c7957-66n4x | Scheduled | Successfully assigned kserve-ci-e2e-test/isvc-sklearn-s3-tls-serving-pass-predictor-545f7c7957-66n4x to ip-10-0-139-128.ec2.internal
- | kserve-ci-e2e-test | - | isvc-sklearn-s3-tls-serving-fail-predictor-7b6b7fcbd7-77h4d | Scheduled | Successfully assigned kserve-ci-e2e-test/isvc-sklearn-s3-tls-serving-fail-predictor-7b6b7fcbd7-77h4d to ip-10-0-139-128.ec2.internal
- | kserve-ci-e2e-test | - | isvc-sklearn-s3-tls-global-pass-predictor-764bc9bb86-5qmr4 | Scheduled | Successfully assigned kserve-ci-e2e-test/isvc-sklearn-s3-tls-global-pass-predictor-764bc9bb86-5qmr4 to ip-10-0-139-128.ec2.internal
- | kserve-ci-e2e-test | - | isvc-sklearn-s3-tls-global-fail-predictor-8cfdd8b8d-nx7ll | Scheduled | Successfully assigned kserve-ci-e2e-test/isvc-sklearn-s3-tls-global-fail-predictor-8cfdd8b8d-nx7ll to ip-10-0-139-128.ec2.internal
- | kserve-ci-e2e-test | - | isvc-sklearn-s3-tls-custom-pass-predictor-848ff9798-tm765 | Scheduled | Successfully assigned kserve-ci-e2e-test/isvc-sklearn-s3-tls-custom-pass-predictor-848ff9798-tm765 to ip-10-0-139-128.ec2.internal
- | kserve-ci-e2e-test | - | isvc-sklearn-s3-tls-custom-fail-predictor-c7d959f56-zkxbc | Scheduled | Successfully assigned kserve-ci-e2e-test/isvc-sklearn-s3-tls-custom-fail-predictor-c7d959f56-zkxbc to ip-10-0-139-128.ec2.internal
- | kserve-ci-e2e-test | - | isvc-sklearn-s3-predictor-584b446894-dhtlj | Scheduled | Successfully assigned kserve-ci-e2e-test/isvc-sklearn-s3-predictor-584b446894-dhtlj to ip-10-0-139-128.ec2.internal
- | kserve-ci-e2e-test | - | isvc-sklearn-runtime-predictor-7c6499f57-hk2ls | Scheduled | Successfully assigned kserve-ci-e2e-test/isvc-sklearn-runtime-predictor-7c6499f57-hk2ls to ip-10-0-139-128.ec2.internal
- | kserve-ci-e2e-test | - | isvc-sklearn-predictor-6875c879b7-96mbk | Scheduled | Successfully assigned kserve-ci-e2e-test/isvc-sklearn-predictor-6875c879b7-96mbk to ip-10-0-139-128.ec2.internal
- | kserve-ci-e2e-test | - | isvc-predictive-lightgbm-predictor-75cb94f9f-8whhc | Scheduled | Successfully assigned kserve-ci-e2e-test/isvc-predictive-lightgbm-predictor-75cb94f9f-8whhc to ip-10-0-143-206.ec2.internal
- | kserve-ci-e2e-test | - | isvc-predictive-lightgbm-v2-predictor-865b4598f7-ftgmz | Scheduled | Successfully assigned kserve-ci-e2e-test/isvc-predictive-lightgbm-v2-predictor-865b4598f7-ftgmz to ip-10-0-143-206.ec2.internal
- | kserve-ci-e2e-test | - | isvc-lightgbm-v2-runtime-predictor-8765c9667-z2x9p | Scheduled | Successfully assigned kserve-ci-e2e-test/isvc-lightgbm-v2-runtime-predictor-8765c9667-z2x9p to ip-10-0-139-128.ec2.internal
- | kserve-ci-e2e-test | - | xgboost-v2-mlserver-predictor-7799869d6f-v4hvs | Scheduled | Successfully assigned kserve-ci-e2e-test/xgboost-v2-mlserver-predictor-7799869d6f-v4hvs to ip-10-0-139-128.ec2.internal
kserve-ci-e2e-test

v1beta1Controllers

isvc-sklearn-batcher

InternalError

fails to update InferenceService status: Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-sklearn-batcher": the object has been modified; please apply your changes to the latest version and try again

kserve-ci-e2e-test

v1beta1Controllers

isvc-sklearn-batcher

UpdateFailed

Failed to update status for InferenceService "isvc-sklearn-batcher": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-sklearn-batcher": the object has been modified; please apply your changes to the latest version and try again

kserve-ci-e2e-test

replicaset-controller

isvc-sklearn-batcher-predictor-57fcff47c9

SuccessfulCreate

Created pod: isvc-sklearn-batcher-predictor-57fcff47c9-zrm52

kserve-ci-e2e-test

multus

isvc-sklearn-batcher-predictor-57fcff47c9-zrm52

AddedInterface

Add eth0 [10.134.0.25/23] from ovn-kubernetes

kserve-ci-e2e-test

kubelet

isvc-sklearn-batcher-predictor-57fcff47c9-zrm52

Pulling

Pulling image "quay.io/opendatahub/kserve-storage-initializer@sha256:3460f014e7dc0a9d3daafe0716ca9eadf865f2892e0a5103d0b876da9f34891f"

kserve-ci-e2e-test

deployment-controller

isvc-sklearn-batcher-predictor

ScalingReplicaSet

Scaled up replica set isvc-sklearn-batcher-predictor-57fcff47c9 from 0 to 1

kserve-ci-e2e-test

kubelet

isvc-sklearn-batcher-predictor-57fcff47c9-zrm52

Created

Created container: storage-initializer

kserve-ci-e2e-test

kubelet

isvc-sklearn-batcher-predictor-57fcff47c9-zrm52

Pulled

Successfully pulled image "quay.io/opendatahub/kserve-storage-initializer@sha256:3460f014e7dc0a9d3daafe0716ca9eadf865f2892e0a5103d0b876da9f34891f" in 4.542s (4.542s including waiting). Image size: 301485528 bytes.

kserve-ci-e2e-test

kubelet

isvc-sklearn-batcher-predictor-57fcff47c9-zrm52

Started

Started container storage-initializer
(x23)

kserve-ci-e2e-test

v1beta1Controllers

isvc-sklearn-batcher

VirtualServiceCRDNotFound

Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.

kserve-ci-e2e-test

kubelet

isvc-sklearn-batcher-predictor-57fcff47c9-zrm52

Pulling

Pulling image "quay.io/opendatahub/sklearn-serving-runtime:odh-pr-1404"

kserve-ci-e2e-test

kubelet

isvc-sklearn-batcher-predictor-57fcff47c9-zrm52

Created

Created container: kserve-container

kserve-ci-e2e-test

kubelet

isvc-sklearn-batcher-predictor-57fcff47c9-zrm52

Pulling

Pulling image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3"

kserve-ci-e2e-test

kubelet

isvc-sklearn-batcher-predictor-57fcff47c9-zrm52

Started

Started container kserve-container

kserve-ci-e2e-test

kubelet

isvc-sklearn-batcher-predictor-57fcff47c9-zrm52

Pulled

Successfully pulled image "quay.io/opendatahub/sklearn-serving-runtime:odh-pr-1404" in 12.738s (12.738s including waiting). Image size: 1560922562 bytes.

kserve-ci-e2e-test

kubelet

isvc-sklearn-batcher-predictor-57fcff47c9-zrm52

Started

Started container kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-sklearn-batcher-predictor-57fcff47c9-zrm52

Pulling

Pulling image "quay.io/opendatahub/kserve-agent@sha256:de59d4f440abaeb1e71b5977a2145cdbe8db88ded8ac16ca09f179d82ba41738"

kserve-ci-e2e-test

kubelet

isvc-sklearn-batcher-predictor-57fcff47c9-zrm52

Created

Created container: kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-sklearn-batcher-predictor-57fcff47c9-zrm52

Pulled

Successfully pulled image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" in 2.506s (2.506s including waiting). Image size: 211946088 bytes.

kserve-ci-e2e-test

kubelet

isvc-sklearn-batcher-predictor-57fcff47c9-zrm52

Started

Started container agent

kserve-ci-e2e-test

kubelet

isvc-sklearn-batcher-predictor-57fcff47c9-zrm52

Pulled

Successfully pulled image "quay.io/opendatahub/kserve-agent@sha256:de59d4f440abaeb1e71b5977a2145cdbe8db88ded8ac16ca09f179d82ba41738" in 2.912s (2.912s including waiting). Image size: 238051450 bytes.

kserve-ci-e2e-test

kubelet

isvc-sklearn-batcher-predictor-57fcff47c9-zrm52

Created

Created container: agent
(x4)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-sklearn-batcher-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x4)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-sklearn-batcher-predictor

FailedGetResourceMetric

failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API

kserve-ci-e2e-test

v1beta1Controllers

isvc-sklearn-batcher

InferenceServiceReady

InferenceService [isvc-sklearn-batcher] is Ready

kserve-ci-e2e-test

deployment-controller

isvc-sklearn-batcher-custom-predictor

ScalingReplicaSet

Scaled up replica set isvc-sklearn-batcher-custom-predictor-7c59ff5d from 0 to 1

kserve-ci-e2e-test

v1beta1Controllers

isvc-sklearn-batcher-custom

UpdateFailed

Failed to update status for InferenceService "isvc-sklearn-batcher-custom": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-sklearn-batcher-custom": the object has been modified; please apply your changes to the latest version and try again
(x3)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-sklearn-batcher-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
(x3)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-sklearn-batcher-predictor

FailedGetResourceMetric

failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)

kserve-ci-e2e-test

multus

isvc-sklearn-batcher-custom-predictor-7c59ff5d-hd5lz

AddedInterface

Add eth0 [10.134.0.26/23] from ovn-kubernetes

kserve-ci-e2e-test

replicaset-controller

isvc-sklearn-batcher-custom-predictor-7c59ff5d

SuccessfulCreate

Created pod: isvc-sklearn-batcher-custom-predictor-7c59ff5d-hd5lz

kserve-ci-e2e-test

kubelet

isvc-sklearn-batcher-custom-predictor-7c59ff5d-hd5lz

Pulled

Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:3460f014e7dc0a9d3daafe0716ca9eadf865f2892e0a5103d0b876da9f34891f" already present on machine

kserve-ci-e2e-test

kubelet

isvc-sklearn-batcher-custom-predictor-7c59ff5d-hd5lz

Created

Created container: storage-initializer

kserve-ci-e2e-test

kubelet

isvc-sklearn-batcher-custom-predictor-7c59ff5d-hd5lz

Started

Started container storage-initializer

kserve-ci-e2e-test

kubelet

isvc-sklearn-batcher-predictor-57fcff47c9-zrm52

Killing

Stopping container kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-sklearn-batcher-predictor-57fcff47c9-zrm52

Killing

Stopping container agent

kserve-ci-e2e-test

kubelet

isvc-sklearn-batcher-predictor-57fcff47c9-zrm52

Killing

Stopping container kserve-container

kserve-ci-e2e-test

kubelet

isvc-sklearn-batcher-custom-predictor-7c59ff5d-hd5lz

Pulled

Container image "quay.io/opendatahub/kserve-agent@sha256:de59d4f440abaeb1e71b5977a2145cdbe8db88ded8ac16ca09f179d82ba41738" already present on machine

kserve-ci-e2e-test

kubelet

isvc-sklearn-batcher-custom-predictor-7c59ff5d-hd5lz

Pulled

Container image "quay.io/opendatahub/sklearn-serving-runtime:odh-pr-1404" already present on machine

kserve-ci-e2e-test

kubelet

isvc-sklearn-batcher-custom-predictor-7c59ff5d-hd5lz

Created

Created container: kserve-container

kserve-ci-e2e-test

kubelet

isvc-sklearn-batcher-custom-predictor-7c59ff5d-hd5lz

Started

Started container kserve-container

kserve-ci-e2e-test

kubelet

isvc-sklearn-batcher-custom-predictor-7c59ff5d-hd5lz

Pulled

Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine

kserve-ci-e2e-test

kubelet

isvc-sklearn-batcher-custom-predictor-7c59ff5d-hd5lz

Started

Started container agent

kserve-ci-e2e-test

kubelet

isvc-sklearn-batcher-custom-predictor-7c59ff5d-hd5lz

Created

Created container: agent

kserve-ci-e2e-test

kubelet

isvc-sklearn-batcher-custom-predictor-7c59ff5d-hd5lz

Created

Created container: kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-sklearn-batcher-custom-predictor-7c59ff5d-hd5lz

Started

Started container kube-rbac-proxy
(x24)

kserve-ci-e2e-test

v1beta1Controllers

isvc-sklearn-batcher-custom

VirtualServiceCRDNotFound

Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.
(x10)

kserve-ci-e2e-test

kubelet

isvc-sklearn-batcher-predictor-57fcff47c9-zrm52

Unhealthy

Readiness probe failed: HTTP probe failed with statuscode: 503
(x10)

kserve-ci-e2e-test

kubelet

isvc-sklearn-batcher-predictor-57fcff47c9-zrm52

Unhealthy

Readiness probe failed: dial tcp 10.134.0.25:8080: connect: connection refused
(x5)

kserve-ci-e2e-test

kubelet

isvc-sklearn-batcher-predictor-57fcff47c9-zrm52

Unhealthy

Readiness probe failed: Get "https://10.134.0.25:8643/healthz": dial tcp 10.134.0.25:8643: connect: connection refused
(x3)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-sklearn-batcher-custom-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x3)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-sklearn-batcher-custom-predictor

FailedGetResourceMetric

failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API

kserve-ci-e2e-test

v1beta1Controllers

isvc-sklearn-batcher-custom

InferenceServiceReady

InferenceService [isvc-sklearn-batcher-custom] is Ready
(x2)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-sklearn-batcher-custom-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
(x2)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-sklearn-batcher-custom-predictor

FailedGetResourceMetric

failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)

kserve-ci-e2e-test

kubelet

message-dumper-predictor-c7d86bcbd-x8c6j

FailedMount

MountVolume.SetUp failed for volume "proxy-tls" : secret "message-dumper-predictor-serving-cert" not found

kserve-ci-e2e-test

replicaset-controller

message-dumper-predictor-c7d86bcbd

SuccessfulCreate

Created pod: message-dumper-predictor-c7d86bcbd-x8c6j

kserve-ci-e2e-test

kubelet

isvc-sklearn-batcher-custom-predictor-7c59ff5d-hd5lz

Killing

Stopping container kserve-container

kserve-ci-e2e-test

kubelet

isvc-sklearn-batcher-custom-predictor-7c59ff5d-hd5lz

Killing

Stopping container kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-sklearn-batcher-custom-predictor-7c59ff5d-hd5lz

Killing

Stopping container agent

kserve-ci-e2e-test

deployment-controller

message-dumper-predictor

ScalingReplicaSet

Scaled up replica set message-dumper-predictor-c7d86bcbd from 0 to 1

kserve-ci-e2e-test

multus

message-dumper-predictor-c7d86bcbd-x8c6j

AddedInterface

Add eth0 [10.134.0.27/23] from ovn-kubernetes

kserve-ci-e2e-test

kubelet

message-dumper-predictor-c7d86bcbd-x8c6j

Pulling

Pulling image "gcr.io/knative-releases/knative.dev/eventing-contrib/cmd/event_display"

kserve-ci-e2e-test

kubelet

message-dumper-predictor-c7d86bcbd-x8c6j

Started

Started container kube-rbac-proxy

kserve-ci-e2e-test

kubelet

message-dumper-predictor-c7d86bcbd-x8c6j

Created

Created container: kube-rbac-proxy

kserve-ci-e2e-test

kubelet

message-dumper-predictor-c7d86bcbd-x8c6j

Started

Started container kserve-container

kserve-ci-e2e-test

kubelet

message-dumper-predictor-c7d86bcbd-x8c6j

Created

Created container: kserve-container

kserve-ci-e2e-test

kubelet

message-dumper-predictor-c7d86bcbd-x8c6j

Pulled

Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine

kserve-ci-e2e-test

kubelet

message-dumper-predictor-c7d86bcbd-x8c6j

Pulled

Successfully pulled image "gcr.io/knative-releases/knative.dev/eventing-contrib/cmd/event_display" in 1.225s (1.225s including waiting). Image size: 14813193 bytes.
(x25)

kserve-ci-e2e-test

v1beta1Controllers

message-dumper

VirtualServiceCRDNotFound

Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.

kserve-ci-e2e-test

v1beta1Controllers

message-dumper

InferenceServiceReady

InferenceService [message-dumper] is Ready
(x10)

kserve-ci-e2e-test

kubelet

isvc-sklearn-batcher-custom-predictor-7c59ff5d-hd5lz

Unhealthy

Readiness probe failed: HTTP probe failed with statuscode: 503
(x10)

kserve-ci-e2e-test

kubelet

isvc-sklearn-batcher-custom-predictor-7c59ff5d-hd5lz

Unhealthy

Readiness probe failed: dial tcp 10.134.0.26:5000: connect: connection refused

kserve-ci-e2e-test

deployment-controller

isvc-logger-predictor

ScalingReplicaSet

Scaled up replica set isvc-logger-predictor-7d4db54646 from 0 to 1

kserve-ci-e2e-test

replicaset-controller

isvc-logger-predictor-7d4db54646

SuccessfulCreate

Created pod: isvc-logger-predictor-7d4db54646-dlmkf

kserve-ci-e2e-test

kubelet

isvc-logger-predictor-7d4db54646-dlmkf

Pulled

Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:3460f014e7dc0a9d3daafe0716ca9eadf865f2892e0a5103d0b876da9f34891f" already present on machine

kserve-ci-e2e-test

kubelet

isvc-logger-predictor-7d4db54646-dlmkf

Started

Started container storage-initializer
(x5)

kserve-ci-e2e-test

kubelet

isvc-sklearn-batcher-custom-predictor-7c59ff5d-hd5lz

Unhealthy

Readiness probe failed: Get "https://10.134.0.26:8643/healthz": dial tcp 10.134.0.26:8643: connect: connection refused

kserve-ci-e2e-test

multus

isvc-logger-predictor-7d4db54646-dlmkf

AddedInterface

Add eth0 [10.134.0.28/23] from ovn-kubernetes

kserve-ci-e2e-test

kubelet

isvc-logger-predictor-7d4db54646-dlmkf

Created

Created container: storage-initializer

kserve-ci-e2e-test

kubelet

isvc-logger-predictor-7d4db54646-dlmkf

Pulled

Container image "quay.io/opendatahub/sklearn-serving-runtime:odh-pr-1404" already present on machine

kserve-ci-e2e-test

kubelet

isvc-logger-predictor-7d4db54646-dlmkf

Started

Started container kserve-container

kserve-ci-e2e-test

kubelet

isvc-logger-predictor-7d4db54646-dlmkf

Created

Created container: kserve-container

kserve-ci-e2e-test

kubelet

isvc-logger-predictor-7d4db54646-dlmkf

Pulled

Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine

kserve-ci-e2e-test

kubelet

isvc-logger-predictor-7d4db54646-dlmkf

Created

Created container: kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-logger-predictor-7d4db54646-dlmkf

Started

Started container kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-logger-predictor-7d4db54646-dlmkf

Pulled

Container image "quay.io/opendatahub/kserve-agent@sha256:de59d4f440abaeb1e71b5977a2145cdbe8db88ded8ac16ca09f179d82ba41738" already present on machine

kserve-ci-e2e-test

kubelet

isvc-logger-predictor-7d4db54646-dlmkf

Created

Created container: agent

kserve-ci-e2e-test

kubelet

isvc-logger-predictor-7d4db54646-dlmkf

Started

Started container agent
(x2)

kserve-ci-e2e-test

horizontal-pod-autoscaler

message-dumper-predictor

FailedGetResourceMetric

failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x2)

kserve-ci-e2e-test

horizontal-pod-autoscaler

message-dumper-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x25)

kserve-ci-e2e-test

v1beta1Controllers

isvc-logger

VirtualServiceCRDNotFound

Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.
(x2)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-logger-predictor

FailedGetResourceMetric

failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x2)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-logger-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API

kserve-ci-e2e-test

v1beta1Controllers

isvc-logger

InferenceServiceReady

InferenceService [isvc-logger] is Ready
(x3)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-logger-predictor

FailedGetResourceMetric

failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
(x3)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-logger-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)

kserve-ci-e2e-test

deployment-controller

isvc-lightgbm-predictor

ScalingReplicaSet

Scaled up replica set isvc-lightgbm-predictor-bdf964bd from 0 to 1

kserve-ci-e2e-test

kubelet

isvc-logger-predictor-7d4db54646-dlmkf

Killing

Stopping container agent

kserve-ci-e2e-test

kubelet

isvc-logger-predictor-7d4db54646-dlmkf

Killing

Stopping container kube-rbac-proxy

kserve-ci-e2e-test

replicaset-controller

isvc-lightgbm-predictor-bdf964bd

SuccessfulCreate

Created pod: isvc-lightgbm-predictor-bdf964bd-ltxg9

kserve-ci-e2e-test

kubelet

isvc-logger-predictor-7d4db54646-dlmkf

Killing

Stopping container kserve-container

kserve-ci-e2e-test

kubelet

message-dumper-predictor-c7d86bcbd-x8c6j

Killing

Stopping container kube-rbac-proxy

kserve-ci-e2e-test

kubelet

message-dumper-predictor-c7d86bcbd-x8c6j

Killing

Stopping container kserve-container

kserve-ci-e2e-test

kubelet

isvc-lightgbm-predictor-bdf964bd-ltxg9

Pulled

Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:3460f014e7dc0a9d3daafe0716ca9eadf865f2892e0a5103d0b876da9f34891f" already present on machine

kserve-ci-e2e-test

kubelet

isvc-lightgbm-predictor-bdf964bd-ltxg9

Started

Started container storage-initializer

kserve-ci-e2e-test

kubelet

isvc-lightgbm-predictor-bdf964bd-ltxg9

Created

Created container: storage-initializer

kserve-ci-e2e-test

multus

isvc-lightgbm-predictor-bdf964bd-ltxg9

AddedInterface

Add eth0 [10.134.0.29/23] from ovn-kubernetes

kserve-ci-e2e-test

kubelet

isvc-lightgbm-predictor-bdf964bd-ltxg9

Pulling

Pulling image "kserve/lgbserver:latest"
(x25)

kserve-ci-e2e-test

v1beta1Controllers

isvc-lightgbm

VirtualServiceCRDNotFound

Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.

kserve-ci-e2e-test

kubelet

isvc-lightgbm-predictor-bdf964bd-ltxg9

Pulled

Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine

kserve-ci-e2e-test

kubelet

isvc-lightgbm-predictor-bdf964bd-ltxg9

Started

Started container kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-lightgbm-predictor-bdf964bd-ltxg9

Pulled

Successfully pulled image "kserve/lgbserver:latest" in 7.033s (7.033s including waiting). Image size: 606096651 bytes.

kserve-ci-e2e-test

kubelet

isvc-lightgbm-predictor-bdf964bd-ltxg9

Created

Created container: kserve-container

kserve-ci-e2e-test

kubelet

isvc-lightgbm-predictor-bdf964bd-ltxg9

Started

Started container kserve-container

kserve-ci-e2e-test | kubelet | isvc-lightgbm-predictor-bdf964bd-ltxg9 | Created | Created container: kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-logger-predictor-7d4db54646-dlmkf | Unhealthy | Readiness probe failed: dial tcp 10.134.0.28:8080: connect: connection refused (x11)
kserve-ci-e2e-test | kubelet | isvc-logger-predictor-7d4db54646-dlmkf | Unhealthy | Readiness probe failed: HTTP probe failed with statuscode: 503 (x10)
kserve-ci-e2e-test | kubelet | isvc-logger-predictor-7d4db54646-dlmkf | Unhealthy | Readiness probe failed: Get "https://10.134.0.28:8643/healthz": dial tcp 10.134.0.28:8643: connect: connection refused (x4)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-lightgbm-predictor | FailedGetResourceMetric | failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API (x3)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-lightgbm-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API (x3)
kserve-ci-e2e-test | kubelet | isvc-lightgbm-predictor-bdf964bd-ltxg9 | Unhealthy | Readiness probe failed: dial tcp 10.134.0.29:8080: connect: connection refused (x10)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-lightgbm-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready) (x3)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-lightgbm-predictor | FailedGetResourceMetric | failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready) (x3)
kserve-ci-e2e-test | v1beta1Controllers | isvc-lightgbm | InferenceServiceReady | InferenceService [isvc-lightgbm] is Ready
kserve-ci-e2e-test | kubelet | isvc-lightgbm-predictor-bdf964bd-ltxg9 | Killing | Stopping container kserve-container
kserve-ci-e2e-test | kubelet | isvc-lightgbm-predictor-bdf964bd-ltxg9 | Killing | Stopping container kube-rbac-proxy
kserve-ci-e2e-test | deployment-controller | isvc-lightgbm-runtime-predictor | ScalingReplicaSet | Scaled up replica set isvc-lightgbm-runtime-predictor-749c4f6d58 from 0 to 1
kserve-ci-e2e-test | kubelet | isvc-lightgbm-runtime-predictor-749c4f6d58-67zf9 | Started | Started container storage-initializer
kserve-ci-e2e-test | replicaset-controller | isvc-lightgbm-runtime-predictor-749c4f6d58 | SuccessfulCreate | Created pod: isvc-lightgbm-runtime-predictor-749c4f6d58-67zf9
kserve-ci-e2e-test | multus | isvc-lightgbm-runtime-predictor-749c4f6d58-67zf9 | AddedInterface | Add eth0 [10.134.0.30/23] from ovn-kubernetes
kserve-ci-e2e-test | kubelet | isvc-lightgbm-runtime-predictor-749c4f6d58-67zf9 | Pulled | Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:3460f014e7dc0a9d3daafe0716ca9eadf865f2892e0a5103d0b876da9f34891f" already present on machine
kserve-ci-e2e-test | kubelet | isvc-lightgbm-runtime-predictor-749c4f6d58-67zf9 | Created | Created container: storage-initializer
kserve-ci-e2e-test | kubelet | isvc-lightgbm-predictor-bdf964bd-ltxg9 | Unhealthy | Readiness probe failed: Get "https://10.134.0.29:8643/healthz": dial tcp 10.134.0.29:8643: connect: connection refused
kserve-ci-e2e-test | kubelet | isvc-lightgbm-runtime-predictor-749c4f6d58-67zf9 | Created | Created container: kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-lightgbm-runtime-predictor-749c4f6d58-67zf9 | Pulled | Container image "kserve/lgbserver:latest" already present on machine
kserve-ci-e2e-test | kubelet | isvc-lightgbm-runtime-predictor-749c4f6d58-67zf9 | Created | Created container: kserve-container
kserve-ci-e2e-test | kubelet | isvc-lightgbm-runtime-predictor-749c4f6d58-67zf9 | Started | Started container kserve-container
kserve-ci-e2e-test | kubelet | isvc-lightgbm-runtime-predictor-749c4f6d58-67zf9 | Pulled | Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine
kserve-ci-e2e-test | kubelet | isvc-lightgbm-runtime-predictor-749c4f6d58-67zf9 | Started | Started container kube-rbac-proxy
kserve-ci-e2e-test | v1beta1Controllers | isvc-lightgbm-runtime | VirtualServiceCRDNotFound | Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true. (x25)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-lightgbm-runtime-predictor | FailedGetResourceMetric | failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API (x2)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-lightgbm-runtime-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API (x2)
kserve-ci-e2e-test | v1beta1Controllers | isvc-lightgbm-runtime | InferenceServiceReady | InferenceService [isvc-lightgbm-runtime] is Ready
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-lightgbm-runtime-predictor | FailedGetResourceMetric | failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready) (x4)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-lightgbm-runtime-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready) (x4)
kserve-ci-e2e-test | deployment-controller | isvc-lightgbm-v2-runtime-predictor | ScalingReplicaSet | Scaled up replica set isvc-lightgbm-v2-runtime-predictor-8765c9667 from 0 to 1
kserve-ci-e2e-test | kubelet | isvc-lightgbm-runtime-predictor-749c4f6d58-67zf9 | Killing | Stopping container kserve-container
kserve-ci-e2e-test | kubelet | isvc-lightgbm-runtime-predictor-749c4f6d58-67zf9 | Killing | Stopping container kube-rbac-proxy
kserve-ci-e2e-test | replicaset-controller | isvc-lightgbm-v2-runtime-predictor-8765c9667 | SuccessfulCreate | Created pod: isvc-lightgbm-v2-runtime-predictor-8765c9667-z2x9p
kserve-ci-e2e-test | kubelet | isvc-lightgbm-v2-runtime-predictor-8765c9667-z2x9p | FailedMount | MountVolume.SetUp failed for volume "proxy-tls" : secret "isvc-lightgbm-v2-runtime-predictor-serving-cert" not found
kserve-ci-e2e-test | kubelet | isvc-lightgbm-v2-runtime-predictor-8765c9667-z2x9p | Created | Created container: storage-initializer
kserve-ci-e2e-test | multus | isvc-lightgbm-v2-runtime-predictor-8765c9667-z2x9p | AddedInterface | Add eth0 [10.134.0.31/23] from ovn-kubernetes
kserve-ci-e2e-test | kubelet | isvc-lightgbm-v2-runtime-predictor-8765c9667-z2x9p | Started | Started container storage-initializer
kserve-ci-e2e-test | kubelet | isvc-lightgbm-v2-runtime-predictor-8765c9667-z2x9p | Pulled | Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:3460f014e7dc0a9d3daafe0716ca9eadf865f2892e0a5103d0b876da9f34891f" already present on machine
kserve-ci-e2e-test | kubelet | isvc-lightgbm-runtime-predictor-749c4f6d58-67zf9 | Unhealthy | Readiness probe failed: Get "https://10.134.0.30:8643/healthz": dial tcp 10.134.0.30:8643: connect: connection refused
kserve-ci-e2e-test | kubelet | isvc-lightgbm-runtime-predictor-749c4f6d58-67zf9 | Unhealthy | Readiness probe failed: dial tcp 10.134.0.30:8080: connect: connection refused (x10)
kserve-ci-e2e-test | kubelet | isvc-lightgbm-v2-runtime-predictor-8765c9667-z2x9p | Pulling | Pulling image "docker.io/seldonio/mlserver:1.7.1"
kserve-ci-e2e-test | v1beta1Controllers | isvc-lightgbm-v2-runtime | VirtualServiceCRDNotFound | Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true. (x25)
kserve-ci-e2e-test | kubelet | isvc-lightgbm-v2-runtime-predictor-8765c9667-z2x9p | Started | Started container kserve-container
kserve-ci-e2e-test | kubelet | isvc-lightgbm-v2-runtime-predictor-8765c9667-z2x9p | Created | Created container: kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-lightgbm-v2-runtime-predictor-8765c9667-z2x9p | Started | Started container kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-lightgbm-v2-runtime-predictor-8765c9667-z2x9p | Pulled | Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine
kserve-ci-e2e-test | kubelet | isvc-lightgbm-v2-runtime-predictor-8765c9667-z2x9p | Created | Created container: kserve-container
kserve-ci-e2e-test | kubelet | isvc-lightgbm-v2-runtime-predictor-8765c9667-z2x9p | Pulled | Successfully pulled image "docker.io/seldonio/mlserver:1.7.1" in 2m24.657s (2m24.657s including waiting). Image size: 10890461297 bytes.
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-lightgbm-v2-runtime-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API (x12)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-lightgbm-v2-runtime-predictor | FailedGetResourceMetric | failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API (x12)
kserve-ci-e2e-test | v1beta1Controllers | isvc-lightgbm-v2-runtime | InferenceServiceReady | InferenceService [isvc-lightgbm-v2-runtime] is Ready
kserve-ci-e2e-test | replicaset-controller | isvc-lightgbm-v2-kserve-predictor-559bf6989 | SuccessfulCreate | Created pod: isvc-lightgbm-v2-kserve-predictor-559bf6989-t4pcr
kserve-ci-e2e-test | kubelet | isvc-lightgbm-v2-runtime-predictor-8765c9667-z2x9p | Killing | Stopping container kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-lightgbm-v2-runtime-predictor-8765c9667-z2x9p | Killing | Stopping container kserve-container
kserve-ci-e2e-test | deployment-controller | isvc-lightgbm-v2-kserve-predictor | ScalingReplicaSet | Scaled up replica set isvc-lightgbm-v2-kserve-predictor-559bf6989 from 0 to 1
kserve-ci-e2e-test | multus | isvc-lightgbm-v2-kserve-predictor-559bf6989-t4pcr | AddedInterface | Add eth0 [10.134.0.32/23] from ovn-kubernetes
kserve-ci-e2e-test | kubelet | isvc-lightgbm-v2-kserve-predictor-559bf6989-t4pcr | Pulled | Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:3460f014e7dc0a9d3daafe0716ca9eadf865f2892e0a5103d0b876da9f34891f" already present on machine
kserve-ci-e2e-test | kubelet | isvc-lightgbm-v2-kserve-predictor-559bf6989-t4pcr | Created | Created container: storage-initializer
kserve-ci-e2e-test | kubelet | isvc-lightgbm-v2-kserve-predictor-559bf6989-t4pcr | Started | Started container storage-initializer
kserve-ci-e2e-test | kubelet | isvc-lightgbm-v2-kserve-predictor-559bf6989-t4pcr | Pulled | Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine
kserve-ci-e2e-test | kubelet | isvc-lightgbm-v2-kserve-predictor-559bf6989-t4pcr | Pulled | Container image "kserve/lgbserver:latest" already present on machine
kserve-ci-e2e-test | kubelet | isvc-lightgbm-v2-kserve-predictor-559bf6989-t4pcr | Created | Created container: kserve-container
kserve-ci-e2e-test | kubelet | isvc-lightgbm-v2-kserve-predictor-559bf6989-t4pcr | Started | Started container kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-lightgbm-v2-kserve-predictor-559bf6989-t4pcr | Created | Created container: kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-lightgbm-v2-kserve-predictor-559bf6989-t4pcr | Started | Started container kserve-container
kserve-ci-e2e-test | v1beta1Controllers | isvc-lightgbm-v2-kserve | VirtualServiceCRDNotFound | Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true. (x25)
kserve-ci-e2e-test | kubelet | isvc-lightgbm-v2-kserve-predictor-559bf6989-t4pcr | Unhealthy | Readiness probe failed: dial tcp 10.134.0.32:8080: connect: connection refused (x3)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-lightgbm-v2-kserve-predictor | FailedGetResourceMetric | failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-lightgbm-v2-kserve-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
kserve-ci-e2e-test | v1beta1Controllers | isvc-lightgbm-v2-kserve | InferenceServiceReady | InferenceService [isvc-lightgbm-v2-kserve] is Ready
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-lightgbm-v2-kserve-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
kserve-ci-e2e-test | replicaset-controller | isvc-mlflow-v2-runtime-predictor-5fdb47d546 | SuccessfulCreate | Created pod: isvc-mlflow-v2-runtime-predictor-5fdb47d546-g52mx
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-lightgbm-v2-kserve-predictor | FailedGetResourceMetric | failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
kserve-ci-e2e-test | kubelet | isvc-lightgbm-v2-kserve-predictor-559bf6989-t4pcr | Killing | Stopping container kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-lightgbm-v2-kserve-predictor-559bf6989-t4pcr | Killing | Stopping container kserve-container
kserve-ci-e2e-test | v1beta1Controllers | isvc-mlflow-v2-runtime | UpdateFailed | Failed to update status for InferenceService "isvc-mlflow-v2-runtime": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-mlflow-v2-runtime": the object has been modified; please apply your changes to the latest version and try again
kserve-ci-e2e-test | deployment-controller | isvc-mlflow-v2-runtime-predictor | ScalingReplicaSet | Scaled up replica set isvc-mlflow-v2-runtime-predictor-5fdb47d546 from 0 to 1
kserve-ci-e2e-test | multus | isvc-mlflow-v2-runtime-predictor-5fdb47d546-g52mx | AddedInterface | Add eth0 [10.134.0.33/23] from ovn-kubernetes
kserve-ci-e2e-test | kubelet | isvc-mlflow-v2-runtime-predictor-5fdb47d546-g52mx | Pulled | Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:3460f014e7dc0a9d3daafe0716ca9eadf865f2892e0a5103d0b876da9f34891f" already present on machine
kserve-ci-e2e-test | kubelet | isvc-mlflow-v2-runtime-predictor-5fdb47d546-g52mx | Created | Created container: storage-initializer
kserve-ci-e2e-test | kubelet | isvc-mlflow-v2-runtime-predictor-5fdb47d546-g52mx | Started | Started container storage-initializer
kserve-ci-e2e-test | kubelet | isvc-mlflow-v2-runtime-predictor-5fdb47d546-g52mx | Started | Started container kserve-container
kserve-ci-e2e-test | kubelet | isvc-mlflow-v2-runtime-predictor-5fdb47d546-g52mx | Created | Created container: kserve-container
kserve-ci-e2e-test | kubelet | isvc-mlflow-v2-runtime-predictor-5fdb47d546-g52mx | Created | Created container: kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-mlflow-v2-runtime-predictor-5fdb47d546-g52mx | Started | Started container kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-mlflow-v2-runtime-predictor-5fdb47d546-g52mx | Pulled | Container image "docker.io/seldonio/mlserver:1.7.1" already present on machine
kserve-ci-e2e-test | kubelet | isvc-mlflow-v2-runtime-predictor-5fdb47d546-g52mx | Pulled | Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine
kserve-ci-e2e-test | v1beta1Controllers | isvc-mlflow-v2-runtime | VirtualServiceCRDNotFound | Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true. (x24)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-mlflow-v2-runtime-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API (x2)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-mlflow-v2-runtime-predictor | FailedGetResourceMetric | failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API (x2)
kserve-ci-e2e-test | v1beta1Controllers | isvc-mlflow-v2-runtime | InferenceServiceReady | InferenceService [isvc-mlflow-v2-runtime] is Ready
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-mlflow-v2-runtime-predictor | FailedGetResourceMetric | failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-mlflow-v2-runtime-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
kserve-ci-e2e-test | deployment-controller | isvc-sklearn-mcp-predictor | ScalingReplicaSet | Scaled up replica set isvc-sklearn-mcp-predictor-5f8b5bfcd6 from 0 to 1
kserve-ci-e2e-test | kubelet | isvc-mlflow-v2-runtime-predictor-5fdb47d546-g52mx | Killing | Stopping container kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-mlflow-v2-runtime-predictor-5fdb47d546-g52mx | Killing | Stopping container kserve-container
kserve-ci-e2e-test | replicaset-controller | isvc-sklearn-mcp-predictor-5f8b5bfcd6 | SuccessfulCreate | Created pod: isvc-sklearn-mcp-predictor-5f8b5bfcd6-nw5gq
kserve-ci-e2e-test | kubelet | isvc-sklearn-mcp-predictor-5f8b5bfcd6-nw5gq | FailedMount | MountVolume.SetUp failed for volume "proxy-tls" : secret "isvc-sklearn-mcp-predictor-serving-cert" not found
kserve-ci-e2e-test | multus | isvc-sklearn-mcp-predictor-5f8b5bfcd6-nw5gq | AddedInterface | Add eth0 [10.134.0.34/23] from ovn-kubernetes
kserve-ci-e2e-test | kubelet | isvc-sklearn-mcp-predictor-5f8b5bfcd6-nw5gq | Pulled | Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:3460f014e7dc0a9d3daafe0716ca9eadf865f2892e0a5103d0b876da9f34891f" already present on machine
kserve-ci-e2e-test | kubelet | isvc-sklearn-mcp-predictor-5f8b5bfcd6-nw5gq | Created | Created container: storage-initializer
kserve-ci-e2e-test | kubelet | isvc-sklearn-mcp-predictor-5f8b5bfcd6-nw5gq | Started | Started container storage-initializer
kserve-ci-e2e-test | kubelet | isvc-sklearn-mcp-predictor-5f8b5bfcd6-nw5gq | Pulled | Container image "quay.io/opendatahub/sklearn-serving-runtime:odh-pr-1404" already present on machine
kserve-ci-e2e-test | kubelet | isvc-sklearn-mcp-predictor-5f8b5bfcd6-nw5gq | Created | Created container: kserve-container
kserve-ci-e2e-test | kubelet | isvc-sklearn-mcp-predictor-5f8b5bfcd6-nw5gq | Started | Started container kserve-container
kserve-ci-e2e-test | kubelet | isvc-sklearn-mcp-predictor-5f8b5bfcd6-nw5gq | Pulling | Pulling image "quay.io/opendatahub/kserve-agent:latest"
kserve-ci-e2e-test | kubelet | isvc-sklearn-mcp-predictor-5f8b5bfcd6-nw5gq | Created | Created container: kserve-agent
kserve-ci-e2e-test | kubelet | isvc-sklearn-mcp-predictor-5f8b5bfcd6-nw5gq | Pulled | Successfully pulled image "quay.io/opendatahub/kserve-agent:latest" in 2.893s (2.893s including waiting). Image size: 237817895 bytes.
kserve-ci-e2e-test | kubelet | isvc-sklearn-mcp-predictor-5f8b5bfcd6-nw5gq | Started | Started container kserve-agent
kserve-ci-e2e-test | kubelet | isvc-sklearn-mcp-predictor-5f8b5bfcd6-nw5gq | Pulled | Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine
kserve-ci-e2e-test | kubelet | isvc-sklearn-mcp-predictor-5f8b5bfcd6-nw5gq | Created | Created container: kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-sklearn-mcp-predictor-5f8b5bfcd6-nw5gq | Started | Started container kube-rbac-proxy
kserve-ci-e2e-test | v1beta1Controllers | isvc-sklearn-mcp | VirtualServiceCRDNotFound | Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true. (x25)
kserve-ci-e2e-test | v1beta1Controllers | isvc-sklearn-mcp | InferenceServiceReady | InferenceService [isvc-sklearn-mcp] is Ready
kserve-ci-e2e-test | deployment-controller | isvc-paddle-predictor | ScalingReplicaSet | Scaled up replica set isvc-paddle-predictor-6b8b7cfb4b from 0 to 1
kserve-ci-e2e-test | kubelet | isvc-sklearn-mcp-predictor-5f8b5bfcd6-nw5gq | Killing | Stopping container kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-paddle-predictor-6b8b7cfb4b-ts72q | Pulled | Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:3460f014e7dc0a9d3daafe0716ca9eadf865f2892e0a5103d0b876da9f34891f" already present on machine
kserve-ci-e2e-test | replicaset-controller | isvc-paddle-predictor-6b8b7cfb4b | SuccessfulCreate | Created pod: isvc-paddle-predictor-6b8b7cfb4b-ts72q
kserve-ci-e2e-test | kubelet | isvc-sklearn-mcp-predictor-5f8b5bfcd6-nw5gq | Killing | Stopping container kserve-container
kserve-ci-e2e-test | kubelet | isvc-paddle-predictor-6b8b7cfb4b-ts72q | Started | Started container storage-initializer
kserve-ci-e2e-test | multus | isvc-paddle-predictor-6b8b7cfb4b-ts72q | AddedInterface | Add eth0 [10.134.0.35/23] from ovn-kubernetes
kserve-ci-e2e-test | kubelet | isvc-sklearn-mcp-predictor-5f8b5bfcd6-nw5gq | Killing | Stopping container kserve-agent
kserve-ci-e2e-test | kubelet | isvc-paddle-predictor-6b8b7cfb4b-ts72q | Created | Created container: storage-initializer
kserve-ci-e2e-test | kubelet | isvc-paddle-predictor-6b8b7cfb4b-ts72q | Pulling | Pulling image "kserve/paddleserver:latest"
kserve-ci-e2e-test | v1beta1Controllers | isvc-paddle | VirtualServiceCRDNotFound | Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true. (x25)
kserve-ci-e2e-test | kubelet | isvc-paddle-predictor-6b8b7cfb4b-ts72q | Started | Started container kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-paddle-predictor-6b8b7cfb4b-ts72q | Started | Started container kserve-container
kserve-ci-e2e-test | kubelet | isvc-paddle-predictor-6b8b7cfb4b-ts72q | Created | Created container: kserve-container
kserve-ci-e2e-test | kubelet | isvc-paddle-predictor-6b8b7cfb4b-ts72q | Pulled | Successfully pulled image "kserve/paddleserver:latest" in 10.842s (10.842s including waiting). Image size: 1162587384 bytes.
kserve-ci-e2e-test | kubelet | isvc-paddle-predictor-6b8b7cfb4b-ts72q | Pulled | Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine
kserve-ci-e2e-test | kubelet | isvc-paddle-predictor-6b8b7cfb4b-ts72q | Created | Created container: kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-sklearn-mcp-predictor-5f8b5bfcd6-nw5gq | Unhealthy | Readiness probe failed: Get "https://10.134.0.34:8643/healthz": dial tcp 10.134.0.34:8643: connect: connection refused (x6)
kserve-ci-e2e-test | kubelet | isvc-sklearn-mcp-predictor-5f8b5bfcd6-nw5gq | Unhealthy | Readiness probe failed: Get "http://10.134.0.34:8080/v1/models/isvc-sklearn-mcp": dial tcp 10.134.0.34:8080: connect: connection refused (x4)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-paddle-predictor | FailedGetResourceMetric | failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API (x3)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-paddle-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API (x3)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-paddle-predictor | FailedGetResourceMetric | failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-paddle-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
kserve-ci-e2e-test | v1beta1Controllers | isvc-paddle | InferenceServiceReady | InferenceService [isvc-paddle] is Ready
kserve-ci-e2e-test | replicaset-controller | isvc-paddle-runtime-predictor-7f4d4f9dc8 | SuccessfulCreate | Created pod: isvc-paddle-runtime-predictor-7f4d4f9dc8-9thsb
kserve-ci-e2e-test | deployment-controller | isvc-paddle-runtime-predictor | ScalingReplicaSet | Scaled up replica set isvc-paddle-runtime-predictor-7f4d4f9dc8 from 0 to 1
kserve-ci-e2e-test | kubelet | isvc-paddle-predictor-6b8b7cfb4b-ts72q | Killing | Stopping container kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-paddle-predictor-6b8b7cfb4b-ts72q | Killing | Stopping container kserve-container
kserve-ci-e2e-test | kubelet | isvc-paddle-runtime-predictor-7f4d4f9dc8-9thsb | Created | Created container: storage-initializer
kserve-ci-e2e-test | multus | isvc-paddle-runtime-predictor-7f4d4f9dc8-9thsb | AddedInterface | Add eth0 [10.134.0.36/23] from ovn-kubernetes
kserve-ci-e2e-test | kubelet | isvc-paddle-runtime-predictor-7f4d4f9dc8-9thsb | Pulled | Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:3460f014e7dc0a9d3daafe0716ca9eadf865f2892e0a5103d0b876da9f34891f" already present on machine
kserve-ci-e2e-test | kubelet | isvc-paddle-runtime-predictor-7f4d4f9dc8-9thsb | Started | Started container storage-initializer
kserve-ci-e2e-test | kubelet | isvc-paddle-predictor-6b8b7cfb4b-ts72q | Unhealthy | Readiness probe failed: dial tcp 10.134.0.35:8080: connect: connection refused (x7)
kserve-ci-e2e-test | kubelet | isvc-paddle-predictor-6b8b7cfb4b-ts72q | Unhealthy | Readiness probe failed: Get "https://10.134.0.35:8643/healthz": dial tcp 10.134.0.35:8643: connect: connection refused
kserve-ci-e2e-test | kubelet | isvc-paddle-runtime-predictor-7f4d4f9dc8-9thsb | Created | Created container: kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-paddle-runtime-predictor-7f4d4f9dc8-9thsb | Created | Created container: kserve-container
kserve-ci-e2e-test | kubelet | isvc-paddle-runtime-predictor-7f4d4f9dc8-9thsb | Pulled | Container image "kserve/paddleserver:latest" already present on machine
kserve-ci-e2e-test | kubelet | isvc-paddle-runtime-predictor-7f4d4f9dc8-9thsb | Started | Started container kserve-container
kserve-ci-e2e-test | kubelet | isvc-paddle-runtime-predictor-7f4d4f9dc8-9thsb | Started | Started container kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-paddle-runtime-predictor-7f4d4f9dc8-9thsb | Pulled | Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine
kserve-ci-e2e-test | v1beta1Controllers | isvc-paddle-runtime | VirtualServiceCRDNotFound | Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true. (x25)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-paddle-runtime-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API (x3)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-paddle-runtime-predictor | FailedGetResourceMetric | failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API (x3)
kserve-ci-e2e-test | v1beta1Controllers | isvc-paddle-runtime | InferenceServiceReady | InferenceService [isvc-paddle-runtime] is Ready
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-paddle-runtime-predictor | FailedGetResourceMetric | failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-paddle-runtime-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
kserve-ci-e2e-test | deployment-controller | isvc-paddle-v2-kserve-predictor | ScalingReplicaSet | Scaled up replica set isvc-paddle-v2-kserve-predictor-7dbd59854 from 0 to 1
kserve-ci-e2e-test | kubelet | isvc-paddle-v2-kserve-predictor-7dbd59854-jphz7 | Pulled | Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:3460f014e7dc0a9d3daafe0716ca9eadf865f2892e0a5103d0b876da9f34891f" already present on machine
kserve-ci-e2e-test | kubelet | isvc-paddle-v2-kserve-predictor-7dbd59854-jphz7 | Created | Created container: storage-initializer
kserve-ci-e2e-test | kubelet | isvc-paddle-runtime-predictor-7f4d4f9dc8-9thsb | Killing | Stopping container kserve-container
kserve-ci-e2e-test | multus | isvc-paddle-v2-kserve-predictor-7dbd59854-jphz7 | AddedInterface | Add eth0 [10.134.0.37/23] from ovn-kubernetes
kserve-ci-e2e-test | kubelet | isvc-paddle-v2-kserve-predictor-7dbd59854-jphz7 | Started | Started container storage-initializer
kserve-ci-e2e-test | replicaset-controller | isvc-paddle-v2-kserve-predictor-7dbd59854 | SuccessfulCreate | Created pod: isvc-paddle-v2-kserve-predictor-7dbd59854-jphz7
kserve-ci-e2e-test | kubelet | isvc-paddle-runtime-predictor-7f4d4f9dc8-9thsb | Killing | Stopping container kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-paddle-runtime-predictor-7f4d4f9dc8-9thsb | Unhealthy | Readiness probe failed: Get "https://10.134.0.36:8643/healthz": dial tcp 10.134.0.36:8643: connect: connection refused
kserve-ci-e2e-test | kubelet | isvc-paddle-runtime-predictor-7f4d4f9dc8-9thsb | Unhealthy | Readiness probe failed: dial tcp 10.134.0.36:8080: connect: connection refused (x7)
kserve-ci-e2e-test | kubelet | isvc-paddle-v2-kserve-predictor-7dbd59854-jphz7 | Started | Started container kserve-container
kserve-ci-e2e-test | kubelet | isvc-paddle-v2-kserve-predictor-7dbd59854-jphz7 | Pulled | Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine
kserve-ci-e2e-test | kubelet | isvc-paddle-v2-kserve-predictor-7dbd59854-jphz7 | Created | Created container: kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-paddle-v2-kserve-predictor-7dbd59854-jphz7 | Created | Created container: kserve-container
kserve-ci-e2e-test | kubelet | isvc-paddle-v2-kserve-predictor-7dbd59854-jphz7 | Pulled | Container image "kserve/paddleserver:latest" already present on machine
kserve-ci-e2e-test | kubelet | isvc-paddle-v2-kserve-predictor-7dbd59854-jphz7 | Started | Started container kube-rbac-proxy
kserve-ci-e2e-test | v1beta1Controllers | isvc-paddle-v2-kserve | VirtualServiceCRDNotFound | Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true. (x25)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-paddle-v2-kserve-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-paddle-v2-kserve-predictor | FailedGetResourceMetric | failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
kserve-ci-e2e-test | v1beta1Controllers | isvc-paddle-v2-kserve | InferenceServiceReady | InferenceService [isvc-paddle-v2-kserve] is Ready
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-paddle-v2-kserve-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready) (x3)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-paddle-v2-kserve-predictor | FailedGetResourceMetric | failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready) (x3)
kserve-ci-e2e-test | kubelet | isvc-paddle-v2-kserve-predictor-7dbd59854-jphz7 | Killing | Stopping container kube-rbac-proxy
kserve-ci-e2e-test | deployment-controller | isvc-pmml-predictor | ScalingReplicaSet | Scaled up replica set isvc-pmml-predictor-8bb578669 from 0 to 1
kserve-ci-e2e-test | kubelet | isvc-paddle-v2-kserve-predictor-7dbd59854-jphz7 | Killing | Stopping container kserve-container
kserve-ci-e2e-test | replicaset-controller | isvc-pmml-predictor-8bb578669 | SuccessfulCreate | Created pod: isvc-pmml-predictor-8bb578669-zzb69
kserve-ci-e2e-test | kubelet | isvc-pmml-predictor-8bb578669-zzb69 | Started | Started container storage-initializer
kserve-ci-e2e-test | multus | isvc-pmml-predictor-8bb578669-zzb69 | AddedInterface | Add eth0 [10.134.0.38/23] from ovn-kubernetes
kserve-ci-e2e-test | kubelet | isvc-pmml-predictor-8bb578669-zzb69 | Pulled | Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:3460f014e7dc0a9d3daafe0716ca9eadf865f2892e0a5103d0b876da9f34891f" already present on machine
kserve-ci-e2e-test | kubelet | isvc-pmml-predictor-8bb578669-zzb69 | Created | Created container: storage-initializer
kserve-ci-e2e-test | kubelet | isvc-paddle-v2-kserve-predictor-7dbd59854-jphz7 | Unhealthy | Readiness probe failed: dial tcp 10.134.0.37:8080: connect: connection refused (x7)
kserve-ci-e2e-test | kubelet | isvc-paddle-v2-kserve-predictor-7dbd59854-jphz7 | Unhealthy | Readiness probe failed: Get "https://10.134.0.37:8643/healthz": dial tcp 10.134.0.37:8643: connect: connection refused
kserve-ci-e2e-test | kubelet | isvc-pmml-predictor-8bb578669-zzb69 | Pulling | Pulling image "kserve/pmmlserver:latest"
kserve-ci-e2e-test | v1beta1Controllers | isvc-pmml | VirtualServiceCRDNotFound | Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true. (x25)
kserve-ci-e2e-test | kubelet | isvc-pmml-predictor-8bb578669-zzb69 | Started | Started container kserve-container
kserve-ci-e2e-test | kubelet | isvc-pmml-predictor-8bb578669-zzb69 | Started | Started container kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-pmml-predictor-8bb578669-zzb69 | Pulled | Successfully pulled image "kserve/pmmlserver:latest" in 6.88s (6.88s including waiting). Image size: 801068920 bytes.
kserve-ci-e2e-test | kubelet | isvc-pmml-predictor-8bb578669-zzb69 | Created | Created container: kserve-container
kserve-ci-e2e-test | kubelet | isvc-pmml-predictor-8bb578669-zzb69 | Pulled | Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine
kserve-ci-e2e-test | kubelet | isvc-pmml-predictor-8bb578669-zzb69 | Created | Created container: kube-rbac-proxy
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-pmml-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API (x3)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-pmml-predictor | FailedGetResourceMetric | failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API (x3)
kserve-ci-e2e-test | kubelet | isvc-pmml-predictor-8bb578669-zzb69 | Unhealthy | Readiness probe failed: dial tcp 10.134.0.38:8080: connect: connection refused (x10)
(x3)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-pmml-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
(x3)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-pmml-predictor

FailedGetResourceMetric

failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)

kserve-ci-e2e-test

v1beta1Controllers

isvc-pmml

InferenceServiceReady

InferenceService [isvc-pmml] is Ready

kserve-ci-e2e-test

kubelet

isvc-pmml-predictor-8bb578669-zzb69

Killing

Stopping container kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-pmml-predictor-8bb578669-zzb69

Killing

Stopping container kserve-container

kserve-ci-e2e-test

deployment-controller

isvc-pmml-runtime-predictor

ScalingReplicaSet

Scaled up replica set isvc-pmml-runtime-predictor-67bc544947 from 0 to 1

kserve-ci-e2e-test

replicaset-controller

isvc-pmml-runtime-predictor-67bc544947

SuccessfulCreate

Created pod: isvc-pmml-runtime-predictor-67bc544947-2zztw

kserve-ci-e2e-test

v1beta1Controllers

isvc-pmml-runtime

UpdateFailed

Failed to update status for InferenceService "isvc-pmml-runtime": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-pmml-runtime": the object has been modified; please apply your changes to the latest version and try again

kserve-ci-e2e-test

kubelet

isvc-pmml-predictor-8bb578669-zzb69

Unhealthy

Readiness probe failed: Get "https://10.134.0.38:8643/healthz": dial tcp 10.134.0.38:8643: connect: connection refused

kserve-ci-e2e-test

multus

isvc-pmml-runtime-predictor-67bc544947-2zztw

AddedInterface

Add eth0 [10.133.0.23/23] from ovn-kubernetes

kserve-ci-e2e-test

kubelet

isvc-pmml-runtime-predictor-67bc544947-2zztw

Pulling

Pulling image "quay.io/opendatahub/kserve-storage-initializer@sha256:3460f014e7dc0a9d3daafe0716ca9eadf865f2892e0a5103d0b876da9f34891f"

kserve-ci-e2e-test

kubelet

isvc-pmml-runtime-predictor-67bc544947-2zztw

Created

Created container: storage-initializer

kserve-ci-e2e-test

kubelet

isvc-pmml-runtime-predictor-67bc544947-2zztw

Started

Started container storage-initializer

kserve-ci-e2e-test

kubelet

isvc-pmml-runtime-predictor-67bc544947-2zztw

Pulled

Successfully pulled image "quay.io/opendatahub/kserve-storage-initializer@sha256:3460f014e7dc0a9d3daafe0716ca9eadf865f2892e0a5103d0b876da9f34891f" in 5.108s (5.108s including waiting). Image size: 301485528 bytes.
(x24)

kserve-ci-e2e-test

v1beta1Controllers

isvc-pmml-runtime

VirtualServiceCRDNotFound

Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.

kserve-ci-e2e-test

kubelet

isvc-pmml-runtime-predictor-67bc544947-2zztw

Pulling

Pulling image "kserve/pmmlserver:latest"

kserve-ci-e2e-test

kubelet

isvc-pmml-runtime-predictor-67bc544947-2zztw

Pulled

Successfully pulled image "kserve/pmmlserver:latest" in 7.11s (7.11s including waiting). Image size: 801068920 bytes.

kserve-ci-e2e-test

kubelet

isvc-pmml-runtime-predictor-67bc544947-2zztw

Created

Created container: kserve-container

kserve-ci-e2e-test

kubelet

isvc-pmml-runtime-predictor-67bc544947-2zztw

Started

Started container kserve-container

kserve-ci-e2e-test

kubelet

isvc-pmml-runtime-predictor-67bc544947-2zztw

Pulling

Pulling image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3"

kserve-ci-e2e-test

kubelet

isvc-pmml-runtime-predictor-67bc544947-2zztw

Pulled

Successfully pulled image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" in 2.408s (2.408s including waiting). Image size: 211946088 bytes.

kserve-ci-e2e-test

kubelet

isvc-pmml-runtime-predictor-67bc544947-2zztw

Created

Created container: kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-pmml-runtime-predictor-67bc544947-2zztw

Started

Started container kube-rbac-proxy
(x2)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-pmml-runtime-predictor

FailedGetResourceMetric

failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x2)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-pmml-runtime-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API

kserve-ci-e2e-test

v1beta1Controllers

isvc-pmml-runtime

InferenceServiceReady

InferenceService [isvc-pmml-runtime] is Ready
(x5)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-pmml-runtime-predictor

FailedGetResourceMetric

failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)

kserve-ci-e2e-test

kubelet

isvc-pmml-runtime-predictor-67bc544947-2zztw

Killing

Stopping container kserve-container

kserve-ci-e2e-test

kubelet

isvc-pmml-runtime-predictor-67bc544947-2zztw

Killing

Stopping container kube-rbac-proxy
(x5)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-pmml-runtime-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)

kserve-ci-e2e-test

deployment-controller

isvc-pmml-v2-kserve-predictor

ScalingReplicaSet

Scaled up replica set isvc-pmml-v2-kserve-predictor-6578f8ffc7 from 0 to 1

kserve-ci-e2e-test

kubelet

isvc-pmml-v2-kserve-predictor-6578f8ffc7-cx7b9

Started

Started container storage-initializer

kserve-ci-e2e-test

replicaset-controller

isvc-pmml-v2-kserve-predictor-6578f8ffc7

SuccessfulCreate

Created pod: isvc-pmml-v2-kserve-predictor-6578f8ffc7-cx7b9

kserve-ci-e2e-test

multus

isvc-pmml-v2-kserve-predictor-6578f8ffc7-cx7b9

AddedInterface

Add eth0 [10.133.0.24/23] from ovn-kubernetes

kserve-ci-e2e-test

kubelet

isvc-pmml-v2-kserve-predictor-6578f8ffc7-cx7b9

Pulled

Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:3460f014e7dc0a9d3daafe0716ca9eadf865f2892e0a5103d0b876da9f34891f" already present on machine

kserve-ci-e2e-test

kubelet

isvc-pmml-v2-kserve-predictor-6578f8ffc7-cx7b9

Created

Created container: storage-initializer

kserve-ci-e2e-test

kubelet

isvc-pmml-runtime-predictor-67bc544947-2zztw

Unhealthy

Readiness probe failed: Get "https://10.133.0.23:8643/healthz": dial tcp 10.133.0.23:8643: connect: connection refused
(x10)

kserve-ci-e2e-test

kubelet

isvc-pmml-runtime-predictor-67bc544947-2zztw

Unhealthy

Readiness probe failed: dial tcp 10.133.0.23:8080: connect: connection refused

kserve-ci-e2e-test

kubelet

isvc-pmml-v2-kserve-predictor-6578f8ffc7-cx7b9

Pulled

Container image "kserve/pmmlserver:latest" already present on machine

kserve-ci-e2e-test

kubelet

isvc-pmml-v2-kserve-predictor-6578f8ffc7-cx7b9

Created

Created container: kserve-container

kserve-ci-e2e-test

kubelet

isvc-pmml-v2-kserve-predictor-6578f8ffc7-cx7b9

Started

Started container kserve-container

kserve-ci-e2e-test

kubelet

isvc-pmml-v2-kserve-predictor-6578f8ffc7-cx7b9

Started

Started container kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-pmml-v2-kserve-predictor-6578f8ffc7-cx7b9

Pulled

Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine

kserve-ci-e2e-test

kubelet

isvc-pmml-v2-kserve-predictor-6578f8ffc7-cx7b9

Created

Created container: kube-rbac-proxy
(x25)

kserve-ci-e2e-test

v1beta1Controllers

isvc-pmml-v2-kserve

VirtualServiceCRDNotFound

Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-pmml-v2-kserve-predictor

FailedGetResourceMetric

failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-pmml-v2-kserve-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x10)

kserve-ci-e2e-test

kubelet

isvc-pmml-v2-kserve-predictor-6578f8ffc7-cx7b9

Unhealthy

Readiness probe failed: dial tcp 10.133.0.24:8080: connect: connection refused

kserve-ci-e2e-test

v1beta1Controllers

isvc-pmml-v2-kserve

InferenceServiceReady

InferenceService [isvc-pmml-v2-kserve] is Ready
(x5)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-pmml-v2-kserve-predictor

FailedGetResourceMetric

failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
(x5)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-pmml-v2-kserve-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)

kserve-ci-e2e-test

replicaset-controller

isvc-primary-77d500-predictor-8d9ffc784

SuccessfulCreate

Created pod: isvc-primary-77d500-predictor-8d9ffc784-l4s82

kserve-ci-e2e-test

deployment-controller

isvc-primary-77d500-predictor

ScalingReplicaSet

Scaled up replica set isvc-primary-77d500-predictor-8d9ffc784 from 0 to 1

kserve-ci-e2e-test

kubelet

isvc-pmml-v2-kserve-predictor-6578f8ffc7-cx7b9

Killing

Stopping container kserve-container

kserve-ci-e2e-test

kubelet

isvc-pmml-v2-kserve-predictor-6578f8ffc7-cx7b9

Killing

Stopping container kube-rbac-proxy

kserve-ci-e2e-test

multus

isvc-primary-77d500-predictor-8d9ffc784-l4s82

AddedInterface

Add eth0 [10.134.0.39/23] from ovn-kubernetes

kserve-ci-e2e-test

kubelet

isvc-primary-77d500-predictor-8d9ffc784-l4s82

Created

Created container: storage-initializer

kserve-ci-e2e-test

kubelet

isvc-primary-77d500-predictor-8d9ffc784-l4s82

Pulled

Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:3460f014e7dc0a9d3daafe0716ca9eadf865f2892e0a5103d0b876da9f34891f" already present on machine

kserve-ci-e2e-test

kubelet

isvc-primary-77d500-predictor-8d9ffc784-l4s82

Started

Started container storage-initializer

kserve-ci-e2e-test

kubelet

isvc-pmml-v2-kserve-predictor-6578f8ffc7-cx7b9

Unhealthy

Readiness probe failed: Get "https://10.133.0.24:8643/healthz": dial tcp 10.133.0.24:8643: connect: connection refused

kserve-ci-e2e-test

kubelet

isvc-primary-77d500-predictor-8d9ffc784-l4s82

Pulled

Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine

kserve-ci-e2e-test

kubelet

isvc-primary-77d500-predictor-8d9ffc784-l4s82

Started

Started container kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-primary-77d500-predictor-8d9ffc784-l4s82

Created

Created container: kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-primary-77d500-predictor-8d9ffc784-l4s82

Pulled

Container image "quay.io/opendatahub/sklearn-serving-runtime:odh-pr-1404" already present on machine

kserve-ci-e2e-test

kubelet

isvc-primary-77d500-predictor-8d9ffc784-l4s82

Created

Created container: kserve-container

kserve-ci-e2e-test

kubelet

isvc-primary-77d500-predictor-8d9ffc784-l4s82

Started

Started container kserve-container
(x3)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-primary-77d500-predictor

FailedGetResourceMetric

failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x3)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-primary-77d500-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x9)

kserve-ci-e2e-test

kubelet

isvc-primary-77d500-predictor-8d9ffc784-l4s82

Unhealthy

Readiness probe failed: dial tcp 10.134.0.39:8080: connect: connection refused
(x10)

kserve-ci-e2e-test

v1beta1Controllers

isvc-primary-77d500

VirtualServiceCRDNotFound

Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.

kserve-ci-e2e-test

v1beta1Controllers

isvc-primary-77d500

InferenceServiceReady

InferenceService [isvc-primary-77d500] is Ready

kserve-ci-e2e-test

replicaset-controller

isvc-secondary-77d500-predictor-7cbd677c59

SuccessfulCreate

Created pod: isvc-secondary-77d500-predictor-7cbd677c59-wpc97

kserve-ci-e2e-test

deployment-controller

isvc-secondary-77d500-predictor

ScalingReplicaSet

Scaled up replica set isvc-secondary-77d500-predictor-7cbd677c59 from 0 to 1
(x2)

kserve-ci-e2e-test

v1beta1Controllers

isvc-secondary-77d500

UpdateFailed

Failed to update status for InferenceService "isvc-secondary-77d500": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-secondary-77d500": the object has been modified; please apply your changes to the latest version and try again
(x2)

kserve-ci-e2e-test

v1beta1Controllers

isvc-secondary-77d500

InternalError

fails to update InferenceService status: Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-secondary-77d500": the object has been modified; please apply your changes to the latest version and try again

kserve-ci-e2e-test

multus

isvc-secondary-77d500-predictor-7cbd677c59-wpc97

AddedInterface

Add eth0 [10.134.0.40/23] from ovn-kubernetes
(x2)

kserve-ci-e2e-test

kubelet

isvc-secondary-77d500-predictor-7cbd677c59-wpc97

Pulled

Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:3460f014e7dc0a9d3daafe0716ca9eadf865f2892e0a5103d0b876da9f34891f" already present on machine
(x2)

kserve-ci-e2e-test

kubelet

isvc-secondary-77d500-predictor-7cbd677c59-wpc97

Created

Created container: storage-initializer
(x2)

kserve-ci-e2e-test

kubelet

isvc-secondary-77d500-predictor-7cbd677c59-wpc97

Started

Started container storage-initializer

kserve-ci-e2e-test

kubelet

isvc-secondary-77d500-predictor-7cbd677c59-wpc97

BackOff

Back-off restarting failed container storage-initializer in pod isvc-secondary-77d500-predictor-7cbd677c59-wpc97_kserve-ci-e2e-test(17d692a4-ac88-467f-97fe-4ec44b36dbb0)
(x3)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-primary-77d500-predictor

FailedGetResourceMetric

failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
(x3)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-primary-77d500-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
(x14)

kserve-ci-e2e-test

v1beta1Controllers

isvc-secondary-77d500

VirtualServiceCRDNotFound

Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-secondary-77d500-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-secondary-77d500-predictor

FailedGetResourceMetric

failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API

kserve-ci-e2e-test

v1beta1Controllers

isvc-init-fail-19fe47

InternalError

fails to update InferenceService status: Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-init-fail-19fe47": the object has been modified; please apply your changes to the latest version and try again

kserve-ci-e2e-test

kubelet

isvc-primary-77d500-predictor-8d9ffc784-l4s82

Killing

Stopping container kserve-container

kserve-ci-e2e-test

kubelet

isvc-primary-77d500-predictor-8d9ffc784-l4s82

Killing

Stopping container kube-rbac-proxy

kserve-ci-e2e-test

v1beta1Controllers

isvc-init-fail-19fe47

UpdateFailed

Failed to update status for InferenceService "isvc-init-fail-19fe47": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-init-fail-19fe47": the object has been modified; please apply your changes to the latest version and try again

kserve-ci-e2e-test

deployment-controller

isvc-init-fail-19fe47-predictor

ScalingReplicaSet

Scaled up replica set isvc-init-fail-19fe47-predictor-c5688fb5c from 0 to 1

kserve-ci-e2e-test

multus

isvc-init-fail-19fe47-predictor-c5688fb5c-bh8pm

AddedInterface

Add eth0 [10.134.0.41/23] from ovn-kubernetes

kserve-ci-e2e-test

replicaset-controller

isvc-init-fail-19fe47-predictor-c5688fb5c

SuccessfulCreate

Created pod: isvc-init-fail-19fe47-predictor-c5688fb5c-bh8pm

kserve-ci-e2e-test

kubelet

isvc-primary-77d500-predictor-8d9ffc784-l4s82

Unhealthy

Readiness probe failed: Get "https://10.134.0.39:8643/healthz": dial tcp 10.134.0.39:8643: connect: connection refused
(x2)

kserve-ci-e2e-test

kubelet

isvc-init-fail-19fe47-predictor-c5688fb5c-bh8pm

Started

Started container storage-initializer
(x2)

kserve-ci-e2e-test

kubelet

isvc-init-fail-19fe47-predictor-c5688fb5c-bh8pm

Pulled

Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:3460f014e7dc0a9d3daafe0716ca9eadf865f2892e0a5103d0b876da9f34891f" already present on machine
(x2)

kserve-ci-e2e-test

kubelet

isvc-init-fail-19fe47-predictor-c5688fb5c-bh8pm

Created

Created container: storage-initializer

kserve-ci-e2e-test

kubelet

isvc-init-fail-19fe47-predictor-c5688fb5c-bh8pm

BackOff

Back-off restarting failed container storage-initializer in pod isvc-init-fail-19fe47-predictor-c5688fb5c-bh8pm_kserve-ci-e2e-test(42088543-26c0-4162-a75a-dfc5448d4af4)
(x11)

kserve-ci-e2e-test

v1beta1Controllers

isvc-init-fail-19fe47

VirtualServiceCRDNotFound

Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.

kserve-ci-e2e-test

kubelet

isvc-predictive-sklearn-predictor-cd7c759c9-54b7w

Started

Started container storage-initializer

kserve-ci-e2e-test

multus

isvc-predictive-sklearn-predictor-cd7c759c9-54b7w

AddedInterface

Add eth0 [10.133.0.25/23] from ovn-kubernetes

kserve-ci-e2e-test

deployment-controller

isvc-predictive-sklearn-predictor

ScalingReplicaSet

Scaled up replica set isvc-predictive-sklearn-predictor-cd7c759c9 from 0 to 1

kserve-ci-e2e-test

v1beta1Controllers

isvc-predictive-sklearn

UpdateFailed

Failed to update status for InferenceService "isvc-predictive-sklearn": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-predictive-sklearn": the object has been modified; please apply your changes to the latest version and try again

kserve-ci-e2e-test

kubelet

isvc-predictive-sklearn-predictor-cd7c759c9-54b7w

Pulled

Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:3460f014e7dc0a9d3daafe0716ca9eadf865f2892e0a5103d0b876da9f34891f" already present on machine

kserve-ci-e2e-test

kubelet

isvc-predictive-sklearn-predictor-cd7c759c9-54b7w

Created

Created container: storage-initializer

kserve-ci-e2e-test

replicaset-controller

isvc-predictive-sklearn-predictor-cd7c759c9

SuccessfulCreate

Created pod: isvc-predictive-sklearn-predictor-cd7c759c9-54b7w

kserve-ci-e2e-test

kubelet

isvc-predictive-sklearn-predictor-cd7c759c9-54b7w

Pulling

Pulling image "kserve/predictiveserver:latest"
(x24)

kserve-ci-e2e-test

v1beta1Controllers

isvc-predictive-sklearn

VirtualServiceCRDNotFound

Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.

kserve-ci-e2e-test

kubelet

isvc-predictive-sklearn-predictor-cd7c759c9-54b7w

Created

Created container: kserve-container

kserve-ci-e2e-test

kubelet

isvc-predictive-sklearn-predictor-cd7c759c9-54b7w

Started

Started container kserve-container

kserve-ci-e2e-test

kubelet

isvc-predictive-sklearn-predictor-cd7c759c9-54b7w

Pulled

Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine

kserve-ci-e2e-test

kubelet

isvc-predictive-sklearn-predictor-cd7c759c9-54b7w

Created

Created container: kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-predictive-sklearn-predictor-cd7c759c9-54b7w

Started

Started container kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-predictive-sklearn-predictor-cd7c759c9-54b7w

Pulled

Successfully pulled image "kserve/predictiveserver:latest" in 23.906s (23.906s including waiting). Image size: 2325183853 bytes.
(x4)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-predictive-sklearn-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x4)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-predictive-sklearn-predictor

FailedGetResourceMetric

failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x3)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-predictive-sklearn-predictor

FailedGetResourceMetric

failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
(x3)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-predictive-sklearn-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)

kserve-ci-e2e-test

v1beta1Controllers

isvc-predictive-sklearn

InferenceServiceReady

InferenceService [isvc-predictive-sklearn] is Ready

kserve-ci-e2e-test

kubelet

isvc-predictive-sklearn-predictor-cd7c759c9-54b7w

Killing

Stopping container kserve-container

kserve-ci-e2e-test

kubelet

isvc-predictive-sklearn-predictor-cd7c759c9-54b7w

Killing

Stopping container kube-rbac-proxy

kserve-ci-e2e-test

deployment-controller

isvc-predictive-xgboost-predictor

ScalingReplicaSet

Scaled up replica set isvc-predictive-xgboost-predictor-7ff98fd74d from 0 to 1

kserve-ci-e2e-test

v1beta1Controllers

isvc-predictive-xgboost

UpdateFailed

Failed to update status for InferenceService "isvc-predictive-xgboost": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-predictive-xgboost": the object has been modified; please apply your changes to the latest version and try again

kserve-ci-e2e-test

multus

isvc-predictive-xgboost-predictor-7ff98fd74d-x6gsl

AddedInterface

Add eth0 [10.133.0.26/23] from ovn-kubernetes

kserve-ci-e2e-test

kubelet

isvc-predictive-xgboost-predictor-7ff98fd74d-x6gsl

Created

Created container: storage-initializer

kserve-ci-e2e-test

kubelet

isvc-predictive-sklearn-predictor-cd7c759c9-54b7w

Unhealthy

Readiness probe failed: Get "https://10.133.0.25:8643/healthz": dial tcp 10.133.0.25:8643: connect: connection refused

kserve-ci-e2e-test

kubelet

isvc-predictive-xgboost-predictor-7ff98fd74d-x6gsl

Started

Started container storage-initializer
(x10)

kserve-ci-e2e-test

kubelet

isvc-predictive-sklearn-predictor-cd7c759c9-54b7w

Unhealthy

Readiness probe failed: dial tcp 10.133.0.25:8080: connect: connection refused

kserve-ci-e2e-test

kubelet

isvc-predictive-xgboost-predictor-7ff98fd74d-x6gsl

Pulled

Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:3460f014e7dc0a9d3daafe0716ca9eadf865f2892e0a5103d0b876da9f34891f" already present on machine

kserve-ci-e2e-test

replicaset-controller

isvc-predictive-xgboost-predictor-7ff98fd74d

SuccessfulCreate

Created pod: isvc-predictive-xgboost-predictor-7ff98fd74d-x6gsl

kserve-ci-e2e-test

kubelet

isvc-predictive-xgboost-predictor-7ff98fd74d-x6gsl

Started

Started container kserve-container

kserve-ci-e2e-test

kubelet

isvc-predictive-xgboost-predictor-7ff98fd74d-x6gsl

Pulled

Container image "kserve/predictiveserver:latest" already present on machine

kserve-ci-e2e-test

kubelet

isvc-predictive-xgboost-predictor-7ff98fd74d-x6gsl

Pulled

Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine

kserve-ci-e2e-test

kubelet

isvc-predictive-xgboost-predictor-7ff98fd74d-x6gsl

Created

Created container: kserve-container

kserve-ci-e2e-test

kubelet

isvc-predictive-xgboost-predictor-7ff98fd74d-x6gsl

Created

Created container: kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-predictive-xgboost-predictor-7ff98fd74d-x6gsl

Started

Started container kube-rbac-proxy
(x24)

kserve-ci-e2e-test

v1beta1Controllers

isvc-predictive-xgboost

VirtualServiceCRDNotFound

Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.
(x3)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-predictive-xgboost-predictor

FailedGetResourceMetric

failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x3)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-predictive-xgboost-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API

kserve-ci-e2e-test

v1beta1Controllers

isvc-predictive-xgboost

InferenceServiceReady

InferenceService [isvc-predictive-xgboost] is Ready
(x3)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-predictive-xgboost-predictor

FailedGetResourceMetric

failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
(x3)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-predictive-xgboost-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)

kserve-ci-e2e-test

deployment-controller

isvc-predictive-lightgbm-predictor

ScalingReplicaSet

Scaled up replica set isvc-predictive-lightgbm-predictor-75cb94f9f from 0 to 1

kserve-ci-e2e-test

replicaset-controller

isvc-predictive-lightgbm-predictor-75cb94f9f

SuccessfulCreate

Created pod: isvc-predictive-lightgbm-predictor-75cb94f9f-8whhc

kserve-ci-e2e-test

multus

isvc-predictive-lightgbm-predictor-75cb94f9f-8whhc

AddedInterface

Add eth0 [10.133.0.27/23] from ovn-kubernetes

kserve-ci-e2e-test

kubelet

isvc-predictive-lightgbm-predictor-75cb94f9f-8whhc

Pulled

Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:3460f014e7dc0a9d3daafe0716ca9eadf865f2892e0a5103d0b876da9f34891f" already present on machine

kserve-ci-e2e-test

kubelet

isvc-predictive-lightgbm-predictor-75cb94f9f-8whhc

Created

Created container: storage-initializer

kserve-ci-e2e-test

kubelet

isvc-predictive-lightgbm-predictor-75cb94f9f-8whhc

Started

Started container storage-initializer

kserve-ci-e2e-test

kubelet

isvc-predictive-xgboost-predictor-7ff98fd74d-x6gsl

Killing

Stopping container kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-predictive-xgboost-predictor-7ff98fd74d-x6gsl

Killing

Stopping container kserve-container
(x11)

kserve-ci-e2e-test

kubelet

isvc-predictive-xgboost-predictor-7ff98fd74d-x6gsl

Unhealthy

Readiness probe failed: dial tcp 10.133.0.26:8080: connect: connection refused
kserve-ci-e2e-test | kubelet | isvc-predictive-xgboost-predictor-7ff98fd74d-x6gsl | Unhealthy | Readiness probe failed: Get "https://10.133.0.26:8643/healthz": dial tcp 10.133.0.26:8643: connect: connection refused
kserve-ci-e2e-test | kubelet | isvc-predictive-lightgbm-predictor-75cb94f9f-8whhc | Pulled | Container image "kserve/predictiveserver:latest" already present on machine
kserve-ci-e2e-test | kubelet | isvc-predictive-lightgbm-predictor-75cb94f9f-8whhc | Started | Started container kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-predictive-lightgbm-predictor-75cb94f9f-8whhc | Created | Created container: kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-predictive-lightgbm-predictor-75cb94f9f-8whhc | Pulled | Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine
kserve-ci-e2e-test | kubelet | isvc-predictive-lightgbm-predictor-75cb94f9f-8whhc | Started | Started container kserve-container
kserve-ci-e2e-test | kubelet | isvc-predictive-lightgbm-predictor-75cb94f9f-8whhc | Created | Created container: kserve-container
kserve-ci-e2e-test | v1beta1Controllers | isvc-predictive-lightgbm | VirtualServiceCRDNotFound (x25) | Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-predictive-lightgbm-predictor | FailedGetResourceMetric (x2) | failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-predictive-lightgbm-predictor | FailedComputeMetricsReplicas (x2) | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
kserve-ci-e2e-test | v1beta1Controllers | isvc-predictive-lightgbm | InferenceServiceReady | InferenceService [isvc-predictive-lightgbm] is Ready
kserve-ci-e2e-test | kubelet | isvc-predictive-lightgbm-predictor-75cb94f9f-8whhc | Killing | Stopping container kserve-container
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-predictive-lightgbm-predictor | FailedComputeMetricsReplicas (x4) | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
kserve-ci-e2e-test | deployment-controller | isvc-predictive-sklearn-v2-predictor | ScalingReplicaSet | Scaled up replica set isvc-predictive-sklearn-v2-predictor-b5d4f6b79 from 0 to 1
kserve-ci-e2e-test | replicaset-controller | isvc-predictive-sklearn-v2-predictor-b5d4f6b79 | SuccessfulCreate | Created pod: isvc-predictive-sklearn-v2-predictor-b5d4f6b79-5l4f9
kserve-ci-e2e-test | replicaset-controller | isvc-predictive-sklearn-v2-predictor-b5d4f6b79 | FailedCreate | Error creating: Internal error occurred: failed calling webhook "inferenceservice.kserve-webhook-server.pod-mutator": failed to call webhook: Post "https://kserve-webhook-server-service.kserve.svc:443/mutate-pods?timeout=10s": EOF
kserve-ci-e2e-test | kubelet | isvc-predictive-lightgbm-predictor-75cb94f9f-8whhc | Killing | Stopping container kube-rbac-proxy
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-predictive-lightgbm-predictor | FailedGetResourceMetric (x4) | failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
kserve-ci-e2e-test | kubelet | isvc-predictive-sklearn-v2-predictor-b5d4f6b79-5l4f9 | Started | Started container storage-initializer
kserve-ci-e2e-test | multus | isvc-predictive-sklearn-v2-predictor-b5d4f6b79-5l4f9 | AddedInterface | Add eth0 [10.133.0.28/23] from ovn-kubernetes
kserve-ci-e2e-test | kubelet | isvc-predictive-sklearn-v2-predictor-b5d4f6b79-5l4f9 | Created | Created container: storage-initializer
kserve-ci-e2e-test | kubelet | isvc-predictive-sklearn-v2-predictor-b5d4f6b79-5l4f9 | Pulled | Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:3460f014e7dc0a9d3daafe0716ca9eadf865f2892e0a5103d0b876da9f34891f" already present on machine
kserve-ci-e2e-test | kubelet | isvc-predictive-lightgbm-predictor-75cb94f9f-8whhc | Unhealthy | Readiness probe failed: Get "https://10.133.0.27:8643/healthz": dial tcp 10.133.0.27:8643: connect: connection refused
kserve-ci-e2e-test | kubelet | isvc-predictive-lightgbm-predictor-75cb94f9f-8whhc | Unhealthy (x10) | Readiness probe failed: dial tcp 10.133.0.27:8080: connect: connection refused
kserve-ci-e2e-test | kubelet | isvc-predictive-sklearn-v2-predictor-b5d4f6b79-5l4f9 | Started | Started container kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-predictive-sklearn-v2-predictor-b5d4f6b79-5l4f9 | Created | Created container: kserve-container
kserve-ci-e2e-test | kubelet | isvc-predictive-sklearn-v2-predictor-b5d4f6b79-5l4f9 | Started | Started container kserve-container
kserve-ci-e2e-test | kubelet | isvc-predictive-sklearn-v2-predictor-b5d4f6b79-5l4f9 | Created | Created container: kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-predictive-sklearn-v2-predictor-b5d4f6b79-5l4f9 | Pulled | Container image "kserve/predictiveserver:latest" already present on machine
kserve-ci-e2e-test | kubelet | isvc-predictive-sklearn-v2-predictor-b5d4f6b79-5l4f9 | Pulled | Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine
kserve-ci-e2e-test | v1beta1Controllers | isvc-predictive-sklearn-v2 | VirtualServiceCRDNotFound (x25) | Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-predictive-sklearn-v2-predictor | FailedComputeMetricsReplicas (x2) | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-predictive-sklearn-v2-predictor | FailedGetResourceMetric (x2) | failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
kserve-ci-e2e-test | v1beta1Controllers | isvc-predictive-sklearn-v2 | InferenceServiceReady | InferenceService [isvc-predictive-sklearn-v2] is Ready
kserve-ci-e2e-test | kubelet | isvc-predictive-sklearn-v2-predictor-b5d4f6b79-5l4f9 | Killing | Stopping container kserve-container
kserve-ci-e2e-test | v1beta1Controllers | isvc-predictive-xgboost-v2 | UpdateFailed | Failed to update status for InferenceService "isvc-predictive-xgboost-v2": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-predictive-xgboost-v2": the object has been modified; please apply your changes to the latest version and try again
kserve-ci-e2e-test | kubelet | isvc-predictive-sklearn-v2-predictor-b5d4f6b79-5l4f9 | Killing | Stopping container kube-rbac-proxy
kserve-ci-e2e-test | deployment-controller | isvc-predictive-xgboost-v2-predictor | ScalingReplicaSet | Scaled up replica set isvc-predictive-xgboost-v2-predictor-6577c65fd8 from 0 to 1
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-predictive-sklearn-v2-predictor | FailedComputeMetricsReplicas (x4) | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-predictive-sklearn-v2-predictor | FailedGetResourceMetric (x4) | failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
kserve-ci-e2e-test | replicaset-controller | isvc-predictive-xgboost-v2-predictor-6577c65fd8 | SuccessfulCreate | Created pod: isvc-predictive-xgboost-v2-predictor-6577c65fd8-svdg6
kserve-ci-e2e-test | multus | isvc-predictive-xgboost-v2-predictor-6577c65fd8-svdg6 | AddedInterface | Add eth0 [10.133.0.29/23] from ovn-kubernetes
kserve-ci-e2e-test | kubelet | isvc-predictive-xgboost-v2-predictor-6577c65fd8-svdg6 | Created | Created container: storage-initializer
kserve-ci-e2e-test | kubelet | isvc-predictive-xgboost-v2-predictor-6577c65fd8-svdg6 | Pulled | Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:3460f014e7dc0a9d3daafe0716ca9eadf865f2892e0a5103d0b876da9f34891f" already present on machine
kserve-ci-e2e-test | kubelet | isvc-predictive-xgboost-v2-predictor-6577c65fd8-svdg6 | Started | Started container storage-initializer
kserve-ci-e2e-test | kubelet | isvc-predictive-sklearn-v2-predictor-b5d4f6b79-5l4f9 | Unhealthy | Readiness probe failed: Get "https://10.133.0.28:8643/healthz": dial tcp 10.133.0.28:8643: connect: connection refused
kserve-ci-e2e-test | kubelet | isvc-predictive-sklearn-v2-predictor-b5d4f6b79-5l4f9 | Unhealthy (x5) | Readiness probe failed: Get "http://10.133.0.28:8080/v2/models/isvc-predictive-sklearn-v2/ready": dial tcp 10.133.0.28:8080: connect: connection refused
kserve-ci-e2e-test | kubelet | isvc-predictive-xgboost-v2-predictor-6577c65fd8-svdg6 | Pulled | Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine
kserve-ci-e2e-test | kubelet | isvc-predictive-xgboost-v2-predictor-6577c65fd8-svdg6 | Created | Created container: kserve-container
kserve-ci-e2e-test | kubelet | isvc-predictive-xgboost-v2-predictor-6577c65fd8-svdg6 | Started | Started container kserve-container
kserve-ci-e2e-test | kubelet | isvc-predictive-xgboost-v2-predictor-6577c65fd8-svdg6 | Pulled | Container image "kserve/predictiveserver:latest" already present on machine
kserve-ci-e2e-test | kubelet | isvc-predictive-xgboost-v2-predictor-6577c65fd8-svdg6 | Created | Created container: kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-predictive-xgboost-v2-predictor-6577c65fd8-svdg6 | Started | Started container kube-rbac-proxy
kserve-ci-e2e-test | v1beta1Controllers | isvc-predictive-xgboost-v2 | VirtualServiceCRDNotFound (x24) | Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-predictive-xgboost-v2-predictor | FailedComputeMetricsReplicas (x3) | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-predictive-xgboost-v2-predictor | FailedGetResourceMetric (x3) | failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
kserve-ci-e2e-test | v1beta1Controllers | isvc-predictive-xgboost-v2 | InferenceServiceReady | InferenceService [isvc-predictive-xgboost-v2] is Ready
kserve-ci-e2e-test | deployment-controller | isvc-predictive-lightgbm-v2-predictor | ScalingReplicaSet | Scaled up replica set isvc-predictive-lightgbm-v2-predictor-865b4598f7 from 0 to 1
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-predictive-xgboost-v2-predictor | FailedGetResourceMetric (x3) | failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-predictive-xgboost-v2-predictor | FailedComputeMetricsReplicas (x3) | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
kserve-ci-e2e-test | multus | isvc-predictive-lightgbm-v2-predictor-865b4598f7-ftgmz | AddedInterface | Add eth0 [10.133.0.30/23] from ovn-kubernetes
kserve-ci-e2e-test | kubelet | isvc-predictive-lightgbm-v2-predictor-865b4598f7-ftgmz | Started | Started container storage-initializer
kserve-ci-e2e-test | kubelet | isvc-predictive-lightgbm-v2-predictor-865b4598f7-ftgmz | Created | Created container: storage-initializer
kserve-ci-e2e-test | kubelet | isvc-predictive-lightgbm-v2-predictor-865b4598f7-ftgmz | Pulled | Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:3460f014e7dc0a9d3daafe0716ca9eadf865f2892e0a5103d0b876da9f34891f" already present on machine
kserve-ci-e2e-test | replicaset-controller | isvc-predictive-lightgbm-v2-predictor-865b4598f7 | SuccessfulCreate | Created pod: isvc-predictive-lightgbm-v2-predictor-865b4598f7-ftgmz
kserve-ci-e2e-test | kubelet | isvc-predictive-xgboost-v2-predictor-6577c65fd8-svdg6 | Killing | Stopping container kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-predictive-xgboost-v2-predictor-6577c65fd8-svdg6 | Killing | Stopping container kserve-container
kserve-ci-e2e-test | kubelet | isvc-predictive-xgboost-v2-predictor-6577c65fd8-svdg6 | Unhealthy | Readiness probe failed: Get "https://10.133.0.29:8643/healthz": dial tcp 10.133.0.29:8643: connect: connection refused
kserve-ci-e2e-test | kubelet | isvc-predictive-xgboost-v2-predictor-6577c65fd8-svdg6 | Unhealthy (x5) | Readiness probe failed: Get "http://10.133.0.29:8080/v2/models/isvc-predictive-xgboost-v2/ready": dial tcp 10.133.0.29:8080: connect: connection refused
kserve-ci-e2e-test | kubelet | isvc-predictive-lightgbm-v2-predictor-865b4598f7-ftgmz | Pulled | Container image "kserve/predictiveserver:latest" already present on machine
kserve-ci-e2e-test | kubelet | isvc-predictive-lightgbm-v2-predictor-865b4598f7-ftgmz | Started | Started container kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-predictive-lightgbm-v2-predictor-865b4598f7-ftgmz | Created | Created container: kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-predictive-lightgbm-v2-predictor-865b4598f7-ftgmz | Pulled | Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine
kserve-ci-e2e-test | kubelet | isvc-predictive-lightgbm-v2-predictor-865b4598f7-ftgmz | Started | Started container kserve-container
kserve-ci-e2e-test | kubelet | isvc-predictive-lightgbm-v2-predictor-865b4598f7-ftgmz | Created | Created container: kserve-container
kserve-ci-e2e-test | v1beta1Controllers | isvc-predictive-lightgbm-v2 | VirtualServiceCRDNotFound (x25) | Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-predictive-lightgbm-v2-predictor | FailedGetResourceMetric (x2) | failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-predictive-lightgbm-v2-predictor | FailedComputeMetricsReplicas (x2) | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
kserve-ci-e2e-test | v1beta1Controllers | isvc-predictive-lightgbm-v2 | InferenceServiceReady | InferenceService [isvc-predictive-lightgbm-v2] is Ready
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-predictive-lightgbm-v2-predictor | FailedComputeMetricsReplicas (x4) | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
kserve-ci-e2e-test | deployment-controller | isvc-sklearn-scheduler-predictor | ScalingReplicaSet | Scaled up replica set isvc-sklearn-scheduler-predictor-cd9c59d7 from 0 to 1
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-predictive-lightgbm-v2-predictor | FailedGetResourceMetric (x4) | failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
kserve-ci-e2e-test | kubelet | isvc-predictive-lightgbm-v2-predictor-865b4598f7-ftgmz | Killing | Stopping container kube-rbac-proxy
kserve-ci-e2e-test | replicaset-controller | isvc-sklearn-scheduler-predictor-cd9c59d7 | SuccessfulCreate | Created pod: isvc-sklearn-scheduler-predictor-cd9c59d7-9t72f
kserve-ci-e2e-test | v1beta1Controllers | isvc-sklearn-scheduler | VirtualServiceCRDNotFound (x3) | Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.
kserve-ci-e2e-test | kubelet | isvc-predictive-lightgbm-v2-predictor-865b4598f7-ftgmz | Killing | Stopping container kserve-container
kserve-ci-e2e-test | v1beta1Controllers | isvc-sklearn | UpdateFailed | Failed to update status for InferenceService "isvc-sklearn": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-sklearn": the object has been modified; please apply your changes to the latest version and try again
kserve-ci-e2e-test | replicaset-controller | isvc-sklearn-predictor-6875c879b7 | SuccessfulCreate | Created pod: isvc-sklearn-predictor-6875c879b7-96mbk
kserve-ci-e2e-test | deployment-controller | isvc-sklearn-predictor | ScalingReplicaSet | Scaled up replica set isvc-sklearn-predictor-6875c879b7 from 0 to 1
kserve-ci-e2e-test | kubelet | isvc-sklearn-predictor-6875c879b7-96mbk | Created | Created container: storage-initializer
kserve-ci-e2e-test | multus | isvc-sklearn-predictor-6875c879b7-96mbk | AddedInterface | Add eth0 [10.134.0.42/23] from ovn-kubernetes
kserve-ci-e2e-test | kubelet | isvc-sklearn-predictor-6875c879b7-96mbk | Pulled | Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:3460f014e7dc0a9d3daafe0716ca9eadf865f2892e0a5103d0b876da9f34891f" already present on machine
kserve-ci-e2e-test | kubelet | isvc-sklearn-predictor-6875c879b7-96mbk | Started | Started container storage-initializer
kserve-ci-e2e-test | kubelet | isvc-predictive-lightgbm-v2-predictor-865b4598f7-ftgmz | Unhealthy (x5) | Readiness probe failed: Get "http://10.133.0.30:8080/v2/models/isvc-predictive-lightgbm-v2/ready": dial tcp 10.133.0.30:8080: connect: connection refused
kserve-ci-e2e-test | kubelet | isvc-predictive-lightgbm-v2-predictor-865b4598f7-ftgmz | Unhealthy | Readiness probe failed: Get "https://10.133.0.30:8643/healthz": dial tcp 10.133.0.30:8643: connect: connection refused
kserve-ci-e2e-test | kubelet | isvc-sklearn-predictor-6875c879b7-96mbk | Created | Created container: kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-sklearn-predictor-6875c879b7-96mbk | Pulled | Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine
kserve-ci-e2e-test | kubelet | isvc-sklearn-predictor-6875c879b7-96mbk | Started | Started container kserve-container
kserve-ci-e2e-test | kubelet | isvc-sklearn-predictor-6875c879b7-96mbk | Created | Created container: kserve-container
kserve-ci-e2e-test | kubelet | isvc-sklearn-predictor-6875c879b7-96mbk | Pulled | Container image "quay.io/opendatahub/sklearn-serving-runtime:odh-pr-1404" already present on machine
kserve-ci-e2e-test | kubelet | isvc-sklearn-predictor-6875c879b7-96mbk | Started | Started container kube-rbac-proxy
kserve-ci-e2e-test | v1beta1Controllers | isvc-sklearn | VirtualServiceCRDNotFound (x24) | Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-sklearn-predictor | FailedGetResourceMetric | failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-sklearn-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
kserve-ci-e2e-test | v1beta1Controllers | isvc-sklearn | InferenceServiceReady | InferenceService [isvc-sklearn] is Ready
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-sklearn-predictor | FailedGetResourceMetric (x4) | failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-sklearn-predictor | FailedComputeMetricsReplicas (x4) | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
kserve-ci-e2e-test | kubelet | isvc-sklearn-predictor-6875c879b7-96mbk | Killing | Stopping container kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-sklearn-predictor-6875c879b7-96mbk | Killing | Stopping container kserve-container
kserve-ci-e2e-test | replicaset-controller | sklearn-v2-mlserver-predictor-65d8664766 | SuccessfulCreate | Created pod: sklearn-v2-mlserver-predictor-65d8664766-mw5mr
kserve-ci-e2e-test | deployment-controller | sklearn-v2-mlserver-predictor | ScalingReplicaSet | Scaled up replica set sklearn-v2-mlserver-predictor-65d8664766 from 0 to 1
kserve-ci-e2e-test | kubelet | sklearn-v2-mlserver-predictor-65d8664766-mw5mr | Created | Created container: storage-initializer
kserve-ci-e2e-test | kubelet | sklearn-v2-mlserver-predictor-65d8664766-mw5mr | Pulled | Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:3460f014e7dc0a9d3daafe0716ca9eadf865f2892e0a5103d0b876da9f34891f" already present on machine
kserve-ci-e2e-test | multus | sklearn-v2-mlserver-predictor-65d8664766-mw5mr | AddedInterface | Add eth0 [10.134.0.43/23] from ovn-kubernetes
kserve-ci-e2e-test | kubelet | isvc-sklearn-predictor-6875c879b7-96mbk | Unhealthy (x9) | Readiness probe failed: dial tcp 10.134.0.42:8080: connect: connection refused
kserve-ci-e2e-test | kubelet | sklearn-v2-mlserver-predictor-65d8664766-mw5mr | Started | Started container storage-initializer
kserve-ci-e2e-test | kubelet | isvc-sklearn-predictor-6875c879b7-96mbk | Unhealthy | Readiness probe failed: Get "https://10.134.0.42:8643/healthz": dial tcp 10.134.0.42:8643: connect: connection refused
kserve-ci-e2e-test | kubelet | sklearn-v2-mlserver-predictor-65d8664766-mw5mr | Created | Created container: kserve-container
kserve-ci-e2e-test | kubelet | sklearn-v2-mlserver-predictor-65d8664766-mw5mr | Pulled | Container image "docker.io/seldonio/mlserver:1.7.1" already present on machine
kserve-ci-e2e-test | kubelet | sklearn-v2-mlserver-predictor-65d8664766-mw5mr | Started | Started container kserve-container
kserve-ci-e2e-test | kubelet | sklearn-v2-mlserver-predictor-65d8664766-mw5mr | Pulled | Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine
kserve-ci-e2e-test | kubelet | sklearn-v2-mlserver-predictor-65d8664766-mw5mr | Created | Created container: kube-rbac-proxy
kserve-ci-e2e-test | kubelet | sklearn-v2-mlserver-predictor-65d8664766-mw5mr | Started | Started container kube-rbac-proxy
kserve-ci-e2e-test | v1beta1Controllers | sklearn-v2-mlserver | VirtualServiceCRDNotFound (x25) | Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.
kserve-ci-e2e-test | horizontal-pod-autoscaler | sklearn-v2-mlserver-predictor | FailedGetResourceMetric (x2) | failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
kserve-ci-e2e-test | horizontal-pod-autoscaler | sklearn-v2-mlserver-predictor | FailedComputeMetricsReplicas (x2) | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
kserve-ci-e2e-test | kubelet | sklearn-v2-mlserver-predictor-65d8664766-mw5mr | Unhealthy | Readiness probe failed: HTTP probe failed with statuscode: 400
kserve-ci-e2e-test | v1beta1Controllers | sklearn-v2-mlserver | InferenceServiceReady | InferenceService [sklearn-v2-mlserver] is Ready
kserve-ci-e2e-test | kubelet | sklearn-v2-mlserver-predictor-65d8664766-mw5mr | Killing | Stopping container kube-rbac-proxy
kserve-ci-e2e-test | kubelet | sklearn-v2-mlserver-predictor-65d8664766-mw5mr | Killing | Stopping container kserve-container
kserve-ci-e2e-test | replicaset-controller | isvc-sklearn-runtime-predictor-7c6499f57 | SuccessfulCreate | Created pod: isvc-sklearn-runtime-predictor-7c6499f57-hk2ls
kserve-ci-e2e-test | horizontal-pod-autoscaler | sklearn-v2-mlserver-predictor | FailedGetResourceMetric (x2) | failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
kserve-ci-e2e-test | horizontal-pod-autoscaler | sklearn-v2-mlserver-predictor | FailedComputeMetricsReplicas (x2) | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
kserve-ci-e2e-test | deployment-controller | isvc-sklearn-runtime-predictor | ScalingReplicaSet | Scaled up replica set isvc-sklearn-runtime-predictor-7c6499f57 from 0 to 1
kserve-ci-e2e-test | multus | isvc-sklearn-runtime-predictor-7c6499f57-hk2ls | AddedInterface | Add eth0 [10.134.0.44/23] from ovn-kubernetes
kserve-ci-e2e-test | kubelet | isvc-sklearn-runtime-predictor-7c6499f57-hk2ls | Pulled | Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:3460f014e7dc0a9d3daafe0716ca9eadf865f2892e0a5103d0b876da9f34891f" already present on machine
kserve-ci-e2e-test | kubelet | isvc-sklearn-runtime-predictor-7c6499f57-hk2ls | Created | Created container: storage-initializer
kserve-ci-e2e-test | kubelet | isvc-sklearn-runtime-predictor-7c6499f57-hk2ls | Started | Started container storage-initializer
kserve-ci-e2e-test | kubelet | sklearn-v2-mlserver-predictor-65d8664766-mw5mr | Unhealthy | Readiness probe failed: Get "https://10.134.0.43:8643/healthz": dial tcp 10.134.0.43:8643: connect: connection refused
kserve-ci-e2e-test | kubelet | isvc-sklearn-runtime-predictor-7c6499f57-hk2ls | Pulled | Container image "quay.io/opendatahub/sklearn-serving-runtime:odh-pr-1404" already present on machine
kserve-ci-e2e-test | kubelet | isvc-sklearn-runtime-predictor-7c6499f57-hk2ls | Created | Created container: kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-sklearn-runtime-predictor-7c6499f57-hk2ls | Pulled | Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine
kserve-ci-e2e-test | kubelet | isvc-sklearn-runtime-predictor-7c6499f57-hk2ls | Started | Started container kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-sklearn-runtime-predictor-7c6499f57-hk2ls | Created | Created container: kserve-container
kserve-ci-e2e-test | kubelet | isvc-sklearn-runtime-predictor-7c6499f57-hk2ls | Started | Started container kserve-container
kserve-ci-e2e-test | v1beta1Controllers | isvc-sklearn-runtime | VirtualServiceCRDNotFound (x25) | Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.
kserve-ci-e2e-test | kubelet | isvc-sklearn-runtime-predictor-7c6499f57-hk2ls | Unhealthy (x3) | Readiness probe failed: dial tcp 10.134.0.44:8080: connect: connection refused
kserve-ci-e2e-test | v1beta1Controllers | isvc-sklearn-runtime | InferenceServiceReady | InferenceService [isvc-sklearn-runtime] is Ready
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-sklearn-runtime-predictor | FailedComputeMetricsReplicas (x2) | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-sklearn-runtime-predictor | FailedGetResourceMetric (x2) | failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
kserve-ci-e2e-test | kubelet | isvc-sklearn-runtime-predictor-7c6499f57-hk2ls | Killing | Stopping container kube-rbac-proxy
kserve-ci-e2e-test | deployment-controller | isvc-sklearn-v2-runtime-predictor | ScalingReplicaSet | Scaled up replica set isvc-sklearn-v2-runtime-predictor-6d84c876f4 from 0 to 1
kserve-ci-e2e-test | replicaset-controller | isvc-sklearn-v2-runtime-predictor-6d84c876f4 | SuccessfulCreate | Created pod: isvc-sklearn-v2-runtime-predictor-6d84c876f4-8ctdr
kserve-ci-e2e-test | kubelet | isvc-sklearn-runtime-predictor-7c6499f57-hk2ls | Killing | Stopping container kserve-container
kserve-ci-e2e-test | kubelet | isvc-sklearn-v2-runtime-predictor-6d84c876f4-8ctdr | Started | Started container storage-initializer
kserve-ci-e2e-test | kubelet | isvc-sklearn-v2-runtime-predictor-6d84c876f4-8ctdr | Pulled | Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:3460f014e7dc0a9d3daafe0716ca9eadf865f2892e0a5103d0b876da9f34891f" already present on machine
kserve-ci-e2e-test | kubelet | isvc-sklearn-v2-runtime-predictor-6d84c876f4-8ctdr | Created | Created container: storage-initializer
kserve-ci-e2e-test | multus | isvc-sklearn-v2-runtime-predictor-6d84c876f4-8ctdr | AddedInterface | Add eth0 [10.134.0.45/23] from ovn-kubernetes
kserve-ci-e2e-test | kubelet | isvc-sklearn-v2-runtime-predictor-6d84c876f4-8ctdr | Created | Created container: kserve-container
kserve-ci-e2e-test | kubelet | isvc-sklearn-v2-runtime-predictor-6d84c876f4-8ctdr | Pulled | Container image "docker.io/seldonio/mlserver:1.7.1" already present on machine
kserve-ci-e2e-test | kubelet | isvc-sklearn-v2-runtime-predictor-6d84c876f4-8ctdr | Started | Started container kserve-container
kserve-ci-e2e-test | kubelet | isvc-sklearn-v2-runtime-predictor-6d84c876f4-8ctdr | Started | Started container kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-sklearn-v2-runtime-predictor-6d84c876f4-8ctdr | Created | Created container: kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-sklearn-v2-runtime-predictor-6d84c876f4-8ctdr | Pulled | Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine
kserve-ci-e2e-test | v1beta1Controllers | isvc-sklearn-v2-runtime | VirtualServiceCRDNotFound (x25) | Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-sklearn-v2-runtime-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-sklearn-v2-runtime-predictor | FailedGetResourceMetric | failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
kserve-ci-e2e-test | kubelet | isvc-sklearn-v2-runtime-predictor-6d84c876f4-8ctdr | Unhealthy | Readiness probe failed: HTTP probe failed with statuscode: 400
kserve-ci-e2e-test | v1beta1Controllers | isvc-sklearn-v2-runtime | InferenceServiceReady | InferenceService [isvc-sklearn-v2-runtime] is Ready
kserve-ci-e2e-test | deployment-controller | isvc-sklearn-v2-predictor | ScalingReplicaSet | Scaled up replica set isvc-sklearn-v2-predictor-64fcb8589f from 0 to 1
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-sklearn-v2-runtime-predictor | FailedComputeMetricsReplicas (x3) | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
kserve-ci-e2e-test | kubelet | isvc-sklearn-v2-runtime-predictor-6d84c876f4-8ctdr | Killing | Stopping container kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-sklearn-v2-runtime-predictor-6d84c876f4-8ctdr | Killing | Stopping container kserve-container
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-sklearn-v2-runtime-predictor | FailedGetResourceMetric (x3) | failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
kserve-ci-e2e-test | multus | isvc-sklearn-v2-predictor-64fcb8589f-wgmk5 | AddedInterface | Add eth0 [10.134.0.46/23] from ovn-kubernetes
kserve-ci-e2e-test | kubelet | isvc-sklearn-v2-predictor-64fcb8589f-wgmk5 | Pulled | Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:3460f014e7dc0a9d3daafe0716ca9eadf865f2892e0a5103d0b876da9f34891f" already present on machine
kserve-ci-e2e-test | kubelet | isvc-sklearn-v2-predictor-64fcb8589f-wgmk5 | Created | Created container: storage-initializer
kserve-ci-e2e-test | kubelet | isvc-sklearn-v2-predictor-64fcb8589f-wgmk5 | Started | Started container storage-initializer
kserve-ci-e2e-test | replicaset-controller | isvc-sklearn-v2-predictor-64fcb8589f | SuccessfulCreate | Created pod: isvc-sklearn-v2-predictor-64fcb8589f-wgmk5
kserve-ci-e2e-test | kubelet | isvc-sklearn-v2-runtime-predictor-6d84c876f4-8ctdr | Unhealthy | Readiness probe failed: Get "http://10.134.0.45:8080/v2/models/isvc-sklearn-v2-runtime/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
kserve-ci-e2e-test

kubelet

isvc-sklearn-v2-predictor-64fcb8589f-wgmk5

Pulled

Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine

kserve-ci-e2e-test

kubelet

isvc-sklearn-v2-predictor-64fcb8589f-wgmk5

Created

Created container: kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-sklearn-v2-predictor-64fcb8589f-wgmk5

Pulled

Container image "quay.io/opendatahub/sklearn-serving-runtime:odh-pr-1404" already present on machine

kserve-ci-e2e-test

kubelet

isvc-sklearn-v2-predictor-64fcb8589f-wgmk5

Created

Created container: kserve-container

kserve-ci-e2e-test

kubelet

isvc-sklearn-v2-predictor-64fcb8589f-wgmk5

Started

Started container kserve-container

kserve-ci-e2e-test

kubelet

isvc-sklearn-v2-predictor-64fcb8589f-wgmk5

Started

Started container kube-rbac-proxy
(x2)

kserve-ci-e2e-test

kubelet

isvc-sklearn-v2-runtime-predictor-6d84c876f4-8ctdr

Unhealthy

Readiness probe failed: Get "https://10.134.0.45:8643/healthz": dial tcp 10.134.0.45:8643: connect: connection refused
(x25)

kserve-ci-e2e-test

v1beta1Controllers

isvc-sklearn-v2

VirtualServiceCRDNotFound

Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-sklearn-v2-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-sklearn-v2-predictor

FailedGetResourceMetric

failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API

kserve-ci-e2e-test

v1beta1Controllers

isvc-sklearn-v2

InferenceServiceReady

InferenceService [isvc-sklearn-v2] is Ready
(x4)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-sklearn-v2-predictor

FailedGetResourceMetric

failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
(x4)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-sklearn-v2-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
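The FailedGetResourceMetric/FailedComputeMetricsReplicas pairs above are emitted by the HPA control loop when it cannot evaluate its scaling rule because the predictor pods are unready or the metrics API returns nothing. As a hedged illustration (not KServe code), the standard Kubernetes HPA rule is desiredReplicas = ceil(currentReplicas * currentMetric / targetMetric), with a no-op band around the target:

```python
import math

def desired_replicas(current_replicas: int, current_utilization: float,
                     target_utilization: float, tolerance: float = 0.1) -> int:
    """Sketch of the standard Kubernetes HPA scaling rule.

    When metrics are missing -- as in the FailedGetResourceMetric events
    above -- the controller cannot evaluate this ratio at all, so it
    records an event instead of scaling.
    """
    ratio = current_utilization / target_utilization
    if abs(ratio - 1.0) <= tolerance:  # within tolerance: leave replica count alone
        return current_replicas
    return math.ceil(current_replicas * ratio)

print(desired_replicas(1, 160.0, 80.0))  # utilization at double the target -> 2
print(desired_replicas(2, 80.0, 80.0))   # exactly on target -> stays at 2
```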

kserve-ci-e2e-test | deployment-controller | isvc-sklearn-v2-mixed-predictor | ScalingReplicaSet | Scaled up replica set isvc-sklearn-v2-mixed-predictor-86d7579fd6 from 0 to 1
kserve-ci-e2e-test | kubelet | isvc-sklearn-v2-mixed-predictor-86d7579fd6-87x9p | Pulled | Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:3460f014e7dc0a9d3daafe0716ca9eadf865f2892e0a5103d0b876da9f34891f" already present on machine
kserve-ci-e2e-test | multus | isvc-sklearn-v2-mixed-predictor-86d7579fd6-87x9p | AddedInterface | Add eth0 [10.134.0.47/23] from ovn-kubernetes
kserve-ci-e2e-test | kubelet | isvc-sklearn-v2-predictor-64fcb8589f-wgmk5 | Killing | Stopping container kserve-container
kserve-ci-e2e-test | replicaset-controller | isvc-sklearn-v2-mixed-predictor-86d7579fd6 | SuccessfulCreate | Created pod: isvc-sklearn-v2-mixed-predictor-86d7579fd6-87x9p
kserve-ci-e2e-test | kubelet | isvc-sklearn-v2-predictor-64fcb8589f-wgmk5 | Killing | Stopping container kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-sklearn-v2-mixed-predictor-86d7579fd6-87x9p | Started | Started container storage-initializer
kserve-ci-e2e-test | kubelet | isvc-sklearn-v2-mixed-predictor-86d7579fd6-87x9p | Created | Created container: storage-initializer
kserve-ci-e2e-test | kubelet | isvc-sklearn-v2-predictor-64fcb8589f-wgmk5 | Unhealthy | Readiness probe failed: dial tcp 10.134.0.46:8080: connect: connection refused (x9)
kserve-ci-e2e-test | kubelet | isvc-sklearn-v2-predictor-64fcb8589f-wgmk5 | Unhealthy | Readiness probe failed: Get "https://10.134.0.46:8643/healthz": dial tcp 10.134.0.46:8643: connect: connection refused
kserve-ci-e2e-test | kubelet | isvc-sklearn-v2-mixed-predictor-86d7579fd6-87x9p | Pulled | Container image "quay.io/opendatahub/sklearn-serving-runtime:odh-pr-1404" already present on machine
kserve-ci-e2e-test | kubelet | isvc-sklearn-v2-mixed-predictor-86d7579fd6-87x9p | Started | Started container kserve-container
kserve-ci-e2e-test | kubelet | isvc-sklearn-v2-mixed-predictor-86d7579fd6-87x9p | Created | Created container: kserve-container
kserve-ci-e2e-test | kubelet | isvc-sklearn-v2-mixed-predictor-86d7579fd6-87x9p | Started | Started container kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-sklearn-v2-mixed-predictor-86d7579fd6-87x9p | Pulled | Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine
kserve-ci-e2e-test | kubelet | isvc-sklearn-v2-mixed-predictor-86d7579fd6-87x9p | Created | Created container: kube-rbac-proxy
kserve-ci-e2e-test | v1beta1Controllers | isvc-sklearn-v2-mixed | VirtualServiceCRDNotFound | Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true. (x25)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-sklearn-v2-mixed-predictor | FailedGetResourceMetric | failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-sklearn-v2-mixed-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
kserve-ci-e2e-test | v1beta1Controllers | isvc-sklearn-v2-mixed | InferenceServiceReady | InferenceService [isvc-sklearn-v2-mixed] is Ready
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-sklearn-v2-mixed-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready) (x4)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-sklearn-v2-mixed-predictor | FailedGetResourceMetric | failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready) (x4)

kserve-ci-e2e-test | replicaset-controller | isvc-tensorflow-predictor-6756f669d7 | SuccessfulCreate | Created pod: isvc-tensorflow-predictor-6756f669d7-wpjsw
kserve-ci-e2e-test | deployment-controller | isvc-tensorflow-predictor | ScalingReplicaSet | Scaled up replica set isvc-tensorflow-predictor-6756f669d7 from 0 to 1
kserve-ci-e2e-test | kubelet | isvc-tensorflow-predictor-6756f669d7-wpjsw | Created | Created container: storage-initializer
kserve-ci-e2e-test | kubelet | isvc-tensorflow-predictor-6756f669d7-wpjsw | Started | Started container storage-initializer
kserve-ci-e2e-test | multus | isvc-tensorflow-predictor-6756f669d7-wpjsw | AddedInterface | Add eth0 [10.133.0.31/23] from ovn-kubernetes
kserve-ci-e2e-test | kubelet | isvc-tensorflow-predictor-6756f669d7-wpjsw | Pulled | Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:3460f014e7dc0a9d3daafe0716ca9eadf865f2892e0a5103d0b876da9f34891f" already present on machine
kserve-ci-e2e-test | kubelet | isvc-sklearn-v2-mixed-predictor-86d7579fd6-87x9p | Killing | Stopping container kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-sklearn-v2-mixed-predictor-86d7579fd6-87x9p | Killing | Stopping container kserve-container
kserve-ci-e2e-test | kubelet | isvc-sklearn-v2-mixed-predictor-86d7579fd6-87x9p | Unhealthy | Readiness probe failed: Get "https://10.134.0.47:8643/healthz": dial tcp 10.134.0.47:8643: connect: connection refused
kserve-ci-e2e-test | kubelet | isvc-sklearn-v2-mixed-predictor-86d7579fd6-87x9p | Unhealthy | Readiness probe failed: dial tcp 10.134.0.47:8080: connect: connection refused (x9)
kserve-ci-e2e-test | kubelet | isvc-tensorflow-predictor-6756f669d7-wpjsw | Pulling | Pulling image "tensorflow/serving:2.6.2"
kserve-ci-e2e-test | v1beta1Controllers | isvc-tensorflow | VirtualServiceCRDNotFound | Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true. (x25)
kserve-ci-e2e-test | kubelet | isvc-tensorflow-predictor-6756f669d7-wpjsw | Pulled | Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine
kserve-ci-e2e-test | kubelet | isvc-tensorflow-predictor-6756f669d7-wpjsw | Created | Created container: kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-tensorflow-predictor-6756f669d7-wpjsw | Started | Started container kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-tensorflow-predictor-6756f669d7-wpjsw | Started | Started container kserve-container
kserve-ci-e2e-test | kubelet | isvc-tensorflow-predictor-6756f669d7-wpjsw | Pulled | Successfully pulled image "tensorflow/serving:2.6.2" in 4.455s (4.455s including waiting). Image size: 425873876 bytes.
kserve-ci-e2e-test | kubelet | isvc-tensorflow-predictor-6756f669d7-wpjsw | Created | Created container: kserve-container
kserve-ci-e2e-test | kubelet | isvc-tensorflow-predictor-6756f669d7-wpjsw | Unhealthy | Readiness probe failed: dial tcp 10.133.0.31:8080: connect: connection refused (x3)
kserve-ci-e2e-test | v1beta1Controllers | isvc-tensorflow | InferenceServiceReady | InferenceService [isvc-tensorflow] is Ready
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-tensorflow-predictor | FailedGetResourceMetric | failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API (x3)
kserve-ci-e2e-test | replicaset-controller | isvc-tensorflow-runtime-predictor-8699d78cf | SuccessfulCreate | Created pod: isvc-tensorflow-runtime-predictor-8699d78cf-fh7l5
kserve-ci-e2e-test | deployment-controller | isvc-tensorflow-runtime-predictor | ScalingReplicaSet | Scaled up replica set isvc-tensorflow-runtime-predictor-8699d78cf from 0 to 1
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-tensorflow-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API (x3)
kserve-ci-e2e-test | kubelet | isvc-tensorflow-predictor-6756f669d7-wpjsw | Killing | Stopping container kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-tensorflow-predictor-6756f669d7-wpjsw | Killing | Stopping container kserve-container
kserve-ci-e2e-test | multus | isvc-tensorflow-runtime-predictor-8699d78cf-fh7l5 | AddedInterface | Add eth0 [10.133.0.32/23] from ovn-kubernetes
kserve-ci-e2e-test | kubelet | isvc-tensorflow-runtime-predictor-8699d78cf-fh7l5 | Started | Started container storage-initializer
kserve-ci-e2e-test | kubelet | isvc-tensorflow-runtime-predictor-8699d78cf-fh7l5 | Created | Created container: storage-initializer
kserve-ci-e2e-test | kubelet | isvc-tensorflow-runtime-predictor-8699d78cf-fh7l5 | Pulled | Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:3460f014e7dc0a9d3daafe0716ca9eadf865f2892e0a5103d0b876da9f34891f" already present on machine
kserve-ci-e2e-test | kubelet | isvc-tensorflow-runtime-predictor-8699d78cf-fh7l5 | Pulled | Container image "tensorflow/serving:2.6.2" already present on machine
kserve-ci-e2e-test | kubelet | isvc-tensorflow-runtime-predictor-8699d78cf-fh7l5 | Created | Created container: kserve-container
kserve-ci-e2e-test | kubelet | isvc-tensorflow-runtime-predictor-8699d78cf-fh7l5 | Started | Started container kserve-container
kserve-ci-e2e-test | kubelet | isvc-tensorflow-runtime-predictor-8699d78cf-fh7l5 | Pulled | Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine
kserve-ci-e2e-test | kubelet | isvc-tensorflow-runtime-predictor-8699d78cf-fh7l5 | Created | Created container: kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-tensorflow-runtime-predictor-8699d78cf-fh7l5 | Started | Started container kube-rbac-proxy
kserve-ci-e2e-test | v1beta1Controllers | isvc-tensorflow-runtime | VirtualServiceCRDNotFound | Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true. (x25)
kserve-ci-e2e-test | kubelet | isvc-tensorflow-runtime-predictor-8699d78cf-fh7l5 | Unhealthy | Readiness probe failed: dial tcp 10.133.0.32:8080: connect: connection refused (x3)
kserve-ci-e2e-test | v1beta1Controllers | isvc-tensorflow-runtime | InferenceServiceReady | InferenceService [isvc-tensorflow-runtime] is Ready
kserve-ci-e2e-test | kubelet | isvc-tensorflow-predictor-6756f669d7-wpjsw | Unhealthy | Readiness probe failed: Get "https://10.133.0.31:8643/healthz": dial tcp 10.133.0.31:8643: connect: connection refused (x6)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-tensorflow-runtime-predictor | FailedGetResourceMetric | failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API (x2)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-tensorflow-runtime-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API (x2)

kserve-ci-e2e-test | kubelet | isvc-tensorflow-runtime-predictor-8699d78cf-fh7l5 | Killing | Stopping container kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-triton-predictor-84bb65d94b-jvtld | Pulled | Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:3460f014e7dc0a9d3daafe0716ca9eadf865f2892e0a5103d0b876da9f34891f" already present on machine
kserve-ci-e2e-test | kubelet | isvc-triton-predictor-84bb65d94b-jvtld | Created | Created container: storage-initializer
kserve-ci-e2e-test | kubelet | isvc-triton-predictor-84bb65d94b-jvtld | Started | Started container storage-initializer
kserve-ci-e2e-test | multus | isvc-triton-predictor-84bb65d94b-jvtld | AddedInterface | Add eth0 [10.133.0.33/23] from ovn-kubernetes
kserve-ci-e2e-test | kubelet | isvc-tensorflow-runtime-predictor-8699d78cf-fh7l5 | Killing | Stopping container kserve-container
kserve-ci-e2e-test | replicaset-controller | isvc-triton-predictor-84bb65d94b | SuccessfulCreate | Created pod: isvc-triton-predictor-84bb65d94b-jvtld
kserve-ci-e2e-test | deployment-controller | isvc-triton-predictor | ScalingReplicaSet | Scaled up replica set isvc-triton-predictor-84bb65d94b from 0 to 1
kserve-ci-e2e-test | kubelet | isvc-triton-predictor-84bb65d94b-jvtld | Pulling | Pulling image "nvcr.io/nvidia/tritonserver:23.05-py3"
kserve-ci-e2e-test | v1beta1Controllers | isvc-triton | VirtualServiceCRDNotFound | Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true. (x25)
kserve-ci-e2e-test | kubelet | isvc-tensorflow-runtime-predictor-8699d78cf-fh7l5 | Unhealthy | Readiness probe failed: Get "https://10.133.0.32:8643/healthz": dial tcp 10.133.0.32:8643: connect: connection refused (x6)
kserve-ci-e2e-test | kubelet | isvc-triton-predictor-84bb65d94b-jvtld | Created | Created container: kserve-container
kserve-ci-e2e-test | kubelet | isvc-triton-predictor-84bb65d94b-jvtld | Pulled | Successfully pulled image "nvcr.io/nvidia/tritonserver:23.05-py3" in 1m55.146s (1m55.146s including waiting). Image size: 12907074623 bytes.
kserve-ci-e2e-test | kubelet | isvc-triton-predictor-84bb65d94b-jvtld | Created | Created container: kube-rbac-proxy
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-triton-predictor | FailedGetResourceMetric | failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API (x8)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-triton-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API (x8)
kserve-ci-e2e-test | kubelet | isvc-triton-predictor-84bb65d94b-jvtld | Pulled | Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine
kserve-ci-e2e-test | kubelet | isvc-triton-predictor-84bb65d94b-jvtld | Started | Started container kserve-container
kserve-ci-e2e-test | kubelet | isvc-triton-predictor-84bb65d94b-jvtld | Started | Started container kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-triton-predictor-84bb65d94b-jvtld | Unhealthy | Readiness probe failed: dial tcp 10.133.0.33:8080: connect: connection refused (x2)
kserve-ci-e2e-test | v1beta1Controllers | isvc-triton | InferenceServiceReady | InferenceService [isvc-triton] is Ready
kserve-ci-e2e-test | kubelet | isvc-triton-predictor-84bb65d94b-jvtld | Killing | Stopping container kserve-container
kserve-ci-e2e-test | kubelet | isvc-triton-predictor-84bb65d94b-jvtld | Killing | Stopping container kube-rbac-proxy

kserve-ci-e2e-test | deployment-controller | isvc-xgboost-predictor | ScalingReplicaSet | Scaled up replica set isvc-xgboost-predictor-8689c4cfcc from 0 to 1
kserve-ci-e2e-test | replicaset-controller | isvc-xgboost-predictor-8689c4cfcc | SuccessfulCreate | Created pod: isvc-xgboost-predictor-8689c4cfcc-w8f78
kserve-ci-e2e-test | kubelet | isvc-xgboost-predictor-8689c4cfcc-w8f78 | Created | Created container: storage-initializer
kserve-ci-e2e-test | multus | isvc-xgboost-predictor-8689c4cfcc-w8f78 | AddedInterface | Add eth0 [10.133.0.34/23] from ovn-kubernetes
kserve-ci-e2e-test | kubelet | isvc-xgboost-predictor-8689c4cfcc-w8f78 | Started | Started container storage-initializer
kserve-ci-e2e-test | kubelet | isvc-xgboost-predictor-8689c4cfcc-w8f78 | Pulled | Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:3460f014e7dc0a9d3daafe0716ca9eadf865f2892e0a5103d0b876da9f34891f" already present on machine
kserve-ci-e2e-test | kubelet | isvc-triton-predictor-84bb65d94b-jvtld | Unhealthy | Readiness probe failed: Get "https://10.133.0.33:8643/healthz": dial tcp 10.133.0.33:8643: connect: connection refused
kserve-ci-e2e-test | kubelet | isvc-xgboost-predictor-8689c4cfcc-w8f78 | Pulling | Pulling image "kserve/xgbserver:latest"
kserve-ci-e2e-test | v1beta1Controllers | isvc-xgboost | VirtualServiceCRDNotFound | Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true. (x25)
kserve-ci-e2e-test | kubelet | isvc-xgboost-predictor-8689c4cfcc-w8f78 | Started | Started container kserve-container
kserve-ci-e2e-test | kubelet | isvc-xgboost-predictor-8689c4cfcc-w8f78 | Started | Started container kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-xgboost-predictor-8689c4cfcc-w8f78 | Created | Created container: kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-xgboost-predictor-8689c4cfcc-w8f78 | Pulled | Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine
kserve-ci-e2e-test | kubelet | isvc-xgboost-predictor-8689c4cfcc-w8f78 | Created | Created container: kserve-container
kserve-ci-e2e-test | kubelet | isvc-xgboost-predictor-8689c4cfcc-w8f78 | Pulled | Successfully pulled image "kserve/xgbserver:latest" in 21.566s (21.566s including waiting). Image size: 1306329851 bytes.
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-xgboost-predictor | FailedGetResourceMetric | failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API (x4)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-xgboost-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API (x4)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-xgboost-predictor | FailedGetResourceMetric | failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready) (x2)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-xgboost-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready) (x2)
kserve-ci-e2e-test | v1beta1Controllers | isvc-xgboost | InferenceServiceReady | InferenceService [isvc-xgboost] is Ready
kserve-ci-e2e-test | kubelet | isvc-xgboost-predictor-8689c4cfcc-w8f78 | Killing | Stopping container kube-rbac-proxy
kserve-ci-e2e-test | deployment-controller | isvc-xgboost-v2-mlserver-predictor | ScalingReplicaSet | Scaled up replica set isvc-xgboost-v2-mlserver-predictor-67d4bc6646 from 0 to 1
kserve-ci-e2e-test | v1beta1Controllers | isvc-xgboost-v2-mlserver | UpdateFailed | Failed to update status for InferenceService "isvc-xgboost-v2-mlserver": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-xgboost-v2-mlserver": the object has been modified; please apply your changes to the latest version and try again
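The UpdateFailed event above ("the object has been modified; please apply your changes to the latest version and try again") is the standard Kubernetes optimistic-concurrency conflict: a writer submitted an object carrying a stale resourceVersion. The usual client-side remedy is a get-modify-update retry loop. Below is a hedged, self-contained sketch of that pattern; the Store class is a hypothetical in-memory stand-in for the API server, not KServe or client-go code:

```python
class Conflict(Exception):
    """Stand-in for the API server's 409 Conflict response."""

class Store:
    """Hypothetical in-memory stand-in for an apiserver-backed object."""
    def __init__(self):
        self.resource_version = 0
        self.status = "Pending"

    def get(self):
        return {"resourceVersion": self.resource_version, "status": self.status}

    def update(self, obj):
        # Reject writes based on a stale resourceVersion, as the API server does.
        if obj["resourceVersion"] != self.resource_version:
            raise Conflict("the object has been modified; please apply your "
                           "changes to the latest version and try again")
        self.resource_version += 1
        self.status = obj["status"]

def update_status_with_retry(store, new_status, retries=3):
    """Get-modify-update loop: re-read and re-apply the change on conflict."""
    for _ in range(retries):
        obj = store.get()           # re-read the latest version
        obj["status"] = new_status  # re-apply the change on top of it
        try:
            store.update(obj)
            return True
        except Conflict:
            continue                # another writer won the race; retry
    return False

store = Store()
stale = store.get()                                        # read at version 0
store.update({"resourceVersion": 0, "status": "Progressing"})  # concurrent writer wins
try:
    stale["status"] = "Ready"
    store.update(stale)            # stale write: raises Conflict, like UpdateFailed
except Conflict as exc:
    print(exc)
update_status_with_retry(store, "Ready")  # the retry loop succeeds
print(store.status)
```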

kserve-ci-e2e-test

kubelet

isvc-xgboost-predictor-8689c4cfcc-w8f78

Killing

Stopping container kserve-container

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-mlserver-predictor-67d4bc6646-cq8m2

Pulled

Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:3460f014e7dc0a9d3daafe0716ca9eadf865f2892e0a5103d0b876da9f34891f" already present on machine

kserve-ci-e2e-test

multus

isvc-xgboost-v2-mlserver-predictor-67d4bc6646-cq8m2

AddedInterface

Add eth0 [10.134.0.48/23] from ovn-kubernetes

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-mlserver-predictor-67d4bc6646-cq8m2

Created

Created container: storage-initializer

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-mlserver-predictor-67d4bc6646-cq8m2

Started

Started container storage-initializer

kserve-ci-e2e-test

replicaset-controller

isvc-xgboost-v2-mlserver-predictor-67d4bc6646

SuccessfulCreate

Created pod: isvc-xgboost-v2-mlserver-predictor-67d4bc6646-cq8m2

kserve-ci-e2e-test

kubelet

isvc-xgboost-predictor-8689c4cfcc-w8f78

Unhealthy

Readiness probe failed: Get "https://10.133.0.34:8643/healthz": dial tcp 10.133.0.34:8643: connect: connection refused
(x9)

kserve-ci-e2e-test

kubelet

isvc-xgboost-predictor-8689c4cfcc-w8f78

Unhealthy

Readiness probe failed: dial tcp 10.133.0.34:8080: connect: connection refused

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-mlserver-predictor-67d4bc6646-cq8m2

Pulled

Container image "docker.io/seldonio/mlserver:1.7.1" already present on machine

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-mlserver-predictor-67d4bc6646-cq8m2

Created

Created container: kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-mlserver-predictor-67d4bc6646-cq8m2

Pulled

Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-mlserver-predictor-67d4bc6646-cq8m2

Started

Started container kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-mlserver-predictor-67d4bc6646-cq8m2

Started

Started container kserve-container

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-mlserver-predictor-67d4bc6646-cq8m2

Created

Created container: kserve-container
(x24)

kserve-ci-e2e-test

v1beta1Controllers

isvc-xgboost-v2-mlserver

VirtualServiceCRDNotFound

Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-xgboost-v2-mlserver-predictor

FailedGetResourceMetric

failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-xgboost-v2-mlserver-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)

kserve-ci-e2e-test

v1beta1Controllers

isvc-xgboost-v2-mlserver

InferenceServiceReady

InferenceService [isvc-xgboost-v2-mlserver] is Ready
(x2)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-xgboost-v2-mlserver-predictor

FailedGetResourceMetric

failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x2)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-xgboost-v2-mlserver-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-mlserver-predictor-67d4bc6646-cq8m2

Killing

Stopping container kserve-container

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-mlserver-predictor-67d4bc6646-cq8m2

Killing

Stopping container kube-rbac-proxy

kserve-ci-e2e-test

v1beta1Controllers

xgboost-v2-mlserver

UpdateFailed

Failed to update status for InferenceService "xgboost-v2-mlserver": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "xgboost-v2-mlserver": the object has been modified; please apply your changes to the latest version and try again

kserve-ci-e2e-test

deployment-controller

xgboost-v2-mlserver-predictor

ScalingReplicaSet

Scaled up replica set xgboost-v2-mlserver-predictor-7799869d6f from 0 to 1

kserve-ci-e2e-test

replicaset-controller

xgboost-v2-mlserver-predictor-7799869d6f

SuccessfulCreate

Created pod: xgboost-v2-mlserver-predictor-7799869d6f-v4hvs

kserve-ci-e2e-test

kubelet

xgboost-v2-mlserver-predictor-7799869d6f-v4hvs

Started

Started container storage-initializer

kserve-ci-e2e-test

kubelet

xgboost-v2-mlserver-predictor-7799869d6f-v4hvs

Created

Created container: storage-initializer

kserve-ci-e2e-test

kubelet

xgboost-v2-mlserver-predictor-7799869d6f-v4hvs

Pulled

Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:3460f014e7dc0a9d3daafe0716ca9eadf865f2892e0a5103d0b876da9f34891f" already present on machine

kserve-ci-e2e-test

multus

xgboost-v2-mlserver-predictor-7799869d6f-v4hvs

AddedInterface

Add eth0 [10.134.0.49/23] from ovn-kubernetes

kserve-ci-e2e-test

kubelet

xgboost-v2-mlserver-predictor-7799869d6f-v4hvs

Started

Started container kserve-container

kserve-ci-e2e-test

kubelet

xgboost-v2-mlserver-predictor-7799869d6f-v4hvs

Created

Created container: kube-rbac-proxy

kserve-ci-e2e-test

kubelet

xgboost-v2-mlserver-predictor-7799869d6f-v4hvs

Pulled

Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine

kserve-ci-e2e-test

kubelet

xgboost-v2-mlserver-predictor-7799869d6f-v4hvs

Created

Created container: kserve-container

kserve-ci-e2e-test

kubelet

xgboost-v2-mlserver-predictor-7799869d6f-v4hvs

Started

Started container kube-rbac-proxy

kserve-ci-e2e-test

kubelet

xgboost-v2-mlserver-predictor-7799869d6f-v4hvs

Pulled

Container image "docker.io/seldonio/mlserver:1.7.1" already present on machine
(x2)

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-mlserver-predictor-67d4bc6646-cq8m2

Unhealthy

Readiness probe failed: Get "https://10.134.0.48:8643/healthz": dial tcp 10.134.0.48:8643: connect: connection refused
(x24)

kserve-ci-e2e-test

v1beta1Controllers

xgboost-v2-mlserver

VirtualServiceCRDNotFound

Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.

kserve-ci-e2e-test

v1beta1Controllers

xgboost-v2-mlserver

InferenceServiceReady

InferenceService [xgboost-v2-mlserver] is Ready
(x3)

kserve-ci-e2e-test

horizontal-pod-autoscaler

xgboost-v2-mlserver-predictor

FailedGetResourceMetric

failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x3)

kserve-ci-e2e-test

horizontal-pod-autoscaler

xgboost-v2-mlserver-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API

kserve-ci-e2e-test

kubelet

xgboost-v2-mlserver-predictor-7799869d6f-v4hvs

Killing

Stopping container kserve-container

kserve-ci-e2e-test

kubelet

xgboost-v2-mlserver-predictor-7799869d6f-v4hvs

Killing

Stopping container kube-rbac-proxy

kserve-ci-e2e-test

replicaset-controller

isvc-xgboost-runtime-predictor-779db84d9

SuccessfulCreate

Created pod: isvc-xgboost-runtime-predictor-779db84d9-w5ldr

kserve-ci-e2e-test

deployment-controller

isvc-xgboost-runtime-predictor

ScalingReplicaSet

Scaled up replica set isvc-xgboost-runtime-predictor-779db84d9 from 0 to 1

kserve-ci-e2e-test

kubelet

isvc-xgboost-runtime-predictor-779db84d9-w5ldr

Created

Created container: storage-initializer

kserve-ci-e2e-test

kubelet

isvc-xgboost-runtime-predictor-779db84d9-w5ldr

Pulled

Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:3460f014e7dc0a9d3daafe0716ca9eadf865f2892e0a5103d0b876da9f34891f" already present on machine

kserve-ci-e2e-test

kubelet

isvc-xgboost-runtime-predictor-779db84d9-w5ldr

Started

Started container storage-initializer

kserve-ci-e2e-test

multus

isvc-xgboost-runtime-predictor-779db84d9-w5ldr

AddedInterface

Add eth0 [10.133.0.35/23] from ovn-kubernetes

kserve-ci-e2e-test

kubelet

isvc-xgboost-runtime-predictor-779db84d9-w5ldr

Pulled

Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine

kserve-ci-e2e-test

kubelet

isvc-xgboost-runtime-predictor-779db84d9-w5ldr

Created

Created container: kserve-container

kserve-ci-e2e-test

kubelet

isvc-xgboost-runtime-predictor-779db84d9-w5ldr

Started

Started container kserve-container

kserve-ci-e2e-test

kubelet

isvc-xgboost-runtime-predictor-779db84d9-w5ldr

Created

Created container: kube-rbac-proxy

kserve-ci-e2e-test

kubelet

kserve-ci-e2e-test | kubelet | isvc-xgboost-runtime-predictor-779db84d9-w5ldr | Started | Started container kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-xgboost-runtime-predictor-779db84d9-w5ldr | Pulled | Container image "kserve/xgbserver:latest" already present on machine (x2)
kserve-ci-e2e-test | kubelet | xgboost-v2-mlserver-predictor-7799869d6f-v4hvs | Unhealthy | Readiness probe failed: Get "https://10.134.0.49:8643/healthz": dial tcp 10.134.0.49:8643: connect: connection refused (x25)
kserve-ci-e2e-test | v1beta1Controllers | isvc-xgboost-runtime | VirtualServiceCRDNotFound | Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true. (x3)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-xgboost-runtime-predictor | FailedGetResourceMetric | failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API (x3)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-xgboost-runtime-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
kserve-ci-e2e-test | v1beta1Controllers | isvc-xgboost-runtime | InferenceServiceReady | InferenceService [isvc-xgboost-runtime] is Ready (x2)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-xgboost-runtime-predictor | FailedGetResourceMetric | failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready) (x2)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-xgboost-runtime-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
kserve-ci-e2e-test | kubelet | isvc-xgboost-runtime-predictor-779db84d9-w5ldr | Killing | Stopping container kube-rbac-proxy
kserve-ci-e2e-test | replicaset-controller | isvc-xgboost-v2-runtime-predictor-6dc5954dc | SuccessfulCreate | Created pod: isvc-xgboost-v2-runtime-predictor-6dc5954dc-sp5z2
kserve-ci-e2e-test | deployment-controller | isvc-xgboost-v2-runtime-predictor | ScalingReplicaSet | Scaled up replica set isvc-xgboost-v2-runtime-predictor-6dc5954dc from 0 to 1
kserve-ci-e2e-test | kubelet | isvc-xgboost-runtime-predictor-779db84d9-w5ldr | Killing | Stopping container kserve-container
kserve-ci-e2e-test | kubelet | isvc-xgboost-v2-runtime-predictor-6dc5954dc-sp5z2 | Created | Created container: storage-initializer
kserve-ci-e2e-test | kubelet | isvc-xgboost-v2-runtime-predictor-6dc5954dc-sp5z2 | Started | Started container storage-initializer
kserve-ci-e2e-test | kubelet | isvc-xgboost-v2-runtime-predictor-6dc5954dc-sp5z2 | Pulled | Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:3460f014e7dc0a9d3daafe0716ca9eadf865f2892e0a5103d0b876da9f34891f" already present on machine
kserve-ci-e2e-test | multus | isvc-xgboost-v2-runtime-predictor-6dc5954dc-sp5z2 | AddedInterface | Add eth0 [10.134.0.50/23] from ovn-kubernetes
kserve-ci-e2e-test | kubelet | isvc-xgboost-runtime-predictor-779db84d9-w5ldr | Unhealthy | Readiness probe failed: Get "https://10.133.0.35:8643/healthz": dial tcp 10.133.0.35:8643: connect: connection refused (x9)

kserve-ci-e2e-test | kubelet | isvc-xgboost-runtime-predictor-779db84d9-w5ldr | Unhealthy | Readiness probe failed: dial tcp 10.133.0.35:8080: connect: connection refused
kserve-ci-e2e-test | kubelet | isvc-xgboost-v2-runtime-predictor-6dc5954dc-sp5z2 | Created | Created container: kserve-container
kserve-ci-e2e-test | kubelet | isvc-xgboost-v2-runtime-predictor-6dc5954dc-sp5z2 | Started | Started container kserve-container
kserve-ci-e2e-test | kubelet | isvc-xgboost-v2-runtime-predictor-6dc5954dc-sp5z2 | Created | Created container: kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-xgboost-v2-runtime-predictor-6dc5954dc-sp5z2 | Pulled | Container image "docker.io/seldonio/mlserver:1.7.1" already present on machine
kserve-ci-e2e-test | kubelet | isvc-xgboost-v2-runtime-predictor-6dc5954dc-sp5z2 | Pulled | Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine
kserve-ci-e2e-test | kubelet | isvc-xgboost-v2-runtime-predictor-6dc5954dc-sp5z2 | Started | Started container kube-rbac-proxy (x25)
kserve-ci-e2e-test | v1beta1Controllers | isvc-xgboost-v2-runtime | VirtualServiceCRDNotFound | Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.
kserve-ci-e2e-test | kubelet | isvc-xgboost-v2-runtime-predictor-6dc5954dc-sp5z2 | Unhealthy | Readiness probe failed: HTTP probe failed with statuscode: 400 (x3)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-xgboost-v2-runtime-predictor | FailedGetResourceMetric | failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API (x3)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-xgboost-v2-runtime-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
kserve-ci-e2e-test | v1beta1Controllers | isvc-xgboost-v2-runtime | InferenceServiceReady | InferenceService [isvc-xgboost-v2-runtime] is Ready
kserve-ci-e2e-test | kubelet | isvc-xgboost-v2-runtime-predictor-6dc5954dc-sp5z2 | Killing | Stopping container kserve-container
kserve-ci-e2e-test | deployment-controller | isvc-xgboost-v2-predictor | ScalingReplicaSet | Scaled up replica set isvc-xgboost-v2-predictor-6fcdd6977c from 0 to 1
kserve-ci-e2e-test | kubelet | isvc-xgboost-v2-runtime-predictor-6dc5954dc-sp5z2 | Killing | Stopping container kube-rbac-proxy
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-xgboost-v2-runtime-predictor | FailedGetResourceMetric | failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-xgboost-v2-runtime-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
kserve-ci-e2e-test | v1beta1Controllers | isvc-xgboost-v2 | UpdateFailed | Failed to update status for InferenceService "isvc-xgboost-v2": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-xgboost-v2": the object has been modified; please apply your changes to the latest version and try again
kserve-ci-e2e-test | multus | isvc-xgboost-v2-predictor-6fcdd6977c-k9kkk | AddedInterface | Add eth0 [10.133.0.36/23] from ovn-kubernetes

kserve-ci-e2e-test | kubelet | isvc-xgboost-v2-predictor-6fcdd6977c-k9kkk | Pulled | Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:3460f014e7dc0a9d3daafe0716ca9eadf865f2892e0a5103d0b876da9f34891f" already present on machine
kserve-ci-e2e-test | replicaset-controller | isvc-xgboost-v2-predictor-6fcdd6977c | SuccessfulCreate | Created pod: isvc-xgboost-v2-predictor-6fcdd6977c-k9kkk
kserve-ci-e2e-test | kubelet | isvc-xgboost-v2-predictor-6fcdd6977c-k9kkk | Started | Started container storage-initializer
kserve-ci-e2e-test | kubelet | isvc-xgboost-v2-predictor-6fcdd6977c-k9kkk | Created | Created container: storage-initializer
kserve-ci-e2e-test | kubelet | isvc-xgboost-v2-runtime-predictor-6dc5954dc-sp5z2 | Unhealthy | Readiness probe failed: Get "http://10.134.0.50:8080/v2/models/isvc-xgboost-v2-runtime/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
kserve-ci-e2e-test | kubelet | isvc-xgboost-v2-predictor-6fcdd6977c-k9kkk | Created | Created container: kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-xgboost-v2-predictor-6fcdd6977c-k9kkk | Created | Created container: kserve-container
kserve-ci-e2e-test | kubelet | isvc-xgboost-v2-predictor-6fcdd6977c-k9kkk | Started | Started container kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-xgboost-v2-predictor-6fcdd6977c-k9kkk | Pulled | Container image "kserve/xgbserver:latest" already present on machine
kserve-ci-e2e-test | kubelet | isvc-xgboost-v2-predictor-6fcdd6977c-k9kkk | Started | Started container kserve-container
kserve-ci-e2e-test | kubelet | isvc-xgboost-v2-predictor-6fcdd6977c-k9kkk | Pulled | Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine (x2)
kserve-ci-e2e-test | kubelet | isvc-xgboost-v2-runtime-predictor-6dc5954dc-sp5z2 | Unhealthy | Readiness probe failed: Get "https://10.134.0.50:8643/healthz": dial tcp 10.134.0.50:8643: connect: connection refused (x24)
kserve-ci-e2e-test | v1beta1Controllers | isvc-xgboost-v2 | VirtualServiceCRDNotFound | Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-xgboost-v2-predictor | FailedGetResourceMetric | failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-xgboost-v2-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
kserve-ci-e2e-test | v1beta1Controllers | isvc-xgboost-v2 | InferenceServiceReady | InferenceService [isvc-xgboost-v2] is Ready (x4)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-xgboost-v2-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready) (x4)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-xgboost-v2-predictor | FailedGetResourceMetric | failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
kserve-ci-e2e-test | deployment-controller | isvc-sklearn-s3-predictor | ScalingReplicaSet | Scaled up replica set isvc-sklearn-s3-predictor-584b446894 from 0 to 1

kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-predictor-584b446894-dhtlj | Pulled | Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:3460f014e7dc0a9d3daafe0716ca9eadf865f2892e0a5103d0b876da9f34891f" already present on machine
kserve-ci-e2e-test | kubelet | isvc-xgboost-v2-predictor-6fcdd6977c-k9kkk | Killing | Stopping container kserve-container
kserve-ci-e2e-test | multus | isvc-sklearn-s3-predictor-584b446894-dhtlj | AddedInterface | Add eth0 [10.134.0.51/23] from ovn-kubernetes
kserve-ci-e2e-test | replicaset-controller | isvc-sklearn-s3-predictor-584b446894 | SuccessfulCreate | Created pod: isvc-sklearn-s3-predictor-584b446894-dhtlj
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-predictor-584b446894-dhtlj | Created | Created container: storage-initializer
kserve-ci-e2e-test | kubelet | isvc-xgboost-v2-predictor-6fcdd6977c-k9kkk | Killing | Stopping container kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-predictor-584b446894-dhtlj | Started | Started container storage-initializer
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-predictor-584b446894-dhtlj | Pulled | Container image "quay.io/opendatahub/sklearn-serving-runtime:odh-pr-1404" already present on machine
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-predictor-584b446894-dhtlj | Created | Created container: kserve-container
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-predictor-584b446894-dhtlj | Created | Created container: kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-xgboost-v2-predictor-6fcdd6977c-k9kkk | Unhealthy | Readiness probe failed: Get "https://10.133.0.36:8643/healthz": dial tcp 10.133.0.36:8643: connect: connection refused
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-predictor-584b446894-dhtlj | Started | Started container kserve-container (x9)
kserve-ci-e2e-test | kubelet | isvc-xgboost-v2-predictor-6fcdd6977c-k9kkk | Unhealthy | Readiness probe failed: dial tcp 10.133.0.36:8080: connect: connection refused
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-predictor-584b446894-dhtlj | Started | Started container kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-predictor-584b446894-dhtlj | Pulled | Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine (x25)
kserve-ci-e2e-test | v1beta1Controllers | isvc-sklearn-s3 | VirtualServiceCRDNotFound | Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true. (x2)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-sklearn-s3-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API (x2)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-sklearn-s3-predictor | FailedGetResourceMetric | failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
kserve-ci-e2e-test | v1beta1Controllers | isvc-sklearn-s3 | InferenceServiceReady | InferenceService [isvc-sklearn-s3] is Ready (x3)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-sklearn-s3-predictor | FailedGetResourceMetric | failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready) (x3)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-sklearn-s3-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)

kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-predictor-584b446894-dhtlj | Unhealthy | Readiness probe failed: Get "https://10.134.0.51:8643/healthz": dial tcp 10.134.0.51:8643: connect: connection refused
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-predictor-584b446894-dhtlj | Killing | Stopping container kserve-container
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-predictor-584b446894-dhtlj | Killing | Stopping container kube-rbac-proxy (x9)
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-predictor-584b446894-dhtlj | Unhealthy | Readiness probe failed: dial tcp 10.134.0.51:8080: connect: connection refused
kserve-ci-e2e-test | deployment-controller | isvc-sklearn-s3-tls-global-pass-predictor | ScalingReplicaSet | Scaled up replica set isvc-sklearn-s3-tls-global-pass-predictor-764bc9bb86 from 0 to 1
kserve-ci-e2e-test | multus | isvc-sklearn-s3-tls-global-pass-predictor-764bc9bb86-5qmr4 | AddedInterface | Add eth0 [10.134.0.52/23] from ovn-kubernetes
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-global-pass-predictor-764bc9bb86-5qmr4 | Pulled | Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:3460f014e7dc0a9d3daafe0716ca9eadf865f2892e0a5103d0b876da9f34891f" already present on machine
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-global-pass-predictor-764bc9bb86-5qmr4 | Created | Created container: storage-initializer
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-global-pass-predictor-764bc9bb86-5qmr4 | Started | Started container storage-initializer
kserve-ci-e2e-test | replicaset-controller | isvc-sklearn-s3-tls-global-pass-predictor-764bc9bb86 | SuccessfulCreate | Created pod: isvc-sklearn-s3-tls-global-pass-predictor-764bc9bb86-5qmr4
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-global-pass-predictor-764bc9bb86-5qmr4 | Started | Started container kserve-container
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-global-pass-predictor-764bc9bb86-5qmr4 | Started | Started container kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-global-pass-predictor-764bc9bb86-5qmr4 | Created | Created container: kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-global-pass-predictor-764bc9bb86-5qmr4 | Pulled | Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-global-pass-predictor-764bc9bb86-5qmr4 | Created | Created container: kserve-container
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-global-pass-predictor-764bc9bb86-5qmr4 | Pulled | Container image "quay.io/opendatahub/sklearn-serving-runtime:odh-pr-1404" already present on machine (x25)
kserve-ci-e2e-test | v1beta1Controllers | isvc-sklearn-s3-tls-global-pass | VirtualServiceCRDNotFound | Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true. (x2)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-sklearn-s3-tls-global-pass-predictor | FailedGetResourceMetric | failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API (x2)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-sklearn-s3-tls-global-pass-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API (x8)
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-global-pass-predictor-764bc9bb86-5qmr4 | Unhealthy | Readiness probe failed: dial tcp 10.134.0.52:8080: connect: connection refused (x2)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-sklearn-s3-tls-global-pass-predictor | FailedGetResourceMetric | failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready) (x2)

kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-sklearn-s3-tls-global-pass-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
kserve-ci-e2e-test | v1beta1Controllers | isvc-sklearn-s3-tls-global-pass | InferenceServiceReady | InferenceService [isvc-sklearn-s3-tls-global-pass] is Ready
kserve-ci-e2e-test | v1beta1Controllers | isvc-sklearn-s3-tls-global-fail | UpdateFailed | Failed to update status for InferenceService "isvc-sklearn-s3-tls-global-fail": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-sklearn-s3-tls-global-fail": the object has been modified; please apply your changes to the latest version and try again
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-global-pass-predictor-764bc9bb86-5qmr4 | Killing | Stopping container kserve-container
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-global-pass-predictor-764bc9bb86-5qmr4 | Killing | Stopping container kube-rbac-proxy
kserve-ci-e2e-test | replicaset-controller | isvc-sklearn-s3-tls-global-fail-predictor-8cfdd8b8d | SuccessfulCreate | Created pod: isvc-sklearn-s3-tls-global-fail-predictor-8cfdd8b8d-nx7ll
kserve-ci-e2e-test | deployment-controller | isvc-sklearn-s3-tls-global-fail-predictor | ScalingReplicaSet | Scaled up replica set isvc-sklearn-s3-tls-global-fail-predictor-8cfdd8b8d from 0 to 1
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-global-fail-predictor-8cfdd8b8d-nx7ll | FailedMount | MountVolume.SetUp failed for volume "proxy-tls" : secret "isvc-sklearn-s3-tls-global-fail-predictor-serving-cert" not found
kserve-ci-e2e-test | multus | isvc-sklearn-s3-tls-global-fail-predictor-8cfdd8b8d-nx7ll | AddedInterface | Add eth0 [10.134.0.53/23] from ovn-kubernetes
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-global-pass-predictor-764bc9bb86-5qmr4 | Unhealthy | Readiness probe failed: Get "https://10.134.0.52:8643/healthz": dial tcp 10.134.0.52:8643: connect: connection refused (x2)
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-global-fail-predictor-8cfdd8b8d-nx7ll | Started | Started container storage-initializer (x2)
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-global-fail-predictor-8cfdd8b8d-nx7ll | Created | Created container: storage-initializer (x2)
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-global-fail-predictor-8cfdd8b8d-nx7ll | Pulled | Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:3460f014e7dc0a9d3daafe0716ca9eadf865f2892e0a5103d0b876da9f34891f" already present on machine
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-global-fail-predictor-8cfdd8b8d-nx7ll | BackOff | Back-off restarting failed container storage-initializer in pod isvc-sklearn-s3-tls-global-fail-predictor-8cfdd8b8d-nx7ll_kserve-ci-e2e-test(0cec68ac-098a-4035-973d-4ae9e7f46cce) (x23)
kserve-ci-e2e-test | v1beta1Controllers | isvc-sklearn-s3-tls-global-fail | VirtualServiceCRDNotFound | Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.
kserve-ci-e2e-test | deployment-controller | isvc-sklearn-s3-tls-custom-pass-predictor | ScalingReplicaSet | Scaled up replica set isvc-sklearn-s3-tls-custom-pass-predictor-848ff9798 from 0 to 1
kserve-ci-e2e-test | v1beta1Controllers | isvc-sklearn-s3-tls-custom-pass | InternalError | fails to update InferenceService status: Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-sklearn-s3-tls-custom-pass": the object has been modified; please apply your changes to the latest version and try again (x2)
kserve-ci-e2e-test | v1beta1Controllers | isvc-sklearn-s3-tls-custom-pass | UpdateFailed | Failed to update status for InferenceService "isvc-sklearn-s3-tls-custom-pass": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-sklearn-s3-tls-custom-pass": the object has been modified; please apply your changes to the latest version and try again
kserve-ci-e2e-test | replicaset-controller | isvc-sklearn-s3-tls-custom-pass-predictor-848ff9798 | SuccessfulCreate | Created pod: isvc-sklearn-s3-tls-custom-pass-predictor-848ff9798-tm765
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-custom-pass-predictor-848ff9798-tm765 | FailedMount | MountVolume.SetUp failed for volume "proxy-tls" : secret "isvc-sklearn-s3-tls-custom-pass-predictor-serving-cert" not found
kserve-ci-e2e-test | multus | isvc-sklearn-s3-tls-custom-pass-predictor-848ff9798-tm765 | AddedInterface | Add eth0 [10.134.0.54/23] from ovn-kubernetes

kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-custom-pass-predictor-848ff9798-tm765 | Started | Started container storage-initializer
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-custom-pass-predictor-848ff9798-tm765 | Created | Created container: storage-initializer
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-custom-pass-predictor-848ff9798-tm765 | Pulled | Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:3460f014e7dc0a9d3daafe0716ca9eadf865f2892e0a5103d0b876da9f34891f" already present on machine
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-custom-pass-predictor-848ff9798-tm765 | Started | Started container kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-custom-pass-predictor-848ff9798-tm765 | Created | Created container: kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-custom-pass-predictor-848ff9798-tm765 | Pulled | Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-custom-pass-predictor-848ff9798-tm765 | Started | Started container kserve-container
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-custom-pass-predictor-848ff9798-tm765 | Created | Created container: kserve-container
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-custom-pass-predictor-848ff9798-tm765 | Pulled | Container image "quay.io/opendatahub/sklearn-serving-runtime:odh-pr-1404" already present on machine (x22)
kserve-ci-e2e-test | v1beta1Controllers | isvc-sklearn-s3-tls-custom-pass | VirtualServiceCRDNotFound | Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-sklearn-s3-tls-custom-pass-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-sklearn-s3-tls-custom-pass-predictor | FailedGetResourceMetric | failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
kserve-ci-e2e-test | v1beta1Controllers | isvc-sklearn-s3-tls-custom-pass | InferenceServiceReady | InferenceService [isvc-sklearn-s3-tls-custom-pass] is Ready (x4)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-sklearn-s3-tls-custom-pass-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready) (x4)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-sklearn-s3-tls-custom-pass-predictor | FailedGetResourceMetric | failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready) (x9)
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-custom-pass-predictor-848ff9798-tm765 | Unhealthy | Readiness probe failed: dial tcp 10.134.0.54:8080: connect: connection refused
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-custom-pass-predictor-848ff9798-tm765 | Killing | Stopping container kserve-container
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-custom-pass-predictor-848ff9798-tm765 | Killing | Stopping container kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-custom-pass-predictor-848ff9798-tm765 | Unhealthy | Readiness probe failed: Get "https://10.134.0.54:8643/healthz": dial tcp 10.134.0.54:8643: connect: connection refused
kserve-ci-e2e-test | v1beta1Controllers | isvc-sklearn-s3-tls-custom-fail | UpdateFailed | Failed to update status for InferenceService "isvc-sklearn-s3-tls-custom-fail": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-sklearn-s3-tls-custom-fail": the object has been modified; please apply your changes to the latest version and try again
kserve-ci-e2e-test | deployment-controller | isvc-sklearn-s3-tls-custom-fail-predictor | ScalingReplicaSet | Scaled up replica set isvc-sklearn-s3-tls-custom-fail-predictor-c7d959f56 from 0 to 1

kserve-ci-e2e-test | replicaset-controller | isvc-sklearn-s3-tls-custom-fail-predictor-c7d959f56 | SuccessfulCreate | Created pod: isvc-sklearn-s3-tls-custom-fail-predictor-c7d959f56-zkxbc
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-custom-fail-predictor-c7d959f56-zkxbc | FailedMount | MountVolume.SetUp failed for volume "proxy-tls" : secret "isvc-sklearn-s3-tls-custom-fail-predictor-serving-cert" not found
kserve-ci-e2e-test | multus | isvc-sklearn-s3-tls-custom-fail-predictor-c7d959f56-zkxbc | AddedInterface | Add eth0 [10.134.0.55/23] from ovn-kubernetes (x2)
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-custom-fail-predictor-c7d959f56-zkxbc | Created | Created container: storage-initializer (x2)
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-custom-fail-predictor-c7d959f56-zkxbc | Started | Started container storage-initializer (x2)
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-custom-fail-predictor-c7d959f56-zkxbc | Pulled | Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:3460f014e7dc0a9d3daafe0716ca9eadf865f2892e0a5103d0b876da9f34891f" already present on machine (x18)
kserve-ci-e2e-test | v1beta1Controllers | isvc-sklearn-s3-tls-custom-fail | VirtualServiceCRDNotFound | Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-custom-fail-predictor-c7d959f56-zkxbc | Killing | Stopping container storage-initializer
kserve-ci-e2e-test | v1beta1Controllers | isvc-sklearn-s3-tls-serving-pass | InternalError | fails to update InferenceService status: Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-sklearn-s3-tls-serving-pass": the object has been modified; please apply your changes to the latest version and try again (x2)
kserve-ci-e2e-test | v1beta1Controllers | isvc-sklearn-s3-tls-serving-pass | UpdateFailed | Failed to update status for InferenceService "isvc-sklearn-s3-tls-serving-pass": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-sklearn-s3-tls-serving-pass": the object has been modified; please apply your changes to the latest version and try again
kserve-ci-e2e-test | deployment-controller | isvc-sklearn-s3-tls-serving-pass-predictor | ScalingReplicaSet | Scaled up replica set isvc-sklearn-s3-tls-serving-pass-predictor-545f7c7957 from 0 to 1
kserve-ci-e2e-test | replicaset-controller | isvc-sklearn-s3-tls-serving-pass-predictor-545f7c7957 | SuccessfulCreate | Created pod: isvc-sklearn-s3-tls-serving-pass-predictor-545f7c7957-66n4x
kserve-ci-e2e-test | multus | isvc-sklearn-s3-tls-serving-pass-predictor-545f7c7957-66n4x | AddedInterface | Add eth0 [10.134.0.56/23] from ovn-kubernetes
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-serving-pass-predictor-545f7c7957-66n4x | Started | Started container storage-initializer
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-serving-pass-predictor-545f7c7957-66n4x | Created | Created container: storage-initializer
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-serving-pass-predictor-545f7c7957-66n4x | Pulled | Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:3460f014e7dc0a9d3daafe0716ca9eadf865f2892e0a5103d0b876da9f34891f" already present on machine
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-serving-pass-predictor-545f7c7957-66n4x | Pulled | Container image "quay.io/opendatahub/sklearn-serving-runtime:odh-pr-1404" already present on machine
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-serving-pass-predictor-545f7c7957-66n4x | Started | Started container kserve-container
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-serving-pass-predictor-545f7c7957-66n4x | Started | Started container kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-serving-pass-predictor-545f7c7957-66n4x | Created | Created container: kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-serving-pass-predictor-545f7c7957-66n4x | Pulled | Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine

kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-serving-pass-predictor-545f7c7957-66n4x | Created | Created container: kserve-container (x22)
kserve-ci-e2e-test | v1beta1Controllers | isvc-sklearn-s3-tls-serving-pass | VirtualServiceCRDNotFound | Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true. (x2)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-sklearn-s3-tls-serving-pass-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API (x2)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-sklearn-s3-tls-serving-pass-predictor | FailedGetResourceMetric | failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API (x8)
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-serving-pass-predictor-545f7c7957-66n4x | Unhealthy | Readiness probe failed: dial tcp 10.134.0.56:8080: connect: connection refused (x2)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-sklearn-s3-tls-serving-pass-predictor | FailedGetResourceMetric | failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready) (x2)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-sklearn-s3-tls-serving-pass-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
kserve-ci-e2e-test | v1beta1Controllers | isvc-sklearn-s3-tls-serving-pass | InferenceServiceReady | InferenceService [isvc-sklearn-s3-tls-serving-pass] is Ready
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-serving-pass-predictor-545f7c7957-66n4x | Killing | Stopping container kserve-container
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-serving-pass-predictor-545f7c7957-66n4x | Killing | Stopping container kube-rbac-proxy
kserve-ci-e2e-test | v1beta1Controllers | isvc-sklearn-s3-tls-serving-fail | InternalError | fails to update InferenceService status: Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-sklearn-s3-tls-serving-fail": the object has been modified; please apply your changes to the latest version and try again
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-serving-fail-predictor-7b6b7fcbd7-77h4d | FailedMount | MountVolume.SetUp failed for volume "proxy-tls" : secret "isvc-sklearn-s3-tls-serving-fail-predictor-serving-cert" not found
kserve-ci-e2e-test | deployment-controller | isvc-sklearn-s3-tls-serving-fail-predictor | ScalingReplicaSet | Scaled up replica set isvc-sklearn-s3-tls-serving-fail-predictor-7b6b7fcbd7 from 0 to 1 (x2)
kserve-ci-e2e-test | v1beta1Controllers | isvc-sklearn-s3-tls-serving-fail | UpdateFailed | Failed to update status for InferenceService "isvc-sklearn-s3-tls-serving-fail": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-sklearn-s3-tls-serving-fail": the object has been modified; please apply your changes to the latest version and try again
kserve-ci-e2e-test | replicaset-controller | isvc-sklearn-s3-tls-serving-fail-predictor-7b6b7fcbd7 | SuccessfulCreate | Created pod: isvc-sklearn-s3-tls-serving-fail-predictor-7b6b7fcbd7-77h4d
kserve-ci-e2e-test | multus | isvc-sklearn-s3-tls-serving-fail-predictor-7b6b7fcbd7-77h4d | AddedInterface | Add eth0 [10.134.0.57/23] from ovn-kubernetes
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-serving-pass-predictor-545f7c7957-66n4x | Unhealthy | Readiness probe failed: Get "https://10.134.0.56:8643/healthz": dial tcp 10.134.0.56:8643: connect: connection refused (x2)
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-serving-fail-predictor-7b6b7fcbd7-77h4d | Pulled | Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:3460f014e7dc0a9d3daafe0716ca9eadf865f2892e0a5103d0b876da9f34891f" already present on machine (x2)
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-serving-fail-predictor-7b6b7fcbd7-77h4d | Created | Created container: storage-initializer (x18)
kserve-ci-e2e-test | v1beta1Controllers | isvc-sklearn-s3-tls-serving-fail | VirtualServiceCRDNotFound | Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true. (x2)
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-serving-fail-predictor-7b6b7fcbd7-77h4d | Started | Started container storage-initializer
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-serving-fail-predictor-7b6b7fcbd7-77h4d | Killing | Stopping container storage-initializer