Time Namespace Component RelatedObject Reason Message

kserve-ci-e2e-test

isvc-sklearn-s3-tls-serving-fail-predictor-79bd57f944-7rmnp

Scheduled

Successfully assigned kserve-ci-e2e-test/isvc-sklearn-s3-tls-serving-fail-predictor-79bd57f944-7rmnp to ip-10-0-134-248.ec2.internal

kserve-ci-e2e-test

isvc-lightgbm-v2-kserve-predictor-559bf6989-6c8zp

Scheduled

Successfully assigned kserve-ci-e2e-test/isvc-lightgbm-v2-kserve-predictor-559bf6989-6c8zp to ip-10-0-134-248.ec2.internal

kserve-ci-e2e-test

sklearn-v2-mlserver-predictor-65d8664766-prnmq

Scheduled

Successfully assigned kserve-ci-e2e-test/sklearn-v2-mlserver-predictor-65d8664766-prnmq to ip-10-0-134-248.ec2.internal

kserve-ci-e2e-test

isvc-pmml-predictor-8bb578669-zxg86

Scheduled

Successfully assigned kserve-ci-e2e-test/isvc-pmml-predictor-8bb578669-zxg86 to ip-10-0-134-248.ec2.internal

kserve-ci-e2e-test

isvc-pmml-runtime-predictor-67bc544947-bck66

Scheduled

Successfully assigned kserve-ci-e2e-test/isvc-pmml-runtime-predictor-67bc544947-bck66 to ip-10-0-132-159.ec2.internal

kserve-ci-e2e-test

isvc-init-fail-30f3f9-predictor-8694459b44-4f5lj

Scheduled

Successfully assigned kserve-ci-e2e-test/isvc-init-fail-30f3f9-predictor-8694459b44-4f5lj to ip-10-0-134-248.ec2.internal

kserve-ci-e2e-test

isvc-pmml-v2-kserve-predictor-6578f8ffc7-l9zxs

Scheduled

Successfully assigned kserve-ci-e2e-test/isvc-pmml-v2-kserve-predictor-6578f8ffc7-l9zxs to ip-10-0-132-159.ec2.internal

kserve-ci-e2e-test

message-dumper-predictor-c7d86bcbd-npgqt

Scheduled

Successfully assigned kserve-ci-e2e-test/message-dumper-predictor-c7d86bcbd-npgqt to ip-10-0-134-248.ec2.internal

kserve-ci-e2e-test

isvc-xgboost-v2-runtime-predictor-6dc5954dc-cksmg

Scheduled

Successfully assigned kserve-ci-e2e-test/isvc-xgboost-v2-runtime-predictor-6dc5954dc-cksmg to ip-10-0-134-248.ec2.internal

kserve-ci-e2e-test

isvc-xgboost-v2-predictor-6fcdd6977c-4cjnw

Scheduled

Successfully assigned kserve-ci-e2e-test/isvc-xgboost-v2-predictor-6fcdd6977c-4cjnw to ip-10-0-132-159.ec2.internal

kserve-ci-e2e-test

isvc-xgboost-v2-mlserver-predictor-67d4bc6646-9kzj8

Scheduled

Successfully assigned kserve-ci-e2e-test/isvc-xgboost-v2-mlserver-predictor-67d4bc6646-9kzj8 to ip-10-0-134-248.ec2.internal

kserve-ci-e2e-test

isvc-xgboost-runtime-predictor-779db84d9-4z5rt

Scheduled

Successfully assigned kserve-ci-e2e-test/isvc-xgboost-runtime-predictor-779db84d9-4z5rt to ip-10-0-132-159.ec2.internal

kserve-ci-e2e-test

isvc-lightgbm-predictor-bdf964bd-c66zd

Scheduled

Successfully assigned kserve-ci-e2e-test/isvc-lightgbm-predictor-bdf964bd-c66zd to ip-10-0-134-248.ec2.internal

kserve-ci-e2e-test

isvc-paddle-runtime-predictor-7f4d4f9dc8-gbvvw

Scheduled

Successfully assigned kserve-ci-e2e-test/isvc-paddle-runtime-predictor-7f4d4f9dc8-gbvvw to ip-10-0-134-248.ec2.internal

kserve-ci-e2e-test

isvc-xgboost-predictor-8689c4cfcc-lsk5m

Scheduled

Successfully assigned kserve-ci-e2e-test/isvc-xgboost-predictor-8689c4cfcc-lsk5m to ip-10-0-132-159.ec2.internal

kserve-ci-e2e-test

isvc-triton-predictor-84bb65d94b-7nd6v

Scheduled

Successfully assigned kserve-ci-e2e-test/isvc-triton-predictor-84bb65d94b-7nd6v to ip-10-0-132-159.ec2.internal

kserve-ci-e2e-test

isvc-tensorflow-runtime-predictor-8699d78cf-mgt2d

Scheduled

Successfully assigned kserve-ci-e2e-test/isvc-tensorflow-runtime-predictor-8699d78cf-mgt2d to ip-10-0-132-159.ec2.internal

kserve-ci-e2e-test

isvc-tensorflow-predictor-6756f669d7-69whm

Scheduled

Successfully assigned kserve-ci-e2e-test/isvc-tensorflow-predictor-6756f669d7-69whm to ip-10-0-132-159.ec2.internal

kserve-ci-e2e-test

isvc-sklearn-v2-runtime-predictor-6d84c876f4-8rd8p

Scheduled

Successfully assigned kserve-ci-e2e-test/isvc-sklearn-v2-runtime-predictor-6d84c876f4-8rd8p to ip-10-0-134-248.ec2.internal

kserve-ci-e2e-test

isvc-sklearn-v2-predictor-f9cd8f646-pq5t5

Scheduled

Successfully assigned kserve-ci-e2e-test/isvc-sklearn-v2-predictor-f9cd8f646-pq5t5 to ip-10-0-134-248.ec2.internal

kserve-ci-e2e-test

isvc-sklearn-v2-mixed-predictor-566cfc9859-kjnsv

Scheduled

Successfully assigned kserve-ci-e2e-test/isvc-sklearn-v2-mixed-predictor-566cfc9859-kjnsv to ip-10-0-134-248.ec2.internal

kserve-ci-e2e-test

isvc-sklearn-s3-tls-serving-pass-predictor-55fcc5fc5d-b7hz2

Scheduled

Successfully assigned kserve-ci-e2e-test/isvc-sklearn-s3-tls-serving-pass-predictor-55fcc5fc5d-b7hz2 to ip-10-0-134-248.ec2.internal

kserve-ci-e2e-test

isvc-secondary-d8276f-predictor-85d5b564f4-rz7x7

Scheduled

Successfully assigned kserve-ci-e2e-test/isvc-secondary-d8276f-predictor-85d5b564f4-rz7x7 to ip-10-0-134-248.ec2.internal

kserve-ci-e2e-test

xgboost-v2-mlserver-predictor-7799869d6f-bp5rd

Scheduled

Successfully assigned kserve-ci-e2e-test/xgboost-v2-mlserver-predictor-7799869d6f-bp5rd to ip-10-0-134-248.ec2.internal

kserve-ci-e2e-test

isvc-sklearn-s3-predictor-b6dd86fb7-dtt45

Scheduled

Successfully assigned kserve-ci-e2e-test/isvc-sklearn-s3-predictor-b6dd86fb7-dtt45 to ip-10-0-134-248.ec2.internal

kserve-ci-e2e-test

isvc-sklearn-s3-tls-custom-pass-predictor-78b57684f9-pzzw6

Scheduled

Successfully assigned kserve-ci-e2e-test/isvc-sklearn-s3-tls-custom-pass-predictor-78b57684f9-pzzw6 to ip-10-0-134-248.ec2.internal

kserve-ci-e2e-test

isvc-sklearn-s3-tls-custom-fail-predictor-5cbcb57d85-66dfl

Scheduled

Successfully assigned kserve-ci-e2e-test/isvc-sklearn-s3-tls-custom-fail-predictor-5cbcb57d85-66dfl to ip-10-0-134-248.ec2.internal

kserve-ci-e2e-test

isvc-sklearn-s3-tls-global-fail-predictor-548d449c8f-rpfk2

Scheduled

Successfully assigned kserve-ci-e2e-test/isvc-sklearn-s3-tls-global-fail-predictor-548d449c8f-rpfk2 to ip-10-0-134-248.ec2.internal

kserve-ci-e2e-test

isvc-sklearn-runtime-predictor-5747956474-9w8pq

Scheduled

Successfully assigned kserve-ci-e2e-test/isvc-sklearn-runtime-predictor-5747956474-9w8pq to ip-10-0-134-248.ec2.internal

kserve-ci-e2e-test

isvc-sklearn-predictor-7b58d995d4-cs6zh

Scheduled

Successfully assigned kserve-ci-e2e-test/isvc-sklearn-predictor-7b58d995d4-cs6zh to ip-10-0-134-248.ec2.internal

kserve-ci-e2e-test

isvc-predictive-lightgbm-predictor-75cb94f9f-rfd5k

Scheduled

Successfully assigned kserve-ci-e2e-test/isvc-predictive-lightgbm-predictor-75cb94f9f-rfd5k to ip-10-0-132-159.ec2.internal

kserve-ci-e2e-test

isvc-predictive-lightgbm-v2-predictor-865b4598f7-hqw66

Scheduled

Successfully assigned kserve-ci-e2e-test/isvc-predictive-lightgbm-v2-predictor-865b4598f7-hqw66 to ip-10-0-132-159.ec2.internal

kserve-ci-e2e-test

isvc-sklearn-mcp-predictor-544466b4d7-b7qtx

Scheduled

Successfully assigned kserve-ci-e2e-test/isvc-sklearn-mcp-predictor-544466b4d7-b7qtx to ip-10-0-134-248.ec2.internal

kserve-ci-e2e-test

isvc-lightgbm-runtime-predictor-749c4f6d58-km8t6

Scheduled

Successfully assigned kserve-ci-e2e-test/isvc-lightgbm-runtime-predictor-749c4f6d58-km8t6 to ip-10-0-134-248.ec2.internal

kserve-ci-e2e-test

isvc-paddle-predictor-6b8b7cfb4b-n6ht9

Scheduled

Successfully assigned kserve-ci-e2e-test/isvc-paddle-predictor-6b8b7cfb4b-n6ht9 to ip-10-0-134-248.ec2.internal

kserve-ci-e2e-test

isvc-lightgbm-v2-runtime-predictor-8765c9667-49tbq

Scheduled

Successfully assigned kserve-ci-e2e-test/isvc-lightgbm-v2-runtime-predictor-8765c9667-49tbq to ip-10-0-134-248.ec2.internal

kserve-ci-e2e-test

isvc-predictive-sklearn-v2-predictor-b5d4f6b79-bh7xd

Scheduled

Successfully assigned kserve-ci-e2e-test/isvc-predictive-sklearn-v2-predictor-b5d4f6b79-bh7xd to ip-10-0-134-248.ec2.internal

kserve-ci-e2e-test

isvc-sklearn-batcher-predictor-6995d6bbb4-lj24v

Scheduled

Successfully assigned kserve-ci-e2e-test/isvc-sklearn-batcher-predictor-6995d6bbb4-lj24v to ip-10-0-134-248.ec2.internal

kserve-ci-e2e-test

isvc-predictive-xgboost-predictor-7ff98fd74d-fg9nf

Scheduled

Successfully assigned kserve-ci-e2e-test/isvc-predictive-xgboost-predictor-7ff98fd74d-fg9nf to ip-10-0-134-248.ec2.internal

kserve-ci-e2e-test

isvc-predictive-sklearn-predictor-cd7c759c9-x5cmn

Scheduled

Successfully assigned kserve-ci-e2e-test/isvc-predictive-sklearn-predictor-cd7c759c9-x5cmn to ip-10-0-132-159.ec2.internal

kserve-ci-e2e-test

isvc-logger-predictor-74744fb9f-pkn5z

Scheduled

Successfully assigned kserve-ci-e2e-test/isvc-logger-predictor-74744fb9f-pkn5z to ip-10-0-134-248.ec2.internal

kserve-ci-e2e-test

isvc-sklearn-s3-tls-global-pass-predictor-688df9b567-6kwng

Scheduled

Successfully assigned kserve-ci-e2e-test/isvc-sklearn-s3-tls-global-pass-predictor-688df9b567-6kwng to ip-10-0-134-248.ec2.internal

kserve-ci-e2e-test

isvc-paddle-v2-kserve-predictor-7dbd59854-bnccp

Scheduled

Successfully assigned kserve-ci-e2e-test/isvc-paddle-v2-kserve-predictor-7dbd59854-bnccp to ip-10-0-134-248.ec2.internal

kserve-ci-e2e-test

isvc-predictive-xgboost-v2-predictor-6577c65fd8-hwdzp

Scheduled

Successfully assigned kserve-ci-e2e-test/isvc-predictive-xgboost-v2-predictor-6577c65fd8-hwdzp to ip-10-0-132-159.ec2.internal

kserve-ci-e2e-test

isvc-primary-d8276f-predictor-5555dbfb49-pz2td

Scheduled

Successfully assigned kserve-ci-e2e-test/isvc-primary-d8276f-predictor-5555dbfb49-pz2td to ip-10-0-134-248.ec2.internal

kserve-ci-e2e-test

isvc-mlflow-v2-runtime-predictor-5fdb47d546-cbq98

Scheduled

Successfully assigned kserve-ci-e2e-test/isvc-mlflow-v2-runtime-predictor-5fdb47d546-cbq98 to ip-10-0-134-248.ec2.internal

kserve-ci-e2e-test

isvc-sklearn-batcher-custom-predictor-7f8556cf45-nnkcj

Scheduled

Successfully assigned kserve-ci-e2e-test/isvc-sklearn-batcher-custom-predictor-7f8556cf45-nnkcj to ip-10-0-134-248.ec2.internal

kserve-ci-e2e-test

v1beta1Controllers

isvc-sklearn-batcher

UpdateFailed

Failed to update status for InferenceService "isvc-sklearn-batcher": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-sklearn-batcher": the object has been modified; please apply your changes to the latest version and try again

kserve-ci-e2e-test

v1beta1Controllers

isvc-sklearn-batcher

InternalError

fails to update InferenceService status: Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-sklearn-batcher": the object has been modified; please apply your changes to the latest version and try again

kserve-ci-e2e-test

deployment-controller

isvc-sklearn-batcher-predictor

ScalingReplicaSet

Scaled up replica set isvc-sklearn-batcher-predictor-6995d6bbb4 from 0 to 1

kserve-ci-e2e-test

kubelet

isvc-sklearn-batcher-predictor-6995d6bbb4-lj24v

FailedMount

MountVolume.SetUp failed for volume "proxy-tls" : secret "isvc-sklearn-batcher-predictor-serving-cert" not found

kserve-ci-e2e-test

replicaset-controller

isvc-sklearn-batcher-predictor-6995d6bbb4

SuccessfulCreate

Created pod: isvc-sklearn-batcher-predictor-6995d6bbb4-lj24v

kserve-ci-e2e-test

multus

isvc-sklearn-batcher-predictor-6995d6bbb4-lj24v

AddedInterface

Add eth0 [10.134.0.31/23] from ovn-kubernetes

kserve-ci-e2e-test

kubelet

isvc-sklearn-batcher-predictor-6995d6bbb4-lj24v

Pulling

Pulling image "quay.io/opendatahub/kserve-storage-initializer@sha256:85f85c16414082de64de782f602228652c10c5bace08b518e803a7e911c62e5e"

kserve-ci-e2e-test

kubelet

isvc-sklearn-batcher-predictor-6995d6bbb4-lj24v

Pulled

Successfully pulled image "quay.io/opendatahub/kserve-storage-initializer@sha256:85f85c16414082de64de782f602228652c10c5bace08b518e803a7e911c62e5e" in 3.894s (3.894s including waiting). Image size: 299849144 bytes.

kserve-ci-e2e-test

kubelet

isvc-sklearn-batcher-predictor-6995d6bbb4-lj24v

Created

Created container: storage-initializer

kserve-ci-e2e-test

kubelet

isvc-sklearn-batcher-predictor-6995d6bbb4-lj24v

Started

Started container storage-initializer

kserve-ci-e2e-test

kubelet

isvc-sklearn-batcher-predictor-6995d6bbb4-lj24v

Pulling

Pulling image "quay.io/opendatahub/sklearn-serving-runtime:odh-pr-1447"

kserve-ci-e2e-test

kubelet

isvc-sklearn-batcher-predictor-6995d6bbb4-lj24v

Pulling

Pulling image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3"

kserve-ci-e2e-test

kubelet

isvc-sklearn-batcher-predictor-6995d6bbb4-lj24v

Started

Started container kserve-container

kserve-ci-e2e-test

kubelet

isvc-sklearn-batcher-predictor-6995d6bbb4-lj24v

Created

Created container: kserve-container

kserve-ci-e2e-test

kubelet

isvc-sklearn-batcher-predictor-6995d6bbb4-lj24v

Pulled

Successfully pulled image "quay.io/opendatahub/sklearn-serving-runtime:odh-pr-1447" in 12.435s (12.435s including waiting). Image size: 1560926130 bytes.

kserve-ci-e2e-test

kubelet

isvc-sklearn-batcher-predictor-6995d6bbb4-lj24v

Pulled

Successfully pulled image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" in 2.406s (2.406s including waiting). Image size: 211946088 bytes.

kserve-ci-e2e-test

kubelet

isvc-sklearn-batcher-predictor-6995d6bbb4-lj24v

Pulling

Pulling image "quay.io/opendatahub/kserve-agent@sha256:e6428204baa077dfe80627f4a0f194329877d007e326693ea30a582520538a96"

kserve-ci-e2e-test

kubelet

isvc-sklearn-batcher-predictor-6995d6bbb4-lj24v

Started

Started container kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-sklearn-batcher-predictor-6995d6bbb4-lj24v

Created

Created container: kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-sklearn-batcher-predictor-6995d6bbb4-lj24v

Pulled

Successfully pulled image "quay.io/opendatahub/kserve-agent@sha256:e6428204baa077dfe80627f4a0f194329877d007e326693ea30a582520538a96" in 2.42s (2.42s including waiting). Image size: 238045275 bytes.

kserve-ci-e2e-test

kubelet

isvc-sklearn-batcher-predictor-6995d6bbb4-lj24v

Created

Created container: agent

kserve-ci-e2e-test

kubelet

isvc-sklearn-batcher-predictor-6995d6bbb4-lj24v

Started

Started container agent
(x3)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-sklearn-batcher-predictor

FailedGetResourceMetric

failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x3)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-sklearn-batcher-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API

kserve-ci-e2e-test

v1beta1Controllers

isvc-sklearn-batcher

InferenceServiceReady

InferenceService [isvc-sklearn-batcher] is Ready
(x13)

kserve-ci-e2e-test

v1beta1Controllers

isvc-sklearn-batcher

VirtualServiceCRDNotFound

Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.
(x3)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-sklearn-batcher-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
(x3)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-sklearn-batcher-predictor

FailedGetResourceMetric

failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)

kserve-ci-e2e-test

kubelet

isvc-sklearn-batcher-predictor-6995d6bbb4-lj24v

Killing

Stopping container kube-rbac-proxy

kserve-ci-e2e-test

v1beta1Controllers

isvc-sklearn-batcher-custom

UpdateFailed

Failed to update status for InferenceService "isvc-sklearn-batcher-custom": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-sklearn-batcher-custom": the object has been modified; please apply your changes to the latest version and try again

kserve-ci-e2e-test

kubelet

isvc-sklearn-batcher-predictor-6995d6bbb4-lj24v

Killing

Stopping container kserve-container

kserve-ci-e2e-test

kubelet

isvc-sklearn-batcher-predictor-6995d6bbb4-lj24v

Killing

Stopping container agent

kserve-ci-e2e-test

replicaset-controller

isvc-sklearn-batcher-custom-predictor-7f8556cf45

SuccessfulCreate

Created pod: isvc-sklearn-batcher-custom-predictor-7f8556cf45-nnkcj

kserve-ci-e2e-test

deployment-controller

isvc-sklearn-batcher-custom-predictor

ScalingReplicaSet

Scaled up replica set isvc-sklearn-batcher-custom-predictor-7f8556cf45 from 0 to 1

kserve-ci-e2e-test

multus

isvc-sklearn-batcher-custom-predictor-7f8556cf45-nnkcj

AddedInterface

Add eth0 [10.134.0.32/23] from ovn-kubernetes

kserve-ci-e2e-test

kubelet

isvc-sklearn-batcher-custom-predictor-7f8556cf45-nnkcj

Pulled

Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:85f85c16414082de64de782f602228652c10c5bace08b518e803a7e911c62e5e" already present on machine

kserve-ci-e2e-test

kubelet

isvc-sklearn-batcher-custom-predictor-7f8556cf45-nnkcj

Created

Created container: storage-initializer

kserve-ci-e2e-test

kubelet

isvc-sklearn-batcher-custom-predictor-7f8556cf45-nnkcj

Started

Started container storage-initializer

kserve-ci-e2e-test

kubelet

isvc-sklearn-batcher-custom-predictor-7f8556cf45-nnkcj

Created

Created container: kserve-container

kserve-ci-e2e-test

kubelet

isvc-sklearn-batcher-custom-predictor-7f8556cf45-nnkcj

Started

Started container kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-sklearn-batcher-custom-predictor-7f8556cf45-nnkcj

Pulled

Container image "quay.io/opendatahub/sklearn-serving-runtime:odh-pr-1447" already present on machine

kserve-ci-e2e-test

kubelet

isvc-sklearn-batcher-custom-predictor-7f8556cf45-nnkcj

Started

Started container kserve-container

kserve-ci-e2e-test

kubelet

isvc-sklearn-batcher-custom-predictor-7f8556cf45-nnkcj

Pulled

Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine

kserve-ci-e2e-test

kubelet

isvc-sklearn-batcher-custom-predictor-7f8556cf45-nnkcj

Created

Created container: kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-sklearn-batcher-custom-predictor-7f8556cf45-nnkcj

Started

Started container agent

kserve-ci-e2e-test

kubelet

isvc-sklearn-batcher-custom-predictor-7f8556cf45-nnkcj

Created

Created container: agent

kserve-ci-e2e-test

kubelet

isvc-sklearn-batcher-custom-predictor-7f8556cf45-nnkcj

Pulled

Container image "quay.io/opendatahub/kserve-agent@sha256:e6428204baa077dfe80627f4a0f194329877d007e326693ea30a582520538a96" already present on machine
(x3)

kserve-ci-e2e-test

kubelet

isvc-sklearn-batcher-predictor-6995d6bbb4-lj24v

Unhealthy

Readiness probe failed: Get "https://10.134.0.31:8643/healthz": dial tcp 10.134.0.31:8643: connect: connection refused
(x11)

kserve-ci-e2e-test

kubelet

isvc-sklearn-batcher-predictor-6995d6bbb4-lj24v

Unhealthy

Readiness probe failed: dial tcp 10.134.0.31:8080: connect: connection refused
(x10)

kserve-ci-e2e-test

kubelet

isvc-sklearn-batcher-predictor-6995d6bbb4-lj24v

Unhealthy

Readiness probe failed: HTTP probe failed with statuscode: 503
(x2)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-sklearn-batcher-custom-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x2)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-sklearn-batcher-custom-predictor

FailedGetResourceMetric

failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API

kserve-ci-e2e-test

v1beta1Controllers

isvc-sklearn-batcher-custom

InferenceServiceReady

InferenceService [isvc-sklearn-batcher-custom] is Ready
(x13)

kserve-ci-e2e-test

v1beta1Controllers

isvc-sklearn-batcher-custom

VirtualServiceCRDNotFound

Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.
(x3)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-sklearn-batcher-custom-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
(x3)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-sklearn-batcher-custom-predictor

FailedGetResourceMetric

failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)

kserve-ci-e2e-test

replicaset-controller

message-dumper-predictor-c7d86bcbd

SuccessfulCreate

Created pod: message-dumper-predictor-c7d86bcbd-npgqt

kserve-ci-e2e-test

deployment-controller

message-dumper-predictor

ScalingReplicaSet

Scaled up replica set message-dumper-predictor-c7d86bcbd from 0 to 1

kserve-ci-e2e-test

kubelet

isvc-sklearn-batcher-custom-predictor-7f8556cf45-nnkcj

Killing

Stopping container agent

kserve-ci-e2e-test

kubelet

message-dumper-predictor-c7d86bcbd-npgqt

FailedMount

MountVolume.SetUp failed for volume "proxy-tls" : secret "message-dumper-predictor-serving-cert" not found

kserve-ci-e2e-test

kubelet

isvc-sklearn-batcher-custom-predictor-7f8556cf45-nnkcj

Killing

Stopping container kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-sklearn-batcher-custom-predictor-7f8556cf45-nnkcj

Killing

Stopping container kserve-container

kserve-ci-e2e-test

v1beta1Controllers

message-dumper

UpdateFailed

Failed to update status for InferenceService "message-dumper": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "message-dumper": the object has been modified; please apply your changes to the latest version and try again

kserve-ci-e2e-test

kubelet

message-dumper-predictor-c7d86bcbd-npgqt

Pulling

Pulling image "gcr.io/knative-releases/knative.dev/eventing-contrib/cmd/event_display"

kserve-ci-e2e-test

multus

message-dumper-predictor-c7d86bcbd-npgqt

AddedInterface

Add eth0 [10.134.0.33/23] from ovn-kubernetes

kserve-ci-e2e-test

kubelet

message-dumper-predictor-c7d86bcbd-npgqt

Pulled

Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine

kserve-ci-e2e-test

kubelet

message-dumper-predictor-c7d86bcbd-npgqt

Started

Started container kserve-container

kserve-ci-e2e-test

kubelet

message-dumper-predictor-c7d86bcbd-npgqt

Created

Created container: kserve-container

kserve-ci-e2e-test

kubelet

message-dumper-predictor-c7d86bcbd-npgqt

Pulled

Successfully pulled image "gcr.io/knative-releases/knative.dev/eventing-contrib/cmd/event_display" in 1.049s (1.049s including waiting). Image size: 14813193 bytes.

kserve-ci-e2e-test

kubelet

message-dumper-predictor-c7d86bcbd-npgqt

Created

Created container: kube-rbac-proxy

kserve-ci-e2e-test

kubelet

message-dumper-predictor-c7d86bcbd-npgqt

Started

Started container kube-rbac-proxy
(x7)

kserve-ci-e2e-test

v1beta1Controllers

message-dumper

VirtualServiceCRDNotFound

Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.

kserve-ci-e2e-test

v1beta1Controllers

message-dumper

InferenceServiceReady

InferenceService [message-dumper] is Ready
(x11)

kserve-ci-e2e-test

kubelet

isvc-sklearn-batcher-custom-predictor-7f8556cf45-nnkcj

Unhealthy

Readiness probe failed: dial tcp 10.134.0.32:5000: connect: connection refused
(x4)

kserve-ci-e2e-test

kubelet

isvc-sklearn-batcher-custom-predictor-7f8556cf45-nnkcj

Unhealthy

Readiness probe failed: Get "https://10.134.0.32:8643/healthz": dial tcp 10.134.0.32:8643: connect: connection refused
(x10)

kserve-ci-e2e-test

kubelet

isvc-sklearn-batcher-custom-predictor-7f8556cf45-nnkcj

Unhealthy

Readiness probe failed: HTTP probe failed with statuscode: 503

kserve-ci-e2e-test

replicaset-controller

isvc-logger-predictor-74744fb9f

SuccessfulCreate

Created pod: isvc-logger-predictor-74744fb9f-pkn5z

kserve-ci-e2e-test

v1beta1Controllers

isvc-logger

UpdateFailed

Failed to update status for InferenceService "isvc-logger": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-logger": the object has been modified; please apply your changes to the latest version and try again

kserve-ci-e2e-test

deployment-controller

isvc-logger-predictor

ScalingReplicaSet

Scaled up replica set isvc-logger-predictor-74744fb9f from 0 to 1

kserve-ci-e2e-test

multus

isvc-logger-predictor-74744fb9f-pkn5z

AddedInterface

Add eth0 [10.134.0.34/23] from ovn-kubernetes

kserve-ci-e2e-test

kubelet

isvc-logger-predictor-74744fb9f-pkn5z

Pulled

Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:85f85c16414082de64de782f602228652c10c5bace08b518e803a7e911c62e5e" already present on machine

kserve-ci-e2e-test

kubelet

isvc-logger-predictor-74744fb9f-pkn5z

Created

Created container: storage-initializer

kserve-ci-e2e-test

kubelet

isvc-logger-predictor-74744fb9f-pkn5z

Started

Started container storage-initializer

kserve-ci-e2e-test

kubelet

isvc-logger-predictor-74744fb9f-pkn5z

Started

Started container kserve-container

kserve-ci-e2e-test

kubelet

isvc-logger-predictor-74744fb9f-pkn5z

Created

Created container: kserve-container

kserve-ci-e2e-test

kubelet

isvc-logger-predictor-74744fb9f-pkn5z

Started

Started container agent

kserve-ci-e2e-test

kubelet

isvc-logger-predictor-74744fb9f-pkn5z

Created

Created container: agent

kserve-ci-e2e-test

kubelet

isvc-logger-predictor-74744fb9f-pkn5z

Pulled

Container image "quay.io/opendatahub/kserve-agent@sha256:e6428204baa077dfe80627f4a0f194329877d007e326693ea30a582520538a96" already present on machine

kserve-ci-e2e-test

kubelet

isvc-logger-predictor-74744fb9f-pkn5z

Pulled

Container image "quay.io/opendatahub/sklearn-serving-runtime:odh-pr-1447" already present on machine

kserve-ci-e2e-test

kubelet

isvc-logger-predictor-74744fb9f-pkn5z

Started

Started container kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-logger-predictor-74744fb9f-pkn5z

Created

Created container: kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-logger-predictor-74744fb9f-pkn5z

Pulled

Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine
(x2)

kserve-ci-e2e-test

horizontal-pod-autoscaler

message-dumper-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x2)

kserve-ci-e2e-test

horizontal-pod-autoscaler

message-dumper-predictor

FailedGetResourceMetric

failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x2)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-logger-predictor

FailedGetResourceMetric

failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x2)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-logger-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API

kserve-ci-e2e-test

v1beta1Controllers

isvc-logger

InferenceServiceReady

InferenceService [isvc-logger] is Ready
(x12)

kserve-ci-e2e-test

v1beta1Controllers

isvc-logger

VirtualServiceCRDNotFound

Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.
(x3)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-logger-predictor

FailedGetResourceMetric

failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
(x3)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-logger-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
(x2)

kserve-ci-e2e-test

v1beta1Controllers

isvc-lightgbm

UpdateFailed

Failed to update status for InferenceService "isvc-lightgbm": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-lightgbm": the object has been modified; please apply your changes to the latest version and try again

kserve-ci-e2e-test

kubelet

isvc-lightgbm-predictor-bdf964bd-c66zd

Created

Created container: storage-initializer

kserve-ci-e2e-test

kubelet

message-dumper-predictor-c7d86bcbd-npgqt

Killing

Stopping container kube-rbac-proxy

kserve-ci-e2e-test

kubelet

message-dumper-predictor-c7d86bcbd-npgqt

Killing

Stopping container kserve-container

kserve-ci-e2e-test

multus

isvc-lightgbm-predictor-bdf964bd-c66zd

AddedInterface

Add eth0 [10.134.0.35/23] from ovn-kubernetes

kserve-ci-e2e-test

kubelet

isvc-logger-predictor-74744fb9f-pkn5z

Killing

Stopping container kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-logger-predictor-74744fb9f-pkn5z

Killing

Stopping container agent

kserve-ci-e2e-test

kubelet

isvc-logger-predictor-74744fb9f-pkn5z

Killing

Stopping container kserve-container

kserve-ci-e2e-test

kubelet

isvc-lightgbm-predictor-bdf964bd-c66zd

Pulled

Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:85f85c16414082de64de782f602228652c10c5bace08b518e803a7e911c62e5e" already present on machine

kserve-ci-e2e-test

v1beta1Controllers

isvc-lightgbm

InternalError

fails to update InferenceService status: Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-lightgbm": the object has been modified; please apply your changes to the latest version and try again

kserve-ci-e2e-test

kubelet

isvc-lightgbm-predictor-bdf964bd-c66zd

Started

Started container storage-initializer

kserve-ci-e2e-test

replicaset-controller

isvc-lightgbm-predictor-bdf964bd

SuccessfulCreate

Created pod: isvc-lightgbm-predictor-bdf964bd-c66zd

kserve-ci-e2e-test

deployment-controller

isvc-lightgbm-predictor

ScalingReplicaSet

Scaled up replica set isvc-lightgbm-predictor-bdf964bd from 0 to 1

kserve-ci-e2e-test

kubelet

isvc-lightgbm-predictor-bdf964bd-c66zd

Pulling

Pulling image "kserve/lgbserver:latest"

kserve-ci-e2e-test

kubelet

isvc-lightgbm-predictor-bdf964bd-c66zd

Created

Created container: kserve-container

kserve-ci-e2e-test

kubelet

isvc-lightgbm-predictor-bdf964bd-c66zd

Started

Started container kserve-container

kserve-ci-e2e-test

kubelet

isvc-lightgbm-predictor-bdf964bd-c66zd

Pulled

Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine

kserve-ci-e2e-test

kubelet

isvc-lightgbm-predictor-bdf964bd-c66zd

Created

Created container: kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-lightgbm-predictor-bdf964bd-c66zd

Pulled

Successfully pulled image "kserve/lgbserver:latest" in 6.15s (6.15s including waiting). Image size: 606297871 bytes.

kserve-ci-e2e-test

kubelet

isvc-lightgbm-predictor-bdf964bd-c66zd

Started

Started container kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-logger-predictor-74744fb9f-pkn5z | Unhealthy (x10) | Readiness probe failed: dial tcp 10.134.0.34:8080: connect: connection refused
kserve-ci-e2e-test | kubelet | isvc-logger-predictor-74744fb9f-pkn5z | Unhealthy (x10) | Readiness probe failed: HTTP probe failed with statuscode: 503
kserve-ci-e2e-test | kubelet | isvc-logger-predictor-74744fb9f-pkn5z | Unhealthy (x5) | Readiness probe failed: Get "https://10.134.0.34:8643/healthz": dial tcp 10.134.0.34:8643: connect: connection refused
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-lightgbm-predictor | FailedGetResourceMetric (x2) | failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-lightgbm-predictor | FailedComputeMetricsReplicas (x2) | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-lightgbm-predictor | FailedGetResourceMetric (x3) | failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-lightgbm-predictor | FailedComputeMetricsReplicas (x3) | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
kserve-ci-e2e-test | kubelet | isvc-lightgbm-predictor-bdf964bd-c66zd | Unhealthy (x9) | Readiness probe failed: dial tcp 10.134.0.35:8080: connect: connection refused
kserve-ci-e2e-test | v1beta1Controllers | isvc-lightgbm | InferenceServiceReady | InferenceService [isvc-lightgbm] is Ready
kserve-ci-e2e-test | v1beta1Controllers | isvc-lightgbm | VirtualServiceCRDNotFound (x12) | Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.

kserve-ci-e2e-test | multus | isvc-lightgbm-runtime-predictor-749c4f6d58-km8t6 | AddedInterface | Add eth0 [10.134.0.36/23] from ovn-kubernetes
kserve-ci-e2e-test | kubelet | isvc-lightgbm-runtime-predictor-749c4f6d58-km8t6 | Pulled | Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:85f85c16414082de64de782f602228652c10c5bace08b518e803a7e911c62e5e" already present on machine
kserve-ci-e2e-test | kubelet | isvc-lightgbm-runtime-predictor-749c4f6d58-km8t6 | Created | Created container: storage-initializer
kserve-ci-e2e-test | kubelet | isvc-lightgbm-runtime-predictor-749c4f6d58-km8t6 | Started | Started container storage-initializer
kserve-ci-e2e-test | replicaset-controller | isvc-lightgbm-runtime-predictor-749c4f6d58 | SuccessfulCreate | Created pod: isvc-lightgbm-runtime-predictor-749c4f6d58-km8t6
kserve-ci-e2e-test | v1beta1Controllers | isvc-lightgbm-runtime | UpdateFailed | Failed to update status for InferenceService "isvc-lightgbm-runtime": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-lightgbm-runtime": the object has been modified; please apply your changes to the latest version and try again
kserve-ci-e2e-test | deployment-controller | isvc-lightgbm-runtime-predictor | ScalingReplicaSet | Scaled up replica set isvc-lightgbm-runtime-predictor-749c4f6d58 from 0 to 1
kserve-ci-e2e-test | kubelet | isvc-lightgbm-predictor-bdf964bd-c66zd | Killing | Stopping container kserve-container
kserve-ci-e2e-test | kubelet | isvc-lightgbm-predictor-bdf964bd-c66zd | Killing | Stopping container kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-lightgbm-predictor-bdf964bd-c66zd | Unhealthy | Readiness probe failed: Get "https://10.134.0.35:8643/healthz": dial tcp 10.134.0.35:8643: connect: connection refused
kserve-ci-e2e-test | kubelet | isvc-lightgbm-runtime-predictor-749c4f6d58-km8t6 | Pulled | Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine
kserve-ci-e2e-test | kubelet | isvc-lightgbm-runtime-predictor-749c4f6d58-km8t6 | Pulled | Container image "kserve/lgbserver:latest" already present on machine
kserve-ci-e2e-test | kubelet | isvc-lightgbm-runtime-predictor-749c4f6d58-km8t6 | Created | Created container: kserve-container
kserve-ci-e2e-test | kubelet | isvc-lightgbm-runtime-predictor-749c4f6d58-km8t6 | Started | Started container kserve-container
kserve-ci-e2e-test | kubelet | isvc-lightgbm-predictor-bdf964bd-c66zd | FailedMount (x4) | MountVolume.SetUp failed for volume "proxy-tls" : secret "isvc-lightgbm-predictor-serving-cert" not found
kserve-ci-e2e-test | kubelet | isvc-lightgbm-runtime-predictor-749c4f6d58-km8t6 | Created | Created container: kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-lightgbm-runtime-predictor-749c4f6d58-km8t6 | Started | Started container kube-rbac-proxy
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-lightgbm-runtime-predictor | FailedGetResourceMetric | failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-lightgbm-runtime-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
kserve-ci-e2e-test | v1beta1Controllers | isvc-lightgbm-runtime | VirtualServiceCRDNotFound (x11) | Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.
kserve-ci-e2e-test | v1beta1Controllers | isvc-lightgbm-runtime | InferenceServiceReady | InferenceService [isvc-lightgbm-runtime] is Ready
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-lightgbm-runtime-predictor | FailedGetResourceMetric (x4) | failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-lightgbm-runtime-predictor | FailedComputeMetricsReplicas (x4) | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)

kserve-ci-e2e-test | v1beta1Controllers | isvc-lightgbm-v2-runtime | UpdateFailed | Failed to update status for InferenceService "isvc-lightgbm-v2-runtime": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-lightgbm-v2-runtime": the object has been modified; please apply your changes to the latest version and try again
kserve-ci-e2e-test | multus | isvc-lightgbm-v2-runtime-predictor-8765c9667-49tbq | AddedInterface | Add eth0 [10.134.0.37/23] from ovn-kubernetes
kserve-ci-e2e-test | kubelet | isvc-lightgbm-runtime-predictor-749c4f6d58-km8t6 | Killing | Stopping container kserve-container
kserve-ci-e2e-test | kubelet | isvc-lightgbm-runtime-predictor-749c4f6d58-km8t6 | Killing | Stopping container kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-lightgbm-runtime-predictor-749c4f6d58-km8t6 | Unhealthy | Readiness probe failed: Get "https://10.134.0.36:8643/healthz": dial tcp 10.134.0.36:8643: connect: connection refused
kserve-ci-e2e-test | kubelet | isvc-lightgbm-v2-runtime-predictor-8765c9667-49tbq | Pulled | Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:85f85c16414082de64de782f602228652c10c5bace08b518e803a7e911c62e5e" already present on machine
kserve-ci-e2e-test | kubelet | isvc-lightgbm-v2-runtime-predictor-8765c9667-49tbq | Created | Created container: storage-initializer
kserve-ci-e2e-test | kubelet | isvc-lightgbm-v2-runtime-predictor-8765c9667-49tbq | Started | Started container storage-initializer
kserve-ci-e2e-test | deployment-controller | isvc-lightgbm-v2-runtime-predictor | ScalingReplicaSet | Scaled up replica set isvc-lightgbm-v2-runtime-predictor-8765c9667 from 0 to 1
kserve-ci-e2e-test | replicaset-controller | isvc-lightgbm-v2-runtime-predictor-8765c9667 | SuccessfulCreate | Created pod: isvc-lightgbm-v2-runtime-predictor-8765c9667-49tbq
kserve-ci-e2e-test | kubelet | isvc-lightgbm-runtime-predictor-749c4f6d58-km8t6 | Unhealthy (x10) | Readiness probe failed: dial tcp 10.134.0.36:8080: connect: connection refused
kserve-ci-e2e-test | kubelet | isvc-lightgbm-v2-runtime-predictor-8765c9667-49tbq | Pulling | Pulling image "docker.io/seldonio/mlserver:1.7.1"
kserve-ci-e2e-test | kubelet | isvc-lightgbm-v2-runtime-predictor-8765c9667-49tbq | Created | Created container: kserve-container
kserve-ci-e2e-test | kubelet | isvc-lightgbm-v2-runtime-predictor-8765c9667-49tbq | Pulled | Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine
kserve-ci-e2e-test | kubelet | isvc-lightgbm-v2-runtime-predictor-8765c9667-49tbq | Started | Started container kserve-container
kserve-ci-e2e-test | kubelet | isvc-lightgbm-v2-runtime-predictor-8765c9667-49tbq | Pulled | Successfully pulled image "docker.io/seldonio/mlserver:1.7.1" in 2m9.871s (2m9.871s including waiting). Image size: 10890461297 bytes.
kserve-ci-e2e-test | kubelet | isvc-lightgbm-v2-runtime-predictor-8765c9667-49tbq | Created | Created container: kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-lightgbm-v2-runtime-predictor-8765c9667-49tbq | Started | Started container kube-rbac-proxy
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-lightgbm-v2-runtime-predictor | FailedGetResourceMetric (x10) | failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-lightgbm-v2-runtime-predictor | FailedComputeMetricsReplicas (x10) | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
kserve-ci-e2e-test | v1beta1Controllers | isvc-lightgbm-v2-runtime | VirtualServiceCRDNotFound (x11) | Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.

kserve-ci-e2e-test | v1beta1Controllers | isvc-lightgbm-v2-runtime | InferenceServiceReady | InferenceService [isvc-lightgbm-v2-runtime] is Ready
kserve-ci-e2e-test | v1beta1Controllers | isvc-lightgbm-v2-kserve | UpdateFailed | Failed to update status for InferenceService "isvc-lightgbm-v2-kserve": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-lightgbm-v2-kserve": the object has been modified; please apply your changes to the latest version and try again
kserve-ci-e2e-test | kubelet | isvc-lightgbm-v2-runtime-predictor-8765c9667-49tbq | Killing | Stopping container kserve-container
kserve-ci-e2e-test | multus | isvc-lightgbm-v2-kserve-predictor-559bf6989-6c8zp | AddedInterface | Add eth0 [10.134.0.38/23] from ovn-kubernetes
kserve-ci-e2e-test | kubelet | isvc-lightgbm-v2-kserve-predictor-559bf6989-6c8zp | Pulled | Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:85f85c16414082de64de782f602228652c10c5bace08b518e803a7e911c62e5e" already present on machine
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-lightgbm-v2-runtime-predictor | FailedComputeMetricsReplicas (x2) | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-lightgbm-v2-runtime-predictor | FailedGetResourceMetric (x2) | failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
kserve-ci-e2e-test | deployment-controller | isvc-lightgbm-v2-kserve-predictor | ScalingReplicaSet | Scaled up replica set isvc-lightgbm-v2-kserve-predictor-559bf6989 from 0 to 1
kserve-ci-e2e-test | replicaset-controller | isvc-lightgbm-v2-kserve-predictor-559bf6989 | SuccessfulCreate | Created pod: isvc-lightgbm-v2-kserve-predictor-559bf6989-6c8zp
kserve-ci-e2e-test | kubelet | isvc-lightgbm-v2-kserve-predictor-559bf6989-6c8zp | Created | Created container: storage-initializer
kserve-ci-e2e-test | kubelet | isvc-lightgbm-v2-kserve-predictor-559bf6989-6c8zp | Started | Started container storage-initializer
kserve-ci-e2e-test | kubelet | isvc-lightgbm-v2-runtime-predictor-8765c9667-49tbq | Killing | Stopping container kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-lightgbm-v2-kserve-predictor-559bf6989-6c8zp | Started | Started container kserve-container
kserve-ci-e2e-test | kubelet | isvc-lightgbm-v2-kserve-predictor-559bf6989-6c8zp | Created | Created container: kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-lightgbm-v2-kserve-predictor-559bf6989-6c8zp | Pulled | Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine
kserve-ci-e2e-test | kubelet | isvc-lightgbm-v2-kserve-predictor-559bf6989-6c8zp | Started | Started container kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-lightgbm-v2-kserve-predictor-559bf6989-6c8zp | Created | Created container: kserve-container
kserve-ci-e2e-test | kubelet | isvc-lightgbm-v2-kserve-predictor-559bf6989-6c8zp | Pulled | Container image "kserve/lgbserver:latest" already present on machine
kserve-ci-e2e-test | kubelet | isvc-lightgbm-v2-kserve-predictor-559bf6989-6c8zp | Unhealthy (x2) | Readiness probe failed: dial tcp 10.134.0.38:8080: connect: connection refused
kserve-ci-e2e-test | v1beta1Controllers | isvc-lightgbm-v2-kserve | VirtualServiceCRDNotFound (x13) | Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.
kserve-ci-e2e-test | v1beta1Controllers | isvc-lightgbm-v2-kserve | InferenceServiceReady | InferenceService [isvc-lightgbm-v2-kserve] is Ready
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-lightgbm-v2-kserve-predictor | FailedGetResourceMetric | failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-lightgbm-v2-kserve-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
kserve-ci-e2e-test | v1beta1Controllers | isvc-mlflow-v2-runtime | InternalError | fails to update InferenceService status: Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-mlflow-v2-runtime": the object has been modified; please apply your changes to the latest version and try again
kserve-ci-e2e-test | kubelet | isvc-lightgbm-v2-kserve-predictor-559bf6989-6c8zp | Killing | Stopping container kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-lightgbm-v2-kserve-predictor-559bf6989-6c8zp | Killing | Stopping container kserve-container
kserve-ci-e2e-test | replicaset-controller | isvc-mlflow-v2-runtime-predictor-5fdb47d546 | SuccessfulCreate | Created pod: isvc-mlflow-v2-runtime-predictor-5fdb47d546-cbq98
kserve-ci-e2e-test | multus | isvc-mlflow-v2-runtime-predictor-5fdb47d546-cbq98 | AddedInterface | Add eth0 [10.134.0.39/23] from ovn-kubernetes
kserve-ci-e2e-test | v1beta1Controllers | isvc-mlflow-v2-runtime | UpdateFailed (x2) | Failed to update status for InferenceService "isvc-mlflow-v2-runtime": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-mlflow-v2-runtime": the object has been modified; please apply your changes to the latest version and try again
kserve-ci-e2e-test | deployment-controller | isvc-mlflow-v2-runtime-predictor | ScalingReplicaSet | Scaled up replica set isvc-mlflow-v2-runtime-predictor-5fdb47d546 from 0 to 1
kserve-ci-e2e-test | kubelet | isvc-mlflow-v2-runtime-predictor-5fdb47d546-cbq98 | Pulled | Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:85f85c16414082de64de782f602228652c10c5bace08b518e803a7e911c62e5e" already present on machine
kserve-ci-e2e-test | kubelet | isvc-mlflow-v2-runtime-predictor-5fdb47d546-cbq98 | Started | Started container storage-initializer
kserve-ci-e2e-test | kubelet | isvc-mlflow-v2-runtime-predictor-5fdb47d546-cbq98 | Created | Created container: storage-initializer
kserve-ci-e2e-test | kubelet | isvc-mlflow-v2-runtime-predictor-5fdb47d546-cbq98 | Created | Created container: kserve-container
kserve-ci-e2e-test | kubelet | isvc-mlflow-v2-runtime-predictor-5fdb47d546-cbq98 | Started | Started container kserve-container
kserve-ci-e2e-test | kubelet | isvc-mlflow-v2-runtime-predictor-5fdb47d546-cbq98 | Pulled | Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine
kserve-ci-e2e-test | kubelet | isvc-mlflow-v2-runtime-predictor-5fdb47d546-cbq98 | Created | Created container: kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-mlflow-v2-runtime-predictor-5fdb47d546-cbq98 | Started | Started container kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-mlflow-v2-runtime-predictor-5fdb47d546-cbq98 | Pulled | Container image "docker.io/seldonio/mlserver:1.7.1" already present on machine
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-mlflow-v2-runtime-predictor | FailedGetResourceMetric (x2) | failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-mlflow-v2-runtime-predictor | FailedComputeMetricsReplicas (x2) | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
kserve-ci-e2e-test | v1beta1Controllers | isvc-mlflow-v2-runtime | VirtualServiceCRDNotFound (x12) | Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.
kserve-ci-e2e-test | v1beta1Controllers | isvc-mlflow-v2-runtime | InferenceServiceReady | InferenceService [isvc-mlflow-v2-runtime] is Ready
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-mlflow-v2-runtime-predictor | FailedGetResourceMetric | failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-mlflow-v2-runtime-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
kserve-ci-e2e-test | kubelet | isvc-mlflow-v2-runtime-predictor-5fdb47d546-cbq98 | Killing | Stopping container kube-rbac-proxy

kserve-ci-e2e-test | kubelet | isvc-sklearn-mcp-predictor-544466b4d7-b7qtx | FailedMount | MountVolume.SetUp failed for volume "proxy-tls" : secret "isvc-sklearn-mcp-predictor-serving-cert" not found
kserve-ci-e2e-test | v1beta1Controllers | isvc-sklearn-mcp | InternalError | fails to update InferenceService status: Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-sklearn-mcp": the object has been modified; please apply your changes to the latest version and try again
kserve-ci-e2e-test | v1beta1Controllers | isvc-sklearn-mcp | UpdateFailed (x2) | Failed to update status for InferenceService "isvc-sklearn-mcp": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-sklearn-mcp": the object has been modified; please apply your changes to the latest version and try again
kserve-ci-e2e-test | deployment-controller | isvc-sklearn-mcp-predictor | ScalingReplicaSet | Scaled up replica set isvc-sklearn-mcp-predictor-544466b4d7 from 0 to 1
kserve-ci-e2e-test | replicaset-controller | isvc-sklearn-mcp-predictor-544466b4d7 | SuccessfulCreate | Created pod: isvc-sklearn-mcp-predictor-544466b4d7-b7qtx
kserve-ci-e2e-test | kubelet | isvc-mlflow-v2-runtime-predictor-5fdb47d546-cbq98 | Killing | Stopping container kserve-container
kserve-ci-e2e-test | kubelet | isvc-sklearn-mcp-predictor-544466b4d7-b7qtx | Created | Created container: storage-initializer
kserve-ci-e2e-test | kubelet | isvc-sklearn-mcp-predictor-544466b4d7-b7qtx | Pulled | Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:85f85c16414082de64de782f602228652c10c5bace08b518e803a7e911c62e5e" already present on machine
kserve-ci-e2e-test | multus | isvc-sklearn-mcp-predictor-544466b4d7-b7qtx | AddedInterface | Add eth0 [10.134.0.40/23] from ovn-kubernetes
kserve-ci-e2e-test | kubelet | isvc-sklearn-mcp-predictor-544466b4d7-b7qtx | Started | Started container storage-initializer
kserve-ci-e2e-test | kubelet | isvc-sklearn-mcp-predictor-544466b4d7-b7qtx | Started | Started container kserve-container
kserve-ci-e2e-test | kubelet | isvc-sklearn-mcp-predictor-544466b4d7-b7qtx | Pulling | Pulling image "quay.io/opendatahub/kserve-agent:latest"
kserve-ci-e2e-test | kubelet | isvc-sklearn-mcp-predictor-544466b4d7-b7qtx | Pulled | Container image "quay.io/opendatahub/sklearn-serving-runtime:odh-pr-1447" already present on machine
kserve-ci-e2e-test | kubelet | isvc-sklearn-mcp-predictor-544466b4d7-b7qtx | Created | Created container: kserve-container
kserve-ci-e2e-test | kubelet | isvc-sklearn-mcp-predictor-544466b4d7-b7qtx | Pulled | Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine
kserve-ci-e2e-test | kubelet | isvc-sklearn-mcp-predictor-544466b4d7-b7qtx | Pulled | Successfully pulled image "quay.io/opendatahub/kserve-agent:latest" in 2.496s (2.496s including waiting). Image size: 237801512 bytes.
kserve-ci-e2e-test | kubelet | isvc-sklearn-mcp-predictor-544466b4d7-b7qtx | Created | Created container: kserve-agent
kserve-ci-e2e-test | kubelet | isvc-sklearn-mcp-predictor-544466b4d7-b7qtx | Started | Started container kserve-agent
kserve-ci-e2e-test | kubelet | isvc-sklearn-mcp-predictor-544466b4d7-b7qtx | Created | Created container: kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-sklearn-mcp-predictor-544466b4d7-b7qtx | Started | Started container kube-rbac-proxy
kserve-ci-e2e-test | v1beta1Controllers | isvc-sklearn-mcp | VirtualServiceCRDNotFound (x13) | Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.
kserve-ci-e2e-test | v1beta1Controllers | isvc-sklearn-mcp | InferenceServiceReady | InferenceService [isvc-sklearn-mcp] is Ready
kserve-ci-e2e-test | kubelet | isvc-sklearn-mcp-predictor-544466b4d7-b7qtx | Killing | Stopping container kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-sklearn-mcp-predictor-544466b4d7-b7qtx | Killing | Stopping container kserve-agent
kserve-ci-e2e-test | kubelet | isvc-sklearn-mcp-predictor-544466b4d7-b7qtx | Killing | Stopping container kserve-container
kserve-ci-e2e-test | v1beta1Controllers | isvc-paddle | UpdateFailed | Failed to update status for InferenceService "isvc-paddle": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-paddle": the object has been modified; please apply your changes to the latest version and try again
kserve-ci-e2e-test | deployment-controller | isvc-paddle-predictor | ScalingReplicaSet | Scaled up replica set isvc-paddle-predictor-6b8b7cfb4b from 0 to 1
kserve-ci-e2e-test | replicaset-controller | isvc-paddle-predictor-6b8b7cfb4b | SuccessfulCreate | Created pod: isvc-paddle-predictor-6b8b7cfb4b-n6ht9
kserve-ci-e2e-test | kubelet | isvc-paddle-predictor-6b8b7cfb4b-n6ht9 | Started | Started container storage-initializer
kserve-ci-e2e-test | kubelet | isvc-paddle-predictor-6b8b7cfb4b-n6ht9 | Pulled | Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:85f85c16414082de64de782f602228652c10c5bace08b518e803a7e911c62e5e" already present on machine
kserve-ci-e2e-test | multus | isvc-paddle-predictor-6b8b7cfb4b-n6ht9 | AddedInterface | Add eth0 [10.134.0.41/23] from ovn-kubernetes
kserve-ci-e2e-test | kubelet | isvc-paddle-predictor-6b8b7cfb4b-n6ht9 | Created | Created container: storage-initializer
kserve-ci-e2e-test | kubelet | isvc-paddle-predictor-6b8b7cfb4b-n6ht9 | Pulling | Pulling image "kserve/paddleserver:latest"
kserve-ci-e2e-test | kubelet | isvc-paddle-predictor-6b8b7cfb4b-n6ht9 | Started | Started container kserve-container
kserve-ci-e2e-test | kubelet | isvc-paddle-predictor-6b8b7cfb4b-n6ht9 | Pulled | Successfully pulled image "kserve/paddleserver:latest" in 10.771s (10.771s including waiting). Image size: 1162830075 bytes.
kserve-ci-e2e-test | kubelet | isvc-paddle-predictor-6b8b7cfb4b-n6ht9 | Started | Started container kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-paddle-predictor-6b8b7cfb4b-n6ht9 | Created | Created container: kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-paddle-predictor-6b8b7cfb4b-n6ht9 | Pulled | Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine
kserve-ci-e2e-test | kubelet | isvc-paddle-predictor-6b8b7cfb4b-n6ht9 | Created | Created container: kserve-container
kserve-ci-e2e-test | kubelet | isvc-sklearn-mcp-predictor-544466b4d7-b7qtx | Unhealthy (x3) | Readiness probe failed: Get "http://10.134.0.40:8080/v1/models/isvc-sklearn-mcp": dial tcp 10.134.0.40:8080: connect: connection refused
kserve-ci-e2e-test | kubelet | isvc-sklearn-mcp-predictor-544466b4d7-b7qtx | Unhealthy (x6) | Readiness probe failed: Get "https://10.134.0.40:8643/healthz": dial tcp 10.134.0.40:8643: connect: connection refused
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-paddle-predictor | FailedGetResourceMetric (x3) | failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-paddle-predictor | FailedComputeMetricsReplicas (x3) | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-paddle-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-paddle-predictor | FailedGetResourceMetric | failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
kserve-ci-e2e-test | v1beta1Controllers | isvc-paddle | VirtualServiceCRDNotFound (x11) | Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.
kserve-ci-e2e-test | v1beta1Controllers | isvc-paddle | InferenceServiceReady | InferenceService [isvc-paddle] is Ready
kserve-ci-e2e-test | kubelet | isvc-paddle-predictor-6b8b7cfb4b-n6ht9 | Killing | Stopping container kube-rbac-proxy

kserve-ci-e2e-test | replicaset-controller | isvc-paddle-runtime-predictor-7f4d4f9dc8 | SuccessfulCreate | Created pod: isvc-paddle-runtime-predictor-7f4d4f9dc8-gbvvw
kserve-ci-e2e-test | v1beta1Controllers | isvc-paddle-runtime | UpdateFailed | Failed to update status for InferenceService "isvc-paddle-runtime": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-paddle-runtime": the object has been modified; please apply your changes to the latest version and try again
kserve-ci-e2e-test | kubelet | isvc-paddle-runtime-predictor-7f4d4f9dc8-gbvvw | FailedMount | MountVolume.SetUp failed for volume "proxy-tls" : secret "isvc-paddle-runtime-predictor-serving-cert" not found
kserve-ci-e2e-test | kubelet | isvc-paddle-predictor-6b8b7cfb4b-n6ht9 | Killing | Stopping container kserve-container
kserve-ci-e2e-test | deployment-controller | isvc-paddle-runtime-predictor | ScalingReplicaSet | Scaled up replica set isvc-paddle-runtime-predictor-7f4d4f9dc8 from 0 to 1
kserve-ci-e2e-test | multus | isvc-paddle-runtime-predictor-7f4d4f9dc8-gbvvw | AddedInterface | Add eth0 [10.134.0.42/23] from ovn-kubernetes
kserve-ci-e2e-test | kubelet | isvc-paddle-runtime-predictor-7f4d4f9dc8-gbvvw | Pulled | Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:85f85c16414082de64de782f602228652c10c5bace08b518e803a7e911c62e5e" already present on machine
kserve-ci-e2e-test | kubelet | isvc-paddle-runtime-predictor-7f4d4f9dc8-gbvvw | Created | Created container: storage-initializer
kserve-ci-e2e-test | kubelet | isvc-paddle-runtime-predictor-7f4d4f9dc8-gbvvw | Started | Started container storage-initializer

kserve-ci-e2e-test | kubelet | isvc-paddle-predictor-6b8b7cfb4b-n6ht9 | Unhealthy | Readiness probe failed: Get "https://10.134.0.41:8643/healthz": dial tcp 10.134.0.41:8643: connect: connection refused
kserve-ci-e2e-test | kubelet | isvc-paddle-predictor-6b8b7cfb4b-n6ht9 | Unhealthy (x7) | Readiness probe failed: dial tcp 10.134.0.41:8080: connect: connection refused
kserve-ci-e2e-test | kubelet | isvc-paddle-runtime-predictor-7f4d4f9dc8-gbvvw | Created | Created container: kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-paddle-runtime-predictor-7f4d4f9dc8-gbvvw | Started | Started container kserve-container
kserve-ci-e2e-test | kubelet | isvc-paddle-runtime-predictor-7f4d4f9dc8-gbvvw | Pulled | Container image "kserve/paddleserver:latest" already present on machine
kserve-ci-e2e-test | kubelet | isvc-paddle-runtime-predictor-7f4d4f9dc8-gbvvw | Pulled | Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine
kserve-ci-e2e-test | kubelet | isvc-paddle-runtime-predictor-7f4d4f9dc8-gbvvw | Created | Created container: kserve-container
kserve-ci-e2e-test | kubelet | isvc-paddle-runtime-predictor-7f4d4f9dc8-gbvvw | Started | Started container kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-paddle-runtime-predictor-7f4d4f9dc8-gbvvw | Unhealthy (x6) | Readiness probe failed: dial tcp 10.134.0.42:8080: connect: connection refused
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-paddle-runtime-predictor | FailedGetResourceMetric (x3) | failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-paddle-runtime-predictor | FailedComputeMetricsReplicas (x3) | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
kserve-ci-e2e-test | v1beta1Controllers | isvc-paddle-runtime | VirtualServiceCRDNotFound (x12) | Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.
kserve-ci-e2e-test | v1beta1Controllers | isvc-paddle-runtime | InferenceServiceReady | InferenceService [isvc-paddle-runtime] is Ready
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-paddle-runtime-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-paddle-runtime-predictor | FailedGetResourceMetric | failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
kserve-ci-e2e-test | v1beta1Controllers | isvc-paddle-v2-kserve | UpdateFailed | Failed to update status for InferenceService "isvc-paddle-v2-kserve": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-paddle-v2-kserve": the object has been modified; please apply your changes to the latest version and try again
kserve-ci-e2e-test | kubelet | isvc-paddle-runtime-predictor-7f4d4f9dc8-gbvvw | Killing | Stopping container kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-paddle-runtime-predictor-7f4d4f9dc8-gbvvw | Killing | Stopping container kserve-container
kserve-ci-e2e-test | replicaset-controller | isvc-paddle-v2-kserve-predictor-7dbd59854 | SuccessfulCreate | Created pod: isvc-paddle-v2-kserve-predictor-7dbd59854-bnccp
kserve-ci-e2e-test | deployment-controller | isvc-paddle-v2-kserve-predictor | ScalingReplicaSet | Scaled up replica set isvc-paddle-v2-kserve-predictor-7dbd59854 from 0 to 1
kserve-ci-e2e-test | multus | isvc-paddle-v2-kserve-predictor-7dbd59854-bnccp | AddedInterface | Add eth0 [10.134.0.43/23] from ovn-kubernetes
kserve-ci-e2e-test | kubelet | isvc-paddle-v2-kserve-predictor-7dbd59854-bnccp | Started | Started container storage-initializer
kserve-ci-e2e-test | kubelet | isvc-paddle-v2-kserve-predictor-7dbd59854-bnccp | Pulled | Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:85f85c16414082de64de782f602228652c10c5bace08b518e803a7e911c62e5e" already present on machine
kserve-ci-e2e-test | kubelet | isvc-paddle-v2-kserve-predictor-7dbd59854-bnccp | FailedMount | MountVolume.SetUp failed for volume "proxy-tls" : secret "isvc-paddle-v2-kserve-predictor-serving-cert" not found
kserve-ci-e2e-test | kubelet | isvc-paddle-v2-kserve-predictor-7dbd59854-bnccp | Created | Created container: storage-initializer
kserve-ci-e2e-test | kubelet | isvc-paddle-runtime-predictor-7f4d4f9dc8-gbvvw | Unhealthy | Readiness probe failed: Get "https://10.134.0.42:8643/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
kserve-ci-e2e-test | kubelet | isvc-paddle-runtime-predictor-7f4d4f9dc8-gbvvw | Unhealthy | Readiness probe failed: dial tcp 10.134.0.42:8080: i/o timeout
kserve-ci-e2e-test | kubelet | isvc-paddle-v2-kserve-predictor-7dbd59854-bnccp | Pulled | Container image "kserve/paddleserver:latest" already present on machine
kserve-ci-e2e-test | kubelet | isvc-paddle-v2-kserve-predictor-7dbd59854-bnccp | Created | Created container: kserve-container
kserve-ci-e2e-test | kubelet | isvc-paddle-v2-kserve-predictor-7dbd59854-bnccp | Started | Started container kserve-container
kserve-ci-e2e-test | kubelet | isvc-paddle-v2-kserve-predictor-7dbd59854-bnccp | Pulled | Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine
kserve-ci-e2e-test | kubelet | isvc-paddle-v2-kserve-predictor-7dbd59854-bnccp | Created | Created container: kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-paddle-v2-kserve-predictor-7dbd59854-bnccp | Started | Started container kube-rbac-proxy
(x3)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-paddle-v2-kserve-predictor

FailedGetResourceMetric

failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x3)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-paddle-v2-kserve-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x12)

kserve-ci-e2e-test

v1beta1Controllers

isvc-paddle-v2-kserve

VirtualServiceCRDNotFound

Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.

kserve-ci-e2e-test

v1beta1Controllers

isvc-paddle-v2-kserve

InferenceServiceReady

InferenceService [isvc-paddle-v2-kserve] is Ready

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-paddle-v2-kserve-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-paddle-v2-kserve-predictor

FailedGetResourceMetric

failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)

kserve-ci-e2e-test

multus

isvc-pmml-predictor-8bb578669-zxg86

AddedInterface

Add eth0 [10.134.0.44/23] from ovn-kubernetes

kserve-ci-e2e-test

replicaset-controller

isvc-pmml-predictor-8bb578669

SuccessfulCreate

Created pod: isvc-pmml-predictor-8bb578669-zxg86

kserve-ci-e2e-test

kubelet

isvc-paddle-v2-kserve-predictor-7dbd59854-bnccp

Killing

Stopping container kserve-container

kserve-ci-e2e-test

deployment-controller

isvc-pmml-predictor

ScalingReplicaSet

Scaled up replica set isvc-pmml-predictor-8bb578669 from 0 to 1

kserve-ci-e2e-test

v1beta1Controllers

isvc-pmml

UpdateFailed

Failed to update status for InferenceService "isvc-pmml": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-pmml": the object has been modified; please apply your changes to the latest version and try again

kserve-ci-e2e-test

kubelet

isvc-paddle-v2-kserve-predictor-7dbd59854-bnccp

Killing

Stopping container kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-pmml-predictor-8bb578669-zxg86

Pulled

Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:85f85c16414082de64de782f602228652c10c5bace08b518e803a7e911c62e5e" already present on machine

kserve-ci-e2e-test

kubelet

isvc-pmml-predictor-8bb578669-zxg86

Created

Created container: storage-initializer

kserve-ci-e2e-test

kubelet

isvc-pmml-predictor-8bb578669-zxg86

Started

Started container storage-initializer

kserve-ci-e2e-test

kubelet

isvc-paddle-v2-kserve-predictor-7dbd59854-bnccp

Unhealthy

Readiness probe failed: Get "https://10.134.0.43:8643/healthz": dial tcp 10.134.0.43:8643: connect: connection refused
(x7)

kserve-ci-e2e-test

kubelet

isvc-paddle-v2-kserve-predictor-7dbd59854-bnccp

Unhealthy

Readiness probe failed: dial tcp 10.134.0.43:8080: connect: connection refused

kserve-ci-e2e-test

kubelet

isvc-pmml-predictor-8bb578669-zxg86

Pulling

Pulling image "kserve/pmmlserver:latest"

kserve-ci-e2e-test

kubelet

isvc-pmml-predictor-8bb578669-zxg86

Pulled

Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine

kserve-ci-e2e-test

kubelet

isvc-pmml-predictor-8bb578669-zxg86

Started

Started container kserve-container

kserve-ci-e2e-test

kubelet

isvc-pmml-predictor-8bb578669-zxg86

Created

Created container: kserve-container

kserve-ci-e2e-test

kubelet

isvc-pmml-predictor-8bb578669-zxg86

Created

Created container: kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-pmml-predictor-8bb578669-zxg86

Started

Started container kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-pmml-predictor-8bb578669-zxg86

Pulled

Successfully pulled image "kserve/pmmlserver:latest" in 6.814s (6.814s including waiting). Image size: 800927094 bytes.
(x2)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-pmml-predictor

FailedGetResourceMetric

failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x2)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-pmml-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x9)

kserve-ci-e2e-test

kubelet

isvc-pmml-predictor-8bb578669-zxg86

Unhealthy

Readiness probe failed: dial tcp 10.134.0.44:8080: connect: connection refused

kserve-ci-e2e-test

v1beta1Controllers

isvc-pmml

InferenceServiceReady

InferenceService [isvc-pmml] is Ready
(x12)

kserve-ci-e2e-test

v1beta1Controllers

isvc-pmml

VirtualServiceCRDNotFound

Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.
(x4)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-pmml-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
(x4)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-pmml-predictor

FailedGetResourceMetric

failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)

kserve-ci-e2e-test

kubelet

isvc-pmml-predictor-8bb578669-zxg86

Killing

Stopping container kserve-container

kserve-ci-e2e-test

kubelet

isvc-pmml-predictor-8bb578669-zxg86

Killing

Stopping container kube-rbac-proxy

kserve-ci-e2e-test

deployment-controller

isvc-pmml-runtime-predictor

ScalingReplicaSet

Scaled up replica set isvc-pmml-runtime-predictor-67bc544947 from 0 to 1

kserve-ci-e2e-test

kubelet

isvc-pmml-runtime-predictor-67bc544947-bck66

FailedMount

MountVolume.SetUp failed for volume "proxy-tls" : secret "isvc-pmml-runtime-predictor-serving-cert" not found
(x2)

kserve-ci-e2e-test

v1beta1Controllers

isvc-pmml-runtime

UpdateFailed

Failed to update status for InferenceService "isvc-pmml-runtime": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-pmml-runtime": the object has been modified; please apply your changes to the latest version and try again

kserve-ci-e2e-test

replicaset-controller

isvc-pmml-runtime-predictor-67bc544947

SuccessfulCreate

Created pod: isvc-pmml-runtime-predictor-67bc544947-bck66

kserve-ci-e2e-test

v1beta1Controllers

isvc-pmml-runtime

InternalError

fails to update InferenceService status: Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-pmml-runtime": the object has been modified; please apply your changes to the latest version and try again

kserve-ci-e2e-test

kubelet

isvc-pmml-runtime-predictor-67bc544947-bck66

Pulling

Pulling image "quay.io/opendatahub/kserve-storage-initializer@sha256:85f85c16414082de64de782f602228652c10c5bace08b518e803a7e911c62e5e"

kserve-ci-e2e-test

multus

isvc-pmml-runtime-predictor-67bc544947-bck66

AddedInterface

Add eth0 [10.133.0.21/23] from ovn-kubernetes

kserve-ci-e2e-test

kubelet

isvc-pmml-predictor-8bb578669-zxg86

Unhealthy

Readiness probe failed: Get "https://10.134.0.44:8643/healthz": dial tcp 10.134.0.44:8643: connect: connection refused

kserve-ci-e2e-test

kubelet

isvc-pmml-runtime-predictor-67bc544947-bck66

Started

Started container storage-initializer

kserve-ci-e2e-test

kubelet

isvc-pmml-runtime-predictor-67bc544947-bck66

Created

Created container: storage-initializer

kserve-ci-e2e-test

kubelet

isvc-pmml-runtime-predictor-67bc544947-bck66

Pulled

Successfully pulled image "quay.io/opendatahub/kserve-storage-initializer@sha256:85f85c16414082de64de782f602228652c10c5bace08b518e803a7e911c62e5e" in 3.79s (3.79s including waiting). Image size: 299849144 bytes.

kserve-ci-e2e-test

kubelet

isvc-pmml-runtime-predictor-67bc544947-bck66

Pulling

Pulling image "kserve/pmmlserver:latest"

kserve-ci-e2e-test

kubelet

isvc-pmml-runtime-predictor-67bc544947-bck66

Pulling

Pulling image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3"

kserve-ci-e2e-test

kubelet

isvc-pmml-runtime-predictor-67bc544947-bck66

Started

Started container kserve-container

kserve-ci-e2e-test

kubelet

isvc-pmml-runtime-predictor-67bc544947-bck66

Created

Created container: kserve-container

kserve-ci-e2e-test

kubelet

isvc-pmml-runtime-predictor-67bc544947-bck66

Pulled

Successfully pulled image "kserve/pmmlserver:latest" in 6.755s (6.755s including waiting). Image size: 800927094 bytes.

kserve-ci-e2e-test

kubelet

isvc-pmml-runtime-predictor-67bc544947-bck66

Pulled

Successfully pulled image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" in 2.296s (2.296s including waiting). Image size: 211946088 bytes.

kserve-ci-e2e-test

kubelet

isvc-pmml-runtime-predictor-67bc544947-bck66

Created

Created container: kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-pmml-runtime-predictor-67bc544947-bck66

Started

Started container kube-rbac-proxy
(x2)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-pmml-runtime-predictor

FailedGetResourceMetric

failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x2)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-pmml-runtime-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x9)

kserve-ci-e2e-test

kubelet

isvc-pmml-runtime-predictor-67bc544947-bck66

Unhealthy

Readiness probe failed: dial tcp 10.133.0.21:8080: connect: connection refused
(x4)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-pmml-runtime-predictor

FailedGetResourceMetric

failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
(x4)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-pmml-runtime-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
(x14)

kserve-ci-e2e-test

v1beta1Controllers

isvc-pmml-runtime

VirtualServiceCRDNotFound

Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.

kserve-ci-e2e-test

v1beta1Controllers

isvc-pmml-runtime

InferenceServiceReady

InferenceService [isvc-pmml-runtime] is Ready

kserve-ci-e2e-test

multus

isvc-pmml-v2-kserve-predictor-6578f8ffc7-l9zxs

AddedInterface

Add eth0 [10.133.0.22/23] from ovn-kubernetes
(x2)

kserve-ci-e2e-test

v1beta1Controllers

isvc-pmml-v2-kserve

UpdateFailed

Failed to update status for InferenceService "isvc-pmml-v2-kserve": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-pmml-v2-kserve": the object has been modified; please apply your changes to the latest version and try again

kserve-ci-e2e-test

replicaset-controller

isvc-pmml-v2-kserve-predictor-6578f8ffc7

SuccessfulCreate

Created pod: isvc-pmml-v2-kserve-predictor-6578f8ffc7-l9zxs

kserve-ci-e2e-test

deployment-controller

isvc-pmml-v2-kserve-predictor

ScalingReplicaSet

Scaled up replica set isvc-pmml-v2-kserve-predictor-6578f8ffc7 from 0 to 1

kserve-ci-e2e-test

kubelet

isvc-pmml-v2-kserve-predictor-6578f8ffc7-l9zxs

Pulled

Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:85f85c16414082de64de782f602228652c10c5bace08b518e803a7e911c62e5e" already present on machine

kserve-ci-e2e-test

kubelet

isvc-pmml-v2-kserve-predictor-6578f8ffc7-l9zxs

Created

Created container: storage-initializer

kserve-ci-e2e-test

kubelet

isvc-pmml-runtime-predictor-67bc544947-bck66

Killing

Stopping container kserve-container

kserve-ci-e2e-test

kubelet

isvc-pmml-runtime-predictor-67bc544947-bck66

Killing

Stopping container kube-rbac-proxy

kserve-ci-e2e-test

v1beta1Controllers

isvc-pmml-v2-kserve

InternalError

fails to update InferenceService status: Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-pmml-v2-kserve": the object has been modified; please apply your changes to the latest version and try again

kserve-ci-e2e-test

kubelet

isvc-pmml-v2-kserve-predictor-6578f8ffc7-l9zxs

Started

Started container storage-initializer

kserve-ci-e2e-test

kubelet

isvc-pmml-v2-kserve-predictor-6578f8ffc7-l9zxs

Pulled

Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine

kserve-ci-e2e-test

kubelet

isvc-pmml-v2-kserve-predictor-6578f8ffc7-l9zxs

Pulled

Container image "kserve/pmmlserver:latest" already present on machine

kserve-ci-e2e-test

kubelet

isvc-pmml-v2-kserve-predictor-6578f8ffc7-l9zxs

Created

Created container: kserve-container

kserve-ci-e2e-test

kubelet

isvc-pmml-v2-kserve-predictor-6578f8ffc7-l9zxs

Started

Started container kserve-container

kserve-ci-e2e-test

kubelet

isvc-pmml-v2-kserve-predictor-6578f8ffc7-l9zxs

Created

Created container: kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-pmml-v2-kserve-predictor-6578f8ffc7-l9zxs

Started

Started container kube-rbac-proxy
(x3)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-pmml-v2-kserve-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x3)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-pmml-v2-kserve-predictor

FailedGetResourceMetric

failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x3)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-pmml-v2-kserve-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
(x3)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-pmml-v2-kserve-predictor

FailedGetResourceMetric

failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
(x12)

kserve-ci-e2e-test

v1beta1Controllers

isvc-pmml-v2-kserve

VirtualServiceCRDNotFound

Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.

kserve-ci-e2e-test

v1beta1Controllers

isvc-pmml-v2-kserve

InferenceServiceReady

InferenceService [isvc-pmml-v2-kserve] is Ready

kserve-ci-e2e-test

replicaset-controller

isvc-primary-d8276f-predictor-5555dbfb49

SuccessfulCreate

Created pod: isvc-primary-d8276f-predictor-5555dbfb49-pz2td

kserve-ci-e2e-test

deployment-controller

isvc-primary-d8276f-predictor

ScalingReplicaSet

Scaled up replica set isvc-primary-d8276f-predictor-5555dbfb49 from 0 to 1

kserve-ci-e2e-test

kubelet

isvc-pmml-v2-kserve-predictor-6578f8ffc7-l9zxs

Killing

Stopping container kserve-container

kserve-ci-e2e-test

kubelet

isvc-pmml-v2-kserve-predictor-6578f8ffc7-l9zxs

Killing

Stopping container kube-rbac-proxy

kserve-ci-e2e-test

v1beta1Controllers

isvc-primary-d8276f

InternalError

fails to update InferenceService status: Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-primary-d8276f": the object has been modified; please apply your changes to the latest version and try again

kserve-ci-e2e-test

kubelet

isvc-primary-d8276f-predictor-5555dbfb49-pz2td

Started

Started container storage-initializer

kserve-ci-e2e-test

kubelet

isvc-primary-d8276f-predictor-5555dbfb49-pz2td

Created

Created container: storage-initializer

kserve-ci-e2e-test

kubelet

isvc-primary-d8276f-predictor-5555dbfb49-pz2td

Pulled

Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:85f85c16414082de64de782f602228652c10c5bace08b518e803a7e911c62e5e" already present on machine

kserve-ci-e2e-test

multus

isvc-primary-d8276f-predictor-5555dbfb49-pz2td

AddedInterface

Add eth0 [10.134.0.45/23] from ovn-kubernetes

kserve-ci-e2e-test

v1beta1Controllers

isvc-primary-d8276f

UpdateFailed

Failed to update status for InferenceService "isvc-primary-d8276f": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-primary-d8276f": the object has been modified; please apply your changes to the latest version and try again
(x11)

kserve-ci-e2e-test

kubelet

isvc-pmml-v2-kserve-predictor-6578f8ffc7-l9zxs

Unhealthy

Readiness probe failed: dial tcp 10.133.0.22:8080: connect: connection refused

kserve-ci-e2e-test

kubelet

isvc-pmml-v2-kserve-predictor-6578f8ffc7-l9zxs

Unhealthy

Readiness probe failed: Get "https://10.133.0.22:8643/healthz": dial tcp 10.133.0.22:8643: connect: connection refused

kserve-ci-e2e-test

kubelet

isvc-primary-d8276f-predictor-5555dbfb49-pz2td

Pulled

Container image "quay.io/opendatahub/sklearn-serving-runtime:odh-pr-1447" already present on machine

kserve-ci-e2e-test

kubelet

isvc-primary-d8276f-predictor-5555dbfb49-pz2td

Created

Created container: kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-primary-d8276f-predictor-5555dbfb49-pz2td

Created

Created container: kserve-container

kserve-ci-e2e-test

kubelet

isvc-primary-d8276f-predictor-5555dbfb49-pz2td

Started

Started container kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-primary-d8276f-predictor-5555dbfb49-pz2td

Started

Started container kserve-container

kserve-ci-e2e-test

kubelet

isvc-primary-d8276f-predictor-5555dbfb49-pz2td

Pulled

Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine
(x2)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-primary-d8276f-predictor

FailedGetResourceMetric

failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x2)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-primary-d8276f-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x7)

kserve-ci-e2e-test

kubelet

isvc-primary-d8276f-predictor-5555dbfb49-pz2td

Unhealthy

Readiness probe failed: dial tcp 10.134.0.45:8080: connect: connection refused
(x11)

kserve-ci-e2e-test

v1beta1Controllers

isvc-primary-d8276f

VirtualServiceCRDNotFound

Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.

kserve-ci-e2e-test

v1beta1Controllers

isvc-primary-d8276f

InferenceServiceReady

InferenceService [isvc-primary-d8276f] is Ready

kserve-ci-e2e-test

v1beta1Controllers

isvc-secondary-d8276f

InternalError

fails to update InferenceService status: Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-secondary-d8276f": the object has been modified; please apply your changes to the latest version and try again

kserve-ci-e2e-test

v1beta1Controllers

isvc-secondary-d8276f

UpdateFailed

Failed to update status for InferenceService "isvc-secondary-d8276f": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-secondary-d8276f": the object has been modified; please apply your changes to the latest version and try again

kserve-ci-e2e-test

kubelet

isvc-secondary-d8276f-predictor-85d5b564f4-rz7x7

FailedMount

MountVolume.SetUp failed for volume "proxy-tls" : secret "isvc-secondary-d8276f-predictor-serving-cert" not found

kserve-ci-e2e-test

deployment-controller

isvc-secondary-d8276f-predictor

ScalingReplicaSet

Scaled up replica set isvc-secondary-d8276f-predictor-85d5b564f4 from 0 to 1

kserve-ci-e2e-test

replicaset-controller

isvc-secondary-d8276f-predictor-85d5b564f4

SuccessfulCreate

Created pod: isvc-secondary-d8276f-predictor-85d5b564f4-rz7x7

kserve-ci-e2e-test

multus

isvc-secondary-d8276f-predictor-85d5b564f4-rz7x7

AddedInterface

Add eth0 [10.134.0.46/23] from ovn-kubernetes
(x2)

kserve-ci-e2e-test

kubelet

isvc-secondary-d8276f-predictor-85d5b564f4-rz7x7

Started

Started container storage-initializer
(x2)

kserve-ci-e2e-test

kubelet

isvc-secondary-d8276f-predictor-85d5b564f4-rz7x7

Created

Created container: storage-initializer
(x3)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-primary-d8276f-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
(x2)

kserve-ci-e2e-test

kubelet

isvc-secondary-d8276f-predictor-85d5b564f4-rz7x7

Pulled

Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:85f85c16414082de64de782f602228652c10c5bace08b518e803a7e911c62e5e" already present on machine
(x3)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-primary-d8276f-predictor

FailedGetResourceMetric

failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)

kserve-ci-e2e-test

kubelet

isvc-secondary-d8276f-predictor-85d5b564f4-rz7x7

BackOff

Back-off restarting failed container storage-initializer in pod isvc-secondary-d8276f-predictor-85d5b564f4-rz7x7_kserve-ci-e2e-test(99ea2f55-4922-492d-a1dc-262944b17b3a)
(x13)

kserve-ci-e2e-test

v1beta1Controllers

isvc-secondary-d8276f

VirtualServiceCRDNotFound

Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-secondary-d8276f-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-secondary-d8276f-predictor

FailedGetResourceMetric

failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API

kserve-ci-e2e-test

replicaset-controller

isvc-init-fail-30f3f9-predictor-8694459b44

SuccessfulCreate

Created pod: isvc-init-fail-30f3f9-predictor-8694459b44-4f5lj

kserve-ci-e2e-test

v1beta1Controllers

isvc-init-fail-30f3f9

InternalError

fails to update InferenceService status: Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-init-fail-30f3f9": the object has been modified; please apply your changes to the latest version and try again

kserve-ci-e2e-test

kubelet

isvc-primary-d8276f-predictor-5555dbfb49-pz2td

Killing

Stopping container kserve-container

kserve-ci-e2e-test

kubelet

isvc-primary-d8276f-predictor-5555dbfb49-pz2td

Killing

Stopping container kube-rbac-proxy

kserve-ci-e2e-test

v1beta1Controllers

isvc-init-fail-30f3f9

UpdateFailed

Failed to update status for InferenceService "isvc-init-fail-30f3f9": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-init-fail-30f3f9": the object has been modified; please apply your changes to the latest version and try again

kserve-ci-e2e-test

deployment-controller

isvc-init-fail-30f3f9-predictor

ScalingReplicaSet

Scaled up replica set isvc-init-fail-30f3f9-predictor-8694459b44 from 0 to 1

kserve-ci-e2e-test

kubelet

isvc-init-fail-30f3f9-predictor-8694459b44-4f5lj

FailedMount

MountVolume.SetUp failed for volume "proxy-tls" : secret "isvc-init-fail-30f3f9-predictor-serving-cert" not found

kserve-ci-e2e-test

multus

isvc-init-fail-30f3f9-predictor-8694459b44-4f5lj

AddedInterface

Add eth0 [10.134.0.47/23] from ovn-kubernetes

kserve-ci-e2e-test

kubelet

isvc-primary-d8276f-predictor-5555dbfb49-pz2td

Unhealthy

Readiness probe failed: Get "https://10.134.0.45:8643/healthz": dial tcp 10.134.0.45:8643: connect: connection refused
(x2)

kserve-ci-e2e-test

kubelet

isvc-init-fail-30f3f9-predictor-8694459b44-4f5lj

Pulled

Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:85f85c16414082de64de782f602228652c10c5bace08b518e803a7e911c62e5e" already present on machine
(x2)

kserve-ci-e2e-test

kubelet

isvc-init-fail-30f3f9-predictor-8694459b44-4f5lj

Created

Created container: storage-initializer
(x2)

kserve-ci-e2e-test

kubelet

isvc-init-fail-30f3f9-predictor-8694459b44-4f5lj

Started

Started container storage-initializer

kserve-ci-e2e-test

kubelet

isvc-init-fail-30f3f9-predictor-8694459b44-4f5lj

BackOff

Back-off restarting failed container storage-initializer in pod isvc-init-fail-30f3f9-predictor-8694459b44-4f5lj_kserve-ci-e2e-test(dc77c860-8b3d-4516-8561-7dd8ffe42bbe)
(x13)

kserve-ci-e2e-test

v1beta1Controllers

isvc-init-fail-30f3f9

VirtualServiceCRDNotFound

Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.

kserve-ci-e2e-test

deployment-controller

isvc-predictive-sklearn-predictor

ScalingReplicaSet

Scaled up replica set isvc-predictive-sklearn-predictor-cd7c759c9 from 0 to 1

kserve-ci-e2e-test

v1beta1Controllers

isvc-predictive-sklearn

InternalError

fails to update InferenceService status: Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-predictive-sklearn": the object has been modified; please apply your changes to the latest version and try again
(x2)

kserve-ci-e2e-test

v1beta1Controllers

isvc-predictive-sklearn

UpdateFailed

Failed to update status for InferenceService "isvc-predictive-sklearn": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-predictive-sklearn": the object has been modified; please apply your changes to the latest version and try again

kserve-ci-e2e-test

kubelet

isvc-predictive-sklearn-predictor-cd7c759c9-x5cmn

FailedMount

MountVolume.SetUp failed for volume "proxy-tls" : secret "isvc-predictive-sklearn-predictor-serving-cert" not found

kserve-ci-e2e-test

replicaset-controller

isvc-predictive-sklearn-predictor-cd7c759c9

SuccessfulCreate

Created pod: isvc-predictive-sklearn-predictor-cd7c759c9-x5cmn

kserve-ci-e2e-test

kubelet

isvc-predictive-sklearn-predictor-cd7c759c9-x5cmn

Started

Started container storage-initializer

kserve-ci-e2e-test

multus

isvc-predictive-sklearn-predictor-cd7c759c9-x5cmn

AddedInterface

Add eth0 [10.133.0.23/23] from ovn-kubernetes

kserve-ci-e2e-test

kubelet

isvc-predictive-sklearn-predictor-cd7c759c9-x5cmn

Pulled

Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:85f85c16414082de64de782f602228652c10c5bace08b518e803a7e911c62e5e" already present on machine

kserve-ci-e2e-test

kubelet

isvc-predictive-sklearn-predictor-cd7c759c9-x5cmn

Created

Created container: storage-initializer

kserve-ci-e2e-test

kubelet

isvc-predictive-sklearn-predictor-cd7c759c9-x5cmn

Pulling

Pulling image "kserve/predictiveserver:latest"

kserve-ci-e2e-test

kubelet

isvc-predictive-sklearn-predictor-cd7c759c9-x5cmn

Started

Started container kserve-container

kserve-ci-e2e-test

kubelet

isvc-predictive-sklearn-predictor-cd7c759c9-x5cmn

Pulled

Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine

kserve-ci-e2e-test | kubelet | isvc-predictive-sklearn-predictor-cd7c759c9-x5cmn | Created | Created container: kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-predictive-sklearn-predictor-cd7c759c9-x5cmn | Started | Started container kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-predictive-sklearn-predictor-cd7c759c9-x5cmn | Created | Created container: kserve-container
kserve-ci-e2e-test | kubelet | isvc-predictive-sklearn-predictor-cd7c759c9-x5cmn | Pulled | Successfully pulled image "kserve/predictiveserver:latest" in 21.045s (21.045s including waiting). Image size: 2324227435 bytes.
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-predictive-sklearn-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API (x3)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-predictive-sklearn-predictor | FailedGetResourceMetric | failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API (x3)
kserve-ci-e2e-test | v1beta1Controllers | isvc-predictive-sklearn | InferenceServiceReady | InferenceService [isvc-predictive-sklearn] is Ready
kserve-ci-e2e-test | v1beta1Controllers | isvc-predictive-sklearn | VirtualServiceCRDNotFound | Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true. (x13)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-predictive-sklearn-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready) (x4)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-predictive-sklearn-predictor | FailedGetResourceMetric | failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready) (x4)
kserve-ci-e2e-test | deployment-controller | isvc-predictive-xgboost-predictor | ScalingReplicaSet | Scaled up replica set isvc-predictive-xgboost-predictor-7ff98fd74d from 0 to 1
kserve-ci-e2e-test | replicaset-controller | isvc-predictive-xgboost-predictor-7ff98fd74d | SuccessfulCreate | Created pod: isvc-predictive-xgboost-predictor-7ff98fd74d-fg9nf
kserve-ci-e2e-test | kubelet | isvc-predictive-sklearn-predictor-cd7c759c9-x5cmn | Killing | Stopping container kserve-container

kserve-ci-e2e-test | kubelet | isvc-predictive-sklearn-predictor-cd7c759c9-x5cmn | Killing | Stopping container kube-rbac-proxy
kserve-ci-e2e-test | v1beta1Controllers | isvc-predictive-xgboost | UpdateFailed | Failed to update status for InferenceService "isvc-predictive-xgboost": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-predictive-xgboost": the object has been modified; please apply your changes to the latest version and try again (x2)
kserve-ci-e2e-test | v1beta1Controllers | isvc-predictive-xgboost | InternalError | fails to update InferenceService status: Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-predictive-xgboost": the object has been modified; please apply your changes to the latest version and try again
kserve-ci-e2e-test | kubelet | isvc-predictive-xgboost-predictor-7ff98fd74d-fg9nf | Pulled | Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:85f85c16414082de64de782f602228652c10c5bace08b518e803a7e911c62e5e" already present on machine
kserve-ci-e2e-test | kubelet | isvc-predictive-xgboost-predictor-7ff98fd74d-fg9nf | Created | Created container: storage-initializer
kserve-ci-e2e-test | kubelet | isvc-predictive-xgboost-predictor-7ff98fd74d-fg9nf | Started | Started container storage-initializer
kserve-ci-e2e-test | multus | isvc-predictive-xgboost-predictor-7ff98fd74d-fg9nf | AddedInterface | Add eth0 [10.134.0.48/23] from ovn-kubernetes
kserve-ci-e2e-test | kubelet | isvc-predictive-sklearn-predictor-cd7c759c9-x5cmn | Unhealthy | Readiness probe failed: Get "https://10.133.0.23:8643/healthz": dial tcp 10.133.0.23:8643: connect: connection refused
kserve-ci-e2e-test | kubelet | isvc-predictive-sklearn-predictor-cd7c759c9-x5cmn | Unhealthy | Readiness probe failed: dial tcp 10.133.0.23:8080: connect: connection refused (x10)
kserve-ci-e2e-test | kubelet | isvc-predictive-xgboost-predictor-7ff98fd74d-fg9nf | Pulling | Pulling image "kserve/predictiveserver:latest"
kserve-ci-e2e-test | kubelet | isvc-predictive-xgboost-predictor-7ff98fd74d-fg9nf | Pulled | Successfully pulled image "kserve/predictiveserver:latest" in 17.911s (17.911s including waiting). Image size: 2324227435 bytes.
kserve-ci-e2e-test | kubelet | isvc-predictive-xgboost-predictor-7ff98fd74d-fg9nf | Started | Started container kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-predictive-xgboost-predictor-7ff98fd74d-fg9nf | Created | Created container: kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-predictive-xgboost-predictor-7ff98fd74d-fg9nf | Started | Started container kserve-container
kserve-ci-e2e-test | kubelet | isvc-predictive-xgboost-predictor-7ff98fd74d-fg9nf | Pulled | Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine
kserve-ci-e2e-test | kubelet | isvc-predictive-xgboost-predictor-7ff98fd74d-fg9nf | Created | Created container: kserve-container
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-predictive-xgboost-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API (x2)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-predictive-xgboost-predictor | FailedGetResourceMetric | failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API (x2)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-predictive-xgboost-predictor | FailedGetResourceMetric | failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready) (x4)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-predictive-xgboost-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready) (x4)
kserve-ci-e2e-test | v1beta1Controllers | isvc-predictive-xgboost | InferenceServiceReady | InferenceService [isvc-predictive-xgboost] is Ready
kserve-ci-e2e-test | v1beta1Controllers | isvc-predictive-xgboost | VirtualServiceCRDNotFound | Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true. (x12)
kserve-ci-e2e-test | v1beta1Controllers | isvc-predictive-lightgbm | UpdateFailed | Failed to update status for InferenceService "isvc-predictive-lightgbm": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-predictive-lightgbm": the object has been modified; please apply your changes to the latest version and try again (x2)
kserve-ci-e2e-test | kubelet | isvc-predictive-lightgbm-predictor-75cb94f9f-rfd5k | Started | Started container storage-initializer
kserve-ci-e2e-test | replicaset-controller | isvc-predictive-lightgbm-predictor-75cb94f9f | SuccessfulCreate | Created pod: isvc-predictive-lightgbm-predictor-75cb94f9f-rfd5k

kserve-ci-e2e-test | deployment-controller | isvc-predictive-lightgbm-predictor | ScalingReplicaSet | Scaled up replica set isvc-predictive-lightgbm-predictor-75cb94f9f from 0 to 1
kserve-ci-e2e-test | kubelet | isvc-predictive-xgboost-predictor-7ff98fd74d-fg9nf | Unhealthy | Readiness probe failed: dial tcp 10.134.0.48:8080: connect: connection refused (x10)
kserve-ci-e2e-test | kubelet | isvc-predictive-xgboost-predictor-7ff98fd74d-fg9nf | Killing | Stopping container kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-predictive-xgboost-predictor-7ff98fd74d-fg9nf | Killing | Stopping container kserve-container
kserve-ci-e2e-test | kubelet | isvc-predictive-lightgbm-predictor-75cb94f9f-rfd5k | Pulled | Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:85f85c16414082de64de782f602228652c10c5bace08b518e803a7e911c62e5e" already present on machine
kserve-ci-e2e-test | kubelet | isvc-predictive-lightgbm-predictor-75cb94f9f-rfd5k | Created | Created container: storage-initializer
kserve-ci-e2e-test | v1beta1Controllers | isvc-predictive-lightgbm | InternalError | fails to update InferenceService status: Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-predictive-lightgbm": the object has been modified; please apply your changes to the latest version and try again
kserve-ci-e2e-test | multus | isvc-predictive-lightgbm-predictor-75cb94f9f-rfd5k | AddedInterface | Add eth0 [10.133.0.24/23] from ovn-kubernetes
kserve-ci-e2e-test | kubelet | isvc-predictive-lightgbm-predictor-75cb94f9f-rfd5k | Created | Created container: kserve-container
kserve-ci-e2e-test | kubelet | isvc-predictive-lightgbm-predictor-75cb94f9f-rfd5k | Pulled | Container image "kserve/predictiveserver:latest" already present on machine
kserve-ci-e2e-test | kubelet | isvc-predictive-lightgbm-predictor-75cb94f9f-rfd5k | Started | Started container kserve-container
kserve-ci-e2e-test | kubelet | isvc-predictive-lightgbm-predictor-75cb94f9f-rfd5k | Pulled | Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine
kserve-ci-e2e-test | kubelet | isvc-predictive-lightgbm-predictor-75cb94f9f-rfd5k | Created | Created container: kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-predictive-lightgbm-predictor-75cb94f9f-rfd5k | Started | Started container kube-rbac-proxy
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-predictive-lightgbm-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API (x2)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-predictive-lightgbm-predictor | FailedGetResourceMetric | failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API (x2)
kserve-ci-e2e-test | v1beta1Controllers | isvc-predictive-lightgbm | VirtualServiceCRDNotFound | Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true. (x12)
kserve-ci-e2e-test | v1beta1Controllers | isvc-predictive-lightgbm | InferenceServiceReady | InferenceService [isvc-predictive-lightgbm] is Ready
kserve-ci-e2e-test | kubelet | isvc-predictive-lightgbm-predictor-75cb94f9f-rfd5k | Killing | Stopping container kserve-container

kserve-ci-e2e-test | deployment-controller | isvc-predictive-sklearn-v2-predictor | ScalingReplicaSet | Scaled up replica set isvc-predictive-sklearn-v2-predictor-b5d4f6b79 from 0 to 1
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-predictive-lightgbm-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready) (x4)
kserve-ci-e2e-test | kubelet | isvc-predictive-sklearn-v2-predictor-b5d4f6b79-bh7xd | FailedMount | MountVolume.SetUp failed for volume "proxy-tls" : secret "isvc-predictive-sklearn-v2-predictor-serving-cert" not found
kserve-ci-e2e-test | v1beta1Controllers | isvc-predictive-sklearn-v2 | UpdateFailed | Failed to update status for InferenceService "isvc-predictive-sklearn-v2": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-predictive-sklearn-v2": the object has been modified; please apply your changes to the latest version and try again
kserve-ci-e2e-test | replicaset-controller | isvc-predictive-sklearn-v2-predictor-b5d4f6b79 | SuccessfulCreate | Created pod: isvc-predictive-sklearn-v2-predictor-b5d4f6b79-bh7xd
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-predictive-lightgbm-predictor | FailedGetResourceMetric | failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready) (x4)
kserve-ci-e2e-test | kubelet | isvc-predictive-lightgbm-predictor-75cb94f9f-rfd5k | Killing | Stopping container kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-predictive-sklearn-v2-predictor-b5d4f6b79-bh7xd | Started | Started container storage-initializer
kserve-ci-e2e-test | kubelet | isvc-predictive-sklearn-v2-predictor-b5d4f6b79-bh7xd | Created | Created container: storage-initializer
kserve-ci-e2e-test | multus | isvc-predictive-sklearn-v2-predictor-b5d4f6b79-bh7xd | AddedInterface | Add eth0 [10.134.0.49/23] from ovn-kubernetes
kserve-ci-e2e-test | kubelet | isvc-predictive-sklearn-v2-predictor-b5d4f6b79-bh7xd | Pulled | Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:85f85c16414082de64de782f602228652c10c5bace08b518e803a7e911c62e5e" already present on machine
kserve-ci-e2e-test | kubelet | isvc-predictive-lightgbm-predictor-75cb94f9f-rfd5k | Unhealthy | Readiness probe failed: Get "https://10.133.0.24:8643/healthz": dial tcp 10.133.0.24:8643: connect: connection refused
kserve-ci-e2e-test | kubelet | isvc-predictive-lightgbm-predictor-75cb94f9f-rfd5k | Unhealthy | Readiness probe failed: dial tcp 10.133.0.24:8080: connect: connection refused (x10)
kserve-ci-e2e-test | kubelet | isvc-predictive-sklearn-v2-predictor-b5d4f6b79-bh7xd | Pulled | Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine
kserve-ci-e2e-test | kubelet | isvc-predictive-sklearn-v2-predictor-b5d4f6b79-bh7xd | Created | Created container: kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-predictive-sklearn-v2-predictor-b5d4f6b79-bh7xd | Started | Started container kserve-container
kserve-ci-e2e-test | kubelet | isvc-predictive-sklearn-v2-predictor-b5d4f6b79-bh7xd | Pulled | Container image "kserve/predictiveserver:latest" already present on machine
kserve-ci-e2e-test | kubelet | isvc-predictive-sklearn-v2-predictor-b5d4f6b79-bh7xd | Created | Created container: kserve-container
kserve-ci-e2e-test | kubelet | isvc-predictive-sklearn-v2-predictor-b5d4f6b79-bh7xd | Started | Started container kube-rbac-proxy
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-predictive-sklearn-v2-predictor | FailedGetResourceMetric | failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-predictive-sklearn-v2-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
kserve-ci-e2e-test | v1beta1Controllers | isvc-predictive-sklearn-v2 | VirtualServiceCRDNotFound | Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true. (x13)
kserve-ci-e2e-test | v1beta1Controllers | isvc-predictive-sklearn-v2 | InferenceServiceReady | InferenceService [isvc-predictive-sklearn-v2] is Ready
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-predictive-sklearn-v2-predictor | FailedGetResourceMetric | failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready) (x4)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-predictive-sklearn-v2-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready) (x4)
kserve-ci-e2e-test | kubelet | isvc-predictive-sklearn-v2-predictor-b5d4f6b79-bh7xd | Killing | Stopping container kube-rbac-proxy
kserve-ci-e2e-test | v1beta1Controllers | isvc-predictive-xgboost-v2 | UpdateFailed | Failed to update status for InferenceService "isvc-predictive-xgboost-v2": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-predictive-xgboost-v2": the object has been modified; please apply your changes to the latest version and try again (x2)
kserve-ci-e2e-test | kubelet | isvc-predictive-sklearn-v2-predictor-b5d4f6b79-bh7xd | Killing | Stopping container kserve-container

kserve-ci-e2e-test | replicaset-controller | isvc-predictive-xgboost-v2-predictor-6577c65fd8 | SuccessfulCreate | Created pod: isvc-predictive-xgboost-v2-predictor-6577c65fd8-hwdzp
kserve-ci-e2e-test | v1beta1Controllers | isvc-predictive-xgboost-v2 | InternalError | fails to update InferenceService status: Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-predictive-xgboost-v2": the object has been modified; please apply your changes to the latest version and try again
kserve-ci-e2e-test | multus | isvc-predictive-xgboost-v2-predictor-6577c65fd8-hwdzp | AddedInterface | Add eth0 [10.133.0.25/23] from ovn-kubernetes
kserve-ci-e2e-test | kubelet | isvc-predictive-xgboost-v2-predictor-6577c65fd8-hwdzp | Pulled | Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:85f85c16414082de64de782f602228652c10c5bace08b518e803a7e911c62e5e" already present on machine
kserve-ci-e2e-test | deployment-controller | isvc-predictive-xgboost-v2-predictor | ScalingReplicaSet | Scaled up replica set isvc-predictive-xgboost-v2-predictor-6577c65fd8 from 0 to 1
kserve-ci-e2e-test | kubelet | isvc-predictive-xgboost-v2-predictor-6577c65fd8-hwdzp | Created | Created container: storage-initializer
kserve-ci-e2e-test | kubelet | isvc-predictive-xgboost-v2-predictor-6577c65fd8-hwdzp | Started | Started container storage-initializer
kserve-ci-e2e-test | kubelet | isvc-predictive-sklearn-v2-predictor-b5d4f6b79-bh7xd | Unhealthy | Readiness probe failed: Get "http://10.134.0.49:8080/v2/models/isvc-predictive-sklearn-v2/ready": dial tcp 10.134.0.49:8080: connect: connection refused (x4)
kserve-ci-e2e-test | kubelet | isvc-predictive-sklearn-v2-predictor-b5d4f6b79-bh7xd | Unhealthy | Readiness probe failed: Get "https://10.134.0.49:8643/healthz": dial tcp 10.134.0.49:8643: connect: connection refused
kserve-ci-e2e-test | kubelet | isvc-predictive-xgboost-v2-predictor-6577c65fd8-hwdzp | Created | Created container: kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-predictive-xgboost-v2-predictor-6577c65fd8-hwdzp | Pulled | Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine
kserve-ci-e2e-test | kubelet | isvc-predictive-xgboost-v2-predictor-6577c65fd8-hwdzp | Started | Started container kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-predictive-xgboost-v2-predictor-6577c65fd8-hwdzp | Started | Started container kserve-container
kserve-ci-e2e-test | kubelet | isvc-predictive-xgboost-v2-predictor-6577c65fd8-hwdzp | Created | Created container: kserve-container
kserve-ci-e2e-test | kubelet | isvc-predictive-xgboost-v2-predictor-6577c65fd8-hwdzp | Pulled | Container image "kserve/predictiveserver:latest" already present on machine
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-predictive-xgboost-v2-predictor | FailedGetResourceMetric | failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-predictive-xgboost-v2-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
kserve-ci-e2e-test | v1beta1Controllers | isvc-predictive-xgboost-v2 | VirtualServiceCRDNotFound | Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true. (x12)
kserve-ci-e2e-test | v1beta1Controllers | isvc-predictive-xgboost-v2 | InferenceServiceReady | InferenceService [isvc-predictive-xgboost-v2] is Ready
kserve-ci-e2e-test | kubelet | isvc-predictive-xgboost-v2-predictor-6577c65fd8-hwdzp | Killing | Stopping container kube-rbac-proxy
kserve-ci-e2e-test | v1beta1Controllers | isvc-predictive-lightgbm-v2 | InternalError | fails to update InferenceService status: Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-predictive-lightgbm-v2": the object has been modified; please apply your changes to the latest version and try again
kserve-ci-e2e-test | kubelet | isvc-predictive-xgboost-v2-predictor-6577c65fd8-hwdzp | Killing | Stopping container kserve-container
kserve-ci-e2e-test | replicaset-controller | isvc-predictive-lightgbm-v2-predictor-865b4598f7 | SuccessfulCreate | Created pod: isvc-predictive-lightgbm-v2-predictor-865b4598f7-hqw66
kserve-ci-e2e-test | deployment-controller | isvc-predictive-lightgbm-v2-predictor | ScalingReplicaSet | Scaled up replica set isvc-predictive-lightgbm-v2-predictor-865b4598f7 from 0 to 1
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-predictive-xgboost-v2-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready) (x5)
kserve-ci-e2e-test | v1beta1Controllers | isvc-predictive-lightgbm-v2 | UpdateFailed | Failed to update status for InferenceService "isvc-predictive-lightgbm-v2": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-predictive-lightgbm-v2": the object has been modified; please apply your changes to the latest version and try again (x2)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-predictive-xgboost-v2-predictor | FailedGetResourceMetric | failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready) (x5)
kserve-ci-e2e-test | kubelet | isvc-predictive-lightgbm-v2-predictor-865b4598f7-hqw66 | Created | Created container: storage-initializer
kserve-ci-e2e-test | kubelet | isvc-predictive-lightgbm-v2-predictor-865b4598f7-hqw66 | FailedMount | MountVolume.SetUp failed for volume "proxy-tls" : secret "isvc-predictive-lightgbm-v2-predictor-serving-cert" not found
kserve-ci-e2e-test | multus | isvc-predictive-lightgbm-v2-predictor-865b4598f7-hqw66 | AddedInterface | Add eth0 [10.133.0.26/23] from ovn-kubernetes
kserve-ci-e2e-test | kubelet | isvc-predictive-lightgbm-v2-predictor-865b4598f7-hqw66 | Pulled | Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:85f85c16414082de64de782f602228652c10c5bace08b518e803a7e911c62e5e" already present on machine
kserve-ci-e2e-test | kubelet | isvc-predictive-lightgbm-v2-predictor-865b4598f7-hqw66 | Started | Started container storage-initializer
kserve-ci-e2e-test | kubelet | isvc-predictive-xgboost-v2-predictor-6577c65fd8-hwdzp | Unhealthy | Readiness probe failed: Get "http://10.133.0.25:8080/v2/models/isvc-predictive-xgboost-v2/ready": dial tcp 10.133.0.25:8080: connect: connection refused (x5)
kserve-ci-e2e-test | kubelet | isvc-predictive-xgboost-v2-predictor-6577c65fd8-hwdzp | Unhealthy | Readiness probe failed: Get "https://10.133.0.25:8643/healthz": dial tcp 10.133.0.25:8643: connect: connection refused
kserve-ci-e2e-test | kubelet | isvc-predictive-lightgbm-v2-predictor-865b4598f7-hqw66 | Started | Started container kserve-container
kserve-ci-e2e-test | kubelet | isvc-predictive-lightgbm-v2-predictor-865b4598f7-hqw66 | Created | Created container: kserve-container
kserve-ci-e2e-test | kubelet | isvc-predictive-lightgbm-v2-predictor-865b4598f7-hqw66 | Pulled | Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine
kserve-ci-e2e-test | kubelet | isvc-predictive-lightgbm-v2-predictor-865b4598f7-hqw66 | Created | Created container: kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-predictive-lightgbm-v2-predictor-865b4598f7-hqw66 | Pulled | Container image "kserve/predictiveserver:latest" already present on machine
kserve-ci-e2e-test | kubelet | isvc-predictive-lightgbm-v2-predictor-865b4598f7-hqw66 | Started | Started container kube-rbac-proxy
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-predictive-lightgbm-v2-predictor | FailedGetResourceMetric | failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API (x3)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-predictive-lightgbm-v2-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API (x3)
kserve-ci-e2e-test | v1beta1Controllers | isvc-predictive-lightgbm-v2 | VirtualServiceCRDNotFound | Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true. (x14)
kserve-ci-e2e-test | v1beta1Controllers | isvc-predictive-lightgbm-v2 | InferenceServiceReady | InferenceService [isvc-predictive-lightgbm-v2] is Ready
kserve-ci-e2e-test | replicaset-controller | isvc-sklearn-scheduler-predictor-58b748b64b | SuccessfulCreate | Created pod: isvc-sklearn-scheduler-predictor-58b748b64b-9m7sx
kserve-ci-e2e-test | kubelet | isvc-predictive-lightgbm-v2-predictor-865b4598f7-hqw66 | Killing | Stopping container kube-rbac-proxy
kserve-ci-e2e-test | deployment-controller | isvc-sklearn-scheduler-predictor | ScalingReplicaSet | Scaled up replica set isvc-sklearn-scheduler-predictor-58b748b64b from 0 to 1
kserve-ci-e2e-test | v1beta1Controllers | isvc-sklearn-scheduler | UpdateFailed | Failed to update status for InferenceService "isvc-sklearn-scheduler": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-sklearn-scheduler": the object has been modified; please apply your changes to the latest version and try again
kserve-ci-e2e-test | v1beta1Controllers | isvc-sklearn-scheduler | InternalError | fails to update InferenceService status: Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-sklearn-scheduler": the object has been modified; please apply your changes to the latest version and try again
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-predictive-lightgbm-v2-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready) (x3)
kserve-ci-e2e-test | kubelet | isvc-predictive-lightgbm-v2-predictor-865b4598f7-hqw66 | Killing | Stopping container kserve-container
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-predictive-lightgbm-v2-predictor | FailedGetResourceMetric | failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready) (x3)
kserve-ci-e2e-test | v1beta1Controllers | isvc-sklearn-scheduler | VirtualServiceCRDNotFound | Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true. (x6)
kserve-ci-e2e-test | v1beta1Controllers | isvc-sklearn | UpdateFailed | Failed to update status for InferenceService "isvc-sklearn": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-sklearn": the object has been modified; please apply your changes to the latest version and try again
kserve-ci-e2e-test | deployment-controller | isvc-sklearn-predictor | ScalingReplicaSet | Scaled up replica set isvc-sklearn-predictor-7b58d995d4 from 0 to 1
kserve-ci-e2e-test | kubelet | isvc-predictive-lightgbm-v2-predictor-865b4598f7-hqw66 | Unhealthy | Readiness probe failed: Get "https://10.133.0.26:8643/healthz": dial tcp 10.133.0.26:8643: connect: connection refused
kserve-ci-e2e-test | kubelet | isvc-sklearn-predictor-7b58d995d4-cs6zh | Started | Started container storage-initializer
kserve-ci-e2e-test | replicaset-controller | isvc-sklearn-predictor-7b58d995d4 | SuccessfulCreate | Created pod: isvc-sklearn-predictor-7b58d995d4-cs6zh
kserve-ci-e2e-test | multus | isvc-sklearn-predictor-7b58d995d4-cs6zh | AddedInterface | Add eth0 [10.134.0.50/23] from ovn-kubernetes
kserve-ci-e2e-test | kubelet | isvc-sklearn-predictor-7b58d995d4-cs6zh | Pulled | Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:85f85c16414082de64de782f602228652c10c5bace08b518e803a7e911c62e5e" already present on machine
kserve-ci-e2e-test | kubelet | isvc-sklearn-predictor-7b58d995d4-cs6zh | Created | Created container: storage-initializer
kserve-ci-e2e-test | kubelet | isvc-predictive-lightgbm-v2-predictor-865b4598f7-hqw66 | Unhealthy | Readiness probe failed: Get "http://10.133.0.26:8080/v2/models/isvc-predictive-lightgbm-v2/ready": dial tcp 10.133.0.26:8080: connect: connection refused (x6)
kserve-ci-e2e-test | kubelet | isvc-sklearn-predictor-7b58d995d4-cs6zh | Created | Created container: kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-sklearn-predictor-7b58d995d4-cs6zh | Pulled | Container image "quay.io/opendatahub/sklearn-serving-runtime:odh-pr-1447" already present on machine
kserve-ci-e2e-test | kubelet | isvc-sklearn-predictor-7b58d995d4-cs6zh | Created | Created container: kserve-container
kserve-ci-e2e-test | kubelet | isvc-sklearn-predictor-7b58d995d4-cs6zh | Started | Started container kserve-container
kserve-ci-e2e-test | kubelet | isvc-sklearn-predictor-7b58d995d4-cs6zh | Pulled | Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine
kserve-ci-e2e-test | kubelet | isvc-sklearn-predictor-7b58d995d4-cs6zh | Started | Started container kube-rbac-proxy
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-sklearn-predictor | FailedGetResourceMetric | failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API (x2)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-sklearn-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API (x2)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-sklearn-predictor | FailedGetResourceMetric | failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready) (x2)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-sklearn-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready) (x2)
kserve-ci-e2e-test | v1beta1Controllers | isvc-sklearn | InferenceServiceReady | InferenceService [isvc-sklearn] is Ready
(x11)

kserve-ci-e2e-test

v1beta1Controllers

isvc-sklearn

VirtualServiceCRDNotFound

Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.

kserve-ci-e2e-test

kubelet

isvc-sklearn-predictor-7b58d995d4-cs6zh

Killing

Stopping container kube-rbac-proxy

kserve-ci-e2e-test

replicaset-controller

sklearn-v2-mlserver-predictor-65d8664766

SuccessfulCreate

Created pod: sklearn-v2-mlserver-predictor-65d8664766-prnmq

kserve-ci-e2e-test

kubelet

sklearn-v2-mlserver-predictor-65d8664766-prnmq

FailedMount

MountVolume.SetUp failed for volume "proxy-tls" : secret "sklearn-v2-mlserver-predictor-serving-cert" not found

kserve-ci-e2e-test

kubelet

isvc-sklearn-predictor-7b58d995d4-cs6zh

Killing

Stopping container kserve-container

All events below are in namespace kserve-ci-e2e-test.

| Count | Component | RelatedObject | Reason | Message |
| --- | --- | --- | --- | --- |
| | v1beta1Controllers | sklearn-v2-mlserver | UpdateFailed | Failed to update status for InferenceService "sklearn-v2-mlserver": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "sklearn-v2-mlserver": the object has been modified; please apply your changes to the latest version and try again |
| | deployment-controller | sklearn-v2-mlserver-predictor | ScalingReplicaSet | Scaled up replica set sklearn-v2-mlserver-predictor-65d8664766 from 0 to 1 |
| | kubelet | sklearn-v2-mlserver-predictor-65d8664766-prnmq | Created | Created container: storage-initializer |
| | kubelet | sklearn-v2-mlserver-predictor-65d8664766-prnmq | Pulled | Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:85f85c16414082de64de782f602228652c10c5bace08b518e803a7e911c62e5e" already present on machine |
| | multus | sklearn-v2-mlserver-predictor-65d8664766-prnmq | AddedInterface | Add eth0 [10.134.0.51/23] from ovn-kubernetes |
| | kubelet | sklearn-v2-mlserver-predictor-65d8664766-prnmq | Started | Started container storage-initializer |
| x8 | kubelet | isvc-sklearn-predictor-7b58d995d4-cs6zh | Unhealthy | Readiness probe failed: dial tcp 10.134.0.50:8080: connect: connection refused |
| | kubelet | isvc-sklearn-predictor-7b58d995d4-cs6zh | Unhealthy | Readiness probe failed: Get "https://10.134.0.50:8643/healthz": dial tcp 10.134.0.50:8643: connect: connection refused |
| | kubelet | sklearn-v2-mlserver-predictor-65d8664766-prnmq | Started | Started container kube-rbac-proxy |
| | kubelet | sklearn-v2-mlserver-predictor-65d8664766-prnmq | Created | Created container: kube-rbac-proxy |
| | kubelet | sklearn-v2-mlserver-predictor-65d8664766-prnmq | Pulled | Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine |
| | kubelet | sklearn-v2-mlserver-predictor-65d8664766-prnmq | Started | Started container kserve-container |
| | kubelet | sklearn-v2-mlserver-predictor-65d8664766-prnmq | Created | Created container: kserve-container |
| | kubelet | sklearn-v2-mlserver-predictor-65d8664766-prnmq | Pulled | Container image "docker.io/seldonio/mlserver:1.7.1" already present on machine |
| | horizontal-pod-autoscaler | sklearn-v2-mlserver-predictor | FailedGetResourceMetric | failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API |
| | horizontal-pod-autoscaler | sklearn-v2-mlserver-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API |
| x12 | v1beta1Controllers | sklearn-v2-mlserver | VirtualServiceCRDNotFound | Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true. |
| | v1beta1Controllers | sklearn-v2-mlserver | InferenceServiceReady | InferenceService [sklearn-v2-mlserver] is Ready |
| x2 | horizontal-pod-autoscaler | sklearn-v2-mlserver-predictor | FailedGetResourceMetric | failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready) |
| | horizontal-pod-autoscaler | sklearn-v2-mlserver-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready) |
| | v1beta1Controllers | isvc-sklearn-runtime | InternalError | fails to update InferenceService status: Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-sklearn-runtime": the object has been modified; please apply your changes to the latest version and try again |
| | deployment-controller | isvc-sklearn-runtime-predictor | ScalingReplicaSet | Scaled up replica set isvc-sklearn-runtime-predictor-5747956474 from 0 to 1 |
| | kubelet | sklearn-v2-mlserver-predictor-65d8664766-prnmq | Killing | Stopping container kube-rbac-proxy |
| | multus | isvc-sklearn-runtime-predictor-5747956474-9w8pq | AddedInterface | Add eth0 [10.134.0.52/23] from ovn-kubernetes |
| | kubelet | isvc-sklearn-runtime-predictor-5747956474-9w8pq | Pulled | Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:85f85c16414082de64de782f602228652c10c5bace08b518e803a7e911c62e5e" already present on machine |
| | kubelet | isvc-sklearn-runtime-predictor-5747956474-9w8pq | Created | Created container: storage-initializer |
| | replicaset-controller | isvc-sklearn-runtime-predictor-5747956474 | SuccessfulCreate | Created pod: isvc-sklearn-runtime-predictor-5747956474-9w8pq |
| | kubelet | isvc-sklearn-runtime-predictor-5747956474-9w8pq | Started | Started container storage-initializer |
| x2 | v1beta1Controllers | isvc-sklearn-runtime | UpdateFailed | Failed to update status for InferenceService "isvc-sklearn-runtime": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-sklearn-runtime": the object has been modified; please apply your changes to the latest version and try again |
| | kubelet | sklearn-v2-mlserver-predictor-65d8664766-prnmq | Killing | Stopping container kserve-container |
| | kubelet | sklearn-v2-mlserver-predictor-65d8664766-prnmq | Unhealthy | Readiness probe failed: Get "https://10.134.0.51:8643/healthz": dial tcp 10.134.0.51:8643: connect: connection refused |
| | kubelet | sklearn-v2-mlserver-predictor-65d8664766-prnmq | Unhealthy | Readiness probe failed: Get "http://10.134.0.51:8080/v2/models/sklearn-v2-mlserver/ready": dial tcp 10.134.0.51:8080: connect: connection refused |
| | kubelet | isvc-sklearn-runtime-predictor-5747956474-9w8pq | Created | Created container: kube-rbac-proxy |
| | kubelet | isvc-sklearn-runtime-predictor-5747956474-9w8pq | Pulled | Container image "quay.io/opendatahub/sklearn-serving-runtime:odh-pr-1447" already present on machine |
| | kubelet | isvc-sklearn-runtime-predictor-5747956474-9w8pq | Started | Started container kube-rbac-proxy |
| | kubelet | isvc-sklearn-runtime-predictor-5747956474-9w8pq | Pulled | Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine |
| | kubelet | isvc-sklearn-runtime-predictor-5747956474-9w8pq | Created | Created container: kserve-container |
| | kubelet | isvc-sklearn-runtime-predictor-5747956474-9w8pq | Started | Started container kserve-container |
| x3 | kubelet | isvc-sklearn-runtime-predictor-5747956474-9w8pq | Unhealthy | Readiness probe failed: dial tcp 10.134.0.52:8080: connect: connection refused |
| | horizontal-pod-autoscaler | isvc-sklearn-runtime-predictor | FailedGetResourceMetric | failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API |
| | horizontal-pod-autoscaler | isvc-sklearn-runtime-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API |
| x12 | v1beta1Controllers | isvc-sklearn-runtime | VirtualServiceCRDNotFound | Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true. |
| | v1beta1Controllers | isvc-sklearn-runtime | InferenceServiceReady | InferenceService [isvc-sklearn-runtime] is Ready |
| | horizontal-pod-autoscaler | isvc-sklearn-runtime-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready) |
| | horizontal-pod-autoscaler | isvc-sklearn-runtime-predictor | FailedGetResourceMetric | failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready) |
| | kubelet | isvc-sklearn-runtime-predictor-5747956474-9w8pq | Killing | Stopping container kserve-container |
| | deployment-controller | isvc-sklearn-v2-runtime-predictor | ScalingReplicaSet | Scaled up replica set isvc-sklearn-v2-runtime-predictor-6d84c876f4 from 0 to 1 |
| | v1beta1Controllers | isvc-sklearn-v2-runtime | InternalError | fails to update InferenceService status: Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-sklearn-v2-runtime": the object has been modified; please apply your changes to the latest version and try again |
| x2 | v1beta1Controllers | isvc-sklearn-v2-runtime | UpdateFailed | Failed to update status for InferenceService "isvc-sklearn-v2-runtime": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-sklearn-v2-runtime": the object has been modified; please apply your changes to the latest version and try again |
| | replicaset-controller | isvc-sklearn-v2-runtime-predictor-6d84c876f4 | SuccessfulCreate | Created pod: isvc-sklearn-v2-runtime-predictor-6d84c876f4-8rd8p |
| | kubelet | isvc-sklearn-runtime-predictor-5747956474-9w8pq | Killing | Stopping container kube-rbac-proxy |
| | kubelet | isvc-sklearn-v2-runtime-predictor-6d84c876f4-8rd8p | Started | Started container storage-initializer |
| | multus | isvc-sklearn-v2-runtime-predictor-6d84c876f4-8rd8p | AddedInterface | Add eth0 [10.134.0.53/23] from ovn-kubernetes |
| | kubelet | isvc-sklearn-v2-runtime-predictor-6d84c876f4-8rd8p | Pulled | Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:85f85c16414082de64de782f602228652c10c5bace08b518e803a7e911c62e5e" already present on machine |
| | kubelet | isvc-sklearn-v2-runtime-predictor-6d84c876f4-8rd8p | Created | Created container: storage-initializer |
| | kubelet | isvc-sklearn-v2-runtime-predictor-6d84c876f4-8rd8p | Created | Created container: kube-rbac-proxy |
| | kubelet | isvc-sklearn-v2-runtime-predictor-6d84c876f4-8rd8p | Pulled | Container image "docker.io/seldonio/mlserver:1.7.1" already present on machine |
| | kubelet | isvc-sklearn-v2-runtime-predictor-6d84c876f4-8rd8p | Created | Created container: kserve-container |
| | kubelet | isvc-sklearn-v2-runtime-predictor-6d84c876f4-8rd8p | Started | Started container kserve-container |
| | kubelet | isvc-sklearn-v2-runtime-predictor-6d84c876f4-8rd8p | Pulled | Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine |
| | kubelet | isvc-sklearn-v2-runtime-predictor-6d84c876f4-8rd8p | Started | Started container kube-rbac-proxy |
| x2 | horizontal-pod-autoscaler | isvc-sklearn-v2-runtime-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API |
| x2 | horizontal-pod-autoscaler | isvc-sklearn-v2-runtime-predictor | FailedGetResourceMetric | failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API |
| x14 | v1beta1Controllers | isvc-sklearn-v2-runtime | VirtualServiceCRDNotFound | Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true. |
| | v1beta1Controllers | isvc-sklearn-v2-runtime | InferenceServiceReady | InferenceService [isvc-sklearn-v2-runtime] is Ready |
| | horizontal-pod-autoscaler | isvc-sklearn-v2-runtime-predictor | FailedGetResourceMetric | failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready) |
| | horizontal-pod-autoscaler | isvc-sklearn-v2-runtime-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready) |
| | kubelet | isvc-sklearn-v2-runtime-predictor-6d84c876f4-8rd8p | Killing | Stopping container kube-rbac-proxy |
| | replicaset-controller | isvc-sklearn-v2-predictor-f9cd8f646 | SuccessfulCreate | Created pod: isvc-sklearn-v2-predictor-f9cd8f646-pq5t5 |
| | deployment-controller | isvc-sklearn-v2-predictor | ScalingReplicaSet | Scaled up replica set isvc-sklearn-v2-predictor-f9cd8f646 from 0 to 1 |
| | kubelet | isvc-sklearn-v2-runtime-predictor-6d84c876f4-8rd8p | Killing | Stopping container kserve-container |
| | kubelet | isvc-sklearn-v2-predictor-f9cd8f646-pq5t5 | FailedMount | MountVolume.SetUp failed for volume "proxy-tls" : secret "isvc-sklearn-v2-predictor-serving-cert" not found |
| | v1beta1Controllers | isvc-sklearn-v2 | UpdateFailed | Failed to update status for InferenceService "isvc-sklearn-v2": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-sklearn-v2": the object has been modified; please apply your changes to the latest version and try again |
| | kubelet | isvc-sklearn-v2-predictor-f9cd8f646-pq5t5 | Created | Created container: storage-initializer |
| | kubelet | isvc-sklearn-v2-predictor-f9cd8f646-pq5t5 | Started | Started container storage-initializer |
| | kubelet | isvc-sklearn-v2-predictor-f9cd8f646-pq5t5 | Pulled | Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:85f85c16414082de64de782f602228652c10c5bace08b518e803a7e911c62e5e" already present on machine |
| | multus | isvc-sklearn-v2-predictor-f9cd8f646-pq5t5 | AddedInterface | Add eth0 [10.134.0.54/23] from ovn-kubernetes |
| | kubelet | isvc-sklearn-v2-runtime-predictor-6d84c876f4-8rd8p | Unhealthy | Readiness probe failed: Get "https://10.134.0.53:8643/healthz": dial tcp 10.134.0.53:8643: connect: connection refused |
| | kubelet | isvc-sklearn-v2-runtime-predictor-6d84c876f4-8rd8p | Unhealthy | Readiness probe failed: Get "http://10.134.0.53:8080/v2/models/isvc-sklearn-v2-runtime/ready": dial tcp 10.134.0.53:8080: connect: connection refused |
| | kubelet | isvc-sklearn-v2-predictor-f9cd8f646-pq5t5 | Pulled | Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine |
| | kubelet | isvc-sklearn-v2-predictor-f9cd8f646-pq5t5 | Pulled | Container image "quay.io/opendatahub/sklearn-serving-runtime:odh-pr-1447" already present on machine |
| | kubelet | isvc-sklearn-v2-predictor-f9cd8f646-pq5t5 | Created | Created container: kserve-container |
| | kubelet | isvc-sklearn-v2-predictor-f9cd8f646-pq5t5 | Started | Started container kserve-container |
| | kubelet | isvc-sklearn-v2-predictor-f9cd8f646-pq5t5 | Created | Created container: kube-rbac-proxy |
| | kubelet | isvc-sklearn-v2-predictor-f9cd8f646-pq5t5 | Started | Started container kube-rbac-proxy |
| | horizontal-pod-autoscaler | isvc-sklearn-v2-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API |
| | horizontal-pod-autoscaler | isvc-sklearn-v2-predictor | FailedGetResourceMetric | failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API |
| x3 | horizontal-pod-autoscaler | isvc-sklearn-v2-predictor | FailedGetResourceMetric | failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready) |
| x3 | horizontal-pod-autoscaler | isvc-sklearn-v2-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready) |
| x12 | v1beta1Controllers | isvc-sklearn-v2 | VirtualServiceCRDNotFound | Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true. |
| | v1beta1Controllers | isvc-sklearn-v2 | InferenceServiceReady | InferenceService [isvc-sklearn-v2] is Ready |
| | replicaset-controller | isvc-sklearn-v2-mixed-predictor-566cfc9859 | SuccessfulCreate | Created pod: isvc-sklearn-v2-mixed-predictor-566cfc9859-kjnsv |
| | kubelet | isvc-sklearn-v2-predictor-f9cd8f646-pq5t5 | Killing | Stopping container kserve-container |
| | v1beta1Controllers | isvc-sklearn-v2-mixed | UpdateFailed | Failed to update status for InferenceService "isvc-sklearn-v2-mixed": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-sklearn-v2-mixed": the object has been modified; please apply your changes to the latest version and try again |
| | deployment-controller | isvc-sklearn-v2-mixed-predictor | ScalingReplicaSet | Scaled up replica set isvc-sklearn-v2-mixed-predictor-566cfc9859 from 0 to 1 |
| | kubelet | isvc-sklearn-v2-predictor-f9cd8f646-pq5t5 | Killing | Stopping container kube-rbac-proxy |
| | kubelet | isvc-sklearn-v2-mixed-predictor-566cfc9859-kjnsv | Started | Started container storage-initializer |
| | multus | isvc-sklearn-v2-mixed-predictor-566cfc9859-kjnsv | AddedInterface | Add eth0 [10.134.0.55/23] from ovn-kubernetes |
| | kubelet | isvc-sklearn-v2-mixed-predictor-566cfc9859-kjnsv | Pulled | Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:85f85c16414082de64de782f602228652c10c5bace08b518e803a7e911c62e5e" already present on machine |
| | kubelet | isvc-sklearn-v2-mixed-predictor-566cfc9859-kjnsv | Created | Created container: storage-initializer |
| x8 | kubelet | isvc-sklearn-v2-predictor-f9cd8f646-pq5t5 | Unhealthy | Readiness probe failed: dial tcp 10.134.0.54:8080: connect: connection refused |
| | kubelet | isvc-sklearn-v2-predictor-f9cd8f646-pq5t5 | Unhealthy | Readiness probe failed: Get "https://10.134.0.54:8643/healthz": dial tcp 10.134.0.54:8643: connect: connection refused |
| | kubelet | isvc-sklearn-v2-mixed-predictor-566cfc9859-kjnsv | Created | Created container: kube-rbac-proxy |
| | kubelet | isvc-sklearn-v2-mixed-predictor-566cfc9859-kjnsv | Created | Created container: kserve-container |
| | kubelet | isvc-sklearn-v2-mixed-predictor-566cfc9859-kjnsv | Pulled | Container image "quay.io/opendatahub/sklearn-serving-runtime:odh-pr-1447" already present on machine |
| | kubelet | isvc-sklearn-v2-mixed-predictor-566cfc9859-kjnsv | Started | Started container kserve-container |
| | kubelet | isvc-sklearn-v2-mixed-predictor-566cfc9859-kjnsv | Pulled | Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine |
| | kubelet | isvc-sklearn-v2-mixed-predictor-566cfc9859-kjnsv | Started | Started container kube-rbac-proxy |
| x3 | horizontal-pod-autoscaler | isvc-sklearn-v2-mixed-predictor | FailedGetResourceMetric | failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API |
| x3 | horizontal-pod-autoscaler | isvc-sklearn-v2-mixed-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API |
| x13 | v1beta1Controllers | isvc-sklearn-v2-mixed | VirtualServiceCRDNotFound | Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true. |
| | v1beta1Controllers | isvc-sklearn-v2-mixed | InferenceServiceReady | InferenceService [isvc-sklearn-v2-mixed] is Ready |
| x2 | horizontal-pod-autoscaler | isvc-sklearn-v2-mixed-predictor | FailedGetResourceMetric | failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready) |
| x2 | horizontal-pod-autoscaler | isvc-sklearn-v2-mixed-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready) |
| | kubelet | isvc-sklearn-v2-mixed-predictor-566cfc9859-kjnsv | Killing | Stopping container kube-rbac-proxy |
| | kubelet | isvc-tensorflow-predictor-6756f669d7-69whm | Started | Started container storage-initializer |
| | replicaset-controller | isvc-tensorflow-predictor-6756f669d7 | SuccessfulCreate | Created pod: isvc-tensorflow-predictor-6756f669d7-69whm |
| | multus | isvc-tensorflow-predictor-6756f669d7-69whm | AddedInterface | Add eth0 [10.133.0.27/23] from ovn-kubernetes |
| | kubelet | isvc-tensorflow-predictor-6756f669d7-69whm | Pulled | Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:85f85c16414082de64de782f602228652c10c5bace08b518e803a7e911c62e5e" already present on machine |
| | kubelet | isvc-tensorflow-predictor-6756f669d7-69whm | Created | Created container: storage-initializer |
| | deployment-controller | isvc-tensorflow-predictor | ScalingReplicaSet | Scaled up replica set isvc-tensorflow-predictor-6756f669d7 from 0 to 1 |
| | v1beta1Controllers | isvc-tensorflow | UpdateFailed | Failed to update status for InferenceService "isvc-tensorflow": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-tensorflow": the object has been modified; please apply your changes to the latest version and try again |
| | kubelet | isvc-sklearn-v2-mixed-predictor-566cfc9859-kjnsv | Killing | Stopping container kserve-container |
| x9 | kubelet | isvc-sklearn-v2-mixed-predictor-566cfc9859-kjnsv | Unhealthy | Readiness probe failed: dial tcp 10.134.0.55:8080: connect: connection refused |
| | kubelet | isvc-sklearn-v2-mixed-predictor-566cfc9859-kjnsv | Unhealthy | Readiness probe failed: Get "https://10.134.0.55:8643/healthz": dial tcp 10.134.0.55:8643: connect: connection refused |
| | kubelet | isvc-tensorflow-predictor-6756f669d7-69whm | Pulling | Pulling image "tensorflow/serving:2.6.2" |
| | kubelet | isvc-tensorflow-predictor-6756f669d7-69whm | Pulled | Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine |
| | kubelet | isvc-tensorflow-predictor-6756f669d7-69whm | Started | Started container kserve-container |
| | kubelet | isvc-tensorflow-predictor-6756f669d7-69whm | Created | Created container: kserve-container |
| | kubelet | isvc-tensorflow-predictor-6756f669d7-69whm | Pulled | Successfully pulled image "tensorflow/serving:2.6.2" in 3.832s (3.832s including waiting). Image size: 425873876 bytes. |
| | kubelet | isvc-tensorflow-predictor-6756f669d7-69whm | Created | Created container: kube-rbac-proxy |
| | kubelet | isvc-tensorflow-predictor-6756f669d7-69whm | Started | Started container kube-rbac-proxy |
| | horizontal-pod-autoscaler | isvc-tensorflow-predictor | FailedGetResourceMetric | failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API |
| | horizontal-pod-autoscaler | isvc-tensorflow-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API |
| x3 | kubelet | isvc-tensorflow-predictor-6756f669d7-69whm | Unhealthy | Readiness probe failed: dial tcp 10.133.0.27:8080: connect: connection refused |
| | v1beta1Controllers | isvc-tensorflow | InferenceServiceReady | InferenceService [isvc-tensorflow] is Ready |
| x12 | v1beta1Controllers | isvc-tensorflow | VirtualServiceCRDNotFound | Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true. |
| | horizontal-pod-autoscaler | isvc-tensorflow-predictor | FailedGetResourceMetric | failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready) |
| | horizontal-pod-autoscaler | isvc-tensorflow-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready) |
| | kubelet | isvc-tensorflow-predictor-6756f669d7-69whm | Killing | Stopping container kube-rbac-proxy |
| | kubelet | isvc-tensorflow-runtime-predictor-8699d78cf-mgt2d | Started | Started container storage-initializer |
| | deployment-controller | isvc-tensorflow-runtime-predictor | ScalingReplicaSet | Scaled up replica set isvc-tensorflow-runtime-predictor-8699d78cf from 0 to 1 |
| | replicaset-controller | isvc-tensorflow-runtime-predictor-8699d78cf | SuccessfulCreate | Created pod: isvc-tensorflow-runtime-predictor-8699d78cf-mgt2d |
| | kubelet | isvc-tensorflow-runtime-predictor-8699d78cf-mgt2d | Created | Created container: storage-initializer |
| | kubelet | isvc-tensorflow-runtime-predictor-8699d78cf-mgt2d | Pulled | Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:85f85c16414082de64de782f602228652c10c5bace08b518e803a7e911c62e5e" already present on machine |
| | v1beta1Controllers | isvc-tensorflow-runtime | UpdateFailed | Failed to update status for InferenceService "isvc-tensorflow-runtime": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-tensorflow-runtime": the object has been modified; please apply your changes to the latest version and try again |
| | multus | isvc-tensorflow-runtime-predictor-8699d78cf-mgt2d | AddedInterface | Add eth0 [10.133.0.28/23] from ovn-kubernetes |
| | kubelet | isvc-tensorflow-predictor-6756f669d7-69whm | Killing | Stopping container kserve-container |
| | kubelet | isvc-tensorflow-runtime-predictor-8699d78cf-mgt2d | Created | Created container: kserve-container |
| | kubelet | isvc-tensorflow-runtime-predictor-8699d78cf-mgt2d | Pulled | Container image "tensorflow/serving:2.6.2" already present on machine |
| | kubelet | isvc-tensorflow-runtime-predictor-8699d78cf-mgt2d | Created | Created container: kube-rbac-proxy |
| | kubelet | isvc-tensorflow-runtime-predictor-8699d78cf-mgt2d | Started | Started container kserve-container |
| | kubelet | isvc-tensorflow-runtime-predictor-8699d78cf-mgt2d | Started | Started container kube-rbac-proxy |
| | kubelet | isvc-tensorflow-runtime-predictor-8699d78cf-mgt2d | Pulled | Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine |
| x3 | kubelet | isvc-tensorflow-runtime-predictor-8699d78cf-mgt2d | Unhealthy | Readiness probe failed: dial tcp 10.133.0.28:8080: connect: connection refused |
| | v1beta1Controllers | isvc-tensorflow-runtime | InferenceServiceReady | InferenceService [isvc-tensorflow-runtime] is Ready |
| x12 | v1beta1Controllers | isvc-tensorflow-runtime | VirtualServiceCRDNotFound | Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true. |
| x7 | kubelet | isvc-tensorflow-predictor-6756f669d7-69whm | Unhealthy | Readiness probe failed: Get "https://10.133.0.27:8643/healthz": dial tcp 10.133.0.27:8643: connect: connection refused |
| x2 | horizontal-pod-autoscaler | isvc-tensorflow-runtime-predictor | FailedGetResourceMetric | failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API |
| x2 | horizontal-pod-autoscaler | isvc-tensorflow-runtime-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API |
| | replicaset-controller | isvc-triton-predictor-84bb65d94b | SuccessfulCreate | Created pod: isvc-triton-predictor-84bb65d94b-7nd6v |
| | deployment-controller | isvc-triton-predictor | ScalingReplicaSet | Scaled up replica set isvc-triton-predictor-84bb65d94b from 0 to 1 |
| | kubelet | isvc-triton-predictor-84bb65d94b-7nd6v | Started | Started container storage-initializer |
| | v1beta1Controllers | isvc-triton | UpdateFailed | Failed to update status for InferenceService "isvc-triton": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-triton": the object has been modified; please apply your changes to the latest version and try again |
| | kubelet | isvc-triton-predictor-84bb65d94b-7nd6v | Created | Created container: storage-initializer |
| | kubelet | isvc-triton-predictor-84bb65d94b-7nd6v | Pulled | Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:85f85c16414082de64de782f602228652c10c5bace08b518e803a7e911c62e5e" already present on machine |
| | kubelet | isvc-tensorflow-runtime-predictor-8699d78cf-mgt2d | Killing | Stopping container kserve-container |
| | kubelet | isvc-tensorflow-runtime-predictor-8699d78cf-mgt2d | Killing | Stopping container kube-rbac-proxy |
| | multus | isvc-triton-predictor-84bb65d94b-7nd6v | AddedInterface | Add eth0 [10.133.0.29/23] from ovn-kubernetes |
| | kubelet | isvc-triton-predictor-84bb65d94b-7nd6v | Pulling | Pulling image "nvcr.io/nvidia/tritonserver:23.05-py3" |
| x6 | kubelet | isvc-tensorflow-runtime-predictor-8699d78cf-mgt2d | Unhealthy | Readiness probe failed: Get "https://10.133.0.28:8643/healthz": dial tcp 10.133.0.28:8643: connect: connection refused |
| | kubelet | isvc-triton-predictor-84bb65d94b-7nd6v | Pulled | Successfully pulled image "nvcr.io/nvidia/tritonserver:23.05-py3" in 1m52.386s (1m52.386s including waiting). Image size: 12907074623 bytes. |
| | kubelet | isvc-triton-predictor-84bb65d94b-7nd6v | Created | Created container: kserve-container |
| | kubelet | isvc-triton-predictor-84bb65d94b-7nd6v | Created | Created container: kube-rbac-proxy |
| | kubelet | isvc-triton-predictor-84bb65d94b-7nd6v | Started | Started container kube-rbac-proxy |
| | kubelet | isvc-triton-predictor-84bb65d94b-7nd6v | Pulled | Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine |
| | kubelet | isvc-triton-predictor-84bb65d94b-7nd6v | Started | Started container kserve-container |
| x8 | horizontal-pod-autoscaler | isvc-triton-predictor | FailedGetResourceMetric | failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API |
| x8 | horizontal-pod-autoscaler | isvc-triton-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API |
| x2 | kubelet | isvc-triton-predictor-84bb65d94b-7nd6v | Unhealthy | Readiness probe failed: dial tcp 10.133.0.29:8080: connect: connection refused |
| x11 | v1beta1Controllers | isvc-triton | VirtualServiceCRDNotFound | Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true. |
| | v1beta1Controllers | isvc-triton | InferenceServiceReady | InferenceService [isvc-triton] is Ready |
| | deployment-controller | isvc-xgboost-predictor | ScalingReplicaSet | Scaled up replica set isvc-xgboost-predictor-8689c4cfcc from 0 to 1 |
| | replicaset-controller | isvc-xgboost-predictor-8689c4cfcc | SuccessfulCreate | Created pod: isvc-xgboost-predictor-8689c4cfcc-lsk5m |
| | v1beta1Controllers | isvc-xgboost | UpdateFailed | Failed to update status for InferenceService "isvc-xgboost": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-xgboost": the object has been modified; please apply your changes to the latest version and try again |
| | kubelet | isvc-triton-predictor-84bb65d94b-7nd6v | Killing | Stopping container kube-rbac-proxy |
| | kubelet | isvc-triton-predictor-84bb65d94b-7nd6v | Killing | Stopping container kserve-container |
| | kubelet | isvc-xgboost-predictor-8689c4cfcc-lsk5m | Pulled | Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:85f85c16414082de64de782f602228652c10c5bace08b518e803a7e911c62e5e" already present on machine |
| | kubelet | isvc-xgboost-predictor-8689c4cfcc-lsk5m | Started | Started container storage-initializer |
| | kubelet | isvc-xgboost-predictor-8689c4cfcc-lsk5m | Created | Created container: storage-initializer |
| | multus | isvc-xgboost-predictor-8689c4cfcc-lsk5m | AddedInterface | Add eth0 [10.133.0.30/23] from ovn-kubernetes |
| | kubelet | isvc-xgboost-predictor-8689c4cfcc-lsk5m | Pulling | Pulling image "kserve/xgbserver:latest" |
| | kubelet | isvc-xgboost-predictor-8689c4cfcc-lsk5m | Pulled | Successfully pulled image "kserve/xgbserver:latest" in 18.963s (18.963s including waiting). Image size: 1306417402 bytes. |
| | kubelet | isvc-xgboost-predictor-8689c4cfcc-lsk5m | Created | Created container: kserve-container |
| | kubelet | isvc-xgboost-predictor-8689c4cfcc-lsk5m | Pulled | Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine |
| | kubelet | isvc-xgboost-predictor-8689c4cfcc-lsk5m | Started | Started container kserve-container |

kserve-ci-e2e-test

kubelet

isvc-xgboost-predictor-8689c4cfcc-lsk5m

Started

Started container kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-xgboost-predictor-8689c4cfcc-lsk5m

Created

Created container: kube-rbac-proxy
(x3)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-xgboost-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x3)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-xgboost-predictor

FailedGetResourceMetric

failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x3)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-xgboost-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
(x3)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-xgboost-predictor

FailedGetResourceMetric

failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)

kserve-ci-e2e-test

v1beta1Controllers

isvc-xgboost

InferenceServiceReady

InferenceService [isvc-xgboost] is Ready
(x13)

kserve-ci-e2e-test

v1beta1Controllers

isvc-xgboost

VirtualServiceCRDNotFound

Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.

kserve-ci-e2e-test

kubelet

isvc-xgboost-predictor-8689c4cfcc-lsk5m

Killing

Stopping container kserve-container

kserve-ci-e2e-test

kubelet

isvc-xgboost-predictor-8689c4cfcc-lsk5m

Killing

Stopping container kube-rbac-proxy

kserve-ci-e2e-test

v1beta1Controllers

isvc-xgboost-v2-mlserver

UpdateFailed

Failed to update status for InferenceService "isvc-xgboost-v2-mlserver": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-xgboost-v2-mlserver": the object has been modified; please apply your changes to the latest version and try again

kserve-ci-e2e-test

multus

isvc-xgboost-v2-mlserver-predictor-67d4bc6646-9kzj8

AddedInterface

Add eth0 [10.134.0.56/23] from ovn-kubernetes

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-mlserver-predictor-67d4bc6646-9kzj8

Pulled

Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:85f85c16414082de64de782f602228652c10c5bace08b518e803a7e911c62e5e" already present on machine

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-mlserver-predictor-67d4bc6646-9kzj8

Created

Created container: storage-initializer

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-mlserver-predictor-67d4bc6646-9kzj8

Started

Started container storage-initializer

kserve-ci-e2e-test

kubelet

isvc-xgboost-predictor-8689c4cfcc-lsk5m

Unhealthy

Readiness probe failed: Get "https://10.133.0.30:8643/healthz": dial tcp 10.133.0.30:8643: connect: connection refused

kserve-ci-e2e-test

replicaset-controller

isvc-xgboost-v2-mlserver-predictor-67d4bc6646

SuccessfulCreate

Created pod: isvc-xgboost-v2-mlserver-predictor-67d4bc6646-9kzj8
(x9)

kserve-ci-e2e-test

kubelet

isvc-xgboost-predictor-8689c4cfcc-lsk5m

Unhealthy

Readiness probe failed: dial tcp 10.133.0.30:8080: connect: connection refused

kserve-ci-e2e-test

deployment-controller

isvc-xgboost-v2-mlserver-predictor

ScalingReplicaSet

Scaled up replica set isvc-xgboost-v2-mlserver-predictor-67d4bc6646 from 0 to 1

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-mlserver-predictor-67d4bc6646-9kzj8

Pulled

Container image "docker.io/seldonio/mlserver:1.7.1" already present on machine

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-mlserver-predictor-67d4bc6646-9kzj8

Started

Started container kserve-container

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-mlserver-predictor-67d4bc6646-9kzj8

Created

Created container: kserve-container

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-mlserver-predictor-67d4bc6646-9kzj8

Pulled

Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-mlserver-predictor-67d4bc6646-9kzj8

Started

Started container kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-mlserver-predictor-67d4bc6646-9kzj8

Created

Created container: kube-rbac-proxy

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-xgboost-v2-mlserver-predictor

FailedGetResourceMetric

failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-xgboost-v2-mlserver-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
(x13)

kserve-ci-e2e-test

v1beta1Controllers

isvc-xgboost-v2-mlserver

VirtualServiceCRDNotFound

Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.

kserve-ci-e2e-test

v1beta1Controllers

isvc-xgboost-v2-mlserver

InferenceServiceReady

InferenceService [isvc-xgboost-v2-mlserver] is Ready
(x2)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-xgboost-v2-mlserver-predictor

FailedGetResourceMetric

failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x2)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-xgboost-v2-mlserver-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API

kserve-ci-e2e-test

kubelet

xgboost-v2-mlserver-predictor-7799869d6f-bp5rd

Pulled

Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:85f85c16414082de64de782f602228652c10c5bace08b518e803a7e911c62e5e" already present on machine

kserve-ci-e2e-test

multus

xgboost-v2-mlserver-predictor-7799869d6f-bp5rd

AddedInterface

Add eth0 [10.134.0.57/23] from ovn-kubernetes

kserve-ci-e2e-test

kubelet

xgboost-v2-mlserver-predictor-7799869d6f-bp5rd

Created

Created container: storage-initializer

kserve-ci-e2e-test

kubelet

xgboost-v2-mlserver-predictor-7799869d6f-bp5rd

Started

Started container storage-initializer

kserve-ci-e2e-test

replicaset-controller

xgboost-v2-mlserver-predictor-7799869d6f

SuccessfulCreate

Created pod: xgboost-v2-mlserver-predictor-7799869d6f-bp5rd

kserve-ci-e2e-test

deployment-controller

xgboost-v2-mlserver-predictor

ScalingReplicaSet

Scaled up replica set xgboost-v2-mlserver-predictor-7799869d6f from 0 to 1

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-mlserver-predictor-67d4bc6646-9kzj8

Killing

Stopping container kube-rbac-proxy

kserve-ci-e2e-test

v1beta1Controllers

xgboost-v2-mlserver

UpdateFailed

Failed to update status for InferenceService "xgboost-v2-mlserver": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "xgboost-v2-mlserver": the object has been modified; please apply your changes to the latest version and try again

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-mlserver-predictor-67d4bc6646-9kzj8

Killing

Stopping container kserve-container

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-mlserver-predictor-67d4bc6646-9kzj8

Unhealthy

Readiness probe failed: Get "https://10.134.0.56:8643/healthz": dial tcp 10.134.0.56:8643: connect: connection refused

kserve-ci-e2e-test

kubelet

xgboost-v2-mlserver-predictor-7799869d6f-bp5rd

Created

Created container: kube-rbac-proxy

kserve-ci-e2e-test

kubelet

xgboost-v2-mlserver-predictor-7799869d6f-bp5rd

Pulled

Container image "docker.io/seldonio/mlserver:1.7.1" already present on machine

kserve-ci-e2e-test

kubelet

xgboost-v2-mlserver-predictor-7799869d6f-bp5rd

Started

Started container kube-rbac-proxy

kserve-ci-e2e-test

kubelet

xgboost-v2-mlserver-predictor-7799869d6f-bp5rd

Pulled

Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine

kserve-ci-e2e-test

kubelet

xgboost-v2-mlserver-predictor-7799869d6f-bp5rd

Started

Started container kserve-container

kserve-ci-e2e-test

kubelet

xgboost-v2-mlserver-predictor-7799869d6f-bp5rd

Created

Created container: kserve-container

kserve-ci-e2e-test

v1beta1Controllers

xgboost-v2-mlserver

InferenceServiceReady

InferenceService [xgboost-v2-mlserver] is Ready
(x12)

kserve-ci-e2e-test

v1beta1Controllers

xgboost-v2-mlserver

VirtualServiceCRDNotFound

Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.
(x3)

kserve-ci-e2e-test

horizontal-pod-autoscaler

xgboost-v2-mlserver-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x3)

kserve-ci-e2e-test

horizontal-pod-autoscaler

xgboost-v2-mlserver-predictor

FailedGetResourceMetric

failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API

kserve-ci-e2e-test

deployment-controller

isvc-xgboost-runtime-predictor

ScalingReplicaSet

Scaled up replica set isvc-xgboost-runtime-predictor-779db84d9 from 0 to 1

kserve-ci-e2e-test

kubelet

xgboost-v2-mlserver-predictor-7799869d6f-bp5rd

Killing

Stopping container kube-rbac-proxy

kserve-ci-e2e-test

replicaset-controller

isvc-xgboost-runtime-predictor-779db84d9

SuccessfulCreate

Created pod: isvc-xgboost-runtime-predictor-779db84d9-4z5rt

kserve-ci-e2e-test

kubelet

xgboost-v2-mlserver-predictor-7799869d6f-bp5rd

Killing

Stopping container kserve-container

kserve-ci-e2e-test

v1beta1Controllers

isvc-xgboost-runtime

UpdateFailed

Failed to update status for InferenceService "isvc-xgboost-runtime": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-xgboost-runtime": the object has been modified; please apply your changes to the latest version and try again

kserve-ci-e2e-test

kubelet

isvc-xgboost-runtime-predictor-779db84d9-4z5rt

Created

Created container: storage-initializer

kserve-ci-e2e-test

kubelet

isvc-xgboost-runtime-predictor-779db84d9-4z5rt

Started

Started container storage-initializer

kserve-ci-e2e-test

multus

isvc-xgboost-runtime-predictor-779db84d9-4z5rt

AddedInterface

Add eth0 [10.133.0.31/23] from ovn-kubernetes

kserve-ci-e2e-test

kubelet

isvc-xgboost-runtime-predictor-779db84d9-4z5rt

Pulled

Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:85f85c16414082de64de782f602228652c10c5bace08b518e803a7e911c62e5e" already present on machine

kserve-ci-e2e-test

kubelet

xgboost-v2-mlserver-predictor-7799869d6f-bp5rd

Unhealthy

Readiness probe failed: Get "http://10.134.0.57:8080/v2/models/xgboost-v2-mlserver/ready": dial tcp 10.134.0.57:8080: connect: connection refused

kserve-ci-e2e-test

kubelet

xgboost-v2-mlserver-predictor-7799869d6f-bp5rd

Unhealthy

Readiness probe failed: Get "https://10.134.0.57:8643/healthz": dial tcp 10.134.0.57:8643: connect: connection refused

kserve-ci-e2e-test

kubelet

isvc-xgboost-runtime-predictor-779db84d9-4z5rt

Started

Started container kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-xgboost-runtime-predictor-779db84d9-4z5rt

Created

Created container: kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-xgboost-runtime-predictor-779db84d9-4z5rt

Pulled

Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine

kserve-ci-e2e-test

kubelet

isvc-xgboost-runtime-predictor-779db84d9-4z5rt

Started

Started container kserve-container

kserve-ci-e2e-test

kubelet

isvc-xgboost-runtime-predictor-779db84d9-4z5rt

Created

Created container: kserve-container

kserve-ci-e2e-test

kubelet

isvc-xgboost-runtime-predictor-779db84d9-4z5rt

Pulled

Container image "kserve/xgbserver:latest" already present on machine
(x2)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-xgboost-runtime-predictor

FailedGetResourceMetric

failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x2)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-xgboost-runtime-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API

kserve-ci-e2e-test

v1beta1Controllers

isvc-xgboost-runtime

InferenceServiceReady

InferenceService [isvc-xgboost-runtime] is Ready
(x12)

kserve-ci-e2e-test

v1beta1Controllers

isvc-xgboost-runtime

VirtualServiceCRDNotFound

Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.
(x3)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-xgboost-runtime-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
(x3)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-xgboost-runtime-predictor

FailedGetResourceMetric

failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)

kserve-ci-e2e-test

replicaset-controller

isvc-xgboost-v2-runtime-predictor-6dc5954dc

SuccessfulCreate

Created pod: isvc-xgboost-v2-runtime-predictor-6dc5954dc-cksmg

kserve-ci-e2e-test

kubelet

isvc-xgboost-runtime-predictor-779db84d9-4z5rt

Killing

Stopping container kube-rbac-proxy

kserve-ci-e2e-test

v1beta1Controllers

isvc-xgboost-v2-runtime

UpdateFailed

Failed to update status for InferenceService "isvc-xgboost-v2-runtime": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-xgboost-v2-runtime": the object has been modified; please apply your changes to the latest version and try again

kserve-ci-e2e-test

kubelet

isvc-xgboost-runtime-predictor-779db84d9-4z5rt

Killing

Stopping container kserve-container

kserve-ci-e2e-test

deployment-controller

isvc-xgboost-v2-runtime-predictor

ScalingReplicaSet

Scaled up replica set isvc-xgboost-v2-runtime-predictor-6dc5954dc from 0 to 1
(x2)

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-runtime-predictor-6dc5954dc-cksmg

FailedMount

MountVolume.SetUp failed for volume "proxy-tls" : secret "isvc-xgboost-v2-runtime-predictor-serving-cert" not found

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-runtime-predictor-6dc5954dc-cksmg

Pulled

Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:85f85c16414082de64de782f602228652c10c5bace08b518e803a7e911c62e5e" already present on machine

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-runtime-predictor-6dc5954dc-cksmg

Created

Created container: storage-initializer

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-runtime-predictor-6dc5954dc-cksmg

Started

Started container storage-initializer

kserve-ci-e2e-test

multus

isvc-xgboost-v2-runtime-predictor-6dc5954dc-cksmg

AddedInterface

Add eth0 [10.134.0.58/23] from ovn-kubernetes

kserve-ci-e2e-test

kubelet

isvc-xgboost-runtime-predictor-779db84d9-4z5rt

Unhealthy

Readiness probe failed: Get "https://10.133.0.31:8643/healthz": dial tcp 10.133.0.31:8643: connect: connection refused
(x9)

kserve-ci-e2e-test

kubelet

isvc-xgboost-runtime-predictor-779db84d9-4z5rt

Unhealthy

Readiness probe failed: dial tcp 10.133.0.31:8080: connect: connection refused

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-runtime-predictor-6dc5954dc-cksmg

Pulled

Container image "docker.io/seldonio/mlserver:1.7.1" already present on machine

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-runtime-predictor-6dc5954dc-cksmg

Created

Created container: kserve-container

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-runtime-predictor-6dc5954dc-cksmg

Started

Started container kserve-container

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-runtime-predictor-6dc5954dc-cksmg

Pulled

Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-runtime-predictor-6dc5954dc-cksmg

Created

Created container: kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-runtime-predictor-6dc5954dc-cksmg

Started

Started container kube-rbac-proxy
(x2)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-xgboost-v2-runtime-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x2)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-xgboost-v2-runtime-predictor

FailedGetResourceMetric

failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API

kserve-ci-e2e-test

v1beta1Controllers

isvc-xgboost-v2-runtime

InferenceServiceReady

InferenceService [isvc-xgboost-v2-runtime] is Ready

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-xgboost-v2-runtime-predictor

FailedGetResourceMetric

failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-xgboost-v2-runtime-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
(x12)

kserve-ci-e2e-test

v1beta1Controllers

isvc-xgboost-v2-runtime

VirtualServiceCRDNotFound

Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-runtime-predictor-6dc5954dc-cksmg

Killing

Stopping container kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-runtime-predictor-6dc5954dc-cksmg

Killing

Stopping container kserve-container

kserve-ci-e2e-test

deployment-controller

isvc-xgboost-v2-predictor

ScalingReplicaSet

Scaled up replica set isvc-xgboost-v2-predictor-6fcdd6977c from 0 to 1
(x2)

kserve-ci-e2e-test

v1beta1Controllers

isvc-xgboost-v2

UpdateFailed

Failed to update status for InferenceService "isvc-xgboost-v2": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-xgboost-v2": the object has been modified; please apply your changes to the latest version and try again

kserve-ci-e2e-test

v1beta1Controllers

isvc-xgboost-v2

InternalError

fails to update InferenceService status: Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-xgboost-v2": the object has been modified; please apply your changes to the latest version and try again

kserve-ci-e2e-test

replicaset-controller

isvc-xgboost-v2-predictor-6fcdd6977c

SuccessfulCreate

Created pod: isvc-xgboost-v2-predictor-6fcdd6977c-4cjnw

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-predictor-6fcdd6977c-4cjnw

FailedMount

MountVolume.SetUp failed for volume "proxy-tls" : secret "isvc-xgboost-v2-predictor-serving-cert" not found

kserve-ci-e2e-test

multus

isvc-xgboost-v2-predictor-6fcdd6977c-4cjnw

AddedInterface

Add eth0 [10.133.0.32/23] from ovn-kubernetes

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-predictor-6fcdd6977c-4cjnw

Pulled

Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:85f85c16414082de64de782f602228652c10c5bace08b518e803a7e911c62e5e" already present on machine

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-predictor-6fcdd6977c-4cjnw

Started

Started container storage-initializer

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-predictor-6fcdd6977c-4cjnw

Created

Created container: storage-initializer

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-runtime-predictor-6dc5954dc-cksmg

Unhealthy

Readiness probe failed: Get "https://10.134.0.58:8643/healthz": dial tcp 10.134.0.58:8643: connect: connection refused

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-runtime-predictor-6dc5954dc-cksmg

Unhealthy

Readiness probe failed: Get "http://10.134.0.58:8080/v2/models/isvc-xgboost-v2-runtime/ready": dial tcp 10.134.0.58:8080: connect: connection refused

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-predictor-6fcdd6977c-4cjnw

Pulled

Container image "kserve/xgbserver:latest" already present on machine

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-predictor-6fcdd6977c-4cjnw

Started

Started container kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-predictor-6fcdd6977c-4cjnw

Created

Created container: kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-predictor-6fcdd6977c-4cjnw

Pulled

Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-predictor-6fcdd6977c-4cjnw

Started

Started container kserve-container

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-predictor-6fcdd6977c-4cjnw

Created

Created container: kserve-container
(x3)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-xgboost-v2-predictor

FailedGetResourceMetric

failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x3)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-xgboost-v2-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API

kserve-ci-e2e-test

v1beta1Controllers

isvc-xgboost-v2

InferenceServiceReady

InferenceService [isvc-xgboost-v2] is Ready
(x13)

kserve-ci-e2e-test

v1beta1Controllers

isvc-xgboost-v2

VirtualServiceCRDNotFound

Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.
(x2)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-xgboost-v2-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
(x2)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-xgboost-v2-predictor

FailedGetResourceMetric

failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)

kserve-ci-e2e-test

deployment-controller

isvc-sklearn-s3-predictor

ScalingReplicaSet

Scaled up replica set isvc-sklearn-s3-predictor-b6dd86fb7 from 0 to 1

kserve-ci-e2e-test

v1beta1Controllers

isvc-sklearn-s3

UpdateFailed

Failed to update status for InferenceService "isvc-sklearn-s3": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-sklearn-s3": the object has been modified; please apply your changes to the latest version and try again

kserve-ci-e2e-test

replicaset-controller

isvc-sklearn-s3-predictor-b6dd86fb7

SuccessfulCreate

Created pod: isvc-sklearn-s3-predictor-b6dd86fb7-dtt45

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-predictor-6fcdd6977c-4cjnw

Killing

Stopping container kserve-container

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-predictor-6fcdd6977c-4cjnw

Killing

Stopping container kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-sklearn-s3-predictor-b6dd86fb7-dtt45

FailedMount

MountVolume.SetUp failed for volume "proxy-tls" : secret "isvc-sklearn-s3-predictor-serving-cert" not found

kserve-ci-e2e-test

kubelet

isvc-sklearn-s3-predictor-b6dd86fb7-dtt45

Started

Started container storage-initializer

kserve-ci-e2e-test

kubelet

isvc-sklearn-s3-predictor-b6dd86fb7-dtt45

Created

Created container: storage-initializer

kserve-ci-e2e-test

kubelet

isvc-sklearn-s3-predictor-b6dd86fb7-dtt45

Pulled

Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:85f85c16414082de64de782f602228652c10c5bace08b518e803a7e911c62e5e" already present on machine

kserve-ci-e2e-test

multus

isvc-sklearn-s3-predictor-b6dd86fb7-dtt45

AddedInterface

Add eth0 [10.134.0.59/23] from ovn-kubernetes

kserve-ci-e2e-test

kubelet

isvc-sklearn-s3-predictor-b6dd86fb7-dtt45

Started

Started container kserve-container
(x9)

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-predictor-6fcdd6977c-4cjnw

Unhealthy

Readiness probe failed: dial tcp 10.133.0.32:8080: connect: connection refused

kserve-ci-e2e-test

kubelet

isvc-sklearn-s3-predictor-b6dd86fb7-dtt45

Created

Created container: kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-sklearn-s3-predictor-b6dd86fb7-dtt45

Pulled

Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine

kserve-ci-e2e-test

kubelet

isvc-sklearn-s3-predictor-b6dd86fb7-dtt45

Created

Created container: kserve-container

kserve-ci-e2e-test

kubelet

isvc-sklearn-s3-predictor-b6dd86fb7-dtt45

Pulled

Container image "quay.io/opendatahub/sklearn-serving-runtime:odh-pr-1447" already present on machine

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-predictor-6fcdd6977c-4cjnw

Unhealthy

Readiness probe failed: Get "https://10.133.0.32:8643/healthz": dial tcp 10.133.0.32:8643: connect: connection refused

kserve-ci-e2e-test

kubelet

isvc-sklearn-s3-predictor-b6dd86fb7-dtt45

Started

Started container kube-rbac-proxy
(x2)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-sklearn-s3-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x2)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-sklearn-s3-predictor

FailedGetResourceMetric

failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API

kserve-ci-e2e-test

v1beta1Controllers

isvc-sklearn-s3

InferenceServiceReady

InferenceService [isvc-sklearn-s3] is Ready
(x2)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-sklearn-s3-predictor

FailedGetResourceMetric

failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
(x12)

kserve-ci-e2e-test

v1beta1Controllers

isvc-sklearn-s3

VirtualServiceCRDNotFound

Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.
(x2)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-sklearn-s3-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
(x8)

kserve-ci-e2e-test

kubelet

isvc-sklearn-s3-predictor-b6dd86fb7-dtt45

Unhealthy

Readiness probe failed: dial tcp 10.134.0.59:8080: connect: connection refused

kserve-ci-e2e-test

kubelet

isvc-sklearn-s3-predictor-b6dd86fb7-dtt45

Killing

Stopping container kserve-container

kserve-ci-e2e-test

deployment-controller

isvc-sklearn-s3-tls-global-pass-predictor

ScalingReplicaSet

Scaled up replica set isvc-sklearn-s3-tls-global-pass-predictor-688df9b567 from 0 to 1

kserve-ci-e2e-test

replicaset-controller

isvc-sklearn-s3-tls-global-pass-predictor-688df9b567

SuccessfulCreate

Created pod: isvc-sklearn-s3-tls-global-pass-predictor-688df9b567-6kwng

kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-predictor-b6dd86fb7-dtt45 | Unhealthy | Readiness probe failed: Get "https://10.134.0.59:8643/healthz": dial tcp 10.134.0.59:8643: connect: connection refused
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-global-pass-predictor-688df9b567-6kwng | Started | Started container storage-initializer
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-predictor-b6dd86fb7-dtt45 | Killing | Stopping container kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-global-pass-predictor-688df9b567-6kwng | Pulled | Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:85f85c16414082de64de782f602228652c10c5bace08b518e803a7e911c62e5e" already present on machine
kserve-ci-e2e-test | multus | isvc-sklearn-s3-tls-global-pass-predictor-688df9b567-6kwng | AddedInterface | Add eth0 [10.134.0.60/23] from ovn-kubernetes
kserve-ci-e2e-test | v1beta1Controllers | isvc-sklearn-s3-tls-global-pass | UpdateFailed | Failed to update status for InferenceService "isvc-sklearn-s3-tls-global-pass": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-sklearn-s3-tls-global-pass": the object has been modified; please apply your changes to the latest version and try again
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-global-pass-predictor-688df9b567-6kwng | Created | Created container: storage-initializer
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-global-pass-predictor-688df9b567-6kwng | Started | Started container kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-global-pass-predictor-688df9b567-6kwng | Created | Created container: kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-global-pass-predictor-688df9b567-6kwng | Pulled | Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-global-pass-predictor-688df9b567-6kwng | Started | Started container kserve-container
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-global-pass-predictor-688df9b567-6kwng | Created | Created container: kserve-container
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-global-pass-predictor-688df9b567-6kwng | Pulled | Container image "quay.io/opendatahub/sklearn-serving-runtime:odh-pr-1447" already present on machine (x2)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-sklearn-s3-tls-global-pass-predictor | FailedGetResourceMetric | failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API (x2)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-sklearn-s3-tls-global-pass-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
kserve-ci-e2e-test | v1beta1Controllers | isvc-sklearn-s3-tls-global-pass | InferenceServiceReady | InferenceService [isvc-sklearn-s3-tls-global-pass] is Ready (x12)
kserve-ci-e2e-test | v1beta1Controllers | isvc-sklearn-s3-tls-global-pass | VirtualServiceCRDNotFound | Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true. (x3)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-sklearn-s3-tls-global-pass-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready) (x3)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-sklearn-s3-tls-global-pass-predictor | FailedGetResourceMetric | failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready) (x9)
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-global-pass-predictor-688df9b567-6kwng | Unhealthy | Readiness probe failed: dial tcp 10.134.0.60:8080: connect: connection refused
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-global-pass-predictor-688df9b567-6kwng | Killing | Stopping container kserve-container
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-global-pass-predictor-688df9b567-6kwng | Killing | Stopping container kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-global-pass-predictor-688df9b567-6kwng | Unhealthy | Readiness probe failed: Get "https://10.134.0.60:8643/healthz": dial tcp 10.134.0.60:8643: connect: connection refused

kserve-ci-e2e-test | deployment-controller | isvc-sklearn-s3-tls-global-fail-predictor | ScalingReplicaSet | Scaled up replica set isvc-sklearn-s3-tls-global-fail-predictor-548d449c8f from 0 to 1
kserve-ci-e2e-test | v1beta1Controllers | isvc-sklearn-s3-tls-global-fail | UpdateFailed | Failed to update status for InferenceService "isvc-sklearn-s3-tls-global-fail": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-sklearn-s3-tls-global-fail": the object has been modified; please apply your changes to the latest version and try again
kserve-ci-e2e-test | replicaset-controller | isvc-sklearn-s3-tls-global-fail-predictor-548d449c8f | SuccessfulCreate | Created pod: isvc-sklearn-s3-tls-global-fail-predictor-548d449c8f-rpfk2
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-global-fail-predictor-548d449c8f-rpfk2 | FailedMount | MountVolume.SetUp failed for volume "proxy-tls" : secret "isvc-sklearn-s3-tls-global-fail-predictor-serving-cert" not found
kserve-ci-e2e-test | multus | isvc-sklearn-s3-tls-global-fail-predictor-548d449c8f-rpfk2 | AddedInterface | Add eth0 [10.134.0.61/23] from ovn-kubernetes (x2)
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-global-fail-predictor-548d449c8f-rpfk2 | Created | Created container: storage-initializer (x10)
kserve-ci-e2e-test | v1beta1Controllers | isvc-sklearn-s3-tls-global-fail | VirtualServiceCRDNotFound | Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true. (x2)
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-global-fail-predictor-548d449c8f-rpfk2 | Started | Started container storage-initializer (x2)
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-global-fail-predictor-548d449c8f-rpfk2 | Pulled | Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:85f85c16414082de64de782f602228652c10c5bace08b518e803a7e911c62e5e" already present on machine
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-global-fail-predictor-548d449c8f-rpfk2 | Killing | Stopping container storage-initializer

kserve-ci-e2e-test | v1beta1Controllers | isvc-sklearn-s3-tls-custom-pass | UpdateFailed | Failed to update status for InferenceService "isvc-sklearn-s3-tls-custom-pass": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-sklearn-s3-tls-custom-pass": the object has been modified; please apply your changes to the latest version and try again
kserve-ci-e2e-test | replicaset-controller | isvc-sklearn-s3-tls-custom-pass-predictor-78b57684f9 | SuccessfulCreate | Created pod: isvc-sklearn-s3-tls-custom-pass-predictor-78b57684f9-pzzw6
kserve-ci-e2e-test | deployment-controller | isvc-sklearn-s3-tls-custom-pass-predictor | ScalingReplicaSet | Scaled up replica set isvc-sklearn-s3-tls-custom-pass-predictor-78b57684f9 from 0 to 1
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-custom-pass-predictor-78b57684f9-pzzw6 | Started | Started container storage-initializer
kserve-ci-e2e-test | multus | isvc-sklearn-s3-tls-custom-pass-predictor-78b57684f9-pzzw6 | AddedInterface | Add eth0 [10.134.0.62/23] from ovn-kubernetes
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-custom-pass-predictor-78b57684f9-pzzw6 | Pulled | Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:85f85c16414082de64de782f602228652c10c5bace08b518e803a7e911c62e5e" already present on machine
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-custom-pass-predictor-78b57684f9-pzzw6 | Created | Created container: storage-initializer
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-custom-pass-predictor-78b57684f9-pzzw6 | Created | Created container: kserve-container
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-custom-pass-predictor-78b57684f9-pzzw6 | Pulled | Container image "quay.io/opendatahub/sklearn-serving-runtime:odh-pr-1447" already present on machine
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-custom-pass-predictor-78b57684f9-pzzw6 | Started | Started container kserve-container
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-custom-pass-predictor-78b57684f9-pzzw6 | Pulled | Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-custom-pass-predictor-78b57684f9-pzzw6 | Created | Created container: kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-custom-pass-predictor-78b57684f9-pzzw6 | Started | Started container kube-rbac-proxy
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-sklearn-s3-tls-custom-pass-predictor | FailedGetResourceMetric | failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-sklearn-s3-tls-custom-pass-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API (x3)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-sklearn-s3-tls-custom-pass-predictor | FailedGetResourceMetric | failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready) (x3)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-sklearn-s3-tls-custom-pass-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready) (x12)
kserve-ci-e2e-test | v1beta1Controllers | isvc-sklearn-s3-tls-custom-pass | VirtualServiceCRDNotFound | Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.
kserve-ci-e2e-test | v1beta1Controllers | isvc-sklearn-s3-tls-custom-pass | InferenceServiceReady | InferenceService [isvc-sklearn-s3-tls-custom-pass] is Ready (x8)
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-custom-pass-predictor-78b57684f9-pzzw6 | Unhealthy | Readiness probe failed: dial tcp 10.134.0.62:8080: connect: connection refused
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-custom-pass-predictor-78b57684f9-pzzw6 | Killing | Stopping container kserve-container
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-custom-pass-predictor-78b57684f9-pzzw6 | Unhealthy | Readiness probe failed: Get "https://10.134.0.62:8643/healthz": dial tcp 10.134.0.62:8643: connect: connection refused
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-custom-pass-predictor-78b57684f9-pzzw6 | Killing | Stopping container kube-rbac-proxy

kserve-ci-e2e-test | replicaset-controller | isvc-sklearn-s3-tls-custom-fail-predictor-5cbcb57d85 | SuccessfulCreate | Created pod: isvc-sklearn-s3-tls-custom-fail-predictor-5cbcb57d85-66dfl
kserve-ci-e2e-test | deployment-controller | isvc-sklearn-s3-tls-custom-fail-predictor | ScalingReplicaSet | Scaled up replica set isvc-sklearn-s3-tls-custom-fail-predictor-5cbcb57d85 from 0 to 1
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-custom-fail-predictor-5cbcb57d85-66dfl | FailedMount | MountVolume.SetUp failed for volume "proxy-tls" : secret "isvc-sklearn-s3-tls-custom-fail-predictor-serving-cert" not found
kserve-ci-e2e-test | v1beta1Controllers | isvc-sklearn-s3-tls-custom-fail | UpdateFailed | Failed to update status for InferenceService "isvc-sklearn-s3-tls-custom-fail": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-sklearn-s3-tls-custom-fail": the object has been modified; please apply your changes to the latest version and try again
kserve-ci-e2e-test | multus | isvc-sklearn-s3-tls-custom-fail-predictor-5cbcb57d85-66dfl | AddedInterface | Add eth0 [10.134.0.63/23] from ovn-kubernetes (x2)
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-custom-fail-predictor-5cbcb57d85-66dfl | Pulled | Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:85f85c16414082de64de782f602228652c10c5bace08b518e803a7e911c62e5e" already present on machine (x2)
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-custom-fail-predictor-5cbcb57d85-66dfl | Started | Started container storage-initializer (x2)
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-custom-fail-predictor-5cbcb57d85-66dfl | Created | Created container: storage-initializer (x10)
kserve-ci-e2e-test | v1beta1Controllers | isvc-sklearn-s3-tls-custom-fail | VirtualServiceCRDNotFound | Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-custom-fail-predictor-5cbcb57d85-66dfl | Killing | Stopping container storage-initializer

kserve-ci-e2e-test | v1beta1Controllers | isvc-sklearn-s3-tls-serving-pass | UpdateFailed | Failed to update status for InferenceService "isvc-sklearn-s3-tls-serving-pass": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-sklearn-s3-tls-serving-pass": the object has been modified; please apply your changes to the latest version and try again
kserve-ci-e2e-test | deployment-controller | isvc-sklearn-s3-tls-serving-pass-predictor | ScalingReplicaSet | Scaled up replica set isvc-sklearn-s3-tls-serving-pass-predictor-55fcc5fc5d from 0 to 1
kserve-ci-e2e-test | replicaset-controller | isvc-sklearn-s3-tls-serving-pass-predictor-55fcc5fc5d | SuccessfulCreate | Created pod: isvc-sklearn-s3-tls-serving-pass-predictor-55fcc5fc5d-b7hz2
kserve-ci-e2e-test | multus | isvc-sklearn-s3-tls-serving-pass-predictor-55fcc5fc5d-b7hz2 | AddedInterface | Add eth0 [10.134.0.64/23] from ovn-kubernetes
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-serving-pass-predictor-55fcc5fc5d-b7hz2 | Started | Started container storage-initializer
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-serving-pass-predictor-55fcc5fc5d-b7hz2 | Created | Created container: storage-initializer
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-serving-pass-predictor-55fcc5fc5d-b7hz2 | Pulled | Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:85f85c16414082de64de782f602228652c10c5bace08b518e803a7e911c62e5e" already present on machine
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-serving-pass-predictor-55fcc5fc5d-b7hz2 | Pulled | Container image "quay.io/opendatahub/sklearn-serving-runtime:odh-pr-1447" already present on machine
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-serving-pass-predictor-55fcc5fc5d-b7hz2 | Pulled | Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-serving-pass-predictor-55fcc5fc5d-b7hz2 | Created | Created container: kserve-container
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-serving-pass-predictor-55fcc5fc5d-b7hz2 | Started | Started container kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-serving-pass-predictor-55fcc5fc5d-b7hz2 | Created | Created container: kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-serving-pass-predictor-55fcc5fc5d-b7hz2 | Started | Started container kserve-container
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-sklearn-s3-tls-serving-pass-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-sklearn-s3-tls-serving-pass-predictor | FailedGetResourceMetric | failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
kserve-ci-e2e-test | v1beta1Controllers | isvc-sklearn-s3-tls-serving-pass | InferenceServiceReady | InferenceService [isvc-sklearn-s3-tls-serving-pass] is Ready (x13)
kserve-ci-e2e-test | v1beta1Controllers | isvc-sklearn-s3-tls-serving-pass | VirtualServiceCRDNotFound | Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true. (x4)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-sklearn-s3-tls-serving-pass-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready) (x4)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-sklearn-s3-tls-serving-pass-predictor | FailedGetResourceMetric | failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-serving-pass-predictor-55fcc5fc5d-b7hz2 | Killing | Stopping container kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-serving-pass-predictor-55fcc5fc5d-b7hz2 | Killing | Stopping container kserve-container

kserve-ci-e2e-test | deployment-controller | isvc-sklearn-s3-tls-serving-fail-predictor | ScalingReplicaSet | Scaled up replica set isvc-sklearn-s3-tls-serving-fail-predictor-79bd57f944 from 0 to 1
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-serving-fail-predictor-79bd57f944-7rmnp | FailedMount | MountVolume.SetUp failed for volume "proxy-tls" : secret "isvc-sklearn-s3-tls-serving-fail-predictor-serving-cert" not found
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-serving-pass-predictor-55fcc5fc5d-b7hz2 | Unhealthy | Readiness probe failed: Get "https://10.134.0.64:8643/healthz": dial tcp 10.134.0.64:8643: connect: connection refused
kserve-ci-e2e-test | replicaset-controller | isvc-sklearn-s3-tls-serving-fail-predictor-79bd57f944 | SuccessfulCreate | Created pod: isvc-sklearn-s3-tls-serving-fail-predictor-79bd57f944-7rmnp (x9)
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-serving-pass-predictor-55fcc5fc5d-b7hz2 | Unhealthy | Readiness probe failed: dial tcp 10.134.0.64:8080: connect: connection refused
kserve-ci-e2e-test | v1beta1Controllers | isvc-sklearn-s3-tls-serving-fail | UpdateFailed | Failed to update status for InferenceService "isvc-sklearn-s3-tls-serving-fail": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-sklearn-s3-tls-serving-fail": the object has been modified; please apply your changes to the latest version and try again
kserve-ci-e2e-test | multus | isvc-sklearn-s3-tls-serving-fail-predictor-79bd57f944-7rmnp | AddedInterface | Add eth0 [10.134.0.65/23] from ovn-kubernetes (x2)
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-serving-fail-predictor-79bd57f944-7rmnp | Created | Created container: storage-initializer (x2)
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-serving-fail-predictor-79bd57f944-7rmnp | Pulled | Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:85f85c16414082de64de782f602228652c10c5bace08b518e803a7e911c62e5e" already present on machine (x2)
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-serving-fail-predictor-79bd57f944-7rmnp | Started | Started container storage-initializer
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-serving-fail-predictor-79bd57f944-7rmnp | BackOff | Back-off restarting failed container storage-initializer in pod isvc-sklearn-s3-tls-serving-fail-predictor-79bd57f944-7rmnp_kserve-ci-e2e-test(b6fd4750-6049-4209-9853-78972fdd23ca) (x13)
kserve-ci-e2e-test | v1beta1Controllers | isvc-sklearn-s3-tls-serving-fail | VirtualServiceCRDNotFound | Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.