Namespace | Component | RelatedObject | Reason | Count | Message
kserve-ci-e2e-test | | isvc-sklearn-runtime-predictor-65cd49579f-pgv28 | Scheduled | | Successfully assigned kserve-ci-e2e-test/isvc-sklearn-runtime-predictor-65cd49579f-pgv28 to ip-10-0-141-140.ec2.internal
kserve-ci-e2e-test | | isvc-paddle-v2-kserve-predictor-7dbd59854-lvxwn | Scheduled | | Successfully assigned kserve-ci-e2e-test/isvc-paddle-v2-kserve-predictor-7dbd59854-lvxwn to ip-10-0-128-217.ec2.internal
kserve-ci-e2e-test | | sklearn-v2-mlserver-predictor-65d8664766-nv54z | Scheduled | | Successfully assigned kserve-ci-e2e-test/sklearn-v2-mlserver-predictor-65d8664766-nv54z to ip-10-0-141-140.ec2.internal
kserve-ci-e2e-test | | isvc-pmml-predictor-8bb578669-rztqz | Scheduled | | Successfully assigned kserve-ci-e2e-test/isvc-pmml-predictor-8bb578669-rztqz to ip-10-0-128-217.ec2.internal
kserve-ci-e2e-test | | isvc-pmml-runtime-predictor-67bc544947-8mtsr | Scheduled | | Successfully assigned kserve-ci-e2e-test/isvc-pmml-runtime-predictor-67bc544947-8mtsr to ip-10-0-128-217.ec2.internal
kserve-ci-e2e-test | | isvc-init-fail-96f18d-predictor-78476558f5-zszdd | Scheduled | | Successfully assigned kserve-ci-e2e-test/isvc-init-fail-96f18d-predictor-78476558f5-zszdd to ip-10-0-141-140.ec2.internal
kserve-ci-e2e-test | | isvc-pmml-v2-kserve-predictor-6578f8ffc7-6k6cn | Scheduled | | Successfully assigned kserve-ci-e2e-test/isvc-pmml-v2-kserve-predictor-6578f8ffc7-6k6cn to ip-10-0-128-217.ec2.internal
kserve-ci-e2e-test | | isvc-paddle-runtime-predictor-7f4d4f9dc8-75r7n | Scheduled | | Successfully assigned kserve-ci-e2e-test/isvc-paddle-runtime-predictor-7f4d4f9dc8-75r7n to ip-10-0-141-140.ec2.internal
kserve-ci-e2e-test | | message-dumper-predictor-c7d86bcbd-hhjq9 | Scheduled | | Successfully assigned kserve-ci-e2e-test/message-dumper-predictor-c7d86bcbd-hhjq9 to ip-10-0-141-140.ec2.internal
kserve-ci-e2e-test | | isvc-xgboost-v2-runtime-predictor-6dc5954dc-q7hxv | Scheduled | | Successfully assigned kserve-ci-e2e-test/isvc-xgboost-v2-runtime-predictor-6dc5954dc-q7hxv to ip-10-0-141-140.ec2.internal
kserve-ci-e2e-test | | isvc-xgboost-v2-predictor-6fcdd6977c-gvgd6 | Scheduled | | Successfully assigned kserve-ci-e2e-test/isvc-xgboost-v2-predictor-6fcdd6977c-gvgd6 to ip-10-0-128-217.ec2.internal
kserve-ci-e2e-test | | isvc-lightgbm-predictor-bdf964bd-c5x67 | Scheduled | | Successfully assigned kserve-ci-e2e-test/isvc-lightgbm-predictor-bdf964bd-c5x67 to ip-10-0-141-140.ec2.internal
kserve-ci-e2e-test | | isvc-xgboost-v2-mlserver-predictor-67d4bc6646-7nnfv | Scheduled | | Successfully assigned kserve-ci-e2e-test/isvc-xgboost-v2-mlserver-predictor-67d4bc6646-7nnfv to ip-10-0-141-140.ec2.internal
kserve-ci-e2e-test | | isvc-xgboost-runtime-predictor-779db84d9-swtpp | Scheduled | | Successfully assigned kserve-ci-e2e-test/isvc-xgboost-runtime-predictor-779db84d9-swtpp to ip-10-0-128-217.ec2.internal
kserve-ci-e2e-test | | isvc-xgboost-predictor-8689c4cfcc-l8dqn | Scheduled | | Successfully assigned kserve-ci-e2e-test/isvc-xgboost-predictor-8689c4cfcc-l8dqn to ip-10-0-128-217.ec2.internal
kserve-ci-e2e-test | | isvc-triton-predictor-84bb65d94b-2fxfg | Scheduled | | Successfully assigned kserve-ci-e2e-test/isvc-triton-predictor-84bb65d94b-2fxfg to ip-10-0-141-140.ec2.internal
kserve-ci-e2e-test | | isvc-tensorflow-runtime-predictor-8699d78cf-gbfnr | Scheduled | | Successfully assigned kserve-ci-e2e-test/isvc-tensorflow-runtime-predictor-8699d78cf-gbfnr to ip-10-0-128-217.ec2.internal
kserve-ci-e2e-test | | isvc-tensorflow-predictor-6756f669d7-tbfcg | Scheduled | | Successfully assigned kserve-ci-e2e-test/isvc-tensorflow-predictor-6756f669d7-tbfcg to ip-10-0-128-217.ec2.internal
kserve-ci-e2e-test | | isvc-predictive-lightgbm-predictor-75cb94f9f-hnwmp | Scheduled | | Successfully assigned kserve-ci-e2e-test/isvc-predictive-lightgbm-predictor-75cb94f9f-hnwmp to ip-10-0-128-217.ec2.internal
kserve-ci-e2e-test | | isvc-sklearn-v2-predictor-69755fbb9-94sg8 | Scheduled | | Successfully assigned kserve-ci-e2e-test/isvc-sklearn-v2-predictor-69755fbb9-94sg8 to ip-10-0-141-140.ec2.internal
kserve-ci-e2e-test | | isvc-sklearn-v2-mixed-predictor-7f8b779bc6-h44hc | Scheduled | | Successfully assigned kserve-ci-e2e-test/isvc-sklearn-v2-mixed-predictor-7f8b779bc6-h44hc to ip-10-0-141-140.ec2.internal
kserve-ci-e2e-test | | isvc-sklearn-s3-tls-serving-pass-predictor-c86b5bbcf-48nps | Scheduled | | Successfully assigned kserve-ci-e2e-test/isvc-sklearn-s3-tls-serving-pass-predictor-c86b5bbcf-48nps to ip-10-0-141-140.ec2.internal
kserve-ci-e2e-test | | isvc-sklearn-s3-tls-serving-fail-predictor-5bc5655965-nssmd | Scheduled | | Successfully assigned kserve-ci-e2e-test/isvc-sklearn-s3-tls-serving-fail-predictor-5bc5655965-nssmd to ip-10-0-141-140.ec2.internal
kserve-ci-e2e-test | | isvc-sklearn-s3-tls-global-pass-predictor-5488974f76-czh6x | Scheduled | | Successfully assigned kserve-ci-e2e-test/isvc-sklearn-s3-tls-global-pass-predictor-5488974f76-czh6x to ip-10-0-141-140.ec2.internal
kserve-ci-e2e-test | | isvc-sklearn-s3-tls-global-fail-predictor-54884788bb-qvq2x | Scheduled | | Successfully assigned kserve-ci-e2e-test/isvc-sklearn-s3-tls-global-fail-predictor-54884788bb-qvq2x to ip-10-0-141-140.ec2.internal
kserve-ci-e2e-test | | isvc-sklearn-s3-tls-custom-pass-predictor-7877ccc664-c8k2v | Scheduled | | Successfully assigned kserve-ci-e2e-test/isvc-sklearn-s3-tls-custom-pass-predictor-7877ccc664-c8k2v to ip-10-0-141-140.ec2.internal
kserve-ci-e2e-test | | isvc-sklearn-s3-tls-custom-fail-predictor-7d65b5b7cd-hqrvb | Scheduled | | Successfully assigned kserve-ci-e2e-test/isvc-sklearn-s3-tls-custom-fail-predictor-7d65b5b7cd-hqrvb to ip-10-0-141-140.ec2.internal
kserve-ci-e2e-test | | isvc-sklearn-s3-predictor-88457d696-jcz4m | Scheduled | | Successfully assigned kserve-ci-e2e-test/isvc-sklearn-s3-predictor-88457d696-jcz4m to ip-10-0-141-140.ec2.internal
kserve-ci-e2e-test | | xgboost-v2-mlserver-predictor-7799869d6f-hk67v | Scheduled | | Successfully assigned kserve-ci-e2e-test/xgboost-v2-mlserver-predictor-7799869d6f-hk67v to ip-10-0-141-140.ec2.internal
kserve-ci-e2e-test | | isvc-lightgbm-v2-kserve-predictor-559bf6989-t9xws | Scheduled | | Successfully assigned kserve-ci-e2e-test/isvc-lightgbm-v2-kserve-predictor-559bf6989-t9xws to ip-10-0-141-140.ec2.internal
kserve-ci-e2e-test | | isvc-sklearn-v2-runtime-predictor-6d84c876f4-2zh69 | Scheduled | | Successfully assigned kserve-ci-e2e-test/isvc-sklearn-v2-runtime-predictor-6d84c876f4-2zh69 to ip-10-0-141-140.ec2.internal
kserve-ci-e2e-test | | isvc-predictive-lightgbm-v2-predictor-865b4598f7-f2xmz | Scheduled | | Successfully assigned kserve-ci-e2e-test/isvc-predictive-lightgbm-v2-predictor-865b4598f7-f2xmz to ip-10-0-128-217.ec2.internal
kserve-ci-e2e-test | | isvc-sklearn-mcp-predictor-5fdf4889b4-5rclr | Scheduled | | Successfully assigned kserve-ci-e2e-test/isvc-sklearn-mcp-predictor-5fdf4889b4-5rclr to ip-10-0-141-140.ec2.internal
kserve-ci-e2e-test | | isvc-lightgbm-runtime-predictor-749c4f6d58-2vn4t | Scheduled | | Successfully assigned kserve-ci-e2e-test/isvc-lightgbm-runtime-predictor-749c4f6d58-2vn4t to ip-10-0-141-140.ec2.internal
kserve-ci-e2e-test | | isvc-paddle-predictor-6b8b7cfb4b-2bnlb | Scheduled | | Successfully assigned kserve-ci-e2e-test/isvc-paddle-predictor-6b8b7cfb4b-2bnlb to ip-10-0-128-217.ec2.internal
kserve-ci-e2e-test | | isvc-mlflow-v2-runtime-predictor-5fdb47d546-znqb6 | Scheduled | | Successfully assigned kserve-ci-e2e-test/isvc-mlflow-v2-runtime-predictor-5fdb47d546-znqb6 to ip-10-0-141-140.ec2.internal
kserve-ci-e2e-test | | isvc-logger-predictor-64d54fcc88-5j2l7 | Scheduled | | Successfully assigned kserve-ci-e2e-test/isvc-logger-predictor-64d54fcc88-5j2l7 to ip-10-0-141-140.ec2.internal
kserve-ci-e2e-test | | isvc-predictive-xgboost-predictor-7ff98fd74d-x6dvj | Scheduled | | Successfully assigned kserve-ci-e2e-test/isvc-predictive-xgboost-predictor-7ff98fd74d-x6dvj to ip-10-0-128-217.ec2.internal
kserve-ci-e2e-test | | isvc-sklearn-predictor-d8dbfbbb9-xgzx7 | Scheduled | | Successfully assigned kserve-ci-e2e-test/isvc-sklearn-predictor-d8dbfbbb9-xgzx7 to ip-10-0-141-140.ec2.internal
kserve-ci-e2e-test | | isvc-predictive-sklearn-predictor-cd7c759c9-pqm9n | Scheduled | | Successfully assigned kserve-ci-e2e-test/isvc-predictive-sklearn-predictor-cd7c759c9-pqm9n to ip-10-0-128-217.ec2.internal
kserve-ci-e2e-test | | isvc-predictive-xgboost-v2-predictor-6577c65fd8-5zxhh | Scheduled | | Successfully assigned kserve-ci-e2e-test/isvc-predictive-xgboost-v2-predictor-6577c65fd8-5zxhh to ip-10-0-128-217.ec2.internal
kserve-ci-e2e-test | | isvc-primary-9efca4-predictor-897f6b668-4kf2v | Scheduled | | Successfully assigned kserve-ci-e2e-test/isvc-primary-9efca4-predictor-897f6b668-4kf2v to ip-10-0-141-140.ec2.internal
kserve-ci-e2e-test | | isvc-secondary-9efca4-predictor-695c447fdc-c577g | Scheduled | | Successfully assigned kserve-ci-e2e-test/isvc-secondary-9efca4-predictor-695c447fdc-c577g to ip-10-0-141-140.ec2.internal
kserve-ci-e2e-test | | isvc-sklearn-batcher-custom-predictor-ccbd696dd-cxrqb | Scheduled | | Successfully assigned kserve-ci-e2e-test/isvc-sklearn-batcher-custom-predictor-ccbd696dd-cxrqb to ip-10-0-141-140.ec2.internal
kserve-ci-e2e-test | | isvc-predictive-sklearn-v2-predictor-b5d4f6b79-mggxj | Scheduled | | Successfully assigned kserve-ci-e2e-test/isvc-predictive-sklearn-v2-predictor-b5d4f6b79-mggxj to ip-10-0-128-217.ec2.internal
kserve-ci-e2e-test | | isvc-lightgbm-v2-runtime-predictor-8765c9667-46kbf | Scheduled | | Successfully assigned kserve-ci-e2e-test/isvc-lightgbm-v2-runtime-predictor-8765c9667-46kbf to ip-10-0-141-140.ec2.internal
kserve-ci-e2e-test | | isvc-sklearn-batcher-predictor-6c75bdff6f-vjpxz | Scheduled | | Successfully assigned kserve-ci-e2e-test/isvc-sklearn-batcher-predictor-6c75bdff6f-vjpxz to ip-10-0-141-140.ec2.internal
kserve-ci-e2e-test | replicaset-controller | isvc-sklearn-batcher-predictor-6c75bdff6f | SuccessfulCreate | | Created pod: isvc-sklearn-batcher-predictor-6c75bdff6f-vjpxz
kserve-ci-e2e-test | deployment-controller | isvc-sklearn-batcher-predictor | ScalingReplicaSet | | Scaled up replica set isvc-sklearn-batcher-predictor-6c75bdff6f from 0 to 1
kserve-ci-e2e-test | v1beta1Controllers | isvc-sklearn-batcher | UpdateFailed | | Failed to update status for InferenceService "isvc-sklearn-batcher": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-sklearn-batcher": the object has been modified; please apply your changes to the latest version and try again
kserve-ci-e2e-test | v1beta1Controllers | isvc-sklearn-batcher | InternalError | | fails to update InferenceService status: Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-sklearn-batcher": the object has been modified; please apply your changes to the latest version and try again
kserve-ci-e2e-test | multus | isvc-sklearn-batcher-predictor-6c75bdff6f-vjpxz | AddedInterface | | Add eth0 [10.133.0.33/23] from ovn-kubernetes
kserve-ci-e2e-test | kubelet | isvc-sklearn-batcher-predictor-6c75bdff6f-vjpxz | FailedMount | | MountVolume.SetUp failed for volume "proxy-tls" : secret "isvc-sklearn-batcher-predictor-serving-cert" not found
kserve-ci-e2e-test | kubelet | isvc-sklearn-batcher-predictor-6c75bdff6f-vjpxz | Pulling | | Pulling image "quay.io/opendatahub/kserve-storage-initializer@sha256:0cd196d4c53b891914316f18ab5cfa9f85258e057f3687e65332c70bf642d22d"
kserve-ci-e2e-test | kubelet | isvc-sklearn-batcher-predictor-6c75bdff6f-vjpxz | Started | | Started container storage-initializer
kserve-ci-e2e-test | kubelet | isvc-sklearn-batcher-predictor-6c75bdff6f-vjpxz | Created | | Created container: storage-initializer
kserve-ci-e2e-test | kubelet | isvc-sklearn-batcher-predictor-6c75bdff6f-vjpxz | Pulled | | Successfully pulled image "quay.io/opendatahub/kserve-storage-initializer@sha256:0cd196d4c53b891914316f18ab5cfa9f85258e057f3687e65332c70bf642d22d" in 3.222s (3.222s including waiting). Image size: 301288360 bytes.
kserve-ci-e2e-test | kubelet | isvc-sklearn-batcher-predictor-6c75bdff6f-vjpxz | Pulling | | Pulling image "quay.io/opendatahub/sklearn-serving-runtime:odh-pr-1293"
kserve-ci-e2e-test | kubelet | isvc-sklearn-batcher-predictor-6c75bdff6f-vjpxz | Pulling | | Pulling image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3"
kserve-ci-e2e-test | kubelet | isvc-sklearn-batcher-predictor-6c75bdff6f-vjpxz | Started | | Started container kserve-container
kserve-ci-e2e-test | kubelet | isvc-sklearn-batcher-predictor-6c75bdff6f-vjpxz | Created | | Created container: kserve-container
kserve-ci-e2e-test | kubelet | isvc-sklearn-batcher-predictor-6c75bdff6f-vjpxz | Pulled | | Successfully pulled image "quay.io/opendatahub/sklearn-serving-runtime:odh-pr-1293" in 13.618s (13.618s including waiting). Image size: 1560612266 bytes.
kserve-ci-e2e-test | kubelet | isvc-sklearn-batcher-predictor-6c75bdff6f-vjpxz | Pulled | | Successfully pulled image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" in 2.16s (2.16s including waiting). Image size: 211946088 bytes.
kserve-ci-e2e-test | kubelet | isvc-sklearn-batcher-predictor-6c75bdff6f-vjpxz | Created | | Created container: kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-sklearn-batcher-predictor-6c75bdff6f-vjpxz | Started | | Started container kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-sklearn-batcher-predictor-6c75bdff6f-vjpxz | Pulling | | Pulling image "quay.io/opendatahub/kserve-agent@sha256:63b01855d4f4d9cc9a109698dbd7d7889d3c2d40a32577f5946d367ab2c8c321"
kserve-ci-e2e-test | kubelet | isvc-sklearn-batcher-predictor-6c75bdff6f-vjpxz | Created | | Created container: agent
kserve-ci-e2e-test | kubelet | isvc-sklearn-batcher-predictor-6c75bdff6f-vjpxz | Started | | Started container agent
kserve-ci-e2e-test | kubelet | isvc-sklearn-batcher-predictor-6c75bdff6f-vjpxz | Pulled | | Successfully pulled image "quay.io/opendatahub/kserve-agent@sha256:63b01855d4f4d9cc9a109698dbd7d7889d3c2d40a32577f5946d367ab2c8c321" in 2.828s (2.828s including waiting). Image size: 237897293 bytes.
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-sklearn-batcher-predictor | FailedGetResourceMetric | x4 | failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-sklearn-batcher-predictor | FailedComputeMetricsReplicas | x4 | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-sklearn-batcher-predictor | FailedGetResourceMetric | x2 | failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
kserve-ci-e2e-test | v1beta1Controllers | isvc-sklearn-batcher | InferenceServiceReady | | InferenceService [isvc-sklearn-batcher] is Ready
kserve-ci-e2e-test | v1beta1Controllers | isvc-sklearn-batcher | VirtualServiceCRDNotFound | x11 | Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-sklearn-batcher-predictor | FailedComputeMetricsReplicas | x2 | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
kserve-ci-e2e-test | deployment-controller | isvc-sklearn-batcher-custom-predictor | ScalingReplicaSet | | Scaled up replica set isvc-sklearn-batcher-custom-predictor-ccbd696dd from 0 to 1
kserve-ci-e2e-test | kubelet | isvc-sklearn-batcher-custom-predictor-ccbd696dd-cxrqb | FailedMount | x2 | MountVolume.SetUp failed for volume "proxy-tls" : secret "isvc-sklearn-batcher-custom-predictor-serving-cert" not found
kserve-ci-e2e-test | v1beta1Controllers | isvc-sklearn-batcher-custom | UpdateFailed | | Failed to update status for InferenceService "isvc-sklearn-batcher-custom": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-sklearn-batcher-custom": the object has been modified; please apply your changes to the latest version and try again
kserve-ci-e2e-test | replicaset-controller | isvc-sklearn-batcher-custom-predictor-ccbd696dd | SuccessfulCreate | | Created pod: isvc-sklearn-batcher-custom-predictor-ccbd696dd-cxrqb
kserve-ci-e2e-test | kubelet | isvc-sklearn-batcher-predictor-6c75bdff6f-vjpxz | Killing | | Stopping container kserve-container
kserve-ci-e2e-test | kubelet | isvc-sklearn-batcher-predictor-6c75bdff6f-vjpxz | Killing | | Stopping container agent
kserve-ci-e2e-test | kubelet | isvc-sklearn-batcher-predictor-6c75bdff6f-vjpxz | Killing | | Stopping container kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-sklearn-batcher-custom-predictor-ccbd696dd-cxrqb | Created | | Created container: storage-initializer
kserve-ci-e2e-test | multus | isvc-sklearn-batcher-custom-predictor-ccbd696dd-cxrqb | AddedInterface | | Add eth0 [10.133.0.34/23] from ovn-kubernetes
kserve-ci-e2e-test | kubelet | isvc-sklearn-batcher-custom-predictor-ccbd696dd-cxrqb | Started | | Started container storage-initializer
kserve-ci-e2e-test | kubelet | isvc-sklearn-batcher-custom-predictor-ccbd696dd-cxrqb | Pulled | | Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:0cd196d4c53b891914316f18ab5cfa9f85258e057f3687e65332c70bf642d22d" already present on machine
kserve-ci-e2e-test | kubelet | isvc-sklearn-batcher-custom-predictor-ccbd696dd-cxrqb | Pulled | | Container image "quay.io/opendatahub/sklearn-serving-runtime:odh-pr-1293" already present on machine
kserve-ci-e2e-test | kubelet | isvc-sklearn-batcher-custom-predictor-ccbd696dd-cxrqb | Created | | Created container: agent
kserve-ci-e2e-test | kubelet | isvc-sklearn-batcher-custom-predictor-ccbd696dd-cxrqb | Started | | Started container agent
kserve-ci-e2e-test | kubelet | isvc-sklearn-batcher-custom-predictor-ccbd696dd-cxrqb | Pulled | | Container image "quay.io/opendatahub/kserve-agent@sha256:63b01855d4f4d9cc9a109698dbd7d7889d3c2d40a32577f5946d367ab2c8c321" already present on machine
kserve-ci-e2e-test | kubelet | isvc-sklearn-batcher-custom-predictor-ccbd696dd-cxrqb | Created | | Created container: kserve-container
kserve-ci-e2e-test | kubelet | isvc-sklearn-batcher-custom-predictor-ccbd696dd-cxrqb | Started | | Started container kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-sklearn-batcher-custom-predictor-ccbd696dd-cxrqb | Created | | Created container: kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-sklearn-batcher-custom-predictor-ccbd696dd-cxrqb | Pulled | | Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine
kserve-ci-e2e-test | kubelet | isvc-sklearn-batcher-custom-predictor-ccbd696dd-cxrqb | Started | | Started container kserve-container
kserve-ci-e2e-test | kubelet | isvc-sklearn-batcher-predictor-6c75bdff6f-vjpxz | Unhealthy | x3 | Readiness probe failed: Get "https://10.133.0.33:8643/healthz": dial tcp 10.133.0.33:8643: connect: connection refused
kserve-ci-e2e-test | kubelet | isvc-sklearn-batcher-predictor-6c75bdff6f-vjpxz | Unhealthy | x11 | Readiness probe failed: dial tcp 10.133.0.33:8080: connect: connection refused
kserve-ci-e2e-test | kubelet | isvc-sklearn-batcher-predictor-6c75bdff6f-vjpxz | Unhealthy | x10 | Readiness probe failed: HTTP probe failed with statuscode: 503
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-sklearn-batcher-custom-predictor | FailedComputeMetricsReplicas | x2 | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-sklearn-batcher-custom-predictor | FailedGetResourceMetric | x2 | failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
kserve-ci-e2e-test | v1beta1Controllers | isvc-sklearn-batcher-custom | InferenceServiceReady | | InferenceService [isvc-sklearn-batcher-custom] is Ready
kserve-ci-e2e-test | v1beta1Controllers | isvc-sklearn-batcher-custom | VirtualServiceCRDNotFound | x13 | Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-sklearn-batcher-custom-predictor | FailedGetResourceMetric | x3 | failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-sklearn-batcher-custom-predictor | FailedComputeMetricsReplicas | x3 | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
kserve-ci-e2e-test | kubelet | message-dumper-predictor-c7d86bcbd-hhjq9 | Pulling | | Pulling image "gcr.io/knative-releases/knative.dev/eventing-contrib/cmd/event_display"
kserve-ci-e2e-test | replicaset-controller | message-dumper-predictor-c7d86bcbd | SuccessfulCreate | | Created pod: message-dumper-predictor-c7d86bcbd-hhjq9
kserve-ci-e2e-test | multus | message-dumper-predictor-c7d86bcbd-hhjq9 | AddedInterface | | Add eth0 [10.133.0.35/23] from ovn-kubernetes
kserve-ci-e2e-test | v1beta1Controllers | message-dumper | UpdateFailed | | Failed to update status for InferenceService "message-dumper": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "message-dumper": the object has been modified; please apply your changes to the latest version and try again
kserve-ci-e2e-test | deployment-controller | message-dumper-predictor | ScalingReplicaSet | | Scaled up replica set message-dumper-predictor-c7d86bcbd from 0 to 1
kserve-ci-e2e-test | kubelet | isvc-sklearn-batcher-custom-predictor-ccbd696dd-cxrqb | Killing | | Stopping container kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-sklearn-batcher-custom-predictor-ccbd696dd-cxrqb | Killing | | Stopping container kserve-container
kserve-ci-e2e-test | kubelet | isvc-sklearn-batcher-custom-predictor-ccbd696dd-cxrqb | Killing | | Stopping container agent
kserve-ci-e2e-test | kubelet | message-dumper-predictor-c7d86bcbd-hhjq9 | Started | | Started container kube-rbac-proxy
kserve-ci-e2e-test | kubelet | message-dumper-predictor-c7d86bcbd-hhjq9 | Started | | Started container kserve-container
kserve-ci-e2e-test | kubelet | message-dumper-predictor-c7d86bcbd-hhjq9 | Pulled | | Successfully pulled image "gcr.io/knative-releases/knative.dev/eventing-contrib/cmd/event_display" in 1.085s (1.085s including waiting). Image size: 14813193 bytes.
kserve-ci-e2e-test | kubelet | message-dumper-predictor-c7d86bcbd-hhjq9 | Pulled | | Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine
kserve-ci-e2e-test | kubelet | message-dumper-predictor-c7d86bcbd-hhjq9 | Created | | Created container: kserve-container
kserve-ci-e2e-test | kubelet | message-dumper-predictor-c7d86bcbd-hhjq9 | Created | | Created container: kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-sklearn-batcher-custom-predictor-ccbd696dd-cxrqb | Unhealthy | x9 | Readiness probe failed: HTTP probe failed with statuscode: 503
kserve-ci-e2e-test | v1beta1Controllers | message-dumper | InferenceServiceReady | | InferenceService [message-dumper] is Ready
kserve-ci-e2e-test | v1beta1Controllers | message-dumper | VirtualServiceCRDNotFound | x8 | Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.
kserve-ci-e2e-test | horizontal-pod-autoscaler | message-dumper-predictor | FailedGetResourceMetric | | failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
kserve-ci-e2e-test | horizontal-pod-autoscaler | message-dumper-predictor | FailedComputeMetricsReplicas | | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
kserve-ci-e2e-test | kubelet | isvc-sklearn-batcher-custom-predictor-ccbd696dd-cxrqb | Unhealthy | x4 | Readiness probe failed: Get "https://10.133.0.34:8643/healthz": dial tcp 10.133.0.34:8643: connect: connection refused
kserve-ci-e2e-test | kubelet | isvc-sklearn-batcher-custom-predictor-ccbd696dd-cxrqb | Unhealthy | x10 | Readiness probe failed: dial tcp 10.133.0.34:5000: connect: connection refused
kserve-ci-e2e-test | v1beta1Controllers | isvc-logger | InternalError | | fails to update InferenceService status: Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-logger": the object has been modified; please apply your changes to the latest version and try again
kserve-ci-e2e-test | v1beta1Controllers | isvc-logger | UpdateFailed | x2 | Failed to update status for InferenceService "isvc-logger": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-logger": the object has been modified; please apply your changes to the latest version and try again
kserve-ci-e2e-test | kubelet | isvc-logger-predictor-64d54fcc88-5j2l7 | FailedMount | | MountVolume.SetUp failed for volume "proxy-tls" : secret "isvc-logger-predictor-serving-cert" not found
kserve-ci-e2e-test | replicaset-controller | isvc-logger-predictor-64d54fcc88 | SuccessfulCreate | | Created pod: isvc-logger-predictor-64d54fcc88-5j2l7
kserve-ci-e2e-test | deployment-controller | isvc-logger-predictor | ScalingReplicaSet | | Scaled up replica set isvc-logger-predictor-64d54fcc88 from 0 to 1
kserve-ci-e2e-test | multus | isvc-logger-predictor-64d54fcc88-5j2l7 | AddedInterface | | Add eth0 [10.133.0.36/23] from ovn-kubernetes
kserve-ci-e2e-test | kubelet | isvc-logger-predictor-64d54fcc88-5j2l7 | Pulled | | Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:0cd196d4c53b891914316f18ab5cfa9f85258e057f3687e65332c70bf642d22d" already present on machine
kserve-ci-e2e-test | kubelet | isvc-logger-predictor-64d54fcc88-5j2l7 | Created | | Created container: storage-initializer
kserve-ci-e2e-test | kubelet | isvc-logger-predictor-64d54fcc88-5j2l7 | Started | | Started container storage-initializer
kserve-ci-e2e-test | kubelet | isvc-logger-predictor-64d54fcc88-5j2l7 | Pulled | | Container image "quay.io/opendatahub/sklearn-serving-runtime:odh-pr-1293" already present on machine
kserve-ci-e2e-test | kubelet | isvc-logger-predictor-64d54fcc88-5j2l7 | Created | | Created container: agent
kserve-ci-e2e-test | kubelet | isvc-logger-predictor-64d54fcc88-5j2l7 | Created | | Created container: kserve-container
kserve-ci-e2e-test | kubelet | isvc-logger-predictor-64d54fcc88-5j2l7 | Started | | Started container kserve-container
kserve-ci-e2e-test | kubelet | isvc-logger-predictor-64d54fcc88-5j2l7 | Pulled | | Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine
kserve-ci-e2e-test | kubelet | isvc-logger-predictor-64d54fcc88-5j2l7 | Created | | Created container: kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-logger-predictor-64d54fcc88-5j2l7 | Started | | Started container kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-logger-predictor-64d54fcc88-5j2l7 | Started | | Started container agent
kserve-ci-e2e-test | kubelet | isvc-logger-predictor-64d54fcc88-5j2l7 | Pulled | | Container image "quay.io/opendatahub/kserve-agent@sha256:63b01855d4f4d9cc9a109698dbd7d7889d3c2d40a32577f5946d367ab2c8c321" already present on machine
kserve-ci-e2e-test | horizontal-pod-autoscaler | message-dumper-predictor | FailedComputeMetricsReplicas | | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
kserve-ci-e2e-test | horizontal-pod-autoscaler | message-dumper-predictor | FailedGetResourceMetric | | failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-logger-predictor | FailedGetResourceMetric | x3 | failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-logger-predictor | FailedComputeMetricsReplicas | x3 | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
kserve-ci-e2e-test | v1beta1Controllers | isvc-logger | InferenceServiceReady | | InferenceService [isvc-logger] is Ready
kserve-ci-e2e-test | v1beta1Controllers | isvc-logger | VirtualServiceCRDNotFound | x13 | Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-logger-predictor | FailedGetResourceMetric | x2 | failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-logger-predictor | FailedComputeMetricsReplicas | x2 | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
kserve-ci-e2e-test | deployment-controller | isvc-lightgbm-predictor | ScalingReplicaSet | | Scaled up replica set isvc-lightgbm-predictor-bdf964bd from 0 to 1
kserve-ci-e2e-test | v1beta1Controllers | isvc-lightgbm | UpdateFailed | | Failed to update status for InferenceService "isvc-lightgbm": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-lightgbm": the object has been modified; please apply your changes to the latest version and try again
kserve-ci-e2e-test | kubelet | isvc-logger-predictor-64d54fcc88-5j2l7 | Killing | | Stopping container kserve-container
kserve-ci-e2e-test | kubelet | isvc-logger-predictor-64d54fcc88-5j2l7 | Killing | | Stopping container agent
kserve-ci-e2e-test | replicaset-controller | isvc-lightgbm-predictor-bdf964bd | SuccessfulCreate | | Created pod: isvc-lightgbm-predictor-bdf964bd-c5x67
kserve-ci-e2e-test | kubelet | message-dumper-predictor-c7d86bcbd-hhjq9 | Killing | | Stopping container kserve-container
kserve-ci-e2e-test | kubelet | message-dumper-predictor-c7d86bcbd-hhjq9 | Killing | | Stopping container kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-logger-predictor-64d54fcc88-5j2l7 | Killing | | Stopping container kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-lightgbm-predictor-bdf964bd-c5x67 | Started | | Started container storage-initializer
kserve-ci-e2e-test | multus | isvc-lightgbm-predictor-bdf964bd-c5x67 | AddedInterface | | Add eth0 [10.133.0.37/23] from ovn-kubernetes
kserve-ci-e2e-test | kubelet | isvc-lightgbm-predictor-bdf964bd-c5x67 | Pulled | | Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:0cd196d4c53b891914316f18ab5cfa9f85258e057f3687e65332c70bf642d22d" already present on machine
kserve-ci-e2e-test | kubelet | isvc-lightgbm-predictor-bdf964bd-c5x67 | Created | | Created container: storage-initializer
kserve-ci-e2e-test | kubelet | isvc-lightgbm-predictor-bdf964bd-c5x67 | Pulling | | Pulling image "kserve/lgbserver:latest"
kserve-ci-e2e-test | kubelet | isvc-lightgbm-predictor-bdf964bd-c5x67 | Pulled | | Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine
kserve-ci-e2e-test | kubelet | isvc-lightgbm-predictor-bdf964bd-c5x67 | Pulled | | Successfully pulled image "kserve/lgbserver:latest" in 6.894s (6.894s including waiting). Image size: 606108943 bytes.
kserve-ci-e2e-test | kubelet | isvc-lightgbm-predictor-bdf964bd-c5x67 | Created | | Created container: kserve-container
kserve-ci-e2e-test | kubelet | isvc-lightgbm-predictor-bdf964bd-c5x67 | Started | | Started container kserve-container
kserve-ci-e2e-test | kubelet | isvc-lightgbm-predictor-bdf964bd-c5x67 | Started | | Started container kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-lightgbm-predictor-bdf964bd-c5x67 | Created | | Created container: kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-logger-predictor-64d54fcc88-5j2l7 | Unhealthy | x4 | Readiness probe failed: Get "https://10.133.0.36:8643/healthz": dial tcp 10.133.0.36:8643: connect: connection refused
kserve-ci-e2e-test | kubelet | isvc-logger-predictor-64d54fcc88-5j2l7 | Unhealthy | x10 | Readiness probe failed: dial tcp 10.133.0.36:8080: connect: connection refused
kserve-ci-e2e-test | kubelet | isvc-logger-predictor-64d54fcc88-5j2l7 | Unhealthy | x10 | Readiness probe failed: HTTP probe failed with statuscode: 503
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-lightgbm-predictor | FailedComputeMetricsReplicas | x2 | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-lightgbm-predictor | FailedGetResourceMetric | x2 | failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
kserve-ci-e2e-test | kubelet | isvc-lightgbm-predictor-bdf964bd-c5x67 | Unhealthy | x9 | Readiness probe failed: dial tcp 10.133.0.37:8080: connect: connection refused
kserve-ci-e2e-test | v1beta1Controllers | isvc-lightgbm | InferenceServiceReady | | InferenceService [isvc-lightgbm] is Ready
kserve-ci-e2e-test | v1beta1Controllers | isvc-lightgbm | VirtualServiceCRDNotFound | x11 | Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-lightgbm-predictor | FailedComputeMetricsReplicas | x4 | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
kserve-ci-e2e-test | kubelet | isvc-lightgbm-predictor-bdf964bd-c5x67 | Killing | | Stopping container kube-rbac-proxy

kserve-ci-e2e-test

deployment-controller

isvc-lightgbm-runtime-predictor

ScalingReplicaSet

Scaled up replica set isvc-lightgbm-runtime-predictor-749c4f6d58 from 0 to 1

kserve-ci-e2e-test

kubelet

isvc-lightgbm-predictor-bdf964bd-c5x67

Killing

Stopping container kserve-container
(x4)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-lightgbm-predictor

FailedGetResourceMetric

failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)

kserve-ci-e2e-test

kubelet

isvc-lightgbm-runtime-predictor-749c4f6d58-2vn4t

Pulled

Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:0cd196d4c53b891914316f18ab5cfa9f85258e057f3687e65332c70bf642d22d" already present on machine

kserve-ci-e2e-test

kubelet

isvc-lightgbm-runtime-predictor-749c4f6d58-2vn4t

Started

Started container storage-initializer

kserve-ci-e2e-test

kubelet

isvc-lightgbm-runtime-predictor-749c4f6d58-2vn4t

Created

Created container: storage-initializer

kserve-ci-e2e-test

replicaset-controller

isvc-lightgbm-runtime-predictor-749c4f6d58

SuccessfulCreate

Created pod: isvc-lightgbm-runtime-predictor-749c4f6d58-2vn4t

kserve-ci-e2e-test

v1beta1Controllers

isvc-lightgbm-runtime

InternalError

fails to update InferenceService status: Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-lightgbm-runtime": the object has been modified; please apply your changes to the latest version and try again

kserve-ci-e2e-test

multus

isvc-lightgbm-runtime-predictor-749c4f6d58-2vn4t

AddedInterface

Add eth0 [10.133.0.38/23] from ovn-kubernetes
(x2)

kserve-ci-e2e-test

v1beta1Controllers

isvc-lightgbm-runtime

UpdateFailed

Failed to update status for InferenceService "isvc-lightgbm-runtime": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-lightgbm-runtime": the object has been modified; please apply your changes to the latest version and try again

kserve-ci-e2e-test

kubelet

isvc-lightgbm-predictor-bdf964bd-c5x67

Unhealthy

Readiness probe failed: Get "https://10.133.0.37:8643/healthz": dial tcp 10.133.0.37:8643: connect: connection refused

kserve-ci-e2e-test

kubelet

isvc-lightgbm-runtime-predictor-749c4f6d58-2vn4t

Pulled

Container image "kserve/lgbserver:latest" already present on machine

kserve-ci-e2e-test

kubelet

isvc-lightgbm-runtime-predictor-749c4f6d58-2vn4t

Created

Created container: kserve-container

kserve-ci-e2e-test

kubelet

isvc-lightgbm-runtime-predictor-749c4f6d58-2vn4t

Started

Started container kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-lightgbm-runtime-predictor-749c4f6d58-2vn4t

Created

Created container: kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-lightgbm-runtime-predictor-749c4f6d58-2vn4t

Pulled

Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine

kserve-ci-e2e-test

kubelet

isvc-lightgbm-runtime-predictor-749c4f6d58-2vn4t

Started

Started container kserve-container

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-lightgbm-runtime-predictor

FailedGetResourceMetric

failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-lightgbm-runtime-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x13)

kserve-ci-e2e-test

v1beta1Controllers

isvc-lightgbm-runtime

VirtualServiceCRDNotFound

Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.

kserve-ci-e2e-test

v1beta1Controllers

isvc-lightgbm-runtime

InferenceServiceReady

InferenceService [isvc-lightgbm-runtime] is Ready

kserve-ci-e2e-test

v1beta1Controllers

isvc-lightgbm-v2-runtime

InternalError

fails to update InferenceService status: Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-lightgbm-v2-runtime": the object has been modified; please apply your changes to the latest version and try again

kserve-ci-e2e-test

kubelet

isvc-lightgbm-v2-runtime-predictor-8765c9667-46kbf

Created

Created container: storage-initializer

kserve-ci-e2e-test

kubelet

isvc-lightgbm-v2-runtime-predictor-8765c9667-46kbf

Pulled

Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:0cd196d4c53b891914316f18ab5cfa9f85258e057f3687e65332c70bf642d22d" already present on machine

kserve-ci-e2e-test

kubelet

isvc-lightgbm-runtime-predictor-749c4f6d58-2vn4t

Killing

Stopping container kserve-container

kserve-ci-e2e-test

kubelet

isvc-lightgbm-runtime-predictor-749c4f6d58-2vn4t

Killing

Stopping container kube-rbac-proxy
(x2)

kserve-ci-e2e-test

v1beta1Controllers

isvc-lightgbm-v2-runtime

UpdateFailed

Failed to update status for InferenceService "isvc-lightgbm-v2-runtime": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-lightgbm-v2-runtime": the object has been modified; please apply your changes to the latest version and try again
(x5)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-lightgbm-runtime-predictor

FailedGetResourceMetric

failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
(x5)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-lightgbm-runtime-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)

kserve-ci-e2e-test

multus

isvc-lightgbm-v2-runtime-predictor-8765c9667-46kbf

AddedInterface

Add eth0 [10.133.0.39/23] from ovn-kubernetes

kserve-ci-e2e-test

deployment-controller

isvc-lightgbm-v2-runtime-predictor

ScalingReplicaSet

Scaled up replica set isvc-lightgbm-v2-runtime-predictor-8765c9667 from 0 to 1

kserve-ci-e2e-test

replicaset-controller

isvc-lightgbm-v2-runtime-predictor-8765c9667

SuccessfulCreate

Created pod: isvc-lightgbm-v2-runtime-predictor-8765c9667-46kbf

kserve-ci-e2e-test

kubelet

isvc-lightgbm-v2-runtime-predictor-8765c9667-46kbf

Started

Started container storage-initializer
(x10)

kserve-ci-e2e-test

kubelet

isvc-lightgbm-runtime-predictor-749c4f6d58-2vn4t

Unhealthy

Readiness probe failed: dial tcp 10.133.0.38:8080: connect: connection refused

kserve-ci-e2e-test

kubelet

isvc-lightgbm-runtime-predictor-749c4f6d58-2vn4t

Unhealthy

Readiness probe failed: Get "https://10.133.0.38:8643/healthz": dial tcp 10.133.0.38:8643: connect: connection refused

kserve-ci-e2e-test

kubelet

isvc-lightgbm-v2-runtime-predictor-8765c9667-46kbf

Pulling

Pulling image "docker.io/seldonio/mlserver:1.7.1"

kserve-ci-e2e-test

kubelet

isvc-lightgbm-v2-runtime-predictor-8765c9667-46kbf

Created

Created container: kserve-container

kserve-ci-e2e-test

kubelet

isvc-lightgbm-v2-runtime-predictor-8765c9667-46kbf

Pulled

Successfully pulled image "docker.io/seldonio/mlserver:1.7.1" in 2m13.389s (2m13.389s including waiting). Image size: 10890461297 bytes.

kserve-ci-e2e-test

kubelet

isvc-lightgbm-v2-runtime-predictor-8765c9667-46kbf

Created

Created container: kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-lightgbm-v2-runtime-predictor-8765c9667-46kbf

Started

Started container kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-lightgbm-v2-runtime-predictor-8765c9667-46kbf

Started

Started container kserve-container

kserve-ci-e2e-test

kubelet

isvc-lightgbm-v2-runtime-predictor-8765c9667-46kbf

Pulled

Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine
(x11)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-lightgbm-v2-runtime-predictor

FailedGetResourceMetric

failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x11)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-lightgbm-v2-runtime-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API

kserve-ci-e2e-test

v1beta1Controllers

isvc-lightgbm-v2-runtime

InferenceServiceReady

InferenceService [isvc-lightgbm-v2-runtime] is Ready
(x12)

kserve-ci-e2e-test

v1beta1Controllers

isvc-lightgbm-v2-runtime

VirtualServiceCRDNotFound

Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.

kserve-ci-e2e-test

deployment-controller

isvc-lightgbm-v2-kserve-predictor

ScalingReplicaSet

Scaled up replica set isvc-lightgbm-v2-kserve-predictor-559bf6989 from 0 to 1

kserve-ci-e2e-test

v1beta1Controllers

isvc-lightgbm-v2-kserve

UpdateFailed

Failed to update status for InferenceService "isvc-lightgbm-v2-kserve": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-lightgbm-v2-kserve": the object has been modified; please apply your changes to the latest version and try again

kserve-ci-e2e-test

kubelet

isvc-lightgbm-v2-kserve-predictor-559bf6989-t9xws

FailedMount

MountVolume.SetUp failed for volume "proxy-tls" : secret "isvc-lightgbm-v2-kserve-predictor-serving-cert" not found

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-lightgbm-v2-runtime-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)

kserve-ci-e2e-test

replicaset-controller

isvc-lightgbm-v2-kserve-predictor-559bf6989

SuccessfulCreate

Created pod: isvc-lightgbm-v2-kserve-predictor-559bf6989-t9xws

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-lightgbm-v2-runtime-predictor

FailedGetResourceMetric

failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)

kserve-ci-e2e-test

kubelet

isvc-lightgbm-v2-runtime-predictor-8765c9667-46kbf

Killing

Stopping container kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-lightgbm-v2-runtime-predictor-8765c9667-46kbf

Killing

Stopping container kserve-container

kserve-ci-e2e-test

multus

isvc-lightgbm-v2-kserve-predictor-559bf6989-t9xws

AddedInterface

Add eth0 [10.133.0.40/23] from ovn-kubernetes

kserve-ci-e2e-test

kubelet

isvc-lightgbm-v2-kserve-predictor-559bf6989-t9xws

Pulled

Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:0cd196d4c53b891914316f18ab5cfa9f85258e057f3687e65332c70bf642d22d" already present on machine

kserve-ci-e2e-test

kubelet

isvc-lightgbm-v2-kserve-predictor-559bf6989-t9xws

Created

Created container: storage-initializer

kserve-ci-e2e-test

kubelet

isvc-lightgbm-v2-kserve-predictor-559bf6989-t9xws

Started

Started container storage-initializer

kserve-ci-e2e-test

kubelet

isvc-lightgbm-v2-kserve-predictor-559bf6989-t9xws

Pulled

Container image "kserve/lgbserver:latest" already present on machine

kserve-ci-e2e-test

kubelet

isvc-lightgbm-v2-kserve-predictor-559bf6989-t9xws

Started

Started container kserve-container

kserve-ci-e2e-test

kubelet

isvc-lightgbm-v2-kserve-predictor-559bf6989-t9xws

Created

Created container: kserve-container

kserve-ci-e2e-test

kubelet

isvc-lightgbm-v2-kserve-predictor-559bf6989-t9xws

Pulled

Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine

kserve-ci-e2e-test

kubelet

isvc-lightgbm-v2-kserve-predictor-559bf6989-t9xws

Created

Created container: kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-lightgbm-v2-kserve-predictor-559bf6989-t9xws

Started

Started container kube-rbac-proxy
(x3)

kserve-ci-e2e-test

kubelet

isvc-lightgbm-v2-kserve-predictor-559bf6989-t9xws

Unhealthy

Readiness probe failed: dial tcp 10.133.0.40:8080: connect: connection refused
(x13)

kserve-ci-e2e-test

v1beta1Controllers

isvc-lightgbm-v2-kserve

VirtualServiceCRDNotFound

Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.

kserve-ci-e2e-test

v1beta1Controllers

isvc-lightgbm-v2-kserve

InferenceServiceReady

InferenceService [isvc-lightgbm-v2-kserve] is Ready
(x2)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-lightgbm-v2-kserve-predictor

FailedGetResourceMetric

failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API

kserve-ci-e2e-test

v1beta1Controllers

isvc-mlflow-v2-runtime

UpdateFailed

Failed to update status for InferenceService "isvc-mlflow-v2-runtime": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-mlflow-v2-runtime": the object has been modified; please apply your changes to the latest version and try again
(x2)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-lightgbm-v2-kserve-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API

kserve-ci-e2e-test

deployment-controller

isvc-mlflow-v2-runtime-predictor

ScalingReplicaSet

Scaled up replica set isvc-mlflow-v2-runtime-predictor-5fdb47d546 from 0 to 1

kserve-ci-e2e-test

kubelet

isvc-lightgbm-v2-kserve-predictor-559bf6989-t9xws

Killing

Stopping container kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-lightgbm-v2-kserve-predictor-559bf6989-t9xws

Killing

Stopping container kserve-container

kserve-ci-e2e-test

replicaset-controller

isvc-mlflow-v2-runtime-predictor-5fdb47d546

SuccessfulCreate

Created pod: isvc-mlflow-v2-runtime-predictor-5fdb47d546-znqb6

kserve-ci-e2e-test

multus

isvc-mlflow-v2-runtime-predictor-5fdb47d546-znqb6

AddedInterface

Add eth0 [10.133.0.41/23] from ovn-kubernetes

kserve-ci-e2e-test

kubelet

isvc-mlflow-v2-runtime-predictor-5fdb47d546-znqb6

Created

Created container: storage-initializer

kserve-ci-e2e-test

kubelet

isvc-mlflow-v2-runtime-predictor-5fdb47d546-znqb6

Started

Started container storage-initializer

kserve-ci-e2e-test

kubelet

isvc-mlflow-v2-runtime-predictor-5fdb47d546-znqb6

Pulled

Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:0cd196d4c53b891914316f18ab5cfa9f85258e057f3687e65332c70bf642d22d" already present on machine

kserve-ci-e2e-test

kubelet

isvc-mlflow-v2-runtime-predictor-5fdb47d546-znqb6

Created

Created container: kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-mlflow-v2-runtime-predictor-5fdb47d546-znqb6

Started

Started container kserve-container

kserve-ci-e2e-test

kubelet

isvc-mlflow-v2-runtime-predictor-5fdb47d546-znqb6

Pulled

Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine

kserve-ci-e2e-test

kubelet

isvc-mlflow-v2-runtime-predictor-5fdb47d546-znqb6

Created

Created container: kserve-container

kserve-ci-e2e-test

kubelet

isvc-mlflow-v2-runtime-predictor-5fdb47d546-znqb6

Started

Started container kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-mlflow-v2-runtime-predictor-5fdb47d546-znqb6

Pulled

Container image "docker.io/seldonio/mlserver:1.7.1" already present on machine

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-mlflow-v2-runtime-predictor

FailedGetResourceMetric

failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-mlflow-v2-runtime-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x12)

kserve-ci-e2e-test

v1beta1Controllers

isvc-mlflow-v2-runtime

VirtualServiceCRDNotFound

Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.

kserve-ci-e2e-test

v1beta1Controllers

isvc-mlflow-v2-runtime

InferenceServiceReady

InferenceService [isvc-mlflow-v2-runtime] is Ready
(x2)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-mlflow-v2-runtime-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
(x2)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-mlflow-v2-runtime-predictor

FailedGetResourceMetric

failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)

kserve-ci-e2e-test

multus

isvc-sklearn-mcp-predictor-5fdf4889b4-5rclr

AddedInterface

Add eth0 [10.133.0.42/23] from ovn-kubernetes

kserve-ci-e2e-test

kubelet

isvc-mlflow-v2-runtime-predictor-5fdb47d546-znqb6

Killing

Stopping container kserve-container

kserve-ci-e2e-test

v1beta1Controllers

isvc-sklearn-mcp

UpdateFailed

Failed to update status for InferenceService "isvc-sklearn-mcp": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-sklearn-mcp": the object has been modified; please apply your changes to the latest version and try again

kserve-ci-e2e-test

deployment-controller

isvc-sklearn-mcp-predictor

ScalingReplicaSet

Scaled up replica set isvc-sklearn-mcp-predictor-5fdf4889b4 from 0 to 1

kserve-ci-e2e-test

replicaset-controller

isvc-sklearn-mcp-predictor-5fdf4889b4

SuccessfulCreate

Created pod: isvc-sklearn-mcp-predictor-5fdf4889b4-5rclr

kserve-ci-e2e-test

kubelet

isvc-sklearn-mcp-predictor-5fdf4889b4-5rclr

Pulled

Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:0cd196d4c53b891914316f18ab5cfa9f85258e057f3687e65332c70bf642d22d" already present on machine

kserve-ci-e2e-test

kubelet

isvc-mlflow-v2-runtime-predictor-5fdb47d546-znqb6

Killing

Stopping container kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-sklearn-mcp-predictor-5fdf4889b4-5rclr

Created

Created container: storage-initializer

kserve-ci-e2e-test

kubelet

isvc-sklearn-mcp-predictor-5fdf4889b4-5rclr

Started

Started container storage-initializer

kserve-ci-e2e-test

kubelet

isvc-sklearn-mcp-predictor-5fdf4889b4-5rclr

Pulling

Pulling image "quay.io/opendatahub/kserve-agent:latest"

kserve-ci-e2e-test

kubelet

isvc-sklearn-mcp-predictor-5fdf4889b4-5rclr

Pulled

Container image "quay.io/opendatahub/sklearn-serving-runtime:odh-pr-1293" already present on machine

kserve-ci-e2e-test

kubelet

isvc-sklearn-mcp-predictor-5fdf4889b4-5rclr

Created

Created container: kserve-container

kserve-ci-e2e-test

kubelet

isvc-sklearn-mcp-predictor-5fdf4889b4-5rclr

Started

Started container kserve-container

kserve-ci-e2e-test

kubelet

isvc-sklearn-mcp-predictor-5fdf4889b4-5rclr

Created

Created container: kserve-agent

kserve-ci-e2e-test

kubelet

isvc-sklearn-mcp-predictor-5fdf4889b4-5rclr

Pulled

Successfully pulled image "quay.io/opendatahub/kserve-agent:latest" in 2.474s (2.474s including waiting). Image size: 237663782 bytes.

kserve-ci-e2e-test

kubelet

isvc-sklearn-mcp-predictor-5fdf4889b4-5rclr

Started

Started container kserve-agent

kserve-ci-e2e-test

kubelet

isvc-sklearn-mcp-predictor-5fdf4889b4-5rclr

Pulled

Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine

kserve-ci-e2e-test

kubelet

isvc-sklearn-mcp-predictor-5fdf4889b4-5rclr

Created

Created container: kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-sklearn-mcp-predictor-5fdf4889b4-5rclr

Started

Started container kube-rbac-proxy

kserve-ci-e2e-test

v1beta1Controllers

isvc-sklearn-mcp

InferenceServiceReady

InferenceService [isvc-sklearn-mcp] is Ready
(x13)

kserve-ci-e2e-test

v1beta1Controllers

isvc-sklearn-mcp

VirtualServiceCRDNotFound

Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.

kserve-ci-e2e-test

kubelet

isvc-sklearn-mcp-predictor-5fdf4889b4-5rclr

Killing

Stopping container kserve-container

kserve-ci-e2e-test

replicaset-controller

isvc-paddle-predictor-6b8b7cfb4b

SuccessfulCreate

Created pod: isvc-paddle-predictor-6b8b7cfb4b-2bnlb

kserve-ci-e2e-test

kubelet

isvc-sklearn-mcp-predictor-5fdf4889b4-5rclr

Killing

Stopping container kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-sklearn-mcp-predictor-5fdf4889b4-5rclr

Killing

Stopping container kserve-agent

kserve-ci-e2e-test

v1beta1Controllers

isvc-paddle

UpdateFailed

Failed to update status for InferenceService "isvc-paddle": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-paddle": the object has been modified; please apply your changes to the latest version and try again

kserve-ci-e2e-test

kubelet

isvc-paddle-predictor-6b8b7cfb4b-2bnlb

FailedMount

MountVolume.SetUp failed for volume "proxy-tls" : secret "isvc-paddle-predictor-serving-cert" not found

kserve-ci-e2e-test

deployment-controller

isvc-paddle-predictor

ScalingReplicaSet

Scaled up replica set isvc-paddle-predictor-6b8b7cfb4b from 0 to 1

kserve-ci-e2e-test

multus

isvc-paddle-predictor-6b8b7cfb4b-2bnlb

AddedInterface

Add eth0 [10.134.0.18/23] from ovn-kubernetes

kserve-ci-e2e-test

kubelet

isvc-paddle-predictor-6b8b7cfb4b-2bnlb

Pulling

Pulling image "quay.io/opendatahub/kserve-storage-initializer@sha256:0cd196d4c53b891914316f18ab5cfa9f85258e057f3687e65332c70bf642d22d"

kserve-ci-e2e-test

kubelet

isvc-paddle-predictor-6b8b7cfb4b-2bnlb

Created

Created container: storage-initializer

kserve-ci-e2e-test

kubelet

isvc-paddle-predictor-6b8b7cfb4b-2bnlb

Started

Started container storage-initializer

kserve-ci-e2e-test

kubelet

isvc-paddle-predictor-6b8b7cfb4b-2bnlb

Pulled

Successfully pulled image "quay.io/opendatahub/kserve-storage-initializer@sha256:0cd196d4c53b891914316f18ab5cfa9f85258e057f3687e65332c70bf642d22d" in 3.596s (3.596s including waiting). Image size: 301288360 bytes.

kserve-ci-e2e-test

kubelet

isvc-paddle-predictor-6b8b7cfb4b-2bnlb

Pulling

Pulling image "kserve/paddleserver:latest"

kserve-ci-e2e-test

kubelet

isvc-paddle-predictor-6b8b7cfb4b-2bnlb

Started

Started container kserve-container

kserve-ci-e2e-test

kubelet

isvc-paddle-predictor-6b8b7cfb4b-2bnlb

Pulled

Successfully pulled image "kserve/paddleserver:latest" in 11.142s (11.142s including waiting). Image size: 1162639611 bytes.

kserve-ci-e2e-test

kubelet

isvc-paddle-predictor-6b8b7cfb4b-2bnlb

Created

Created container: kserve-container

kserve-ci-e2e-test

kubelet

isvc-paddle-predictor-6b8b7cfb4b-2bnlb

Pulling

Pulling image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3"
(x3)

kserve-ci-e2e-test

kubelet

isvc-sklearn-mcp-predictor-5fdf4889b4-5rclr

Unhealthy

Readiness probe failed: Get "http://10.133.0.42:8080/v1/models/isvc-sklearn-mcp": dial tcp 10.133.0.42:8080: connect: connection refused

kserve-ci-e2e-test

kubelet

isvc-paddle-predictor-6b8b7cfb4b-2bnlb

Pulled

Successfully pulled image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" in 3.074s (3.074s including waiting). Image size: 211946088 bytes.

kserve-ci-e2e-test

kubelet

isvc-paddle-predictor-6b8b7cfb4b-2bnlb

Created

Created container: kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-paddle-predictor-6b8b7cfb4b-2bnlb

Started

Started container kube-rbac-proxy
(x6)

kserve-ci-e2e-test

kubelet

isvc-sklearn-mcp-predictor-5fdf4889b4-5rclr

Unhealthy

Readiness probe failed: Get "https://10.133.0.42:8643/healthz": dial tcp 10.133.0.42:8643: connect: connection refused
(x3)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-paddle-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x3)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-paddle-predictor

FailedGetResourceMetric

failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API

kserve-ci-e2e-test

v1beta1Controllers

isvc-paddle

InferenceServiceReady

InferenceService [isvc-paddle] is Ready
(x13)

kserve-ci-e2e-test

v1beta1Controllers

isvc-paddle

VirtualServiceCRDNotFound

Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.
(x2)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-paddle-predictor

FailedGetResourceMetric

failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
(x2)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-paddle-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)

kserve-ci-e2e-test

deployment-controller

isvc-paddle-runtime-predictor

ScalingReplicaSet

Scaled up replica set isvc-paddle-runtime-predictor-7f4d4f9dc8 from 0 to 1

kserve-ci-e2e-test

replicaset-controller

isvc-paddle-runtime-predictor-7f4d4f9dc8

SuccessfulCreate

Created pod: isvc-paddle-runtime-predictor-7f4d4f9dc8-75r7n

kserve-ci-e2e-test

kubelet

isvc-paddle-predictor-6b8b7cfb4b-2bnlb

Killing

Stopping container kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-paddle-predictor-6b8b7cfb4b-2bnlb

Killing

Stopping container kserve-container

kserve-ci-e2e-test

v1beta1Controllers

isvc-paddle-runtime

UpdateFailed

Failed to update status for InferenceService "isvc-paddle-runtime": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-paddle-runtime": the object has been modified; please apply your changes to the latest version and try again

kserve-ci-e2e-test

kubelet

isvc-paddle-predictor-6b8b7cfb4b-2bnlb

Unhealthy

Readiness probe failed: Get "https://10.134.0.18:8643/healthz": dial tcp 10.134.0.18:8643: connect: connection refused
(x7)

kserve-ci-e2e-test

kubelet

isvc-paddle-predictor-6b8b7cfb4b-2bnlb

Unhealthy

Readiness probe failed: dial tcp 10.134.0.18:8080: connect: connection refused

kserve-ci-e2e-test

multus

isvc-paddle-runtime-predictor-7f4d4f9dc8-75r7n

AddedInterface

Add eth0 [10.133.0.43/23] from ovn-kubernetes

kserve-ci-e2e-test

kubelet

isvc-paddle-runtime-predictor-7f4d4f9dc8-75r7n

Pulled

Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:0cd196d4c53b891914316f18ab5cfa9f85258e057f3687e65332c70bf642d22d" already present on machine

kserve-ci-e2e-test

kubelet

isvc-paddle-runtime-predictor-7f4d4f9dc8-75r7n

Created

Created container: storage-initializer

kserve-ci-e2e-test

kubelet

isvc-paddle-runtime-predictor-7f4d4f9dc8-75r7n

Started

Started container storage-initializer

kserve-ci-e2e-test

kubelet

isvc-paddle-runtime-predictor-7f4d4f9dc8-75r7n

Pulling

Pulling image "kserve/paddleserver:latest"

kserve-ci-e2e-test

kubelet

isvc-paddle-runtime-predictor-7f4d4f9dc8-75r7n

Pulled

Successfully pulled image "kserve/paddleserver:latest" in 10.526s (10.526s including waiting). Image size: 1162639611 bytes.

kserve-ci-e2e-test

kubelet

isvc-paddle-runtime-predictor-7f4d4f9dc8-75r7n

Created

Created container: kserve-container

kserve-ci-e2e-test | kubelet | isvc-paddle-runtime-predictor-7f4d4f9dc8-75r7n | Started | Started container kserve-container
kserve-ci-e2e-test | kubelet | isvc-paddle-runtime-predictor-7f4d4f9dc8-75r7n | Pulled | Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine
kserve-ci-e2e-test | kubelet | isvc-paddle-runtime-predictor-7f4d4f9dc8-75r7n | Created | Created container: kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-paddle-runtime-predictor-7f4d4f9dc8-75r7n | Started | Started container kube-rbac-proxy
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-paddle-runtime-predictor | FailedGetResourceMetric (x4) | failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-paddle-runtime-predictor | FailedComputeMetricsReplicas (x4) | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
kserve-ci-e2e-test | v1beta1Controllers | isvc-paddle-runtime | InferenceServiceReady | InferenceService [isvc-paddle-runtime] is Ready
kserve-ci-e2e-test | v1beta1Controllers | isvc-paddle-runtime | VirtualServiceCRDNotFound (x12) | Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.
kserve-ci-e2e-test | replicaset-controller | isvc-paddle-v2-kserve-predictor-7dbd59854 | SuccessfulCreate | Created pod: isvc-paddle-v2-kserve-predictor-7dbd59854-lvxwn
kserve-ci-e2e-test | kubelet | isvc-paddle-v2-kserve-predictor-7dbd59854-lvxwn | FailedMount | MountVolume.SetUp failed for volume "proxy-tls" : secret "isvc-paddle-v2-kserve-predictor-serving-cert" not found
kserve-ci-e2e-test | kubelet | isvc-paddle-runtime-predictor-7f4d4f9dc8-75r7n | Unhealthy | Readiness probe failed: Get "https://10.133.0.43:8643/healthz": dial tcp 10.133.0.43:8643: connect: connection refused
kserve-ci-e2e-test | kubelet | isvc-paddle-runtime-predictor-7f4d4f9dc8-75r7n | Unhealthy (x7) | Readiness probe failed: dial tcp 10.133.0.43:8080: connect: connection refused
kserve-ci-e2e-test | v1beta1Controllers | isvc-paddle-v2-kserve | UpdateFailed | Failed to update status for InferenceService "isvc-paddle-v2-kserve": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-paddle-v2-kserve": the object has been modified; please apply your changes to the latest version and try again
kserve-ci-e2e-test | kubelet | isvc-paddle-runtime-predictor-7f4d4f9dc8-75r7n | Killing | Stopping container kserve-container
kserve-ci-e2e-test | kubelet | isvc-paddle-runtime-predictor-7f4d4f9dc8-75r7n | Killing | Stopping container kube-rbac-proxy
kserve-ci-e2e-test | deployment-controller | isvc-paddle-v2-kserve-predictor | ScalingReplicaSet | Scaled up replica set isvc-paddle-v2-kserve-predictor-7dbd59854 from 0 to 1

kserve-ci-e2e-test | kubelet | isvc-paddle-v2-kserve-predictor-7dbd59854-lvxwn | Pulled | Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:0cd196d4c53b891914316f18ab5cfa9f85258e057f3687e65332c70bf642d22d" already present on machine
kserve-ci-e2e-test | kubelet | isvc-paddle-v2-kserve-predictor-7dbd59854-lvxwn | Created | Created container: storage-initializer
kserve-ci-e2e-test | kubelet | isvc-paddle-v2-kserve-predictor-7dbd59854-lvxwn | Started | Started container storage-initializer
kserve-ci-e2e-test | multus | isvc-paddle-v2-kserve-predictor-7dbd59854-lvxwn | AddedInterface | Add eth0 [10.134.0.19/23] from ovn-kubernetes
kserve-ci-e2e-test | kubelet | isvc-paddle-v2-kserve-predictor-7dbd59854-lvxwn | Pulled | Container image "kserve/paddleserver:latest" already present on machine
kserve-ci-e2e-test | kubelet | isvc-paddle-v2-kserve-predictor-7dbd59854-lvxwn | Started | Started container kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-paddle-v2-kserve-predictor-7dbd59854-lvxwn | Created | Created container: kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-paddle-v2-kserve-predictor-7dbd59854-lvxwn | Pulled | Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine
kserve-ci-e2e-test | kubelet | isvc-paddle-v2-kserve-predictor-7dbd59854-lvxwn | Started | Started container kserve-container
kserve-ci-e2e-test | kubelet | isvc-paddle-v2-kserve-predictor-7dbd59854-lvxwn | Created | Created container: kserve-container
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-paddle-v2-kserve-predictor | FailedGetResourceMetric (x2) | failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-paddle-v2-kserve-predictor | FailedComputeMetricsReplicas (x2) | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
kserve-ci-e2e-test | v1beta1Controllers | isvc-paddle-v2-kserve | VirtualServiceCRDNotFound (x12) | Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.
kserve-ci-e2e-test | v1beta1Controllers | isvc-paddle-v2-kserve | InferenceServiceReady | InferenceService [isvc-paddle-v2-kserve] is Ready
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-paddle-v2-kserve-predictor | FailedGetResourceMetric (x2) | failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-paddle-v2-kserve-predictor | FailedComputeMetricsReplicas (x2) | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)

kserve-ci-e2e-test | kubelet | isvc-pmml-predictor-8bb578669-rztqz | FailedMount | MountVolume.SetUp failed for volume "proxy-tls" : secret "isvc-pmml-predictor-serving-cert" not found
kserve-ci-e2e-test | deployment-controller | isvc-pmml-predictor | ScalingReplicaSet | Scaled up replica set isvc-pmml-predictor-8bb578669 from 0 to 1
kserve-ci-e2e-test | replicaset-controller | isvc-pmml-predictor-8bb578669 | SuccessfulCreate | Created pod: isvc-pmml-predictor-8bb578669-rztqz
kserve-ci-e2e-test | kubelet | isvc-paddle-v2-kserve-predictor-7dbd59854-lvxwn | Killing | Stopping container kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-paddle-v2-kserve-predictor-7dbd59854-lvxwn | Killing | Stopping container kserve-container
kserve-ci-e2e-test | v1beta1Controllers | isvc-pmml | UpdateFailed | Failed to update status for InferenceService "isvc-pmml": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-pmml": the object has been modified; please apply your changes to the latest version and try again
kserve-ci-e2e-test | multus | isvc-pmml-predictor-8bb578669-rztqz | AddedInterface | Add eth0 [10.134.0.20/23] from ovn-kubernetes
kserve-ci-e2e-test | kubelet | isvc-pmml-predictor-8bb578669-rztqz | Pulled | Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:0cd196d4c53b891914316f18ab5cfa9f85258e057f3687e65332c70bf642d22d" already present on machine
kserve-ci-e2e-test | kubelet | isvc-pmml-predictor-8bb578669-rztqz | Created | Created container: storage-initializer
kserve-ci-e2e-test | kubelet | isvc-pmml-predictor-8bb578669-rztqz | Started | Started container storage-initializer
kserve-ci-e2e-test | kubelet | isvc-paddle-v2-kserve-predictor-7dbd59854-lvxwn | Unhealthy (x7) | Readiness probe failed: dial tcp 10.134.0.19:8080: connect: connection refused
kserve-ci-e2e-test | kubelet | isvc-paddle-v2-kserve-predictor-7dbd59854-lvxwn | Unhealthy | Readiness probe failed: Get "https://10.134.0.19:8643/healthz": dial tcp 10.134.0.19:8643: connect: connection refused
kserve-ci-e2e-test | kubelet | isvc-pmml-predictor-8bb578669-rztqz | Pulling | Pulling image "kserve/pmmlserver:latest"
kserve-ci-e2e-test | kubelet | isvc-pmml-predictor-8bb578669-rztqz | Started | Started container kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-pmml-predictor-8bb578669-rztqz | Pulled | Successfully pulled image "kserve/pmmlserver:latest" in 6.79s (6.79s including waiting). Image size: 800715639 bytes.
kserve-ci-e2e-test | kubelet | isvc-pmml-predictor-8bb578669-rztqz | Created | Created container: kserve-container
kserve-ci-e2e-test | kubelet | isvc-pmml-predictor-8bb578669-rztqz | Started | Started container kserve-container
kserve-ci-e2e-test | kubelet | isvc-pmml-predictor-8bb578669-rztqz | Pulled | Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine
kserve-ci-e2e-test | kubelet | isvc-pmml-predictor-8bb578669-rztqz | Created | Created container: kube-rbac-proxy
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-pmml-predictor | FailedComputeMetricsReplicas (x3) | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-pmml-predictor | FailedGetResourceMetric (x3) | failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
kserve-ci-e2e-test | kubelet | isvc-pmml-predictor-8bb578669-rztqz | Unhealthy (x10) | Readiness probe failed: dial tcp 10.134.0.20:8080: connect: connection refused
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-pmml-predictor | FailedGetResourceMetric (x3) | failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-pmml-predictor | FailedComputeMetricsReplicas (x3) | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
kserve-ci-e2e-test | v1beta1Controllers | isvc-pmml | InferenceServiceReady | InferenceService [isvc-pmml] is Ready
kserve-ci-e2e-test | v1beta1Controllers | isvc-pmml | VirtualServiceCRDNotFound (x12) | Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.

kserve-ci-e2e-test | kubelet | isvc-pmml-predictor-8bb578669-rztqz | Killing | Stopping container kserve-container
kserve-ci-e2e-test | replicaset-controller | isvc-pmml-runtime-predictor-67bc544947 | SuccessfulCreate | Created pod: isvc-pmml-runtime-predictor-67bc544947-8mtsr
kserve-ci-e2e-test | kubelet | isvc-pmml-predictor-8bb578669-rztqz | Killing | Stopping container kube-rbac-proxy
kserve-ci-e2e-test | v1beta1Controllers | isvc-pmml-runtime | UpdateFailed | Failed to update status for InferenceService "isvc-pmml-runtime": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-pmml-runtime": the object has been modified; please apply your changes to the latest version and try again
kserve-ci-e2e-test | deployment-controller | isvc-pmml-runtime-predictor | ScalingReplicaSet | Scaled up replica set isvc-pmml-runtime-predictor-67bc544947 from 0 to 1
kserve-ci-e2e-test | kubelet | isvc-pmml-runtime-predictor-67bc544947-8mtsr | Pulled | Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:0cd196d4c53b891914316f18ab5cfa9f85258e057f3687e65332c70bf642d22d" already present on machine
kserve-ci-e2e-test | kubelet | isvc-pmml-runtime-predictor-67bc544947-8mtsr | Created | Created container: storage-initializer
kserve-ci-e2e-test | kubelet | isvc-pmml-runtime-predictor-67bc544947-8mtsr | Started | Started container storage-initializer
kserve-ci-e2e-test | multus | isvc-pmml-runtime-predictor-67bc544947-8mtsr | AddedInterface | Add eth0 [10.134.0.21/23] from ovn-kubernetes
kserve-ci-e2e-test | kubelet | isvc-pmml-predictor-8bb578669-rztqz | Unhealthy | Readiness probe failed: Get "https://10.134.0.20:8643/healthz": dial tcp 10.134.0.20:8643: connect: connection refused
kserve-ci-e2e-test | kubelet | isvc-pmml-runtime-predictor-67bc544947-8mtsr | Pulled | Container image "kserve/pmmlserver:latest" already present on machine
kserve-ci-e2e-test | kubelet | isvc-pmml-runtime-predictor-67bc544947-8mtsr | Started | Started container kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-pmml-runtime-predictor-67bc544947-8mtsr | Created | Created container: kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-pmml-runtime-predictor-67bc544947-8mtsr | Pulled | Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine
kserve-ci-e2e-test | kubelet | isvc-pmml-runtime-predictor-67bc544947-8mtsr | Started | Started container kserve-container
kserve-ci-e2e-test | kubelet | isvc-pmml-runtime-predictor-67bc544947-8mtsr | Created | Created container: kserve-container
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-pmml-runtime-predictor | FailedComputeMetricsReplicas (x2) | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-pmml-runtime-predictor | FailedGetResourceMetric (x2) | failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-pmml-runtime-predictor | FailedGetResourceMetric (x4) | failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-pmml-runtime-predictor | FailedComputeMetricsReplicas (x4) | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
kserve-ci-e2e-test | v1beta1Controllers | isvc-pmml-runtime | InferenceServiceReady | InferenceService [isvc-pmml-runtime] is Ready
kserve-ci-e2e-test | v1beta1Controllers | isvc-pmml-runtime | VirtualServiceCRDNotFound (x13) | Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.

kserve-ci-e2e-test | deployment-controller | isvc-pmml-v2-kserve-predictor | ScalingReplicaSet | Scaled up replica set isvc-pmml-v2-kserve-predictor-6578f8ffc7 from 0 to 1
kserve-ci-e2e-test | replicaset-controller | isvc-pmml-v2-kserve-predictor-6578f8ffc7 | SuccessfulCreate | Created pod: isvc-pmml-v2-kserve-predictor-6578f8ffc7-6k6cn
kserve-ci-e2e-test | kubelet | isvc-pmml-runtime-predictor-67bc544947-8mtsr | Killing | Stopping container kserve-container
kserve-ci-e2e-test | v1beta1Controllers | isvc-pmml-v2-kserve | UpdateFailed (x2) | Failed to update status for InferenceService "isvc-pmml-v2-kserve": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-pmml-v2-kserve": the object has been modified; please apply your changes to the latest version and try again
kserve-ci-e2e-test | kubelet | isvc-pmml-runtime-predictor-67bc544947-8mtsr | Killing | Stopping container kube-rbac-proxy
kserve-ci-e2e-test | v1beta1Controllers | isvc-pmml-v2-kserve | InternalError | fails to update InferenceService status: Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-pmml-v2-kserve": the object has been modified; please apply your changes to the latest version and try again
kserve-ci-e2e-test | kubelet | isvc-pmml-v2-kserve-predictor-6578f8ffc7-6k6cn | Pulled | Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:0cd196d4c53b891914316f18ab5cfa9f85258e057f3687e65332c70bf642d22d" already present on machine
kserve-ci-e2e-test | kubelet | isvc-pmml-v2-kserve-predictor-6578f8ffc7-6k6cn | Started | Started container storage-initializer
kserve-ci-e2e-test | kubelet | isvc-pmml-v2-kserve-predictor-6578f8ffc7-6k6cn | Created | Created container: storage-initializer
kserve-ci-e2e-test | multus | isvc-pmml-v2-kserve-predictor-6578f8ffc7-6k6cn | AddedInterface | Add eth0 [10.134.0.22/23] from ovn-kubernetes
kserve-ci-e2e-test | kubelet | isvc-pmml-runtime-predictor-67bc544947-8mtsr | Unhealthy (x11) | Readiness probe failed: dial tcp 10.134.0.21:8080: connect: connection refused
kserve-ci-e2e-test | kubelet | isvc-pmml-runtime-predictor-67bc544947-8mtsr | Unhealthy | Readiness probe failed: Get "https://10.134.0.21:8643/healthz": dial tcp 10.134.0.21:8643: connect: connection refused
kserve-ci-e2e-test | kubelet | isvc-pmml-v2-kserve-predictor-6578f8ffc7-6k6cn | Created | Created container: kserve-container
kserve-ci-e2e-test | kubelet | isvc-pmml-v2-kserve-predictor-6578f8ffc7-6k6cn | Created | Created container: kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-pmml-v2-kserve-predictor-6578f8ffc7-6k6cn | Pulled | Container image "kserve/pmmlserver:latest" already present on machine
kserve-ci-e2e-test | kubelet | isvc-pmml-v2-kserve-predictor-6578f8ffc7-6k6cn | Started | Started container kserve-container
kserve-ci-e2e-test | kubelet | isvc-pmml-v2-kserve-predictor-6578f8ffc7-6k6cn | Pulled | Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine
kserve-ci-e2e-test | kubelet | isvc-pmml-v2-kserve-predictor-6578f8ffc7-6k6cn | Started | Started container kube-rbac-proxy
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-pmml-v2-kserve-predictor | FailedComputeMetricsReplicas (x2) | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-pmml-v2-kserve-predictor | FailedGetResourceMetric (x2) | failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
kserve-ci-e2e-test | kubelet | isvc-pmml-v2-kserve-predictor-6578f8ffc7-6k6cn | Unhealthy (x10) | Readiness probe failed: dial tcp 10.134.0.22:8080: connect: connection refused
kserve-ci-e2e-test | v1beta1Controllers | isvc-pmml-v2-kserve | VirtualServiceCRDNotFound (x12) | Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.

kserve-ci-e2e-test | v1beta1Controllers | isvc-pmml-v2-kserve | InferenceServiceReady | InferenceService [isvc-pmml-v2-kserve] is Ready
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-pmml-v2-kserve-predictor | FailedGetResourceMetric (x4) | failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-pmml-v2-kserve-predictor | FailedComputeMetricsReplicas (x4) | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
kserve-ci-e2e-test | kubelet | isvc-pmml-v2-kserve-predictor-6578f8ffc7-6k6cn | Killing | Stopping container kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-pmml-v2-kserve-predictor-6578f8ffc7-6k6cn | Killing | Stopping container kserve-container
kserve-ci-e2e-test | deployment-controller | isvc-primary-9efca4-predictor | ScalingReplicaSet | Scaled up replica set isvc-primary-9efca4-predictor-897f6b668 from 0 to 1
kserve-ci-e2e-test | v1beta1Controllers | isvc-primary-9efca4 | UpdateFailed (x2) | Failed to update status for InferenceService "isvc-primary-9efca4": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-primary-9efca4": the object has been modified; please apply your changes to the latest version and try again
kserve-ci-e2e-test | v1beta1Controllers | isvc-primary-9efca4 | InternalError (x2) | fails to update InferenceService status: Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-primary-9efca4": the object has been modified; please apply your changes to the latest version and try again
kserve-ci-e2e-test | replicaset-controller | isvc-primary-9efca4-predictor-897f6b668 | SuccessfulCreate | Created pod: isvc-primary-9efca4-predictor-897f6b668-4kf2v
kserve-ci-e2e-test | kubelet | isvc-primary-9efca4-predictor-897f6b668-4kf2v | Pulled | Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:0cd196d4c53b891914316f18ab5cfa9f85258e057f3687e65332c70bf642d22d" already present on machine
kserve-ci-e2e-test | multus | isvc-primary-9efca4-predictor-897f6b668-4kf2v | AddedInterface | Add eth0 [10.133.0.44/23] from ovn-kubernetes
kserve-ci-e2e-test | kubelet | isvc-primary-9efca4-predictor-897f6b668-4kf2v | Created | Created container: storage-initializer
kserve-ci-e2e-test | kubelet | isvc-primary-9efca4-predictor-897f6b668-4kf2v | Started | Started container storage-initializer
kserve-ci-e2e-test | kubelet | isvc-pmml-v2-kserve-predictor-6578f8ffc7-6k6cn | Unhealthy | Readiness probe failed: Get "https://10.134.0.22:8643/healthz": dial tcp 10.134.0.22:8643: connect: connection refused
kserve-ci-e2e-test | kubelet | isvc-primary-9efca4-predictor-897f6b668-4kf2v | Started | Started container kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-primary-9efca4-predictor-897f6b668-4kf2v | Created | Created container: kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-primary-9efca4-predictor-897f6b668-4kf2v | Pulled | Container image "quay.io/opendatahub/sklearn-serving-runtime:odh-pr-1293" already present on machine
kserve-ci-e2e-test | kubelet | isvc-primary-9efca4-predictor-897f6b668-4kf2v | Created | Created container: kserve-container
kserve-ci-e2e-test | kubelet | isvc-primary-9efca4-predictor-897f6b668-4kf2v | Started | Started container kserve-container
kserve-ci-e2e-test | kubelet | isvc-primary-9efca4-predictor-897f6b668-4kf2v | Pulled | Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-primary-9efca4-predictor | FailedComputeMetricsReplicas (x2) | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-primary-9efca4-predictor | FailedGetResourceMetric (x2) | failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
kserve-ci-e2e-test | v1beta1Controllers | isvc-primary-9efca4 | VirtualServiceCRDNotFound (x12) | Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.

kserve-ci-e2e-test | v1beta1Controllers | isvc-primary-9efca4 | InferenceServiceReady | InferenceService [isvc-primary-9efca4] is Ready
kserve-ci-e2e-test | v1beta1Controllers | isvc-secondary-9efca4 | InternalError | fails to update InferenceService status: Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-secondary-9efca4": the object has been modified; please apply your changes to the latest version and try again
kserve-ci-e2e-test | v1beta1Controllers | isvc-secondary-9efca4 | UpdateFailed | Failed to update status for InferenceService "isvc-secondary-9efca4": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-secondary-9efca4": the object has been modified; please apply your changes to the latest version and try again
kserve-ci-e2e-test | deployment-controller | isvc-secondary-9efca4-predictor | ScalingReplicaSet | Scaled up replica set isvc-secondary-9efca4-predictor-695c447fdc from 0 to 1
kserve-ci-e2e-test | replicaset-controller | isvc-secondary-9efca4-predictor-695c447fdc | SuccessfulCreate | Created pod: isvc-secondary-9efca4-predictor-695c447fdc-c577g
kserve-ci-e2e-test | kubelet | isvc-secondary-9efca4-predictor-695c447fdc-c577g | FailedMount | MountVolume.SetUp failed for volume "proxy-tls" : secret "isvc-secondary-9efca4-predictor-serving-cert" not found
kserve-ci-e2e-test | multus | isvc-secondary-9efca4-predictor-695c447fdc-c577g | AddedInterface | Add eth0 [10.133.0.45/23] from ovn-kubernetes
kserve-ci-e2e-test | kubelet | isvc-secondary-9efca4-predictor-695c447fdc-c577g | Started (x2) | Started container storage-initializer
kserve-ci-e2e-test | kubelet | isvc-secondary-9efca4-predictor-695c447fdc-c577g | Created (x2) | Created container: storage-initializer
kserve-ci-e2e-test | kubelet | isvc-secondary-9efca4-predictor-695c447fdc-c577g | Pulled (x2) | Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:0cd196d4c53b891914316f18ab5cfa9f85258e057f3687e65332c70bf642d22d" already present on machine
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-primary-9efca4-predictor | FailedGetResourceMetric (x4) | failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-primary-9efca4-predictor | FailedComputeMetricsReplicas (x4) | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
kserve-ci-e2e-test | kubelet | isvc-secondary-9efca4-predictor-695c447fdc-c577g | BackOff | Back-off restarting failed container storage-initializer in pod isvc-secondary-9efca4-predictor-695c447fdc-c577g_kserve-ci-e2e-test(6f2d9e57-cc5f-4136-80d6-3a53a66d1f79)
kserve-ci-e2e-test | v1beta1Controllers | isvc-secondary-9efca4 | VirtualServiceCRDNotFound (x13) | Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-secondary-9efca4-predictor | FailedGetResourceMetric | failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-secondary-9efca4-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
kserve-ci-e2e-test | replicaset-controller | isvc-init-fail-96f18d-predictor-78476558f5 | SuccessfulCreate | Created pod: isvc-init-fail-96f18d-predictor-78476558f5-zszdd
kserve-ci-e2e-test | deployment-controller | isvc-init-fail-96f18d-predictor | ScalingReplicaSet | Scaled up replica set isvc-init-fail-96f18d-predictor-78476558f5 from 0 to 1
kserve-ci-e2e-test | multus | isvc-init-fail-96f18d-predictor-78476558f5-zszdd | AddedInterface | Add eth0 [10.133.0.46/23] from ovn-kubernetes
kserve-ci-e2e-test | kubelet | isvc-primary-9efca4-predictor-897f6b668-4kf2v | Killing | Stopping container kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-primary-9efca4-predictor-897f6b668-4kf2v | Killing | Stopping container kserve-container
kserve-ci-e2e-test | v1beta1Controllers | isvc-init-fail-96f18d | InternalError | fails to update InferenceService status: Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-init-fail-96f18d": the object has been modified; please apply your changes to the latest version and try again
kserve-ci-e2e-test | v1beta1Controllers | isvc-init-fail-96f18d | UpdateFailed | Failed to update status for InferenceService "isvc-init-fail-96f18d": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-init-fail-96f18d": the object has been modified; please apply your changes to the latest version and try again
kserve-ci-e2e-test | kubelet | isvc-primary-9efca4-predictor-897f6b668-4kf2v | Unhealthy | Readiness probe failed: Get "https://10.133.0.44:8643/healthz": dial tcp 10.133.0.44:8643: connect: connection refused
kserve-ci-e2e-test | kubelet | isvc-primary-9efca4-predictor-897f6b668-4kf2v | Unhealthy (x9) | Readiness probe failed: dial tcp 10.133.0.44:8080: connect: connection refused
kserve-ci-e2e-test | kubelet | isvc-init-fail-96f18d-predictor-78476558f5-zszdd | Created (x2) | Created container: storage-initializer
kserve-ci-e2e-test | kubelet | isvc-init-fail-96f18d-predictor-78476558f5-zszdd | Pulled (x2) | Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:0cd196d4c53b891914316f18ab5cfa9f85258e057f3687e65332c70bf642d22d" already present on machine
kserve-ci-e2e-test | kubelet | isvc-init-fail-96f18d-predictor-78476558f5-zszdd | Started (x2) | Started container storage-initializer
kserve-ci-e2e-test | kubelet | isvc-init-fail-96f18d-predictor-78476558f5-zszdd | BackOff | Back-off restarting failed container storage-initializer in pod isvc-init-fail-96f18d-predictor-78476558f5-zszdd_kserve-ci-e2e-test(c21420d1-4323-4c3b-8aba-8f118edbe2b5)
kserve-ci-e2e-test | v1beta1Controllers | isvc-init-fail-96f18d | VirtualServiceCRDNotFound (x12) | Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.

kserve-ci-e2e-test | v1beta1Controllers | isvc-predictive-sklearn | UpdateFailed | Failed to update status for InferenceService "isvc-predictive-sklearn": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-predictive-sklearn": the object has been modified; please apply your changes to the latest version and try again
kserve-ci-e2e-test | kubelet | isvc-predictive-sklearn-predictor-cd7c759c9-pqm9n | FailedMount | MountVolume.SetUp failed for volume "proxy-tls" : secret "isvc-predictive-sklearn-predictor-serving-cert" not found
kserve-ci-e2e-test | deployment-controller | isvc-predictive-sklearn-predictor | ScalingReplicaSet | Scaled up replica set isvc-predictive-sklearn-predictor-cd7c759c9 from 0 to 1
kserve-ci-e2e-test | replicaset-controller | isvc-predictive-sklearn-predictor-cd7c759c9 | SuccessfulCreate | Created pod: isvc-predictive-sklearn-predictor-cd7c759c9-pqm9n
kserve-ci-e2e-test | kubelet | isvc-predictive-sklearn-predictor-cd7c759c9-pqm9n | Created | Created container: storage-initializer
kserve-ci-e2e-test | kubelet | isvc-predictive-sklearn-predictor-cd7c759c9-pqm9n | Started | Started container storage-initializer
kserve-ci-e2e-test | multus | isvc-predictive-sklearn-predictor-cd7c759c9-pqm9n | AddedInterface | Add eth0 [10.134.0.23/23] from ovn-kubernetes
kserve-ci-e2e-test | kubelet | isvc-predictive-sklearn-predictor-cd7c759c9-pqm9n | Pulled | Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:0cd196d4c53b891914316f18ab5cfa9f85258e057f3687e65332c70bf642d22d" already present on machine
kserve-ci-e2e-test | kubelet | isvc-predictive-sklearn-predictor-cd7c759c9-pqm9n | Pulling | Pulling image "kserve/predictiveserver:latest"
kserve-ci-e2e-test | kubelet | isvc-predictive-sklearn-predictor-cd7c759c9-pqm9n | Created | Created container: kserve-container
kserve-ci-e2e-test | kubelet | isvc-predictive-sklearn-predictor-cd7c759c9-pqm9n | Started | Started container kserve-container
kserve-ci-e2e-test | kubelet | isvc-predictive-sklearn-predictor-cd7c759c9-pqm9n | Pulled | Successfully pulled image "kserve/predictiveserver:latest" in 22.121s (22.121s including waiting). Image size: 2312633199 bytes.
kserve-ci-e2e-test | kubelet | isvc-predictive-sklearn-predictor-cd7c759c9-pqm9n | Pulled | Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine
kserve-ci-e2e-test | kubelet | isvc-predictive-sklearn-predictor-cd7c759c9-pqm9n | Created | Created container: kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-predictive-sklearn-predictor-cd7c759c9-pqm9n | Started | Started container kube-rbac-proxy
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-predictive-sklearn-predictor | FailedGetResourceMetric (x4) | failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-predictive-sklearn-predictor | FailedComputeMetricsReplicas (x4) | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
kserve-ci-e2e-test | kubelet | isvc-predictive-sklearn-predictor-cd7c759c9-pqm9n | Unhealthy (x10) | Readiness probe failed: dial tcp 10.134.0.23:8080: connect: connection refused
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-predictive-sklearn-predictor | FailedGetResourceMetric (x3) | failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-predictive-sklearn-predictor | FailedComputeMetricsReplicas (x3) | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
kserve-ci-e2e-test | v1beta1Controllers | isvc-predictive-sklearn | VirtualServiceCRDNotFound (x13) | Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.
kserve-ci-e2e-test | v1beta1Controllers | isvc-predictive-sklearn | InferenceServiceReady | InferenceService [isvc-predictive-sklearn] is Ready

kserve-ci-e2e-test | replicaset-controller | isvc-predictive-xgboost-predictor-7ff98fd74d | SuccessfulCreate | Created pod: isvc-predictive-xgboost-predictor-7ff98fd74d-x6dvj
kserve-ci-e2e-test | kubelet | isvc-predictive-sklearn-predictor-cd7c759c9-pqm9n | Killing | Stopping container kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-predictive-sklearn-predictor-cd7c759c9-pqm9n | Killing | Stopping container kserve-container
kserve-ci-e2e-test | deployment-controller | isvc-predictive-xgboost-predictor | ScalingReplicaSet | Scaled up replica set isvc-predictive-xgboost-predictor-7ff98fd74d from 0 to 1
kserve-ci-e2e-test | v1beta1Controllers | isvc-predictive-xgboost | UpdateFailed (x2) | Failed to update status for InferenceService "isvc-predictive-xgboost": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-predictive-xgboost": the object has been modified; please apply your changes to the latest version and try again
kserve-ci-e2e-test | v1beta1Controllers | isvc-predictive-xgboost | InternalError | fails to update InferenceService status: Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-predictive-xgboost": the object has been modified; please apply your changes to the latest version and try again
kserve-ci-e2e-test | multus | isvc-predictive-xgboost-predictor-7ff98fd74d-x6dvj | AddedInterface | Add eth0 [10.134.0.24/23] from ovn-kubernetes
kserve-ci-e2e-test | kubelet | isvc-predictive-xgboost-predictor-7ff98fd74d-x6dvj | Pulled | Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:0cd196d4c53b891914316f18ab5cfa9f85258e057f3687e65332c70bf642d22d" already present on machine
kserve-ci-e2e-test | kubelet | isvc-predictive-xgboost-predictor-7ff98fd74d-x6dvj | Created | Created container: storage-initializer
kserve-ci-e2e-test | kubelet | isvc-predictive-xgboost-predictor-7ff98fd74d-x6dvj | Started | Started container storage-initializer
kserve-ci-e2e-test | kubelet | isvc-predictive-xgboost-predictor-7ff98fd74d-x6dvj | Created | Created container: kserve-container
kserve-ci-e2e-test | kubelet | isvc-predictive-xgboost-predictor-7ff98fd74d-x6dvj | Pulled | Container image "kserve/predictiveserver:latest" already present on machine
kserve-ci-e2e-test | kubelet | isvc-predictive-xgboost-predictor-7ff98fd74d-x6dvj | Started | Started container kserve-container

kserve-ci-e2e-test

kubelet

isvc-predictive-xgboost-predictor-7ff98fd74d-x6dvj

Pulled

Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine

kserve-ci-e2e-test

kubelet

isvc-predictive-sklearn-predictor-cd7c759c9-pqm9n

Unhealthy

Readiness probe failed: Get "https://10.134.0.23:8643/healthz": dial tcp 10.134.0.23:8643: connect: connection refused

kserve-ci-e2e-test

kubelet

isvc-predictive-xgboost-predictor-7ff98fd74d-x6dvj

Created

Created container: kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-predictive-xgboost-predictor-7ff98fd74d-x6dvj

Started

Started container kube-rbac-proxy
(x3)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-predictive-xgboost-predictor

FailedGetResourceMetric

failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x3)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-predictive-xgboost-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API

kserve-ci-e2e-test

v1beta1Controllers

isvc-predictive-xgboost

InferenceServiceReady

InferenceService [isvc-predictive-xgboost] is Ready
(x12)

kserve-ci-e2e-test

v1beta1Controllers

isvc-predictive-xgboost

VirtualServiceCRDNotFound

Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.

kserve-ci-e2e-test

replicaset-controller

isvc-predictive-lightgbm-predictor-75cb94f9f

SuccessfulCreate

Created pod: isvc-predictive-lightgbm-predictor-75cb94f9f-hnwmp

kserve-ci-e2e-test

kubelet

isvc-predictive-xgboost-predictor-7ff98fd74d-x6dvj

Killing

Stopping container kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-predictive-xgboost-predictor-7ff98fd74d-x6dvj

Killing

Stopping container kserve-container

kserve-ci-e2e-test

v1beta1Controllers

isvc-predictive-lightgbm

UpdateFailed

Failed to update status for InferenceService "isvc-predictive-lightgbm": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-predictive-lightgbm": the object has been modified; please apply your changes to the latest version and try again

kserve-ci-e2e-test

deployment-controller

isvc-predictive-lightgbm-predictor

ScalingReplicaSet

Scaled up replica set isvc-predictive-lightgbm-predictor-75cb94f9f from 0 to 1
(x3)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-predictive-xgboost-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
(x3)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-predictive-xgboost-predictor

FailedGetResourceMetric

failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)

kserve-ci-e2e-test

kubelet

isvc-predictive-lightgbm-predictor-75cb94f9f-hnwmp

Started

Started container storage-initializer

kserve-ci-e2e-test

kubelet

isvc-predictive-lightgbm-predictor-75cb94f9f-hnwmp

Created

Created container: storage-initializer

kserve-ci-e2e-test

kubelet

isvc-predictive-lightgbm-predictor-75cb94f9f-hnwmp

Pulled

Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:0cd196d4c53b891914316f18ab5cfa9f85258e057f3687e65332c70bf642d22d" already present on machine

kserve-ci-e2e-test

multus

isvc-predictive-lightgbm-predictor-75cb94f9f-hnwmp

AddedInterface

Add eth0 [10.134.0.25/23] from ovn-kubernetes

kserve-ci-e2e-test

kubelet

isvc-predictive-lightgbm-predictor-75cb94f9f-hnwmp

FailedMount

MountVolume.SetUp failed for volume "proxy-tls" : secret "isvc-predictive-lightgbm-predictor-serving-cert" not found

kserve-ci-e2e-test

kubelet

isvc-predictive-xgboost-predictor-7ff98fd74d-x6dvj

Unhealthy

Readiness probe failed: Get "https://10.134.0.24:8643/healthz": dial tcp 10.134.0.24:8643: connect: connection refused
(x10)

kserve-ci-e2e-test

kubelet

isvc-predictive-xgboost-predictor-7ff98fd74d-x6dvj

Unhealthy

Readiness probe failed: dial tcp 10.134.0.24:8080: connect: connection refused

kserve-ci-e2e-test

kubelet

isvc-predictive-lightgbm-predictor-75cb94f9f-hnwmp

Created

Created container: kserve-container

kserve-ci-e2e-test

kubelet

isvc-predictive-lightgbm-predictor-75cb94f9f-hnwmp

Pulled

Container image "kserve/predictiveserver:latest" already present on machine

kserve-ci-e2e-test

kubelet

isvc-predictive-lightgbm-predictor-75cb94f9f-hnwmp

Started

Started container kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-predictive-lightgbm-predictor-75cb94f9f-hnwmp

Started

Started container kserve-container

kserve-ci-e2e-test

kubelet

isvc-predictive-lightgbm-predictor-75cb94f9f-hnwmp

Pulled

Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine

kserve-ci-e2e-test

kubelet

isvc-predictive-lightgbm-predictor-75cb94f9f-hnwmp

Created

Created container: kube-rbac-proxy

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-predictive-lightgbm-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-predictive-lightgbm-predictor

FailedGetResourceMetric

failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x9)

kserve-ci-e2e-test

kubelet

isvc-predictive-lightgbm-predictor-75cb94f9f-hnwmp

Unhealthy

Readiness probe failed: dial tcp 10.134.0.25:8080: connect: connection refused
(x4)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-predictive-lightgbm-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
(x4)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-predictive-lightgbm-predictor

FailedGetResourceMetric

failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
(x13)

kserve-ci-e2e-test

v1beta1Controllers

isvc-predictive-lightgbm

VirtualServiceCRDNotFound

Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.

kserve-ci-e2e-test

v1beta1Controllers

isvc-predictive-lightgbm

InferenceServiceReady

InferenceService [isvc-predictive-lightgbm] is Ready

kserve-ci-e2e-test

v1beta1Controllers

isvc-predictive-sklearn-v2

UpdateFailed

Failed to update status for InferenceService "isvc-predictive-sklearn-v2": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-predictive-sklearn-v2": the object has been modified; please apply your changes to the latest version and try again

kserve-ci-e2e-test

kubelet

isvc-predictive-sklearn-v2-predictor-b5d4f6b79-mggxj

FailedMount

MountVolume.SetUp failed for volume "proxy-tls" : secret "isvc-predictive-sklearn-v2-predictor-serving-cert" not found

kserve-ci-e2e-test

kubelet

isvc-predictive-lightgbm-predictor-75cb94f9f-hnwmp

Killing

Stopping container kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-predictive-lightgbm-predictor-75cb94f9f-hnwmp

Killing

Stopping container kserve-container

kserve-ci-e2e-test

deployment-controller

isvc-predictive-sklearn-v2-predictor

ScalingReplicaSet

Scaled up replica set isvc-predictive-sklearn-v2-predictor-b5d4f6b79 from 0 to 1

kserve-ci-e2e-test

replicaset-controller

isvc-predictive-sklearn-v2-predictor-b5d4f6b79

SuccessfulCreate

Created pod: isvc-predictive-sklearn-v2-predictor-b5d4f6b79-mggxj

kserve-ci-e2e-test

kubelet

isvc-predictive-sklearn-v2-predictor-b5d4f6b79-mggxj

Started

Started container storage-initializer

kserve-ci-e2e-test

multus

isvc-predictive-sklearn-v2-predictor-b5d4f6b79-mggxj

AddedInterface

Add eth0 [10.134.0.26/23] from ovn-kubernetes

kserve-ci-e2e-test

kubelet

isvc-predictive-lightgbm-predictor-75cb94f9f-hnwmp

Unhealthy

Readiness probe failed: Get "https://10.134.0.25:8643/healthz": dial tcp 10.134.0.25:8643: connect: connection refused

kserve-ci-e2e-test

kubelet

isvc-predictive-sklearn-v2-predictor-b5d4f6b79-mggxj

Pulled

Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:0cd196d4c53b891914316f18ab5cfa9f85258e057f3687e65332c70bf642d22d" already present on machine

kserve-ci-e2e-test

kubelet

isvc-predictive-sklearn-v2-predictor-b5d4f6b79-mggxj

Created

Created container: storage-initializer

kserve-ci-e2e-test

kubelet

isvc-predictive-sklearn-v2-predictor-b5d4f6b79-mggxj

Created

Created container: kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-predictive-sklearn-v2-predictor-b5d4f6b79-mggxj

Pulled

Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine

kserve-ci-e2e-test

kubelet

isvc-predictive-sklearn-v2-predictor-b5d4f6b79-mggxj

Pulled

Container image "kserve/predictiveserver:latest" already present on machine

kserve-ci-e2e-test

kubelet

isvc-predictive-sklearn-v2-predictor-b5d4f6b79-mggxj

Started

Started container kserve-container

kserve-ci-e2e-test

kubelet

isvc-predictive-sklearn-v2-predictor-b5d4f6b79-mggxj

Created

Created container: kserve-container

kserve-ci-e2e-test

kubelet

isvc-predictive-sklearn-v2-predictor-b5d4f6b79-mggxj

Started

Started container kube-rbac-proxy
(x3)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-predictive-sklearn-v2-predictor

FailedGetResourceMetric

failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x3)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-predictive-sklearn-v2-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API

kserve-ci-e2e-test

v1beta1Controllers

isvc-predictive-sklearn-v2

InferenceServiceReady

InferenceService [isvc-predictive-sklearn-v2] is Ready
(x12)

kserve-ci-e2e-test

v1beta1Controllers

isvc-predictive-sklearn-v2

VirtualServiceCRDNotFound

Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.

kserve-ci-e2e-test

kubelet

isvc-predictive-sklearn-v2-predictor-b5d4f6b79-mggxj

Killing

Stopping container kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-predictive-xgboost-v2-predictor-6577c65fd8-5zxhh

Created

Created container: storage-initializer
(x3)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-predictive-sklearn-v2-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)

kserve-ci-e2e-test

multus

isvc-predictive-xgboost-v2-predictor-6577c65fd8-5zxhh

AddedInterface

Add eth0 [10.134.0.27/23] from ovn-kubernetes

kserve-ci-e2e-test

kubelet

isvc-predictive-xgboost-v2-predictor-6577c65fd8-5zxhh

Pulled

Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:0cd196d4c53b891914316f18ab5cfa9f85258e057f3687e65332c70bf642d22d" already present on machine

kserve-ci-e2e-test

kubelet

isvc-predictive-xgboost-v2-predictor-6577c65fd8-5zxhh

Started

Started container storage-initializer

kserve-ci-e2e-test

replicaset-controller

isvc-predictive-xgboost-v2-predictor-6577c65fd8

SuccessfulCreate

Created pod: isvc-predictive-xgboost-v2-predictor-6577c65fd8-5zxhh

kserve-ci-e2e-test

deployment-controller

isvc-predictive-xgboost-v2-predictor

ScalingReplicaSet

Scaled up replica set isvc-predictive-xgboost-v2-predictor-6577c65fd8 from 0 to 1

kserve-ci-e2e-test

v1beta1Controllers

isvc-predictive-xgboost-v2

UpdateFailed

Failed to update status for InferenceService "isvc-predictive-xgboost-v2": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-predictive-xgboost-v2": the object has been modified; please apply your changes to the latest version and try again
(x3)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-predictive-sklearn-v2-predictor

FailedGetResourceMetric

failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)

kserve-ci-e2e-test

kubelet

isvc-predictive-sklearn-v2-predictor-b5d4f6b79-mggxj

Killing

Stopping container kserve-container
(x5)

kserve-ci-e2e-test

kubelet

isvc-predictive-sklearn-v2-predictor-b5d4f6b79-mggxj

Unhealthy

Readiness probe failed: Get "http://10.134.0.26:8080/v2/models/isvc-predictive-sklearn-v2/ready": dial tcp 10.134.0.26:8080: connect: connection refused

kserve-ci-e2e-test

kubelet

isvc-predictive-sklearn-v2-predictor-b5d4f6b79-mggxj

Unhealthy

Readiness probe failed: Get "https://10.134.0.26:8643/healthz": dial tcp 10.134.0.26:8643: connect: connection refused

kserve-ci-e2e-test

kubelet

isvc-predictive-xgboost-v2-predictor-6577c65fd8-5zxhh

Created

Created container: kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-predictive-xgboost-v2-predictor-6577c65fd8-5zxhh

Started

Started container kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-predictive-xgboost-v2-predictor-6577c65fd8-5zxhh

Started

Started container kserve-container

kserve-ci-e2e-test

kubelet

isvc-predictive-xgboost-v2-predictor-6577c65fd8-5zxhh

Created

Created container: kserve-container

kserve-ci-e2e-test

kubelet

isvc-predictive-xgboost-v2-predictor-6577c65fd8-5zxhh

Pulled

Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine

kserve-ci-e2e-test

kubelet

isvc-predictive-xgboost-v2-predictor-6577c65fd8-5zxhh

Pulled

Container image "kserve/predictiveserver:latest" already present on machine
(x2)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-predictive-xgboost-v2-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x2)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-predictive-xgboost-v2-predictor

FailedGetResourceMetric

failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API

kserve-ci-e2e-test

v1beta1Controllers

isvc-predictive-xgboost-v2

InferenceServiceReady

InferenceService [isvc-predictive-xgboost-v2] is Ready
(x12)

kserve-ci-e2e-test

v1beta1Controllers

isvc-predictive-xgboost-v2

VirtualServiceCRDNotFound

Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.
(x4)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-predictive-xgboost-v2-predictor

FailedGetResourceMetric

failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
(x4)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-predictive-xgboost-v2-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)

kserve-ci-e2e-test

kubelet

isvc-predictive-lightgbm-v2-predictor-865b4598f7-f2xmz

FailedMount

MountVolume.SetUp failed for volume "proxy-tls" : secret "isvc-predictive-lightgbm-v2-predictor-serving-cert" not found

kserve-ci-e2e-test

replicaset-controller

isvc-predictive-lightgbm-v2-predictor-865b4598f7

SuccessfulCreate

Created pod: isvc-predictive-lightgbm-v2-predictor-865b4598f7-f2xmz

kserve-ci-e2e-test

kubelet

isvc-predictive-xgboost-v2-predictor-6577c65fd8-5zxhh

Killing

Stopping container kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-predictive-xgboost-v2-predictor-6577c65fd8-5zxhh

Killing

Stopping container kserve-container

kserve-ci-e2e-test

deployment-controller

isvc-predictive-lightgbm-v2-predictor

ScalingReplicaSet

Scaled up replica set isvc-predictive-lightgbm-v2-predictor-865b4598f7 from 0 to 1
(x2)

kserve-ci-e2e-test

v1beta1Controllers

isvc-predictive-lightgbm-v2

UpdateFailed

Failed to update status for InferenceService "isvc-predictive-lightgbm-v2": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-predictive-lightgbm-v2": the object has been modified; please apply your changes to the latest version and try again

kserve-ci-e2e-test

v1beta1Controllers

isvc-predictive-lightgbm-v2

InternalError

fails to update InferenceService status: Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-predictive-lightgbm-v2": the object has been modified; please apply your changes to the latest version and try again

kserve-ci-e2e-test

kubelet

isvc-predictive-lightgbm-v2-predictor-865b4598f7-f2xmz

Pulled

Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:0cd196d4c53b891914316f18ab5cfa9f85258e057f3687e65332c70bf642d22d" already present on machine

kserve-ci-e2e-test

multus

isvc-predictive-lightgbm-v2-predictor-865b4598f7-f2xmz

AddedInterface

Add eth0 [10.134.0.28/23] from ovn-kubernetes

kserve-ci-e2e-test

kubelet

isvc-predictive-xgboost-v2-predictor-6577c65fd8-5zxhh

Unhealthy

Readiness probe failed: Get "https://10.134.0.27:8643/healthz": dial tcp 10.134.0.27:8643: connect: connection refused

kserve-ci-e2e-test

kubelet

isvc-predictive-lightgbm-v2-predictor-865b4598f7-f2xmz

Started

Started container storage-initializer

kserve-ci-e2e-test

kubelet

isvc-predictive-lightgbm-v2-predictor-865b4598f7-f2xmz

Created

Created container: storage-initializer
(x6)

kserve-ci-e2e-test

kubelet

isvc-predictive-xgboost-v2-predictor-6577c65fd8-5zxhh

Unhealthy

Readiness probe failed: Get "http://10.134.0.27:8080/v2/models/isvc-predictive-xgboost-v2/ready": dial tcp 10.134.0.27:8080: connect: connection refused

kserve-ci-e2e-test

kubelet

isvc-predictive-lightgbm-v2-predictor-865b4598f7-f2xmz

Started

Started container kserve-container

kserve-ci-e2e-test

kubelet

isvc-predictive-lightgbm-v2-predictor-865b4598f7-f2xmz

Created

Created container: kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-predictive-lightgbm-v2-predictor-865b4598f7-f2xmz

Created

Created container: kserve-container

kserve-ci-e2e-test

kubelet

isvc-predictive-lightgbm-v2-predictor-865b4598f7-f2xmz

Pulled

Container image "kserve/predictiveserver:latest" already present on machine

kserve-ci-e2e-test

kubelet

isvc-predictive-lightgbm-v2-predictor-865b4598f7-f2xmz

Pulled

Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine

kserve-ci-e2e-test

kubelet

isvc-predictive-lightgbm-v2-predictor-865b4598f7-f2xmz

Started

Started container kube-rbac-proxy
(x3)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-predictive-lightgbm-v2-predictor

FailedGetResourceMetric

failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x3)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-predictive-lightgbm-v2-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x5)

kserve-ci-e2e-test

kubelet

isvc-predictive-lightgbm-v2-predictor-865b4598f7-f2xmz

Unhealthy

Readiness probe failed: Get "http://10.134.0.28:8080/v2/models/isvc-predictive-lightgbm-v2/ready": dial tcp 10.134.0.28:8080: connect: connection refused
(x14)

kserve-ci-e2e-test

v1beta1Controllers

isvc-predictive-lightgbm-v2

VirtualServiceCRDNotFound

Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.

kserve-ci-e2e-test

v1beta1Controllers

isvc-predictive-lightgbm-v2

InferenceServiceReady

InferenceService [isvc-predictive-lightgbm-v2] is Ready

kserve-ci-e2e-test

kubelet

isvc-predictive-lightgbm-v2-predictor-865b4598f7-f2xmz

Killing

Stopping container kube-rbac-proxy

kserve-ci-e2e-test

v1beta1Controllers

isvc-sklearn-scheduler

InternalError

fails to update InferenceService status: Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-sklearn-scheduler": the object has been modified; please apply your changes to the latest version and try again
(x3)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-predictive-lightgbm-v2-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
(x3)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-predictive-lightgbm-v2-predictor

FailedGetResourceMetric

failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)

kserve-ci-e2e-test

v1beta1Controllers

isvc-sklearn-scheduler

UpdateFailed

Failed to update status for InferenceService "isvc-sklearn-scheduler": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-sklearn-scheduler": the object has been modified; please apply your changes to the latest version and try again

kserve-ci-e2e-test

replicaset-controller

isvc-sklearn-scheduler-predictor-c477977d5

SuccessfulCreate

Created pod: isvc-sklearn-scheduler-predictor-c477977d5-vnbg6

kserve-ci-e2e-test

deployment-controller

isvc-sklearn-scheduler-predictor

ScalingReplicaSet

Scaled up replica set isvc-sklearn-scheduler-predictor-c477977d5 from 0 to 1

kserve-ci-e2e-test

kubelet

isvc-predictive-lightgbm-v2-predictor-865b4598f7-f2xmz

Killing

Stopping container kserve-container
(x6)

kserve-ci-e2e-test

v1beta1Controllers

isvc-sklearn-scheduler

VirtualServiceCRDNotFound

Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.

kserve-ci-e2e-test

replicaset-controller

isvc-sklearn-predictor-d8dbfbbb9

SuccessfulCreate

Created pod: isvc-sklearn-predictor-d8dbfbbb9-xgzx7

kserve-ci-e2e-test

deployment-controller

isvc-sklearn-predictor

ScalingReplicaSet

Scaled up replica set isvc-sklearn-predictor-d8dbfbbb9 from 0 to 1

kserve-ci-e2e-test

v1beta1Controllers

isvc-sklearn

UpdateFailed

Failed to update status for InferenceService "isvc-sklearn": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-sklearn": the object has been modified; please apply your changes to the latest version and try again

kserve-ci-e2e-test

kubelet

isvc-sklearn-predictor-d8dbfbbb9-xgzx7

Started

Started container storage-initializer

kserve-ci-e2e-test

multus

isvc-sklearn-predictor-d8dbfbbb9-xgzx7

AddedInterface

Add eth0 [10.133.0.47/23] from ovn-kubernetes

kserve-ci-e2e-test

kubelet

isvc-sklearn-predictor-d8dbfbbb9-xgzx7

Created

Created container: storage-initializer

kserve-ci-e2e-test

kubelet

isvc-sklearn-predictor-d8dbfbbb9-xgzx7

Pulled

Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:0cd196d4c53b891914316f18ab5cfa9f85258e057f3687e65332c70bf642d22d" already present on machine

kserve-ci-e2e-test

kubelet

isvc-predictive-lightgbm-v2-predictor-865b4598f7-f2xmz

Unhealthy

Readiness probe failed: Get "https://10.134.0.28:8643/healthz": dial tcp 10.134.0.28:8643: connect: connection refused

kserve-ci-e2e-test

kubelet

isvc-sklearn-predictor-d8dbfbbb9-xgzx7

Pulled

Container image "quay.io/opendatahub/sklearn-serving-runtime:odh-pr-1293" already present on machine

kserve-ci-e2e-test

kubelet

isvc-sklearn-predictor-d8dbfbbb9-xgzx7

Created

Created container: kserve-container

kserve-ci-e2e-test

kubelet

isvc-sklearn-predictor-d8dbfbbb9-xgzx7

Started

Started container kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-sklearn-predictor-d8dbfbbb9-xgzx7

Created

Created container: kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-sklearn-predictor-d8dbfbbb9-xgzx7

Pulled

Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine

kserve-ci-e2e-test

kubelet

isvc-sklearn-predictor-d8dbfbbb9-xgzx7

Started

Started container kserve-container
(x2)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-sklearn-predictor

FailedGetResourceMetric

failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x2)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-sklearn-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x11)

kserve-ci-e2e-test

v1beta1Controllers

isvc-sklearn

VirtualServiceCRDNotFound

Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.

kserve-ci-e2e-test

v1beta1Controllers

isvc-sklearn

InferenceServiceReady

InferenceService [isvc-sklearn] is Ready
(x3)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-sklearn-predictor

FailedGetResourceMetric

failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
(x3)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-sklearn-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)

kserve-ci-e2e-test

kubelet

isvc-sklearn-predictor-d8dbfbbb9-xgzx7

Killing

Stopping container kserve-container

kserve-ci-e2e-test

v1beta1Controllers

sklearn-v2-mlserver

UpdateFailed

Failed to update status for InferenceService "sklearn-v2-mlserver": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "sklearn-v2-mlserver": the object has been modified; please apply your changes to the latest version and try again

kserve-ci-e2e-test

deployment-controller

sklearn-v2-mlserver-predictor

ScalingReplicaSet

Scaled up replica set sklearn-v2-mlserver-predictor-65d8664766 from 0 to 1

kserve-ci-e2e-test

kubelet

isvc-sklearn-predictor-d8dbfbbb9-xgzx7

Killing

Stopping container kube-rbac-proxy

kserve-ci-e2e-test

kubelet

sklearn-v2-mlserver-predictor-65d8664766-nv54z

FailedMount

MountVolume.SetUp failed for volume "proxy-tls" : secret "sklearn-v2-mlserver-predictor-serving-cert" not found

kserve-ci-e2e-test

replicaset-controller

sklearn-v2-mlserver-predictor-65d8664766

SuccessfulCreate

Created pod: sklearn-v2-mlserver-predictor-65d8664766-nv54z

kserve-ci-e2e-test

multus

sklearn-v2-mlserver-predictor-65d8664766-nv54z

AddedInterface

Add eth0 [10.133.0.48/23] from ovn-kubernetes

kserve-ci-e2e-test

kubelet

sklearn-v2-mlserver-predictor-65d8664766-nv54z

Started

Started container storage-initializer

kserve-ci-e2e-test

kubelet

sklearn-v2-mlserver-predictor-65d8664766-nv54z

Pulled

Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:0cd196d4c53b891914316f18ab5cfa9f85258e057f3687e65332c70bf642d22d" already present on machine

kserve-ci-e2e-test

kubelet

sklearn-v2-mlserver-predictor-65d8664766-nv54z

Created

Created container: storage-initializer

kserve-ci-e2e-test

kubelet

isvc-sklearn-predictor-d8dbfbbb9-xgzx7

Unhealthy

Readiness probe failed: Get "https://10.133.0.47:8643/healthz": dial tcp 10.133.0.47:8643: connect: connection refused
(x9)

kserve-ci-e2e-test

kubelet

isvc-sklearn-predictor-d8dbfbbb9-xgzx7

Unhealthy

Readiness probe failed: dial tcp 10.133.0.47:8080: connect: connection refused

kserve-ci-e2e-test

kubelet

sklearn-v2-mlserver-predictor-65d8664766-nv54z

Started

Started container kserve-container

kserve-ci-e2e-test

kubelet

sklearn-v2-mlserver-predictor-65d8664766-nv54z

Pulled

Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine

kserve-ci-e2e-test

kubelet

sklearn-v2-mlserver-predictor-65d8664766-nv54z

Created

Created container: kube-rbac-proxy

kserve-ci-e2e-test

kubelet

sklearn-v2-mlserver-predictor-65d8664766-nv54z

Started

Started container kube-rbac-proxy

kserve-ci-e2e-test

kubelet

sklearn-v2-mlserver-predictor-65d8664766-nv54z

Pulled

Container image "docker.io/seldonio/mlserver:1.7.1" already present on machine

kserve-ci-e2e-test

kubelet

sklearn-v2-mlserver-predictor-65d8664766-nv54z

Created

Created container: kserve-container

kserve-ci-e2e-test

kubelet

sklearn-v2-mlserver-predictor-65d8664766-nv54z

Unhealthy

Readiness probe failed: HTTP probe failed with statuscode: 400
(x3)

kserve-ci-e2e-test

horizontal-pod-autoscaler

sklearn-v2-mlserver-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x3)

kserve-ci-e2e-test

horizontal-pod-autoscaler

sklearn-v2-mlserver-predictor

FailedGetResourceMetric

failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x12)

kserve-ci-e2e-test

v1beta1Controllers

sklearn-v2-mlserver

VirtualServiceCRDNotFound

Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.

kserve-ci-e2e-test

v1beta1Controllers

sklearn-v2-mlserver

InferenceServiceReady

InferenceService [sklearn-v2-mlserver] is Ready

kserve-ci-e2e-test

kubelet

sklearn-v2-mlserver-predictor-65d8664766-nv54z

Killing

Stopping container kube-rbac-proxy

kserve-ci-e2e-test

deployment-controller

isvc-sklearn-runtime-predictor

ScalingReplicaSet

Scaled up replica set isvc-sklearn-runtime-predictor-65cd49579f from 0 to 1
(x2)

kserve-ci-e2e-test

v1beta1Controllers

isvc-sklearn-runtime

UpdateFailed

Failed to update status for InferenceService "isvc-sklearn-runtime": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-sklearn-runtime": the object has been modified; please apply your changes to the latest version and try again

kserve-ci-e2e-test

v1beta1Controllers

isvc-sklearn-runtime

InternalError

fails to update InferenceService status: Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-sklearn-runtime": the object has been modified; please apply your changes to the latest version and try again

kserve-ci-e2e-test

kubelet

isvc-sklearn-runtime-predictor-65cd49579f-pgv28

FailedMount

MountVolume.SetUp failed for volume "proxy-tls" : secret "isvc-sklearn-runtime-predictor-serving-cert" not found

kserve-ci-e2e-test

replicaset-controller

isvc-sklearn-runtime-predictor-65cd49579f

SuccessfulCreate

Created pod: isvc-sklearn-runtime-predictor-65cd49579f-pgv28

kserve-ci-e2e-test

horizontal-pod-autoscaler

sklearn-v2-mlserver-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)

kserve-ci-e2e-test

horizontal-pod-autoscaler

sklearn-v2-mlserver-predictor

FailedGetResourceMetric

failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)

kserve-ci-e2e-test

kubelet

sklearn-v2-mlserver-predictor-65d8664766-nv54z

Killing

Stopping container kserve-container

kserve-ci-e2e-test

multus

isvc-sklearn-runtime-predictor-65cd49579f-pgv28

AddedInterface

Add eth0 [10.133.0.49/23] from ovn-kubernetes

kserve-ci-e2e-test

kubelet

isvc-sklearn-runtime-predictor-65cd49579f-pgv28

Pulled

Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:0cd196d4c53b891914316f18ab5cfa9f85258e057f3687e65332c70bf642d22d" already present on machine

kserve-ci-e2e-test

kubelet

isvc-sklearn-runtime-predictor-65cd49579f-pgv28

Created

Created container: storage-initializer

kserve-ci-e2e-test

kubelet

isvc-sklearn-runtime-predictor-65cd49579f-pgv28

Started

Started container storage-initializer

kserve-ci-e2e-test

kubelet

sklearn-v2-mlserver-predictor-65d8664766-nv54z

Unhealthy

Readiness probe failed: Get "https://10.133.0.48:8643/healthz": dial tcp 10.133.0.48:8643: connect: connection refused

kserve-ci-e2e-test

kubelet

isvc-sklearn-runtime-predictor-65cd49579f-pgv28

Started

Started container kserve-container

kserve-ci-e2e-test

kubelet

isvc-sklearn-runtime-predictor-65cd49579f-pgv28

Created

Created container: kserve-container

kserve-ci-e2e-test

kubelet

isvc-sklearn-runtime-predictor-65cd49579f-pgv28

Pulled

Container image "quay.io/opendatahub/sklearn-serving-runtime:odh-pr-1293" already present on machine

kserve-ci-e2e-test

kubelet

isvc-sklearn-runtime-predictor-65cd49579f-pgv28

Pulled

Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine

kserve-ci-e2e-test

kubelet

isvc-sklearn-runtime-predictor-65cd49579f-pgv28

Started

Started container kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-sklearn-runtime-predictor-65cd49579f-pgv28

Created

Created container: kube-rbac-proxy
(x3)

kserve-ci-e2e-test

kubelet

isvc-sklearn-runtime-predictor-65cd49579f-pgv28

Unhealthy

Readiness probe failed: dial tcp 10.133.0.49:8080: connect: connection refused

kserve-ci-e2e-test

v1beta1Controllers

isvc-sklearn-runtime

InferenceServiceReady

InferenceService [isvc-sklearn-runtime] is Ready
(x13)

kserve-ci-e2e-test

v1beta1Controllers

isvc-sklearn-runtime

VirtualServiceCRDNotFound

Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.
(x2)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-sklearn-runtime-predictor

FailedGetResourceMetric

failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x2)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-sklearn-runtime-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API

kserve-ci-e2e-test

kubelet

isvc-sklearn-runtime-predictor-65cd49579f-pgv28

Killing

Stopping container kserve-container

kserve-ci-e2e-test

v1beta1Controllers

isvc-sklearn-v2-runtime

UpdateFailed

Failed to update status for InferenceService "isvc-sklearn-v2-runtime": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-sklearn-v2-runtime": the object has been modified; please apply your changes to the latest version and try again

kserve-ci-e2e-test

kubelet

isvc-sklearn-runtime-predictor-65cd49579f-pgv28

Killing

Stopping container kube-rbac-proxy

kserve-ci-e2e-test

multus

isvc-sklearn-v2-runtime-predictor-6d84c876f4-2zh69

AddedInterface

Add eth0 [10.133.0.50/23] from ovn-kubernetes

kserve-ci-e2e-test

replicaset-controller

isvc-sklearn-v2-runtime-predictor-6d84c876f4

SuccessfulCreate

Created pod: isvc-sklearn-v2-runtime-predictor-6d84c876f4-2zh69

kserve-ci-e2e-test

deployment-controller

isvc-sklearn-v2-runtime-predictor

ScalingReplicaSet

Scaled up replica set isvc-sklearn-v2-runtime-predictor-6d84c876f4 from 0 to 1

kserve-ci-e2e-test

kubelet

isvc-sklearn-v2-runtime-predictor-6d84c876f4-2zh69

Created

Created container: storage-initializer

kserve-ci-e2e-test

kubelet

isvc-sklearn-v2-runtime-predictor-6d84c876f4-2zh69

Started

Started container storage-initializer

kserve-ci-e2e-test

kubelet

isvc-sklearn-v2-runtime-predictor-6d84c876f4-2zh69

Pulled

Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:0cd196d4c53b891914316f18ab5cfa9f85258e057f3687e65332c70bf642d22d" already present on machine

kserve-ci-e2e-test

kubelet

isvc-sklearn-v2-runtime-predictor-6d84c876f4-2zh69

Pulled

Container image "docker.io/seldonio/mlserver:1.7.1" already present on machine

kserve-ci-e2e-test

kubelet

isvc-sklearn-v2-runtime-predictor-6d84c876f4-2zh69

Created

Created container: kserve-container

kserve-ci-e2e-test

kubelet

isvc-sklearn-v2-runtime-predictor-6d84c876f4-2zh69

Started

Started container kserve-container

kserve-ci-e2e-test

kubelet

isvc-sklearn-v2-runtime-predictor-6d84c876f4-2zh69

Pulled

Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine

kserve-ci-e2e-test

kubelet

isvc-sklearn-v2-runtime-predictor-6d84c876f4-2zh69

Created

Created container: kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-sklearn-v2-runtime-predictor-6d84c876f4-2zh69

Started

Started container kube-rbac-proxy
(x2)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-sklearn-v2-runtime-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x2)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-sklearn-v2-runtime-predictor

FailedGetResourceMetric

failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API

kserve-ci-e2e-test

kubelet

isvc-sklearn-v2-runtime-predictor-6d84c876f4-2zh69

Unhealthy

Readiness probe failed: HTTP probe failed with statuscode: 400
(x11)

kserve-ci-e2e-test

v1beta1Controllers

isvc-sklearn-v2-runtime

VirtualServiceCRDNotFound

Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.

kserve-ci-e2e-test

v1beta1Controllers

isvc-sklearn-v2-runtime

InferenceServiceReady

InferenceService [isvc-sklearn-v2-runtime] is Ready

kserve-ci-e2e-test

kubelet

isvc-sklearn-v2-runtime-predictor-6d84c876f4-2zh69

Killing

Stopping container kserve-container

kserve-ci-e2e-test

kubelet

isvc-sklearn-v2-runtime-predictor-6d84c876f4-2zh69

Killing

Stopping container kube-rbac-proxy
(x2)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-sklearn-v2-runtime-predictor

FailedGetResourceMetric

failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
(x2)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-sklearn-v2-runtime-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)

kserve-ci-e2e-test

deployment-controller

isvc-sklearn-v2-predictor

ScalingReplicaSet

Scaled up replica set isvc-sklearn-v2-predictor-69755fbb9 from 0 to 1

kserve-ci-e2e-test

v1beta1Controllers

isvc-sklearn-v2

UpdateFailed

Failed to update status for InferenceService "isvc-sklearn-v2": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-sklearn-v2": the object has been modified; please apply your changes to the latest version and try again

kserve-ci-e2e-test

replicaset-controller

isvc-sklearn-v2-predictor-69755fbb9

SuccessfulCreate

Created pod: isvc-sklearn-v2-predictor-69755fbb9-94sg8

kserve-ci-e2e-test

kubelet

isvc-sklearn-v2-predictor-69755fbb9-94sg8

Started

Started container storage-initializer

kserve-ci-e2e-test

kubelet

isvc-sklearn-v2-predictor-69755fbb9-94sg8

Created

Created container: storage-initializer

kserve-ci-e2e-test

kubelet

isvc-sklearn-v2-predictor-69755fbb9-94sg8

Pulled

Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:0cd196d4c53b891914316f18ab5cfa9f85258e057f3687e65332c70bf642d22d" already present on machine

kserve-ci-e2e-test

multus

isvc-sklearn-v2-predictor-69755fbb9-94sg8

AddedInterface

Add eth0 [10.133.0.51/23] from ovn-kubernetes

kserve-ci-e2e-test

kubelet

isvc-sklearn-v2-runtime-predictor-6d84c876f4-2zh69

Unhealthy

Readiness probe failed: Get "http://10.133.0.50:8080/v2/models/isvc-sklearn-v2-runtime/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers)

kserve-ci-e2e-test

kubelet

isvc-sklearn-v2-predictor-69755fbb9-94sg8

Created

Created container: kserve-container

kserve-ci-e2e-test

kubelet

isvc-sklearn-v2-predictor-69755fbb9-94sg8

Started

Started container kserve-container

kserve-ci-e2e-test

kubelet

isvc-sklearn-v2-predictor-69755fbb9-94sg8

Pulled

Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine

kserve-ci-e2e-test

kubelet

isvc-sklearn-v2-predictor-69755fbb9-94sg8

Created

Created container: kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-sklearn-v2-predictor-69755fbb9-94sg8

Started

Started container kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-sklearn-v2-predictor-69755fbb9-94sg8

Pulled

Container image "quay.io/opendatahub/sklearn-serving-runtime:odh-pr-1293" already present on machine
(x2)

kserve-ci-e2e-test

kubelet

isvc-sklearn-v2-runtime-predictor-6d84c876f4-2zh69

Unhealthy

Readiness probe failed: Get "https://10.133.0.50:8643/healthz": dial tcp 10.133.0.50:8643: connect: connection refused
(x2)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-sklearn-v2-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x2)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-sklearn-v2-predictor

FailedGetResourceMetric

failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x11)

kserve-ci-e2e-test

v1beta1Controllers

isvc-sklearn-v2

VirtualServiceCRDNotFound

Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.

kserve-ci-e2e-test

v1beta1Controllers

isvc-sklearn-v2

InferenceServiceReady

InferenceService [isvc-sklearn-v2] is Ready
(x3)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-sklearn-v2-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
(x3)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-sklearn-v2-predictor

FailedGetResourceMetric

failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)

kserve-ci-e2e-test

kubelet

isvc-sklearn-v2-predictor-69755fbb9-94sg8

Killing

Stopping container kserve-container

kserve-ci-e2e-test

kubelet

isvc-sklearn-v2-predictor-69755fbb9-94sg8

Killing

Stopping container kube-rbac-proxy

kserve-ci-e2e-test

v1beta1Controllers

isvc-sklearn-v2-mixed

UpdateFailed

Failed to update status for InferenceService "isvc-sklearn-v2-mixed": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-sklearn-v2-mixed": the object has been modified; please apply your changes to the latest version and try again

kserve-ci-e2e-test

deployment-controller

isvc-sklearn-v2-mixed-predictor

ScalingReplicaSet

Scaled up replica set isvc-sklearn-v2-mixed-predictor-7f8b779bc6 from 0 to 1

kserve-ci-e2e-test

kubelet

isvc-sklearn-v2-mixed-predictor-7f8b779bc6-h44hc

FailedMount

MountVolume.SetUp failed for volume "proxy-tls" : secret "isvc-sklearn-v2-mixed-predictor-serving-cert" not found

kserve-ci-e2e-test

replicaset-controller

isvc-sklearn-v2-mixed-predictor-7f8b779bc6

SuccessfulCreate

Created pod: isvc-sklearn-v2-mixed-predictor-7f8b779bc6-h44hc
(x9)

kserve-ci-e2e-test

kubelet

isvc-sklearn-v2-predictor-69755fbb9-94sg8

Unhealthy

Readiness probe failed: dial tcp 10.133.0.51:8080: connect: connection refused

kserve-ci-e2e-test

kubelet

isvc-sklearn-v2-predictor-69755fbb9-94sg8

Unhealthy

Readiness probe failed: Get "https://10.133.0.51:8643/healthz": dial tcp 10.133.0.51:8643: connect: connection refused

kserve-ci-e2e-test

kubelet

isvc-sklearn-v2-mixed-predictor-7f8b779bc6-h44hc

Started

Started container storage-initializer

kserve-ci-e2e-test

kubelet

isvc-sklearn-v2-mixed-predictor-7f8b779bc6-h44hc

Created

Created container: storage-initializer

kserve-ci-e2e-test

kubelet

isvc-sklearn-v2-mixed-predictor-7f8b779bc6-h44hc

Pulled

Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:0cd196d4c53b891914316f18ab5cfa9f85258e057f3687e65332c70bf642d22d" already present on machine

kserve-ci-e2e-test

multus

isvc-sklearn-v2-mixed-predictor-7f8b779bc6-h44hc

AddedInterface

Add eth0 [10.133.0.52/23] from ovn-kubernetes

kserve-ci-e2e-test

kubelet

isvc-sklearn-v2-mixed-predictor-7f8b779bc6-h44hc

Created

Created container: kserve-container

kserve-ci-e2e-test

kubelet

isvc-sklearn-v2-mixed-predictor-7f8b779bc6-h44hc

Started

Started container kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-sklearn-v2-mixed-predictor-7f8b779bc6-h44hc

Created

Created container: kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-sklearn-v2-mixed-predictor-7f8b779bc6-h44hc

Pulled

Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine

kserve-ci-e2e-test

kubelet

isvc-sklearn-v2-mixed-predictor-7f8b779bc6-h44hc

Started

Started container kserve-container

kserve-ci-e2e-test

kubelet

isvc-sklearn-v2-mixed-predictor-7f8b779bc6-h44hc

Pulled

Container image "quay.io/opendatahub/sklearn-serving-runtime:odh-pr-1293" already present on machine
(x4)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-sklearn-v2-mixed-predictor

FailedGetResourceMetric

failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x4)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-sklearn-v2-mixed-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x13)

kserve-ci-e2e-test

v1beta1Controllers

isvc-sklearn-v2-mixed

VirtualServiceCRDNotFound

Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.

kserve-ci-e2e-test

v1beta1Controllers

isvc-sklearn-v2-mixed

InferenceServiceReady

InferenceService [isvc-sklearn-v2-mixed] is Ready

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-sklearn-v2-mixed-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-sklearn-v2-mixed-predictor

FailedGetResourceMetric

failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)

kserve-ci-e2e-test

replicaset-controller

isvc-tensorflow-predictor-6756f669d7

SuccessfulCreate

Created pod: isvc-tensorflow-predictor-6756f669d7-tbfcg

kserve-ci-e2e-test

deployment-controller

isvc-tensorflow-predictor

ScalingReplicaSet

Scaled up replica set isvc-tensorflow-predictor-6756f669d7 from 0 to 1
(x2)

kserve-ci-e2e-test

v1beta1Controllers

isvc-tensorflow

UpdateFailed

Failed to update status for InferenceService "isvc-tensorflow": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-tensorflow": the object has been modified; please apply your changes to the latest version and try again

kserve-ci-e2e-test

v1beta1Controllers

isvc-tensorflow

InternalError

fails to update InferenceService status: Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-tensorflow": the object has been modified; please apply your changes to the latest version and try again

kserve-ci-e2e-test

kubelet

isvc-sklearn-v2-mixed-predictor-7f8b779bc6-h44hc

Killing

Stopping container kserve-container

kserve-ci-e2e-test

kubelet

isvc-sklearn-v2-mixed-predictor-7f8b779bc6-h44hc

Killing

Stopping container kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-tensorflow-predictor-6756f669d7-tbfcg

FailedMount

MountVolume.SetUp failed for volume "isvc-tensorflow-kube-rbac-proxy-sar-config" : failed to sync configmap cache: timed out waiting for the condition

kserve-ci-e2e-test

kubelet

isvc-tensorflow-predictor-6756f669d7-tbfcg

FailedMount

MountVolume.SetUp failed for volume "proxy-tls" : failed to sync secret cache: timed out waiting for the condition

kserve-ci-e2e-test

kubelet

isvc-tensorflow-predictor-6756f669d7-tbfcg

Started

Started container storage-initializer

kserve-ci-e2e-test

kubelet

isvc-tensorflow-predictor-6756f669d7-tbfcg

Created

Created container: storage-initializer

kserve-ci-e2e-test

multus

isvc-tensorflow-predictor-6756f669d7-tbfcg

AddedInterface

Add eth0 [10.134.0.29/23] from ovn-kubernetes

kserve-ci-e2e-test

kubelet

isvc-tensorflow-predictor-6756f669d7-tbfcg

Pulled

Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:0cd196d4c53b891914316f18ab5cfa9f85258e057f3687e65332c70bf642d22d" already present on machine
(x9)

kserve-ci-e2e-test

kubelet

isvc-sklearn-v2-mixed-predictor-7f8b779bc6-h44hc

Unhealthy

Readiness probe failed: dial tcp 10.133.0.52:8080: connect: connection refused

kserve-ci-e2e-test

kubelet

isvc-sklearn-v2-mixed-predictor-7f8b779bc6-h44hc

Unhealthy

Readiness probe failed: Get "https://10.133.0.52:8643/healthz": dial tcp 10.133.0.52:8643: connect: connection refused

kserve-ci-e2e-test

kubelet

isvc-tensorflow-predictor-6756f669d7-tbfcg

Pulling

Pulling image "tensorflow/serving:2.6.2"

kserve-ci-e2e-test

kubelet

isvc-tensorflow-predictor-6756f669d7-tbfcg

Pulled

Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine

kserve-ci-e2e-test

kubelet

isvc-tensorflow-predictor-6756f669d7-tbfcg

Started

Started container kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-tensorflow-predictor-6756f669d7-tbfcg

Created

Created container: kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-tensorflow-predictor-6756f669d7-tbfcg

Pulled

Successfully pulled image "tensorflow/serving:2.6.2" in 3.75s (3.75s including waiting). Image size: 425873876 bytes.

kserve-ci-e2e-test

kubelet

isvc-tensorflow-predictor-6756f669d7-tbfcg

Created

Created container: kserve-container

kserve-ci-e2e-test

kubelet

isvc-tensorflow-predictor-6756f669d7-tbfcg

Started

Started container kserve-container
(x3)

kserve-ci-e2e-test

kubelet

isvc-tensorflow-predictor-6756f669d7-tbfcg

Unhealthy

Readiness probe failed: dial tcp 10.134.0.29:8080: connect: connection refused
(x14)

kserve-ci-e2e-test

v1beta1Controllers

isvc-tensorflow

VirtualServiceCRDNotFound

Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.

kserve-ci-e2e-test

v1beta1Controllers

isvc-tensorflow

InferenceServiceReady

InferenceService [isvc-tensorflow] is Ready
(x2)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-tensorflow-predictor

FailedGetResourceMetric

failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x2)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-tensorflow-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API

kserve-ci-e2e-test

kubelet

isvc-tensorflow-predictor-6756f669d7-tbfcg

Killing

Stopping container kube-rbac-proxy

kserve-ci-e2e-test

replicaset-controller

isvc-tensorflow-runtime-predictor-8699d78cf

SuccessfulCreate

Created pod: isvc-tensorflow-runtime-predictor-8699d78cf-gbfnr

kserve-ci-e2e-test

kubelet

isvc-tensorflow-runtime-predictor-8699d78cf-gbfnr

FailedMount

MountVolume.SetUp failed for volume "proxy-tls" : secret "isvc-tensorflow-runtime-predictor-serving-cert" not found

kserve-ci-e2e-test

deployment-controller

isvc-tensorflow-runtime-predictor

ScalingReplicaSet

Scaled up replica set isvc-tensorflow-runtime-predictor-8699d78cf from 0 to 1

kserve-ci-e2e-test

kubelet

isvc-tensorflow-predictor-6756f669d7-tbfcg

Killing

Stopping container kserve-container

kserve-ci-e2e-test

v1beta1Controllers

isvc-tensorflow-runtime

UpdateFailed

Failed to update status for InferenceService "isvc-tensorflow-runtime": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-tensorflow-runtime": the object has been modified; please apply your changes to the latest version and try again

kserve-ci-e2e-test

kubelet

isvc-tensorflow-runtime-predictor-8699d78cf-gbfnr

Started

Started container storage-initializer

kserve-ci-e2e-test

kubelet

isvc-tensorflow-runtime-predictor-8699d78cf-gbfnr

Pulled

Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:0cd196d4c53b891914316f18ab5cfa9f85258e057f3687e65332c70bf642d22d" already present on machine

kserve-ci-e2e-test

kubelet

isvc-tensorflow-runtime-predictor-8699d78cf-gbfnr

Created

Created container: storage-initializer

kserve-ci-e2e-test

multus

isvc-tensorflow-runtime-predictor-8699d78cf-gbfnr

AddedInterface

Add eth0 [10.134.0.30/23] from ovn-kubernetes

kserve-ci-e2e-test

kubelet

isvc-tensorflow-runtime-predictor-8699d78cf-gbfnr

Pulled

Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine

kserve-ci-e2e-test

kubelet

isvc-tensorflow-runtime-predictor-8699d78cf-gbfnr

Pulled

Container image "tensorflow/serving:2.6.2" already present on machine

kserve-ci-e2e-test

kubelet

isvc-tensorflow-runtime-predictor-8699d78cf-gbfnr

Created

Created container: kserve-container

kserve-ci-e2e-test

kubelet

isvc-tensorflow-runtime-predictor-8699d78cf-gbfnr

Started

Started container kserve-container

kserve-ci-e2e-test

kubelet

isvc-tensorflow-runtime-predictor-8699d78cf-gbfnr

Started

Started container kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-tensorflow-runtime-predictor-8699d78cf-gbfnr

Created

Created container: kube-rbac-proxy
(x3)

kserve-ci-e2e-test

kubelet

isvc-tensorflow-runtime-predictor-8699d78cf-gbfnr

Unhealthy

Readiness probe failed: dial tcp 10.134.0.30:8080: connect: connection refused

kserve-ci-e2e-test

v1beta1Controllers

isvc-tensorflow-runtime

InferenceServiceReady

InferenceService [isvc-tensorflow-runtime] is Ready
(x12)

kserve-ci-e2e-test

v1beta1Controllers

isvc-tensorflow-runtime

VirtualServiceCRDNotFound

Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.
(x6)

kserve-ci-e2e-test

kubelet

isvc-tensorflow-predictor-6756f669d7-tbfcg

Unhealthy

Readiness probe failed: Get "https://10.134.0.29:8643/healthz": dial tcp 10.134.0.29:8643: connect: connection refused
(x2)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-tensorflow-runtime-predictor

FailedGetResourceMetric

failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x2)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-tensorflow-runtime-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API

kserve-ci-e2e-test

kubelet

isvc-tensorflow-runtime-predictor-8699d78cf-gbfnr

Killing

Stopping container kserve-container

kserve-ci-e2e-test

kubelet

isvc-tensorflow-runtime-predictor-8699d78cf-gbfnr

Killing

Stopping container kube-rbac-proxy

kserve-ci-e2e-test

replicaset-controller

isvc-triton-predictor-84bb65d94b

SuccessfulCreate

Created pod: isvc-triton-predictor-84bb65d94b-2fxfg

kserve-ci-e2e-test

deployment-controller

isvc-triton-predictor

ScalingReplicaSet

Scaled up replica set isvc-triton-predictor-84bb65d94b from 0 to 1

kserve-ci-e2e-test

v1beta1Controllers

isvc-triton

UpdateFailed

Failed to update status for InferenceService "isvc-triton": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-triton": the object has been modified; please apply your changes to the latest version and try again

kserve-ci-e2e-test

kubelet

isvc-triton-predictor-84bb65d94b-2fxfg

Pulled

Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:0cd196d4c53b891914316f18ab5cfa9f85258e057f3687e65332c70bf642d22d" already present on machine

kserve-ci-e2e-test

multus

isvc-triton-predictor-84bb65d94b-2fxfg

AddedInterface

Add eth0 [10.133.0.53/23] from ovn-kubernetes

kserve-ci-e2e-test

kubelet

isvc-triton-predictor-84bb65d94b-2fxfg

Created

Created container: storage-initializer

kserve-ci-e2e-test

kubelet

isvc-triton-predictor-84bb65d94b-2fxfg

Started

Started container storage-initializer

kserve-ci-e2e-test

kubelet

isvc-triton-predictor-84bb65d94b-2fxfg

Pulling

Pulling image "nvcr.io/nvidia/tritonserver:23.05-py3"
(x6)

kserve-ci-e2e-test

kubelet

isvc-tensorflow-runtime-predictor-8699d78cf-gbfnr

Unhealthy

Readiness probe failed: Get "https://10.134.0.30:8643/healthz": dial tcp 10.134.0.30:8643: connect: connection refused

kserve-ci-e2e-test

kubelet

isvc-triton-predictor-84bb65d94b-2fxfg

Created

Created container: kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-triton-predictor-84bb65d94b-2fxfg

Started

Started container kserve-container

kserve-ci-e2e-test

kubelet

isvc-triton-predictor-84bb65d94b-2fxfg

Pulled

Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine

kserve-ci-e2e-test

kubelet

isvc-triton-predictor-84bb65d94b-2fxfg

Created

Created container: kserve-container

kserve-ci-e2e-test

kubelet

isvc-triton-predictor-84bb65d94b-2fxfg

Started

Started container kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-triton-predictor-84bb65d94b-2fxfg

Pulled

Successfully pulled image "nvcr.io/nvidia/tritonserver:23.05-py3" in 1m54.27s (1m54.27s including waiting). Image size: 12907074623 bytes.
(x8)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-triton-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x8)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-triton-predictor

FailedGetResourceMetric

failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
kserve-ci-e2e-test | kubelet | isvc-triton-predictor-84bb65d94b-2fxfg | Unhealthy | Readiness probe failed: dial tcp 10.133.0.53:8080: connect: connection refused (x2)
kserve-ci-e2e-test | v1beta1Controllers | isvc-triton | InferenceServiceReady | InferenceService [isvc-triton] is Ready
kserve-ci-e2e-test | v1beta1Controllers | isvc-triton | VirtualServiceCRDNotFound | Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true. (x13)
kserve-ci-e2e-test | deployment-controller | isvc-xgboost-predictor | ScalingReplicaSet | Scaled up replica set isvc-xgboost-predictor-8689c4cfcc from 0 to 1
kserve-ci-e2e-test | v1beta1Controllers | isvc-xgboost | InternalError | fails to update InferenceService status: Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-xgboost": the object has been modified; please apply your changes to the latest version and try again
kserve-ci-e2e-test | kubelet | isvc-triton-predictor-84bb65d94b-2fxfg | Killing | Stopping container kserve-container
kserve-ci-e2e-test | kubelet | isvc-triton-predictor-84bb65d94b-2fxfg | Killing | Stopping container kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-triton-predictor-84bb65d94b-2fxfg | Unhealthy | Readiness probe failed: Get "https://10.133.0.53:8643/healthz": dial tcp 10.133.0.53:8643: connect: connection refused
kserve-ci-e2e-test | multus | isvc-xgboost-predictor-8689c4cfcc-l8dqn | AddedInterface | Add eth0 [10.134.0.31/23] from ovn-kubernetes
kserve-ci-e2e-test | replicaset-controller | isvc-xgboost-predictor-8689c4cfcc | SuccessfulCreate | Created pod: isvc-xgboost-predictor-8689c4cfcc-l8dqn
kserve-ci-e2e-test | kubelet | isvc-xgboost-predictor-8689c4cfcc-l8dqn | Pulled | Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:0cd196d4c53b891914316f18ab5cfa9f85258e057f3687e65332c70bf642d22d" already present on machine
kserve-ci-e2e-test | kubelet | isvc-xgboost-predictor-8689c4cfcc-l8dqn | Created | Created container: storage-initializer
kserve-ci-e2e-test | kubelet | isvc-xgboost-predictor-8689c4cfcc-l8dqn | Started | Started container storage-initializer
kserve-ci-e2e-test | v1beta1Controllers | isvc-xgboost | UpdateFailed | Failed to update status for InferenceService "isvc-xgboost": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-xgboost": the object has been modified; please apply your changes to the latest version and try again (x2)
kserve-ci-e2e-test | kubelet | isvc-xgboost-predictor-8689c4cfcc-l8dqn | Pulling | Pulling image "kserve/xgbserver:latest"
kserve-ci-e2e-test | kubelet | isvc-xgboost-predictor-8689c4cfcc-l8dqn | Started | Started container kserve-container
kserve-ci-e2e-test | kubelet | isvc-xgboost-predictor-8689c4cfcc-l8dqn | Pulled | Successfully pulled image "kserve/xgbserver:latest" in 19.604s (19.604s including waiting). Image size: 1306229499 bytes.
kserve-ci-e2e-test | kubelet | isvc-xgboost-predictor-8689c4cfcc-l8dqn | Created | Created container: kserve-container
kserve-ci-e2e-test | kubelet | isvc-xgboost-predictor-8689c4cfcc-l8dqn | Pulled | Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine
kserve-ci-e2e-test | kubelet | isvc-xgboost-predictor-8689c4cfcc-l8dqn | Created | Created container: kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-xgboost-predictor-8689c4cfcc-l8dqn | Started | Started container kube-rbac-proxy
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-xgboost-predictor | FailedGetResourceMetric | failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API (x3)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-xgboost-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API (x3)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-xgboost-predictor | FailedGetResourceMetric | failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready) (x3)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-xgboost-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready) (x3)
kserve-ci-e2e-test | v1beta1Controllers | isvc-xgboost | InferenceServiceReady | InferenceService [isvc-xgboost] is Ready
kserve-ci-e2e-test | v1beta1Controllers | isvc-xgboost | VirtualServiceCRDNotFound | Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true. (x12)
kserve-ci-e2e-test | replicaset-controller | isvc-xgboost-v2-mlserver-predictor-67d4bc6646 | SuccessfulCreate | Created pod: isvc-xgboost-v2-mlserver-predictor-67d4bc6646-7nnfv
kserve-ci-e2e-test | v1beta1Controllers | isvc-xgboost-v2-mlserver | UpdateFailed | Failed to update status for InferenceService "isvc-xgboost-v2-mlserver": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-xgboost-v2-mlserver": the object has been modified; please apply your changes to the latest version and try again
kserve-ci-e2e-test | kubelet | isvc-xgboost-predictor-8689c4cfcc-l8dqn | Killing | Stopping container kserve-container
kserve-ci-e2e-test | kubelet | isvc-xgboost-predictor-8689c4cfcc-l8dqn | Killing | Stopping container kube-rbac-proxy
kserve-ci-e2e-test | deployment-controller | isvc-xgboost-v2-mlserver-predictor | ScalingReplicaSet | Scaled up replica set isvc-xgboost-v2-mlserver-predictor-67d4bc6646 from 0 to 1
kserve-ci-e2e-test | kubelet | isvc-xgboost-predictor-8689c4cfcc-l8dqn | Unhealthy | Readiness probe failed: dial tcp 10.134.0.31:8080: connect: connection refused (x9)
kserve-ci-e2e-test | kubelet | isvc-xgboost-predictor-8689c4cfcc-l8dqn | Unhealthy | Readiness probe failed: Get "https://10.134.0.31:8643/healthz": dial tcp 10.134.0.31:8643: connect: connection refused
kserve-ci-e2e-test | kubelet | isvc-xgboost-v2-mlserver-predictor-67d4bc6646-7nnfv | Pulled | Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:0cd196d4c53b891914316f18ab5cfa9f85258e057f3687e65332c70bf642d22d" already present on machine
kserve-ci-e2e-test | multus | isvc-xgboost-v2-mlserver-predictor-67d4bc6646-7nnfv | AddedInterface | Add eth0 [10.133.0.54/23] from ovn-kubernetes
kserve-ci-e2e-test | kubelet | isvc-xgboost-v2-mlserver-predictor-67d4bc6646-7nnfv | Started | Started container storage-initializer
kserve-ci-e2e-test | kubelet | isvc-xgboost-v2-mlserver-predictor-67d4bc6646-7nnfv | Created | Created container: storage-initializer
kserve-ci-e2e-test | kubelet | isvc-xgboost-v2-mlserver-predictor-67d4bc6646-7nnfv | Created | Created container: kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-xgboost-v2-mlserver-predictor-67d4bc6646-7nnfv | Started | Started container kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-xgboost-v2-mlserver-predictor-67d4bc6646-7nnfv | Started | Started container kserve-container
kserve-ci-e2e-test | kubelet | isvc-xgboost-v2-mlserver-predictor-67d4bc6646-7nnfv | Created | Created container: kserve-container
kserve-ci-e2e-test | kubelet | isvc-xgboost-v2-mlserver-predictor-67d4bc6646-7nnfv | Pulled | Container image "docker.io/seldonio/mlserver:1.7.1" already present on machine
kserve-ci-e2e-test | kubelet | isvc-xgboost-v2-mlserver-predictor-67d4bc6646-7nnfv | Pulled | Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-xgboost-v2-mlserver-predictor | FailedGetResourceMetric | failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API (x2)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-xgboost-v2-mlserver-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API (x2)
kserve-ci-e2e-test | v1beta1Controllers | isvc-xgboost-v2-mlserver | InferenceServiceReady | InferenceService [isvc-xgboost-v2-mlserver] is Ready
kserve-ci-e2e-test | v1beta1Controllers | isvc-xgboost-v2-mlserver | VirtualServiceCRDNotFound | Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true. (x13)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-xgboost-v2-mlserver-predictor | FailedGetResourceMetric | failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-xgboost-v2-mlserver-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
kserve-ci-e2e-test | v1beta1Controllers | xgboost-v2-mlserver | UpdateFailed | Failed to update status for InferenceService "xgboost-v2-mlserver": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "xgboost-v2-mlserver": the object has been modified; please apply your changes to the latest version and try again
kserve-ci-e2e-test | deployment-controller | xgboost-v2-mlserver-predictor | ScalingReplicaSet | Scaled up replica set xgboost-v2-mlserver-predictor-7799869d6f from 0 to 1
kserve-ci-e2e-test | replicaset-controller | xgboost-v2-mlserver-predictor-7799869d6f | SuccessfulCreate | Created pod: xgboost-v2-mlserver-predictor-7799869d6f-hk67v
kserve-ci-e2e-test | kubelet | isvc-xgboost-v2-mlserver-predictor-67d4bc6646-7nnfv | Killing | Stopping container kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-xgboost-v2-mlserver-predictor-67d4bc6646-7nnfv | Killing | Stopping container kserve-container
kserve-ci-e2e-test | kubelet | xgboost-v2-mlserver-predictor-7799869d6f-hk67v | FailedMount | MountVolume.SetUp failed for volume "proxy-tls" : secret "xgboost-v2-mlserver-predictor-serving-cert" not found
kserve-ci-e2e-test | kubelet | xgboost-v2-mlserver-predictor-7799869d6f-hk67v | Created | Created container: storage-initializer
kserve-ci-e2e-test | kubelet | xgboost-v2-mlserver-predictor-7799869d6f-hk67v | Pulled | Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:0cd196d4c53b891914316f18ab5cfa9f85258e057f3687e65332c70bf642d22d" already present on machine
kserve-ci-e2e-test | multus | xgboost-v2-mlserver-predictor-7799869d6f-hk67v | AddedInterface | Add eth0 [10.133.0.55/23] from ovn-kubernetes
kserve-ci-e2e-test | kubelet | xgboost-v2-mlserver-predictor-7799869d6f-hk67v | Started | Started container storage-initializer
kserve-ci-e2e-test | kubelet | isvc-xgboost-v2-mlserver-predictor-67d4bc6646-7nnfv | Unhealthy | Readiness probe failed: Get "http://10.133.0.54:8080/v2/models/isvc-xgboost-v2-mlserver/ready": dial tcp 10.133.0.54:8080: connect: connection refused
kserve-ci-e2e-test | kubelet | isvc-xgboost-v2-mlserver-predictor-67d4bc6646-7nnfv | Unhealthy | Readiness probe failed: Get "https://10.133.0.54:8643/healthz": dial tcp 10.133.0.54:8643: connect: connection refused
kserve-ci-e2e-test | kubelet | xgboost-v2-mlserver-predictor-7799869d6f-hk67v | Created | Created container: kube-rbac-proxy
kserve-ci-e2e-test | kubelet | xgboost-v2-mlserver-predictor-7799869d6f-hk67v | Pulled | Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine
kserve-ci-e2e-test | kubelet | xgboost-v2-mlserver-predictor-7799869d6f-hk67v | Pulled | Container image "docker.io/seldonio/mlserver:1.7.1" already present on machine
kserve-ci-e2e-test | kubelet | xgboost-v2-mlserver-predictor-7799869d6f-hk67v | Created | Created container: kserve-container
kserve-ci-e2e-test | kubelet | xgboost-v2-mlserver-predictor-7799869d6f-hk67v | Started | Started container kserve-container
kserve-ci-e2e-test | kubelet | xgboost-v2-mlserver-predictor-7799869d6f-hk67v | Started | Started container kube-rbac-proxy
kserve-ci-e2e-test | horizontal-pod-autoscaler | xgboost-v2-mlserver-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API (x2)
kserve-ci-e2e-test | horizontal-pod-autoscaler | xgboost-v2-mlserver-predictor | FailedGetResourceMetric | failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API (x2)
kserve-ci-e2e-test | v1beta1Controllers | xgboost-v2-mlserver | InferenceServiceReady | InferenceService [xgboost-v2-mlserver] is Ready
kserve-ci-e2e-test | v1beta1Controllers | xgboost-v2-mlserver | VirtualServiceCRDNotFound | Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true. (x12)
kserve-ci-e2e-test | horizontal-pod-autoscaler | xgboost-v2-mlserver-predictor | FailedGetResourceMetric | failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
kserve-ci-e2e-test | horizontal-pod-autoscaler | xgboost-v2-mlserver-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
kserve-ci-e2e-test | kubelet | xgboost-v2-mlserver-predictor-7799869d6f-hk67v | Killing | Stopping container kube-rbac-proxy
kserve-ci-e2e-test | deployment-controller | isvc-xgboost-runtime-predictor | ScalingReplicaSet | Scaled up replica set isvc-xgboost-runtime-predictor-779db84d9 from 0 to 1
kserve-ci-e2e-test | kubelet | xgboost-v2-mlserver-predictor-7799869d6f-hk67v | Killing | Stopping container kserve-container
kserve-ci-e2e-test | kubelet | isvc-xgboost-runtime-predictor-779db84d9-swtpp | Started | Started container storage-initializer
kserve-ci-e2e-test | kubelet | isvc-xgboost-runtime-predictor-779db84d9-swtpp | Created | Created container: storage-initializer
kserve-ci-e2e-test | v1beta1Controllers | isvc-xgboost-runtime | InternalError | fails to update InferenceService status: Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-xgboost-runtime": the object has been modified; please apply your changes to the latest version and try again
kserve-ci-e2e-test | v1beta1Controllers | isvc-xgboost-runtime | UpdateFailed | Failed to update status for InferenceService "isvc-xgboost-runtime": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-xgboost-runtime": the object has been modified; please apply your changes to the latest version and try again (x2)
kserve-ci-e2e-test | replicaset-controller | isvc-xgboost-runtime-predictor-779db84d9 | SuccessfulCreate | Created pod: isvc-xgboost-runtime-predictor-779db84d9-swtpp
kserve-ci-e2e-test | kubelet | isvc-xgboost-runtime-predictor-779db84d9-swtpp | Pulled | Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:0cd196d4c53b891914316f18ab5cfa9f85258e057f3687e65332c70bf642d22d" already present on machine
kserve-ci-e2e-test | multus | isvc-xgboost-runtime-predictor-779db84d9-swtpp | AddedInterface | Add eth0 [10.134.0.32/23] from ovn-kubernetes
kserve-ci-e2e-test | kubelet | xgboost-v2-mlserver-predictor-7799869d6f-hk67v | Unhealthy | Readiness probe failed: Get "https://10.133.0.55:8643/healthz": dial tcp 10.133.0.55:8643: connect: connection refused
kserve-ci-e2e-test | kubelet | isvc-xgboost-runtime-predictor-779db84d9-swtpp | Created | Created container: kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-xgboost-runtime-predictor-779db84d9-swtpp | Started | Started container kserve-container
kserve-ci-e2e-test | kubelet | isvc-xgboost-runtime-predictor-779db84d9-swtpp | Created | Created container: kserve-container
kserve-ci-e2e-test | kubelet | isvc-xgboost-runtime-predictor-779db84d9-swtpp | Pulled | Container image "kserve/xgbserver:latest" already present on machine
kserve-ci-e2e-test | kubelet | isvc-xgboost-runtime-predictor-779db84d9-swtpp | Pulled | Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine
kserve-ci-e2e-test | kubelet | isvc-xgboost-runtime-predictor-779db84d9-swtpp | Started | Started container kube-rbac-proxy
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-xgboost-runtime-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API (x3)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-xgboost-runtime-predictor | FailedGetResourceMetric | failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API (x3)
kserve-ci-e2e-test | v1beta1Controllers | isvc-xgboost-runtime | InferenceServiceReady | InferenceService [isvc-xgboost-runtime] is Ready
kserve-ci-e2e-test | v1beta1Controllers | isvc-xgboost-runtime | VirtualServiceCRDNotFound | Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true. (x12)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-xgboost-runtime-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready) (x2)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-xgboost-runtime-predictor | FailedGetResourceMetric | failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready) (x2)
kserve-ci-e2e-test | kubelet | isvc-xgboost-runtime-predictor-779db84d9-swtpp | Killing | Stopping container kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-xgboost-runtime-predictor-779db84d9-swtpp | Killing | Stopping container kserve-container
kserve-ci-e2e-test | v1beta1Controllers | isvc-xgboost-v2-runtime | InternalError | fails to update InferenceService status: Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-xgboost-v2-runtime": the object has been modified; please apply your changes to the latest version and try again
kserve-ci-e2e-test | v1beta1Controllers | isvc-xgboost-v2-runtime | UpdateFailed | Failed to update status for InferenceService "isvc-xgboost-v2-runtime": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-xgboost-v2-runtime": the object has been modified; please apply your changes to the latest version and try again (x2)
kserve-ci-e2e-test | deployment-controller | isvc-xgboost-v2-runtime-predictor | ScalingReplicaSet | Scaled up replica set isvc-xgboost-v2-runtime-predictor-6dc5954dc from 0 to 1
kserve-ci-e2e-test | replicaset-controller | isvc-xgboost-v2-runtime-predictor-6dc5954dc | SuccessfulCreate | Created pod: isvc-xgboost-v2-runtime-predictor-6dc5954dc-q7hxv
kserve-ci-e2e-test | kubelet | isvc-xgboost-v2-runtime-predictor-6dc5954dc-q7hxv | FailedMount | MountVolume.SetUp failed for volume "proxy-tls" : secret "isvc-xgboost-v2-runtime-predictor-serving-cert" not found
kserve-ci-e2e-test | kubelet | isvc-xgboost-v2-runtime-predictor-6dc5954dc-q7hxv | Created | Created container: storage-initializer
kserve-ci-e2e-test | kubelet | isvc-xgboost-v2-runtime-predictor-6dc5954dc-q7hxv | Pulled | Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:0cd196d4c53b891914316f18ab5cfa9f85258e057f3687e65332c70bf642d22d" already present on machine
kserve-ci-e2e-test | multus | isvc-xgboost-v2-runtime-predictor-6dc5954dc-q7hxv | AddedInterface | Add eth0 [10.133.0.56/23] from ovn-kubernetes
kserve-ci-e2e-test | kubelet | isvc-xgboost-runtime-predictor-779db84d9-swtpp | Unhealthy | Readiness probe failed: dial tcp 10.134.0.32:8080: connect: connection refused (x9)
kserve-ci-e2e-test | kubelet | isvc-xgboost-v2-runtime-predictor-6dc5954dc-q7hxv | Started | Started container storage-initializer
kserve-ci-e2e-test | kubelet | isvc-xgboost-runtime-predictor-779db84d9-swtpp | Unhealthy | Readiness probe failed: Get "https://10.134.0.32:8643/healthz": dial tcp 10.134.0.32:8643: connect: connection refused
kserve-ci-e2e-test | kubelet | isvc-xgboost-v2-runtime-predictor-6dc5954dc-q7hxv | Created | Created container: kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-xgboost-v2-runtime-predictor-6dc5954dc-q7hxv | Started | Started container kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-xgboost-v2-runtime-predictor-6dc5954dc-q7hxv | Pulled | Container image "docker.io/seldonio/mlserver:1.7.1" already present on machine
kserve-ci-e2e-test | kubelet | isvc-xgboost-v2-runtime-predictor-6dc5954dc-q7hxv | Pulled | Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine
kserve-ci-e2e-test | kubelet | isvc-xgboost-v2-runtime-predictor-6dc5954dc-q7hxv | Created | Created container: kserve-container
kserve-ci-e2e-test | kubelet | isvc-xgboost-v2-runtime-predictor-6dc5954dc-q7hxv | Started | Started container kserve-container
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-xgboost-v2-runtime-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API (x2)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-xgboost-v2-runtime-predictor | FailedGetResourceMetric | failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API (x2)
kserve-ci-e2e-test | kubelet | isvc-xgboost-v2-runtime-predictor-6dc5954dc-q7hxv | Unhealthy | Readiness probe failed: HTTP probe failed with statuscode: 400
kserve-ci-e2e-test | v1beta1Controllers | isvc-xgboost-v2-runtime | InferenceServiceReady | InferenceService [isvc-xgboost-v2-runtime] is Ready
kserve-ci-e2e-test | v1beta1Controllers | isvc-xgboost-v2-runtime | VirtualServiceCRDNotFound | Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true. (x13)
kserve-ci-e2e-test | kubelet | isvc-xgboost-v2-runtime-predictor-6dc5954dc-q7hxv | Killing | Stopping container kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-xgboost-v2-runtime-predictor-6dc5954dc-q7hxv | Killing | Stopping container kserve-container
kserve-ci-e2e-test | deployment-controller | isvc-xgboost-v2-predictor | ScalingReplicaSet | Scaled up replica set isvc-xgboost-v2-predictor-6fcdd6977c from 0 to 1
kserve-ci-e2e-test | replicaset-controller | isvc-xgboost-v2-predictor-6fcdd6977c | SuccessfulCreate | Created pod: isvc-xgboost-v2-predictor-6fcdd6977c-gvgd6
kserve-ci-e2e-test | v1beta1Controllers | isvc-xgboost-v2 | UpdateFailed | Failed to update status for InferenceService "isvc-xgboost-v2": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-xgboost-v2": the object has been modified; please apply your changes to the latest version and try again
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-xgboost-v2-runtime-predictor | FailedGetResourceMetric | failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready) (x2)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-xgboost-v2-runtime-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready) (x2)
kserve-ci-e2e-test | kubelet | isvc-xgboost-v2-predictor-6fcdd6977c-gvgd6 | FailedMount | MountVolume.SetUp failed for volume "proxy-tls" : secret "isvc-xgboost-v2-predictor-serving-cert" not found
kserve-ci-e2e-test | multus | isvc-xgboost-v2-predictor-6fcdd6977c-gvgd6 | AddedInterface | Add eth0 [10.134.0.33/23] from ovn-kubernetes
kserve-ci-e2e-test | kubelet | isvc-xgboost-v2-predictor-6fcdd6977c-gvgd6 | Started | Started container storage-initializer
kserve-ci-e2e-test | kubelet | isvc-xgboost-v2-predictor-6fcdd6977c-gvgd6 | Pulled | Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:0cd196d4c53b891914316f18ab5cfa9f85258e057f3687e65332c70bf642d22d" already present on machine
kserve-ci-e2e-test | kubelet | isvc-xgboost-v2-predictor-6fcdd6977c-gvgd6 | Created | Created container: storage-initializer
kserve-ci-e2e-test | kubelet | isvc-xgboost-v2-runtime-predictor-6dc5954dc-q7hxv | Unhealthy | Readiness probe failed: Get "https://10.133.0.56:8643/healthz": dial tcp 10.133.0.56:8643: connect: connection refused
kserve-ci-e2e-test | kubelet | isvc-xgboost-v2-predictor-6fcdd6977c-gvgd6 | Pulled | Container image "kserve/xgbserver:latest" already present on machine
kserve-ci-e2e-test | kubelet | isvc-xgboost-v2-predictor-6fcdd6977c-gvgd6 | Created | Created container: kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-xgboost-v2-predictor-6fcdd6977c-gvgd6 | Pulled | Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine
kserve-ci-e2e-test | kubelet | isvc-xgboost-v2-predictor-6fcdd6977c-gvgd6 | Started | Started container kserve-container
kserve-ci-e2e-test | kubelet | isvc-xgboost-v2-predictor-6fcdd6977c-gvgd6 | Created | Created container: kserve-container
kserve-ci-e2e-test | kubelet | isvc-xgboost-v2-predictor-6fcdd6977c-gvgd6 | Started | Started container kube-rbac-proxy
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-xgboost-v2-predictor | FailedGetResourceMetric | failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-xgboost-v2-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
kserve-ci-e2e-test | v1beta1Controllers | isvc-xgboost-v2 | VirtualServiceCRDNotFound | Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true. (x12)
kserve-ci-e2e-test | v1beta1Controllers | isvc-xgboost-v2 | InferenceServiceReady | InferenceService [isvc-xgboost-v2] is Ready
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-xgboost-v2-predictor | FailedGetResourceMetric | failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready) (x4)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-xgboost-v2-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready) (x4)
kserve-ci-e2e-test | v1beta1Controllers | isvc-sklearn-s3 | UpdateFailed | Failed to update status for InferenceService "isvc-sklearn-s3": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-sklearn-s3": the object has been modified; please apply your changes to the latest version and try again
kserve-ci-e2e-test | kubelet | isvc-xgboost-v2-predictor-6fcdd6977c-gvgd6 | Killing | Stopping container kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-xgboost-v2-predictor-6fcdd6977c-gvgd6 | Killing | Stopping container kserve-container
kserve-ci-e2e-test | deployment-controller | isvc-sklearn-s3-predictor | ScalingReplicaSet | Scaled up replica set isvc-sklearn-s3-predictor-88457d696 from 0 to 1
kserve-ci-e2e-test | replicaset-controller | isvc-sklearn-s3-predictor-88457d696 | SuccessfulCreate | Created pod: isvc-sklearn-s3-predictor-88457d696-jcz4m
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-predictor-88457d696-jcz4m | Started | Started container storage-initializer
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-predictor-88457d696-jcz4m | Pulled | Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:0cd196d4c53b891914316f18ab5cfa9f85258e057f3687e65332c70bf642d22d" already present on machine
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-predictor-88457d696-jcz4m | Created | Created container: storage-initializer
kserve-ci-e2e-test | multus | isvc-sklearn-s3-predictor-88457d696-jcz4m | AddedInterface | Add eth0 [10.133.0.57/23] from ovn-kubernetes
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-predictor-88457d696-jcz4m | Pulled | Container image "quay.io/opendatahub/sklearn-serving-runtime:odh-pr-1293" already present on machine
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-predictor-88457d696-jcz4m | Created | Created container: kserve-container
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-predictor-88457d696-jcz4m | Started | Started container kserve-container
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-predictor-88457d696-jcz4m | Pulled | Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-predictor-88457d696-jcz4m | Created | Created container: kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-predictor-88457d696-jcz4m | Started | Started container kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-xgboost-v2-predictor-6fcdd6977c-gvgd6 | Unhealthy | Readiness probe failed: Get "https://10.134.0.33:8643/healthz": dial tcp 10.134.0.33:8643: connect: connection refused
kserve-ci-e2e-test | kubelet | isvc-xgboost-v2-predictor-6fcdd6977c-gvgd6 | Unhealthy | Readiness probe failed: dial tcp 10.134.0.33:8080: connect: connection refused (x9)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-sklearn-s3-predictor | FailedGetResourceMetric | failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API (x2)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-sklearn-s3-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API (x2)
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-predictor-88457d696-jcz4m | Unhealthy | Readiness probe failed: dial tcp 10.133.0.57:8080: connect: connection refused (x8)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-sklearn-s3-predictor | FailedGetResourceMetric | failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready) (x2)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-sklearn-s3-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready) (x2)
kserve-ci-e2e-test | v1beta1Controllers | isvc-sklearn-s3 | VirtualServiceCRDNotFound | Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true. (x12)
kserve-ci-e2e-test | v1beta1Controllers | isvc-sklearn-s3 | InferenceServiceReady | InferenceService [isvc-sklearn-s3] is Ready
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-predictor-88457d696-jcz4m | Killing | Stopping container kube-rbac-proxy
kserve-ci-e2e-test | replicaset-controller | isvc-sklearn-s3-tls-global-pass-predictor-5488974f76 | SuccessfulCreate | Created pod: isvc-sklearn-s3-tls-global-pass-predictor-5488974f76-czh6x
kserve-ci-e2e-test | deployment-controller | isvc-sklearn-s3-tls-global-pass-predictor | ScalingReplicaSet | Scaled up replica set isvc-sklearn-s3-tls-global-pass-predictor-5488974f76 from 0 to 1
kserve-ci-e2e-test | v1beta1Controllers | isvc-sklearn-s3-tls-global-pass | InternalError | fails to update InferenceService status: Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-sklearn-s3-tls-global-pass": the object has been modified; please apply your changes to the latest version and try again
kserve-ci-e2e-test | v1beta1Controllers | isvc-sklearn-s3-tls-global-pass | UpdateFailed | Failed to update status for InferenceService "isvc-sklearn-s3-tls-global-pass": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-sklearn-s3-tls-global-pass": the object has been modified; please apply your changes to the latest version and try again (x2)
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-predictor-88457d696-jcz4m | Killing | Stopping container kserve-container
kserve-ci-e2e-test | multus | isvc-sklearn-s3-tls-global-pass-predictor-5488974f76-czh6x | AddedInterface | Add eth0 [10.133.0.58/23] from ovn-kubernetes
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-global-pass-predictor-5488974f76-czh6x | Started | Started container storage-initializer
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-global-pass-predictor-5488974f76-czh6x | Created | Created container: storage-initializer
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-global-pass-predictor-5488974f76-czh6x | Pulled | Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:0cd196d4c53b891914316f18ab5cfa9f85258e057f3687e65332c70bf642d22d" already present on machine
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-global-pass-predictor-5488974f76-czh6x | Started | Started container kserve-container
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-global-pass-predictor-5488974f76-czh6x | Started | Started container kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-global-pass-predictor-5488974f76-czh6x | Pulled | Container image "quay.io/opendatahub/sklearn-serving-runtime:odh-pr-1293" already present on machine
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-global-pass-predictor-5488974f76-czh6x | Created | Created container: kserve-container
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-global-pass-predictor-5488974f76-czh6x | Pulled | Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-global-pass-predictor-5488974f76-czh6x | Created | Created container: kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-predictor-88457d696-jcz4m | Unhealthy | Readiness probe failed: Get "https://10.133.0.57:8643/healthz": dial tcp 10.133.0.57:8643: connect: connection refused
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-sklearn-s3-tls-global-pass-predictor | FailedGetResourceMetric | failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-sklearn-s3-tls-global-pass-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-global-pass-predictor-5488974f76-czh6x | Unhealthy | Readiness probe failed: dial tcp 10.133.0.58:8080: connect: connection refused (x8)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-sklearn-s3-tls-global-pass-predictor | FailedGetResourceMetric | failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready) (x3)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-sklearn-s3-tls-global-pass-predictor | FailedComputeMetricsReplicas (x3) |

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)

kserve-ci-e2e-test

kubelet

isvc-sklearn-s3-tls-global-pass-predictor-5488974f76-czh6x

Killing

Stopping container kserve-container

kserve-ci-e2e-test

kubelet

isvc-sklearn-s3-tls-global-pass-predictor-5488974f76-czh6x

Killing

Stopping container kube-rbac-proxy

kserve-ci-e2e-test

v1beta1Controllers

isvc-sklearn-s3-tls-global-pass

InferenceServiceReady

InferenceService [isvc-sklearn-s3-tls-global-pass] is Ready
(x12)

kserve-ci-e2e-test

v1beta1Controllers

isvc-sklearn-s3-tls-global-pass

VirtualServiceCRDNotFound

Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.

kserve-ci-e2e-test

replicaset-controller

isvc-sklearn-s3-tls-global-fail-predictor-54884788bb

SuccessfulCreate

Created pod: isvc-sklearn-s3-tls-global-fail-predictor-54884788bb-qvq2x

kserve-ci-e2e-test

v1beta1Controllers

isvc-sklearn-s3-tls-global-fail

UpdateFailed

Failed to update status for InferenceService "isvc-sklearn-s3-tls-global-fail": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-sklearn-s3-tls-global-fail": the object has been modified; please apply your changes to the latest version and try again

kserve-ci-e2e-test

deployment-controller

isvc-sklearn-s3-tls-global-fail-predictor

ScalingReplicaSet

Scaled up replica set isvc-sklearn-s3-tls-global-fail-predictor-54884788bb from 0 to 1

kserve-ci-e2e-test

multus

isvc-sklearn-s3-tls-global-fail-predictor-54884788bb-qvq2x

AddedInterface

Add eth0 [10.133.0.59/23] from ovn-kubernetes
(x2)

kserve-ci-e2e-test

kubelet

isvc-sklearn-s3-tls-global-fail-predictor-54884788bb-qvq2x

Started

Started container storage-initializer
(x2)

kserve-ci-e2e-test

kubelet

isvc-sklearn-s3-tls-global-fail-predictor-54884788bb-qvq2x

Created

Created container: storage-initializer
(x2)

kserve-ci-e2e-test

kubelet

isvc-sklearn-s3-tls-global-fail-predictor-54884788bb-qvq2x

Pulled

Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:0cd196d4c53b891914316f18ab5cfa9f85258e057f3687e65332c70bf642d22d" already present on machine

kserve-ci-e2e-test

kubelet

isvc-sklearn-s3-tls-global-fail-predictor-54884788bb-qvq2x

BackOff

Back-off restarting failed container storage-initializer in pod isvc-sklearn-s3-tls-global-fail-predictor-54884788bb-qvq2x_kserve-ci-e2e-test(c9035fd2-ffab-4ad5-899d-8f02d9c66079)
(x13)

kserve-ci-e2e-test

v1beta1Controllers

isvc-sklearn-s3-tls-global-fail

VirtualServiceCRDNotFound

Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.

kserve-ci-e2e-test

v1beta1Controllers

isvc-sklearn-s3-tls-custom-pass

UpdateFailed

Failed to update status for InferenceService "isvc-sklearn-s3-tls-custom-pass": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-sklearn-s3-tls-custom-pass": the object has been modified; please apply your changes to the latest version and try again

kserve-ci-e2e-test

deployment-controller

isvc-sklearn-s3-tls-custom-pass-predictor

ScalingReplicaSet

Scaled up replica set isvc-sklearn-s3-tls-custom-pass-predictor-7877ccc664 from 0 to 1

kserve-ci-e2e-test

replicaset-controller

isvc-sklearn-s3-tls-custom-pass-predictor-7877ccc664

SuccessfulCreate

Created pod: isvc-sklearn-s3-tls-custom-pass-predictor-7877ccc664-c8k2v

kserve-ci-e2e-test

kubelet

isvc-sklearn-s3-tls-custom-pass-predictor-7877ccc664-c8k2v

Created

Created container: storage-initializer

kserve-ci-e2e-test

kubelet

isvc-sklearn-s3-tls-custom-pass-predictor-7877ccc664-c8k2v

Started

Started container storage-initializer

kserve-ci-e2e-test

kubelet

isvc-sklearn-s3-tls-custom-pass-predictor-7877ccc664-c8k2v

Pulled

Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:0cd196d4c53b891914316f18ab5cfa9f85258e057f3687e65332c70bf642d22d" already present on machine

kserve-ci-e2e-test

multus

isvc-sklearn-s3-tls-custom-pass-predictor-7877ccc664-c8k2v

AddedInterface

Add eth0 [10.133.0.60/23] from ovn-kubernetes

kserve-ci-e2e-test

kubelet

isvc-sklearn-s3-tls-custom-pass-predictor-7877ccc664-c8k2v

Started

Started container kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-sklearn-s3-tls-custom-pass-predictor-7877ccc664-c8k2v

Pulled

Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine

kserve-ci-e2e-test

kubelet

isvc-sklearn-s3-tls-custom-pass-predictor-7877ccc664-c8k2v

Started

Started container kserve-container

kserve-ci-e2e-test

kubelet

isvc-sklearn-s3-tls-custom-pass-predictor-7877ccc664-c8k2v

Created

Created container: kserve-container

kserve-ci-e2e-test

kubelet

isvc-sklearn-s3-tls-custom-pass-predictor-7877ccc664-c8k2v

Pulled

Container image "quay.io/opendatahub/sklearn-serving-runtime:odh-pr-1293" already present on machine

kserve-ci-e2e-test

kubelet

isvc-sklearn-s3-tls-custom-pass-predictor-7877ccc664-c8k2v

Created

Created container: kube-rbac-proxy

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-sklearn-s3-tls-custom-pass-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-sklearn-s3-tls-custom-pass-predictor

FailedGetResourceMetric

failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x8)

kserve-ci-e2e-test

kubelet

isvc-sklearn-s3-tls-custom-pass-predictor-7877ccc664-c8k2v

Unhealthy

Readiness probe failed: dial tcp 10.133.0.60:8080: connect: connection refused
(x3)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-sklearn-s3-tls-custom-pass-predictor

FailedGetResourceMetric

failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
(x3)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-sklearn-s3-tls-custom-pass-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
(x12)

kserve-ci-e2e-test

v1beta1Controllers

isvc-sklearn-s3-tls-custom-pass

VirtualServiceCRDNotFound

Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.

kserve-ci-e2e-test

v1beta1Controllers

isvc-sklearn-s3-tls-custom-pass

InferenceServiceReady

InferenceService [isvc-sklearn-s3-tls-custom-pass] is Ready

kserve-ci-e2e-test

kubelet

isvc-sklearn-s3-tls-custom-pass-predictor-7877ccc664-c8k2v

Killing

Stopping container kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-sklearn-s3-tls-custom-pass-predictor-7877ccc664-c8k2v

Killing

Stopping container kserve-container

kserve-ci-e2e-test

deployment-controller

isvc-sklearn-s3-tls-custom-fail-predictor

ScalingReplicaSet

Scaled up replica set isvc-sklearn-s3-tls-custom-fail-predictor-7d65b5b7cd from 0 to 1

kserve-ci-e2e-test

replicaset-controller

isvc-sklearn-s3-tls-custom-fail-predictor-7d65b5b7cd

SuccessfulCreate

Created pod: isvc-sklearn-s3-tls-custom-fail-predictor-7d65b5b7cd-hqrvb

kserve-ci-e2e-test

multus

isvc-sklearn-s3-tls-custom-fail-predictor-7d65b5b7cd-hqrvb

AddedInterface

Add eth0 [10.133.0.61/23] from ovn-kubernetes

kserve-ci-e2e-test

v1beta1Controllers

isvc-sklearn-s3-tls-custom-fail

UpdateFailed

Failed to update status for InferenceService "isvc-sklearn-s3-tls-custom-fail": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-sklearn-s3-tls-custom-fail": the object has been modified; please apply your changes to the latest version and try again
(x2)

kserve-ci-e2e-test

kubelet

isvc-sklearn-s3-tls-custom-fail-predictor-7d65b5b7cd-hqrvb

Pulled

Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:0cd196d4c53b891914316f18ab5cfa9f85258e057f3687e65332c70bf642d22d" already present on machine
(x10)

kserve-ci-e2e-test

v1beta1Controllers

isvc-sklearn-s3-tls-custom-fail

VirtualServiceCRDNotFound

Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.
(x2)

kserve-ci-e2e-test

kubelet

isvc-sklearn-s3-tls-custom-fail-predictor-7d65b5b7cd-hqrvb

Created

Created container: storage-initializer
(x2)

kserve-ci-e2e-test

kubelet

isvc-sklearn-s3-tls-custom-fail-predictor-7d65b5b7cd-hqrvb

Started

Started container storage-initializer

kserve-ci-e2e-test

kubelet

isvc-sklearn-s3-tls-custom-fail-predictor-7d65b5b7cd-hqrvb

Killing

Stopping container storage-initializer

kserve-ci-e2e-test

kubelet

isvc-sklearn-s3-tls-serving-pass-predictor-c86b5bbcf-48nps

FailedMount

MountVolume.SetUp failed for volume "proxy-tls" : secret "isvc-sklearn-s3-tls-serving-pass-predictor-serving-cert" not found

kserve-ci-e2e-test

v1beta1Controllers

isvc-sklearn-s3-tls-serving-pass

UpdateFailed

Failed to update status for InferenceService "isvc-sklearn-s3-tls-serving-pass": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-sklearn-s3-tls-serving-pass": the object has been modified; please apply your changes to the latest version and try again

kserve-ci-e2e-test

deployment-controller

isvc-sklearn-s3-tls-serving-pass-predictor

ScalingReplicaSet

Scaled up replica set isvc-sklearn-s3-tls-serving-pass-predictor-c86b5bbcf from 0 to 1

kserve-ci-e2e-test

replicaset-controller

isvc-sklearn-s3-tls-serving-pass-predictor-c86b5bbcf

SuccessfulCreate

Created pod: isvc-sklearn-s3-tls-serving-pass-predictor-c86b5bbcf-48nps

kserve-ci-e2e-test

kubelet

isvc-sklearn-s3-tls-serving-pass-predictor-c86b5bbcf-48nps

Started

Started container storage-initializer

kserve-ci-e2e-test

kubelet

isvc-sklearn-s3-tls-serving-pass-predictor-c86b5bbcf-48nps

Created

Created container: storage-initializer

kserve-ci-e2e-test

kubelet

isvc-sklearn-s3-tls-serving-pass-predictor-c86b5bbcf-48nps

Pulled

Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:0cd196d4c53b891914316f18ab5cfa9f85258e057f3687e65332c70bf642d22d" already present on machine

kserve-ci-e2e-test

multus

isvc-sklearn-s3-tls-serving-pass-predictor-c86b5bbcf-48nps

AddedInterface

Add eth0 [10.133.0.62/23] from ovn-kubernetes

kserve-ci-e2e-test

kubelet

isvc-sklearn-s3-tls-serving-pass-predictor-c86b5bbcf-48nps

Pulled

Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine

kserve-ci-e2e-test

kubelet

isvc-sklearn-s3-tls-serving-pass-predictor-c86b5bbcf-48nps

Pulled

Container image "quay.io/opendatahub/sklearn-serving-runtime:odh-pr-1293" already present on machine

kserve-ci-e2e-test

kubelet

isvc-sklearn-s3-tls-serving-pass-predictor-c86b5bbcf-48nps

Started

Started container kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-sklearn-s3-tls-serving-pass-predictor-c86b5bbcf-48nps

Created

Created container: kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-sklearn-s3-tls-serving-pass-predictor-c86b5bbcf-48nps

Started

Started container kserve-container

kserve-ci-e2e-test

kubelet

isvc-sklearn-s3-tls-serving-pass-predictor-c86b5bbcf-48nps

Created

Created container: kserve-container
(x2)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-sklearn-s3-tls-serving-pass-predictor

FailedGetResourceMetric

failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x2)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-sklearn-s3-tls-serving-pass-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API

kserve-ci-e2e-test

v1beta1Controllers

isvc-sklearn-s3-tls-serving-pass

InferenceServiceReady

InferenceService [isvc-sklearn-s3-tls-serving-pass] is Ready
(x13)

kserve-ci-e2e-test

v1beta1Controllers

isvc-sklearn-s3-tls-serving-pass

VirtualServiceCRDNotFound

Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.
(x3)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-sklearn-s3-tls-serving-pass-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
(x3)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-sklearn-s3-tls-serving-pass-predictor

FailedGetResourceMetric

failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)

kserve-ci-e2e-test

kubelet

isvc-sklearn-s3-tls-serving-pass-predictor-c86b5bbcf-48nps

Killing

Stopping container kserve-container

kserve-ci-e2e-test

kubelet

isvc-sklearn-s3-tls-serving-pass-predictor-c86b5bbcf-48nps

Killing

Stopping container kube-rbac-proxy

kserve-ci-e2e-test

deployment-controller

isvc-sklearn-s3-tls-serving-fail-predictor

ScalingReplicaSet

Scaled up replica set isvc-sklearn-s3-tls-serving-fail-predictor-5bc5655965 from 0 to 1
(x9)

kserve-ci-e2e-test

kubelet

isvc-sklearn-s3-tls-serving-pass-predictor-c86b5bbcf-48nps

Unhealthy

Readiness probe failed: dial tcp 10.133.0.62:8080: connect: connection refused

kserve-ci-e2e-test

kubelet

isvc-sklearn-s3-tls-serving-pass-predictor-c86b5bbcf-48nps

Unhealthy

Readiness probe failed: Get "https://10.133.0.62:8643/healthz": dial tcp 10.133.0.62:8643: connect: connection refused

kserve-ci-e2e-test

v1beta1Controllers

isvc-sklearn-s3-tls-serving-fail

UpdateFailed

Failed to update status for InferenceService "isvc-sklearn-s3-tls-serving-fail": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-sklearn-s3-tls-serving-fail": the object has been modified; please apply your changes to the latest version and try again

kserve-ci-e2e-test

replicaset-controller

isvc-sklearn-s3-tls-serving-fail-predictor-5bc5655965

SuccessfulCreate

Created pod: isvc-sklearn-s3-tls-serving-fail-predictor-5bc5655965-nssmd

kserve-ci-e2e-test

kubelet

isvc-sklearn-s3-tls-serving-fail-predictor-5bc5655965-nssmd

FailedMount

MountVolume.SetUp failed for volume "proxy-tls" : secret "isvc-sklearn-s3-tls-serving-fail-predictor-serving-cert" not found

kserve-ci-e2e-test

multus

isvc-sklearn-s3-tls-serving-fail-predictor-5bc5655965-nssmd

AddedInterface

Add eth0 [10.133.0.63/23] from ovn-kubernetes
(x2)

kserve-ci-e2e-test

kubelet

isvc-sklearn-s3-tls-serving-fail-predictor-5bc5655965-nssmd

Pulled

Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:0cd196d4c53b891914316f18ab5cfa9f85258e057f3687e65332c70bf642d22d" already present on machine
(x10)

kserve-ci-e2e-test

v1beta1Controllers

isvc-sklearn-s3-tls-serving-fail

VirtualServiceCRDNotFound

Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.
(x2)

kserve-ci-e2e-test

kubelet

isvc-sklearn-s3-tls-serving-fail-predictor-5bc5655965-nssmd

Started

Started container storage-initializer
(x2)

kserve-ci-e2e-test

kubelet

isvc-sklearn-s3-tls-serving-fail-predictor-5bc5655965-nssmd

Created

Created container: storage-initializer

kserve-ci-e2e-test

kubelet

isvc-sklearn-s3-tls-serving-fail-predictor-5bc5655965-nssmd

Killing

Stopping container storage-initializer