Namespace | Component | RelatedObject | Reason | Message
kserve-ci-e2e-test |  | isvc-sklearn-s3-predictor-5d9949bc59-68dwn | Scheduled | Successfully assigned kserve-ci-e2e-test/isvc-sklearn-s3-predictor-5d9949bc59-68dwn to ip-10-0-137-187.ec2.internal
kserve-ci-e2e-test |  | isvc-sklearn-s3-tls-custom-fail-predictor-599d4bdb65-jg562 | Scheduled | Successfully assigned kserve-ci-e2e-test/isvc-sklearn-s3-tls-custom-fail-predictor-599d4bdb65-jg562 to ip-10-0-137-187.ec2.internal
kserve-ci-e2e-test |  | sklearn-v2-mlserver-predictor-65d8664766-rfwrh | Scheduled | Successfully assigned kserve-ci-e2e-test/sklearn-v2-mlserver-predictor-65d8664766-rfwrh to ip-10-0-139-40.ec2.internal
kserve-ci-e2e-test |  | isvc-pmml-predictor-8bb578669-ksmbq | Scheduled | Successfully assigned kserve-ci-e2e-test/isvc-pmml-predictor-8bb578669-ksmbq to ip-10-0-137-187.ec2.internal
kserve-ci-e2e-test |  | isvc-xgboost-v2-runtime-predictor-6dc5954dc-vlzsj | Scheduled | Successfully assigned kserve-ci-e2e-test/isvc-xgboost-v2-runtime-predictor-6dc5954dc-vlzsj to ip-10-0-139-40.ec2.internal
kserve-ci-e2e-test |  | isvc-xgboost-v2-predictor-6fcdd6977c-qc8j8 | Scheduled | Successfully assigned kserve-ci-e2e-test/isvc-xgboost-v2-predictor-6fcdd6977c-qc8j8 to ip-10-0-139-40.ec2.internal
kserve-ci-e2e-test |  | isvc-xgboost-v2-mlserver-predictor-67d4bc6646-5kbn8 | Scheduled | Successfully assigned kserve-ci-e2e-test/isvc-xgboost-v2-mlserver-predictor-67d4bc6646-5kbn8 to ip-10-0-139-40.ec2.internal
kserve-ci-e2e-test |  | isvc-xgboost-runtime-predictor-779db84d9-jqv9h | Scheduled | Successfully assigned kserve-ci-e2e-test/isvc-xgboost-runtime-predictor-779db84d9-jqv9h to ip-10-0-139-40.ec2.internal
kserve-ci-e2e-test |  | isvc-xgboost-predictor-8689c4cfcc-xjcl5 | Scheduled | Successfully assigned kserve-ci-e2e-test/isvc-xgboost-predictor-8689c4cfcc-xjcl5 to ip-10-0-137-187.ec2.internal
kserve-ci-e2e-test |  | isvc-triton-predictor-84bb65d94b-9qb2p | Scheduled | Successfully assigned kserve-ci-e2e-test/isvc-triton-predictor-84bb65d94b-9qb2p to ip-10-0-139-40.ec2.internal
kserve-ci-e2e-test |  | isvc-tensorflow-runtime-predictor-8699d78cf-r8d87 | Scheduled | Successfully assigned kserve-ci-e2e-test/isvc-tensorflow-runtime-predictor-8699d78cf-r8d87 to ip-10-0-137-187.ec2.internal
kserve-ci-e2e-test |  | isvc-lightgbm-predictor-bdf964bd-9rkxl | Scheduled | Successfully assigned kserve-ci-e2e-test/isvc-lightgbm-predictor-bdf964bd-9rkxl to ip-10-0-139-40.ec2.internal
kserve-ci-e2e-test |  | isvc-tensorflow-predictor-6756f669d7-b4bhx | Scheduled | Successfully assigned kserve-ci-e2e-test/isvc-tensorflow-predictor-6756f669d7-b4bhx to ip-10-0-139-40.ec2.internal
kserve-ci-e2e-test |  | isvc-sklearn-batcher-predictor-6d65749c76-lkj5l | Scheduled | Successfully assigned kserve-ci-e2e-test/isvc-sklearn-batcher-predictor-6d65749c76-lkj5l to ip-10-0-137-187.ec2.internal
kserve-ci-e2e-test |  | isvc-sklearn-v2-runtime-predictor-6d84c876f4-hnnbs | Scheduled | Successfully assigned kserve-ci-e2e-test/isvc-sklearn-v2-runtime-predictor-6d84c876f4-hnnbs to ip-10-0-139-40.ec2.internal
kserve-ci-e2e-test |  | isvc-sklearn-v2-predictor-6d65c564d6-29jll | Scheduled | Successfully assigned kserve-ci-e2e-test/isvc-sklearn-v2-predictor-6d65c564d6-29jll to ip-10-0-137-187.ec2.internal
kserve-ci-e2e-test |  | isvc-sklearn-v2-mixed-predictor-6b4bf45459-t6gnt | Scheduled | Successfully assigned kserve-ci-e2e-test/isvc-sklearn-v2-mixed-predictor-6b4bf45459-t6gnt to ip-10-0-137-187.ec2.internal
kserve-ci-e2e-test |  | isvc-sklearn-s3-tls-serving-pass-predictor-d9cdb6474-675g9 | Scheduled | Successfully assigned kserve-ci-e2e-test/isvc-sklearn-s3-tls-serving-pass-predictor-d9cdb6474-675g9 to ip-10-0-137-187.ec2.internal
kserve-ci-e2e-test |  | isvc-sklearn-s3-tls-serving-fail-predictor-779d8c7cfb-4qdl5 | Scheduled | Successfully assigned kserve-ci-e2e-test/isvc-sklearn-s3-tls-serving-fail-predictor-779d8c7cfb-4qdl5 to ip-10-0-137-187.ec2.internal
kserve-ci-e2e-test |  | isvc-sklearn-s3-tls-global-pass-predictor-86f54c547-k8fcn | Scheduled | Successfully assigned kserve-ci-e2e-test/isvc-sklearn-s3-tls-global-pass-predictor-86f54c547-k8fcn to ip-10-0-137-187.ec2.internal
kserve-ci-e2e-test |  | isvc-sklearn-s3-tls-global-fail-predictor-6cc5fddf57-2wcnk | Scheduled | Successfully assigned kserve-ci-e2e-test/isvc-sklearn-s3-tls-global-fail-predictor-6cc5fddf57-2wcnk to ip-10-0-137-187.ec2.internal
kserve-ci-e2e-test |  | isvc-sklearn-s3-tls-custom-pass-predictor-8498c4f6bc-9x26t | Scheduled | Successfully assigned kserve-ci-e2e-test/isvc-sklearn-s3-tls-custom-pass-predictor-8498c4f6bc-9x26t to ip-10-0-137-187.ec2.internal
kserve-ci-e2e-test |  | isvc-sklearn-mcp-predictor-b76fc9db7-66dds | Scheduled | Successfully assigned kserve-ci-e2e-test/isvc-sklearn-mcp-predictor-b76fc9db7-66dds to ip-10-0-137-187.ec2.internal
kserve-ci-e2e-test |  | isvc-predictive-lightgbm-predictor-75cb94f9f-mscvm | Scheduled | Successfully assigned kserve-ci-e2e-test/isvc-predictive-lightgbm-predictor-75cb94f9f-mscvm to ip-10-0-139-40.ec2.internal
kserve-ci-e2e-test |  | isvc-paddle-runtime-predictor-7f4d4f9dc8-r8b64 | Scheduled | Successfully assigned kserve-ci-e2e-test/isvc-paddle-runtime-predictor-7f4d4f9dc8-r8b64 to ip-10-0-139-40.ec2.internal
kserve-ci-e2e-test |  | isvc-paddle-v2-kserve-predictor-7dbd59854-4jxs6 | Scheduled | Successfully assigned kserve-ci-e2e-test/isvc-paddle-v2-kserve-predictor-7dbd59854-4jxs6 to ip-10-0-139-40.ec2.internal
kserve-ci-e2e-test |  | isvc-predictive-sklearn-v2-predictor-b5d4f6b79-9vt2j | Scheduled | Successfully assigned kserve-ci-e2e-test/isvc-predictive-sklearn-v2-predictor-b5d4f6b79-9vt2j to ip-10-0-139-40.ec2.internal
kserve-ci-e2e-test |  | isvc-sklearn-batcher-custom-predictor-667c84d549-5nhcr | Scheduled | Successfully assigned kserve-ci-e2e-test/isvc-sklearn-batcher-custom-predictor-667c84d549-5nhcr to ip-10-0-137-187.ec2.internal
kserve-ci-e2e-test |  | isvc-paddle-predictor-6b8b7cfb4b-qz6kx | Scheduled | Successfully assigned kserve-ci-e2e-test/isvc-paddle-predictor-6b8b7cfb4b-qz6kx to ip-10-0-139-40.ec2.internal
kserve-ci-e2e-test |  | isvc-lightgbm-v2-runtime-predictor-8765c9667-tdbmk | Scheduled | Successfully assigned kserve-ci-e2e-test/isvc-lightgbm-v2-runtime-predictor-8765c9667-tdbmk to ip-10-0-139-40.ec2.internal
kserve-ci-e2e-test |  | isvc-sklearn-runtime-predictor-65764ccccd-rl7nc | Scheduled | Successfully assigned kserve-ci-e2e-test/isvc-sklearn-runtime-predictor-65764ccccd-rl7nc to ip-10-0-137-187.ec2.internal
kserve-ci-e2e-test |  | isvc-lightgbm-v2-kserve-predictor-559bf6989-h2ckz | Scheduled | Successfully assigned kserve-ci-e2e-test/isvc-lightgbm-v2-kserve-predictor-559bf6989-h2ckz to ip-10-0-139-40.ec2.internal
kserve-ci-e2e-test |  | isvc-mlflow-v2-runtime-predictor-5fdb47d546-4c4mx | Scheduled | Successfully assigned kserve-ci-e2e-test/isvc-mlflow-v2-runtime-predictor-5fdb47d546-4c4mx to ip-10-0-139-40.ec2.internal
kserve-ci-e2e-test |  | isvc-logger-predictor-7ffcf8d567-9khc6 | Scheduled | Successfully assigned kserve-ci-e2e-test/isvc-logger-predictor-7ffcf8d567-9khc6 to ip-10-0-137-187.ec2.internal
kserve-ci-e2e-test |  | isvc-lightgbm-runtime-predictor-749c4f6d58-nc275 | Scheduled | Successfully assigned kserve-ci-e2e-test/isvc-lightgbm-runtime-predictor-749c4f6d58-nc275 to ip-10-0-139-40.ec2.internal
kserve-ci-e2e-test |  | isvc-predictive-xgboost-predictor-7ff98fd74d-s6wvk | Scheduled | Successfully assigned kserve-ci-e2e-test/isvc-predictive-xgboost-predictor-7ff98fd74d-s6wvk to ip-10-0-139-40.ec2.internal
kserve-ci-e2e-test |  | isvc-pmml-runtime-predictor-67bc544947-zqttc | Scheduled | Successfully assigned kserve-ci-e2e-test/isvc-pmml-runtime-predictor-67bc544947-zqttc to ip-10-0-139-40.ec2.internal
kserve-ci-e2e-test |  | isvc-predictive-sklearn-predictor-cd7c759c9-l8pf2 | Scheduled | Successfully assigned kserve-ci-e2e-test/isvc-predictive-sklearn-predictor-cd7c759c9-l8pf2 to ip-10-0-139-40.ec2.internal
kserve-ci-e2e-test |  | message-dumper-predictor-c7d86bcbd-wrwkg | Scheduled | Successfully assigned kserve-ci-e2e-test/message-dumper-predictor-c7d86bcbd-wrwkg to ip-10-0-139-40.ec2.internal
kserve-ci-e2e-test |  | isvc-sklearn-predictor-759d546688-cwt2z | Scheduled | Successfully assigned kserve-ci-e2e-test/isvc-sklearn-predictor-759d546688-cwt2z to ip-10-0-137-187.ec2.internal
kserve-ci-e2e-test |  | xgboost-v2-mlserver-predictor-7799869d6f-2ckrz | Scheduled | Successfully assigned kserve-ci-e2e-test/xgboost-v2-mlserver-predictor-7799869d6f-2ckrz to ip-10-0-139-40.ec2.internal
kserve-ci-e2e-test |  | isvc-pmml-v2-kserve-predictor-6578f8ffc7-8px72 | Scheduled | Successfully assigned kserve-ci-e2e-test/isvc-pmml-v2-kserve-predictor-6578f8ffc7-8px72 to ip-10-0-137-187.ec2.internal
kserve-ci-e2e-test |  | isvc-init-fail-a0e2f1-predictor-5559d5df4d-4h9sx | Scheduled | Successfully assigned kserve-ci-e2e-test/isvc-init-fail-a0e2f1-predictor-5559d5df4d-4h9sx to ip-10-0-137-187.ec2.internal
kserve-ci-e2e-test |  | isvc-secondary-60b405-predictor-685bf5b5fd-zsvqq | Scheduled | Successfully assigned kserve-ci-e2e-test/isvc-secondary-60b405-predictor-685bf5b5fd-zsvqq to ip-10-0-137-187.ec2.internal
kserve-ci-e2e-test |  | isvc-primary-60b405-predictor-b697f8fbf-pgxzp | Scheduled | Successfully assigned kserve-ci-e2e-test/isvc-primary-60b405-predictor-b697f8fbf-pgxzp to ip-10-0-137-187.ec2.internal
kserve-ci-e2e-test |  | isvc-predictive-lightgbm-v2-predictor-865b4598f7-2hfzm | Scheduled | Successfully assigned kserve-ci-e2e-test/isvc-predictive-lightgbm-v2-predictor-865b4598f7-2hfzm to ip-10-0-139-40.ec2.internal
kserve-ci-e2e-test |  | isvc-predictive-xgboost-v2-predictor-6577c65fd8-mqtrl | Scheduled | Successfully assigned kserve-ci-e2e-test/isvc-predictive-xgboost-v2-predictor-6577c65fd8-mqtrl to ip-10-0-139-40.ec2.internal
kserve-ci-e2e-test | v1beta1Controllers | isvc-sklearn-batcher | InternalError | fails to update InferenceService status: Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-sklearn-batcher": the object has been modified; please apply your changes to the latest version and try again
kserve-ci-e2e-test | deployment-controller | isvc-sklearn-batcher-predictor | ScalingReplicaSet | Scaled up replica set isvc-sklearn-batcher-predictor-6d65749c76 from 0 to 1
kserve-ci-e2e-test | replicaset-controller | isvc-sklearn-batcher-predictor-6d65749c76 | SuccessfulCreate | Created pod: isvc-sklearn-batcher-predictor-6d65749c76-lkj5l
kserve-ci-e2e-test | v1beta1Controllers | isvc-sklearn-batcher | UpdateFailed | Failed to update status for InferenceService "isvc-sklearn-batcher": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-sklearn-batcher": the object has been modified; please apply your changes to the latest version and try again
kserve-ci-e2e-test | multus | isvc-sklearn-batcher-predictor-6d65749c76-lkj5l | AddedInterface | Add eth0 [10.132.0.29/23] from ovn-kubernetes
kserve-ci-e2e-test | kubelet | isvc-sklearn-batcher-predictor-6d65749c76-lkj5l | Pulling | Pulling image "quay.io/opendatahub/kserve-storage-initializer@sha256:bfc8f044ee2a84e90b7b9876040a412b44d7b121a59a5fe7b22a946b15b61519"
kserve-ci-e2e-test | kubelet | isvc-sklearn-batcher-predictor-6d65749c76-lkj5l | Pulled | Successfully pulled image "quay.io/opendatahub/kserve-storage-initializer@sha256:bfc8f044ee2a84e90b7b9876040a412b44d7b121a59a5fe7b22a946b15b61519" in 4.149s (4.149s including waiting). Image size: 301492121 bytes.
kserve-ci-e2e-test | kubelet | isvc-sklearn-batcher-predictor-6d65749c76-lkj5l | Created | Created container: storage-initializer
kserve-ci-e2e-test | kubelet | isvc-sklearn-batcher-predictor-6d65749c76-lkj5l | Started | Started container storage-initializer
kserve-ci-e2e-test | kubelet | isvc-sklearn-batcher-predictor-6d65749c76-lkj5l | Pulling | Pulling image "quay.io/opendatahub/sklearn-serving-runtime:odh-pr-1436"
kserve-ci-e2e-test | kubelet | isvc-sklearn-batcher-predictor-6d65749c76-lkj5l | Started | Started container kserve-container
kserve-ci-e2e-test | kubelet | isvc-sklearn-batcher-predictor-6d65749c76-lkj5l | Created | Created container: kserve-container
kserve-ci-e2e-test | kubelet | isvc-sklearn-batcher-predictor-6d65749c76-lkj5l | Pulled | Successfully pulled image "quay.io/opendatahub/sklearn-serving-runtime:odh-pr-1436" in 13.803s (13.803s including waiting). Image size: 1562812843 bytes.
kserve-ci-e2e-test | kubelet | isvc-sklearn-batcher-predictor-6d65749c76-lkj5l | Pulling | Pulling image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3"
kserve-ci-e2e-test | kubelet | isvc-sklearn-batcher-predictor-6d65749c76-lkj5l | Created | Created container: kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-sklearn-batcher-predictor-6d65749c76-lkj5l | Pulled | Successfully pulled image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" in 2.312s (2.312s including waiting). Image size: 211946088 bytes.
kserve-ci-e2e-test | kubelet | isvc-sklearn-batcher-predictor-6d65749c76-lkj5l | Pulling | Pulling image "quay.io/opendatahub/kserve-agent@sha256:d5d470ad0a2dbd76829def7f459909c05cc870ffb330e4b585204605f3c490d9"
kserve-ci-e2e-test | kubelet | isvc-sklearn-batcher-predictor-6d65749c76-lkj5l | Started | Started container kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-sklearn-batcher-predictor-6d65749c76-lkj5l | Pulled | Successfully pulled image "quay.io/opendatahub/kserve-agent@sha256:d5d470ad0a2dbd76829def7f459909c05cc870ffb330e4b585204605f3c490d9" in 2.764s (2.764s including waiting). Image size: 238035008 bytes.
kserve-ci-e2e-test | kubelet | isvc-sklearn-batcher-predictor-6d65749c76-lkj5l | Created | Created container: agent
kserve-ci-e2e-test | kubelet | isvc-sklearn-batcher-predictor-6d65749c76-lkj5l | Started | Started container agent
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-sklearn-batcher-predictor | FailedGetResourceMetric (x4) | failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-sklearn-batcher-predictor | FailedComputeMetricsReplicas (x4) | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-sklearn-batcher-predictor | FailedComputeMetricsReplicas (x2) | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-sklearn-batcher-predictor | FailedGetResourceMetric (x2) | failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
kserve-ci-e2e-test | v1beta1Controllers | isvc-sklearn-batcher | InferenceServiceReady | InferenceService [isvc-sklearn-batcher] is Ready
kserve-ci-e2e-test | v1beta1Controllers | isvc-sklearn-batcher | VirtualServiceCRDNotFound (x13) | Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.
kserve-ci-e2e-test | replicaset-controller | isvc-sklearn-batcher-custom-predictor-667c84d549 | SuccessfulCreate | Created pod: isvc-sklearn-batcher-custom-predictor-667c84d549-5nhcr
kserve-ci-e2e-test | kubelet | isvc-sklearn-batcher-predictor-6d65749c76-lkj5l | Killing | Stopping container kserve-container
kserve-ci-e2e-test | v1beta1Controllers | isvc-sklearn-batcher-custom | UpdateFailed | Failed to update status for InferenceService "isvc-sklearn-batcher-custom": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-sklearn-batcher-custom": the object has been modified; please apply your changes to the latest version and try again
kserve-ci-e2e-test | kubelet | isvc-sklearn-batcher-custom-predictor-667c84d549-5nhcr | FailedMount | MountVolume.SetUp failed for volume "proxy-tls" : secret "isvc-sklearn-batcher-custom-predictor-serving-cert" not found
kserve-ci-e2e-test | kubelet | isvc-sklearn-batcher-predictor-6d65749c76-lkj5l | Killing | Stopping container kube-rbac-proxy
kserve-ci-e2e-test | deployment-controller | isvc-sklearn-batcher-custom-predictor | ScalingReplicaSet | Scaled up replica set isvc-sklearn-batcher-custom-predictor-667c84d549 from 0 to 1
kserve-ci-e2e-test | kubelet | isvc-sklearn-batcher-predictor-6d65749c76-lkj5l | Killing | Stopping container agent
kserve-ci-e2e-test | kubelet | isvc-sklearn-batcher-custom-predictor-667c84d549-5nhcr | Pulled | Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:bfc8f044ee2a84e90b7b9876040a412b44d7b121a59a5fe7b22a946b15b61519" already present on machine
kserve-ci-e2e-test | kubelet | isvc-sklearn-batcher-custom-predictor-667c84d549-5nhcr | Created | Created container: storage-initializer
kserve-ci-e2e-test | kubelet | isvc-sklearn-batcher-custom-predictor-667c84d549-5nhcr | Started | Started container storage-initializer
kserve-ci-e2e-test | multus | isvc-sklearn-batcher-custom-predictor-667c84d549-5nhcr | AddedInterface | Add eth0 [10.132.0.30/23] from ovn-kubernetes
kserve-ci-e2e-test | kubelet | isvc-sklearn-batcher-custom-predictor-667c84d549-5nhcr | Pulled | Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine
kserve-ci-e2e-test | kubelet | isvc-sklearn-batcher-custom-predictor-667c84d549-5nhcr | Pulled | Container image "quay.io/opendatahub/sklearn-serving-runtime:odh-pr-1436" already present on machine
kserve-ci-e2e-test | kubelet | isvc-sklearn-batcher-custom-predictor-667c84d549-5nhcr | Started | Started container kserve-container
kserve-ci-e2e-test | kubelet | isvc-sklearn-batcher-custom-predictor-667c84d549-5nhcr | Created | Created container: kserve-container
kserve-ci-e2e-test | kubelet | isvc-sklearn-batcher-custom-predictor-667c84d549-5nhcr | Started | Started container kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-sklearn-batcher-custom-predictor-667c84d549-5nhcr | Created | Created container: kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-sklearn-batcher-custom-predictor-667c84d549-5nhcr | Pulled | Container image "quay.io/opendatahub/kserve-agent@sha256:d5d470ad0a2dbd76829def7f459909c05cc870ffb330e4b585204605f3c490d9" already present on machine
kserve-ci-e2e-test | kubelet | isvc-sklearn-batcher-custom-predictor-667c84d549-5nhcr | Created | Created container: agent
kserve-ci-e2e-test | kubelet | isvc-sklearn-batcher-custom-predictor-667c84d549-5nhcr | Started | Started container agent
kserve-ci-e2e-test | kubelet | isvc-sklearn-batcher-predictor-6d65749c76-lkj5l | Unhealthy (x10) | Readiness probe failed: HTTP probe failed with statuscode: 503
kserve-ci-e2e-test | kubelet | isvc-sklearn-batcher-predictor-6d65749c76-lkj5l | Unhealthy (x11) | Readiness probe failed: dial tcp 10.132.0.29:8080: connect: connection refused
kserve-ci-e2e-test | kubelet | isvc-sklearn-batcher-predictor-6d65749c76-lkj5l | Unhealthy (x4) | Readiness probe failed: Get "https://10.132.0.29:8643/healthz": dial tcp 10.132.0.29:8643: connect: connection refused
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-sklearn-batcher-custom-predictor | FailedComputeMetricsReplicas (x2) | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-sklearn-batcher-custom-predictor | FailedGetResourceMetric (x2) | failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
kserve-ci-e2e-test | v1beta1Controllers | isvc-sklearn-batcher-custom | InferenceServiceReady | InferenceService [isvc-sklearn-batcher-custom] is Ready
kserve-ci-e2e-test | v1beta1Controllers | isvc-sklearn-batcher-custom | VirtualServiceCRDNotFound (x13) | Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-sklearn-batcher-custom-predictor | FailedGetResourceMetric (x3) | failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-sklearn-batcher-custom-predictor | FailedComputeMetricsReplicas (x3) | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
kserve-ci-e2e-test | kubelet | isvc-sklearn-batcher-custom-predictor-667c84d549-5nhcr | Killing | Stopping container agent
kserve-ci-e2e-test | kubelet | message-dumper-predictor-c7d86bcbd-wrwkg | FailedMount | MountVolume.SetUp failed for volume "proxy-tls" : secret "message-dumper-predictor-serving-cert" not found
kserve-ci-e2e-test | kubelet | isvc-sklearn-batcher-custom-predictor-667c84d549-5nhcr | Killing | Stopping container kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-sklearn-batcher-custom-predictor-667c84d549-5nhcr | Killing | Stopping container kserve-container
kserve-ci-e2e-test | replicaset-controller | message-dumper-predictor-c7d86bcbd | SuccessfulCreate | Created pod: message-dumper-predictor-c7d86bcbd-wrwkg
kserve-ci-e2e-test | deployment-controller | message-dumper-predictor | ScalingReplicaSet | Scaled up replica set message-dumper-predictor-c7d86bcbd from 0 to 1
kserve-ci-e2e-test | v1beta1Controllers | message-dumper | UpdateFailed (x2) | Failed to update status for InferenceService "message-dumper": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "message-dumper": the object has been modified; please apply your changes to the latest version and try again
kserve-ci-e2e-test | v1beta1Controllers | message-dumper | InternalError | fails to update InferenceService status: Operation cannot be fulfilled on inferenceservices.serving.kserve.io "message-dumper": the object has been modified; please apply your changes to the latest version and try again
kserve-ci-e2e-test | kubelet | message-dumper-predictor-c7d86bcbd-wrwkg | Pulling | Pulling image "gcr.io/knative-releases/knative.dev/eventing-contrib/cmd/event_display"
kserve-ci-e2e-test | multus | message-dumper-predictor-c7d86bcbd-wrwkg | AddedInterface | Add eth0 [10.133.0.25/23] from ovn-kubernetes
kserve-ci-e2e-test | kubelet | message-dumper-predictor-c7d86bcbd-wrwkg | Created | Created container: kserve-container
kserve-ci-e2e-test | kubelet | message-dumper-predictor-c7d86bcbd-wrwkg | Pulled | Successfully pulled image "gcr.io/knative-releases/knative.dev/eventing-contrib/cmd/event_display" in 1.114s (1.114s including waiting). Image size: 14813193 bytes.
kserve-ci-e2e-test | kubelet | message-dumper-predictor-c7d86bcbd-wrwkg | Pulling | Pulling image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3"
kserve-ci-e2e-test | kubelet | message-dumper-predictor-c7d86bcbd-wrwkg | Started | Started container kserve-container
kserve-ci-e2e-test | kubelet | message-dumper-predictor-c7d86bcbd-wrwkg | Pulled | Successfully pulled image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" in 2.195s (2.195s including waiting). Image size: 211946088 bytes.
kserve-ci-e2e-test | kubelet | message-dumper-predictor-c7d86bcbd-wrwkg | Created | Created container: kube-rbac-proxy
kserve-ci-e2e-test | kubelet | message-dumper-predictor-c7d86bcbd-wrwkg | Started | Started container kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-sklearn-batcher-custom-predictor-667c84d549-5nhcr | Unhealthy (x9) | Readiness probe failed: HTTP probe failed with statuscode: 503
kserve-ci-e2e-test | v1beta1Controllers | message-dumper | VirtualServiceCRDNotFound (x9) | Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.
kserve-ci-e2e-test | v1beta1Controllers | message-dumper | InferenceServiceReady | InferenceService [message-dumper] is Ready
kserve-ci-e2e-test | kubelet | isvc-sklearn-batcher-custom-predictor-667c84d549-5nhcr | Unhealthy (x4) | Readiness probe failed: Get "https://10.132.0.30:8643/healthz": dial tcp 10.132.0.30:8643: connect: connection refused
kserve-ci-e2e-test | kubelet | isvc-sklearn-batcher-custom-predictor-667c84d549-5nhcr | Unhealthy (x11) | Readiness probe failed: dial tcp 10.132.0.30:5000: connect: connection refused
kserve-ci-e2e-test | replicaset-controller | isvc-logger-predictor-7ffcf8d567 | SuccessfulCreate | Created pod: isvc-logger-predictor-7ffcf8d567-9khc6
kserve-ci-e2e-test | kubelet | isvc-logger-predictor-7ffcf8d567-9khc6 | FailedMount | MountVolume.SetUp failed for volume "proxy-tls" : secret "isvc-logger-predictor-serving-cert" not found
kserve-ci-e2e-test | v1beta1Controllers | isvc-logger | UpdateFailed | Failed to update status for InferenceService "isvc-logger": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-logger": the object has been modified; please apply your changes to the latest version and try again
kserve-ci-e2e-test | deployment-controller | isvc-logger-predictor | ScalingReplicaSet | Scaled up replica set isvc-logger-predictor-7ffcf8d567 from 0 to 1
kserve-ci-e2e-test | multus | isvc-logger-predictor-7ffcf8d567-9khc6 | AddedInterface | Add eth0 [10.132.0.31/23] from ovn-kubernetes
kserve-ci-e2e-test | kubelet | isvc-logger-predictor-7ffcf8d567-9khc6 | Started | Started container storage-initializer
kserve-ci-e2e-test | kubelet | isvc-logger-predictor-7ffcf8d567-9khc6 | Pulled | Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:bfc8f044ee2a84e90b7b9876040a412b44d7b121a59a5fe7b22a946b15b61519" already present on machine
kserve-ci-e2e-test | kubelet | isvc-logger-predictor-7ffcf8d567-9khc6 | Created | Created container: storage-initializer
kserve-ci-e2e-test | kubelet | isvc-logger-predictor-7ffcf8d567-9khc6 | Started | Started container kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-logger-predictor-7ffcf8d567-9khc6 | Created | Created container: kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-logger-predictor-7ffcf8d567-9khc6 | Started | Started container agent
kserve-ci-e2e-test | kubelet | isvc-logger-predictor-7ffcf8d567-9khc6 | Created | Created container: kserve-container
kserve-ci-e2e-test | kubelet | isvc-logger-predictor-7ffcf8d567-9khc6 | Pulled | Container image "quay.io/opendatahub/kserve-agent@sha256:d5d470ad0a2dbd76829def7f459909c05cc870ffb330e4b585204605f3c490d9" already present on machine
kserve-ci-e2e-test | kubelet | isvc-logger-predictor-7ffcf8d567-9khc6 | Pulled | Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine
kserve-ci-e2e-test | kubelet | isvc-logger-predictor-7ffcf8d567-9khc6 | Started | Started container kserve-container
kserve-ci-e2e-test | kubelet | isvc-logger-predictor-7ffcf8d567-9khc6 | Created | Created container: agent
kserve-ci-e2e-test | kubelet | isvc-logger-predictor-7ffcf8d567-9khc6 | Pulled | Container image "quay.io/opendatahub/sklearn-serving-runtime:odh-pr-1436" already present on machine
kserve-ci-e2e-test | horizontal-pod-autoscaler | message-dumper-predictor | FailedComputeMetricsReplicas (x2) | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
kserve-ci-e2e-test | horizontal-pod-autoscaler | message-dumper-predictor | FailedGetResourceMetric (x2) | failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-logger-predictor | FailedGetResourceMetric (x3) | failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-logger-predictor | FailedComputeMetricsReplicas (x3) | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
kserve-ci-e2e-test | v1beta1Controllers | isvc-logger | InferenceServiceReady | InferenceService [isvc-logger] is Ready
kserve-ci-e2e-test | v1beta1Controllers | isvc-logger | VirtualServiceCRDNotFound (x13) | Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-logger-predictor | FailedGetResourceMetric (x2) | failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-logger-predictor | FailedComputeMetricsReplicas (x2) | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
kserve-ci-e2e-test | kubelet | isvc-logger-predictor-7ffcf8d567-9khc6 | Killing | Stopping container agent
kserve-ci-e2e-test | replicaset-controller | isvc-lightgbm-predictor-bdf964bd | SuccessfulCreate | Created pod: isvc-lightgbm-predictor-bdf964bd-9rkxl
kserve-ci-e2e-test | kubelet | isvc-logger-predictor-7ffcf8d567-9khc6 | Killing | Stopping container kube-rbac-proxy
kserve-ci-e2e-test | kubelet | message-dumper-predictor-c7d86bcbd-wrwkg | Killing | Stopping container kserve-container
kserve-ci-e2e-test | kubelet | message-dumper-predictor-c7d86bcbd-wrwkg | Killing | Stopping container kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-logger-predictor-7ffcf8d567-9khc6 | Killing | Stopping container kserve-container
kserve-ci-e2e-test | v1beta1Controllers | isvc-lightgbm | UpdateFailed | Failed to update status for InferenceService "isvc-lightgbm": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-lightgbm": the object has been modified; please apply your changes to the latest version and try again
kserve-ci-e2e-test | deployment-controller | isvc-lightgbm-predictor | ScalingReplicaSet | Scaled up replica set isvc-lightgbm-predictor-bdf964bd from 0 to 1
kserve-ci-e2e-test | kubelet | isvc-lightgbm-predictor-bdf964bd-9rkxl | Pulling | Pulling image "quay.io/opendatahub/kserve-storage-initializer@sha256:bfc8f044ee2a84e90b7b9876040a412b44d7b121a59a5fe7b22a946b15b61519"
kserve-ci-e2e-test | multus | isvc-lightgbm-predictor-bdf964bd-9rkxl | AddedInterface | Add eth0 [10.133.0.26/23] from ovn-kubernetes
kserve-ci-e2e-test | kubelet | isvc-lightgbm-predictor-bdf964bd-9rkxl | Pulled | Successfully pulled image "quay.io/opendatahub/kserve-storage-initializer@sha256:bfc8f044ee2a84e90b7b9876040a412b44d7b121a59a5fe7b22a946b15b61519" in 3.115s (3.115s including waiting). Image size: 301492121 bytes.
kserve-ci-e2e-test | kubelet | isvc-lightgbm-predictor-bdf964bd-9rkxl | Created | Created container: storage-initializer
kserve-ci-e2e-test | kubelet | isvc-lightgbm-predictor-bdf964bd-9rkxl | Started | Started container storage-initializer
kserve-ci-e2e-test | kubelet | isvc-lightgbm-predictor-bdf964bd-9rkxl | Pulling | Pulling image "kserve/lgbserver:latest"
kserve-ci-e2e-test | kubelet | isvc-lightgbm-predictor-bdf964bd-9rkxl | Pulled | Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine
kserve-ci-e2e-test | kubelet | isvc-lightgbm-predictor-bdf964bd-9rkxl | Created | Created container: kserve-container
kserve-ci-e2e-test | kubelet | isvc-lightgbm-predictor-bdf964bd-9rkxl | Pulled | Successfully pulled image "kserve/lgbserver:latest" in 6.239s (6.239s including waiting). Image size: 606093581 bytes.
kserve-ci-e2e-test | kubelet | isvc-lightgbm-predictor-bdf964bd-9rkxl | Started | Started container kserve-container
kserve-ci-e2e-test | kubelet | isvc-lightgbm-predictor-bdf964bd-9rkxl | Created | Created container: kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-lightgbm-predictor-bdf964bd-9rkxl | Started | Started container kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-logger-predictor-7ffcf8d567-9khc6 | Unhealthy (x10) | Readiness probe failed: dial tcp 10.132.0.31:8080: connect: connection refused
kserve-ci-e2e-test | kubelet | isvc-logger-predictor-7ffcf8d567-9khc6 | Unhealthy (x10) | Readiness probe failed: HTTP probe failed with statuscode: 503
kserve-ci-e2e-test | kubelet | isvc-logger-predictor-7ffcf8d567-9khc6 | Unhealthy (x4) | Readiness probe failed: Get "https://10.132.0.31:8643/healthz": dial tcp 10.132.0.31:8643: connect: connection refused
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-lightgbm-predictor | FailedGetResourceMetric (x3) | failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-lightgbm-predictor | FailedComputeMetricsReplicas (x3) | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-lightgbm-predictor | FailedGetResourceMetric (x3) | failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-lightgbm-predictor | FailedComputeMetricsReplicas (x3) | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
kserve-ci-e2e-test | v1beta1Controllers | isvc-lightgbm | VirtualServiceCRDNotFound (x13) | Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.
kserve-ci-e2e-test | v1beta1Controllers | isvc-lightgbm | InferenceServiceReady | InferenceService [isvc-lightgbm] is Ready

kserve-ci-e2e-test

v1beta1Controllers

isvc-lightgbm-runtime

UpdateFailed

Failed to update status for InferenceService "isvc-lightgbm-runtime": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-lightgbm-runtime": the object has been modified; please apply your changes to the latest version and try again

kserve-ci-e2e-test

kubelet

isvc-lightgbm-predictor-bdf964bd-9rkxl

Killing

Stopping container kserve-container

kserve-ci-e2e-test

kubelet

isvc-lightgbm-predictor-bdf964bd-9rkxl

Killing

Stopping container kube-rbac-proxy

kserve-ci-e2e-test

deployment-controller

isvc-lightgbm-runtime-predictor

ScalingReplicaSet

Scaled up replica set isvc-lightgbm-runtime-predictor-749c4f6d58 from 0 to 1

kserve-ci-e2e-test

replicaset-controller

isvc-lightgbm-runtime-predictor-749c4f6d58

SuccessfulCreate

Created pod: isvc-lightgbm-runtime-predictor-749c4f6d58-nc275

kserve-ci-e2e-test

kubelet

isvc-lightgbm-runtime-predictor-749c4f6d58-nc275

Started

Started container storage-initializer

kserve-ci-e2e-test

multus

isvc-lightgbm-runtime-predictor-749c4f6d58-nc275

AddedInterface

Add eth0 [10.133.0.27/23] from ovn-kubernetes

kserve-ci-e2e-test

kubelet

isvc-lightgbm-runtime-predictor-749c4f6d58-nc275

Pulled

Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:bfc8f044ee2a84e90b7b9876040a412b44d7b121a59a5fe7b22a946b15b61519" already present on machine

kserve-ci-e2e-test

kubelet

isvc-lightgbm-runtime-predictor-749c4f6d58-nc275

Created

Created container: storage-initializer
(x10)

kserve-ci-e2e-test

kubelet

isvc-lightgbm-predictor-bdf964bd-9rkxl

Unhealthy

Readiness probe failed: dial tcp 10.133.0.26:8080: connect: connection refused

kserve-ci-e2e-test

kubelet

isvc-lightgbm-predictor-bdf964bd-9rkxl

Unhealthy

Readiness probe failed: Get "https://10.133.0.26:8643/healthz": dial tcp 10.133.0.26:8643: connect: connection refused

kserve-ci-e2e-test

kubelet

isvc-lightgbm-runtime-predictor-749c4f6d58-nc275

Created

Created container: kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-lightgbm-runtime-predictor-749c4f6d58-nc275

Created

Created container: kserve-container

kserve-ci-e2e-test

kubelet

isvc-lightgbm-runtime-predictor-749c4f6d58-nc275

Pulled

Container image "kserve/lgbserver:latest" already present on machine

kserve-ci-e2e-test

kubelet

isvc-lightgbm-runtime-predictor-749c4f6d58-nc275

Started

Started container kserve-container

kserve-ci-e2e-test

kubelet

isvc-lightgbm-runtime-predictor-749c4f6d58-nc275

Pulled

Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine

kserve-ci-e2e-test

kubelet

isvc-lightgbm-runtime-predictor-749c4f6d58-nc275

Started

Started container kube-rbac-proxy
(x2)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-lightgbm-runtime-predictor

FailedGetResourceMetric

failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x2)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-lightgbm-runtime-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x13)

kserve-ci-e2e-test

v1beta1Controllers

isvc-lightgbm-runtime

VirtualServiceCRDNotFound

Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.

kserve-ci-e2e-test

v1beta1Controllers

isvc-lightgbm-runtime

InferenceServiceReady

InferenceService [isvc-lightgbm-runtime] is Ready

kserve-ci-e2e-test

replicaset-controller

isvc-lightgbm-v2-runtime-predictor-8765c9667

SuccessfulCreate

Created pod: isvc-lightgbm-v2-runtime-predictor-8765c9667-tdbmk

kserve-ci-e2e-test

deployment-controller

isvc-lightgbm-v2-runtime-predictor

ScalingReplicaSet

Scaled up replica set isvc-lightgbm-v2-runtime-predictor-8765c9667 from 0 to 1

kserve-ci-e2e-test

kubelet

isvc-lightgbm-runtime-predictor-749c4f6d58-nc275

Killing

Stopping container kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-lightgbm-runtime-predictor-749c4f6d58-nc275

Killing

Stopping container kserve-container
(x4)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-lightgbm-runtime-predictor

FailedGetResourceMetric

failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
(x4)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-lightgbm-runtime-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
(x2)

kserve-ci-e2e-test

v1beta1Controllers

isvc-lightgbm-v2-runtime

UpdateFailed

Failed to update status for InferenceService "isvc-lightgbm-v2-runtime": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-lightgbm-v2-runtime": the object has been modified; please apply your changes to the latest version and try again

kserve-ci-e2e-test

v1beta1Controllers

isvc-lightgbm-v2-runtime

InternalError

fails to update InferenceService status: Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-lightgbm-v2-runtime": the object has been modified; please apply your changes to the latest version and try again

kserve-ci-e2e-test

kubelet

isvc-lightgbm-v2-runtime-predictor-8765c9667-tdbmk

FailedMount

MountVolume.SetUp failed for volume "proxy-tls" : secret "isvc-lightgbm-v2-runtime-predictor-serving-cert" not found

kserve-ci-e2e-test

kubelet

isvc-lightgbm-v2-runtime-predictor-8765c9667-tdbmk

Pulled

Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:bfc8f044ee2a84e90b7b9876040a412b44d7b121a59a5fe7b22a946b15b61519" already present on machine

kserve-ci-e2e-test

multus

isvc-lightgbm-v2-runtime-predictor-8765c9667-tdbmk

AddedInterface

Add eth0 [10.133.0.28/23] from ovn-kubernetes
(x10)

kserve-ci-e2e-test

kubelet

isvc-lightgbm-runtime-predictor-749c4f6d58-nc275

Unhealthy

Readiness probe failed: dial tcp 10.133.0.27:8080: connect: connection refused

kserve-ci-e2e-test

kubelet

isvc-lightgbm-v2-runtime-predictor-8765c9667-tdbmk

Started

Started container storage-initializer

kserve-ci-e2e-test

kubelet

isvc-lightgbm-v2-runtime-predictor-8765c9667-tdbmk

Created

Created container: storage-initializer

kserve-ci-e2e-test

kubelet

isvc-lightgbm-runtime-predictor-749c4f6d58-nc275

Unhealthy

Readiness probe failed: Get "https://10.133.0.27:8643/healthz": dial tcp 10.133.0.27:8643: connect: connection refused

kserve-ci-e2e-test

kubelet

isvc-lightgbm-v2-runtime-predictor-8765c9667-tdbmk

Pulling

Pulling image "docker.io/seldonio/mlserver:1.7.1"

kserve-ci-e2e-test

kubelet

isvc-lightgbm-v2-runtime-predictor-8765c9667-tdbmk

Started

Started container kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-lightgbm-v2-runtime-predictor-8765c9667-tdbmk

Pulled

Successfully pulled image "docker.io/seldonio/mlserver:1.7.1" in 2m14.659s (2m14.659s including waiting). Image size: 10890461297 bytes.

kserve-ci-e2e-test

kubelet

isvc-lightgbm-v2-runtime-predictor-8765c9667-tdbmk

Created

Created container: kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-lightgbm-v2-runtime-predictor-8765c9667-tdbmk

Pulled

Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine

kserve-ci-e2e-test

kubelet

isvc-lightgbm-v2-runtime-predictor-8765c9667-tdbmk

Started

Started container kserve-container

kserve-ci-e2e-test

kubelet

isvc-lightgbm-v2-runtime-predictor-8765c9667-tdbmk

Created

Created container: kserve-container
(x13)

kserve-ci-e2e-test

v1beta1Controllers

isvc-lightgbm-v2-runtime

VirtualServiceCRDNotFound

Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.

kserve-ci-e2e-test

v1beta1Controllers

isvc-lightgbm-v2-runtime

InferenceServiceReady

InferenceService [isvc-lightgbm-v2-runtime] is Ready

kserve-ci-e2e-test

kubelet

isvc-lightgbm-v2-runtime-predictor-8765c9667-tdbmk

Killing

Stopping container kserve-container

kserve-ci-e2e-test

kubelet

isvc-lightgbm-v2-runtime-predictor-8765c9667-tdbmk

Killing

Stopping container kube-rbac-proxy

kserve-ci-e2e-test

replicaset-controller

isvc-lightgbm-v2-kserve-predictor-559bf6989

SuccessfulCreate

Created pod: isvc-lightgbm-v2-kserve-predictor-559bf6989-h2ckz
(x12)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-lightgbm-v2-runtime-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API

kserve-ci-e2e-test

deployment-controller

isvc-lightgbm-v2-kserve-predictor

ScalingReplicaSet

Scaled up replica set isvc-lightgbm-v2-kserve-predictor-559bf6989 from 0 to 1
(x12)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-lightgbm-v2-runtime-predictor

FailedGetResourceMetric

failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API

kserve-ci-e2e-test

kubelet

isvc-lightgbm-v2-kserve-predictor-559bf6989-h2ckz

FailedMount

MountVolume.SetUp failed for volume "proxy-tls" : secret "isvc-lightgbm-v2-kserve-predictor-serving-cert" not found

kserve-ci-e2e-test

v1beta1Controllers

isvc-lightgbm-v2-kserve

InternalError

fails to update InferenceService status: Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-lightgbm-v2-kserve": the object has been modified; please apply your changes to the latest version and try again
(x2)

kserve-ci-e2e-test

v1beta1Controllers

isvc-lightgbm-v2-kserve

UpdateFailed

Failed to update status for InferenceService "isvc-lightgbm-v2-kserve": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-lightgbm-v2-kserve": the object has been modified; please apply your changes to the latest version and try again

kserve-ci-e2e-test

kubelet

isvc-lightgbm-v2-kserve-predictor-559bf6989-h2ckz

Created

Created container: storage-initializer

kserve-ci-e2e-test

kubelet

isvc-lightgbm-v2-runtime-predictor-8765c9667-tdbmk

Unhealthy

Readiness probe failed: Get "https://10.133.0.28:8643/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)

kserve-ci-e2e-test

kubelet

isvc-lightgbm-v2-kserve-predictor-559bf6989-h2ckz

Started

Started container storage-initializer

kserve-ci-e2e-test

kubelet

isvc-lightgbm-v2-kserve-predictor-559bf6989-h2ckz

Pulled

Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:bfc8f044ee2a84e90b7b9876040a412b44d7b121a59a5fe7b22a946b15b61519" already present on machine

kserve-ci-e2e-test

multus

isvc-lightgbm-v2-kserve-predictor-559bf6989-h2ckz

AddedInterface

Add eth0 [10.133.0.29/23] from ovn-kubernetes

kserve-ci-e2e-test

kubelet

isvc-lightgbm-v2-kserve-predictor-559bf6989-h2ckz

Started

Started container kserve-container

kserve-ci-e2e-test

kubelet

isvc-lightgbm-v2-kserve-predictor-559bf6989-h2ckz

Pulled

Container image "kserve/lgbserver:latest" already present on machine

kserve-ci-e2e-test

kubelet

isvc-lightgbm-v2-kserve-predictor-559bf6989-h2ckz

Created

Created container: kserve-container

kserve-ci-e2e-test

kubelet

isvc-lightgbm-v2-kserve-predictor-559bf6989-h2ckz

Pulled

Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine

kserve-ci-e2e-test

kubelet

isvc-lightgbm-v2-kserve-predictor-559bf6989-h2ckz

Created

Created container: kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-lightgbm-v2-kserve-predictor-559bf6989-h2ckz

Started

Started container kube-rbac-proxy
(x3)

kserve-ci-e2e-test

kubelet

isvc-lightgbm-v2-kserve-predictor-559bf6989-h2ckz

Unhealthy

Readiness probe failed: dial tcp 10.133.0.29:8080: connect: connection refused

kserve-ci-e2e-test

v1beta1Controllers

isvc-lightgbm-v2-kserve

InferenceServiceReady

InferenceService [isvc-lightgbm-v2-kserve] is Ready
(x13)

kserve-ci-e2e-test

v1beta1Controllers

isvc-lightgbm-v2-kserve

VirtualServiceCRDNotFound

Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.
(x2)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-lightgbm-v2-kserve-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x2)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-lightgbm-v2-kserve-predictor

FailedGetResourceMetric

failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API

kserve-ci-e2e-test

kubelet

isvc-lightgbm-v2-kserve-predictor-559bf6989-h2ckz

Killing

Stopping container kserve-container

kserve-ci-e2e-test

replicaset-controller

isvc-mlflow-v2-runtime-predictor-5fdb47d546

SuccessfulCreate

Created pod: isvc-mlflow-v2-runtime-predictor-5fdb47d546-4c4mx

kserve-ci-e2e-test

v1beta1Controllers

isvc-mlflow-v2-runtime

UpdateFailed

Failed to update status for InferenceService "isvc-mlflow-v2-runtime": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-mlflow-v2-runtime": the object has been modified; please apply your changes to the latest version and try again

kserve-ci-e2e-test

kubelet

isvc-mlflow-v2-runtime-predictor-5fdb47d546-4c4mx

Pulled

Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:bfc8f044ee2a84e90b7b9876040a412b44d7b121a59a5fe7b22a946b15b61519" already present on machine

kserve-ci-e2e-test

kubelet

isvc-mlflow-v2-runtime-predictor-5fdb47d546-4c4mx

Created

Created container: storage-initializer

kserve-ci-e2e-test

kubelet

isvc-mlflow-v2-runtime-predictor-5fdb47d546-4c4mx

Started

Started container storage-initializer

kserve-ci-e2e-test

multus

isvc-mlflow-v2-runtime-predictor-5fdb47d546-4c4mx

AddedInterface

Add eth0 [10.133.0.30/23] from ovn-kubernetes

kserve-ci-e2e-test

kubelet

isvc-lightgbm-v2-kserve-predictor-559bf6989-h2ckz

Killing

Stopping container kube-rbac-proxy

kserve-ci-e2e-test

deployment-controller

isvc-mlflow-v2-runtime-predictor

ScalingReplicaSet

Scaled up replica set isvc-mlflow-v2-runtime-predictor-5fdb47d546 from 0 to 1

kserve-ci-e2e-test

kubelet

isvc-mlflow-v2-runtime-predictor-5fdb47d546-4c4mx

Created

Created container: kserve-container

kserve-ci-e2e-test

kubelet

isvc-mlflow-v2-runtime-predictor-5fdb47d546-4c4mx

Started

Started container kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-mlflow-v2-runtime-predictor-5fdb47d546-4c4mx

Created

Created container: kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-mlflow-v2-runtime-predictor-5fdb47d546-4c4mx

Pulled

Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine

kserve-ci-e2e-test

kubelet

isvc-mlflow-v2-runtime-predictor-5fdb47d546-4c4mx

Started

Started container kserve-container

kserve-ci-e2e-test

kubelet

isvc-mlflow-v2-runtime-predictor-5fdb47d546-4c4mx

Pulled

Container image "docker.io/seldonio/mlserver:1.7.1" already present on machine
(x2)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-mlflow-v2-runtime-predictor

FailedGetResourceMetric

failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x2)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-mlflow-v2-runtime-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API

kserve-ci-e2e-test

v1beta1Controllers

isvc-mlflow-v2-runtime

InferenceServiceReady

InferenceService [isvc-mlflow-v2-runtime] is Ready
(x12)

kserve-ci-e2e-test

v1beta1Controllers

isvc-mlflow-v2-runtime

VirtualServiceCRDNotFound

Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-mlflow-v2-runtime-predictor

FailedGetResourceMetric

failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-mlflow-v2-runtime-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)

kserve-ci-e2e-test

kubelet

isvc-mlflow-v2-runtime-predictor-5fdb47d546-4c4mx

Killing

Stopping container kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-mlflow-v2-runtime-predictor-5fdb47d546-4c4mx

Killing

Stopping container kserve-container

kserve-ci-e2e-test

deployment-controller

isvc-sklearn-mcp-predictor

ScalingReplicaSet

Scaled up replica set isvc-sklearn-mcp-predictor-b76fc9db7 from 0 to 1
(x2)

kserve-ci-e2e-test

v1beta1Controllers

isvc-sklearn-mcp

UpdateFailed

Failed to update status for InferenceService "isvc-sklearn-mcp": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-sklearn-mcp": the object has been modified; please apply your changes to the latest version and try again

kserve-ci-e2e-test

kubelet

isvc-sklearn-mcp-predictor-b76fc9db7-66dds

Created

Created container: storage-initializer

kserve-ci-e2e-test

v1beta1Controllers

isvc-sklearn-mcp

InternalError

fails to update InferenceService status: Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-sklearn-mcp": the object has been modified; please apply your changes to the latest version and try again

kserve-ci-e2e-test

kubelet

isvc-sklearn-mcp-predictor-b76fc9db7-66dds

Pulled

Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:bfc8f044ee2a84e90b7b9876040a412b44d7b121a59a5fe7b22a946b15b61519" already present on machine

kserve-ci-e2e-test

multus

isvc-sklearn-mcp-predictor-b76fc9db7-66dds

AddedInterface

Add eth0 [10.132.0.32/23] from ovn-kubernetes

kserve-ci-e2e-test

kubelet

isvc-sklearn-mcp-predictor-b76fc9db7-66dds

Started

Started container storage-initializer

kserve-ci-e2e-test

replicaset-controller

isvc-sklearn-mcp-predictor-b76fc9db7

SuccessfulCreate

Created pod: isvc-sklearn-mcp-predictor-b76fc9db7-66dds

kserve-ci-e2e-test

kubelet

isvc-sklearn-mcp-predictor-b76fc9db7-66dds

Pulled

Container image "quay.io/opendatahub/sklearn-serving-runtime:odh-pr-1436" already present on machine

kserve-ci-e2e-test

kubelet

isvc-sklearn-mcp-predictor-b76fc9db7-66dds

Created

Created container: kserve-container

kserve-ci-e2e-test

kubelet

isvc-sklearn-mcp-predictor-b76fc9db7-66dds

Pulling

Pulling image "quay.io/opendatahub/kserve-agent:latest"

kserve-ci-e2e-test

kubelet

isvc-sklearn-mcp-predictor-b76fc9db7-66dds

Started

Started container kserve-container

kserve-ci-e2e-test

kubelet

isvc-sklearn-mcp-predictor-b76fc9db7-66dds

Pulled

Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine

kserve-ci-e2e-test

kubelet

isvc-sklearn-mcp-predictor-b76fc9db7-66dds

Created

Created container: kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-sklearn-mcp-predictor-b76fc9db7-66dds

Pulled

Successfully pulled image "quay.io/opendatahub/kserve-agent:latest" in 1.663s (1.663s including waiting). Image size: 237662759 bytes.

kserve-ci-e2e-test

kubelet

isvc-sklearn-mcp-predictor-b76fc9db7-66dds

Started

Started container kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-sklearn-mcp-predictor-b76fc9db7-66dds

Created

Created container: kserve-agent

kserve-ci-e2e-test

kubelet

isvc-sklearn-mcp-predictor-b76fc9db7-66dds

Started

Started container kserve-agent
(x12)

kserve-ci-e2e-test

v1beta1Controllers

isvc-sklearn-mcp

VirtualServiceCRDNotFound

Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.

kserve-ci-e2e-test

v1beta1Controllers

isvc-sklearn-mcp

InferenceServiceReady

InferenceService [isvc-sklearn-mcp] is Ready

kserve-ci-e2e-test

kubelet

isvc-sklearn-mcp-predictor-b76fc9db7-66dds

Killing

Stopping container kserve-agent

kserve-ci-e2e-test

multus

isvc-paddle-predictor-6b8b7cfb4b-qz6kx

AddedInterface

Add eth0 [10.133.0.31/23] from ovn-kubernetes

kserve-ci-e2e-test

kubelet

isvc-paddle-predictor-6b8b7cfb4b-qz6kx

Pulled

Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:bfc8f044ee2a84e90b7b9876040a412b44d7b121a59a5fe7b22a946b15b61519" already present on machine

kserve-ci-e2e-test

replicaset-controller

isvc-paddle-predictor-6b8b7cfb4b

SuccessfulCreate

Created pod: isvc-paddle-predictor-6b8b7cfb4b-qz6kx

kserve-ci-e2e-test

deployment-controller

isvc-paddle-predictor

ScalingReplicaSet

Scaled up replica set isvc-paddle-predictor-6b8b7cfb4b from 0 to 1

kserve-ci-e2e-test

kubelet

isvc-paddle-predictor-6b8b7cfb4b-qz6kx

Created

Created container: storage-initializer

kserve-ci-e2e-test

kubelet

isvc-paddle-predictor-6b8b7cfb4b-qz6kx

Started

Started container storage-initializer

kserve-ci-e2e-test

kubelet

isvc-sklearn-mcp-predictor-b76fc9db7-66dds

Killing

Stopping container kube-rbac-proxy

kserve-ci-e2e-test

v1beta1Controllers

isvc-paddle

UpdateFailed

Failed to update status for InferenceService "isvc-paddle": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-paddle": the object has been modified; please apply your changes to the latest version and try again

kserve-ci-e2e-test

kubelet

isvc-sklearn-mcp-predictor-b76fc9db7-66dds

Killing

Stopping container kserve-container

kserve-ci-e2e-test

kubelet

isvc-paddle-predictor-6b8b7cfb4b-qz6kx

Pulling

Pulling image "kserve/paddleserver:latest"

kserve-ci-e2e-test

kubelet

isvc-paddle-predictor-6b8b7cfb4b-qz6kx

Created

Created container: kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-paddle-predictor-6b8b7cfb4b-qz6kx

Pulled

Successfully pulled image "kserve/paddleserver:latest" in 10.711s (10.711s including waiting). Image size: 1162827001 bytes.

kserve-ci-e2e-test

kubelet

isvc-paddle-predictor-6b8b7cfb4b-qz6kx

Started

Started container kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-paddle-predictor-6b8b7cfb4b-qz6kx

Started

Started container kserve-container

kserve-ci-e2e-test

kubelet

isvc-paddle-predictor-6b8b7cfb4b-qz6kx

Created

Created container: kserve-container

kserve-ci-e2e-test

kubelet

isvc-paddle-predictor-6b8b7cfb4b-qz6kx

Pulled

Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine
(x3)

kserve-ci-e2e-test

kubelet

isvc-sklearn-mcp-predictor-b76fc9db7-66dds

Unhealthy

Readiness probe failed: Get "http://10.132.0.32:8080/v1/models/isvc-sklearn-mcp": dial tcp 10.132.0.32:8080: connect: connection refused
(x6)

kserve-ci-e2e-test

kubelet

isvc-sklearn-mcp-predictor-b76fc9db7-66dds

Unhealthy

Readiness probe failed: Get "https://10.132.0.32:8643/healthz": dial tcp 10.132.0.32:8643: connect: connection refused

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-paddle-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-paddle-predictor

FailedGetResourceMetric

failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
(x3)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-paddle-predictor

FailedGetResourceMetric

failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x3)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-paddle-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API

kserve-ci-e2e-test

v1beta1Controllers

isvc-paddle

InferenceServiceReady

InferenceService [isvc-paddle] is Ready
(x12)

kserve-ci-e2e-test

v1beta1Controllers

isvc-paddle

VirtualServiceCRDNotFound

Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.

kserve-ci-e2e-test

kubelet

isvc-paddle-predictor-6b8b7cfb4b-qz6kx

Killing

Stopping container kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-paddle-predictor-6b8b7cfb4b-qz6kx

Killing

Stopping container kserve-container

kserve-ci-e2e-test

deployment-controller

isvc-paddle-runtime-predictor

ScalingReplicaSet

Scaled up replica set isvc-paddle-runtime-predictor-7f4d4f9dc8 from 0 to 1

kserve-ci-e2e-test

replicaset-controller

isvc-paddle-runtime-predictor-7f4d4f9dc8

SuccessfulCreate

Created pod: isvc-paddle-runtime-predictor-7f4d4f9dc8-r8b64

kserve-ci-e2e-test

kubelet

isvc-paddle-runtime-predictor-7f4d4f9dc8-r8b64

FailedMount

MountVolume.SetUp failed for volume "proxy-tls" : secret "isvc-paddle-runtime-predictor-serving-cert" not found

kserve-ci-e2e-test

v1beta1Controllers

isvc-paddle-runtime

UpdateFailed

Failed to update status for InferenceService "isvc-paddle-runtime": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-paddle-runtime": the object has been modified; please apply your changes to the latest version and try again

kserve-ci-e2e-test

kubelet

isvc-paddle-runtime-predictor-7f4d4f9dc8-r8b64

Started

Started container storage-initializer
(x7)

kserve-ci-e2e-test

kubelet

isvc-paddle-predictor-6b8b7cfb4b-qz6kx

Unhealthy

Readiness probe failed: dial tcp 10.133.0.31:8080: connect: connection refused

kserve-ci-e2e-test

kubelet

isvc-paddle-predictor-6b8b7cfb4b-qz6kx

Unhealthy

Readiness probe failed: Get "https://10.133.0.31:8643/healthz": dial tcp 10.133.0.31:8643: connect: connection refused

kserve-ci-e2e-test

multus

isvc-paddle-runtime-predictor-7f4d4f9dc8-r8b64

AddedInterface

Add eth0 [10.133.0.32/23] from ovn-kubernetes

kserve-ci-e2e-test

kubelet

isvc-paddle-runtime-predictor-7f4d4f9dc8-r8b64

Pulled

Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:bfc8f044ee2a84e90b7b9876040a412b44d7b121a59a5fe7b22a946b15b61519" already present on machine

kserve-ci-e2e-test

kubelet

isvc-paddle-runtime-predictor-7f4d4f9dc8-r8b64

Created

Created container: storage-initializer

kserve-ci-e2e-test

kubelet

isvc-paddle-runtime-predictor-7f4d4f9dc8-r8b64

Pulled

Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine

kserve-ci-e2e-test

kubelet

isvc-paddle-runtime-predictor-7f4d4f9dc8-r8b64

Pulled

Container image "kserve/paddleserver:latest" already present on machine

kserve-ci-e2e-test

kubelet

isvc-paddle-runtime-predictor-7f4d4f9dc8-r8b64

Created

Created container: kserve-container

kserve-ci-e2e-test

kubelet

isvc-paddle-runtime-predictor-7f4d4f9dc8-r8b64

Started

Started container kserve-container

kserve-ci-e2e-test

kubelet

isvc-paddle-runtime-predictor-7f4d4f9dc8-r8b64

Created

Created container: kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-paddle-runtime-predictor-7f4d4f9dc8-r8b64

Started

Started container kube-rbac-proxy

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-paddle-runtime-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-paddle-runtime-predictor

FailedGetResourceMetric

failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x13)

kserve-ci-e2e-test

v1beta1Controllers

isvc-paddle-runtime

VirtualServiceCRDNotFound

Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.

kserve-ci-e2e-test | v1beta1Controllers | isvc-paddle-runtime | InferenceServiceReady | InferenceService [isvc-paddle-runtime] is Ready
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-paddle-runtime-predictor | FailedGetResourceMetric | failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready) (x3)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-paddle-runtime-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready) (x3)
kserve-ci-e2e-test | deployment-controller | isvc-paddle-v2-kserve-predictor | ScalingReplicaSet | Scaled up replica set isvc-paddle-v2-kserve-predictor-7dbd59854 from 0 to 1
kserve-ci-e2e-test | kubelet | isvc-paddle-runtime-predictor-7f4d4f9dc8-r8b64 | Killing | Stopping container kserve-container
kserve-ci-e2e-test | kubelet | isvc-paddle-runtime-predictor-7f4d4f9dc8-r8b64 | Killing | Stopping container kube-rbac-proxy
kserve-ci-e2e-test | v1beta1Controllers | isvc-paddle-v2-kserve | UpdateFailed | Failed to update status for InferenceService "isvc-paddle-v2-kserve": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-paddle-v2-kserve": the object has been modified; please apply your changes to the latest version and try again
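The UpdateFailed / InternalError "object has been modified" events scattered through this log are Kubernetes optimistic-concurrency conflicts: a status update was submitted with a stale `resourceVersion` because another writer (here, a second reconcile of the same InferenceService) got in first. Controllers normally absorb these by re-reading the object and retrying the write, which is why the InferenceServices still end up Ready. A minimal generic sketch of that retry loop, with an in-memory stand-in for the API server (all names here are illustrative, not KServe's actual code, which uses client-go's `retry.RetryOnConflict`):

```python
class ConflictError(Exception):
    """Stands in for a Kubernetes 409 Conflict (stale resourceVersion)."""

def update_with_retry(get_latest, try_update, max_attempts=5):
    """On each attempt, re-read the object so the write carries the current
    resourceVersion, then apply the mutation; a conflict means a concurrent
    writer won the race, so re-read and try again instead of failing."""
    last_err = None
    for _ in range(max_attempts):
        obj = get_latest()
        try:
            return try_update(obj)
        except ConflictError as err:
            last_err = err  # stale resourceVersion; loop re-reads
    raise last_err

# Tiny in-memory stand-in for the API server: the first write races with a
# concurrent update and is rejected; the retry sees the new version and wins.
store = {"resourceVersion": 1, "status": "Pending"}
attempts = {"n": 0}

def get_latest():
    return dict(store)

def try_update(obj):
    attempts["n"] += 1
    if attempts["n"] == 1:
        store["resourceVersion"] += 1  # concurrent writer sneaks in first
        raise ConflictError("object has been modified")
    if obj["resourceVersion"] != store["resourceVersion"]:
        raise ConflictError("object has been modified")
    store["status"] = "Ready"
    store["resourceVersion"] += 1
    return store["status"]
```

Because the retry succeeds on a later attempt, occasional events like these during rapid status churn are noise rather than test failures.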

kserve-ci-e2e-test | replicaset-controller | isvc-paddle-v2-kserve-predictor-7dbd59854 | SuccessfulCreate | Created pod: isvc-paddle-v2-kserve-predictor-7dbd59854-4jxs6
kserve-ci-e2e-test | kubelet | isvc-paddle-v2-kserve-predictor-7dbd59854-4jxs6 | Started | Started container storage-initializer
kserve-ci-e2e-test | kubelet | isvc-paddle-v2-kserve-predictor-7dbd59854-4jxs6 | Created | Created container: storage-initializer
kserve-ci-e2e-test | multus | isvc-paddle-v2-kserve-predictor-7dbd59854-4jxs6 | AddedInterface | Add eth0 [10.133.0.33/23] from ovn-kubernetes
kserve-ci-e2e-test | kubelet | isvc-paddle-v2-kserve-predictor-7dbd59854-4jxs6 | Pulled | Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:bfc8f044ee2a84e90b7b9876040a412b44d7b121a59a5fe7b22a946b15b61519" already present on machine
kserve-ci-e2e-test | kubelet | isvc-paddle-runtime-predictor-7f4d4f9dc8-r8b64 | Unhealthy | Readiness probe failed: dial tcp 10.133.0.32:8080: connect: connection refused (x7)
kserve-ci-e2e-test | kubelet | isvc-paddle-runtime-predictor-7f4d4f9dc8-r8b64 | Unhealthy | Readiness probe failed: Get "https://10.133.0.32:8643/healthz": dial tcp 10.133.0.32:8643: connect: connection refused
kserve-ci-e2e-test | kubelet | isvc-paddle-v2-kserve-predictor-7dbd59854-4jxs6 | Pulled | Container image "kserve/paddleserver:latest" already present on machine
kserve-ci-e2e-test | kubelet | isvc-paddle-v2-kserve-predictor-7dbd59854-4jxs6 | Created | Created container: kserve-container
kserve-ci-e2e-test | kubelet | isvc-paddle-v2-kserve-predictor-7dbd59854-4jxs6 | Started | Started container kserve-container
kserve-ci-e2e-test | kubelet | isvc-paddle-v2-kserve-predictor-7dbd59854-4jxs6 | Pulled | Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine
kserve-ci-e2e-test | kubelet | isvc-paddle-v2-kserve-predictor-7dbd59854-4jxs6 | Created | Created container: kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-paddle-v2-kserve-predictor-7dbd59854-4jxs6 | Started | Started container kube-rbac-proxy
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-paddle-v2-kserve-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API (x3)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-paddle-v2-kserve-predictor | FailedGetResourceMetric | failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API (x3)
kserve-ci-e2e-test | v1beta1Controllers | isvc-paddle-v2-kserve | InferenceServiceReady | InferenceService [isvc-paddle-v2-kserve] is Ready
kserve-ci-e2e-test | v1beta1Controllers | isvc-paddle-v2-kserve | VirtualServiceCRDNotFound | Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true. (x12)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-paddle-v2-kserve-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-paddle-v2-kserve-predictor | FailedGetResourceMetric | failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
kserve-ci-e2e-test | kubelet | isvc-paddle-v2-kserve-predictor-7dbd59854-4jxs6 | Killing | Stopping container kserve-container
kserve-ci-e2e-test | kubelet | isvc-pmml-predictor-8bb578669-ksmbq | Created | Created container: storage-initializer
kserve-ci-e2e-test | replicaset-controller | isvc-pmml-predictor-8bb578669 | SuccessfulCreate | Created pod: isvc-pmml-predictor-8bb578669-ksmbq
kserve-ci-e2e-test | kubelet | isvc-paddle-v2-kserve-predictor-7dbd59854-4jxs6 | Killing | Stopping container kube-rbac-proxy
kserve-ci-e2e-test | v1beta1Controllers | isvc-pmml | InternalError | fails to update InferenceService status: Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-pmml": the object has been modified; please apply your changes to the latest version and try again
kserve-ci-e2e-test | deployment-controller | isvc-pmml-predictor | ScalingReplicaSet | Scaled up replica set isvc-pmml-predictor-8bb578669 from 0 to 1
kserve-ci-e2e-test | v1beta1Controllers | isvc-pmml | UpdateFailed | Failed to update status for InferenceService "isvc-pmml": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-pmml": the object has been modified; please apply your changes to the latest version and try again (x2)
kserve-ci-e2e-test | multus | isvc-pmml-predictor-8bb578669-ksmbq | AddedInterface | Add eth0 [10.132.0.33/23] from ovn-kubernetes
kserve-ci-e2e-test | kubelet | isvc-pmml-predictor-8bb578669-ksmbq | Pulled | Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:bfc8f044ee2a84e90b7b9876040a412b44d7b121a59a5fe7b22a946b15b61519" already present on machine
kserve-ci-e2e-test | kubelet | isvc-pmml-predictor-8bb578669-ksmbq | Started | Started container storage-initializer
kserve-ci-e2e-test | kubelet | isvc-paddle-v2-kserve-predictor-7dbd59854-4jxs6 | Unhealthy | Readiness probe failed: Get "https://10.133.0.33:8643/healthz": dial tcp 10.133.0.33:8643: connect: connection refused
kserve-ci-e2e-test | kubelet | isvc-paddle-v2-kserve-predictor-7dbd59854-4jxs6 | Unhealthy | Readiness probe failed: dial tcp 10.133.0.33:8080: connect: connection refused (x8)
kserve-ci-e2e-test | kubelet | isvc-pmml-predictor-8bb578669-ksmbq | Pulling | Pulling image "kserve/pmmlserver:latest"
kserve-ci-e2e-test | kubelet | isvc-pmml-predictor-8bb578669-ksmbq | Created | Created container: kserve-container
kserve-ci-e2e-test | kubelet | isvc-pmml-predictor-8bb578669-ksmbq | Started | Started container kserve-container
kserve-ci-e2e-test | kubelet | isvc-pmml-predictor-8bb578669-ksmbq | Pulled | Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine
kserve-ci-e2e-test | kubelet | isvc-pmml-predictor-8bb578669-ksmbq | Created | Created container: kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-pmml-predictor-8bb578669-ksmbq | Started | Started container kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-pmml-predictor-8bb578669-ksmbq | Pulled | Successfully pulled image "kserve/pmmlserver:latest" in 6.72s (6.72s including waiting). Image size: 800924023 bytes.
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-pmml-predictor | FailedGetResourceMetric | failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API (x2)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-pmml-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API (x2)
kserve-ci-e2e-test | kubelet | isvc-pmml-predictor-8bb578669-ksmbq | Unhealthy | Readiness probe failed: dial tcp 10.132.0.33:8080: connect: connection refused (x10)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-pmml-predictor | FailedGetResourceMetric | failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready) (x4)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-pmml-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready) (x4)
kserve-ci-e2e-test | v1beta1Controllers | isvc-pmml | InferenceServiceReady | InferenceService [isvc-pmml] is Ready
kserve-ci-e2e-test | v1beta1Controllers | isvc-pmml | VirtualServiceCRDNotFound | Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true. (x12)
kserve-ci-e2e-test | kubelet | isvc-pmml-runtime-predictor-67bc544947-zqttc | Created | Created container: storage-initializer
kserve-ci-e2e-test | replicaset-controller | isvc-pmml-runtime-predictor-67bc544947 | SuccessfulCreate | Created pod: isvc-pmml-runtime-predictor-67bc544947-zqttc
kserve-ci-e2e-test | multus | isvc-pmml-runtime-predictor-67bc544947-zqttc | AddedInterface | Add eth0 [10.133.0.34/23] from ovn-kubernetes
kserve-ci-e2e-test | kubelet | isvc-pmml-runtime-predictor-67bc544947-zqttc | Pulled | Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:bfc8f044ee2a84e90b7b9876040a412b44d7b121a59a5fe7b22a946b15b61519" already present on machine
kserve-ci-e2e-test | kubelet | isvc-pmml-predictor-8bb578669-ksmbq | Killing | Stopping container kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-pmml-runtime-predictor-67bc544947-zqttc | Started | Started container storage-initializer
kserve-ci-e2e-test | kubelet | isvc-pmml-predictor-8bb578669-ksmbq | Killing | Stopping container kserve-container
kserve-ci-e2e-test | v1beta1Controllers | isvc-pmml-runtime | UpdateFailed | Failed to update status for InferenceService "isvc-pmml-runtime": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-pmml-runtime": the object has been modified; please apply your changes to the latest version and try again
kserve-ci-e2e-test | deployment-controller | isvc-pmml-runtime-predictor | ScalingReplicaSet | Scaled up replica set isvc-pmml-runtime-predictor-67bc544947 from 0 to 1
kserve-ci-e2e-test | kubelet | isvc-pmml-predictor-8bb578669-ksmbq | Unhealthy | Readiness probe failed: Get "https://10.132.0.33:8643/healthz": dial tcp 10.132.0.33:8643: connect: connection refused
kserve-ci-e2e-test | kubelet | isvc-pmml-runtime-predictor-67bc544947-zqttc | Pulling | Pulling image "kserve/pmmlserver:latest"
kserve-ci-e2e-test | kubelet | isvc-pmml-runtime-predictor-67bc544947-zqttc | Started | Started container kserve-container
kserve-ci-e2e-test | kubelet | isvc-pmml-runtime-predictor-67bc544947-zqttc | Started | Started container kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-pmml-runtime-predictor-67bc544947-zqttc | Created | Created container: kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-pmml-runtime-predictor-67bc544947-zqttc | Pulled | Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine
kserve-ci-e2e-test | kubelet | isvc-pmml-runtime-predictor-67bc544947-zqttc | Created | Created container: kserve-container
kserve-ci-e2e-test | kubelet | isvc-pmml-runtime-predictor-67bc544947-zqttc | Pulled | Successfully pulled image "kserve/pmmlserver:latest" in 6.915s (6.915s including waiting). Image size: 800924023 bytes.
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-pmml-runtime-predictor | FailedGetResourceMetric | failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API (x3)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-pmml-runtime-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API (x3)
kserve-ci-e2e-test | kubelet | isvc-pmml-runtime-predictor-67bc544947-zqttc | Unhealthy | Readiness probe failed: dial tcp 10.133.0.34:8080: connect: connection refused (x9)
kserve-ci-e2e-test | v1beta1Controllers | isvc-pmml-runtime | InferenceServiceReady | InferenceService [isvc-pmml-runtime] is Ready
kserve-ci-e2e-test | v1beta1Controllers | isvc-pmml-runtime | VirtualServiceCRDNotFound | Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true. (x12)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-pmml-runtime-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready) (x3)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-pmml-runtime-predictor | FailedGetResourceMetric | failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready) (x3)
kserve-ci-e2e-test | kubelet | isvc-pmml-runtime-predictor-67bc544947-zqttc | Killing | Stopping container kserve-container
kserve-ci-e2e-test | v1beta1Controllers | isvc-pmml-v2-kserve | UpdateFailed | Failed to update status for InferenceService "isvc-pmml-v2-kserve": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-pmml-v2-kserve": the object has been modified; please apply your changes to the latest version and try again
kserve-ci-e2e-test | deployment-controller | isvc-pmml-v2-kserve-predictor | ScalingReplicaSet | Scaled up replica set isvc-pmml-v2-kserve-predictor-6578f8ffc7 from 0 to 1
kserve-ci-e2e-test | replicaset-controller | isvc-pmml-v2-kserve-predictor-6578f8ffc7 | SuccessfulCreate | Created pod: isvc-pmml-v2-kserve-predictor-6578f8ffc7-8px72
kserve-ci-e2e-test | kubelet | isvc-pmml-runtime-predictor-67bc544947-zqttc | Killing | Stopping container kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-pmml-v2-kserve-predictor-6578f8ffc7-8px72 | FailedMount | MountVolume.SetUp failed for volume "proxy-tls" : secret "isvc-pmml-v2-kserve-predictor-serving-cert" not found
kserve-ci-e2e-test | kubelet | isvc-pmml-v2-kserve-predictor-6578f8ffc7-8px72 | Created | Created container: storage-initializer
kserve-ci-e2e-test | kubelet | isvc-pmml-v2-kserve-predictor-6578f8ffc7-8px72 | Started | Started container storage-initializer
kserve-ci-e2e-test | multus | isvc-pmml-v2-kserve-predictor-6578f8ffc7-8px72 | AddedInterface | Add eth0 [10.132.0.34/23] from ovn-kubernetes
kserve-ci-e2e-test | kubelet | isvc-pmml-v2-kserve-predictor-6578f8ffc7-8px72 | Pulled | Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:bfc8f044ee2a84e90b7b9876040a412b44d7b121a59a5fe7b22a946b15b61519" already present on machine
kserve-ci-e2e-test | kubelet | isvc-pmml-v2-kserve-predictor-6578f8ffc7-8px72 | Pulled | Container image "kserve/pmmlserver:latest" already present on machine
kserve-ci-e2e-test | kubelet | isvc-pmml-v2-kserve-predictor-6578f8ffc7-8px72 | Started | Started container kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-pmml-v2-kserve-predictor-6578f8ffc7-8px72 | Created | Created container: kserve-container
kserve-ci-e2e-test | kubelet | isvc-pmml-v2-kserve-predictor-6578f8ffc7-8px72 | Started | Started container kserve-container
kserve-ci-e2e-test | kubelet | isvc-pmml-v2-kserve-predictor-6578f8ffc7-8px72 | Pulled | Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine
kserve-ci-e2e-test | kubelet | isvc-pmml-v2-kserve-predictor-6578f8ffc7-8px72 | Created | Created container: kube-rbac-proxy
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-pmml-v2-kserve-predictor | FailedGetResourceMetric | failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API (x3)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-pmml-v2-kserve-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API (x3)
kserve-ci-e2e-test | v1beta1Controllers | isvc-pmml-v2-kserve | VirtualServiceCRDNotFound | Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true. (x13)
kserve-ci-e2e-test | v1beta1Controllers | isvc-pmml-v2-kserve | InferenceServiceReady | InferenceService [isvc-pmml-v2-kserve] is Ready
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-pmml-v2-kserve-predictor | FailedGetResourceMetric | failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready) (x3)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-pmml-v2-kserve-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready) (x3)
kserve-ci-e2e-test | kubelet | isvc-pmml-v2-kserve-predictor-6578f8ffc7-8px72 | Killing | Stopping container kserve-container
kserve-ci-e2e-test | replicaset-controller | isvc-primary-60b405-predictor-b697f8fbf | SuccessfulCreate | Created pod: isvc-primary-60b405-predictor-b697f8fbf-pgxzp
kserve-ci-e2e-test | deployment-controller | isvc-primary-60b405-predictor | ScalingReplicaSet | Scaled up replica set isvc-primary-60b405-predictor-b697f8fbf from 0 to 1
kserve-ci-e2e-test | kubelet | isvc-primary-60b405-predictor-b697f8fbf-pgxzp | FailedMount | MountVolume.SetUp failed for volume "proxy-tls" : secret "isvc-primary-60b405-predictor-serving-cert" not found
kserve-ci-e2e-test | kubelet | isvc-pmml-v2-kserve-predictor-6578f8ffc7-8px72 | Killing | Stopping container kube-rbac-proxy
kserve-ci-e2e-test | v1beta1Controllers | isvc-primary-60b405 | UpdateFailed | Failed to update status for InferenceService "isvc-primary-60b405": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-primary-60b405": the object has been modified; please apply your changes to the latest version and try again (x2)
kserve-ci-e2e-test | v1beta1Controllers | isvc-primary-60b405 | InternalError | fails to update InferenceService status: Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-primary-60b405": the object has been modified; please apply your changes to the latest version and try again (x2)
kserve-ci-e2e-test | kubelet | isvc-primary-60b405-predictor-b697f8fbf-pgxzp | Started | Started container storage-initializer
kserve-ci-e2e-test | kubelet | isvc-pmml-v2-kserve-predictor-6578f8ffc7-8px72 | Unhealthy | Readiness probe failed: dial tcp 10.132.0.34:8080: connect: connection refused (x11)
kserve-ci-e2e-test | multus | isvc-primary-60b405-predictor-b697f8fbf-pgxzp | AddedInterface | Add eth0 [10.132.0.35/23] from ovn-kubernetes
kserve-ci-e2e-test | kubelet | isvc-primary-60b405-predictor-b697f8fbf-pgxzp | Pulled | Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:bfc8f044ee2a84e90b7b9876040a412b44d7b121a59a5fe7b22a946b15b61519" already present on machine
kserve-ci-e2e-test | kubelet | isvc-pmml-v2-kserve-predictor-6578f8ffc7-8px72 | Unhealthy | Readiness probe failed: Get "https://10.132.0.34:8643/healthz": dial tcp 10.132.0.34:8643: connect: connection refused
kserve-ci-e2e-test | kubelet | isvc-primary-60b405-predictor-b697f8fbf-pgxzp | Created | Created container: storage-initializer
kserve-ci-e2e-test | kubelet | isvc-primary-60b405-predictor-b697f8fbf-pgxzp | Pulled | Container image "quay.io/opendatahub/sklearn-serving-runtime:odh-pr-1436" already present on machine
kserve-ci-e2e-test | kubelet | isvc-primary-60b405-predictor-b697f8fbf-pgxzp | Started | Started container kserve-container
kserve-ci-e2e-test | kubelet | isvc-primary-60b405-predictor-b697f8fbf-pgxzp | Created | Created container: kserve-container
kserve-ci-e2e-test | kubelet | isvc-primary-60b405-predictor-b697f8fbf-pgxzp | Started | Started container kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-primary-60b405-predictor-b697f8fbf-pgxzp | Pulled | Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine
kserve-ci-e2e-test | kubelet | isvc-primary-60b405-predictor-b697f8fbf-pgxzp | Created | Created container: kube-rbac-proxy
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-primary-60b405-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API (x3)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-primary-60b405-predictor | FailedGetResourceMetric | failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API (x3)
kserve-ci-e2e-test | kubelet | isvc-primary-60b405-predictor-b697f8fbf-pgxzp | Unhealthy | Readiness probe failed: dial tcp 10.132.0.35:8080: connect: connection refused (x8)
kserve-ci-e2e-test | v1beta1Controllers | isvc-primary-60b405 | InferenceServiceReady | InferenceService [isvc-primary-60b405] is Ready
kserve-ci-e2e-test | v1beta1Controllers | isvc-primary-60b405 | VirtualServiceCRDNotFound | Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true. (x13)
kserve-ci-e2e-test | kubelet | isvc-secondary-60b405-predictor-685bf5b5fd-zsvqq | FailedMount | MountVolume.SetUp failed for volume "proxy-tls" : secret "isvc-secondary-60b405-predictor-serving-cert" not found
kserve-ci-e2e-test | deployment-controller | isvc-secondary-60b405-predictor | ScalingReplicaSet | Scaled up replica set isvc-secondary-60b405-predictor-685bf5b5fd from 0 to 1
kserve-ci-e2e-test | v1beta1Controllers | isvc-secondary-60b405 | UpdateFailed | Failed to update status for InferenceService "isvc-secondary-60b405": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-secondary-60b405": the object has been modified; please apply your changes to the latest version and try again (x2)
kserve-ci-e2e-test | v1beta1Controllers | isvc-secondary-60b405 | InternalError | fails to update InferenceService status: Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-secondary-60b405": the object has been modified; please apply your changes to the latest version and try again (x2)
kserve-ci-e2e-test | replicaset-controller | isvc-secondary-60b405-predictor-685bf5b5fd | SuccessfulCreate | Created pod: isvc-secondary-60b405-predictor-685bf5b5fd-zsvqq
kserve-ci-e2e-test | multus | isvc-secondary-60b405-predictor-685bf5b5fd-zsvqq | AddedInterface | Add eth0 [10.132.0.36/23] from ovn-kubernetes
kserve-ci-e2e-test | kubelet | isvc-secondary-60b405-predictor-685bf5b5fd-zsvqq | Pulled | Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:bfc8f044ee2a84e90b7b9876040a412b44d7b121a59a5fe7b22a946b15b61519" already present on machine (x2)
kserve-ci-e2e-test | kubelet | isvc-secondary-60b405-predictor-685bf5b5fd-zsvqq | Started | Started container storage-initializer (x2)
kserve-ci-e2e-test | kubelet | isvc-secondary-60b405-predictor-685bf5b5fd-zsvqq | Created | Created container: storage-initializer (x2)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-primary-60b405-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready) (x3)
kserve-ci-e2e-test | kubelet | isvc-secondary-60b405-predictor-685bf5b5fd-zsvqq | BackOff | Back-off restarting failed container storage-initializer in pod isvc-secondary-60b405-predictor-685bf5b5fd-zsvqq_kserve-ci-e2e-test(b8a2fddf-2b57-4512-bf66-0e002570b224)
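The BackOff event above is the first real failure signature in this run: the storage-initializer init container (which downloads the model before the predictor starts) keeps exiting, so kubelet backs off restarting it. A typical first step is to pull the failed container's logs and the pod's event history (a sketch assuming cluster access; pod and namespace names are taken from this event):

```shell
# Why did the storage-initializer exit? Check the previous attempt's logs.
kubectl logs isvc-secondary-60b405-predictor-685bf5b5fd-zsvqq \
  -n kserve-ci-e2e-test -c storage-initializer --previous

# Credential or storage-URI problems also surface in the pod's events.
kubectl describe pod isvc-secondary-60b405-predictor-685bf5b5fd-zsvqq \
  -n kserve-ci-e2e-test
```

Bad storage credentials or an unreachable model URI are the usual causes for a crash-looping storage-initializer.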
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-primary-60b405-predictor | FailedGetResourceMetric | failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready) (x3)
kserve-ci-e2e-test | v1beta1Controllers | isvc-secondary-60b405 | VirtualServiceCRDNotFound | Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true. (x14)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-secondary-60b405-predictor | FailedGetResourceMetric | failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-secondary-60b405-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
kserve-ci-e2e-test | replicaset-controller | isvc-init-fail-a0e2f1-predictor-5559d5df4d | SuccessfulCreate | Created pod: isvc-init-fail-a0e2f1-predictor-5559d5df4d-4h9sx
kserve-ci-e2e-test | v1beta1Controllers | isvc-init-fail-a0e2f1 | InternalError | fails to update InferenceService status: Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-init-fail-a0e2f1": the object has been modified; please apply your changes to the latest version and try again
kserve-ci-e2e-test | deployment-controller | isvc-init-fail-a0e2f1-predictor | ScalingReplicaSet | Scaled up replica set isvc-init-fail-a0e2f1-predictor-5559d5df4d from 0 to 1
kserve-ci-e2e-test | kubelet | isvc-primary-60b405-predictor-b697f8fbf-pgxzp | Killing | Stopping container kube-rbac-proxy
kserve-ci-e2e-test | v1beta1Controllers | isvc-init-fail-a0e2f1 | UpdateFailed | Failed to update status for InferenceService "isvc-init-fail-a0e2f1": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-init-fail-a0e2f1": the object has been modified; please apply your changes to the latest version and try again
kserve-ci-e2e-test | kubelet | isvc-primary-60b405-predictor-b697f8fbf-pgxzp | Killing | Stopping container kserve-container
kserve-ci-e2e-test | multus | isvc-init-fail-a0e2f1-predictor-5559d5df4d-4h9sx | AddedInterface | Add eth0 [10.132.0.37/23] from ovn-kubernetes
kserve-ci-e2e-test | kubelet | isvc-primary-60b405-predictor-b697f8fbf-pgxzp | Unhealthy | Readiness probe failed: Get "https://10.132.0.35:8643/healthz": dial tcp 10.132.0.35:8643: connect: connection refused
kserve-ci-e2e-test | kubelet | isvc-init-fail-a0e2f1-predictor-5559d5df4d-4h9sx | Created | Created container: storage-initializer (x2)
kserve-ci-e2e-test | kubelet | isvc-init-fail-a0e2f1-predictor-5559d5df4d-4h9sx | Started | Started container storage-initializer (x2)
kserve-ci-e2e-test | kubelet | isvc-init-fail-a0e2f1-predictor-5559d5df4d-4h9sx | Pulled | Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:bfc8f044ee2a84e90b7b9876040a412b44d7b121a59a5fe7b22a946b15b61519" already present on machine (x2)
kserve-ci-e2e-test | v1beta1Controllers | isvc-init-fail-a0e2f1 | VirtualServiceCRDNotFound | Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true. (x10)
kserve-ci-e2e-test | kubelet | isvc-init-fail-a0e2f1-predictor-5559d5df4d-4h9sx | Killing | Stopping container storage-initializer
kserve-ci-e2e-test | deployment-controller | isvc-predictive-sklearn-predictor | ScalingReplicaSet | Scaled up replica set isvc-predictive-sklearn-predictor-cd7c759c9 from 0 to 1
kserve-ci-e2e-test | v1beta1Controllers | isvc-predictive-sklearn | UpdateFailed | Failed to update status for InferenceService "isvc-predictive-sklearn": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-predictive-sklearn": the object has been modified; please apply your changes to the latest version and try again
kserve-ci-e2e-test | kubelet | isvc-predictive-sklearn-predictor-cd7c759c9-l8pf2 | Created | Created container: storage-initializer
kserve-ci-e2e-test | replicaset-controller | isvc-predictive-sklearn-predictor-cd7c759c9 | SuccessfulCreate | Created pod: isvc-predictive-sklearn-predictor-cd7c759c9-l8pf2
kserve-ci-e2e-test | multus | isvc-predictive-sklearn-predictor-cd7c759c9-l8pf2 | AddedInterface | Add eth0 [10.133.0.35/23] from ovn-kubernetes
kserve-ci-e2e-test | kubelet | isvc-predictive-sklearn-predictor-cd7c759c9-l8pf2 | Pulled | Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:bfc8f044ee2a84e90b7b9876040a412b44d7b121a59a5fe7b22a946b15b61519" already present on machine
kserve-ci-e2e-test | kubelet | isvc-predictive-sklearn-predictor-cd7c759c9-l8pf2 | Started | Started container storage-initializer
kserve-ci-e2e-test | kubelet | isvc-predictive-sklearn-predictor-cd7c759c9-l8pf2 | Pulling | Pulling image "kserve/predictiveserver:latest"
kserve-ci-e2e-test | kubelet | isvc-predictive-sklearn-predictor-cd7c759c9-l8pf2 | Created | Created container: kserve-container
kserve-ci-e2e-test | kubelet | isvc-predictive-sklearn-predictor-cd7c759c9-l8pf2 | Pulled | Successfully pulled image "kserve/predictiveserver:latest" in 23.292s (23.292s including waiting). Image size: 2326145393 bytes.
kserve-ci-e2e-test | kubelet | isvc-predictive-sklearn-predictor-cd7c759c9-l8pf2 | Created | Created container: kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-predictive-sklearn-predictor-cd7c759c9-l8pf2 | Started | Started container kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-predictive-sklearn-predictor-cd7c759c9-l8pf2 | Pulled | Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine
kserve-ci-e2e-test | kubelet | isvc-predictive-sklearn-predictor-cd7c759c9-l8pf2 | Started | Started container kserve-container
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-predictive-sklearn-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API (x3)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-predictive-sklearn-predictor | FailedGetResourceMetric | failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API (x3)
kserve-ci-e2e-test | kubelet | isvc-predictive-sklearn-predictor-cd7c759c9-l8pf2 | Unhealthy | Readiness probe failed: dial tcp 10.133.0.35:8080: connect: connection refused (x9)
kserve-ci-e2e-test | v1beta1Controllers | isvc-predictive-sklearn | VirtualServiceCRDNotFound | Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true. (x10)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-predictive-sklearn-predictor | FailedGetResourceMetric | failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready) (x4)
kserve-ci-e2e-test | v1beta1Controllers | isvc-predictive-sklearn | InferenceServiceReady | InferenceService [isvc-predictive-sklearn] is Ready
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-predictive-sklearn-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready) (x4)
kserve-ci-e2e-test | v1beta1Controllers | isvc-predictive-xgboost | UpdateFailed | Failed to update status for InferenceService "isvc-predictive-xgboost": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-predictive-xgboost": the object has been modified; please apply your changes to the latest version and try again (x2)
kserve-ci-e2e-test | deployment-controller | isvc-predictive-xgboost-predictor | ScalingReplicaSet | Scaled up replica set isvc-predictive-xgboost-predictor-7ff98fd74d from 0 to 1
kserve-ci-e2e-test | replicaset-controller | isvc-predictive-xgboost-predictor-7ff98fd74d | SuccessfulCreate | Created pod: isvc-predictive-xgboost-predictor-7ff98fd74d-s6wvk
kserve-ci-e2e-test | kubelet | isvc-predictive-sklearn-predictor-cd7c759c9-l8pf2 | Killing | Stopping container kserve-container
kserve-ci-e2e-test | kubelet | isvc-predictive-sklearn-predictor-cd7c759c9-l8pf2 | Killing | Stopping container kube-rbac-proxy
kserve-ci-e2e-test | v1beta1Controllers | isvc-predictive-xgboost | InternalError | fails to update InferenceService status: Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-predictive-xgboost": the object has been modified; please apply your changes to the latest version and try again
kserve-ci-e2e-test | kubelet | isvc-predictive-xgboost-predictor-7ff98fd74d-s6wvk | Started | Started container storage-initializer
kserve-ci-e2e-test | kubelet | isvc-predictive-xgboost-predictor-7ff98fd74d-s6wvk | Created | Created container: storage-initializer
kserve-ci-e2e-test | kubelet | isvc-predictive-xgboost-predictor-7ff98fd74d-s6wvk | Pulled | Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:bfc8f044ee2a84e90b7b9876040a412b44d7b121a59a5fe7b22a946b15b61519" already present on machine
kserve-ci-e2e-test | multus | isvc-predictive-xgboost-predictor-7ff98fd74d-s6wvk | AddedInterface | Add eth0 [10.133.0.36/23] from ovn-kubernetes
kserve-ci-e2e-test | kubelet | isvc-predictive-xgboost-predictor-7ff98fd74d-s6wvk | Started | Started container kserve-container
kserve-ci-e2e-test | kubelet | isvc-predictive-xgboost-predictor-7ff98fd74d-s6wvk | Started | Started container kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-predictive-xgboost-predictor-7ff98fd74d-s6wvk | Created | Created container: kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-predictive-xgboost-predictor-7ff98fd74d-s6wvk | Pulled | Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine
kserve-ci-e2e-test | kubelet | isvc-predictive-xgboost-predictor-7ff98fd74d-s6wvk | Created | Created container: kserve-container
kserve-ci-e2e-test | kubelet | isvc-predictive-xgboost-predictor-7ff98fd74d-s6wvk

Pulled

Container image "kserve/predictiveserver:latest" already present on machine

kserve-ci-e2e-test

kubelet

isvc-predictive-sklearn-predictor-cd7c759c9-l8pf2

Unhealthy

Readiness probe failed: dial tcp 10.133.0.35:8080: i/o timeout

kserve-ci-e2e-test

kubelet

isvc-predictive-sklearn-predictor-cd7c759c9-l8pf2

Unhealthy

Readiness probe failed: Get "https://10.133.0.35:8643/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
(x3)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-predictive-xgboost-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x3)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-predictive-xgboost-predictor

FailedGetResourceMetric

failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x12)

kserve-ci-e2e-test

v1beta1Controllers

isvc-predictive-xgboost

VirtualServiceCRDNotFound

Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.

kserve-ci-e2e-test

v1beta1Controllers

isvc-predictive-xgboost

InferenceServiceReady

InferenceService [isvc-predictive-xgboost] is Ready

kserve-ci-e2e-test

kubelet

isvc-predictive-xgboost-predictor-7ff98fd74d-s6wvk

Killing

Stopping container kube-rbac-proxy
(x3)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-predictive-xgboost-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
(x3)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-predictive-xgboost-predictor

FailedGetResourceMetric

failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)

kserve-ci-e2e-test | kubelet | isvc-predictive-xgboost-predictor-7ff98fd74d-s6wvk | Killing | Stopping container kserve-container
kserve-ci-e2e-test | deployment-controller | isvc-predictive-lightgbm-predictor | ScalingReplicaSet | Scaled up replica set isvc-predictive-lightgbm-predictor-75cb94f9f from 0 to 1
kserve-ci-e2e-test | replicaset-controller | isvc-predictive-lightgbm-predictor-75cb94f9f | SuccessfulCreate | Created pod: isvc-predictive-lightgbm-predictor-75cb94f9f-mscvm
kserve-ci-e2e-test | v1beta1Controllers | isvc-predictive-lightgbm | InternalError | fails to update InferenceService status: Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-predictive-lightgbm": the object has been modified; please apply your changes to the latest version and try again
kserve-ci-e2e-test | v1beta1Controllers | isvc-predictive-lightgbm | UpdateFailed | Failed to update status for InferenceService "isvc-predictive-lightgbm": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-predictive-lightgbm": the object has been modified; please apply your changes to the latest version and try again (x2)
kserve-ci-e2e-test | kubelet | isvc-predictive-lightgbm-predictor-75cb94f9f-mscvm | FailedMount | MountVolume.SetUp failed for volume "proxy-tls" : secret "isvc-predictive-lightgbm-predictor-serving-cert" not found
kserve-ci-e2e-test | kubelet | isvc-predictive-lightgbm-predictor-75cb94f9f-mscvm | Pulled | Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:bfc8f044ee2a84e90b7b9876040a412b44d7b121a59a5fe7b22a946b15b61519" already present on machine
kserve-ci-e2e-test | multus | isvc-predictive-lightgbm-predictor-75cb94f9f-mscvm | AddedInterface | Add eth0 [10.133.0.37/23] from ovn-kubernetes
kserve-ci-e2e-test | kubelet | isvc-predictive-lightgbm-predictor-75cb94f9f-mscvm | Created | Created container: storage-initializer
kserve-ci-e2e-test | kubelet | isvc-predictive-lightgbm-predictor-75cb94f9f-mscvm | Started | Started container storage-initializer
kserve-ci-e2e-test | kubelet | isvc-predictive-xgboost-predictor-7ff98fd74d-s6wvk | Unhealthy | Readiness probe failed: Get "https://10.133.0.36:8643/healthz": dial tcp 10.133.0.36:8643: connect: connection refused
kserve-ci-e2e-test | kubelet | isvc-predictive-xgboost-predictor-7ff98fd74d-s6wvk | Unhealthy | Readiness probe failed: dial tcp 10.133.0.36:8080: connect: connection refused (x10)
kserve-ci-e2e-test | kubelet | isvc-predictive-lightgbm-predictor-75cb94f9f-mscvm | Created | Created container: kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-predictive-lightgbm-predictor-75cb94f9f-mscvm | Pulled | Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine
kserve-ci-e2e-test | kubelet | isvc-predictive-lightgbm-predictor-75cb94f9f-mscvm | Started | Started container kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-predictive-lightgbm-predictor-75cb94f9f-mscvm | Pulled | Container image "kserve/predictiveserver:latest" already present on machine
kserve-ci-e2e-test | kubelet | isvc-predictive-lightgbm-predictor-75cb94f9f-mscvm | Created | Created container: kserve-container
kserve-ci-e2e-test | kubelet | isvc-predictive-lightgbm-predictor-75cb94f9f-mscvm | Started | Started container kserve-container
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-predictive-lightgbm-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API (x3)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-predictive-lightgbm-predictor | FailedGetResourceMetric | failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API (x3)
kserve-ci-e2e-test | v1beta1Controllers | isvc-predictive-lightgbm | InferenceServiceReady | InferenceService [isvc-predictive-lightgbm] is Ready
kserve-ci-e2e-test | v1beta1Controllers | isvc-predictive-lightgbm | VirtualServiceCRDNotFound | Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true. (x13)
kserve-ci-e2e-test | kubelet | isvc-predictive-lightgbm-predictor-75cb94f9f-mscvm | Killing | Stopping container kserve-container
kserve-ci-e2e-test | kubelet | isvc-predictive-sklearn-v2-predictor-b5d4f6b79-9vt2j | Created | Created container: storage-initializer
kserve-ci-e2e-test | v1beta1Controllers | isvc-predictive-sklearn-v2 | InternalError | fails to update InferenceService status: Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-predictive-sklearn-v2": the object has been modified; please apply your changes to the latest version and try again
kserve-ci-e2e-test | v1beta1Controllers | isvc-predictive-sklearn-v2 | UpdateFailed | Failed to update status for InferenceService "isvc-predictive-sklearn-v2": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-predictive-sklearn-v2": the object has been modified; please apply your changes to the latest version and try again (x2)
kserve-ci-e2e-test | deployment-controller | isvc-predictive-sklearn-v2-predictor | ScalingReplicaSet | Scaled up replica set isvc-predictive-sklearn-v2-predictor-b5d4f6b79 from 0 to 1
kserve-ci-e2e-test | replicaset-controller | isvc-predictive-sklearn-v2-predictor-b5d4f6b79 | SuccessfulCreate | Created pod: isvc-predictive-sklearn-v2-predictor-b5d4f6b79-9vt2j
kserve-ci-e2e-test | kubelet | isvc-predictive-sklearn-v2-predictor-b5d4f6b79-9vt2j | Started | Started container storage-initializer
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-predictive-lightgbm-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready) (x3)
kserve-ci-e2e-test | kubelet | isvc-predictive-sklearn-v2-predictor-b5d4f6b79-9vt2j | Pulled | Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:bfc8f044ee2a84e90b7b9876040a412b44d7b121a59a5fe7b22a946b15b61519" already present on machine
kserve-ci-e2e-test | multus | isvc-predictive-sklearn-v2-predictor-b5d4f6b79-9vt2j | AddedInterface | Add eth0 [10.133.0.38/23] from ovn-kubernetes
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-predictive-lightgbm-predictor | FailedGetResourceMetric | failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready) (x3)

kserve-ci-e2e-test | kubelet | isvc-predictive-lightgbm-predictor-75cb94f9f-mscvm | Killing | Stopping container kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-predictive-lightgbm-predictor-75cb94f9f-mscvm | Unhealthy | Readiness probe failed: Get "https://10.133.0.37:8643/healthz": dial tcp 10.133.0.37:8643: connect: connection refused
kserve-ci-e2e-test | kubelet | isvc-predictive-lightgbm-predictor-75cb94f9f-mscvm | Unhealthy | Readiness probe failed: dial tcp 10.133.0.37:8080: connect: connection refused (x10)
kserve-ci-e2e-test | kubelet | isvc-predictive-sklearn-v2-predictor-b5d4f6b79-9vt2j | Started | Started container kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-predictive-sklearn-v2-predictor-b5d4f6b79-9vt2j | Created | Created container: kserve-container
kserve-ci-e2e-test | kubelet | isvc-predictive-sklearn-v2-predictor-b5d4f6b79-9vt2j | Pulled | Container image "kserve/predictiveserver:latest" already present on machine
kserve-ci-e2e-test | kubelet | isvc-predictive-sklearn-v2-predictor-b5d4f6b79-9vt2j | Created | Created container: kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-predictive-sklearn-v2-predictor-b5d4f6b79-9vt2j | Started | Started container kserve-container
kserve-ci-e2e-test | kubelet | isvc-predictive-sklearn-v2-predictor-b5d4f6b79-9vt2j | Pulled | Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-predictive-sklearn-v2-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API (x4)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-predictive-sklearn-v2-predictor | FailedGetResourceMetric | failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API (x4)
kserve-ci-e2e-test | v1beta1Controllers | isvc-predictive-sklearn-v2 | InferenceServiceReady | InferenceService [isvc-predictive-sklearn-v2] is Ready
kserve-ci-e2e-test | v1beta1Controllers | isvc-predictive-sklearn-v2 | VirtualServiceCRDNotFound | Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true. (x11)
kserve-ci-e2e-test | deployment-controller | isvc-predictive-xgboost-v2-predictor | ScalingReplicaSet | Scaled up replica set isvc-predictive-xgboost-v2-predictor-6577c65fd8 from 0 to 1
kserve-ci-e2e-test | kubelet | isvc-predictive-xgboost-v2-predictor-6577c65fd8-mqtrl | FailedMount | MountVolume.SetUp failed for volume "proxy-tls" : secret "isvc-predictive-xgboost-v2-predictor-serving-cert" not found
kserve-ci-e2e-test | v1beta1Controllers | isvc-predictive-xgboost-v2 | UpdateFailed | Failed to update status for InferenceService "isvc-predictive-xgboost-v2": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-predictive-xgboost-v2": the object has been modified; please apply your changes to the latest version and try again (x2)
kserve-ci-e2e-test | v1beta1Controllers | isvc-predictive-xgboost-v2 | InternalError | fails to update InferenceService status: Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-predictive-xgboost-v2": the object has been modified; please apply your changes to the latest version and try again
kserve-ci-e2e-test | kubelet | isvc-predictive-sklearn-v2-predictor-b5d4f6b79-9vt2j | Killing | Stopping container kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-predictive-sklearn-v2-predictor-b5d4f6b79-9vt2j | Killing | Stopping container kserve-container
kserve-ci-e2e-test | replicaset-controller | isvc-predictive-xgboost-v2-predictor-6577c65fd8 | SuccessfulCreate | Created pod: isvc-predictive-xgboost-v2-predictor-6577c65fd8-mqtrl
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-predictive-sklearn-v2-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready) (x2)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-predictive-sklearn-v2-predictor | FailedGetResourceMetric | failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready) (x2)
kserve-ci-e2e-test | kubelet | isvc-predictive-xgboost-v2-predictor-6577c65fd8-mqtrl | Created | Created container: storage-initializer
kserve-ci-e2e-test | kubelet | isvc-predictive-xgboost-v2-predictor-6577c65fd8-mqtrl | Started | Started container storage-initializer
kserve-ci-e2e-test | multus | isvc-predictive-xgboost-v2-predictor-6577c65fd8-mqtrl | AddedInterface | Add eth0 [10.133.0.39/23] from ovn-kubernetes
kserve-ci-e2e-test | kubelet | isvc-predictive-xgboost-v2-predictor-6577c65fd8-mqtrl | Pulled | Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:bfc8f044ee2a84e90b7b9876040a412b44d7b121a59a5fe7b22a946b15b61519" already present on machine
kserve-ci-e2e-test | kubelet | isvc-predictive-sklearn-v2-predictor-b5d4f6b79-9vt2j | Unhealthy | Readiness probe failed: Get "http://10.133.0.38:8080/v2/models/isvc-predictive-sklearn-v2/ready": dial tcp 10.133.0.38:8080: connect: connection refused (x5)
kserve-ci-e2e-test | kubelet | isvc-predictive-sklearn-v2-predictor-b5d4f6b79-9vt2j | Unhealthy | Readiness probe failed: Get "https://10.133.0.38:8643/healthz": dial tcp 10.133.0.38:8643: connect: connection refused
kserve-ci-e2e-test | kubelet | isvc-predictive-xgboost-v2-predictor-6577c65fd8-mqtrl | Pulled | Container image "kserve/predictiveserver:latest" already present on machine
kserve-ci-e2e-test | kubelet | isvc-predictive-xgboost-v2-predictor-6577c65fd8-mqtrl | Started | Started container kserve-container

kserve-ci-e2e-test | kubelet | isvc-predictive-xgboost-v2-predictor-6577c65fd8-mqtrl | Pulled | Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine
kserve-ci-e2e-test | kubelet | isvc-predictive-xgboost-v2-predictor-6577c65fd8-mqtrl | Created | Created container: kserve-container
kserve-ci-e2e-test | kubelet | isvc-predictive-xgboost-v2-predictor-6577c65fd8-mqtrl | Started | Started container kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-predictive-xgboost-v2-predictor-6577c65fd8-mqtrl | Created | Created container: kube-rbac-proxy
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-predictive-xgboost-v2-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API (x3)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-predictive-xgboost-v2-predictor | FailedGetResourceMetric | failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API (x3)
kserve-ci-e2e-test | v1beta1Controllers | isvc-predictive-xgboost-v2 | VirtualServiceCRDNotFound | Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true. (x13)
kserve-ci-e2e-test | v1beta1Controllers | isvc-predictive-xgboost-v2 | InferenceServiceReady | InferenceService [isvc-predictive-xgboost-v2] is Ready
kserve-ci-e2e-test | kubelet | isvc-predictive-xgboost-v2-predictor-6577c65fd8-mqtrl | Killing | Stopping container kube-rbac-proxy
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-predictive-xgboost-v2-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready) (x3)
kserve-ci-e2e-test | deployment-controller | isvc-predictive-lightgbm-v2-predictor | ScalingReplicaSet | Scaled up replica set isvc-predictive-lightgbm-v2-predictor-865b4598f7 from 0 to 1
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-predictive-xgboost-v2-predictor | FailedGetResourceMetric | failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready) (x3)
kserve-ci-e2e-test | kubelet | isvc-predictive-xgboost-v2-predictor-6577c65fd8-mqtrl | Killing | Stopping container kserve-container
kserve-ci-e2e-test | v1beta1Controllers | isvc-predictive-lightgbm-v2 | UpdateFailed | Failed to update status for InferenceService "isvc-predictive-lightgbm-v2": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-predictive-lightgbm-v2": the object has been modified; please apply your changes to the latest version and try again
kserve-ci-e2e-test | replicaset-controller | isvc-predictive-lightgbm-v2-predictor-865b4598f7 | SuccessfulCreate | Created pod: isvc-predictive-lightgbm-v2-predictor-865b4598f7-2hfzm
kserve-ci-e2e-test | kubelet | isvc-predictive-lightgbm-v2-predictor-865b4598f7-2hfzm | Pulled | Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:bfc8f044ee2a84e90b7b9876040a412b44d7b121a59a5fe7b22a946b15b61519" already present on machine
kserve-ci-e2e-test | multus | isvc-predictive-lightgbm-v2-predictor-865b4598f7-2hfzm | AddedInterface | Add eth0 [10.133.0.40/23] from ovn-kubernetes
kserve-ci-e2e-test | kubelet | isvc-predictive-lightgbm-v2-predictor-865b4598f7-2hfzm | Started | Started container storage-initializer
kserve-ci-e2e-test | kubelet | isvc-predictive-lightgbm-v2-predictor-865b4598f7-2hfzm | Created | Created container: storage-initializer
kserve-ci-e2e-test | kubelet | isvc-predictive-xgboost-v2-predictor-6577c65fd8-mqtrl | Unhealthy | Readiness probe failed: Get "http://10.133.0.39:8080/v2/models/isvc-predictive-xgboost-v2/ready": dial tcp 10.133.0.39:8080: connect: connection refused (x5)
kserve-ci-e2e-test | kubelet | isvc-predictive-xgboost-v2-predictor-6577c65fd8-mqtrl | Unhealthy | Readiness probe failed: Get "https://10.133.0.39:8643/healthz": dial tcp 10.133.0.39:8643: connect: connection refused
kserve-ci-e2e-test | kubelet | isvc-predictive-lightgbm-v2-predictor-865b4598f7-2hfzm | Created | Created container: kserve-container
kserve-ci-e2e-test | kubelet | isvc-predictive-lightgbm-v2-predictor-865b4598f7-2hfzm | Pulled | Container image "kserve/predictiveserver:latest" already present on machine
kserve-ci-e2e-test | kubelet | isvc-predictive-lightgbm-v2-predictor-865b4598f7-2hfzm | Started | Started container kserve-container
kserve-ci-e2e-test | kubelet | isvc-predictive-lightgbm-v2-predictor-865b4598f7-2hfzm | Pulled | Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine
kserve-ci-e2e-test | kubelet | isvc-predictive-lightgbm-v2-predictor-865b4598f7-2hfzm | Created | Created container: kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-predictive-lightgbm-v2-predictor-865b4598f7-2hfzm | Started | Started container kube-rbac-proxy
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-predictive-lightgbm-v2-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API (x2)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-predictive-lightgbm-v2-predictor | FailedGetResourceMetric | failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API (x2)
kserve-ci-e2e-test | v1beta1Controllers | isvc-predictive-lightgbm-v2 | VirtualServiceCRDNotFound | Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true. (x12)
kserve-ci-e2e-test | v1beta1Controllers | isvc-predictive-lightgbm-v2 | InferenceServiceReady | InferenceService [isvc-predictive-lightgbm-v2] is Ready
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-predictive-lightgbm-v2-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready) (x4)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-predictive-lightgbm-v2-predictor | FailedGetResourceMetric | failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready) (x4)

kserve-ci-e2e-test | replicaset-controller | isvc-sklearn-scheduler-predictor-7c9c7d9946 | SuccessfulCreate | Created pod: isvc-sklearn-scheduler-predictor-7c9c7d9946-rch6s
kserve-ci-e2e-test | deployment-controller | isvc-sklearn-scheduler-predictor | ScalingReplicaSet | Scaled up replica set isvc-sklearn-scheduler-predictor-7c9c7d9946 from 0 to 1
kserve-ci-e2e-test | kubelet | isvc-predictive-lightgbm-v2-predictor-865b4598f7-2hfzm | Killing | Stopping container kube-rbac-proxy
kserve-ci-e2e-test | v1beta1Controllers | isvc-sklearn-scheduler | UpdateFailed | Failed to update status for InferenceService "isvc-sklearn-scheduler": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-sklearn-scheduler": the object has been modified; please apply your changes to the latest version and try again
kserve-ci-e2e-test | v1beta1Controllers | isvc-sklearn-scheduler | InternalError | fails to update InferenceService status: Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-sklearn-scheduler": the object has been modified; please apply your changes to the latest version and try again
kserve-ci-e2e-test | kubelet | isvc-predictive-lightgbm-v2-predictor-865b4598f7-2hfzm | Killing | Stopping container kserve-container
kserve-ci-e2e-test | v1beta1Controllers | isvc-sklearn-scheduler | VirtualServiceCRDNotFound | Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true. (x6)
kserve-ci-e2e-test | v1beta1Controllers | isvc-sklearn | UpdateFailed | Failed to update status for InferenceService "isvc-sklearn": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-sklearn": the object has been modified; please apply your changes to the latest version and try again
kserve-ci-e2e-test | deployment-controller | isvc-sklearn-predictor | ScalingReplicaSet | Scaled up replica set isvc-sklearn-predictor-759d546688 from 0 to 1
kserve-ci-e2e-test | replicaset-controller | isvc-sklearn-predictor-759d546688 | SuccessfulCreate | Created pod: isvc-sklearn-predictor-759d546688-cwt2z
kserve-ci-e2e-test | kubelet | isvc-predictive-lightgbm-v2-predictor-865b4598f7-2hfzm | Unhealthy | Readiness probe failed: Get "https://10.133.0.40:8643/healthz": dial tcp 10.133.0.40:8643: connect: connection refused
kserve-ci-e2e-test | kubelet | isvc-predictive-lightgbm-v2-predictor-865b4598f7-2hfzm | Unhealthy | Readiness probe failed: Get "http://10.133.0.40:8080/v2/models/isvc-predictive-lightgbm-v2/ready": dial tcp 10.133.0.40:8080: connect: connection refused (x5)
kserve-ci-e2e-test | kubelet | isvc-sklearn-predictor-759d546688-cwt2z | Started | Started container storage-initializer
kserve-ci-e2e-test | kubelet | isvc-sklearn-predictor-759d546688-cwt2z | Created | Created container: storage-initializer
kserve-ci-e2e-test | kubelet | isvc-sklearn-predictor-759d546688-cwt2z | Pulled | Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:bfc8f044ee2a84e90b7b9876040a412b44d7b121a59a5fe7b22a946b15b61519" already present on machine
kserve-ci-e2e-test | multus | isvc-sklearn-predictor-759d546688-cwt2z | AddedInterface | Add eth0 [10.132.0.38/23] from ovn-kubernetes
kserve-ci-e2e-test | kubelet | isvc-sklearn-predictor-759d546688-cwt2z | Started | Started container kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-sklearn-predictor-759d546688-cwt2z | Pulled | Container image "quay.io/opendatahub/sklearn-serving-runtime:odh-pr-1436" already present on machine
kserve-ci-e2e-test | kubelet | isvc-sklearn-predictor-759d546688-cwt2z | Created | Created container: kserve-container
kserve-ci-e2e-test | kubelet | isvc-sklearn-predictor-759d546688-cwt2z | Started | Started container kserve-container
kserve-ci-e2e-test | kubelet | isvc-sklearn-predictor-759d546688-cwt2z | Pulled | Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine
kserve-ci-e2e-test | kubelet | isvc-sklearn-predictor-759d546688-cwt2z | Created | Created container: kube-rbac-proxy
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-sklearn-predictor | FailedGetResourceMetric | failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-sklearn-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
kserve-ci-e2e-test | v1beta1Controllers | isvc-sklearn | InferenceServiceReady | InferenceService [isvc-sklearn] is Ready
kserve-ci-e2e-test | v1beta1Controllers | isvc-sklearn | VirtualServiceCRDNotFound | Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true. (x11)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-sklearn-predictor | FailedGetResourceMetric | failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready) (x4)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-sklearn-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready) (x4)
kserve-ci-e2e-test | kubelet | sklearn-v2-mlserver-predictor-65d8664766-rfwrh | Created | Created container: storage-initializer
kserve-ci-e2e-test | multus | sklearn-v2-mlserver-predictor-65d8664766-rfwrh | AddedInterface | Add eth0 [10.133.0.41/23] from ovn-kubernetes
kserve-ci-e2e-test | kubelet | isvc-sklearn-predictor-759d546688-cwt2z | Killing | Stopping container kserve-container
kserve-ci-e2e-test | kubelet | isvc-sklearn-predictor-759d546688-cwt2z | Killing | Stopping container kube-rbac-proxy

kserve-ci-e2e-test | v1beta1Controllers | sklearn-v2-mlserver | UpdateFailed | Failed to update status for InferenceService "sklearn-v2-mlserver": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "sklearn-v2-mlserver": the object has been modified; please apply your changes to the latest version and try again
kserve-ci-e2e-test | kubelet | sklearn-v2-mlserver-predictor-65d8664766-rfwrh | Pulled | Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:bfc8f044ee2a84e90b7b9876040a412b44d7b121a59a5fe7b22a946b15b61519" already present on machine
kserve-ci-e2e-test | deployment-controller | sklearn-v2-mlserver-predictor | ScalingReplicaSet | Scaled up replica set sklearn-v2-mlserver-predictor-65d8664766 from 0 to 1
kserve-ci-e2e-test | kubelet | sklearn-v2-mlserver-predictor-65d8664766-rfwrh | Started | Started container storage-initializer
kserve-ci-e2e-test | replicaset-controller | sklearn-v2-mlserver-predictor-65d8664766 | SuccessfulCreate | Created pod: sklearn-v2-mlserver-predictor-65d8664766-rfwrh
kserve-ci-e2e-test | kubelet | isvc-sklearn-predictor-759d546688-cwt2z | Unhealthy | Readiness probe failed: Get "https://10.132.0.38:8643/healthz": dial tcp 10.132.0.38:8643: connect: connection refused
kserve-ci-e2e-test | kubelet | isvc-sklearn-predictor-759d546688-cwt2z | Unhealthy | Readiness probe failed: dial tcp 10.132.0.38:8080: connect: connection refused (x9)
kserve-ci-e2e-test | kubelet | sklearn-v2-mlserver-predictor-65d8664766-rfwrh | Pulled | Container image "docker.io/seldonio/mlserver:1.7.1" already present on machine
kserve-ci-e2e-test | kubelet | sklearn-v2-mlserver-predictor-65d8664766-rfwrh | Created | Created container: kserve-container
kserve-ci-e2e-test | kubelet | sklearn-v2-mlserver-predictor-65d8664766-rfwrh | Started | Started container kserve-container
kserve-ci-e2e-test | kubelet | sklearn-v2-mlserver-predictor-65d8664766-rfwrh | Pulled | Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine
kserve-ci-e2e-test | kubelet | sklearn-v2-mlserver-predictor-65d8664766-rfwrh | Created | Created container: kube-rbac-proxy
kserve-ci-e2e-test | kubelet | sklearn-v2-mlserver-predictor-65d8664766-rfwrh | Started | Started container kube-rbac-proxy
kserve-ci-e2e-test | horizontal-pod-autoscaler | sklearn-v2-mlserver-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
kserve-ci-e2e-test | horizontal-pod-autoscaler | sklearn-v2-mlserver-predictor | FailedGetResourceMetric | failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
kserve-ci-e2e-test | kubelet | sklearn-v2-mlserver-predictor-65d8664766-rfwrh | Unhealthy | Readiness probe failed: HTTP probe failed with statuscode: 400
kserve-ci-e2e-test | v1beta1Controllers | sklearn-v2-mlserver | InferenceServiceReady | InferenceService [sklearn-v2-mlserver] is Ready
kserve-ci-e2e-test | v1beta1Controllers | sklearn-v2-mlserver | VirtualServiceCRDNotFound | Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true. (x12)
kserve-ci-e2e-test | v1beta1Controllers | isvc-sklearn-runtime | InternalError | fails to update InferenceService status: Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-sklearn-runtime": the object has been modified; please apply your changes to the latest version and try again
kserve-ci-e2e-test | kubelet | isvc-sklearn-runtime-predictor-65764ccccd-rl7nc | Started | Started container storage-initializer
kserve-ci-e2e-test | horizontal-pod-autoscaler | sklearn-v2-mlserver-predictor | FailedGetResourceMetric | failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready) (x3)
kserve-ci-e2e-test | multus | isvc-sklearn-runtime-predictor-65764ccccd-rl7nc | AddedInterface | Add eth0 [10.132.0.39/23] from ovn-kubernetes
kserve-ci-e2e-test | kubelet | isvc-sklearn-runtime-predictor-65764ccccd-rl7nc | Pulled | Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:bfc8f044ee2a84e90b7b9876040a412b44d7b121a59a5fe7b22a946b15b61519" already present on machine
kserve-ci-e2e-test | horizontal-pod-autoscaler | sklearn-v2-mlserver-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready) (x3)
kserve-ci-e2e-test | kubelet | sklearn-v2-mlserver-predictor-65d8664766-rfwrh | Killing | Stopping container kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-sklearn-runtime-predictor-65764ccccd-rl7nc

Created

Created container: storage-initializer

kserve-ci-e2e-test

kubelet

sklearn-v2-mlserver-predictor-65d8664766-rfwrh

Killing

Stopping container kserve-container

kserve-ci-e2e-test

replicaset-controller

isvc-sklearn-runtime-predictor-65764ccccd

SuccessfulCreate

Created pod: isvc-sklearn-runtime-predictor-65764ccccd-rl7nc
(x2)

kserve-ci-e2e-test

v1beta1Controllers

isvc-sklearn-runtime

UpdateFailed

Failed to update status for InferenceService "isvc-sklearn-runtime": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-sklearn-runtime": the object has been modified; please apply your changes to the latest version and try again

kserve-ci-e2e-test

deployment-controller

isvc-sklearn-runtime-predictor

ScalingReplicaSet

Scaled up replica set isvc-sklearn-runtime-predictor-65764ccccd from 0 to 1

kserve-ci-e2e-test

kubelet

sklearn-v2-mlserver-predictor-65d8664766-rfwrh

Unhealthy

Readiness probe failed: Get "https://10.133.0.41:8643/healthz": dial tcp 10.133.0.41:8643: connect: connection refused

kserve-ci-e2e-test

kubelet

isvc-sklearn-runtime-predictor-65764ccccd-rl7nc

Started

Started container kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-sklearn-runtime-predictor-65764ccccd-rl7nc

Started

Started container kserve-container

kserve-ci-e2e-test

kubelet

isvc-sklearn-runtime-predictor-65764ccccd-rl7nc

Pulled

Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine

kserve-ci-e2e-test

kubelet

isvc-sklearn-runtime-predictor-65764ccccd-rl7nc

Created

Created container: kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-sklearn-runtime-predictor-65764ccccd-rl7nc

Created

Created container: kserve-container

kserve-ci-e2e-test

kubelet

isvc-sklearn-runtime-predictor-65764ccccd-rl7nc

Pulled

Container image "quay.io/opendatahub/sklearn-serving-runtime:odh-pr-1436" already present on machine
(x3)

kserve-ci-e2e-test

kubelet

isvc-sklearn-runtime-predictor-65764ccccd-rl7nc

Unhealthy

Readiness probe failed: dial tcp 10.132.0.39:8080: connect: connection refused
(x12)

kserve-ci-e2e-test

v1beta1Controllers

isvc-sklearn-runtime

VirtualServiceCRDNotFound

Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.

kserve-ci-e2e-test

v1beta1Controllers

isvc-sklearn-runtime

InferenceServiceReady

InferenceService [isvc-sklearn-runtime] is Ready
(x2)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-sklearn-runtime-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x2)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-sklearn-runtime-predictor

FailedGetResourceMetric

failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API

kserve-ci-e2e-test

multus

isvc-sklearn-v2-runtime-predictor-6d84c876f4-hnnbs

AddedInterface

Add eth0 [10.133.0.42/23] from ovn-kubernetes

kserve-ci-e2e-test

deployment-controller

isvc-sklearn-v2-runtime-predictor

ScalingReplicaSet

Scaled up replica set isvc-sklearn-v2-runtime-predictor-6d84c876f4 from 0 to 1

kserve-ci-e2e-test

kubelet

isvc-sklearn-v2-runtime-predictor-6d84c876f4-hnnbs

Started

Started container storage-initializer

kserve-ci-e2e-test

replicaset-controller

isvc-sklearn-v2-runtime-predictor-6d84c876f4

SuccessfulCreate

Created pod: isvc-sklearn-v2-runtime-predictor-6d84c876f4-hnnbs

kserve-ci-e2e-test

v1beta1Controllers

isvc-sklearn-v2-runtime

UpdateFailed

Failed to update status for InferenceService "isvc-sklearn-v2-runtime": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-sklearn-v2-runtime": the object has been modified; please apply your changes to the latest version and try again

kserve-ci-e2e-test

kubelet

isvc-sklearn-runtime-predictor-65764ccccd-rl7nc

Killing

Stopping container kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-sklearn-runtime-predictor-65764ccccd-rl7nc

Killing

Stopping container kserve-container

kserve-ci-e2e-test

kubelet

isvc-sklearn-v2-runtime-predictor-6d84c876f4-hnnbs

Pulled

Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:bfc8f044ee2a84e90b7b9876040a412b44d7b121a59a5fe7b22a946b15b61519" already present on machine

kserve-ci-e2e-test

kubelet

isvc-sklearn-v2-runtime-predictor-6d84c876f4-hnnbs

Created

Created container: storage-initializer

kserve-ci-e2e-test

kubelet

isvc-sklearn-v2-runtime-predictor-6d84c876f4-hnnbs

Created

Created container: kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-sklearn-v2-runtime-predictor-6d84c876f4-hnnbs

Started

Started container kserve-container

kserve-ci-e2e-test

kubelet

isvc-sklearn-v2-runtime-predictor-6d84c876f4-hnnbs

Created

Created container: kserve-container

kserve-ci-e2e-test

kubelet

isvc-sklearn-v2-runtime-predictor-6d84c876f4-hnnbs

Pulled

Container image "docker.io/seldonio/mlserver:1.7.1" already present on machine

kserve-ci-e2e-test

kubelet

isvc-sklearn-v2-runtime-predictor-6d84c876f4-hnnbs

Pulled

Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine

kserve-ci-e2e-test

kubelet

isvc-sklearn-v2-runtime-predictor-6d84c876f4-hnnbs

Started

Started container kube-rbac-proxy
(x2)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-sklearn-v2-runtime-predictor

FailedGetResourceMetric

failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x2)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-sklearn-v2-runtime-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API

kserve-ci-e2e-test

kubelet

isvc-sklearn-v2-runtime-predictor-6d84c876f4-hnnbs

Unhealthy

Readiness probe failed: HTTP probe failed with statuscode: 400

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-sklearn-v2-runtime-predictor

FailedGetResourceMetric

failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-sklearn-v2-runtime-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)

kserve-ci-e2e-test

v1beta1Controllers

isvc-sklearn-v2-runtime

InferenceServiceReady

InferenceService [isvc-sklearn-v2-runtime] is Ready
(x12)

kserve-ci-e2e-test

v1beta1Controllers

isvc-sklearn-v2-runtime

VirtualServiceCRDNotFound

Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.

kserve-ci-e2e-test

deployment-controller

isvc-sklearn-v2-predictor

ScalingReplicaSet

Scaled up replica set isvc-sklearn-v2-predictor-6d65c564d6 from 0 to 1

kserve-ci-e2e-test

replicaset-controller

isvc-sklearn-v2-predictor-6d65c564d6

SuccessfulCreate

Created pod: isvc-sklearn-v2-predictor-6d65c564d6-29jll

kserve-ci-e2e-test

v1beta1Controllers

isvc-sklearn-v2

UpdateFailed

Failed to update status for InferenceService "isvc-sklearn-v2": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-sklearn-v2": the object has been modified; please apply your changes to the latest version and try again

kserve-ci-e2e-test

kubelet

isvc-sklearn-v2-runtime-predictor-6d84c876f4-hnnbs

Killing

Stopping container kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-sklearn-v2-runtime-predictor-6d84c876f4-hnnbs

Killing

Stopping container kserve-container

kserve-ci-e2e-test

kubelet

isvc-sklearn-v2-predictor-6d65c564d6-29jll

FailedMount

MountVolume.SetUp failed for volume "proxy-tls" : secret "isvc-sklearn-v2-predictor-serving-cert" not found

kserve-ci-e2e-test

kubelet

isvc-sklearn-v2-predictor-6d65c564d6-29jll

Started

Started container storage-initializer

kserve-ci-e2e-test

kubelet

isvc-sklearn-v2-predictor-6d65c564d6-29jll

Pulled

Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:bfc8f044ee2a84e90b7b9876040a412b44d7b121a59a5fe7b22a946b15b61519" already present on machine

kserve-ci-e2e-test

multus

isvc-sklearn-v2-predictor-6d65c564d6-29jll

AddedInterface

Add eth0 [10.132.0.40/23] from ovn-kubernetes

kserve-ci-e2e-test

kubelet

isvc-sklearn-v2-predictor-6d65c564d6-29jll

Created

Created container: storage-initializer

kserve-ci-e2e-test

kubelet

isvc-sklearn-v2-runtime-predictor-6d84c876f4-hnnbs

Unhealthy

Readiness probe failed: Get "http://10.133.0.42:8080/v2/models/isvc-sklearn-v2-runtime/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers)

kserve-ci-e2e-test

kubelet

isvc-sklearn-v2-predictor-6d65c564d6-29jll

Started

Started container kserve-container

kserve-ci-e2e-test

kubelet

isvc-sklearn-v2-predictor-6d65c564d6-29jll

Created

Created container: kserve-container

kserve-ci-e2e-test

kubelet

isvc-sklearn-v2-predictor-6d65c564d6-29jll

Pulled

Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine

kserve-ci-e2e-test

kubelet

isvc-sklearn-v2-predictor-6d65c564d6-29jll

Created

Created container: kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-sklearn-v2-predictor-6d65c564d6-29jll

Pulled

Container image "quay.io/opendatahub/sklearn-serving-runtime:odh-pr-1436" already present on machine

kserve-ci-e2e-test

kubelet

isvc-sklearn-v2-predictor-6d65c564d6-29jll

Started

Started container kube-rbac-proxy
(x2)

kserve-ci-e2e-test

kubelet

isvc-sklearn-v2-runtime-predictor-6d84c876f4-hnnbs

Unhealthy

Readiness probe failed: Get "https://10.133.0.42:8643/healthz": dial tcp 10.133.0.42:8643: connect: connection refused
(x2)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-sklearn-v2-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x2)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-sklearn-v2-predictor

FailedGetResourceMetric

failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x12)

kserve-ci-e2e-test

v1beta1Controllers

isvc-sklearn-v2

VirtualServiceCRDNotFound

Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.

kserve-ci-e2e-test

v1beta1Controllers

isvc-sklearn-v2

InferenceServiceReady

InferenceService [isvc-sklearn-v2] is Ready
(x3)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-sklearn-v2-predictor

FailedGetResourceMetric

failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
(x3)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-sklearn-v2-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)

kserve-ci-e2e-test

kubelet

isvc-sklearn-v2-mixed-predictor-6b4bf45459-t6gnt

FailedMount

MountVolume.SetUp failed for volume "proxy-tls" : secret "isvc-sklearn-v2-mixed-predictor-serving-cert" not found

kserve-ci-e2e-test

deployment-controller

isvc-sklearn-v2-mixed-predictor

ScalingReplicaSet

Scaled up replica set isvc-sklearn-v2-mixed-predictor-6b4bf45459 from 0 to 1

kserve-ci-e2e-test

replicaset-controller

isvc-sklearn-v2-mixed-predictor-6b4bf45459

SuccessfulCreate

Created pod: isvc-sklearn-v2-mixed-predictor-6b4bf45459-t6gnt

kserve-ci-e2e-test

kubelet

isvc-sklearn-v2-predictor-6d65c564d6-29jll

Killing

Stopping container kserve-container

kserve-ci-e2e-test

kubelet

isvc-sklearn-v2-predictor-6d65c564d6-29jll

Killing

Stopping container kube-rbac-proxy

kserve-ci-e2e-test

v1beta1Controllers

isvc-sklearn-v2-mixed

UpdateFailed

Failed to update status for InferenceService "isvc-sklearn-v2-mixed": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-sklearn-v2-mixed": the object has been modified; please apply your changes to the latest version and try again

kserve-ci-e2e-test

kubelet

isvc-sklearn-v2-mixed-predictor-6b4bf45459-t6gnt

Pulled

Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:bfc8f044ee2a84e90b7b9876040a412b44d7b121a59a5fe7b22a946b15b61519" already present on machine

kserve-ci-e2e-test

multus

isvc-sklearn-v2-mixed-predictor-6b4bf45459-t6gnt

AddedInterface

Add eth0 [10.132.0.41/23] from ovn-kubernetes

kserve-ci-e2e-test

kubelet

isvc-sklearn-v2-mixed-predictor-6b4bf45459-t6gnt

Created

Created container: storage-initializer

kserve-ci-e2e-test

kubelet

isvc-sklearn-v2-mixed-predictor-6b4bf45459-t6gnt

Started

Started container storage-initializer

kserve-ci-e2e-test

kubelet

isvc-sklearn-v2-predictor-6d65c564d6-29jll

Unhealthy

Readiness probe failed: Get "https://10.132.0.40:8643/healthz": dial tcp 10.132.0.40:8643: connect: connection refused
(x9)

kserve-ci-e2e-test

kubelet

isvc-sklearn-v2-predictor-6d65c564d6-29jll

Unhealthy

Readiness probe failed: dial tcp 10.132.0.40:8080: connect: connection refused

kserve-ci-e2e-test

kubelet

isvc-sklearn-v2-mixed-predictor-6b4bf45459-t6gnt

Created

Created container: kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-sklearn-v2-mixed-predictor-6b4bf45459-t6gnt

Started

Started container kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-sklearn-v2-mixed-predictor-6b4bf45459-t6gnt

Pulled

Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine

kserve-ci-e2e-test

kubelet

isvc-sklearn-v2-mixed-predictor-6b4bf45459-t6gnt

Started

Started container kserve-container

kserve-ci-e2e-test

kubelet

isvc-sklearn-v2-mixed-predictor-6b4bf45459-t6gnt

Created

Created container: kserve-container

kserve-ci-e2e-test

kubelet

isvc-sklearn-v2-mixed-predictor-6b4bf45459-t6gnt

Pulled

Container image "quay.io/opendatahub/sklearn-serving-runtime:odh-pr-1436" already present on machine
(x2)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-sklearn-v2-mixed-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x2)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-sklearn-v2-mixed-predictor

FailedGetResourceMetric

failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x12)

kserve-ci-e2e-test

v1beta1Controllers

isvc-sklearn-v2-mixed

VirtualServiceCRDNotFound

Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.

kserve-ci-e2e-test

v1beta1Controllers

isvc-sklearn-v2-mixed

InferenceServiceReady

InferenceService [isvc-sklearn-v2-mixed] is Ready
(x3)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-sklearn-v2-mixed-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
(x3)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-sklearn-v2-mixed-predictor

FailedGetResourceMetric

failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)

kserve-ci-e2e-test

v1beta1Controllers

isvc-tensorflow

UpdateFailed

Failed to update status for InferenceService "isvc-tensorflow": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-tensorflow": the object has been modified; please apply your changes to the latest version and try again

kserve-ci-e2e-test

deployment-controller

isvc-tensorflow-predictor

ScalingReplicaSet

Scaled up replica set isvc-tensorflow-predictor-6756f669d7 from 0 to 1

kserve-ci-e2e-test

kubelet

isvc-sklearn-v2-mixed-predictor-6b4bf45459-t6gnt

Killing

Stopping container kserve-container

kserve-ci-e2e-test

kubelet

isvc-sklearn-v2-mixed-predictor-6b4bf45459-t6gnt

Killing

Stopping container kube-rbac-proxy
(x2)

kserve-ci-e2e-test

kubelet

isvc-tensorflow-predictor-6756f669d7-b4bhx

FailedMount

MountVolume.SetUp failed for volume "proxy-tls" : secret "isvc-tensorflow-predictor-serving-cert" not found

kserve-ci-e2e-test

replicaset-controller

isvc-tensorflow-predictor-6756f669d7

SuccessfulCreate

Created pod: isvc-tensorflow-predictor-6756f669d7-b4bhx

kserve-ci-e2e-test

kubelet

isvc-tensorflow-predictor-6756f669d7-b4bhx

Pulled

Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:bfc8f044ee2a84e90b7b9876040a412b44d7b121a59a5fe7b22a946b15b61519" already present on machine

kserve-ci-e2e-test

multus

isvc-tensorflow-predictor-6756f669d7-b4bhx

AddedInterface

Add eth0 [10.133.0.43/23] from ovn-kubernetes
(x9)

kserve-ci-e2e-test

kubelet

isvc-sklearn-v2-mixed-predictor-6b4bf45459-t6gnt

Unhealthy

Readiness probe failed: dial tcp 10.132.0.41:8080: connect: connection refused

kserve-ci-e2e-test

kubelet

isvc-tensorflow-predictor-6756f669d7-b4bhx

Created

Created container: storage-initializer

kserve-ci-e2e-test

kubelet

isvc-tensorflow-predictor-6756f669d7-b4bhx

Started

Started container storage-initializer

kserve-ci-e2e-test

kubelet

isvc-sklearn-v2-mixed-predictor-6b4bf45459-t6gnt

Unhealthy

Readiness probe failed: Get "https://10.132.0.41:8643/healthz": dial tcp 10.132.0.41:8643: connect: connection refused

kserve-ci-e2e-test

kubelet

isvc-tensorflow-predictor-6756f669d7-b4bhx

Pulling

Pulling image "tensorflow/serving:2.6.2"

kserve-ci-e2e-test

kubelet

isvc-tensorflow-predictor-6756f669d7-b4bhx

Pulled

Successfully pulled image "tensorflow/serving:2.6.2" in 3.915s (3.915s including waiting). Image size: 425873876 bytes.

kserve-ci-e2e-test

kubelet

isvc-tensorflow-predictor-6756f669d7-b4bhx

Created

Created container: kserve-container

kserve-ci-e2e-test

kubelet

isvc-tensorflow-predictor-6756f669d7-b4bhx

Pulled

Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine

kserve-ci-e2e-test

kubelet

isvc-tensorflow-predictor-6756f669d7-b4bhx

Started

Started container kserve-container

kserve-ci-e2e-test

kubelet

isvc-tensorflow-predictor-6756f669d7-b4bhx

Created

Created container: kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-tensorflow-predictor-6756f669d7-b4bhx

Started

Started container kube-rbac-proxy
(x3)

kserve-ci-e2e-test

kubelet

isvc-tensorflow-predictor-6756f669d7-b4bhx

Unhealthy

Readiness probe failed: dial tcp 10.133.0.43:8080: connect: connection refused
(x12)

kserve-ci-e2e-test

v1beta1Controllers

isvc-tensorflow

VirtualServiceCRDNotFound

Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.

kserve-ci-e2e-test

v1beta1Controllers

isvc-tensorflow

InferenceServiceReady

InferenceService [isvc-tensorflow] is Ready
(x3)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-tensorflow-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x3)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-tensorflow-predictor

FailedGetResourceMetric

failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API

kserve-ci-e2e-test

kubelet

isvc-tensorflow-runtime-predictor-8699d78cf-r8d87

FailedMount

MountVolume.SetUp failed for volume "proxy-tls" : secret "isvc-tensorflow-runtime-predictor-serving-cert" not found

kserve-ci-e2e-test

v1beta1Controllers

isvc-tensorflow-runtime

UpdateFailed

Failed to update status for InferenceService "isvc-tensorflow-runtime": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-tensorflow-runtime": the object has been modified; please apply your changes to the latest version and try again

kserve-ci-e2e-test

deployment-controller

isvc-tensorflow-runtime-predictor

ScalingReplicaSet

Scaled up replica set isvc-tensorflow-runtime-predictor-8699d78cf from 0 to 1

kserve-ci-e2e-test

kubelet

isvc-tensorflow-predictor-6756f669d7-b4bhx

Killing

Stopping container kserve-container

kserve-ci-e2e-test

kubelet

isvc-tensorflow-predictor-6756f669d7-b4bhx

Killing

Stopping container kube-rbac-proxy

kserve-ci-e2e-test

replicaset-controller

isvc-tensorflow-runtime-predictor-8699d78cf

SuccessfulCreate

Created pod: isvc-tensorflow-runtime-predictor-8699d78cf-r8d87

kserve-ci-e2e-test

kubelet

isvc-tensorflow-runtime-predictor-8699d78cf-r8d87

Pulled

Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:bfc8f044ee2a84e90b7b9876040a412b44d7b121a59a5fe7b22a946b15b61519" already present on machine

kserve-ci-e2e-test

kubelet

isvc-tensorflow-runtime-predictor-8699d78cf-r8d87

Started

Started container storage-initializer

kserve-ci-e2e-test

multus

isvc-tensorflow-runtime-predictor-8699d78cf-r8d87

AddedInterface

Add eth0 [10.132.0.42/23] from ovn-kubernetes

kserve-ci-e2e-test

kubelet

isvc-tensorflow-runtime-predictor-8699d78cf-r8d87

Created

Created container: storage-initializer

kserve-ci-e2e-test

kubelet

isvc-tensorflow-runtime-predictor-8699d78cf-r8d87

Pulling

Pulling image "tensorflow/serving:2.6.2"

kserve-ci-e2e-test

kubelet

isvc-tensorflow-runtime-predictor-8699d78cf-r8d87

Created

Created container: kserve-container

kserve-ci-e2e-test

kubelet

isvc-tensorflow-runtime-predictor-8699d78cf-r8d87

Pulled

Successfully pulled image "tensorflow/serving:2.6.2" in 4.374s (4.374s including waiting). Image size: 425873876 bytes.

kserve-ci-e2e-test

kubelet

isvc-tensorflow-runtime-predictor-8699d78cf-r8d87

Started

Started container kserve-container

kserve-ci-e2e-test

kubelet

isvc-tensorflow-runtime-predictor-8699d78cf-r8d87

Pulled

Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine

kserve-ci-e2e-test

kubelet

isvc-tensorflow-runtime-predictor-8699d78cf-r8d87

Created

Created container: kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-tensorflow-runtime-predictor-8699d78cf-r8d87

Started

Started container kube-rbac-proxy
(x3)

kserve-ci-e2e-test

kubelet

isvc-tensorflow-runtime-predictor-8699d78cf-r8d87

Unhealthy

Readiness probe failed: dial tcp 10.132.0.42:8080: connect: connection refused

kserve-ci-e2e-test

v1beta1Controllers

isvc-tensorflow-runtime

InferenceServiceReady

InferenceService [isvc-tensorflow-runtime] is Ready
(x12)

kserve-ci-e2e-test

v1beta1Controllers

isvc-tensorflow-runtime

VirtualServiceCRDNotFound

Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.
(x6)

kserve-ci-e2e-test

kubelet

isvc-tensorflow-predictor-6756f669d7-b4bhx

Unhealthy

Readiness probe failed: Get "https://10.133.0.43:8643/healthz": dial tcp 10.133.0.43:8643: connect: connection refused
(x2)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-tensorflow-runtime-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x2)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-tensorflow-runtime-predictor

FailedGetResourceMetric

failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API

kserve-ci-e2e-test

multus

isvc-triton-predictor-84bb65d94b-9qb2p

AddedInterface

Add eth0 [10.133.0.44/23] from ovn-kubernetes

kserve-ci-e2e-test

kubelet

isvc-tensorflow-runtime-predictor-8699d78cf-r8d87

Killing

Stopping container kserve-container

kserve-ci-e2e-test

kubelet

isvc-tensorflow-runtime-predictor-8699d78cf-r8d87

Killing

Stopping container kube-rbac-proxy

kserve-ci-e2e-test

replicaset-controller

isvc-triton-predictor-84bb65d94b

SuccessfulCreate

Created pod: isvc-triton-predictor-84bb65d94b-9qb2p

kserve-ci-e2e-test

deployment-controller

isvc-triton-predictor

ScalingReplicaSet

Scaled up replica set isvc-triton-predictor-84bb65d94b from 0 to 1

kserve-ci-e2e-test

kubelet

isvc-triton-predictor-84bb65d94b-9qb2p

Started

Started container storage-initializer

kserve-ci-e2e-test

kubelet

isvc-triton-predictor-84bb65d94b-9qb2p

Created

Created container: storage-initializer

kserve-ci-e2e-test

v1beta1Controllers

isvc-triton

UpdateFailed

Failed to update status for InferenceService "isvc-triton": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-triton": the object has been modified; please apply your changes to the latest version and try again

kserve-ci-e2e-test

kubelet

isvc-triton-predictor-84bb65d94b-9qb2p

Pulled

Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:bfc8f044ee2a84e90b7b9876040a412b44d7b121a59a5fe7b22a946b15b61519" already present on machine

kserve-ci-e2e-test

kubelet

isvc-triton-predictor-84bb65d94b-9qb2p

Pulling

Pulling image "nvcr.io/nvidia/tritonserver:23.05-py3"
(x6)

kserve-ci-e2e-test

kubelet

isvc-tensorflow-runtime-predictor-8699d78cf-r8d87

Unhealthy

Readiness probe failed: Get "https://10.132.0.42:8643/healthz": dial tcp 10.132.0.42:8643: connect: connection refused
(x8)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-triton-predictor

FailedGetResourceMetric

failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x8)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-triton-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API

kserve-ci-e2e-test

kubelet

isvc-triton-predictor-84bb65d94b-9qb2p

Pulled

Successfully pulled image "nvcr.io/nvidia/tritonserver:23.05-py3" in 1m55.12s (1m55.12s including waiting). Image size: 12907074623 bytes.

kserve-ci-e2e-test

kubelet

isvc-triton-predictor-84bb65d94b-9qb2p

Created

Created container: kserve-container

kserve-ci-e2e-test

kubelet

isvc-triton-predictor-84bb65d94b-9qb2p

Started

Started container kserve-container

kserve-ci-e2e-test

kubelet

isvc-triton-predictor-84bb65d94b-9qb2p

Pulled

Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine

kserve-ci-e2e-test

kubelet

isvc-triton-predictor-84bb65d94b-9qb2p

Created

Created container: kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-triton-predictor-84bb65d94b-9qb2p

Started

Started container kube-rbac-proxy
(x2)

kserve-ci-e2e-test

kubelet

isvc-triton-predictor-84bb65d94b-9qb2p

Unhealthy

Readiness probe failed: dial tcp 10.133.0.44:8080: connect: connection refused

kserve-ci-e2e-test

v1beta1Controllers

isvc-triton

InferenceServiceReady

InferenceService [isvc-triton] is Ready
(x13)

kserve-ci-e2e-test

v1beta1Controllers

isvc-triton

VirtualServiceCRDNotFound

Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.

kserve-ci-e2e-test

kubelet

isvc-xgboost-predictor-8689c4cfcc-xjcl5

FailedMount

MountVolume.SetUp failed for volume "proxy-tls" : secret "isvc-xgboost-predictor-serving-cert" not found

kserve-ci-e2e-test

v1beta1Controllers

isvc-xgboost

InternalError

fails to update InferenceService status: Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-xgboost": the object has been modified; please apply your changes to the latest version and try again

kserve-ci-e2e-test

deployment-controller

isvc-xgboost-predictor

ScalingReplicaSet

Scaled up replica set isvc-xgboost-predictor-8689c4cfcc from 0 to 1

kserve-ci-e2e-test

kubelet

isvc-triton-predictor-84bb65d94b-9qb2p

Killing

Stopping container kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-triton-predictor-84bb65d94b-9qb2p

Killing

Stopping container kserve-container

kserve-ci-e2e-test

replicaset-controller

isvc-xgboost-predictor-8689c4cfcc

SuccessfulCreate

Created pod: isvc-xgboost-predictor-8689c4cfcc-xjcl5
(x2)

kserve-ci-e2e-test

v1beta1Controllers

isvc-xgboost

UpdateFailed

Failed to update status for InferenceService "isvc-xgboost": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-xgboost": the object has been modified; please apply your changes to the latest version and try again

kserve-ci-e2e-test

kubelet

isvc-xgboost-predictor-8689c4cfcc-xjcl5

Pulled

Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:bfc8f044ee2a84e90b7b9876040a412b44d7b121a59a5fe7b22a946b15b61519" already present on machine

kserve-ci-e2e-test

multus

isvc-xgboost-predictor-8689c4cfcc-xjcl5

AddedInterface

Add eth0 [10.132.0.43/23] from ovn-kubernetes

kserve-ci-e2e-test

kubelet

isvc-triton-predictor-84bb65d94b-9qb2p

Unhealthy

Readiness probe failed: Get "https://10.133.0.44:8643/healthz": dial tcp 10.133.0.44:8643: connect: connection refused

kserve-ci-e2e-test

kubelet

isvc-xgboost-predictor-8689c4cfcc-xjcl5

Created

Created container: storage-initializer

kserve-ci-e2e-test

kubelet

isvc-xgboost-predictor-8689c4cfcc-xjcl5

Started

Started container storage-initializer

kserve-ci-e2e-test

kubelet

isvc-xgboost-predictor-8689c4cfcc-xjcl5

Pulling

Pulling image "kserve/xgbserver:latest"

kserve-ci-e2e-test

kubelet

isvc-xgboost-predictor-8689c4cfcc-xjcl5

Pulled

Successfully pulled image "kserve/xgbserver:latest" in 18.84s (18.84s including waiting). Image size: 1306414326 bytes.

kserve-ci-e2e-test

kubelet

isvc-xgboost-predictor-8689c4cfcc-xjcl5

Created

Created container: kserve-container

kserve-ci-e2e-test

kubelet

isvc-xgboost-predictor-8689c4cfcc-xjcl5

Started

Started container kserve-container

kserve-ci-e2e-test

kubelet

isvc-xgboost-predictor-8689c4cfcc-xjcl5

Started

Started container kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-xgboost-predictor-8689c4cfcc-xjcl5

Created

Created container: kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-xgboost-predictor-8689c4cfcc-xjcl5

Pulled

Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine
(x4)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-xgboost-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x4)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-xgboost-predictor

FailedGetResourceMetric

failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x2)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-xgboost-predictor

FailedGetResourceMetric

failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
(x2)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-xgboost-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)

kserve-ci-e2e-test

v1beta1Controllers

isvc-xgboost

InferenceServiceReady

InferenceService [isvc-xgboost] is Ready
(x13)

kserve-ci-e2e-test

v1beta1Controllers

isvc-xgboost

VirtualServiceCRDNotFound

Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.

kserve-ci-e2e-test

replicaset-controller

isvc-xgboost-v2-mlserver-predictor-67d4bc6646

SuccessfulCreate

Created pod: isvc-xgboost-v2-mlserver-predictor-67d4bc6646-5kbn8

kserve-ci-e2e-test

v1beta1Controllers

isvc-xgboost-v2-mlserver

InternalError

fails to update InferenceService status: Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-xgboost-v2-mlserver": the object has been modified; please apply your changes to the latest version and try again

kserve-ci-e2e-test

deployment-controller

isvc-xgboost-v2-mlserver-predictor

ScalingReplicaSet

Scaled up replica set isvc-xgboost-v2-mlserver-predictor-67d4bc6646 from 0 to 1

kserve-ci-e2e-test

kubelet

isvc-xgboost-predictor-8689c4cfcc-xjcl5

Killing

Stopping container kserve-container

kserve-ci-e2e-test

kubelet

isvc-xgboost-predictor-8689c4cfcc-xjcl5

Killing

Stopping container kube-rbac-proxy
(x2)

kserve-ci-e2e-test

v1beta1Controllers

isvc-xgboost-v2-mlserver

UpdateFailed

Failed to update status for InferenceService "isvc-xgboost-v2-mlserver": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-xgboost-v2-mlserver": the object has been modified; please apply your changes to the latest version and try again

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-mlserver-predictor-67d4bc6646-5kbn8

Started

Started container storage-initializer

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-mlserver-predictor-67d4bc6646-5kbn8

Pulled

Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:bfc8f044ee2a84e90b7b9876040a412b44d7b121a59a5fe7b22a946b15b61519" already present on machine
(x9)

kserve-ci-e2e-test

kubelet

isvc-xgboost-predictor-8689c4cfcc-xjcl5

Unhealthy

Readiness probe failed: dial tcp 10.132.0.43:8080: connect: connection refused

kserve-ci-e2e-test

kubelet

isvc-xgboost-predictor-8689c4cfcc-xjcl5

Unhealthy

Readiness probe failed: Get "https://10.132.0.43:8643/healthz": dial tcp 10.132.0.43:8643: connect: connection refused

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-mlserver-predictor-67d4bc6646-5kbn8

FailedMount

MountVolume.SetUp failed for volume "proxy-tls" : secret "isvc-xgboost-v2-mlserver-predictor-serving-cert" not found

kserve-ci-e2e-test

multus

isvc-xgboost-v2-mlserver-predictor-67d4bc6646-5kbn8

AddedInterface

Add eth0 [10.133.0.45/23] from ovn-kubernetes

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-mlserver-predictor-67d4bc6646-5kbn8

Created

Created container: storage-initializer

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-mlserver-predictor-67d4bc6646-5kbn8

Pulled

Container image "docker.io/seldonio/mlserver:1.7.1" already present on machine

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-mlserver-predictor-67d4bc6646-5kbn8

Created

Created container: kserve-container

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-mlserver-predictor-67d4bc6646-5kbn8

Started

Started container kserve-container

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-mlserver-predictor-67d4bc6646-5kbn8

Pulled

Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-mlserver-predictor-67d4bc6646-5kbn8

Created

Created container: kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-mlserver-predictor-67d4bc6646-5kbn8

Started

Started container kube-rbac-proxy
(x2)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-xgboost-v2-mlserver-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x2)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-xgboost-v2-mlserver-predictor

FailedGetResourceMetric

failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API

kserve-ci-e2e-test

v1beta1Controllers

isvc-xgboost-v2-mlserver

InferenceServiceReady

InferenceService [isvc-xgboost-v2-mlserver] is Ready
(x13)

kserve-ci-e2e-test

v1beta1Controllers

isvc-xgboost-v2-mlserver

VirtualServiceCRDNotFound

Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-xgboost-v2-mlserver-predictor

FailedGetResourceMetric

failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-xgboost-v2-mlserver-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-mlserver-predictor-67d4bc6646-5kbn8

Killing

Stopping container kube-rbac-proxy

kserve-ci-e2e-test

v1beta1Controllers

xgboost-v2-mlserver

UpdateFailed

Failed to update status for InferenceService "xgboost-v2-mlserver": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "xgboost-v2-mlserver": the object has been modified; please apply your changes to the latest version and try again

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-mlserver-predictor-67d4bc6646-5kbn8

Killing

Stopping container kserve-container

kserve-ci-e2e-test

deployment-controller

xgboost-v2-mlserver-predictor

ScalingReplicaSet

Scaled up replica set xgboost-v2-mlserver-predictor-7799869d6f from 0 to 1

kserve-ci-e2e-test

replicaset-controller

xgboost-v2-mlserver-predictor-7799869d6f

SuccessfulCreate

Created pod: xgboost-v2-mlserver-predictor-7799869d6f-2ckrz

kserve-ci-e2e-test

kubelet

xgboost-v2-mlserver-predictor-7799869d6f-2ckrz

FailedMount

MountVolume.SetUp failed for volume "proxy-tls" : secret "xgboost-v2-mlserver-predictor-serving-cert" not found

kserve-ci-e2e-test

kubelet

xgboost-v2-mlserver-predictor-7799869d6f-2ckrz

Started

Started container storage-initializer

kserve-ci-e2e-test

multus

xgboost-v2-mlserver-predictor-7799869d6f-2ckrz

AddedInterface

Add eth0 [10.133.0.46/23] from ovn-kubernetes

kserve-ci-e2e-test

kubelet

xgboost-v2-mlserver-predictor-7799869d6f-2ckrz

Pulled

Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:bfc8f044ee2a84e90b7b9876040a412b44d7b121a59a5fe7b22a946b15b61519" already present on machine

kserve-ci-e2e-test

kubelet

xgboost-v2-mlserver-predictor-7799869d6f-2ckrz

Created

Created container: storage-initializer

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-mlserver-predictor-67d4bc6646-5kbn8

Unhealthy

Readiness probe failed: Get "http://10.133.0.45:8080/v2/models/isvc-xgboost-v2-mlserver/ready": dial tcp 10.133.0.45:8080: connect: connection refused

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-mlserver-predictor-67d4bc6646-5kbn8

Unhealthy

Readiness probe failed: Get "https://10.133.0.45:8643/healthz": dial tcp 10.133.0.45:8643: connect: connection refused

kserve-ci-e2e-test

kubelet

xgboost-v2-mlserver-predictor-7799869d6f-2ckrz

Started

Started container kserve-container

kserve-ci-e2e-test

kubelet

xgboost-v2-mlserver-predictor-7799869d6f-2ckrz

Created

Created container: kserve-container

kserve-ci-e2e-test

kubelet

xgboost-v2-mlserver-predictor-7799869d6f-2ckrz

Created

Created container: kube-rbac-proxy

kserve-ci-e2e-test

kubelet

xgboost-v2-mlserver-predictor-7799869d6f-2ckrz

Pulled

Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine

kserve-ci-e2e-test

kubelet

xgboost-v2-mlserver-predictor-7799869d6f-2ckrz

Pulled

Container image "docker.io/seldonio/mlserver:1.7.1" already present on machine

kserve-ci-e2e-test

kubelet

xgboost-v2-mlserver-predictor-7799869d6f-2ckrz

Started

Started container kube-rbac-proxy

kserve-ci-e2e-test

horizontal-pod-autoscaler

xgboost-v2-mlserver-predictor

FailedGetResourceMetric

failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)

kserve-ci-e2e-test

horizontal-pod-autoscaler

xgboost-v2-mlserver-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
(x12)

kserve-ci-e2e-test

v1beta1Controllers

xgboost-v2-mlserver

VirtualServiceCRDNotFound

Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.

kserve-ci-e2e-test

v1beta1Controllers

xgboost-v2-mlserver

InferenceServiceReady

InferenceService [xgboost-v2-mlserver] is Ready
(x2)

kserve-ci-e2e-test

horizontal-pod-autoscaler

xgboost-v2-mlserver-predictor

FailedGetResourceMetric

failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x2)

kserve-ci-e2e-test

horizontal-pod-autoscaler

xgboost-v2-mlserver-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API

kserve-ci-e2e-test

kubelet

xgboost-v2-mlserver-predictor-7799869d6f-2ckrz

Killing

Stopping container kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-xgboost-runtime-predictor-779db84d9-jqv9h

FailedMount

MountVolume.SetUp failed for volume "proxy-tls" : secret "isvc-xgboost-runtime-predictor-serving-cert" not found

kserve-ci-e2e-test

replicaset-controller

isvc-xgboost-runtime-predictor-779db84d9

SuccessfulCreate

Created pod: isvc-xgboost-runtime-predictor-779db84d9-jqv9h

kserve-ci-e2e-test

deployment-controller

isvc-xgboost-runtime-predictor

ScalingReplicaSet

Scaled up replica set isvc-xgboost-runtime-predictor-779db84d9 from 0 to 1

kserve-ci-e2e-test

v1beta1Controllers

isvc-xgboost-runtime

UpdateFailed

Failed to update status for InferenceService "isvc-xgboost-runtime": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-xgboost-runtime": the object has been modified; please apply your changes to the latest version and try again

kserve-ci-e2e-test

kubelet

xgboost-v2-mlserver-predictor-7799869d6f-2ckrz

Killing

Stopping container kserve-container

kserve-ci-e2e-test

multus

isvc-xgboost-runtime-predictor-779db84d9-jqv9h

AddedInterface

Add eth0 [10.133.0.47/23] from ovn-kubernetes

kserve-ci-e2e-test

kubelet

isvc-xgboost-runtime-predictor-779db84d9-jqv9h

Created

Created container: storage-initializer

kserve-ci-e2e-test

kubelet

isvc-xgboost-runtime-predictor-779db84d9-jqv9h

Pulled

Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:bfc8f044ee2a84e90b7b9876040a412b44d7b121a59a5fe7b22a946b15b61519" already present on machine

kserve-ci-e2e-test

kubelet

isvc-xgboost-runtime-predictor-779db84d9-jqv9h

Started

Started container storage-initializer

kserve-ci-e2e-test

kubelet

xgboost-v2-mlserver-predictor-7799869d6f-2ckrz

Unhealthy

Readiness probe failed: Get "https://10.133.0.46:8643/healthz": dial tcp 10.133.0.46:8643: connect: connection refused

kserve-ci-e2e-test

kubelet

isvc-xgboost-runtime-predictor-779db84d9-jqv9h

Pulling

Pulling image "kserve/xgbserver:latest"

kserve-ci-e2e-test

kubelet

isvc-xgboost-runtime-predictor-779db84d9-jqv9h

Created

Created container: kserve-container

kserve-ci-e2e-test

kubelet

isvc-xgboost-runtime-predictor-779db84d9-jqv9h

Pulled

Successfully pulled image "kserve/xgbserver:latest" in 19.426s (19.426s including waiting). Image size: 1306414326 bytes.

kserve-ci-e2e-test

kubelet

isvc-xgboost-runtime-predictor-779db84d9-jqv9h

Created

Created container: kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-xgboost-runtime-predictor-779db84d9-jqv9h

Started

Started container kserve-container

kserve-ci-e2e-test

kubelet

isvc-xgboost-runtime-predictor-779db84d9-jqv9h

Pulled

Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine

kserve-ci-e2e-test

kubelet

isvc-xgboost-runtime-predictor-779db84d9-jqv9h

Started

Started container kube-rbac-proxy
(x4)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-xgboost-runtime-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x4)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-xgboost-runtime-predictor

FailedGetResourceMetric

failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x2)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-xgboost-runtime-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
(x2)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-xgboost-runtime-predictor

FailedGetResourceMetric

failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)

kserve-ci-e2e-test

v1beta1Controllers

isvc-xgboost-runtime

InferenceServiceReady

InferenceService [isvc-xgboost-runtime] is Ready
(x12)

kserve-ci-e2e-test

v1beta1Controllers

isvc-xgboost-runtime

VirtualServiceCRDNotFound

Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-runtime-predictor-6dc5954dc-vlzsj

Pulled

Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:bfc8f044ee2a84e90b7b9876040a412b44d7b121a59a5fe7b22a946b15b61519" already present on machine

kserve-ci-e2e-test

replicaset-controller

isvc-xgboost-v2-runtime-predictor-6dc5954dc

SuccessfulCreate

Created pod: isvc-xgboost-v2-runtime-predictor-6dc5954dc-vlzsj

kserve-ci-e2e-test

deployment-controller

isvc-xgboost-v2-runtime-predictor

ScalingReplicaSet

Scaled up replica set isvc-xgboost-v2-runtime-predictor-6dc5954dc from 0 to 1

kserve-ci-e2e-test

kubelet

isvc-xgboost-runtime-predictor-779db84d9-jqv9h

Killing

Stopping container kserve-container

kserve-ci-e2e-test

kubelet

isvc-xgboost-runtime-predictor-779db84d9-jqv9h

Killing

Stopping container kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-runtime-predictor-6dc5954dc-vlzsj

Started

Started container storage-initializer

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-runtime-predictor-6dc5954dc-vlzsj

Created

Created container: storage-initializer

kserve-ci-e2e-test

multus

isvc-xgboost-v2-runtime-predictor-6dc5954dc-vlzsj

AddedInterface

Add eth0 [10.133.0.48/23] from ovn-kubernetes
(x2)

kserve-ci-e2e-test

v1beta1Controllers

isvc-xgboost-v2-runtime

UpdateFailed

Failed to update status for InferenceService "isvc-xgboost-v2-runtime": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-xgboost-v2-runtime": the object has been modified; please apply your changes to the latest version and try again

kserve-ci-e2e-test

v1beta1Controllers

isvc-xgboost-v2-runtime

InternalError

fails to update InferenceService status: Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-xgboost-v2-runtime": the object has been modified; please apply your changes to the latest version and try again

kserve-ci-e2e-test

kubelet

isvc-xgboost-runtime-predictor-779db84d9-jqv9h

Unhealthy

Readiness probe failed: Get "https://10.133.0.47:8643/healthz": dial tcp 10.133.0.47:8643: connect: connection refused
(x9)

kserve-ci-e2e-test

kubelet

isvc-xgboost-runtime-predictor-779db84d9-jqv9h

Unhealthy

Readiness probe failed: dial tcp 10.133.0.47:8080: connect: connection refused

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-runtime-predictor-6dc5954dc-vlzsj

Pulled

Container image "docker.io/seldonio/mlserver:1.7.1" already present on machine

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-runtime-predictor-6dc5954dc-vlzsj

Started

Started container kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-runtime-predictor-6dc5954dc-vlzsj

Created

Created container: kserve-container

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-runtime-predictor-6dc5954dc-vlzsj

Created

Created container: kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-runtime-predictor-6dc5954dc-vlzsj

Pulled

Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-runtime-predictor-6dc5954dc-vlzsj

Started

Started container kserve-container

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-runtime-predictor-6dc5954dc-vlzsj

Unhealthy

Readiness probe failed: HTTP probe failed with statuscode: 400
(x2)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-xgboost-v2-runtime-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x2)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-xgboost-v2-runtime-predictor

FailedGetResourceMetric

failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x12)

kserve-ci-e2e-test

v1beta1Controllers

isvc-xgboost-v2-runtime

VirtualServiceCRDNotFound

Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.

kserve-ci-e2e-test

v1beta1Controllers

isvc-xgboost-v2-runtime

InferenceServiceReady

InferenceService [isvc-xgboost-v2-runtime] is Ready

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-runtime-predictor-6dc5954dc-vlzsj

Killing

Stopping container kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-runtime-predictor-6dc5954dc-vlzsj

Killing

Stopping container kserve-container

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-predictor-6fcdd6977c-qc8j8

FailedMount

MountVolume.SetUp failed for volume "proxy-tls" : secret "isvc-xgboost-v2-predictor-serving-cert" not found
(x2)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-xgboost-v2-runtime-predictor

FailedGetResourceMetric

failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)

kserve-ci-e2e-test

v1beta1Controllers

isvc-xgboost-v2

InternalError

fails to update InferenceService status: Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-xgboost-v2": the object has been modified; please apply your changes to the latest version and try again

kserve-ci-e2e-test

deployment-controller

isvc-xgboost-v2-predictor

ScalingReplicaSet

Scaled up replica set isvc-xgboost-v2-predictor-6fcdd6977c from 0 to 1

kserve-ci-e2e-test

replicaset-controller

isvc-xgboost-v2-predictor-6fcdd6977c

SuccessfulCreate

Created pod: isvc-xgboost-v2-predictor-6fcdd6977c-qc8j8
(x2)

kserve-ci-e2e-test

v1beta1Controllers

isvc-xgboost-v2

UpdateFailed

Failed to update status for InferenceService "isvc-xgboost-v2": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-xgboost-v2": the object has been modified; please apply your changes to the latest version and try again
(x2)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-xgboost-v2-runtime-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-predictor-6fcdd6977c-qc8j8

Created

Created container: storage-initializer

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-predictor-6fcdd6977c-qc8j8

Pulled

Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:bfc8f044ee2a84e90b7b9876040a412b44d7b121a59a5fe7b22a946b15b61519" already present on machine

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-predictor-6fcdd6977c-qc8j8

Started

Started container storage-initializer

kserve-ci-e2e-test

multus

isvc-xgboost-v2-predictor-6fcdd6977c-qc8j8

AddedInterface

Add eth0 [10.133.0.49/23] from ovn-kubernetes

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-runtime-predictor-6dc5954dc-vlzsj

Unhealthy

Readiness probe failed: Get "http://10.133.0.48:8080/v2/models/isvc-xgboost-v2-runtime/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers)

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-predictor-6fcdd6977c-qc8j8

Pulled

Container image "kserve/xgbserver:latest" already present on machine

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-predictor-6fcdd6977c-qc8j8

Created

Created container: kserve-container

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-predictor-6fcdd6977c-qc8j8

Started

Started container kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-predictor-6fcdd6977c-qc8j8

Created

Created container: kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-predictor-6fcdd6977c-qc8j8

Pulled

Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-predictor-6fcdd6977c-qc8j8

Started

Started container kserve-container
(x2)

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-runtime-predictor-6dc5954dc-vlzsj

Unhealthy

Readiness probe failed: Get "https://10.133.0.48:8643/healthz": dial tcp 10.133.0.48:8643: connect: connection refused
(x2)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-xgboost-v2-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x2)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-xgboost-v2-predictor

FailedGetResourceMetric

failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x13)

kserve-ci-e2e-test

v1beta1Controllers

isvc-xgboost-v2

VirtualServiceCRDNotFound

Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.

kserve-ci-e2e-test

v1beta1Controllers

isvc-xgboost-v2

InferenceServiceReady

InferenceService [isvc-xgboost-v2] is Ready
(x3)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-xgboost-v2-predictor

FailedGetResourceMetric

failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
(x3)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-xgboost-v2-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)

kserve-ci-e2e-test

v1beta1Controllers

isvc-sklearn-s3

UpdateFailed

Failed to update status for InferenceService "isvc-sklearn-s3": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-sklearn-s3": the object has been modified; please apply your changes to the latest version and try again

kserve-ci-e2e-test

kubelet

isvc-sklearn-s3-predictor-5d9949bc59-68dwn

FailedMount

MountVolume.SetUp failed for volume "proxy-tls" : secret "isvc-sklearn-s3-predictor-serving-cert" not found

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-predictor-6fcdd6977c-qc8j8

Killing

Stopping container kserve-container

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-predictor-6fcdd6977c-qc8j8

Killing

Stopping container kube-rbac-proxy

kserve-ci-e2e-test

replicaset-controller

isvc-sklearn-s3-predictor-5d9949bc59

SuccessfulCreate

Created pod: isvc-sklearn-s3-predictor-5d9949bc59-68dwn

kserve-ci-e2e-test

deployment-controller

isvc-sklearn-s3-predictor

ScalingReplicaSet

Scaled up replica set isvc-sklearn-s3-predictor-5d9949bc59 from 0 to 1

kserve-ci-e2e-test

multus

isvc-sklearn-s3-predictor-5d9949bc59-68dwn

AddedInterface

Add eth0 [10.132.0.44/23] from ovn-kubernetes

kserve-ci-e2e-test

kubelet

isvc-sklearn-s3-predictor-5d9949bc59-68dwn

Pulled

Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:bfc8f044ee2a84e90b7b9876040a412b44d7b121a59a5fe7b22a946b15b61519" already present on machine

kserve-ci-e2e-test

kubelet

isvc-sklearn-s3-predictor-5d9949bc59-68dwn

Created

Created container: storage-initializer

kserve-ci-e2e-test

kubelet

isvc-sklearn-s3-predictor-5d9949bc59-68dwn

Started

Started container storage-initializer

kserve-ci-e2e-test

kubelet

isvc-sklearn-s3-predictor-5d9949bc59-68dwn

Created

Created container: kserve-container
(x10)

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-predictor-6fcdd6977c-qc8j8

Unhealthy

Readiness probe failed: dial tcp 10.133.0.49:8080: connect: connection refused

kserve-ci-e2e-test

kubelet

isvc-sklearn-s3-predictor-5d9949bc59-68dwn

Pulled

Container image "quay.io/opendatahub/sklearn-serving-runtime:odh-pr-1436" already present on machine

kserve-ci-e2e-test

kubelet

isvc-sklearn-s3-predictor-5d9949bc59-68dwn

Started

Started container kserve-container

kserve-ci-e2e-test

kubelet

isvc-sklearn-s3-predictor-5d9949bc59-68dwn

Started

Started container kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-sklearn-s3-predictor-5d9949bc59-68dwn

Created

Created container: kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-sklearn-s3-predictor-5d9949bc59-68dwn

Pulled

Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-predictor-6fcdd6977c-qc8j8

Unhealthy

Readiness probe failed: Get "https://10.133.0.49:8643/healthz": dial tcp 10.133.0.49:8643: connect: connection refused
(x2)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-sklearn-s3-predictor

FailedGetResourceMetric

failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x2)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-sklearn-s3-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API

kserve-ci-e2e-test

v1beta1Controllers

isvc-sklearn-s3

InferenceServiceReady

InferenceService [isvc-sklearn-s3] is Ready
(x13)

kserve-ci-e2e-test

v1beta1Controllers

isvc-sklearn-s3

VirtualServiceCRDNotFound

Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.
(x3)

kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-sklearn-s3-predictor | FailedGetResourceMetric | failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready) (x3)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-sklearn-s3-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-predictor-5d9949bc59-68dwn | Killing | Stopping container kserve-container
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-predictor-5d9949bc59-68dwn | Killing | Stopping container kube-rbac-proxy (x2)
kserve-ci-e2e-test | v1beta1Controllers | isvc-sklearn-s3-tls-global-pass | UpdateFailed | Failed to update status for InferenceService "isvc-sklearn-s3-tls-global-pass": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-sklearn-s3-tls-global-pass": the object has been modified; please apply your changes to the latest version and try again
kserve-ci-e2e-test | replicaset-controller | isvc-sklearn-s3-tls-global-pass-predictor-86f54c547 | SuccessfulCreate | Created pod: isvc-sklearn-s3-tls-global-pass-predictor-86f54c547-k8fcn
kserve-ci-e2e-test | deployment-controller | isvc-sklearn-s3-tls-global-pass-predictor | ScalingReplicaSet | Scaled up replica set isvc-sklearn-s3-tls-global-pass-predictor-86f54c547 from 0 to 1
kserve-ci-e2e-test | v1beta1Controllers | isvc-sklearn-s3-tls-global-pass | InternalError | fails to update InferenceService status: Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-sklearn-s3-tls-global-pass": the object has been modified; please apply your changes to the latest version and try again
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-predictor-5d9949bc59-68dwn | Unhealthy | Readiness probe failed: Get "https://10.132.0.44:8643/healthz": dial tcp 10.132.0.44:8643: connect: connection refused (x9)

kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-predictor-5d9949bc59-68dwn | Unhealthy | Readiness probe failed: dial tcp 10.132.0.44:8080: connect: connection refused
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-global-pass-predictor-86f54c547-k8fcn | FailedMount | MountVolume.SetUp failed for volume "proxy-tls" : secret "isvc-sklearn-s3-tls-global-pass-predictor-serving-cert" not found
kserve-ci-e2e-test | multus | isvc-sklearn-s3-tls-global-pass-predictor-86f54c547-k8fcn | AddedInterface | Add eth0 [10.132.0.45/23] from ovn-kubernetes
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-global-pass-predictor-86f54c547-k8fcn | Pulled | Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:bfc8f044ee2a84e90b7b9876040a412b44d7b121a59a5fe7b22a946b15b61519" already present on machine
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-global-pass-predictor-86f54c547-k8fcn | Started | Started container storage-initializer
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-global-pass-predictor-86f54c547-k8fcn | Created | Created container: storage-initializer
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-global-pass-predictor-86f54c547-k8fcn | Started | Started container kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-global-pass-predictor-86f54c547-k8fcn | Pulled | Container image "quay.io/opendatahub/sklearn-serving-runtime:odh-pr-1436" already present on machine
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-global-pass-predictor-86f54c547-k8fcn | Created | Created container: kserve-container
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-global-pass-predictor-86f54c547-k8fcn | Started | Started container kserve-container
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-global-pass-predictor-86f54c547-k8fcn | Pulled | Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-global-pass-predictor-86f54c547-k8fcn | Created | Created container: kube-rbac-proxy

kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-sklearn-s3-tls-global-pass-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-sklearn-s3-tls-global-pass-predictor | FailedGetResourceMetric | failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API (x8)
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-global-pass-predictor-86f54c547-k8fcn | Unhealthy | Readiness probe failed: dial tcp 10.132.0.45:8080: connect: connection refused (x13)
kserve-ci-e2e-test | v1beta1Controllers | isvc-sklearn-s3-tls-global-pass | VirtualServiceCRDNotFound | Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.
kserve-ci-e2e-test | v1beta1Controllers | isvc-sklearn-s3-tls-global-pass | InferenceServiceReady | InferenceService [isvc-sklearn-s3-tls-global-pass] is Ready (x4)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-sklearn-s3-tls-global-pass-predictor | FailedGetResourceMetric | failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready) (x4)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-sklearn-s3-tls-global-pass-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-global-pass-predictor-86f54c547-k8fcn | Killing | Stopping container kserve-container
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-global-pass-predictor-86f54c547-k8fcn | Killing | Stopping container kube-rbac-proxy

kserve-ci-e2e-test | v1beta1Controllers | isvc-sklearn-s3-tls-global-fail | UpdateFailed | Failed to update status for InferenceService "isvc-sklearn-s3-tls-global-fail": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-sklearn-s3-tls-global-fail": the object has been modified; please apply your changes to the latest version and try again
kserve-ci-e2e-test | deployment-controller | isvc-sklearn-s3-tls-global-fail-predictor | ScalingReplicaSet | Scaled up replica set isvc-sklearn-s3-tls-global-fail-predictor-6cc5fddf57 from 0 to 1
kserve-ci-e2e-test | replicaset-controller | isvc-sklearn-s3-tls-global-fail-predictor-6cc5fddf57 | SuccessfulCreate | Created pod: isvc-sklearn-s3-tls-global-fail-predictor-6cc5fddf57-2wcnk
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-global-fail-predictor-6cc5fddf57-2wcnk | FailedMount | MountVolume.SetUp failed for volume "proxy-tls" : secret "isvc-sklearn-s3-tls-global-fail-predictor-serving-cert" not found
kserve-ci-e2e-test | multus | isvc-sklearn-s3-tls-global-fail-predictor-6cc5fddf57-2wcnk | AddedInterface | Add eth0 [10.132.0.46/23] from ovn-kubernetes (x2)
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-global-fail-predictor-6cc5fddf57-2wcnk | Started | Started container storage-initializer (x2)
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-global-fail-predictor-6cc5fddf57-2wcnk | Created | Created container: storage-initializer (x2)
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-global-fail-predictor-6cc5fddf57-2wcnk | Pulled | Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:bfc8f044ee2a84e90b7b9876040a412b44d7b121a59a5fe7b22a946b15b61519" already present on machine (x12)
kserve-ci-e2e-test | v1beta1Controllers | isvc-sklearn-s3-tls-global-fail | VirtualServiceCRDNotFound | Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-global-fail-predictor-6cc5fddf57-2wcnk | BackOff | Back-off restarting failed container storage-initializer in pod isvc-sklearn-s3-tls-global-fail-predictor-6cc5fddf57-2wcnk_kserve-ci-e2e-test(7c6437b7-859c-4262-9bf0-4a43c225af01)

kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-custom-pass-predictor-8498c4f6bc-9x26t | FailedMount | MountVolume.SetUp failed for volume "proxy-tls" : secret "isvc-sklearn-s3-tls-custom-pass-predictor-serving-cert" not found
kserve-ci-e2e-test | deployment-controller | isvc-sklearn-s3-tls-custom-pass-predictor | ScalingReplicaSet | Scaled up replica set isvc-sklearn-s3-tls-custom-pass-predictor-8498c4f6bc from 0 to 1
kserve-ci-e2e-test | replicaset-controller | isvc-sklearn-s3-tls-custom-pass-predictor-8498c4f6bc | SuccessfulCreate | Created pod: isvc-sklearn-s3-tls-custom-pass-predictor-8498c4f6bc-9x26t
kserve-ci-e2e-test | v1beta1Controllers | isvc-sklearn-s3-tls-custom-pass | InternalError | fails to update InferenceService status: Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-sklearn-s3-tls-custom-pass": the object has been modified; please apply your changes to the latest version and try again (x2)
kserve-ci-e2e-test | v1beta1Controllers | isvc-sklearn-s3-tls-custom-pass | UpdateFailed | Failed to update status for InferenceService "isvc-sklearn-s3-tls-custom-pass": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-sklearn-s3-tls-custom-pass": the object has been modified; please apply your changes to the latest version and try again
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-custom-pass-predictor-8498c4f6bc-9x26t | Created | Created container: storage-initializer
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-custom-pass-predictor-8498c4f6bc-9x26t | Started | Started container storage-initializer
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-custom-pass-predictor-8498c4f6bc-9x26t | Pulled | Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:bfc8f044ee2a84e90b7b9876040a412b44d7b121a59a5fe7b22a946b15b61519" already present on machine
kserve-ci-e2e-test | multus | isvc-sklearn-s3-tls-custom-pass-predictor-8498c4f6bc-9x26t | AddedInterface | Add eth0 [10.132.0.47/23] from ovn-kubernetes
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-custom-pass-predictor-8498c4f6bc-9x26t | Started | Started container kube-rbac-proxy

kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-custom-pass-predictor-8498c4f6bc-9x26t | Started | Started container kserve-container
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-custom-pass-predictor-8498c4f6bc-9x26t | Created | Created container: kserve-container
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-custom-pass-predictor-8498c4f6bc-9x26t | Pulled | Container image "quay.io/opendatahub/sklearn-serving-runtime:odh-pr-1436" already present on machine
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-custom-pass-predictor-8498c4f6bc-9x26t | Created | Created container: kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-custom-pass-predictor-8498c4f6bc-9x26t | Pulled | Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine (x2)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-sklearn-s3-tls-custom-pass-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API (x2)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-sklearn-s3-tls-custom-pass-predictor | FailedGetResourceMetric | failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API (x14)
kserve-ci-e2e-test | v1beta1Controllers | isvc-sklearn-s3-tls-custom-pass | VirtualServiceCRDNotFound | Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.
kserve-ci-e2e-test | v1beta1Controllers | isvc-sklearn-s3-tls-custom-pass | InferenceServiceReady | InferenceService [isvc-sklearn-s3-tls-custom-pass] is Ready (x3)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-sklearn-s3-tls-custom-pass-predictor | FailedGetResourceMetric | failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready) (x3)

kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-sklearn-s3-tls-custom-pass-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-custom-pass-predictor-8498c4f6bc-9x26t | Killing | Stopping container kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-custom-pass-predictor-8498c4f6bc-9x26t | Killing | Stopping container kserve-container
kserve-ci-e2e-test | v1beta1Controllers | isvc-sklearn-s3-tls-custom-fail | UpdateFailed | Failed to update status for InferenceService "isvc-sklearn-s3-tls-custom-fail": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-sklearn-s3-tls-custom-fail": the object has been modified; please apply your changes to the latest version and try again
kserve-ci-e2e-test | multus | isvc-sklearn-s3-tls-custom-fail-predictor-599d4bdb65-jg562 | AddedInterface | Add eth0 [10.132.0.48/23] from ovn-kubernetes
kserve-ci-e2e-test | deployment-controller | isvc-sklearn-s3-tls-custom-fail-predictor | ScalingReplicaSet | Scaled up replica set isvc-sklearn-s3-tls-custom-fail-predictor-599d4bdb65 from 0 to 1 (x9)
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-custom-pass-predictor-8498c4f6bc-9x26t | Unhealthy | Readiness probe failed: dial tcp 10.132.0.47:8080: connect: connection refused
kserve-ci-e2e-test | replicaset-controller | isvc-sklearn-s3-tls-custom-fail-predictor-599d4bdb65 | SuccessfulCreate | Created pod: isvc-sklearn-s3-tls-custom-fail-predictor-599d4bdb65-jg562
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-custom-pass-predictor-8498c4f6bc-9x26t | Unhealthy | Readiness probe failed: Get "https://10.132.0.47:8643/healthz": dial tcp 10.132.0.47:8643: connect: connection refused (x2)
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-custom-fail-predictor-599d4bdb65-jg562 | Created | Created container: storage-initializer (x2)
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-custom-fail-predictor-599d4bdb65-jg562 | Pulled | Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:bfc8f044ee2a84e90b7b9876040a412b44d7b121a59a5fe7b22a946b15b61519" already present on machine (x2)

kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-custom-fail-predictor-599d4bdb65-jg562 | Started | Started container storage-initializer (x12)
kserve-ci-e2e-test | v1beta1Controllers | isvc-sklearn-s3-tls-custom-fail | VirtualServiceCRDNotFound | Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-custom-fail-predictor-599d4bdb65-jg562 | BackOff | Back-off restarting failed container storage-initializer in pod isvc-sklearn-s3-tls-custom-fail-predictor-599d4bdb65-jg562_kserve-ci-e2e-test(9f85c2a9-5ec8-42b1-8d3b-bcf549539786)
kserve-ci-e2e-test | multus | isvc-sklearn-s3-tls-serving-pass-predictor-d9cdb6474-675g9 | AddedInterface | Add eth0 [10.132.0.49/23] from ovn-kubernetes
kserve-ci-e2e-test | deployment-controller | isvc-sklearn-s3-tls-serving-pass-predictor | ScalingReplicaSet | Scaled up replica set isvc-sklearn-s3-tls-serving-pass-predictor-d9cdb6474 from 0 to 1
kserve-ci-e2e-test | replicaset-controller | isvc-sklearn-s3-tls-serving-pass-predictor-d9cdb6474 | SuccessfulCreate | Created pod: isvc-sklearn-s3-tls-serving-pass-predictor-d9cdb6474-675g9
kserve-ci-e2e-test | v1beta1Controllers | isvc-sklearn-s3-tls-serving-pass | UpdateFailed | Failed to update status for InferenceService "isvc-sklearn-s3-tls-serving-pass": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-sklearn-s3-tls-serving-pass": the object has been modified; please apply your changes to the latest version and try again
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-serving-pass-predictor-d9cdb6474-675g9 | Started | Started container storage-initializer
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-serving-pass-predictor-d9cdb6474-675g9 | Created | Created container: storage-initializer
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-serving-pass-predictor-d9cdb6474-675g9 | Pulled | Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:bfc8f044ee2a84e90b7b9876040a412b44d7b121a59a5fe7b22a946b15b61519" already present on machine
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-serving-pass-predictor-d9cdb6474-675g9 | Started | Started container kserve-container

kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-serving-pass-predictor-d9cdb6474-675g9 | Pulled | Container image "quay.io/opendatahub/sklearn-serving-runtime:odh-pr-1436" already present on machine
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-serving-pass-predictor-d9cdb6474-675g9 | Started | Started container kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-serving-pass-predictor-d9cdb6474-675g9 | Created | Created container: kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-serving-pass-predictor-d9cdb6474-675g9 | Pulled | Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-serving-pass-predictor-d9cdb6474-675g9 | Created | Created container: kserve-container (x2)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-sklearn-s3-tls-serving-pass-predictor | FailedGetResourceMetric | failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API (x2)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-sklearn-s3-tls-serving-pass-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
kserve-ci-e2e-test | v1beta1Controllers | isvc-sklearn-s3-tls-serving-pass | InferenceServiceReady | InferenceService [isvc-sklearn-s3-tls-serving-pass] is Ready (x12)
kserve-ci-e2e-test | v1beta1Controllers | isvc-sklearn-s3-tls-serving-pass | VirtualServiceCRDNotFound | Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true. (x3)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-sklearn-s3-tls-serving-pass-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready) (x3)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-sklearn-s3-tls-serving-pass-predictor | FailedGetResourceMetric | failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready) (x9)

kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-serving-pass-predictor-d9cdb6474-675g9 | Unhealthy | Readiness probe failed: dial tcp 10.132.0.49:8080: connect: connection refused
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-serving-pass-predictor-d9cdb6474-675g9 | Killing | Stopping container kserve-container
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-serving-pass-predictor-d9cdb6474-675g9 | Killing | Stopping container kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-serving-pass-predictor-d9cdb6474-675g9 | Unhealthy | Readiness probe failed: Get "https://10.132.0.49:8643/healthz": dial tcp 10.132.0.49:8643: connect: connection refused
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-serving-fail-predictor-779d8c7cfb-4qdl5 | FailedMount | MountVolume.SetUp failed for volume "proxy-tls" : secret "isvc-sklearn-s3-tls-serving-fail-predictor-serving-cert" not found
kserve-ci-e2e-test | replicaset-controller | isvc-sklearn-s3-tls-serving-fail-predictor-779d8c7cfb | SuccessfulCreate | Created pod: isvc-sklearn-s3-tls-serving-fail-predictor-779d8c7cfb-4qdl5
kserve-ci-e2e-test | deployment-controller | isvc-sklearn-s3-tls-serving-fail-predictor | ScalingReplicaSet | Scaled up replica set isvc-sklearn-s3-tls-serving-fail-predictor-779d8c7cfb from 0 to 1
kserve-ci-e2e-test | v1beta1Controllers | isvc-sklearn-s3-tls-serving-fail | InternalError | fails to update InferenceService status: Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-sklearn-s3-tls-serving-fail": the object has been modified; please apply your changes to the latest version and try again (x2)
kserve-ci-e2e-test | v1beta1Controllers | isvc-sklearn-s3-tls-serving-fail | UpdateFailed | Failed to update status for InferenceService "isvc-sklearn-s3-tls-serving-fail": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-sklearn-s3-tls-serving-fail": the object has been modified; please apply your changes to the latest version and try again
kserve-ci-e2e-test | multus | isvc-sklearn-s3-tls-serving-fail-predictor-779d8c7cfb-4qdl5 | AddedInterface | Add eth0 [10.132.0.50/23] from ovn-kubernetes (x12)
kserve-ci-e2e-test | v1beta1Controllers | isvc-sklearn-s3-tls-serving-fail | VirtualServiceCRDNotFound | Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true. (x2)

kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-serving-fail-predictor-779d8c7cfb-4qdl5 | Pulled | Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:bfc8f044ee2a84e90b7b9876040a412b44d7b121a59a5fe7b22a946b15b61519" already present on machine (x2)
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-serving-fail-predictor-779d8c7cfb-4qdl5 | Created | Created container: storage-initializer (x2)
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-serving-fail-predictor-779d8c7cfb-4qdl5 | Started | Started container storage-initializer
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-serving-fail-predictor-779d8c7cfb-4qdl5 | Killing | Stopping container storage-initializer