Time Namespace Component RelatedObject Reason Message

kserve-ci-e2e-test

isvc-sklearn-s3-tls-serving-fail-predictor-6cb95fcd5b-8qwqn

Scheduled

Successfully assigned kserve-ci-e2e-test/isvc-sklearn-s3-tls-serving-fail-predictor-6cb95fcd5b-8qwqn to ip-10-0-134-116.ec2.internal

kserve-ci-e2e-test

isvc-mlflow-v2-runtime-predictor-5fdb47d546-2kj85

Scheduled

Successfully assigned kserve-ci-e2e-test/isvc-mlflow-v2-runtime-predictor-5fdb47d546-2kj85 to ip-10-0-134-116.ec2.internal

kserve-ci-e2e-test

sklearn-v2-mlserver-predictor-65d8664766-jhmvx

Scheduled

Successfully assigned kserve-ci-e2e-test/sklearn-v2-mlserver-predictor-65d8664766-jhmvx to ip-10-0-134-116.ec2.internal

kserve-ci-e2e-test

isvc-pmml-predictor-8bb578669-sn85p

Scheduled

Successfully assigned kserve-ci-e2e-test/isvc-pmml-predictor-8bb578669-sn85p to ip-10-0-134-116.ec2.internal

kserve-ci-e2e-test

isvc-pmml-runtime-predictor-67bc544947-f7qtd

Scheduled

Successfully assigned kserve-ci-e2e-test/isvc-pmml-runtime-predictor-67bc544947-f7qtd to ip-10-0-134-116.ec2.internal

kserve-ci-e2e-test

isvc-init-fail-c1d19b-predictor-76c7889f6-mfqwc

Scheduled

Successfully assigned kserve-ci-e2e-test/isvc-init-fail-c1d19b-predictor-76c7889f6-mfqwc to ip-10-0-134-116.ec2.internal

kserve-ci-e2e-test

isvc-pmml-v2-kserve-predictor-6578f8ffc7-bm857

Scheduled

Successfully assigned kserve-ci-e2e-test/isvc-pmml-v2-kserve-predictor-6578f8ffc7-bm857 to ip-10-0-134-116.ec2.internal

kserve-ci-e2e-test

message-dumper-predictor-c7d86bcbd-p9nkd

Scheduled

Successfully assigned kserve-ci-e2e-test/message-dumper-predictor-c7d86bcbd-p9nkd to ip-10-0-134-116.ec2.internal

kserve-ci-e2e-test

isvc-xgboost-v2-runtime-predictor-6dc5954dc-mb4hl

Scheduled

Successfully assigned kserve-ci-e2e-test/isvc-xgboost-v2-runtime-predictor-6dc5954dc-mb4hl to ip-10-0-134-116.ec2.internal

kserve-ci-e2e-test

isvc-xgboost-v2-predictor-6fcdd6977c-2fv97

Scheduled

Successfully assigned kserve-ci-e2e-test/isvc-xgboost-v2-predictor-6fcdd6977c-2fv97 to ip-10-0-134-116.ec2.internal

kserve-ci-e2e-test

isvc-paddle-runtime-predictor-7f4d4f9dc8-bkqbx

Scheduled

Successfully assigned kserve-ci-e2e-test/isvc-paddle-runtime-predictor-7f4d4f9dc8-bkqbx to ip-10-0-134-116.ec2.internal

kserve-ci-e2e-test

isvc-xgboost-v2-mlserver-predictor-67d4bc6646-dzn9n

Scheduled

Successfully assigned kserve-ci-e2e-test/isvc-xgboost-v2-mlserver-predictor-67d4bc6646-dzn9n to ip-10-0-134-116.ec2.internal

kserve-ci-e2e-test

isvc-lightgbm-predictor-bdf964bd-l5j69

Scheduled

Successfully assigned kserve-ci-e2e-test/isvc-lightgbm-predictor-bdf964bd-l5j69 to ip-10-0-134-116.ec2.internal

kserve-ci-e2e-test

isvc-xgboost-runtime-predictor-779db84d9-9kzfb

Scheduled

Successfully assigned kserve-ci-e2e-test/isvc-xgboost-runtime-predictor-779db84d9-9kzfb to ip-10-0-134-116.ec2.internal

kserve-ci-e2e-test

isvc-xgboost-predictor-8689c4cfcc-n7sd5

Scheduled

Successfully assigned kserve-ci-e2e-test/isvc-xgboost-predictor-8689c4cfcc-n7sd5 to ip-10-0-134-116.ec2.internal

kserve-ci-e2e-test

isvc-triton-predictor-84bb65d94b-w8r97

Scheduled

Successfully assigned kserve-ci-e2e-test/isvc-triton-predictor-84bb65d94b-w8r97 to ip-10-0-134-116.ec2.internal

kserve-ci-e2e-test

isvc-tensorflow-runtime-predictor-8699d78cf-vn98w

Scheduled

Successfully assigned kserve-ci-e2e-test/isvc-tensorflow-runtime-predictor-8699d78cf-vn98w to ip-10-0-134-116.ec2.internal

kserve-ci-e2e-test

isvc-tensorflow-predictor-6756f669d7-fqjht

Scheduled

Successfully assigned kserve-ci-e2e-test/isvc-tensorflow-predictor-6756f669d7-fqjht to ip-10-0-134-116.ec2.internal

kserve-ci-e2e-test

isvc-sklearn-v2-runtime-predictor-6d84c876f4-swr25

Scheduled

Successfully assigned kserve-ci-e2e-test/isvc-sklearn-v2-runtime-predictor-6d84c876f4-swr25 to ip-10-0-134-116.ec2.internal

kserve-ci-e2e-test

isvc-sklearn-v2-predictor-7c66c8c59d-6k9jb

Scheduled

Successfully assigned kserve-ci-e2e-test/isvc-sklearn-v2-predictor-7c66c8c59d-6k9jb to ip-10-0-134-116.ec2.internal

kserve-ci-e2e-test

isvc-sklearn-v2-mixed-predictor-748989b567-ggmcl

Scheduled

Successfully assigned kserve-ci-e2e-test/isvc-sklearn-v2-mixed-predictor-748989b567-ggmcl to ip-10-0-134-116.ec2.internal

kserve-ci-e2e-test

isvc-sklearn-s3-tls-serving-pass-predictor-69798956bb-k5lvk

Scheduled

Successfully assigned kserve-ci-e2e-test/isvc-sklearn-s3-tls-serving-pass-predictor-69798956bb-k5lvk to ip-10-0-134-116.ec2.internal

kserve-ci-e2e-test

isvc-predictive-xgboost-v2-predictor-6577c65fd8-p87zp

Scheduled

Successfully assigned kserve-ci-e2e-test/isvc-predictive-xgboost-v2-predictor-6577c65fd8-p87zp to ip-10-0-134-116.ec2.internal

kserve-ci-e2e-test

xgboost-v2-mlserver-predictor-7799869d6f-6grg2

Scheduled

Successfully assigned kserve-ci-e2e-test/xgboost-v2-mlserver-predictor-7799869d6f-6grg2 to ip-10-0-134-116.ec2.internal

kserve-ci-e2e-test

isvc-sklearn-s3-predictor-5bd5d9979-hvskj

Scheduled

Successfully assigned kserve-ci-e2e-test/isvc-sklearn-s3-predictor-5bd5d9979-hvskj to ip-10-0-134-116.ec2.internal

kserve-ci-e2e-test

isvc-sklearn-s3-tls-custom-pass-predictor-6c99d9597f-crcgr

Scheduled

Successfully assigned kserve-ci-e2e-test/isvc-sklearn-s3-tls-custom-pass-predictor-6c99d9597f-crcgr to ip-10-0-134-116.ec2.internal

kserve-ci-e2e-test

isvc-sklearn-s3-tls-custom-fail-predictor-6d9bf6b78f-pmwjk

Scheduled

Successfully assigned kserve-ci-e2e-test/isvc-sklearn-s3-tls-custom-fail-predictor-6d9bf6b78f-pmwjk to ip-10-0-134-116.ec2.internal

kserve-ci-e2e-test

isvc-sklearn-s3-tls-global-fail-predictor-84c4d8bb85-4kktv

Scheduled

Successfully assigned kserve-ci-e2e-test/isvc-sklearn-s3-tls-global-fail-predictor-84c4d8bb85-4kktv to ip-10-0-134-116.ec2.internal

kserve-ci-e2e-test

isvc-sklearn-runtime-predictor-d64d94cdb-dmwhw

Scheduled

Successfully assigned kserve-ci-e2e-test/isvc-sklearn-runtime-predictor-d64d94cdb-dmwhw to ip-10-0-134-116.ec2.internal

kserve-ci-e2e-test

isvc-predictive-lightgbm-predictor-75cb94f9f-fzvxw

Scheduled

Successfully assigned kserve-ci-e2e-test/isvc-predictive-lightgbm-predictor-75cb94f9f-fzvxw to ip-10-0-134-116.ec2.internal

kserve-ci-e2e-test

isvc-sklearn-predictor-5cf96b68d-vjw9m

Scheduled

Successfully assigned kserve-ci-e2e-test/isvc-sklearn-predictor-5cf96b68d-vjw9m to ip-10-0-134-116.ec2.internal

kserve-ci-e2e-test

isvc-predictive-lightgbm-v2-predictor-865b4598f7-vszp2

Scheduled

Successfully assigned kserve-ci-e2e-test/isvc-predictive-lightgbm-v2-predictor-865b4598f7-vszp2 to ip-10-0-134-116.ec2.internal

kserve-ci-e2e-test

isvc-sklearn-mcp-predictor-c4b9ff587-62qtx

Scheduled

Successfully assigned kserve-ci-e2e-test/isvc-sklearn-mcp-predictor-c4b9ff587-62qtx to ip-10-0-134-116.ec2.internal

kserve-ci-e2e-test

isvc-paddle-predictor-6b8b7cfb4b-gx9xl

Scheduled

Successfully assigned kserve-ci-e2e-test/isvc-paddle-predictor-6b8b7cfb4b-gx9xl to ip-10-0-134-116.ec2.internal

kserve-ci-e2e-test

isvc-lightgbm-runtime-predictor-749c4f6d58-xg7nv

Scheduled

Successfully assigned kserve-ci-e2e-test/isvc-lightgbm-runtime-predictor-749c4f6d58-xg7nv to ip-10-0-134-116.ec2.internal

kserve-ci-e2e-test

isvc-lightgbm-v2-runtime-predictor-8765c9667-lcnsd

Scheduled

Successfully assigned kserve-ci-e2e-test/isvc-lightgbm-v2-runtime-predictor-8765c9667-lcnsd to ip-10-0-134-116.ec2.internal

kserve-ci-e2e-test

isvc-sklearn-batcher-predictor-587b88589b-bk975

Scheduled

Successfully assigned kserve-ci-e2e-test/isvc-sklearn-batcher-predictor-587b88589b-bk975 to ip-10-0-134-116.ec2.internal

kserve-ci-e2e-test

isvc-predictive-sklearn-v2-predictor-b5d4f6b79-hvnxb

Scheduled

Successfully assigned kserve-ci-e2e-test/isvc-predictive-sklearn-v2-predictor-b5d4f6b79-hvnxb to ip-10-0-134-116.ec2.internal

kserve-ci-e2e-test

isvc-predictive-xgboost-predictor-7ff98fd74d-5rpj7

Scheduled

Successfully assigned kserve-ci-e2e-test/isvc-predictive-xgboost-predictor-7ff98fd74d-5rpj7 to ip-10-0-134-116.ec2.internal

kserve-ci-e2e-test

isvc-lightgbm-v2-kserve-predictor-559bf6989-xbstf

Scheduled

Successfully assigned kserve-ci-e2e-test/isvc-lightgbm-v2-kserve-predictor-559bf6989-xbstf to ip-10-0-134-116.ec2.internal

kserve-ci-e2e-test

isvc-logger-predictor-7b48948c-mjqrd

Scheduled

Successfully assigned kserve-ci-e2e-test/isvc-logger-predictor-7b48948c-mjqrd to ip-10-0-134-116.ec2.internal

kserve-ci-e2e-test

isvc-sklearn-s3-tls-global-pass-predictor-786965bbc5-srlbx

Scheduled

Successfully assigned kserve-ci-e2e-test/isvc-sklearn-s3-tls-global-pass-predictor-786965bbc5-srlbx to ip-10-0-134-116.ec2.internal

kserve-ci-e2e-test

isvc-predictive-sklearn-predictor-cd7c759c9-c5kzw

Scheduled

Successfully assigned kserve-ci-e2e-test/isvc-predictive-sklearn-predictor-cd7c759c9-c5kzw to ip-10-0-134-116.ec2.internal

kserve-ci-e2e-test

isvc-paddle-v2-kserve-predictor-7dbd59854-cfqdx

Scheduled

Successfully assigned kserve-ci-e2e-test/isvc-paddle-v2-kserve-predictor-7dbd59854-cfqdx to ip-10-0-134-116.ec2.internal

kserve-ci-e2e-test

isvc-sklearn-batcher-custom-predictor-58c84cb6b9-c7f69

Scheduled

Successfully assigned kserve-ci-e2e-test/isvc-sklearn-batcher-custom-predictor-58c84cb6b9-c7f69 to ip-10-0-134-116.ec2.internal

kserve-ci-e2e-test

isvc-secondary-0c0f7e-predictor-6bfc5d8786-ht5fw

Scheduled

Successfully assigned kserve-ci-e2e-test/isvc-secondary-0c0f7e-predictor-6bfc5d8786-ht5fw to ip-10-0-134-116.ec2.internal

kserve-ci-e2e-test

isvc-primary-0c0f7e-predictor-66b6bd9685-cfmfj

Scheduled

Successfully assigned kserve-ci-e2e-test/isvc-primary-0c0f7e-predictor-66b6bd9685-cfmfj to ip-10-0-134-116.ec2.internal

kserve-ci-e2e-test

v1beta1Controllers

isvc-sklearn-batcher

UpdateFailed

Failed to update status for InferenceService "isvc-sklearn-batcher": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-sklearn-batcher": the object has been modified; please apply your changes to the latest version and try again

kserve-ci-e2e-test

replicaset-controller

isvc-sklearn-batcher-predictor-587b88589b

SuccessfulCreate

Created pod: isvc-sklearn-batcher-predictor-587b88589b-bk975

kserve-ci-e2e-test

deployment-controller

isvc-sklearn-batcher-predictor

ScalingReplicaSet

Scaled up replica set isvc-sklearn-batcher-predictor-587b88589b from 0 to 1

kserve-ci-e2e-test

v1beta1Controllers

isvc-sklearn-batcher

InternalError

fails to update InferenceService status: Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-sklearn-batcher": the object has been modified; please apply your changes to the latest version and try again

kserve-ci-e2e-test

multus

isvc-sklearn-batcher-predictor-587b88589b-bk975

AddedInterface

Add eth0 [10.133.0.21/23] from ovn-kubernetes

kserve-ci-e2e-test

kubelet

isvc-sklearn-batcher-predictor-587b88589b-bk975

Pulling

Pulling image "quay.io/opendatahub/kserve-storage-initializer@sha256:4ae4df75945dd799b0c11b5e37a48a5ba8230ff3507d345ea07c8897a8c39885"

kserve-ci-e2e-test

kubelet

isvc-sklearn-batcher-predictor-587b88589b-bk975

Pulled

Successfully pulled image "quay.io/opendatahub/kserve-storage-initializer@sha256:4ae4df75945dd799b0c11b5e37a48a5ba8230ff3507d345ea07c8897a8c39885" in 3.265s (3.265s including waiting). Image size: 301160875 bytes.

kserve-ci-e2e-test

kubelet

isvc-sklearn-batcher-predictor-587b88589b-bk975

Created

Created container: storage-initializer

kserve-ci-e2e-test

kubelet

isvc-sklearn-batcher-predictor-587b88589b-bk975

Started

Started container storage-initializer

kserve-ci-e2e-test

kubelet

isvc-sklearn-batcher-predictor-587b88589b-bk975

Pulling

Pulling image "quay.io/opendatahub/sklearn-serving-runtime:odh-pr-1443"

kserve-ci-e2e-test

kubelet

isvc-sklearn-batcher-predictor-587b88589b-bk975

Pulling

Pulling image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3"

kserve-ci-e2e-test

kubelet

isvc-sklearn-batcher-predictor-587b88589b-bk975

Pulled

Successfully pulled image "quay.io/opendatahub/sklearn-serving-runtime:odh-pr-1443" in 13.162s (13.163s including waiting). Image size: 1561011115 bytes.

kserve-ci-e2e-test

kubelet

isvc-sklearn-batcher-predictor-587b88589b-bk975

Created

Created container: kserve-container

kserve-ci-e2e-test

kubelet

isvc-sklearn-batcher-predictor-587b88589b-bk975

Started

Started container kserve-container

kserve-ci-e2e-test

kubelet

isvc-sklearn-batcher-predictor-587b88589b-bk975

Started

Started container kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-sklearn-batcher-predictor-587b88589b-bk975

Pulled

Successfully pulled image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" in 2.177s (2.177s including waiting). Image size: 211946088 bytes.

kserve-ci-e2e-test

kubelet

isvc-sklearn-batcher-predictor-587b88589b-bk975

Created

Created container: kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-sklearn-batcher-predictor-587b88589b-bk975

Pulling

Pulling image "quay.io/opendatahub/kserve-agent@sha256:3ccd79bac03ab2ef4e561ec5dd95857876389840b9f5d257961e6d8463f88534"

kserve-ci-e2e-test

kubelet

isvc-sklearn-batcher-predictor-587b88589b-bk975

Pulled

Successfully pulled image "quay.io/opendatahub/kserve-agent@sha256:3ccd79bac03ab2ef4e561ec5dd95857876389840b9f5d257961e6d8463f88534" in 2.449s (2.449s including waiting). Image size: 238035022 bytes.

kserve-ci-e2e-test

kubelet

isvc-sklearn-batcher-predictor-587b88589b-bk975

Created

Created container: agent

kserve-ci-e2e-test

kubelet

isvc-sklearn-batcher-predictor-587b88589b-bk975

Started

Started container agent
(x3)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-sklearn-batcher-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x3)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-sklearn-batcher-predictor

FailedGetResourceMetric

failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x11)

kserve-ci-e2e-test

v1beta1Controllers

isvc-sklearn-batcher

VirtualServiceCRDNotFound

Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.

kserve-ci-e2e-test

v1beta1Controllers

isvc-sklearn-batcher

InferenceServiceReady

InferenceService [isvc-sklearn-batcher] is Ready
(x3)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-sklearn-batcher-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
(x3)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-sklearn-batcher-predictor

FailedGetResourceMetric

failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)

kserve-ci-e2e-test

kubelet

isvc-sklearn-batcher-predictor-587b88589b-bk975

Killing

Stopping container agent

kserve-ci-e2e-test

kubelet

isvc-sklearn-batcher-predictor-587b88589b-bk975

Killing

Stopping container kserve-container

kserve-ci-e2e-test

replicaset-controller

isvc-sklearn-batcher-custom-predictor-58c84cb6b9

SuccessfulCreate

Created pod: isvc-sklearn-batcher-custom-predictor-58c84cb6b9-c7f69

kserve-ci-e2e-test

deployment-controller

isvc-sklearn-batcher-custom-predictor

ScalingReplicaSet

Scaled up replica set isvc-sklearn-batcher-custom-predictor-58c84cb6b9 from 0 to 1

kserve-ci-e2e-test

kubelet

isvc-sklearn-batcher-predictor-587b88589b-bk975

Killing

Stopping container kube-rbac-proxy

kserve-ci-e2e-test

v1beta1Controllers

isvc-sklearn-batcher-custom

InternalError

fails to update InferenceService status: Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-sklearn-batcher-custom": the object has been modified; please apply your changes to the latest version and try again
(x2)

kserve-ci-e2e-test

v1beta1Controllers

isvc-sklearn-batcher-custom

UpdateFailed

Failed to update status for InferenceService "isvc-sklearn-batcher-custom": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-sklearn-batcher-custom": the object has been modified; please apply your changes to the latest version and try again

kserve-ci-e2e-test

kubelet

isvc-sklearn-batcher-custom-predictor-58c84cb6b9-c7f69

Pulled

Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:4ae4df75945dd799b0c11b5e37a48a5ba8230ff3507d345ea07c8897a8c39885" already present on machine

kserve-ci-e2e-test

multus

isvc-sklearn-batcher-custom-predictor-58c84cb6b9-c7f69

AddedInterface

Add eth0 [10.133.0.22/23] from ovn-kubernetes

kserve-ci-e2e-test

kubelet

isvc-sklearn-batcher-custom-predictor-58c84cb6b9-c7f69

Created

Created container: storage-initializer

kserve-ci-e2e-test

kubelet

isvc-sklearn-batcher-custom-predictor-58c84cb6b9-c7f69

Started

Started container storage-initializer

kserve-ci-e2e-test

kubelet

isvc-sklearn-batcher-custom-predictor-58c84cb6b9-c7f69

Pulled

Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine

kserve-ci-e2e-test

kubelet

isvc-sklearn-batcher-custom-predictor-58c84cb6b9-c7f69

Pulled

Container image "quay.io/opendatahub/sklearn-serving-runtime:odh-pr-1443" already present on machine

kserve-ci-e2e-test

kubelet

isvc-sklearn-batcher-custom-predictor-58c84cb6b9-c7f69

Started

Started container kserve-container

kserve-ci-e2e-test

kubelet

isvc-sklearn-batcher-custom-predictor-58c84cb6b9-c7f69

Created

Created container: kserve-container

kserve-ci-e2e-test

kubelet

isvc-sklearn-batcher-custom-predictor-58c84cb6b9-c7f69

Pulled

Container image "quay.io/opendatahub/kserve-agent@sha256:3ccd79bac03ab2ef4e561ec5dd95857876389840b9f5d257961e6d8463f88534" already present on machine

kserve-ci-e2e-test

kubelet

isvc-sklearn-batcher-custom-predictor-58c84cb6b9-c7f69

Started

Started container agent

kserve-ci-e2e-test

kubelet

isvc-sklearn-batcher-custom-predictor-58c84cb6b9-c7f69

Created

Created container: agent

kserve-ci-e2e-test

kubelet

isvc-sklearn-batcher-custom-predictor-58c84cb6b9-c7f69

Started

Started container kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-sklearn-batcher-custom-predictor-58c84cb6b9-c7f69

Created

Created container: kube-rbac-proxy
(x9)

kserve-ci-e2e-test

kubelet

isvc-sklearn-batcher-predictor-587b88589b-bk975

Unhealthy

Readiness probe failed: HTTP probe failed with statuscode: 503
(x6)

kserve-ci-e2e-test

kubelet

isvc-sklearn-batcher-predictor-587b88589b-bk975

Unhealthy

Readiness probe failed: Get "https://10.133.0.21:8643/healthz": dial tcp 10.133.0.21:8643: connect: connection refused
(x10)

kserve-ci-e2e-test

kubelet

isvc-sklearn-batcher-predictor-587b88589b-bk975

Unhealthy

Readiness probe failed: dial tcp 10.133.0.21:8080: connect: connection refused
(x2)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-sklearn-batcher-custom-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x2)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-sklearn-batcher-custom-predictor

FailedGetResourceMetric

failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x12)

kserve-ci-e2e-test

v1beta1Controllers

isvc-sklearn-batcher-custom

VirtualServiceCRDNotFound

Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.

kserve-ci-e2e-test

v1beta1Controllers

isvc-sklearn-batcher-custom

InferenceServiceReady

InferenceService [isvc-sklearn-batcher-custom] is Ready
(x3)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-sklearn-batcher-custom-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
(x3)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-sklearn-batcher-custom-predictor

FailedGetResourceMetric

failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)

kserve-ci-e2e-test

deployment-controller

message-dumper-predictor

ScalingReplicaSet

Scaled up replica set message-dumper-predictor-c7d86bcbd from 0 to 1

kserve-ci-e2e-test

replicaset-controller

message-dumper-predictor-c7d86bcbd

SuccessfulCreate

Created pod: message-dumper-predictor-c7d86bcbd-p9nkd
(x2)

kserve-ci-e2e-test

v1beta1Controllers

message-dumper

UpdateFailed

Failed to update status for InferenceService "message-dumper": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "message-dumper": the object has been modified; please apply your changes to the latest version and try again

kserve-ci-e2e-test

kubelet

isvc-sklearn-batcher-custom-predictor-58c84cb6b9-c7f69

Killing

Stopping container kserve-container

kserve-ci-e2e-test

v1beta1Controllers

message-dumper

InternalError

fails to update InferenceService status: Operation cannot be fulfilled on inferenceservices.serving.kserve.io "message-dumper": the object has been modified; please apply your changes to the latest version and try again

kserve-ci-e2e-test

kubelet

isvc-sklearn-batcher-custom-predictor-58c84cb6b9-c7f69

Killing

Stopping container agent

kserve-ci-e2e-test

kubelet

isvc-sklearn-batcher-custom-predictor-58c84cb6b9-c7f69

Killing

Stopping container kube-rbac-proxy

kserve-ci-e2e-test

multus

message-dumper-predictor-c7d86bcbd-p9nkd

AddedInterface

Add eth0 [10.133.0.23/23] from ovn-kubernetes

kserve-ci-e2e-test

kubelet

message-dumper-predictor-c7d86bcbd-p9nkd

Pulling

Pulling image "gcr.io/knative-releases/knative.dev/eventing-contrib/cmd/event_display"

kserve-ci-e2e-test

kubelet

message-dumper-predictor-c7d86bcbd-p9nkd

Created

Created container: kserve-container

kserve-ci-e2e-test

kubelet

message-dumper-predictor-c7d86bcbd-p9nkd

Started

Started container kube-rbac-proxy

kserve-ci-e2e-test

kubelet

message-dumper-predictor-c7d86bcbd-p9nkd

Created

Created container: kube-rbac-proxy

kserve-ci-e2e-test

kubelet

message-dumper-predictor-c7d86bcbd-p9nkd

Pulled

Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine

kserve-ci-e2e-test

kubelet

message-dumper-predictor-c7d86bcbd-p9nkd

Pulled

Successfully pulled image "gcr.io/knative-releases/knative.dev/eventing-contrib/cmd/event_display" in 958ms (958ms including waiting). Image size: 14813193 bytes.

kserve-ci-e2e-test

kubelet

message-dumper-predictor-c7d86bcbd-p9nkd

Started

Started container kserve-container

kserve-ci-e2e-test

v1beta1Controllers

message-dumper

InferenceServiceReady

InferenceService [message-dumper] is Ready

kserve-ci-e2e-test

v1beta1Controllers

isvc-logger

UpdateFailed

Failed to update status for InferenceService "isvc-logger": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-logger": the object has been modified; please apply your changes to the latest version and try again

kserve-ci-e2e-test

kubelet

isvc-logger-predictor-7b48948c-mjqrd

FailedMount

MountVolume.SetUp failed for volume "proxy-tls" : secret "isvc-logger-predictor-serving-cert" not found

kserve-ci-e2e-test

deployment-controller

isvc-logger-predictor

ScalingReplicaSet

Scaled up replica set isvc-logger-predictor-7b48948c from 0 to 1

kserve-ci-e2e-test

replicaset-controller

isvc-logger-predictor-7b48948c

SuccessfulCreate

Created pod: isvc-logger-predictor-7b48948c-mjqrd
(x8)

kserve-ci-e2e-test

v1beta1Controllers

message-dumper

VirtualServiceCRDNotFound

Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.

kserve-ci-e2e-test

kubelet

isvc-logger-predictor-7b48948c-mjqrd

Created

Created container: storage-initializer

kserve-ci-e2e-test

multus

isvc-logger-predictor-7b48948c-mjqrd

AddedInterface

Add eth0 [10.133.0.24/23] from ovn-kubernetes

kserve-ci-e2e-test

kubelet

isvc-logger-predictor-7b48948c-mjqrd

Pulled

Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:4ae4df75945dd799b0c11b5e37a48a5ba8230ff3507d345ea07c8897a8c39885" already present on machine

kserve-ci-e2e-test

kubelet

isvc-logger-predictor-7b48948c-mjqrd

Started

Started container storage-initializer

kserve-ci-e2e-test

kubelet

isvc-logger-predictor-7b48948c-mjqrd

Started

Started container kserve-container

kserve-ci-e2e-test

kubelet

isvc-logger-predictor-7b48948c-mjqrd

Started

Started container agent

kserve-ci-e2e-test

kubelet

isvc-logger-predictor-7b48948c-mjqrd

Created

Created container: agent

kserve-ci-e2e-test

kubelet

isvc-logger-predictor-7b48948c-mjqrd

Pulled

Container image "quay.io/opendatahub/kserve-agent@sha256:3ccd79bac03ab2ef4e561ec5dd95857876389840b9f5d257961e6d8463f88534" already present on machine

kserve-ci-e2e-test

kubelet

isvc-logger-predictor-7b48948c-mjqrd

Pulled

Container image "quay.io/opendatahub/sklearn-serving-runtime:odh-pr-1443" already present on machine

kserve-ci-e2e-test

kubelet

isvc-logger-predictor-7b48948c-mjqrd

Started

Started container kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-logger-predictor-7b48948c-mjqrd

Created

Created container: kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-logger-predictor-7b48948c-mjqrd

Pulled

Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine

kserve-ci-e2e-test

kubelet

isvc-logger-predictor-7b48948c-mjqrd

Created

Created container: kserve-container
(x10)

kserve-ci-e2e-test

kubelet

isvc-sklearn-batcher-custom-predictor-58c84cb6b9-c7f69

Unhealthy

Readiness probe failed: HTTP probe failed with statuscode: 503
(x10)

kserve-ci-e2e-test

kubelet

isvc-sklearn-batcher-custom-predictor-58c84cb6b9-c7f69

Unhealthy

Readiness probe failed: dial tcp 10.133.0.22:5000: connect: connection refused
(x5)

kserve-ci-e2e-test

kubelet

isvc-sklearn-batcher-custom-predictor-58c84cb6b9-c7f69

Unhealthy

Readiness probe failed: Get "https://10.133.0.22:8643/healthz": dial tcp 10.133.0.22:8643: connect: connection refused
(x2)

kserve-ci-e2e-test

horizontal-pod-autoscaler

message-dumper-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x2)

kserve-ci-e2e-test

horizontal-pod-autoscaler

message-dumper-predictor

FailedGetResourceMetric

failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x2)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-logger-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x2)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-logger-predictor

FailedGetResourceMetric

failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API

kserve-ci-e2e-test

v1beta1Controllers

isvc-logger

InferenceServiceReady

InferenceService [isvc-logger] is Ready
(x12)

kserve-ci-e2e-test

v1beta1Controllers

isvc-logger

VirtualServiceCRDNotFound

Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.
(x3)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-logger-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
(x3)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-logger-predictor

FailedGetResourceMetric

failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)

kserve-ci-e2e-test

deployment-controller

isvc-lightgbm-predictor

ScalingReplicaSet

Scaled up replica set isvc-lightgbm-predictor-bdf964bd from 0 to 1

kserve-ci-e2e-test

kubelet

isvc-logger-predictor-7b48948c-mjqrd

Killing

Stopping container kube-rbac-proxy

kserve-ci-e2e-test

kubelet

message-dumper-predictor-c7d86bcbd-p9nkd

Killing

Stopping container kserve-container

kserve-ci-e2e-test

kubelet

isvc-lightgbm-predictor-bdf964bd-l5j69

Started

Started container storage-initializer

kserve-ci-e2e-test

v1beta1Controllers

isvc-lightgbm

UpdateFailed

Failed to update status for InferenceService "isvc-lightgbm": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-lightgbm": the object has been modified; please apply your changes to the latest version and try again

kserve-ci-e2e-test

kubelet

isvc-logger-predictor-7b48948c-mjqrd

Killing

Stopping container agent

kserve-ci-e2e-test

kubelet

isvc-logger-predictor-7b48948c-mjqrd

Killing

Stopping container kserve-container

kserve-ci-e2e-test

kubelet

isvc-lightgbm-predictor-bdf964bd-l5j69

Created

Created container: storage-initializer

kserve-ci-e2e-test

kubelet

isvc-lightgbm-predictor-bdf964bd-l5j69

Pulled

Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:4ae4df75945dd799b0c11b5e37a48a5ba8230ff3507d345ea07c8897a8c39885" already present on machine

kserve-ci-e2e-test

kubelet

message-dumper-predictor-c7d86bcbd-p9nkd

Killing

Stopping container kube-rbac-proxy

kserve-ci-e2e-test

multus

isvc-lightgbm-predictor-bdf964bd-l5j69

AddedInterface

Add eth0 [10.133.0.25/23] from ovn-kubernetes

kserve-ci-e2e-test

replicaset-controller

isvc-lightgbm-predictor-bdf964bd

SuccessfulCreate

Created pod: isvc-lightgbm-predictor-bdf964bd-l5j69

kserve-ci-e2e-test

kubelet

isvc-lightgbm-predictor-bdf964bd-l5j69

Pulling

Pulling image "kserve/lgbserver:latest"
(x9)

kserve-ci-e2e-test

kubelet

isvc-logger-predictor-7b48948c-mjqrd

Unhealthy

Readiness probe failed: HTTP probe failed with statuscode: 503

kserve-ci-e2e-test

kubelet

isvc-lightgbm-predictor-bdf964bd-l5j69

Started

Started container kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-lightgbm-predictor-bdf964bd-l5j69

Created

Created container: kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-lightgbm-predictor-bdf964bd-l5j69

Pulled

Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine

kserve-ci-e2e-test

kubelet

isvc-lightgbm-predictor-bdf964bd-l5j69

Started

Started container kserve-container

kserve-ci-e2e-test

kubelet

isvc-lightgbm-predictor-bdf964bd-l5j69

Created

Created container: kserve-container

kserve-ci-e2e-test

kubelet

isvc-lightgbm-predictor-bdf964bd-l5j69

Pulled

Successfully pulled image "kserve/lgbserver:latest" in 6.275s (6.275s including waiting). Image size: 606297871 bytes.
(x11)

kserve-ci-e2e-test

kubelet

isvc-logger-predictor-7b48948c-mjqrd

Unhealthy

Readiness probe failed: dial tcp 10.133.0.24:8080: connect: connection refused
(x4)

kserve-ci-e2e-test

kubelet

isvc-logger-predictor-7b48948c-mjqrd

Unhealthy

Readiness probe failed: Get "https://10.133.0.24:8643/healthz": dial tcp 10.133.0.24:8643: connect: connection refused
(x2)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-lightgbm-predictor

FailedGetResourceMetric

failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x2)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-lightgbm-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x9)
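
The FailedGetResourceMetric / FailedComputeMetricsReplicas pairs above occur while a freshly started pod has no CPU samples in the resource metrics API yet; once samples arrive, the HPA applies its documented scaling rule, desiredReplicas = ceil(currentReplicas × currentUtilization / targetUtilization), with a default 10% tolerance band that suppresses churn. A minimal sketch of that rule:

```python
import math

def desired_replicas(current_replicas: int, current_utilization: float,
                     target_utilization: float, tolerance: float = 0.1) -> int:
    """Kubernetes HPA scaling rule: desired = ceil(current * ratio), where
    ratio = current/target utilization; ratios within the tolerance band
    around 1.0 (0.1 by default) leave the replica count unchanged."""
    ratio = current_utilization / target_utilization
    if abs(ratio - 1.0) <= tolerance:
        return current_replicas  # within tolerance: no scaling
    return math.ceil(current_replicas * ratio)
```

This is why the HPA cannot act on these predictors until metrics exist: without a current utilization there is no ratio to compute, hence the `invalid metrics (1 invalid out of 1)` events.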

kserve-ci-e2e-test

kubelet

isvc-lightgbm-predictor-bdf964bd-l5j69

Unhealthy

Readiness probe failed: dial tcp 10.133.0.25:8080: connect: connection refused

kserve-ci-e2e-test

v1beta1Controllers

isvc-lightgbm

InferenceServiceReady

InferenceService [isvc-lightgbm] is Ready
(x11)

kserve-ci-e2e-test

v1beta1Controllers

isvc-lightgbm

VirtualServiceCRDNotFound

Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.

kserve-ci-e2e-test

v1beta1Controllers

isvc-lightgbm-runtime

UpdateFailed

Failed to update status for InferenceService "isvc-lightgbm-runtime": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-lightgbm-runtime": the object has been modified; please apply your changes to the latest version and try again
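
"Operation cannot be fulfilled ... the object has been modified" is Kubernetes' optimistic-concurrency conflict: the controller wrote the status with a stale resourceVersion because another writer got there first. The standard remedy is a re-read-and-retry loop (client-go ships this as `retry.RetryOnConflict`). A minimal sketch of the pattern, using a toy in-memory store as a stand-in for the API server:

```python
class Conflict(Exception):
    """Stands in for an HTTP 409 Conflict from the API server."""

class Store:
    """Toy API server: updates must carry the latest resourceVersion."""
    def __init__(self, obj):
        self.obj, self.rv = obj, 1
    def get(self):
        return dict(self.obj), self.rv
    def update(self, obj, rv):
        if rv != self.rv:
            raise Conflict("the object has been modified")
        self.obj, self.rv = obj, self.rv + 1

def retry_on_conflict(store, mutate, attempts=5):
    """Re-read the latest object and reapply the change until the write lands."""
    for _ in range(attempts):
        obj, rv = store.get()
        mutate(obj)
        try:
            store.update(obj, rv)
            return obj
        except Conflict:
            continue  # someone else wrote first; fetch the new version, retry
    raise Conflict("exhausted retries")
```

A single UpdateFailed event like the one above is therefore normally benign: the controller's next reconcile reads the fresh object and the status write succeeds.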

kserve-ci-e2e-test

replicaset-controller

isvc-lightgbm-runtime-predictor-749c4f6d58

SuccessfulCreate

Created pod: isvc-lightgbm-runtime-predictor-749c4f6d58-xg7nv

kserve-ci-e2e-test

kubelet

isvc-lightgbm-predictor-bdf964bd-l5j69

Killing

Stopping container kube-rbac-proxy

kserve-ci-e2e-test

deployment-controller

isvc-lightgbm-runtime-predictor

ScalingReplicaSet

Scaled up replica set isvc-lightgbm-runtime-predictor-749c4f6d58 from 0 to 1
(x4)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-lightgbm-predictor

FailedGetResourceMetric

failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
(x4)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-lightgbm-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)

kserve-ci-e2e-test

multus

isvc-lightgbm-runtime-predictor-749c4f6d58-xg7nv

AddedInterface

Add eth0 [10.133.0.26/23] from ovn-kubernetes

kserve-ci-e2e-test

kubelet

isvc-lightgbm-runtime-predictor-749c4f6d58-xg7nv

Pulled

Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:4ae4df75945dd799b0c11b5e37a48a5ba8230ff3507d345ea07c8897a8c39885" already present on machine

kserve-ci-e2e-test

kubelet

isvc-lightgbm-runtime-predictor-749c4f6d58-xg7nv

Created

Created container: storage-initializer

kserve-ci-e2e-test

kubelet

isvc-lightgbm-runtime-predictor-749c4f6d58-xg7nv

Started

Started container storage-initializer

kserve-ci-e2e-test

kubelet

isvc-lightgbm-predictor-bdf964bd-l5j69

Killing

Stopping container kserve-container

kserve-ci-e2e-test

kubelet

isvc-lightgbm-predictor-bdf964bd-l5j69

Unhealthy

Readiness probe failed: Get "https://10.133.0.25:8643/healthz": dial tcp 10.133.0.25:8643: connect: connection refused

kserve-ci-e2e-test

kubelet

isvc-lightgbm-runtime-predictor-749c4f6d58-xg7nv

Created

Created container: kserve-container

kserve-ci-e2e-test

kubelet

isvc-lightgbm-runtime-predictor-749c4f6d58-xg7nv

Started

Started container kserve-container

kserve-ci-e2e-test

kubelet

isvc-lightgbm-runtime-predictor-749c4f6d58-xg7nv

Pulled

Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine

kserve-ci-e2e-test

kubelet

isvc-lightgbm-runtime-predictor-749c4f6d58-xg7nv

Created

Created container: kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-lightgbm-runtime-predictor-749c4f6d58-xg7nv

Started

Started container kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-lightgbm-runtime-predictor-749c4f6d58-xg7nv

Pulled

Container image "kserve/lgbserver:latest" already present on machine
(x2)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-lightgbm-runtime-predictor

FailedGetResourceMetric

failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x2)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-lightgbm-runtime-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x11)

kserve-ci-e2e-test

v1beta1Controllers

isvc-lightgbm-runtime

VirtualServiceCRDNotFound

Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.

kserve-ci-e2e-test

v1beta1Controllers

isvc-lightgbm-runtime

InferenceServiceReady

InferenceService [isvc-lightgbm-runtime] is Ready

kserve-ci-e2e-test

v1beta1Controllers

isvc-lightgbm-v2-runtime

UpdateFailed

Failed to update status for InferenceService "isvc-lightgbm-v2-runtime": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-lightgbm-v2-runtime": the object has been modified; please apply your changes to the latest version and try again

kserve-ci-e2e-test

deployment-controller

isvc-lightgbm-v2-runtime-predictor

ScalingReplicaSet

Scaled up replica set isvc-lightgbm-v2-runtime-predictor-8765c9667 from 0 to 1

kserve-ci-e2e-test

replicaset-controller

isvc-lightgbm-v2-runtime-predictor-8765c9667

SuccessfulCreate

Created pod: isvc-lightgbm-v2-runtime-predictor-8765c9667-lcnsd

kserve-ci-e2e-test

kubelet

isvc-lightgbm-runtime-predictor-749c4f6d58-xg7nv

Killing

Stopping container kserve-container

kserve-ci-e2e-test

kubelet

isvc-lightgbm-runtime-predictor-749c4f6d58-xg7nv

Killing

Stopping container kube-rbac-proxy
(x4)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-lightgbm-runtime-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
(x4)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-lightgbm-runtime-predictor

FailedGetResourceMetric

failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)

kserve-ci-e2e-test

multus

isvc-lightgbm-v2-runtime-predictor-8765c9667-lcnsd

AddedInterface

Add eth0 [10.133.0.27/23] from ovn-kubernetes

kserve-ci-e2e-test

kubelet

isvc-lightgbm-v2-runtime-predictor-8765c9667-lcnsd

Pulled

Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:4ae4df75945dd799b0c11b5e37a48a5ba8230ff3507d345ea07c8897a8c39885" already present on machine

kserve-ci-e2e-test

kubelet

isvc-lightgbm-v2-runtime-predictor-8765c9667-lcnsd

Created

Created container: storage-initializer

kserve-ci-e2e-test

kubelet

isvc-lightgbm-v2-runtime-predictor-8765c9667-lcnsd

Started

Started container storage-initializer
(x10)

kserve-ci-e2e-test

kubelet

isvc-lightgbm-runtime-predictor-749c4f6d58-xg7nv

Unhealthy

Readiness probe failed: dial tcp 10.133.0.26:8080: connect: connection refused

kserve-ci-e2e-test

kubelet

isvc-lightgbm-runtime-predictor-749c4f6d58-xg7nv

Unhealthy

Readiness probe failed: Get "https://10.133.0.26:8643/healthz": dial tcp 10.133.0.26:8643: connect: connection refused

kserve-ci-e2e-test

kubelet

isvc-lightgbm-v2-runtime-predictor-8765c9667-lcnsd

Pulling

Pulling image "docker.io/seldonio/mlserver:1.7.1"

kserve-ci-e2e-test

kubelet

isvc-lightgbm-v2-runtime-predictor-8765c9667-lcnsd

Started

Started container kserve-container

kserve-ci-e2e-test

kubelet

isvc-lightgbm-v2-runtime-predictor-8765c9667-lcnsd

Pulled

Successfully pulled image "docker.io/seldonio/mlserver:1.7.1" in 2m6.604s (2m6.604s including waiting). Image size: 10890461297 bytes.

kserve-ci-e2e-test

kubelet

isvc-lightgbm-v2-runtime-predictor-8765c9667-lcnsd

Created

Created container: kserve-container

kserve-ci-e2e-test

kubelet

isvc-lightgbm-v2-runtime-predictor-8765c9667-lcnsd

Pulled

Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine

kserve-ci-e2e-test

kubelet

isvc-lightgbm-v2-runtime-predictor-8765c9667-lcnsd

Created

Created container: kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-lightgbm-v2-runtime-predictor-8765c9667-lcnsd

Started

Started container kube-rbac-proxy

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-lightgbm-v2-runtime-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-lightgbm-v2-runtime-predictor

FailedGetResourceMetric

failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
(x10)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-lightgbm-v2-runtime-predictor

FailedGetResourceMetric

failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x10)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-lightgbm-v2-runtime-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API

kserve-ci-e2e-test

v1beta1Controllers

isvc-lightgbm-v2-runtime

InferenceServiceReady

InferenceService [isvc-lightgbm-v2-runtime] is Ready
(x13)

kserve-ci-e2e-test

v1beta1Controllers

isvc-lightgbm-v2-runtime

VirtualServiceCRDNotFound

Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.

kserve-ci-e2e-test

replicaset-controller

isvc-lightgbm-v2-kserve-predictor-559bf6989

SuccessfulCreate

Created pod: isvc-lightgbm-v2-kserve-predictor-559bf6989-xbstf

kserve-ci-e2e-test

deployment-controller

isvc-lightgbm-v2-kserve-predictor

ScalingReplicaSet

Scaled up replica set isvc-lightgbm-v2-kserve-predictor-559bf6989 from 0 to 1

kserve-ci-e2e-test

kubelet

isvc-lightgbm-v2-runtime-predictor-8765c9667-lcnsd

Killing

Stopping container kube-rbac-proxy

kserve-ci-e2e-test

v1beta1Controllers

isvc-lightgbm-v2-kserve

UpdateFailed

Failed to update status for InferenceService "isvc-lightgbm-v2-kserve": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-lightgbm-v2-kserve": the object has been modified; please apply your changes to the latest version and try again

kserve-ci-e2e-test

kubelet

isvc-lightgbm-v2-runtime-predictor-8765c9667-lcnsd

Killing

Stopping container kserve-container

kserve-ci-e2e-test

kubelet

isvc-lightgbm-v2-kserve-predictor-559bf6989-xbstf

Started

Started container storage-initializer

kserve-ci-e2e-test

multus

isvc-lightgbm-v2-kserve-predictor-559bf6989-xbstf

AddedInterface

Add eth0 [10.133.0.28/23] from ovn-kubernetes

kserve-ci-e2e-test

kubelet

isvc-lightgbm-v2-kserve-predictor-559bf6989-xbstf

Pulled

Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:4ae4df75945dd799b0c11b5e37a48a5ba8230ff3507d345ea07c8897a8c39885" already present on machine

kserve-ci-e2e-test

kubelet

isvc-lightgbm-v2-kserve-predictor-559bf6989-xbstf

Created

Created container: storage-initializer

kserve-ci-e2e-test

kubelet

isvc-lightgbm-v2-kserve-predictor-559bf6989-xbstf

Started

Started container kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-lightgbm-v2-kserve-predictor-559bf6989-xbstf

Pulled

Container image "kserve/lgbserver:latest" already present on machine

kserve-ci-e2e-test

kubelet

isvc-lightgbm-v2-kserve-predictor-559bf6989-xbstf

Created

Created container: kserve-container

kserve-ci-e2e-test

kubelet

isvc-lightgbm-v2-kserve-predictor-559bf6989-xbstf

Started

Started container kserve-container

kserve-ci-e2e-test

kubelet

isvc-lightgbm-v2-kserve-predictor-559bf6989-xbstf

Pulled

Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine

kserve-ci-e2e-test

kubelet

isvc-lightgbm-v2-kserve-predictor-559bf6989-xbstf

Created

Created container: kube-rbac-proxy
(x2)

kserve-ci-e2e-test

kubelet

isvc-lightgbm-v2-kserve-predictor-559bf6989-xbstf

Unhealthy

Readiness probe failed: dial tcp 10.133.0.28:8080: connect: connection refused
(x12)

kserve-ci-e2e-test

v1beta1Controllers

isvc-lightgbm-v2-kserve

VirtualServiceCRDNotFound

Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.

kserve-ci-e2e-test

v1beta1Controllers

isvc-lightgbm-v2-kserve

InferenceServiceReady

InferenceService [isvc-lightgbm-v2-kserve] is Ready

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-lightgbm-v2-kserve-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-lightgbm-v2-kserve-predictor

FailedGetResourceMetric

failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API

kserve-ci-e2e-test

deployment-controller

isvc-mlflow-v2-runtime-predictor

ScalingReplicaSet

Scaled up replica set isvc-mlflow-v2-runtime-predictor-5fdb47d546 from 0 to 1

kserve-ci-e2e-test

v1beta1Controllers

isvc-mlflow-v2-runtime

UpdateFailed

Failed to update status for InferenceService "isvc-mlflow-v2-runtime": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-mlflow-v2-runtime": the object has been modified; please apply your changes to the latest version and try again

kserve-ci-e2e-test

kubelet

isvc-lightgbm-v2-kserve-predictor-559bf6989-xbstf

Killing

Stopping container kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-lightgbm-v2-kserve-predictor-559bf6989-xbstf

Killing

Stopping container kserve-container

kserve-ci-e2e-test

kubelet

isvc-mlflow-v2-runtime-predictor-5fdb47d546-2kj85

FailedMount

MountVolume.SetUp failed for volume "proxy-tls" : secret "isvc-mlflow-v2-runtime-predictor-serving-cert" not found
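
The FailedMount above is a startup race: the `proxy-tls` serving-cert secret is generated asynchronously (by the service-CA controller, for OpenShift serving certs) and had not been created when the pod was scheduled. The kubelet retries the mount with backoff until the secret appears, so an isolated occurrence is normally benign. A sketch of the usual capped-exponential retry shape (the exact kubelet constants here are an assumption, not taken from this log):

```python
def backoff_schedule(base: float = 0.5, factor: float = 2.0,
                     cap: float = 32.0, attempts: int = 8):
    """Yield capped exponential backoff delays: base, base*factor, ...,
    never exceeding `cap`. Illustrative constants, not kubelet's own."""
    delay = base
    for _ in range(attempts):
        yield delay
        delay = min(delay * factor, cap)
```

Only if FailedMount keeps repeating past the backoff window does it indicate a real problem, such as the cert-generation annotation missing from the predictor Service.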

kserve-ci-e2e-test

replicaset-controller

isvc-mlflow-v2-runtime-predictor-5fdb47d546

SuccessfulCreate

Created pod: isvc-mlflow-v2-runtime-predictor-5fdb47d546-2kj85

kserve-ci-e2e-test

kubelet

isvc-mlflow-v2-runtime-predictor-5fdb47d546-2kj85

Created

Created container: storage-initializer

kserve-ci-e2e-test

kubelet

isvc-mlflow-v2-runtime-predictor-5fdb47d546-2kj85

Pulled

Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:4ae4df75945dd799b0c11b5e37a48a5ba8230ff3507d345ea07c8897a8c39885" already present on machine

kserve-ci-e2e-test

kubelet

isvc-mlflow-v2-runtime-predictor-5fdb47d546-2kj85

Started

Started container storage-initializer

kserve-ci-e2e-test

multus

isvc-mlflow-v2-runtime-predictor-5fdb47d546-2kj85

AddedInterface

Add eth0 [10.133.0.29/23] from ovn-kubernetes

kserve-ci-e2e-test

kubelet

isvc-mlflow-v2-runtime-predictor-5fdb47d546-2kj85

Pulled

Container image "docker.io/seldonio/mlserver:1.7.1" already present on machine

kserve-ci-e2e-test

kubelet

isvc-mlflow-v2-runtime-predictor-5fdb47d546-2kj85

Created

Created container: kserve-container

kserve-ci-e2e-test

kubelet

isvc-mlflow-v2-runtime-predictor-5fdb47d546-2kj85

Started

Started container kserve-container

kserve-ci-e2e-test

kubelet

isvc-mlflow-v2-runtime-predictor-5fdb47d546-2kj85

Pulled

Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine

kserve-ci-e2e-test

kubelet

isvc-mlflow-v2-runtime-predictor-5fdb47d546-2kj85

Created

Created container: kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-mlflow-v2-runtime-predictor-5fdb47d546-2kj85

Started

Started container kube-rbac-proxy
(x2)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-mlflow-v2-runtime-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x2)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-mlflow-v2-runtime-predictor

FailedGetResourceMetric

failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API

kserve-ci-e2e-test

v1beta1Controllers

isvc-mlflow-v2-runtime

InferenceServiceReady

InferenceService [isvc-mlflow-v2-runtime] is Ready
(x12)

kserve-ci-e2e-test

v1beta1Controllers

isvc-mlflow-v2-runtime

VirtualServiceCRDNotFound

Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-mlflow-v2-runtime-predictor

FailedGetResourceMetric

failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-mlflow-v2-runtime-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)

kserve-ci-e2e-test

kubelet

isvc-mlflow-v2-runtime-predictor-5fdb47d546-2kj85

Killing

Stopping container kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-mlflow-v2-runtime-predictor-5fdb47d546-2kj85

Killing

Stopping container kserve-container

kserve-ci-e2e-test

v1beta1Controllers

isvc-sklearn-mcp

UpdateFailed

Failed to update status for InferenceService "isvc-sklearn-mcp": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-sklearn-mcp": the object has been modified; please apply your changes to the latest version and try again

kserve-ci-e2e-test

deployment-controller

isvc-sklearn-mcp-predictor

ScalingReplicaSet

Scaled up replica set isvc-sklearn-mcp-predictor-c4b9ff587 from 0 to 1

kserve-ci-e2e-test

replicaset-controller

isvc-sklearn-mcp-predictor-c4b9ff587

SuccessfulCreate

Created pod: isvc-sklearn-mcp-predictor-c4b9ff587-62qtx

kserve-ci-e2e-test

multus

isvc-sklearn-mcp-predictor-c4b9ff587-62qtx

AddedInterface

Add eth0 [10.133.0.30/23] from ovn-kubernetes

kserve-ci-e2e-test

kubelet

isvc-sklearn-mcp-predictor-c4b9ff587-62qtx

Pulled

Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:4ae4df75945dd799b0c11b5e37a48a5ba8230ff3507d345ea07c8897a8c39885" already present on machine

kserve-ci-e2e-test

kubelet

isvc-sklearn-mcp-predictor-c4b9ff587-62qtx

Created

Created container: storage-initializer

kserve-ci-e2e-test

kubelet

isvc-sklearn-mcp-predictor-c4b9ff587-62qtx

Started

Started container storage-initializer

kserve-ci-e2e-test

kubelet

isvc-sklearn-mcp-predictor-c4b9ff587-62qtx

Pulling

Pulling image "quay.io/opendatahub/kserve-agent:latest"

kserve-ci-e2e-test

kubelet

isvc-sklearn-mcp-predictor-c4b9ff587-62qtx

Created

Created container: kserve-container

kserve-ci-e2e-test

kubelet

isvc-sklearn-mcp-predictor-c4b9ff587-62qtx

Started

Started container kserve-container

kserve-ci-e2e-test

kubelet

isvc-sklearn-mcp-predictor-c4b9ff587-62qtx

Pulled

Container image "quay.io/opendatahub/sklearn-serving-runtime:odh-pr-1443" already present on machine

kserve-ci-e2e-test

kubelet

isvc-sklearn-mcp-predictor-c4b9ff587-62qtx

Started

Started container kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-sklearn-mcp-predictor-c4b9ff587-62qtx

Pulled

Successfully pulled image "quay.io/opendatahub/kserve-agent:latest" in 2.714s (2.714s including waiting). Image size: 237801512 bytes.

kserve-ci-e2e-test

kubelet

isvc-sklearn-mcp-predictor-c4b9ff587-62qtx

Created

Created container: kserve-agent

kserve-ci-e2e-test

kubelet

isvc-sklearn-mcp-predictor-c4b9ff587-62qtx

Started

Started container kserve-agent

kserve-ci-e2e-test

kubelet

isvc-sklearn-mcp-predictor-c4b9ff587-62qtx

Pulled

Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine

kserve-ci-e2e-test

kubelet

isvc-sklearn-mcp-predictor-c4b9ff587-62qtx

Created

Created container: kube-rbac-proxy
(x12)

kserve-ci-e2e-test

v1beta1Controllers

isvc-sklearn-mcp

VirtualServiceCRDNotFound

Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.

kserve-ci-e2e-test

v1beta1Controllers

isvc-sklearn-mcp

InferenceServiceReady

InferenceService [isvc-sklearn-mcp] is Ready

kserve-ci-e2e-test

kubelet

isvc-sklearn-mcp-predictor-c4b9ff587-62qtx

Killing

Stopping container kserve-container

kserve-ci-e2e-test

kubelet

isvc-sklearn-mcp-predictor-c4b9ff587-62qtx

Killing

Stopping container kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-sklearn-mcp-predictor-c4b9ff587-62qtx

Killing

Stopping container kserve-agent

kserve-ci-e2e-test

multus

isvc-paddle-predictor-6b8b7cfb4b-gx9xl

AddedInterface

Add eth0 [10.133.0.31/23] from ovn-kubernetes

kserve-ci-e2e-test

v1beta1Controllers

isvc-paddle

UpdateFailed

Failed to update status for InferenceService "isvc-paddle": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-paddle": the object has been modified; please apply your changes to the latest version and try again

kserve-ci-e2e-test

kubelet

isvc-paddle-predictor-6b8b7cfb4b-gx9xl

Pulled

Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:4ae4df75945dd799b0c11b5e37a48a5ba8230ff3507d345ea07c8897a8c39885" already present on machine

kserve-ci-e2e-test

deployment-controller

isvc-paddle-predictor

ScalingReplicaSet

Scaled up replica set isvc-paddle-predictor-6b8b7cfb4b from 0 to 1

kserve-ci-e2e-test

replicaset-controller

isvc-paddle-predictor-6b8b7cfb4b

SuccessfulCreate

Created pod: isvc-paddle-predictor-6b8b7cfb4b-gx9xl

kserve-ci-e2e-test

kubelet

isvc-paddle-predictor-6b8b7cfb4b-gx9xl

Created

Created container: storage-initializer

kserve-ci-e2e-test

kubelet

isvc-paddle-predictor-6b8b7cfb4b-gx9xl

Started

Started container storage-initializer

kserve-ci-e2e-test

kubelet

isvc-paddle-predictor-6b8b7cfb4b-gx9xl

Pulling

Pulling image "kserve/paddleserver:latest"

kserve-ci-e2e-test

kubelet

isvc-paddle-predictor-6b8b7cfb4b-gx9xl

Created

Created container: kserve-container

kserve-ci-e2e-test

kubelet

isvc-paddle-predictor-6b8b7cfb4b-gx9xl

Pulled

Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine

kserve-ci-e2e-test

kubelet

isvc-paddle-predictor-6b8b7cfb4b-gx9xl

Started

Started container kserve-container

kserve-ci-e2e-test

kubelet

isvc-paddle-predictor-6b8b7cfb4b-gx9xl

Pulled

Successfully pulled image "kserve/paddleserver:latest" in 10.814s (10.814s including waiting). Image size: 1162830075 bytes.

kserve-ci-e2e-test

kubelet

isvc-paddle-predictor-6b8b7cfb4b-gx9xl

Created

Created container: kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-paddle-predictor-6b8b7cfb4b-gx9xl

Started

Started container kube-rbac-proxy
(x3)

kserve-ci-e2e-test

kubelet

isvc-sklearn-mcp-predictor-c4b9ff587-62qtx

Unhealthy

Readiness probe failed: Get "http://10.133.0.30:8080/v1/models/isvc-sklearn-mcp": dial tcp 10.133.0.30:8080: connect: connection refused
(x6)

kserve-ci-e2e-test

kubelet

isvc-sklearn-mcp-predictor-c4b9ff587-62qtx

Unhealthy

Readiness probe failed: Get "https://10.133.0.30:8643/healthz": dial tcp 10.133.0.30:8643: connect: connection refused
(x2)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-paddle-predictor

FailedGetResourceMetric

failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x2)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-paddle-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x2)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-paddle-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
(x2)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-paddle-predictor

FailedGetResourceMetric

failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)

kserve-ci-e2e-test

v1beta1Controllers

isvc-paddle

InferenceServiceReady

InferenceService [isvc-paddle] is Ready
(x12)

kserve-ci-e2e-test

v1beta1Controllers

isvc-paddle

VirtualServiceCRDNotFound

Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.

kserve-ci-e2e-test

kubelet

isvc-paddle-predictor-6b8b7cfb4b-gx9xl

Killing

Stopping container kube-rbac-proxy

kserve-ci-e2e-test

v1beta1Controllers

isvc-paddle-runtime

UpdateFailed

Failed to update status for InferenceService "isvc-paddle-runtime": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-paddle-runtime": the object has been modified; please apply your changes to the latest version and try again

kserve-ci-e2e-test

kubelet

isvc-paddle-predictor-6b8b7cfb4b-gx9xl

Killing

Stopping container kserve-container

kserve-ci-e2e-test

deployment-controller

isvc-paddle-runtime-predictor

ScalingReplicaSet

Scaled up replica set isvc-paddle-runtime-predictor-7f4d4f9dc8 from 0 to 1

kserve-ci-e2e-test

replicaset-controller

isvc-paddle-runtime-predictor-7f4d4f9dc8

SuccessfulCreate

Created pod: isvc-paddle-runtime-predictor-7f4d4f9dc8-bkqbx
(x7)

kserve-ci-e2e-test

kubelet

isvc-paddle-predictor-6b8b7cfb4b-gx9xl

Unhealthy

Readiness probe failed: dial tcp 10.133.0.31:8080: connect: connection refused

kserve-ci-e2e-test

kubelet

isvc-paddle-predictor-6b8b7cfb4b-gx9xl

Unhealthy

Readiness probe failed: Get "https://10.133.0.31:8643/healthz": dial tcp 10.133.0.31:8643: connect: connection refused

kserve-ci-e2e-test

kubelet

isvc-paddle-runtime-predictor-7f4d4f9dc8-bkqbx

FailedMount

MountVolume.SetUp failed for volume "proxy-tls" : secret "isvc-paddle-runtime-predictor-serving-cert" not found

kserve-ci-e2e-test

multus

isvc-paddle-runtime-predictor-7f4d4f9dc8-bkqbx

AddedInterface

Add eth0 [10.133.0.32/23] from ovn-kubernetes

kserve-ci-e2e-test

kubelet

isvc-paddle-runtime-predictor-7f4d4f9dc8-bkqbx

Pulled

Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:4ae4df75945dd799b0c11b5e37a48a5ba8230ff3507d345ea07c8897a8c39885" already present on machine

kserve-ci-e2e-test

kubelet

isvc-paddle-runtime-predictor-7f4d4f9dc8-bkqbx

Created

Created container: storage-initializer

kserve-ci-e2e-test

kubelet

isvc-paddle-runtime-predictor-7f4d4f9dc8-bkqbx

Started

Started container storage-initializer

kserve-ci-e2e-test

kubelet

isvc-paddle-runtime-predictor-7f4d4f9dc8-bkqbx

Started

Started container kserve-container

kserve-ci-e2e-test

kubelet

isvc-paddle-runtime-predictor-7f4d4f9dc8-bkqbx

Created

Created container: kserve-container

kserve-ci-e2e-test

kubelet

isvc-paddle-runtime-predictor-7f4d4f9dc8-bkqbx

Pulled

Container image "kserve/paddleserver:latest" already present on machine

kserve-ci-e2e-test

kubelet

isvc-paddle-runtime-predictor-7f4d4f9dc8-bkqbx

Created

Created container: kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-paddle-runtime-predictor-7f4d4f9dc8-bkqbx

Started

Started container kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-paddle-runtime-predictor-7f4d4f9dc8-bkqbx

Pulled

Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine
(x3)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-paddle-runtime-predictor

FailedGetResourceMetric

failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x3)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-paddle-runtime-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API

kserve-ci-e2e-test

v1beta1Controllers

isvc-paddle-runtime

InferenceServiceReady

InferenceService [isvc-paddle-runtime] is Ready
(x13)

kserve-ci-e2e-test

v1beta1Controllers

isvc-paddle-runtime

VirtualServiceCRDNotFound

Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-paddle-runtime-predictor

FailedGetResourceMetric

failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-paddle-runtime-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)

kserve-ci-e2e-test

kubelet

isvc-paddle-v2-kserve-predictor-7dbd59854-cfqdx

FailedMount

MountVolume.SetUp failed for volume "proxy-tls" : secret "isvc-paddle-v2-kserve-predictor-serving-cert" not found

kserve-ci-e2e-test

deployment-controller

isvc-paddle-v2-kserve-predictor

ScalingReplicaSet

Scaled up replica set isvc-paddle-v2-kserve-predictor-7dbd59854 from 0 to 1

kserve-ci-e2e-test

kubelet

isvc-paddle-runtime-predictor-7f4d4f9dc8-bkqbx

Killing

Stopping container kserve-container

kserve-ci-e2e-test

kubelet

isvc-paddle-runtime-predictor-7f4d4f9dc8-bkqbx

Killing

Stopping container kube-rbac-proxy

kserve-ci-e2e-test

v1beta1Controllers

isvc-paddle-v2-kserve

InternalError

fails to update InferenceService status: Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-paddle-v2-kserve": the object has been modified; please apply your changes to the latest version and try again
(x2)

kserve-ci-e2e-test

v1beta1Controllers

isvc-paddle-v2-kserve

UpdateFailed

Failed to update status for InferenceService "isvc-paddle-v2-kserve": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-paddle-v2-kserve": the object has been modified; please apply your changes to the latest version and try again

kserve-ci-e2e-test | replicaset-controller | isvc-paddle-v2-kserve-predictor-7dbd59854 | SuccessfulCreate | Created pod: isvc-paddle-v2-kserve-predictor-7dbd59854-cfqdx
kserve-ci-e2e-test | kubelet | isvc-paddle-v2-kserve-predictor-7dbd59854-cfqdx | Started | Started container storage-initializer
kserve-ci-e2e-test | kubelet | isvc-paddle-v2-kserve-predictor-7dbd59854-cfqdx | Created | Created container: storage-initializer
kserve-ci-e2e-test | multus | isvc-paddle-v2-kserve-predictor-7dbd59854-cfqdx | AddedInterface | Add eth0 [10.133.0.33/23] from ovn-kubernetes
kserve-ci-e2e-test | kubelet | isvc-paddle-v2-kserve-predictor-7dbd59854-cfqdx | Pulled | Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:4ae4df75945dd799b0c11b5e37a48a5ba8230ff3507d345ea07c8897a8c39885" already present on machine (x7)
kserve-ci-e2e-test | kubelet | isvc-paddle-runtime-predictor-7f4d4f9dc8-bkqbx | Unhealthy | Readiness probe failed: dial tcp 10.133.0.32:8080: connect: connection refused
kserve-ci-e2e-test | kubelet | isvc-paddle-runtime-predictor-7f4d4f9dc8-bkqbx | Unhealthy | Readiness probe failed: Get "https://10.133.0.32:8643/healthz": dial tcp 10.133.0.32:8643: connect: connection refused
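The "connection refused" readiness failures above are the expected signature of a container that has not yet bound its port (or has just been stopped): kubelet's tcpSocket/httpGet probe connects to the pod IP and nothing is listening yet. A small Python sketch of the TCP variant (this is illustrative, not kubelet's actual code):

```python
import socket

def tcp_ready(host, port, timeout=1.0):
    """True iff something accepts a TCP connection, like kubelet's tcpSocket probe."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # includes ConnectionRefusedError -> "connect: connection refused"
        return False

# Grab a free port, probe it with no listener, then probe again once bound.
tmp = socket.socket()
tmp.bind(("127.0.0.1", 0))
port = tmp.getsockname()[1]
tmp.close()

before = tcp_ready("127.0.0.1", port)   # no server yet -> connection refused -> False

srv = socket.socket()
srv.bind(("127.0.0.1", port))
srv.listen(1)                           # "container" is now listening
after = tcp_ready("127.0.0.1", port)    # -> True
srv.close()

print(before, after)
```

Until the probe starts succeeding, the pod stays NotReady, which also feeds the HPA "pods might be unready" errors elsewhere in this listing.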

kserve-ci-e2e-test | kubelet | isvc-paddle-v2-kserve-predictor-7dbd59854-cfqdx | Created | Created container: kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-paddle-v2-kserve-predictor-7dbd59854-cfqdx | Started | Started container kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-paddle-v2-kserve-predictor-7dbd59854-cfqdx | Pulled | Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine
kserve-ci-e2e-test | kubelet | isvc-paddle-v2-kserve-predictor-7dbd59854-cfqdx | Started | Started container kserve-container
kserve-ci-e2e-test | kubelet | isvc-paddle-v2-kserve-predictor-7dbd59854-cfqdx | Created | Created container: kserve-container
kserve-ci-e2e-test | kubelet | isvc-paddle-v2-kserve-predictor-7dbd59854-cfqdx | Pulled | Container image "kserve/paddleserver:latest" already present on machine (x6)
kserve-ci-e2e-test | kubelet | isvc-paddle-v2-kserve-predictor-7dbd59854-cfqdx | Unhealthy | Readiness probe failed: dial tcp 10.133.0.33:8080: connect: connection refused (x3)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-paddle-v2-kserve-predictor | FailedGetResourceMetric | failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API (x3)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-paddle-v2-kserve-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API (x13)
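The FailedGetResourceMetric/FailedComputeMetricsReplicas events mean the HPA's input was missing (metrics-server had no CPU samples for the freshly started pods yet), so it skipped scaling rather than acting on bad data. When metrics do arrive, the core HPA rule is desiredReplicas = ceil(currentReplicas × currentMetric / targetMetric), subject to a small tolerance (about 10% by default). A sketch of just that formula:

```python
from math import ceil

def desired_replicas(current_replicas, current_utilization, target_utilization):
    """Core HPA scaling rule: desired = ceil(current * currentMetric / targetMetric)."""
    return ceil(current_replicas * current_utilization / target_utilization)

print(desired_replicas(1, 160, 80))  # 2: one pod running at 160% of an 80% target
print(desired_replicas(4, 40, 80))   # 2: scale down when utilization halves
```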

kserve-ci-e2e-test | v1beta1Controllers | isvc-paddle-v2-kserve | VirtualServiceCRDNotFound | Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.
kserve-ci-e2e-test | v1beta1Controllers | isvc-paddle-v2-kserve | InferenceServiceReady | InferenceService [isvc-paddle-v2-kserve] is Ready
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-paddle-v2-kserve-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-paddle-v2-kserve-predictor | FailedGetResourceMetric | failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
kserve-ci-e2e-test | deployment-controller | isvc-pmml-predictor | ScalingReplicaSet | Scaled up replica set isvc-pmml-predictor-8bb578669 from 0 to 1
kserve-ci-e2e-test | kubelet | isvc-paddle-v2-kserve-predictor-7dbd59854-cfqdx | Killing | Stopping container kserve-container
kserve-ci-e2e-test | kubelet | isvc-paddle-v2-kserve-predictor-7dbd59854-cfqdx | Killing | Stopping container kube-rbac-proxy (x2)
kserve-ci-e2e-test | v1beta1Controllers | isvc-pmml | UpdateFailed | Failed to update status for InferenceService "isvc-pmml": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-pmml": the object has been modified; please apply your changes to the latest version and try again
kserve-ci-e2e-test | multus | isvc-pmml-predictor-8bb578669-sn85p | AddedInterface | Add eth0 [10.133.0.34/23] from ovn-kubernetes
kserve-ci-e2e-test | replicaset-controller | isvc-pmml-predictor-8bb578669 | SuccessfulCreate | Created pod: isvc-pmml-predictor-8bb578669-sn85p
kserve-ci-e2e-test | v1beta1Controllers | isvc-pmml | InternalError | fails to update InferenceService status: Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-pmml": the object has been modified; please apply your changes to the latest version and try again
kserve-ci-e2e-test | kubelet | isvc-pmml-predictor-8bb578669-sn85p | Started | Started container storage-initializer
kserve-ci-e2e-test | kubelet | isvc-pmml-predictor-8bb578669-sn85p | Created | Created container: storage-initializer
kserve-ci-e2e-test | kubelet | isvc-pmml-predictor-8bb578669-sn85p | Pulled | Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:4ae4df75945dd799b0c11b5e37a48a5ba8230ff3507d345ea07c8897a8c39885" already present on machine
kserve-ci-e2e-test | kubelet | isvc-pmml-predictor-8bb578669-sn85p | Pulling | Pulling image "kserve/pmmlserver:latest"
kserve-ci-e2e-test | kubelet | isvc-pmml-predictor-8bb578669-sn85p | Pulled | Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine
kserve-ci-e2e-test | kubelet | isvc-pmml-predictor-8bb578669-sn85p | Started | Started container kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-pmml-predictor-8bb578669-sn85p | Pulled | Successfully pulled image "kserve/pmmlserver:latest" in 7.046s (7.046s including waiting). Image size: 800927094 bytes.
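As a sanity check on pull performance, the timing kubelet logged for the pmmlserver pull works out to roughly 114 MB/s of effective throughput (size and duration taken verbatim from the event above):

```python
# Effective throughput of the "kserve/pmmlserver:latest" pull logged above.
size_bytes = 800_927_094
seconds = 7.046
mb_per_s = size_bytes / seconds / 1e6
print(f"{mb_per_s:.0f} MB/s")  # ~114 MB/s
```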

kserve-ci-e2e-test | kubelet | isvc-pmml-predictor-8bb578669-sn85p | Started | Started container kserve-container
kserve-ci-e2e-test | kubelet | isvc-pmml-predictor-8bb578669-sn85p | Created | Created container: kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-pmml-predictor-8bb578669-sn85p | Created | Created container: kserve-container (x2)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-pmml-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API (x2)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-pmml-predictor | FailedGetResourceMetric | failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API (x10)
kserve-ci-e2e-test | kubelet | isvc-pmml-predictor-8bb578669-sn85p | Unhealthy | Readiness probe failed: dial tcp 10.133.0.34:8080: connect: connection refused (x4)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-pmml-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready) (x4)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-pmml-predictor | FailedGetResourceMetric | failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready) (x12)
kserve-ci-e2e-test | v1beta1Controllers | isvc-pmml | VirtualServiceCRDNotFound | Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.
kserve-ci-e2e-test | v1beta1Controllers | isvc-pmml | InferenceServiceReady | InferenceService [isvc-pmml] is Ready
kserve-ci-e2e-test | kubelet | isvc-pmml-runtime-predictor-67bc544947-f7qtd | FailedMount | MountVolume.SetUp failed for volume "proxy-tls" : secret "isvc-pmml-runtime-predictor-serving-cert" not found
kserve-ci-e2e-test | replicaset-controller | isvc-pmml-runtime-predictor-67bc544947 | SuccessfulCreate | Created pod: isvc-pmml-runtime-predictor-67bc544947-f7qtd
kserve-ci-e2e-test | kubelet | isvc-pmml-predictor-8bb578669-sn85p | Killing | Stopping container kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-pmml-predictor-8bb578669-sn85p | Killing | Stopping container kserve-container
kserve-ci-e2e-test | v1beta1Controllers | isvc-pmml-runtime | InternalError | fails to update InferenceService status: Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-pmml-runtime": the object has been modified; please apply your changes to the latest version and try again (x2)
kserve-ci-e2e-test | v1beta1Controllers | isvc-pmml-runtime | UpdateFailed | Failed to update status for InferenceService "isvc-pmml-runtime": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-pmml-runtime": the object has been modified; please apply your changes to the latest version and try again
kserve-ci-e2e-test | deployment-controller | isvc-pmml-runtime-predictor | ScalingReplicaSet | Scaled up replica set isvc-pmml-runtime-predictor-67bc544947 from 0 to 1

kserve-ci-e2e-test | multus | isvc-pmml-runtime-predictor-67bc544947-f7qtd | AddedInterface | Add eth0 [10.133.0.35/23] from ovn-kubernetes
kserve-ci-e2e-test | kubelet | isvc-pmml-runtime-predictor-67bc544947-f7qtd | Pulled | Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:4ae4df75945dd799b0c11b5e37a48a5ba8230ff3507d345ea07c8897a8c39885" already present on machine
kserve-ci-e2e-test | kubelet | isvc-pmml-runtime-predictor-67bc544947-f7qtd | Created | Created container: storage-initializer
kserve-ci-e2e-test | kubelet | isvc-pmml-runtime-predictor-67bc544947-f7qtd | Started | Started container storage-initializer
kserve-ci-e2e-test | kubelet | isvc-pmml-runtime-predictor-67bc544947-f7qtd | Pulled | Container image "kserve/pmmlserver:latest" already present on machine
kserve-ci-e2e-test | kubelet | isvc-pmml-runtime-predictor-67bc544947-f7qtd | Started | Started container kserve-container
kserve-ci-e2e-test | kubelet | isvc-pmml-runtime-predictor-67bc544947-f7qtd | Started | Started container kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-pmml-runtime-predictor-67bc544947-f7qtd | Created | Created container: kserve-container
kserve-ci-e2e-test | kubelet | isvc-pmml-runtime-predictor-67bc544947-f7qtd | Pulled | Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine
kserve-ci-e2e-test | kubelet | isvc-pmml-runtime-predictor-67bc544947-f7qtd | Created | Created container: kube-rbac-proxy (x3)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-pmml-runtime-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API (x3)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-pmml-runtime-predictor | FailedGetResourceMetric | failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API (x13)
kserve-ci-e2e-test | v1beta1Controllers | isvc-pmml-runtime | VirtualServiceCRDNotFound | Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.
kserve-ci-e2e-test | v1beta1Controllers | isvc-pmml-runtime | InferenceServiceReady | InferenceService [isvc-pmml-runtime] is Ready (x3)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-pmml-runtime-predictor | FailedGetResourceMetric | failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready) (x3)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-pmml-runtime-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
kserve-ci-e2e-test | multus | isvc-pmml-v2-kserve-predictor-6578f8ffc7-bm857 | AddedInterface | Add eth0 [10.133.0.36/23] from ovn-kubernetes
kserve-ci-e2e-test | deployment-controller | isvc-pmml-v2-kserve-predictor | ScalingReplicaSet | Scaled up replica set isvc-pmml-v2-kserve-predictor-6578f8ffc7 from 0 to 1 (x2)
kserve-ci-e2e-test | v1beta1Controllers | isvc-pmml-v2-kserve | UpdateFailed | Failed to update status for InferenceService "isvc-pmml-v2-kserve": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-pmml-v2-kserve": the object has been modified; please apply your changes to the latest version and try again
kserve-ci-e2e-test | kubelet | isvc-pmml-runtime-predictor-67bc544947-f7qtd | Killing | Stopping container kserve-container
kserve-ci-e2e-test | kubelet | isvc-pmml-runtime-predictor-67bc544947-f7qtd | Killing | Stopping container kube-rbac-proxy
kserve-ci-e2e-test | v1beta1Controllers | isvc-pmml-v2-kserve | InternalError | fails to update InferenceService status: Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-pmml-v2-kserve": the object has been modified; please apply your changes to the latest version and try again
kserve-ci-e2e-test | replicaset-controller | isvc-pmml-v2-kserve-predictor-6578f8ffc7 | SuccessfulCreate | Created pod: isvc-pmml-v2-kserve-predictor-6578f8ffc7-bm857
kserve-ci-e2e-test | kubelet | isvc-pmml-v2-kserve-predictor-6578f8ffc7-bm857 | Pulled | Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:4ae4df75945dd799b0c11b5e37a48a5ba8230ff3507d345ea07c8897a8c39885" already present on machine
kserve-ci-e2e-test | kubelet | isvc-pmml-v2-kserve-predictor-6578f8ffc7-bm857 | Started | Started container storage-initializer
kserve-ci-e2e-test | kubelet | isvc-pmml-v2-kserve-predictor-6578f8ffc7-bm857 | Created | Created container: storage-initializer
kserve-ci-e2e-test | kubelet | isvc-pmml-runtime-predictor-67bc544947-f7qtd | Unhealthy | Readiness probe failed: Get "https://10.133.0.35:8643/healthz": dial tcp 10.133.0.35:8643: connect: connection refused (x11)
kserve-ci-e2e-test | kubelet | isvc-pmml-runtime-predictor-67bc544947-f7qtd | Unhealthy | Readiness probe failed: dial tcp 10.133.0.35:8080: connect: connection refused
kserve-ci-e2e-test | kubelet | isvc-pmml-v2-kserve-predictor-6578f8ffc7-bm857 | Pulled | Container image "kserve/pmmlserver:latest" already present on machine
kserve-ci-e2e-test | kubelet | isvc-pmml-v2-kserve-predictor-6578f8ffc7-bm857 | Created | Created container: kserve-container
kserve-ci-e2e-test | kubelet | isvc-pmml-v2-kserve-predictor-6578f8ffc7-bm857 | Started | Started container kserve-container
kserve-ci-e2e-test | kubelet | isvc-pmml-v2-kserve-predictor-6578f8ffc7-bm857 | Pulled | Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine
kserve-ci-e2e-test | kubelet | isvc-pmml-v2-kserve-predictor-6578f8ffc7-bm857 | Created | Created container: kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-pmml-v2-kserve-predictor-6578f8ffc7-bm857 | Started | Started container kube-rbac-proxy (x2)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-pmml-v2-kserve-predictor | FailedGetResourceMetric | failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API (x2)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-pmml-v2-kserve-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
kserve-ci-e2e-test | v1beta1Controllers | isvc-pmml-v2-kserve | InferenceServiceReady | InferenceService [isvc-pmml-v2-kserve] is Ready (x12)
kserve-ci-e2e-test | v1beta1Controllers | isvc-pmml-v2-kserve | VirtualServiceCRDNotFound | Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true. (x4)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-pmml-v2-kserve-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready) (x4)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-pmml-v2-kserve-predictor | FailedGetResourceMetric | failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)

kserve-ci-e2e-test | deployment-controller | isvc-primary-0c0f7e-predictor | ScalingReplicaSet | Scaled up replica set isvc-primary-0c0f7e-predictor-66b6bd9685 from 0 to 1
kserve-ci-e2e-test | v1beta1Controllers | isvc-primary-0c0f7e | InternalError | fails to update InferenceService status: Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-primary-0c0f7e": the object has been modified; please apply your changes to the latest version and try again
kserve-ci-e2e-test | v1beta1Controllers | isvc-primary-0c0f7e | UpdateFailed | Failed to update status for InferenceService "isvc-primary-0c0f7e": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-primary-0c0f7e": the object has been modified; please apply your changes to the latest version and try again
kserve-ci-e2e-test | kubelet | isvc-pmml-v2-kserve-predictor-6578f8ffc7-bm857 | Killing | Stopping container kube-rbac-proxy
kserve-ci-e2e-test | replicaset-controller | isvc-primary-0c0f7e-predictor-66b6bd9685 | SuccessfulCreate | Created pod: isvc-primary-0c0f7e-predictor-66b6bd9685-cfmfj
kserve-ci-e2e-test | kubelet | isvc-pmml-v2-kserve-predictor-6578f8ffc7-bm857 | Killing | Stopping container kserve-container
kserve-ci-e2e-test | kubelet | isvc-primary-0c0f7e-predictor-66b6bd9685-cfmfj | Pulled | Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:4ae4df75945dd799b0c11b5e37a48a5ba8230ff3507d345ea07c8897a8c39885" already present on machine
kserve-ci-e2e-test | kubelet | isvc-primary-0c0f7e-predictor-66b6bd9685-cfmfj | Created | Created container: storage-initializer
kserve-ci-e2e-test | multus | isvc-primary-0c0f7e-predictor-66b6bd9685-cfmfj | AddedInterface | Add eth0 [10.133.0.37/23] from ovn-kubernetes
kserve-ci-e2e-test | kubelet | isvc-primary-0c0f7e-predictor-66b6bd9685-cfmfj | Started | Started container storage-initializer (x10)
kserve-ci-e2e-test | kubelet | isvc-pmml-v2-kserve-predictor-6578f8ffc7-bm857 | Unhealthy | Readiness probe failed: dial tcp 10.133.0.36:8080: connect: connection refused
kserve-ci-e2e-test | kubelet | isvc-pmml-v2-kserve-predictor-6578f8ffc7-bm857 | Unhealthy | Readiness probe failed: Get "https://10.133.0.36:8643/healthz": dial tcp 10.133.0.36:8643: connect: connection refused
kserve-ci-e2e-test | kubelet | isvc-primary-0c0f7e-predictor-66b6bd9685-cfmfj | Pulled | Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine
kserve-ci-e2e-test | kubelet | isvc-primary-0c0f7e-predictor-66b6bd9685-cfmfj | Pulled | Container image "quay.io/opendatahub/sklearn-serving-runtime:odh-pr-1443" already present on machine
kserve-ci-e2e-test | kubelet | isvc-primary-0c0f7e-predictor-66b6bd9685-cfmfj | Created | Created container: kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-primary-0c0f7e-predictor-66b6bd9685-cfmfj | Started | Started container kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-primary-0c0f7e-predictor-66b6bd9685-cfmfj | Created | Created container: kserve-container
kserve-ci-e2e-test | kubelet | isvc-primary-0c0f7e-predictor-66b6bd9685-cfmfj | Started | Started container kserve-container (x3)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-primary-0c0f7e-predictor | FailedGetResourceMetric | failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API (x3)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-primary-0c0f7e-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
kserve-ci-e2e-test | v1beta1Controllers | isvc-primary-0c0f7e | InferenceServiceReady | InferenceService [isvc-primary-0c0f7e] is Ready (x12)
kserve-ci-e2e-test | v1beta1Controllers | isvc-primary-0c0f7e | VirtualServiceCRDNotFound | Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.
kserve-ci-e2e-test | v1beta1Controllers | isvc-secondary-0c0f7e | InternalError | fails to update InferenceService status: Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-secondary-0c0f7e": the object has been modified; please apply your changes to the latest version and try again
kserve-ci-e2e-test | replicaset-controller | isvc-secondary-0c0f7e-predictor-6bfc5d8786 | SuccessfulCreate | Created pod: isvc-secondary-0c0f7e-predictor-6bfc5d8786-ht5fw
kserve-ci-e2e-test | v1beta1Controllers | isvc-secondary-0c0f7e | UpdateFailed | Failed to update status for InferenceService "isvc-secondary-0c0f7e": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-secondary-0c0f7e": the object has been modified; please apply your changes to the latest version and try again
kserve-ci-e2e-test | deployment-controller | isvc-secondary-0c0f7e-predictor | ScalingReplicaSet | Scaled up replica set isvc-secondary-0c0f7e-predictor-6bfc5d8786 from 0 to 1
kserve-ci-e2e-test | multus | isvc-secondary-0c0f7e-predictor-6bfc5d8786-ht5fw | AddedInterface | Add eth0 [10.133.0.38/23] from ovn-kubernetes
kserve-ci-e2e-test | kubelet | isvc-secondary-0c0f7e-predictor-6bfc5d8786-ht5fw | FailedMount | MountVolume.SetUp failed for volume "proxy-tls" : secret "isvc-secondary-0c0f7e-predictor-serving-cert" not found (x2)
kserve-ci-e2e-test | kubelet | isvc-secondary-0c0f7e-predictor-6bfc5d8786-ht5fw | Started | Started container storage-initializer (x2)
kserve-ci-e2e-test | kubelet | isvc-secondary-0c0f7e-predictor-6bfc5d8786-ht5fw | Created | Created container: storage-initializer (x2)
kserve-ci-e2e-test | kubelet | isvc-secondary-0c0f7e-predictor-6bfc5d8786-ht5fw | Pulled | Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:4ae4df75945dd799b0c11b5e37a48a5ba8230ff3507d345ea07c8897a8c39885" already present on machine (x3)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-primary-0c0f7e-predictor | FailedGetResourceMetric | failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready) (x3)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-primary-0c0f7e-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
kserve-ci-e2e-test | kubelet | isvc-secondary-0c0f7e-predictor-6bfc5d8786-ht5fw | BackOff | Back-off restarting failed container storage-initializer in pod isvc-secondary-0c0f7e-predictor-6bfc5d8786-ht5fw_kserve-ci-e2e-test(ebc668aa-c54d-4cce-8008-108e3b54aee5)
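The BackOff event above means kubelet is restarting a crashing storage-initializer with an exponentially growing delay; as commonly documented, the delay starts around 10s, doubles per restart, and is capped at five minutes (resetting after a period of stable running). A sketch of that schedule, with those values as assumptions:

```python
# Approximate kubelet container-restart back-off: 10s initial delay,
# doubling per restart, capped at 300s (five minutes).

def restart_delays(n, initial=10, cap=300):
    delays, d = [], initial
    for _ in range(n):
        delays.append(min(d, cap))
        d *= 2
    return delays

print(restart_delays(7))  # [10, 20, 40, 80, 160, 300, 300]
```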

kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-secondary-0c0f7e-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-secondary-0c0f7e-predictor | FailedGetResourceMetric | failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API (x13)
kserve-ci-e2e-test | v1beta1Controllers | isvc-secondary-0c0f7e | VirtualServiceCRDNotFound | Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.
kserve-ci-e2e-test | v1beta1Controllers | isvc-init-fail-c1d19b | UpdateFailed | Failed to update status for InferenceService "isvc-init-fail-c1d19b": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-init-fail-c1d19b": the object has been modified; please apply your changes to the latest version and try again
kserve-ci-e2e-test | deployment-controller | isvc-init-fail-c1d19b-predictor | ScalingReplicaSet | Scaled up replica set isvc-init-fail-c1d19b-predictor-76c7889f6 from 0 to 1
kserve-ci-e2e-test | kubelet | isvc-primary-0c0f7e-predictor-66b6bd9685-cfmfj | Killing | Stopping container kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-init-fail-c1d19b-predictor-76c7889f6-mfqwc | FailedMount | MountVolume.SetUp failed for volume "proxy-tls" : secret "isvc-init-fail-c1d19b-predictor-serving-cert" not found
kserve-ci-e2e-test | kubelet | isvc-primary-0c0f7e-predictor-66b6bd9685-cfmfj | Killing | Stopping container kserve-container
kserve-ci-e2e-test | v1beta1Controllers | isvc-init-fail-c1d19b | InternalError | fails to update InferenceService status: Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-init-fail-c1d19b": the object has been modified; please apply your changes to the latest version and try again
kserve-ci-e2e-test | replicaset-controller | isvc-init-fail-c1d19b-predictor-76c7889f6 | SuccessfulCreate | Created pod: isvc-init-fail-c1d19b-predictor-76c7889f6-mfqwc (x9)
kserve-ci-e2e-test | kubelet | isvc-primary-0c0f7e-predictor-66b6bd9685-cfmfj | Unhealthy | Readiness probe failed: dial tcp 10.133.0.37:8080: connect: connection refused
kserve-ci-e2e-test | multus | isvc-init-fail-c1d19b-predictor-76c7889f6-mfqwc | AddedInterface | Add eth0 [10.133.0.39/23] from ovn-kubernetes
kserve-ci-e2e-test | kubelet | isvc-primary-0c0f7e-predictor-66b6bd9685-cfmfj | Unhealthy | Readiness probe failed: Get "https://10.133.0.37:8643/healthz": dial tcp 10.133.0.37:8643: connect: connection refused (x10)
kserve-ci-e2e-test | v1beta1Controllers | isvc-init-fail-c1d19b | VirtualServiceCRDNotFound | Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true. (x2)
kserve-ci-e2e-test | kubelet | isvc-init-fail-c1d19b-predictor-76c7889f6-mfqwc | Started | Started container storage-initializer (x2)
kserve-ci-e2e-test | kubelet | isvc-init-fail-c1d19b-predictor-76c7889f6-mfqwc | Created | Created container: storage-initializer (x2)
kserve-ci-e2e-test | kubelet | isvc-init-fail-c1d19b-predictor-76c7889f6-mfqwc | Pulled | Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:4ae4df75945dd799b0c11b5e37a48a5ba8230ff3507d345ea07c8897a8c39885" already present on machine
kserve-ci-e2e-test | kubelet | isvc-predictive-sklearn-predictor-cd7c759c9-c5kzw | Started | Started container storage-initializer (x2)
kserve-ci-e2e-test | v1beta1Controllers | isvc-predictive-sklearn | UpdateFailed | Failed to update status for InferenceService "isvc-predictive-sklearn": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-predictive-sklearn": the object has been modified; please apply your changes to the latest version and try again
kserve-ci-e2e-test | replicaset-controller | isvc-predictive-sklearn-predictor-cd7c759c9 | SuccessfulCreate | Created pod: isvc-predictive-sklearn-predictor-cd7c759c9-c5kzw
kserve-ci-e2e-test | deployment-controller | isvc-predictive-sklearn-predictor | ScalingReplicaSet | Scaled up replica set isvc-predictive-sklearn-predictor-cd7c759c9 from 0 to 1
kserve-ci-e2e-test | multus | isvc-predictive-sklearn-predictor-cd7c759c9-c5kzw | AddedInterface | Add eth0 [10.133.0.40/23] from ovn-kubernetes
kserve-ci-e2e-test | kubelet | isvc-predictive-sklearn-predictor-cd7c759c9-c5kzw | Pulled | Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:4ae4df75945dd799b0c11b5e37a48a5ba8230ff3507d345ea07c8897a8c39885" already present on machine
kserve-ci-e2e-test | kubelet | isvc-init-fail-c1d19b-predictor-76c7889f6-mfqwc | Killing | Stopping container storage-initializer
kserve-ci-e2e-test | kubelet | isvc-predictive-sklearn-predictor-cd7c759c9-c5kzw | Created | Created container: storage-initializer
kserve-ci-e2e-test | v1beta1Controllers | isvc-predictive-sklearn | InternalError | fails to update InferenceService status: Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-predictive-sklearn": the object has been modified; please apply your changes to the latest version and try again
kserve-ci-e2e-test | kubelet | isvc-predictive-sklearn-predictor-cd7c759c9-c5kzw | Pulling | Pulling image "kserve/predictiveserver:latest"
kserve-ci-e2e-test | kubelet | isvc-predictive-sklearn-predictor-cd7c759c9-c5kzw | Created | Created container: kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-predictive-sklearn-predictor-cd7c759c9-c5kzw | Pulled | Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine
kserve-ci-e2e-test | kubelet | isvc-predictive-sklearn-predictor-cd7c759c9-c5kzw | Started | Started container kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-predictive-sklearn-predictor-cd7c759c9-c5kzw | Started | Started container kserve-container
kserve-ci-e2e-test | kubelet | isvc-predictive-sklearn-predictor-cd7c759c9-c5kzw | Created | Created container: kserve-container
kserve-ci-e2e-test | kubelet | isvc-predictive-sklearn-predictor-cd7c759c9-c5kzw | Pulled | Successfully pulled image "kserve/predictiveserver:latest" in 29.523s (29.523s including waiting). Image size: 2324227435 bytes. (x3)
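A note on the "(xN)" suffixes throughout this listing: Kubernetes compacts recurring events, so repeats that share the same object, reason, and message are stored as one event whose count is incremented rather than as new objects. A small illustrative aggregator (the dict shape here is invented for the sketch, not the real Event schema):

```python
from collections import Counter

def aggregate(events):
    """Collapse events sharing an (object, reason, message) key into counted rows."""
    counts = Counter((e["object"], e["reason"], e["message"]) for e in events)
    return [
        {"object": o, "reason": r, "message": m, "count": c}
        for (o, r, m), c in counts.items()
    ]

probe_fail = {"object": "pod-a", "reason": "Unhealthy",
              "message": "Readiness probe failed: connection refused"}
started = {"object": "pod-a", "reason": "Started",
           "message": "Started container"}
rows = aggregate([probe_fail] * 3 + [started])
for row in rows:
    print(f'{row["reason"]} (x{row["count"]})')  # Unhealthy (x3), Started (x1)
```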

kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-predictive-sklearn-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API (x3)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-predictive-sklearn-predictor | FailedGetResourceMetric | failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API (x9)
kserve-ci-e2e-test | kubelet | isvc-predictive-sklearn-predictor-cd7c759c9-c5kzw | Unhealthy | Readiness probe failed: dial tcp 10.133.0.40:8080: connect: connection refused (x9)
kserve-ci-e2e-test | v1beta1Controllers | isvc-predictive-sklearn | VirtualServiceCRDNotFound | Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.
kserve-ci-e2e-test | v1beta1Controllers | isvc-predictive-sklearn | InferenceServiceReady | InferenceService [isvc-predictive-sklearn] is Ready
kserve-ci-e2e-test | v1beta1Controllers | isvc-predictive-xgboost | InternalError | fails to update InferenceService status: Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-predictive-xgboost": the object has been modified; please apply your changes to the latest version and try again (x5)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-predictive-sklearn-predictor | FailedGetResourceMetric | failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
kserve-ci-e2e-test | kubelet | isvc-predictive-sklearn-predictor-cd7c759c9-c5kzw | Unhealthy | Readiness probe failed: Get "https://10.133.0.40:8643/healthz": dial tcp 10.133.0.40:8643: connect: connection refused
kserve-ci-e2e-test | deployment-controller | isvc-predictive-xgboost-predictor | ScalingReplicaSet | Scaled up replica set isvc-predictive-xgboost-predictor-7ff98fd74d from 0 to 1
kserve-ci-e2e-test | replicaset-controller | isvc-predictive-xgboost-predictor-7ff98fd74d | SuccessfulCreate | Created pod: isvc-predictive-xgboost-predictor-7ff98fd74d-5rpj7
kserve-ci-e2e-test | kubelet | isvc-predictive-xgboost-predictor-7ff98fd74d-5rpj7 | FailedMount | MountVolume.SetUp failed for volume "proxy-tls" : secret "isvc-predictive-xgboost-predictor-serving-cert" not found (x5)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-predictive-sklearn-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
kserve-ci-e2e-test | kubelet | isvc-predictive-sklearn-predictor-cd7c759c9-c5kzw | Killing | Stopping container kserve-container (x2)
kserve-ci-e2e-test | v1beta1Controllers | isvc-predictive-xgboost | UpdateFailed | Failed to update status for InferenceService "isvc-predictive-xgboost": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-predictive-xgboost": the object has been modified; please apply your changes to the latest version and try again
kserve-ci-e2e-test | kubelet | isvc-predictive-sklearn-predictor-cd7c759c9-c5kzw | Killing | Stopping container kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-predictive-xgboost-predictor-7ff98fd74d-5rpj7 | Started | Started container storage-initializer
kserve-ci-e2e-test | kubelet | isvc-predictive-xgboost-predictor-7ff98fd74d-5rpj7 | Created | Created container: storage-initializer
kserve-ci-e2e-test | kubelet | isvc-predictive-xgboost-predictor-7ff98fd74d-5rpj7 | Pulled | Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:4ae4df75945dd799b0c11b5e37a48a5ba8230ff3507d345ea07c8897a8c39885" already present on machine
kserve-ci-e2e-test | multus | isvc-predictive-xgboost-predictor-7ff98fd74d-5rpj7 | AddedInterface | Add eth0 [10.133.0.41/23] from ovn-kubernetes
kserve-ci-e2e-test | kubelet | isvc-predictive-xgboost-predictor-7ff98fd74d-5rpj7 | Pulled | Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine
kserve-ci-e2e-test | kubelet | isvc-predictive-xgboost-predictor-7ff98fd74d-5rpj7 | Started | Started container kserve-container
kserve-ci-e2e-test | kubelet | isvc-predictive-xgboost-predictor-7ff98fd74d-5rpj7 | Created | Created container: kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-predictive-xgboost-predictor-7ff98fd74d-5rpj7 | Started | Started container kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-predictive-xgboost-predictor-7ff98fd74d-5rpj7 | Created | Created container: kserve-container

kserve-ci-e2e-test

kubelet

isvc-predictive-xgboost-predictor-7ff98fd74d-5rpj7

Pulled

Container image "kserve/predictiveserver:latest" already present on machine
(x2)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-predictive-xgboost-predictor

FailedGetResourceMetric

failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x2)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-predictive-xgboost-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x13)

kserve-ci-e2e-test

v1beta1Controllers

isvc-predictive-xgboost

VirtualServiceCRDNotFound

Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.

kserve-ci-e2e-test

v1beta1Controllers

isvc-predictive-xgboost

InferenceServiceReady

InferenceService [isvc-predictive-xgboost] is Ready
(x3)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-predictive-xgboost-predictor

FailedGetResourceMetric

failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
(x3)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-predictive-xgboost-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)

kserve-ci-e2e-test

kubelet

isvc-predictive-xgboost-predictor-7ff98fd74d-5rpj7

Killing

Stopping container kube-rbac-proxy

kserve-ci-e2e-test

replicaset-controller

isvc-predictive-lightgbm-predictor-75cb94f9f

SuccessfulCreate

Created pod: isvc-predictive-lightgbm-predictor-75cb94f9f-fzvxw

kserve-ci-e2e-test

kubelet

isvc-predictive-xgboost-predictor-7ff98fd74d-5rpj7

Killing

Stopping container kserve-container

kserve-ci-e2e-test

kubelet

isvc-predictive-lightgbm-predictor-75cb94f9f-fzvxw

FailedMount

MountVolume.SetUp failed for volume "proxy-tls" : secret "isvc-predictive-lightgbm-predictor-serving-cert" not found

kserve-ci-e2e-test

deployment-controller

isvc-predictive-lightgbm-predictor

ScalingReplicaSet

Scaled up replica set isvc-predictive-lightgbm-predictor-75cb94f9f from 0 to 1

kserve-ci-e2e-test

v1beta1Controllers

isvc-predictive-lightgbm

UpdateFailed

Failed to update status for InferenceService "isvc-predictive-lightgbm": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-predictive-lightgbm": the object has been modified; please apply your changes to the latest version and try again

kserve-ci-e2e-test

multus

isvc-predictive-lightgbm-predictor-75cb94f9f-fzvxw

AddedInterface

Add eth0 [10.133.0.42/23] from ovn-kubernetes

kserve-ci-e2e-test

kubelet

isvc-predictive-lightgbm-predictor-75cb94f9f-fzvxw

Pulled

Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:4ae4df75945dd799b0c11b5e37a48a5ba8230ff3507d345ea07c8897a8c39885" already present on machine

kserve-ci-e2e-test

kubelet

isvc-predictive-lightgbm-predictor-75cb94f9f-fzvxw

Created

Created container: storage-initializer

kserve-ci-e2e-test

kubelet

isvc-predictive-lightgbm-predictor-75cb94f9f-fzvxw

Started

Started container storage-initializer

kserve-ci-e2e-test

kubelet

isvc-predictive-xgboost-predictor-7ff98fd74d-5rpj7

Unhealthy

Readiness probe failed: Get "https://10.133.0.41:8643/healthz": dial tcp 10.133.0.41:8643: connect: connection refused
(x9)

kserve-ci-e2e-test

kubelet

isvc-predictive-xgboost-predictor-7ff98fd74d-5rpj7

Unhealthy

Readiness probe failed: dial tcp 10.133.0.41:8080: connect: connection refused

kserve-ci-e2e-test

kubelet

isvc-predictive-lightgbm-predictor-75cb94f9f-fzvxw

Pulled

Container image "kserve/predictiveserver:latest" already present on machine

kserve-ci-e2e-test

kubelet

isvc-predictive-lightgbm-predictor-75cb94f9f-fzvxw

Created

Created container: kserve-container

kserve-ci-e2e-test

kubelet

isvc-predictive-lightgbm-predictor-75cb94f9f-fzvxw

Started

Started container kserve-container

kserve-ci-e2e-test

kubelet

isvc-predictive-lightgbm-predictor-75cb94f9f-fzvxw

Pulled

Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine

kserve-ci-e2e-test

kubelet

isvc-predictive-lightgbm-predictor-75cb94f9f-fzvxw

Created

Created container: kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-predictive-lightgbm-predictor-75cb94f9f-fzvxw

Started

Started container kube-rbac-proxy
(x3)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-predictive-lightgbm-predictor

FailedGetResourceMetric

failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x3)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-predictive-lightgbm-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x12)

kserve-ci-e2e-test

v1beta1Controllers

isvc-predictive-lightgbm

VirtualServiceCRDNotFound

Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.

kserve-ci-e2e-test

v1beta1Controllers

isvc-predictive-lightgbm

InferenceServiceReady

InferenceService [isvc-predictive-lightgbm] is Ready

kserve-ci-e2e-test

deployment-controller

isvc-predictive-sklearn-v2-predictor

ScalingReplicaSet

Scaled up replica set isvc-predictive-sklearn-v2-predictor-b5d4f6b79 from 0 to 1
(x3)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-predictive-lightgbm-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)

kserve-ci-e2e-test

replicaset-controller

isvc-predictive-sklearn-v2-predictor-b5d4f6b79

SuccessfulCreate

Created pod: isvc-predictive-sklearn-v2-predictor-b5d4f6b79-hvnxb

kserve-ci-e2e-test

kubelet

isvc-predictive-lightgbm-predictor-75cb94f9f-fzvxw

Killing

Stopping container kserve-container

kserve-ci-e2e-test

kubelet

isvc-predictive-lightgbm-predictor-75cb94f9f-fzvxw

Killing

Stopping container kube-rbac-proxy

kserve-ci-e2e-test

v1beta1Controllers

isvc-predictive-sklearn-v2

UpdateFailed

Failed to update status for InferenceService "isvc-predictive-sklearn-v2": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-predictive-sklearn-v2": the object has been modified; please apply your changes to the latest version and try again
(x3)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-predictive-lightgbm-predictor

FailedGetResourceMetric

failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)

kserve-ci-e2e-test

kubelet

isvc-predictive-sklearn-v2-predictor-b5d4f6b79-hvnxb

Pulled

Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:4ae4df75945dd799b0c11b5e37a48a5ba8230ff3507d345ea07c8897a8c39885" already present on machine

kserve-ci-e2e-test

kubelet

isvc-predictive-sklearn-v2-predictor-b5d4f6b79-hvnxb

Created

Created container: storage-initializer

kserve-ci-e2e-test

kubelet

isvc-predictive-sklearn-v2-predictor-b5d4f6b79-hvnxb

Started

Started container storage-initializer

kserve-ci-e2e-test

multus

isvc-predictive-sklearn-v2-predictor-b5d4f6b79-hvnxb

AddedInterface

Add eth0 [10.133.0.43/23] from ovn-kubernetes
(x10)

kserve-ci-e2e-test

kubelet

isvc-predictive-lightgbm-predictor-75cb94f9f-fzvxw

Unhealthy

Readiness probe failed: dial tcp 10.133.0.42:8080: connect: connection refused

kserve-ci-e2e-test

kubelet

isvc-predictive-lightgbm-predictor-75cb94f9f-fzvxw

Unhealthy

Readiness probe failed: Get "https://10.133.0.42:8643/healthz": dial tcp 10.133.0.42:8643: connect: connection refused

kserve-ci-e2e-test

kubelet

isvc-predictive-sklearn-v2-predictor-b5d4f6b79-hvnxb

Pulled

Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine

kserve-ci-e2e-test

kubelet

isvc-predictive-sklearn-v2-predictor-b5d4f6b79-hvnxb

Started

Started container kserve-container

kserve-ci-e2e-test

kubelet

isvc-predictive-sklearn-v2-predictor-b5d4f6b79-hvnxb

Started

Started container kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-predictive-sklearn-v2-predictor-b5d4f6b79-hvnxb

Pulled

Container image "kserve/predictiveserver:latest" already present on machine

kserve-ci-e2e-test

kubelet

isvc-predictive-sklearn-v2-predictor-b5d4f6b79-hvnxb

Created

Created container: kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-predictive-sklearn-v2-predictor-b5d4f6b79-hvnxb

Created

Created container: kserve-container
(x2)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-predictive-sklearn-v2-predictor

FailedGetResourceMetric

failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x2)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-predictive-sklearn-v2-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x12)

kserve-ci-e2e-test

v1beta1Controllers

isvc-predictive-sklearn-v2

VirtualServiceCRDNotFound

Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.

kserve-ci-e2e-test

v1beta1Controllers

isvc-predictive-sklearn-v2

InferenceServiceReady

InferenceService [isvc-predictive-sklearn-v2] is Ready

kserve-ci-e2e-test

deployment-controller

isvc-predictive-xgboost-v2-predictor

ScalingReplicaSet

Scaled up replica set isvc-predictive-xgboost-v2-predictor-6577c65fd8 from 0 to 1
(x4)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-predictive-sklearn-v2-predictor

FailedGetResourceMetric

failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
(x4)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-predictive-sklearn-v2-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)

kserve-ci-e2e-test

kubelet

isvc-predictive-sklearn-v2-predictor-b5d4f6b79-hvnxb

Killing

Stopping container kube-rbac-proxy

kserve-ci-e2e-test

v1beta1Controllers

isvc-predictive-xgboost-v2

UpdateFailed

Failed to update status for InferenceService "isvc-predictive-xgboost-v2": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-predictive-xgboost-v2": the object has been modified; please apply your changes to the latest version and try again

kserve-ci-e2e-test

kubelet

isvc-predictive-sklearn-v2-predictor-b5d4f6b79-hvnxb

Killing

Stopping container kserve-container

kserve-ci-e2e-test

replicaset-controller

isvc-predictive-xgboost-v2-predictor-6577c65fd8

SuccessfulCreate

Created pod: isvc-predictive-xgboost-v2-predictor-6577c65fd8-p87zp

kserve-ci-e2e-test

kubelet

isvc-predictive-xgboost-v2-predictor-6577c65fd8-p87zp

Created

Created container: storage-initializer
(x5)

kserve-ci-e2e-test

kubelet

isvc-predictive-sklearn-v2-predictor-b5d4f6b79-hvnxb

Unhealthy

Readiness probe failed: Get "http://10.133.0.43:8080/v2/models/isvc-predictive-sklearn-v2/ready": dial tcp 10.133.0.43:8080: connect: connection refused

kserve-ci-e2e-test

multus

isvc-predictive-xgboost-v2-predictor-6577c65fd8-p87zp

AddedInterface

Add eth0 [10.133.0.44/23] from ovn-kubernetes

kserve-ci-e2e-test

kubelet

isvc-predictive-xgboost-v2-predictor-6577c65fd8-p87zp

Pulled

Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:4ae4df75945dd799b0c11b5e37a48a5ba8230ff3507d345ea07c8897a8c39885" already present on machine

kserve-ci-e2e-test

kubelet

isvc-predictive-xgboost-v2-predictor-6577c65fd8-p87zp

Started

Started container storage-initializer

kserve-ci-e2e-test

kubelet

isvc-predictive-sklearn-v2-predictor-b5d4f6b79-hvnxb

Unhealthy

Readiness probe failed: Get "https://10.133.0.43:8643/healthz": dial tcp 10.133.0.43:8643: connect: connection refused

kserve-ci-e2e-test

kubelet

isvc-predictive-xgboost-v2-predictor-6577c65fd8-p87zp

Pulled

Container image "kserve/predictiveserver:latest" already present on machine

kserve-ci-e2e-test

kubelet

isvc-predictive-xgboost-v2-predictor-6577c65fd8-p87zp

Created

Created container: kserve-container

kserve-ci-e2e-test

kubelet

isvc-predictive-xgboost-v2-predictor-6577c65fd8-p87zp

Started

Started container kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-predictive-xgboost-v2-predictor-6577c65fd8-p87zp

Created

Created container: kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-predictive-xgboost-v2-predictor-6577c65fd8-p87zp

Pulled

Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine

kserve-ci-e2e-test

kubelet

isvc-predictive-xgboost-v2-predictor-6577c65fd8-p87zp

Started

Started container kserve-container
(x2)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-predictive-xgboost-v2-predictor

FailedGetResourceMetric

failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x2)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-predictive-xgboost-v2-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x12)

kserve-ci-e2e-test

v1beta1Controllers

isvc-predictive-xgboost-v2

VirtualServiceCRDNotFound

Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.

kserve-ci-e2e-test

v1beta1Controllers

isvc-predictive-xgboost-v2

InferenceServiceReady

InferenceService [isvc-predictive-xgboost-v2] is Ready

kserve-ci-e2e-test

replicaset-controller

isvc-predictive-lightgbm-v2-predictor-865b4598f7

SuccessfulCreate

Created pod: isvc-predictive-lightgbm-v2-predictor-865b4598f7-vszp2

kserve-ci-e2e-test

multus

isvc-predictive-lightgbm-v2-predictor-865b4598f7-vszp2

AddedInterface

Add eth0 [10.133.0.45/23] from ovn-kubernetes

kserve-ci-e2e-test

deployment-controller

isvc-predictive-lightgbm-v2-predictor

ScalingReplicaSet

Scaled up replica set isvc-predictive-lightgbm-v2-predictor-865b4598f7 from 0 to 1

kserve-ci-e2e-test

v1beta1Controllers

isvc-predictive-lightgbm-v2

UpdateFailed

Failed to update status for InferenceService "isvc-predictive-lightgbm-v2": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-predictive-lightgbm-v2": the object has been modified; please apply your changes to the latest version and try again
(x4)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-predictive-xgboost-v2-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)

kserve-ci-e2e-test

kubelet

isvc-predictive-lightgbm-v2-predictor-865b4598f7-vszp2

Pulled

Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:4ae4df75945dd799b0c11b5e37a48a5ba8230ff3507d345ea07c8897a8c39885" already present on machine
(x4)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-predictive-xgboost-v2-predictor

FailedGetResourceMetric

failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)

kserve-ci-e2e-test

kubelet

isvc-predictive-xgboost-v2-predictor-6577c65fd8-p87zp

Killing

Stopping container kserve-container

kserve-ci-e2e-test

kubelet

isvc-predictive-xgboost-v2-predictor-6577c65fd8-p87zp

Killing

Stopping container kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-predictive-lightgbm-v2-predictor-865b4598f7-vszp2

Created

Created container: storage-initializer

kserve-ci-e2e-test

kubelet

isvc-predictive-lightgbm-v2-predictor-865b4598f7-vszp2

Started

Started container storage-initializer

kserve-ci-e2e-test

kubelet

isvc-predictive-xgboost-v2-predictor-6577c65fd8-p87zp

Unhealthy

Readiness probe failed: Get "https://10.133.0.44:8643/healthz": dial tcp 10.133.0.44:8643: connect: connection refused
(x5)

kserve-ci-e2e-test

kubelet

isvc-predictive-xgboost-v2-predictor-6577c65fd8-p87zp

Unhealthy

Readiness probe failed: Get "http://10.133.0.44:8080/v2/models/isvc-predictive-xgboost-v2/ready": dial tcp 10.133.0.44:8080: connect: connection refused

kserve-ci-e2e-test

kubelet

isvc-predictive-lightgbm-v2-predictor-865b4598f7-vszp2

Created

Created container: kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-predictive-lightgbm-v2-predictor-865b4598f7-vszp2

Started

Started container kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-predictive-lightgbm-v2-predictor-865b4598f7-vszp2

Pulled

Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine

kserve-ci-e2e-test

kubelet

isvc-predictive-lightgbm-v2-predictor-865b4598f7-vszp2

Pulled

Container image "kserve/predictiveserver:latest" already present on machine

kserve-ci-e2e-test

kubelet

isvc-predictive-lightgbm-v2-predictor-865b4598f7-vszp2

Created

Created container: kserve-container

kserve-ci-e2e-test

kubelet

isvc-predictive-lightgbm-v2-predictor-865b4598f7-vszp2

Started

Started container kserve-container
(x2)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-predictive-lightgbm-v2-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x2)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-predictive-lightgbm-v2-predictor

FailedGetResourceMetric

failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x4)

kserve-ci-e2e-test

kubelet

isvc-predictive-lightgbm-v2-predictor-865b4598f7-vszp2

Unhealthy

Readiness probe failed: Get "http://10.133.0.45:8080/v2/models/isvc-predictive-lightgbm-v2/ready": dial tcp 10.133.0.45:8080: connect: connection refused
(x3)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-predictive-lightgbm-v2-predictor

FailedGetResourceMetric

failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
(x3)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-predictive-lightgbm-v2-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)

kserve-ci-e2e-test

v1beta1Controllers

isvc-predictive-lightgbm-v2

InferenceServiceReady

InferenceService [isvc-predictive-lightgbm-v2] is Ready
(x13)

kserve-ci-e2e-test

v1beta1Controllers

isvc-predictive-lightgbm-v2

VirtualServiceCRDNotFound

Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.

kserve-ci-e2e-test

replicaset-controller

isvc-sklearn-scheduler-predictor-8d8d684fd

SuccessfulCreate

Created pod: isvc-sklearn-scheduler-predictor-8d8d684fd-rrrjs

kserve-ci-e2e-test

v1beta1Controllers

isvc-sklearn-scheduler

InternalError

fails to update InferenceService status: Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-sklearn-scheduler": the object has been modified; please apply your changes to the latest version and try again

kserve-ci-e2e-test

kubelet

isvc-predictive-lightgbm-v2-predictor-865b4598f7-vszp2

Killing

Stopping container kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-predictive-lightgbm-v2-predictor-865b4598f7-vszp2

Killing

Stopping container kserve-container

kserve-ci-e2e-test

v1beta1Controllers

isvc-sklearn-scheduler

UpdateFailed

Failed to update status for InferenceService "isvc-sklearn-scheduler": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-sklearn-scheduler": the object has been modified; please apply your changes to the latest version and try again
(x5)

kserve-ci-e2e-test

v1beta1Controllers

isvc-sklearn-scheduler

VirtualServiceCRDNotFound

Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.

kserve-ci-e2e-test

deployment-controller

isvc-sklearn-scheduler-predictor

ScalingReplicaSet

Scaled up replica set isvc-sklearn-scheduler-predictor-8d8d684fd from 0 to 1

kserve-ci-e2e-test

v1beta1Controllers

isvc-sklearn

UpdateFailed

Failed to update status for InferenceService "isvc-sklearn": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-sklearn": the object has been modified; please apply your changes to the latest version and try again

kserve-ci-e2e-test

kubelet

isvc-sklearn-predictor-5cf96b68d-vjw9m

Pulled

Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:4ae4df75945dd799b0c11b5e37a48a5ba8230ff3507d345ea07c8897a8c39885" already present on machine

kserve-ci-e2e-test

deployment-controller

isvc-sklearn-predictor

ScalingReplicaSet

Scaled up replica set isvc-sklearn-predictor-5cf96b68d from 0 to 1

kserve-ci-e2e-test

replicaset-controller

isvc-sklearn-predictor-5cf96b68d

SuccessfulCreate

Created pod: isvc-sklearn-predictor-5cf96b68d-vjw9m

kserve-ci-e2e-test

kubelet

isvc-predictive-lightgbm-v2-predictor-865b4598f7-vszp2

Unhealthy

Readiness probe failed: Get "https://10.133.0.45:8643/healthz": dial tcp 10.133.0.45:8643: connect: connection refused

kserve-ci-e2e-test

kubelet

isvc-sklearn-predictor-5cf96b68d-vjw9m

Started

Started container storage-initializer

kserve-ci-e2e-test

multus

isvc-sklearn-predictor-5cf96b68d-vjw9m

AddedInterface

Add eth0 [10.133.0.46/23] from ovn-kubernetes

kserve-ci-e2e-test

kubelet

isvc-sklearn-predictor-5cf96b68d-vjw9m

Created

Created container: storage-initializer

kserve-ci-e2e-test

kubelet

isvc-sklearn-predictor-5cf96b68d-vjw9m

Pulled

Container image "quay.io/opendatahub/sklearn-serving-runtime:odh-pr-1443" already present on machine

kserve-ci-e2e-test

kubelet

isvc-sklearn-predictor-5cf96b68d-vjw9m

Created

Created container: kserve-container

kserve-ci-e2e-test

kubelet

isvc-sklearn-predictor-5cf96b68d-vjw9m

Started

Started container kserve-container

kserve-ci-e2e-test

kubelet

isvc-sklearn-predictor-5cf96b68d-vjw9m

Started

Started container kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-sklearn-predictor-5cf96b68d-vjw9m

Created

Created container: kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-sklearn-predictor-5cf96b68d-vjw9m

Pulled

Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine
(x2)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-sklearn-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x2)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-sklearn-predictor

FailedGetResourceMetric

failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x12)

kserve-ci-e2e-test

v1beta1Controllers

isvc-sklearn

VirtualServiceCRDNotFound

Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.

kserve-ci-e2e-test

v1beta1Controllers

isvc-sklearn

InferenceServiceReady

InferenceService [isvc-sklearn] is Ready
(x3)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-sklearn-predictor

FailedGetResourceMetric

failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
(x3)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-sklearn-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)

kserve-ci-e2e-test

replicaset-controller

sklearn-v2-mlserver-predictor-65d8664766

SuccessfulCreate

Created pod: sklearn-v2-mlserver-predictor-65d8664766-jhmvx

kserve-ci-e2e-test

kubelet

sklearn-v2-mlserver-predictor-65d8664766-jhmvx

FailedMount

MountVolume.SetUp failed for volume "proxy-tls" : secret "sklearn-v2-mlserver-predictor-serving-cert" not found

kserve-ci-e2e-test

kubelet

isvc-sklearn-predictor-5cf96b68d-vjw9m

Killing

Stopping container kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-sklearn-predictor-5cf96b68d-vjw9m

Killing

Stopping container kserve-container

kserve-ci-e2e-test

v1beta1Controllers

sklearn-v2-mlserver

UpdateFailed

Failed to update status for InferenceService "sklearn-v2-mlserver": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "sklearn-v2-mlserver": the object has been modified; please apply your changes to the latest version and try again

kserve-ci-e2e-test

deployment-controller

sklearn-v2-mlserver-predictor

ScalingReplicaSet

Scaled up replica set sklearn-v2-mlserver-predictor-65d8664766 from 0 to 1

kserve-ci-e2e-test

kubelet

sklearn-v2-mlserver-predictor-65d8664766-jhmvx

Created

Created container: storage-initializer

kserve-ci-e2e-test

kubelet

sklearn-v2-mlserver-predictor-65d8664766-jhmvx

Pulled

Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:4ae4df75945dd799b0c11b5e37a48a5ba8230ff3507d345ea07c8897a8c39885" already present on machine

kserve-ci-e2e-test

multus

sklearn-v2-mlserver-predictor-65d8664766-jhmvx

AddedInterface

Add eth0 [10.133.0.47/23] from ovn-kubernetes

kserve-ci-e2e-test

kubelet

sklearn-v2-mlserver-predictor-65d8664766-jhmvx

Started

Started container storage-initializer
(x9)

kserve-ci-e2e-test

kubelet

isvc-sklearn-predictor-5cf96b68d-vjw9m

Unhealthy

Readiness probe failed: dial tcp 10.133.0.46:8080: connect: connection refused

kserve-ci-e2e-test

kubelet

isvc-sklearn-predictor-5cf96b68d-vjw9m

Unhealthy

Readiness probe failed: Get "https://10.133.0.46:8643/healthz": dial tcp 10.133.0.46:8643: connect: connection refused

kserve-ci-e2e-test

kubelet

sklearn-v2-mlserver-predictor-65d8664766-jhmvx

Started

Started container kserve-container

kserve-ci-e2e-test

kubelet

sklearn-v2-mlserver-predictor-65d8664766-jhmvx

Pulled

Container image "docker.io/seldonio/mlserver:1.7.1" already present on machine

kserve-ci-e2e-test

kubelet

sklearn-v2-mlserver-predictor-65d8664766-jhmvx

Created

Created container: kserve-container

kserve-ci-e2e-test

kubelet

sklearn-v2-mlserver-predictor-65d8664766-jhmvx

Pulled

Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine

kserve-ci-e2e-test

kubelet

sklearn-v2-mlserver-predictor-65d8664766-jhmvx

Created

Created container: kube-rbac-proxy

kserve-ci-e2e-test

kubelet

Namespace | Component | RelatedObject | Reason | Message
 | | sklearn-v2-mlserver-predictor-65d8664766-jhmvx | Started | Started container kube-rbac-proxy
kserve-ci-e2e-test | kubelet | sklearn-v2-mlserver-predictor-65d8664766-jhmvx | Unhealthy | Readiness probe failed: HTTP probe failed with statuscode: 400 (x2)
kserve-ci-e2e-test | horizontal-pod-autoscaler | sklearn-v2-mlserver-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API (x2)
kserve-ci-e2e-test | horizontal-pod-autoscaler | sklearn-v2-mlserver-predictor | FailedGetResourceMetric | failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API (x12)
kserve-ci-e2e-test | v1beta1Controllers | sklearn-v2-mlserver | VirtualServiceCRDNotFound | Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.
kserve-ci-e2e-test | v1beta1Controllers | sklearn-v2-mlserver | InferenceServiceReady | InferenceService [sklearn-v2-mlserver] is Ready
kserve-ci-e2e-test | v1beta1Controllers | isvc-sklearn-runtime | UpdateFailed | Failed to update status for InferenceService "isvc-sklearn-runtime": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-sklearn-runtime": the object has been modified; please apply your changes to the latest version and try again (x2)
kserve-ci-e2e-test | horizontal-pod-autoscaler | sklearn-v2-mlserver-predictor | FailedGetResourceMetric | failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready) (x2)
kserve-ci-e2e-test | horizontal-pod-autoscaler | sklearn-v2-mlserver-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
kserve-ci-e2e-test | kubelet | sklearn-v2-mlserver-predictor-65d8664766-jhmvx | Killing | Stopping container kserve-container
kserve-ci-e2e-test | kubelet | sklearn-v2-mlserver-predictor-65d8664766-jhmvx | Killing | Stopping container kube-rbac-proxy
kserve-ci-e2e-test | deployment-controller | isvc-sklearn-runtime-predictor | ScalingReplicaSet | Scaled up replica set isvc-sklearn-runtime-predictor-d64d94cdb from 0 to 1
kserve-ci-e2e-test | replicaset-controller | isvc-sklearn-runtime-predictor-d64d94cdb | SuccessfulCreate | Created pod: isvc-sklearn-runtime-predictor-d64d94cdb-dmwhw
kserve-ci-e2e-test | kubelet | isvc-sklearn-runtime-predictor-d64d94cdb-dmwhw | Started | Started container storage-initializer
kserve-ci-e2e-test | kubelet | isvc-sklearn-runtime-predictor-d64d94cdb-dmwhw | FailedMount | MountVolume.SetUp failed for volume "proxy-tls" : secret "isvc-sklearn-runtime-predictor-serving-cert" not found
kserve-ci-e2e-test | multus | isvc-sklearn-runtime-predictor-d64d94cdb-dmwhw | AddedInterface | Add eth0 [10.133.0.48/23] from ovn-kubernetes
kserve-ci-e2e-test | kubelet | isvc-sklearn-runtime-predictor-d64d94cdb-dmwhw | Pulled | Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:4ae4df75945dd799b0c11b5e37a48a5ba8230ff3507d345ea07c8897a8c39885" already present on machine
kserve-ci-e2e-test | kubelet | isvc-sklearn-runtime-predictor-d64d94cdb-dmwhw | Created | Created container: storage-initializer
kserve-ci-e2e-test | kubelet | sklearn-v2-mlserver-predictor-65d8664766-jhmvx | Unhealthy | Readiness probe failed: Get "https://10.133.0.47:8643/healthz": dial tcp 10.133.0.47:8643: connect: connection refused
kserve-ci-e2e-test | kubelet | isvc-sklearn-runtime-predictor-d64d94cdb-dmwhw | Pulled | Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine
kserve-ci-e2e-test | kubelet | isvc-sklearn-runtime-predictor-d64d94cdb-dmwhw | Started | Started container kserve-container
kserve-ci-e2e-test | kubelet | isvc-sklearn-runtime-predictor-d64d94cdb-dmwhw | Created | Created container: kserve-container
kserve-ci-e2e-test | kubelet | isvc-sklearn-runtime-predictor-d64d94cdb-dmwhw | Pulled | Container image "quay.io/opendatahub/sklearn-serving-runtime:odh-pr-1443" already present on machine
kserve-ci-e2e-test | kubelet | isvc-sklearn-runtime-predictor-d64d94cdb-dmwhw | Started | Started container kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-sklearn-runtime-predictor-d64d94cdb-dmwhw | Created | Created container: kube-rbac-proxy (x3)
kserve-ci-e2e-test | kubelet | isvc-sklearn-runtime-predictor-d64d94cdb-dmwhw | Unhealthy | Readiness probe failed: dial tcp 10.133.0.48:8080: connect: connection refused (x12)
kserve-ci-e2e-test | v1beta1Controllers | isvc-sklearn-runtime | VirtualServiceCRDNotFound | Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.
kserve-ci-e2e-test | v1beta1Controllers | isvc-sklearn-runtime | InferenceServiceReady | InferenceService [isvc-sklearn-runtime] is Ready (x2)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-sklearn-runtime-predictor | FailedGetResourceMetric | failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API (x2)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-sklearn-runtime-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
kserve-ci-e2e-test | kubelet | isvc-sklearn-runtime-predictor-d64d94cdb-dmwhw | Killing | Stopping container kserve-container
kserve-ci-e2e-test | deployment-controller | isvc-sklearn-v2-runtime-predictor | ScalingReplicaSet | Scaled up replica set isvc-sklearn-v2-runtime-predictor-6d84c876f4 from 0 to 1
kserve-ci-e2e-test | kubelet | isvc-sklearn-runtime-predictor-d64d94cdb-dmwhw | Killing | Stopping container kube-rbac-proxy
kserve-ci-e2e-test | v1beta1Controllers | isvc-sklearn-v2-runtime | UpdateFailed | Failed to update status for InferenceService "isvc-sklearn-v2-runtime": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-sklearn-v2-runtime": the object has been modified; please apply your changes to the latest version and try again
kserve-ci-e2e-test | replicaset-controller | isvc-sklearn-v2-runtime-predictor-6d84c876f4 | SuccessfulCreate | Created pod: isvc-sklearn-v2-runtime-predictor-6d84c876f4-swr25
kserve-ci-e2e-test | kubelet | isvc-sklearn-v2-runtime-predictor-6d84c876f4-swr25 | FailedMount | MountVolume.SetUp failed for volume "proxy-tls" : secret "isvc-sklearn-v2-runtime-predictor-serving-cert" not found
kserve-ci-e2e-test | multus | isvc-sklearn-v2-runtime-predictor-6d84c876f4-swr25 | AddedInterface | Add eth0 [10.133.0.49/23] from ovn-kubernetes
kserve-ci-e2e-test | kubelet | isvc-sklearn-v2-runtime-predictor-6d84c876f4-swr25 | Created | Created container: storage-initializer
kserve-ci-e2e-test | kubelet | isvc-sklearn-v2-runtime-predictor-6d84c876f4-swr25 | Pulled | Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:4ae4df75945dd799b0c11b5e37a48a5ba8230ff3507d345ea07c8897a8c39885" already present on machine
kserve-ci-e2e-test | kubelet | isvc-sklearn-v2-runtime-predictor-6d84c876f4-swr25 | Started | Started container storage-initializer
kserve-ci-e2e-test | kubelet | isvc-sklearn-v2-runtime-predictor-6d84c876f4-swr25 | Started | Started container kserve-container
kserve-ci-e2e-test | kubelet | isvc-sklearn-v2-runtime-predictor-6d84c876f4-swr25 | Started | Started container kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-sklearn-v2-runtime-predictor-6d84c876f4-swr25 | Created | Created container: kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-sklearn-v2-runtime-predictor-6d84c876f4-swr25 | Pulled | Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine
kserve-ci-e2e-test | kubelet | isvc-sklearn-v2-runtime-predictor-6d84c876f4-swr25 | Created | Created container: kserve-container
kserve-ci-e2e-test | kubelet | isvc-sklearn-v2-runtime-predictor-6d84c876f4-swr25 | Pulled | Container image "docker.io/seldonio/mlserver:1.7.1" already present on machine
kserve-ci-e2e-test | kubelet | isvc-sklearn-v2-runtime-predictor-6d84c876f4-swr25 | Unhealthy | Readiness probe failed: HTTP probe failed with statuscode: 400 (x3)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-sklearn-v2-runtime-predictor | FailedGetResourceMetric | failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API (x3)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-sklearn-v2-runtime-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API (x12)
kserve-ci-e2e-test | v1beta1Controllers | isvc-sklearn-v2-runtime | VirtualServiceCRDNotFound | Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.
kserve-ci-e2e-test | v1beta1Controllers | isvc-sklearn-v2-runtime | InferenceServiceReady | InferenceService [isvc-sklearn-v2-runtime] is Ready
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-sklearn-v2-runtime-predictor | FailedGetResourceMetric | failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-sklearn-v2-runtime-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
kserve-ci-e2e-test | kubelet | isvc-sklearn-v2-runtime-predictor-6d84c876f4-swr25 | Killing | Stopping container kube-rbac-proxy
kserve-ci-e2e-test | replicaset-controller | isvc-sklearn-v2-predictor-7c66c8c59d | SuccessfulCreate | Created pod: isvc-sklearn-v2-predictor-7c66c8c59d-6k9jb
kserve-ci-e2e-test | v1beta1Controllers | isvc-sklearn-v2 | UpdateFailed | Failed to update status for InferenceService "isvc-sklearn-v2": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-sklearn-v2": the object has been modified; please apply your changes to the latest version and try again
kserve-ci-e2e-test | kubelet | isvc-sklearn-v2-predictor-7c66c8c59d-6k9jb | FailedMount | MountVolume.SetUp failed for volume "proxy-tls" : secret "isvc-sklearn-v2-predictor-serving-cert" not found
kserve-ci-e2e-test | kubelet | isvc-sklearn-v2-runtime-predictor-6d84c876f4-swr25 | Killing | Stopping container kserve-container
kserve-ci-e2e-test | deployment-controller | isvc-sklearn-v2-predictor | ScalingReplicaSet | Scaled up replica set isvc-sklearn-v2-predictor-7c66c8c59d from 0 to 1
kserve-ci-e2e-test | multus | isvc-sklearn-v2-predictor-7c66c8c59d-6k9jb | AddedInterface | Add eth0 [10.133.0.50/23] from ovn-kubernetes
kserve-ci-e2e-test | kubelet | isvc-sklearn-v2-predictor-7c66c8c59d-6k9jb | Pulled | Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:4ae4df75945dd799b0c11b5e37a48a5ba8230ff3507d345ea07c8897a8c39885" already present on machine
kserve-ci-e2e-test | kubelet | isvc-sklearn-v2-predictor-7c66c8c59d-6k9jb | Created | Created container: storage-initializer
kserve-ci-e2e-test | kubelet | isvc-sklearn-v2-predictor-7c66c8c59d-6k9jb | Started | Started container storage-initializer
kserve-ci-e2e-test | kubelet | isvc-sklearn-v2-runtime-predictor-6d84c876f4-swr25 | Unhealthy | Readiness probe failed: Get "https://10.133.0.49:8643/healthz": dial tcp 10.133.0.49:8643: connect: connection refused
kserve-ci-e2e-test | kubelet | isvc-sklearn-v2-predictor-7c66c8c59d-6k9jb | Created | Created container: kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-sklearn-v2-predictor-7c66c8c59d-6k9jb | Started | Started container kserve-container
kserve-ci-e2e-test | kubelet | isvc-sklearn-v2-predictor-7c66c8c59d-6k9jb | Pulled | Container image "quay.io/opendatahub/sklearn-serving-runtime:odh-pr-1443" already present on machine
kserve-ci-e2e-test | kubelet | isvc-sklearn-v2-predictor-7c66c8c59d-6k9jb | Started | Started container kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-sklearn-v2-predictor-7c66c8c59d-6k9jb | Created | Created container: kserve-container
kserve-ci-e2e-test | kubelet | isvc-sklearn-v2-predictor-7c66c8c59d-6k9jb | Pulled | Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine (x3)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-sklearn-v2-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API (x3)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-sklearn-v2-predictor | FailedGetResourceMetric | failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
kserve-ci-e2e-test | v1beta1Controllers | isvc-sklearn-v2 | InferenceServiceReady | InferenceService [isvc-sklearn-v2] is Ready (x13)
kserve-ci-e2e-test | v1beta1Controllers | isvc-sklearn-v2 | VirtualServiceCRDNotFound | Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true. (x2)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-sklearn-v2-predictor | FailedGetResourceMetric | failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready) (x2)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-sklearn-v2-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
kserve-ci-e2e-test | kubelet | isvc-sklearn-v2-predictor-7c66c8c59d-6k9jb | Killing | Stopping container kserve-container
kserve-ci-e2e-test | kubelet | isvc-sklearn-v2-predictor-7c66c8c59d-6k9jb | Killing | Stopping container kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-sklearn-v2-mixed-predictor-748989b567-ggmcl | FailedMount | MountVolume.SetUp failed for volume "proxy-tls" : secret "isvc-sklearn-v2-mixed-predictor-serving-cert" not found
kserve-ci-e2e-test | v1beta1Controllers | isvc-sklearn-v2-mixed | UpdateFailed | Failed to update status for InferenceService "isvc-sklearn-v2-mixed": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-sklearn-v2-mixed": the object has been modified; please apply your changes to the latest version and try again
kserve-ci-e2e-test | deployment-controller | isvc-sklearn-v2-mixed-predictor | ScalingReplicaSet | Scaled up replica set isvc-sklearn-v2-mixed-predictor-748989b567 from 0 to 1
kserve-ci-e2e-test | replicaset-controller | isvc-sklearn-v2-mixed-predictor-748989b567 | SuccessfulCreate | Created pod: isvc-sklearn-v2-mixed-predictor-748989b567-ggmcl
kserve-ci-e2e-test | kubelet | isvc-sklearn-v2-mixed-predictor-748989b567-ggmcl | Started | Started container storage-initializer
kserve-ci-e2e-test | multus | isvc-sklearn-v2-mixed-predictor-748989b567-ggmcl | AddedInterface | Add eth0 [10.133.0.51/23] from ovn-kubernetes
kserve-ci-e2e-test | kubelet | isvc-sklearn-v2-mixed-predictor-748989b567-ggmcl | Created | Created container: storage-initializer
kserve-ci-e2e-test | kubelet | isvc-sklearn-v2-mixed-predictor-748989b567-ggmcl | Pulled | Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:4ae4df75945dd799b0c11b5e37a48a5ba8230ff3507d345ea07c8897a8c39885" already present on machine
kserve-ci-e2e-test | kubelet | isvc-sklearn-v2-predictor-7c66c8c59d-6k9jb | Unhealthy | Readiness probe failed: Get "https://10.133.0.50:8643/healthz": dial tcp 10.133.0.50:8643: connect: connection refused (x9)
kserve-ci-e2e-test | kubelet | isvc-sklearn-v2-predictor-7c66c8c59d-6k9jb | Unhealthy | Readiness probe failed: dial tcp 10.133.0.50:8080: connect: connection refused
kserve-ci-e2e-test | kubelet | isvc-sklearn-v2-mixed-predictor-748989b567-ggmcl | Started | Started container kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-sklearn-v2-mixed-predictor-748989b567-ggmcl | Created | Created container: kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-sklearn-v2-mixed-predictor-748989b567-ggmcl | Pulled | Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine
kserve-ci-e2e-test | kubelet | isvc-sklearn-v2-mixed-predictor-748989b567-ggmcl | Started | Started container kserve-container
kserve-ci-e2e-test | kubelet | isvc-sklearn-v2-mixed-predictor-748989b567-ggmcl | Created | Created container: kserve-container
kserve-ci-e2e-test | kubelet | isvc-sklearn-v2-mixed-predictor-748989b567-ggmcl | Pulled | Container image "quay.io/opendatahub/sklearn-serving-runtime:odh-pr-1443" already present on machine (x2)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-sklearn-v2-mixed-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API (x2)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-sklearn-v2-mixed-predictor | FailedGetResourceMetric | failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API (x14)
kserve-ci-e2e-test | v1beta1Controllers | isvc-sklearn-v2-mixed | VirtualServiceCRDNotFound | Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.
kserve-ci-e2e-test | v1beta1Controllers | isvc-sklearn-v2-mixed | InferenceServiceReady | InferenceService [isvc-sklearn-v2-mixed] is Ready (x3)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-sklearn-v2-mixed-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready) (x3)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-sklearn-v2-mixed-predictor | FailedGetResourceMetric | failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
kserve-ci-e2e-test | replicaset-controller | isvc-tensorflow-predictor-6756f669d7 | SuccessfulCreate | Created pod: isvc-tensorflow-predictor-6756f669d7-fqjht
kserve-ci-e2e-test | v1beta1Controllers | isvc-tensorflow | UpdateFailed | Failed to update status for InferenceService "isvc-tensorflow": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-tensorflow": the object has been modified; please apply your changes to the latest version and try again
kserve-ci-e2e-test | deployment-controller | isvc-tensorflow-predictor | ScalingReplicaSet | Scaled up replica set isvc-tensorflow-predictor-6756f669d7 from 0 to 1
kserve-ci-e2e-test | multus | isvc-tensorflow-predictor-6756f669d7-fqjht | AddedInterface | Add eth0 [10.133.0.52/23] from ovn-kubernetes
kserve-ci-e2e-test | kubelet | isvc-sklearn-v2-mixed-predictor-748989b567-ggmcl | Killing | Stopping container kserve-container
kserve-ci-e2e-test | kubelet | isvc-sklearn-v2-mixed-predictor-748989b567-ggmcl | Killing | Stopping container kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-tensorflow-predictor-6756f669d7-fqjht | Created | Created container: storage-initializer
kserve-ci-e2e-test | kubelet | isvc-tensorflow-predictor-6756f669d7-fqjht | Pulled | Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:4ae4df75945dd799b0c11b5e37a48a5ba8230ff3507d345ea07c8897a8c39885" already present on machine
kserve-ci-e2e-test | kubelet | isvc-tensorflow-predictor-6756f669d7-fqjht | Started | Started container storage-initializer
kserve-ci-e2e-test | kubelet | isvc-sklearn-v2-mixed-predictor-748989b567-ggmcl | Unhealthy | Readiness probe failed: Get "https://10.133.0.51:8643/healthz": dial tcp 10.133.0.51:8643: connect: connection refused (x9)
kserve-ci-e2e-test | kubelet | isvc-sklearn-v2-mixed-predictor-748989b567-ggmcl | Unhealthy | Readiness probe failed: dial tcp 10.133.0.51:8080: connect: connection refused
kserve-ci-e2e-test | kubelet | isvc-tensorflow-predictor-6756f669d7-fqjht | Pulling | Pulling image "tensorflow/serving:2.6.2"
kserve-ci-e2e-test | kubelet | isvc-tensorflow-predictor-6756f669d7-fqjht | Created | Created container: kserve-container
kserve-ci-e2e-test | kubelet | isvc-tensorflow-predictor-6756f669d7-fqjht | Started | Started container kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-tensorflow-predictor-6756f669d7-fqjht | Created | Created container: kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-tensorflow-predictor-6756f669d7-fqjht | Pulled | Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine
kserve-ci-e2e-test | kubelet | isvc-tensorflow-predictor-6756f669d7-fqjht | Started | Started container kserve-container
kserve-ci-e2e-test | kubelet | isvc-tensorflow-predictor-6756f669d7-fqjht | Pulled | Successfully pulled image "tensorflow/serving:2.6.2" in 3.851s (3.851s including waiting). Image size: 425873876 bytes.
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-tensorflow-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-tensorflow-predictor | FailedGetResourceMetric | failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API (x3)
kserve-ci-e2e-test | kubelet | isvc-tensorflow-predictor-6756f669d7-fqjht | Unhealthy | Readiness probe failed: dial tcp 10.133.0.52:8080: connect: connection refused (x12)
kserve-ci-e2e-test | v1beta1Controllers | isvc-tensorflow | VirtualServiceCRDNotFound | Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.
kserve-ci-e2e-test | v1beta1Controllers | isvc-tensorflow | InferenceServiceReady | InferenceService [isvc-tensorflow] is Ready
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-tensorflow-predictor | FailedGetResourceMetric | failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-tensorflow-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
kserve-ci-e2e-test | kubelet | isvc-tensorflow-predictor-6756f669d7-fqjht | Killing | Stopping container kserve-container
kserve-ci-e2e-test | deployment-controller | isvc-tensorflow-runtime-predictor | ScalingReplicaSet | Scaled up replica set isvc-tensorflow-runtime-predictor-8699d78cf from 0 to 1
kserve-ci-e2e-test | kubelet | isvc-tensorflow-runtime-predictor-8699d78cf-vn98w | Pulled | Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:4ae4df75945dd799b0c11b5e37a48a5ba8230ff3507d345ea07c8897a8c39885" already present on machine
kserve-ci-e2e-test | multus | isvc-tensorflow-runtime-predictor-8699d78cf-vn98w | AddedInterface | Add eth0 [10.133.0.53/23] from ovn-kubernetes
kserve-ci-e2e-test | v1beta1Controllers | isvc-tensorflow-runtime | UpdateFailed | Failed to update status for InferenceService "isvc-tensorflow-runtime": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-tensorflow-runtime": the object has been modified; please apply your changes to the latest version and try again
kserve-ci-e2e-test | kubelet | isvc-tensorflow-runtime-predictor-8699d78cf-vn98w | Started | Started container storage-initializer
kserve-ci-e2e-test | kubelet | isvc-tensorflow-predictor-6756f669d7-fqjht | Killing | Stopping container kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-tensorflow-runtime-predictor-8699d78cf-vn98w | Created | Created container: storage-initializer
kserve-ci-e2e-test | replicaset-controller | isvc-tensorflow-runtime-predictor-8699d78cf | SuccessfulCreate | Created pod: isvc-tensorflow-runtime-predictor-8699d78cf-vn98w
kserve-ci-e2e-test | kubelet | isvc-tensorflow-runtime-predictor-8699d78cf-vn98w | Started | Started container kserve-container
kserve-ci-e2e-test | kubelet | isvc-tensorflow-runtime-predictor-8699d78cf-vn98w | Started | Started container kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-tensorflow-runtime-predictor-8699d78cf-vn98w | Created | Created container: kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-tensorflow-runtime-predictor-8699d78cf-vn98w | Pulled | Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine
kserve-ci-e2e-test | kubelet | isvc-tensorflow-runtime-predictor-8699d78cf-vn98w | Pulled | Container image "tensorflow/serving:2.6.2" already present on machine
kserve-ci-e2e-test | kubelet | isvc-tensorflow-runtime-predictor-8699d78cf-vn98w | Created | Created container: kserve-container (x3)
kserve-ci-e2e-test | kubelet | isvc-tensorflow-runtime-predictor-8699d78cf-vn98w | Unhealthy | Readiness probe failed: dial tcp 10.133.0.53:8080: connect: connection refused (x12)
kserve-ci-e2e-test | v1beta1Controllers | isvc-tensorflow-runtime | VirtualServiceCRDNotFound | Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.
kserve-ci-e2e-test | v1beta1Controllers | isvc-tensorflow-runtime | InferenceServiceReady | InferenceService [isvc-tensorflow-runtime] is Ready (x6)
kserve-ci-e2e-test | kubelet | isvc-tensorflow-predictor-6756f669d7-fqjht | Unhealthy | Readiness probe failed: Get "https://10.133.0.52:8643/healthz": dial tcp 10.133.0.52:8643: connect: connection refused (x2)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-tensorflow-runtime-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API (x2)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-tensorflow-runtime-predictor | FailedGetResourceMetric | failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
kserve-ci-e2e-test | kubelet | isvc-tensorflow-runtime-predictor-8699d78cf-vn98w | Killing | Stopping container kserve-container
kserve-ci-e2e-test | v1beta1Controllers | isvc-triton | UpdateFailed | Failed to update status for InferenceService "isvc-triton": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-triton": the object has been modified; please apply your changes to the latest version and try again
kserve-ci-e2e-test | deployment-controller | isvc-triton-predictor | ScalingReplicaSet | Scaled up replica set isvc-triton-predictor-84bb65d94b from 0 to 1
kserve-ci-e2e-test | kubelet | isvc-tensorflow-runtime-predictor-8699d78cf-vn98w | Killing | Stopping container kube-rbac-proxy
kserve-ci-e2e-test | replicaset-controller | isvc-triton-predictor-84bb65d94b | SuccessfulCreate | Created pod: isvc-triton-predictor-84bb65d94b-w8r97
kserve-ci-e2e-test | kubelet | isvc-triton-predictor-84bb65d94b-w8r97 | Started | Started container storage-initializer
kserve-ci-e2e-test | kubelet | isvc-triton-predictor-84bb65d94b-w8r97 | Created | Created container: storage-initializer
kserve-ci-e2e-test | kubelet | isvc-triton-predictor-84bb65d94b-w8r97 | Pulled | Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:4ae4df75945dd799b0c11b5e37a48a5ba8230ff3507d345ea07c8897a8c39885" already present on machine
kserve-ci-e2e-test | multus | isvc-triton-predictor-84bb65d94b-w8r97 | AddedInterface | Add eth0 [10.133.0.54/23] from ovn-kubernetes
kserve-ci-e2e-test | kubelet | isvc-triton-predictor-84bb65d94b-w8r97 | Pulling | Pulling image "nvcr.io/nvidia/tritonserver:23.05-py3" (x6)
kserve-ci-e2e-test | kubelet | isvc-tensorflow-runtime-predictor-8699d78cf-vn98w | Unhealthy | Readiness probe failed: Get "https://10.133.0.53:8643/healthz": dial tcp 10.133.0.53:8643: connect: connection refused
kserve-ci-e2e-test | kubelet | isvc-triton-predictor-84bb65d94b-w8r97 | Pulled | Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine
kserve-ci-e2e-test | kubelet | isvc-triton-predictor-84bb65d94b-w8r97 | Created | Created container: kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-triton-predictor-84bb65d94b-w8r97 | Pulled | Successfully pulled image "nvcr.io/nvidia/tritonserver:23.05-py3" in 1m54.797s (1m54.797s including waiting). Image size: 12907074623 bytes.
kserve-ci-e2e-test | kubelet | isvc-triton-predictor-84bb65d94b-w8r97 | Created | Created container: kserve-container
kserve-ci-e2e-test | kubelet | isvc-triton-predictor-84bb65d94b-w8r97 | Started | Started container kserve-container
kserve-ci-e2e-test | kubelet | isvc-triton-predictor-84bb65d94b-w8r97 | Started | Started container kube-rbac-proxy (x3)
kserve-ci-e2e-test | kubelet | isvc-triton-predictor-84bb65d94b-w8r97 | Unhealthy | Readiness probe failed: dial tcp 10.133.0.54:8080: connect: connection refused (x9)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-triton-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API (x9)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-triton-predictor | FailedGetResourceMetric | failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
kserve-ci-e2e-test | v1beta1Controllers | isvc-triton | InferenceServiceReady | InferenceService [isvc-triton] is Ready (x12)
kserve-ci-e2e-test | v1beta1Controllers | isvc-triton | VirtualServiceCRDNotFound | Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.
kserve-ci-e2e-test | deployment-controller | isvc-xgboost-predictor | ScalingReplicaSet | Scaled up replica set isvc-xgboost-predictor-8689c4cfcc from 0 to 1
kserve-ci-e2e-test | replicaset-controller | isvc-xgboost-predictor-8689c4cfcc | SuccessfulCreate | Created pod: isvc-xgboost-predictor-8689c4cfcc-n7sd5
kserve-ci-e2e-test | v1beta1Controllers | isvc-xgboost | InternalError | fails to update InferenceService status: Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-xgboost": the object has been modified; please apply your changes to the latest version and try again (x2)
kserve-ci-e2e-test | v1beta1Controllers | isvc-xgboost | UpdateFailed | Failed to update status for InferenceService "isvc-xgboost": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-xgboost": the object has been modified; please apply your changes to the latest version and try again
kserve-ci-e2e-test | kubelet | isvc-triton-predictor-84bb65d94b-w8r97 | Killing | Stopping container kserve-container
kserve-ci-e2e-test | kubelet | isvc-triton-predictor-84bb65d94b-w8r97 | Killing | Stopping container kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-xgboost-predictor-8689c4cfcc-n7sd5 | FailedMount | MountVolume.SetUp failed for volume "proxy-tls" : secret "isvc-xgboost-predictor-serving-cert" not found
kserve-ci-e2e-test | multus | isvc-xgboost-predictor-8689c4cfcc-n7sd5 | AddedInterface | Add eth0 [10.133.0.55/23] from ovn-kubernetes
kserve-ci-e2e-test | kubelet | isvc-xgboost-predictor-8689c4cfcc-n7sd5 | Pulled | Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:4ae4df75945dd799b0c11b5e37a48a5ba8230ff3507d345ea07c8897a8c39885" already present on machine
kserve-ci-e2e-test | kubelet | isvc-xgboost-predictor-8689c4cfcc-n7sd5 | Created | Created container: storage-initializer
kserve-ci-e2e-test | kubelet | isvc-xgboost-predictor-8689c4cfcc-n7sd5 | Started | Started container storage-initializer
kserve-ci-e2e-test | kubelet | isvc-xgboost-predictor-8689c4cfcc-n7sd5 | Pulling | Pulling image "kserve/xgbserver:latest"
kserve-ci-e2e-test | kubelet | isvc-xgboost-predictor-8689c4cfcc-n7sd5 | Pulled | Successfully pulled image "kserve/xgbserver:latest" in 18.598s (18.598s including waiting). Image size: 1306417402 bytes.
kserve-ci-e2e-test | kubelet | isvc-xgboost-predictor-8689c4cfcc-n7sd5 | Started | Started container kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-xgboost-predictor-8689c4cfcc-n7sd5 | Created | Created container: kserve-container
kserve-ci-e2e-test | kubelet | isvc-xgboost-predictor-8689c4cfcc-n7sd5 | Started | Started container kserve-container
kserve-ci-e2e-test | kubelet | isvc-xgboost-predictor-8689c4cfcc-n7sd5 | Pulled | Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine
kserve-ci-e2e-test | kubelet | isvc-xgboost-predictor-8689c4cfcc-n7sd5 | Created | Created container: kube-rbac-proxy (x3)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-xgboost-predictor | FailedGetResourceMetric | failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API (x3)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-xgboost-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
kserve-ci-e2e-test | v1beta1Controllers | isvc-xgboost | InferenceServiceReady | InferenceService [isvc-xgboost] is Ready (x13)
kserve-ci-e2e-test | v1beta1Controllers | isvc-xgboost | VirtualServiceCRDNotFound | Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.
kserve-ci-e2e-test | v1beta1Controllers | isvc-xgboost-v2-mlserver | UpdateFailed | Failed to update status for InferenceService "isvc-xgboost-v2-mlserver": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-xgboost-v2-mlserver": the object has been modified; please apply your changes to the latest version and try again
kserve-ci-e2e-test | kubelet | isvc-xgboost-predictor-8689c4cfcc-n7sd5 | Killing | Stopping container kube-rbac-proxy (x3)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-xgboost-predictor | FailedGetResourceMetric | failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
kserve-ci-e2e-test | kubelet | isvc-xgboost-predictor-8689c4cfcc-n7sd5 | Killing | Stopping container kserve-container (x3)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-xgboost-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
kserve-ci-e2e-test | deployment-controller | isvc-xgboost-v2-mlserver-predictor | ScalingReplicaSet | Scaled up replica set isvc-xgboost-v2-mlserver-predictor-67d4bc6646 from 0 to 1
kserve-ci-e2e-test | replicaset-controller | isvc-xgboost-v2-mlserver-predictor-67d4bc6646 | SuccessfulCreate | Created pod: isvc-xgboost-v2-mlserver-predictor-67d4bc6646-dzn9n

kserve-ci-e2e-test

multus

isvc-xgboost-v2-mlserver-predictor-67d4bc6646-dzn9n

AddedInterface

Add eth0 [10.133.0.56/23] from ovn-kubernetes

kserve-ci-e2e-test

kubelet

isvc-xgboost-predictor-8689c4cfcc-n7sd5

Unhealthy

Readiness probe failed: Get "https://10.133.0.55:8643/healthz": dial tcp 10.133.0.55:8643: connect: connection refused
(x8)

kserve-ci-e2e-test

kubelet

isvc-xgboost-predictor-8689c4cfcc-n7sd5

Unhealthy

Readiness probe failed: dial tcp 10.133.0.55:8080: connect: connection refused

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-mlserver-predictor-67d4bc6646-dzn9n

Started

Started container storage-initializer

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-mlserver-predictor-67d4bc6646-dzn9n

Created

Created container: storage-initializer

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-mlserver-predictor-67d4bc6646-dzn9n

Pulled

Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:4ae4df75945dd799b0c11b5e37a48a5ba8230ff3507d345ea07c8897a8c39885" already present on machine

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-mlserver-predictor-67d4bc6646-dzn9n

Pulled

Container image "docker.io/seldonio/mlserver:1.7.1" already present on machine

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-mlserver-predictor-67d4bc6646-dzn9n

Started

Started container kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-mlserver-predictor-67d4bc6646-dzn9n

Created

Created container: kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-mlserver-predictor-67d4bc6646-dzn9n

Pulled

Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-mlserver-predictor-67d4bc6646-dzn9n

Started

Started container kserve-container

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-mlserver-predictor-67d4bc6646-dzn9n

Created

Created container: kserve-container
(x2)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-xgboost-v2-mlserver-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x2)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-xgboost-v2-mlserver-predictor

FailedGetResourceMetric

failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x12)

kserve-ci-e2e-test

v1beta1Controllers

isvc-xgboost-v2-mlserver

VirtualServiceCRDNotFound

Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.

kserve-ci-e2e-test

v1beta1Controllers

isvc-xgboost-v2-mlserver

InferenceServiceReady

InferenceService [isvc-xgboost-v2-mlserver] is Ready

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-xgboost-v2-mlserver-predictor

FailedGetResourceMetric

failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-xgboost-v2-mlserver-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-mlserver-predictor-67d4bc6646-dzn9n

Killing

Stopping container kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-mlserver-predictor-67d4bc6646-dzn9n

Killing

Stopping container kserve-container

kserve-ci-e2e-test

replicaset-controller

xgboost-v2-mlserver-predictor-7799869d6f

SuccessfulCreate

Created pod: xgboost-v2-mlserver-predictor-7799869d6f-6grg2

kserve-ci-e2e-test

deployment-controller

xgboost-v2-mlserver-predictor

ScalingReplicaSet

Scaled up replica set xgboost-v2-mlserver-predictor-7799869d6f from 0 to 1

kserve-ci-e2e-test

v1beta1Controllers

xgboost-v2-mlserver

UpdateFailed

Failed to update status for InferenceService "xgboost-v2-mlserver": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "xgboost-v2-mlserver": the object has been modified; please apply your changes to the latest version and try again

kserve-ci-e2e-test

kubelet

xgboost-v2-mlserver-predictor-7799869d6f-6grg2

Pulled

Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:4ae4df75945dd799b0c11b5e37a48a5ba8230ff3507d345ea07c8897a8c39885" already present on machine

kserve-ci-e2e-test

multus

xgboost-v2-mlserver-predictor-7799869d6f-6grg2

AddedInterface

Add eth0 [10.133.0.57/23] from ovn-kubernetes

kserve-ci-e2e-test

kubelet

xgboost-v2-mlserver-predictor-7799869d6f-6grg2

Created

Created container: storage-initializer

kserve-ci-e2e-test

kubelet

xgboost-v2-mlserver-predictor-7799869d6f-6grg2

Started

Started container storage-initializer

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-mlserver-predictor-67d4bc6646-dzn9n

Unhealthy

Readiness probe failed: Get "http://10.133.0.56:8080/v2/models/isvc-xgboost-v2-mlserver/ready": dial tcp 10.133.0.56:8080: connect: connection refused

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-mlserver-predictor-67d4bc6646-dzn9n

Unhealthy

Readiness probe failed: Get "https://10.133.0.56:8643/healthz": dial tcp 10.133.0.56:8643: connect: connection refused

kserve-ci-e2e-test

kubelet

xgboost-v2-mlserver-predictor-7799869d6f-6grg2

Pulled

Container image "docker.io/seldonio/mlserver:1.7.1" already present on machine

kserve-ci-e2e-test

kubelet

xgboost-v2-mlserver-predictor-7799869d6f-6grg2

Created

Created container: kube-rbac-proxy

kserve-ci-e2e-test

kubelet

xgboost-v2-mlserver-predictor-7799869d6f-6grg2

Pulled

Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine

kserve-ci-e2e-test

kubelet

xgboost-v2-mlserver-predictor-7799869d6f-6grg2

Started

Started container kserve-container

kserve-ci-e2e-test

kubelet

xgboost-v2-mlserver-predictor-7799869d6f-6grg2

Created

Created container: kserve-container

kserve-ci-e2e-test

kubelet

xgboost-v2-mlserver-predictor-7799869d6f-6grg2

Started

Started container kube-rbac-proxy

kserve-ci-e2e-test

horizontal-pod-autoscaler

xgboost-v2-mlserver-predictor

FailedGetResourceMetric

failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)

kserve-ci-e2e-test

horizontal-pod-autoscaler

xgboost-v2-mlserver-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
(x12)

kserve-ci-e2e-test

v1beta1Controllers

xgboost-v2-mlserver

VirtualServiceCRDNotFound

Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.

kserve-ci-e2e-test

v1beta1Controllers

xgboost-v2-mlserver

InferenceServiceReady

InferenceService [xgboost-v2-mlserver] is Ready
(x2)

kserve-ci-e2e-test

horizontal-pod-autoscaler

xgboost-v2-mlserver-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x2)

kserve-ci-e2e-test

horizontal-pod-autoscaler

xgboost-v2-mlserver-predictor

FailedGetResourceMetric

failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API

kserve-ci-e2e-test

kubelet

xgboost-v2-mlserver-predictor-7799869d6f-6grg2

Killing

Stopping container kube-rbac-proxy

kserve-ci-e2e-test

kubelet

xgboost-v2-mlserver-predictor-7799869d6f-6grg2

Killing

Stopping container kserve-container

kserve-ci-e2e-test

replicaset-controller

isvc-xgboost-runtime-predictor-779db84d9

SuccessfulCreate

Created pod: isvc-xgboost-runtime-predictor-779db84d9-9kzfb

kserve-ci-e2e-test

deployment-controller

isvc-xgboost-runtime-predictor

ScalingReplicaSet

Scaled up replica set isvc-xgboost-runtime-predictor-779db84d9 from 0 to 1

kserve-ci-e2e-test

v1beta1Controllers

isvc-xgboost-runtime

UpdateFailed

Failed to update status for InferenceService "isvc-xgboost-runtime": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-xgboost-runtime": the object has been modified; please apply your changes to the latest version and try again

kserve-ci-e2e-test

kubelet

isvc-xgboost-runtime-predictor-779db84d9-9kzfb

FailedMount

MountVolume.SetUp failed for volume "proxy-tls" : secret "isvc-xgboost-runtime-predictor-serving-cert" not found

kserve-ci-e2e-test

multus

isvc-xgboost-runtime-predictor-779db84d9-9kzfb

AddedInterface

Add eth0 [10.133.0.58/23] from ovn-kubernetes

kserve-ci-e2e-test

kubelet

isvc-xgboost-runtime-predictor-779db84d9-9kzfb

Pulled

Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:4ae4df75945dd799b0c11b5e37a48a5ba8230ff3507d345ea07c8897a8c39885" already present on machine

kserve-ci-e2e-test

kubelet

xgboost-v2-mlserver-predictor-7799869d6f-6grg2

Unhealthy

Readiness probe failed: Get "http://10.133.0.57:8080/v2/models/xgboost-v2-mlserver/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers)

kserve-ci-e2e-test

kubelet

isvc-xgboost-runtime-predictor-779db84d9-9kzfb

Created

Created container: storage-initializer

kserve-ci-e2e-test

kubelet

isvc-xgboost-runtime-predictor-779db84d9-9kzfb

Started

Started container storage-initializer

kserve-ci-e2e-test

kubelet

isvc-xgboost-runtime-predictor-779db84d9-9kzfb

Pulled

Container image "kserve/xgbserver:latest" already present on machine

kserve-ci-e2e-test

kubelet

isvc-xgboost-runtime-predictor-779db84d9-9kzfb

Started

Started container kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-xgboost-runtime-predictor-779db84d9-9kzfb

Started

Started container kserve-container

kserve-ci-e2e-test

kubelet

isvc-xgboost-runtime-predictor-779db84d9-9kzfb

Pulled

Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine

kserve-ci-e2e-test

kubelet

isvc-xgboost-runtime-predictor-779db84d9-9kzfb

Created

Created container: kserve-container

kserve-ci-e2e-test

kubelet

isvc-xgboost-runtime-predictor-779db84d9-9kzfb

Created

Created container: kube-rbac-proxy
(x2)

kserve-ci-e2e-test

kubelet

xgboost-v2-mlserver-predictor-7799869d6f-6grg2

Unhealthy

Readiness probe failed: Get "https://10.133.0.57:8643/healthz": dial tcp 10.133.0.57:8643: connect: connection refused
(x2)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-xgboost-runtime-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x2)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-xgboost-runtime-predictor

FailedGetResourceMetric

failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x2)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-xgboost-runtime-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
(x2)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-xgboost-runtime-predictor

FailedGetResourceMetric

failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
(x13)

kserve-ci-e2e-test

v1beta1Controllers

isvc-xgboost-runtime

VirtualServiceCRDNotFound

Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.

kserve-ci-e2e-test

v1beta1Controllers

isvc-xgboost-runtime

InferenceServiceReady

InferenceService [isvc-xgboost-runtime] is Ready

kserve-ci-e2e-test

kubelet

isvc-xgboost-runtime-predictor-779db84d9-9kzfb

Killing

Stopping container kserve-container

kserve-ci-e2e-test

kubelet

isvc-xgboost-runtime-predictor-779db84d9-9kzfb

Killing

Stopping container kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-runtime-predictor-6dc5954dc-mb4hl

Created

Created container: storage-initializer

kserve-ci-e2e-test

deployment-controller

isvc-xgboost-v2-runtime-predictor

ScalingReplicaSet

Scaled up replica set isvc-xgboost-v2-runtime-predictor-6dc5954dc from 0 to 1

kserve-ci-e2e-test

v1beta1Controllers

isvc-xgboost-v2-runtime

UpdateFailed

Failed to update status for InferenceService "isvc-xgboost-v2-runtime": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-xgboost-v2-runtime": the object has been modified; please apply your changes to the latest version and try again

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-runtime-predictor-6dc5954dc-mb4hl

Started

Started container storage-initializer

kserve-ci-e2e-test

multus

isvc-xgboost-v2-runtime-predictor-6dc5954dc-mb4hl

AddedInterface

Add eth0 [10.133.0.59/23] from ovn-kubernetes

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-runtime-predictor-6dc5954dc-mb4hl

Pulled

Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:4ae4df75945dd799b0c11b5e37a48a5ba8230ff3507d345ea07c8897a8c39885" already present on machine

kserve-ci-e2e-test

replicaset-controller

isvc-xgboost-v2-runtime-predictor-6dc5954dc

SuccessfulCreate

Created pod: isvc-xgboost-v2-runtime-predictor-6dc5954dc-mb4hl
(x8)

kserve-ci-e2e-test

kubelet

isvc-xgboost-runtime-predictor-779db84d9-9kzfb

Unhealthy

Readiness probe failed: dial tcp 10.133.0.58:8080: connect: connection refused

kserve-ci-e2e-test

kubelet

isvc-xgboost-runtime-predictor-779db84d9-9kzfb

Unhealthy

Readiness probe failed: Get "https://10.133.0.58:8643/healthz": dial tcp 10.133.0.58:8643: connect: connection refused

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-runtime-predictor-6dc5954dc-mb4hl

Created

Created container: kserve-container

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-runtime-predictor-6dc5954dc-mb4hl

Started

Started container kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-runtime-predictor-6dc5954dc-mb4hl

Created

Created container: kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-runtime-predictor-6dc5954dc-mb4hl

Pulled

Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-runtime-predictor-6dc5954dc-mb4hl

Pulled

Container image "docker.io/seldonio/mlserver:1.7.1" already present on machine

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-runtime-predictor-6dc5954dc-mb4hl

Started

Started container kserve-container

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-runtime-predictor-6dc5954dc-mb4hl

Unhealthy

Readiness probe failed: HTTP probe failed with statuscode: 400
(x3)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-xgboost-v2-runtime-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x3)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-xgboost-v2-runtime-predictor

FailedGetResourceMetric

failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API

kserve-ci-e2e-test

v1beta1Controllers

isvc-xgboost-v2-runtime

InferenceServiceReady

InferenceService [isvc-xgboost-v2-runtime] is Ready
(x12)

kserve-ci-e2e-test

v1beta1Controllers

isvc-xgboost-v2-runtime

VirtualServiceCRDNotFound

Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-predictor-6fcdd6977c-2fv97

FailedMount

MountVolume.SetUp failed for volume "proxy-tls" : secret "isvc-xgboost-v2-predictor-serving-cert" not found

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-xgboost-v2-runtime-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-xgboost-v2-runtime-predictor

FailedGetResourceMetric

failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)

kserve-ci-e2e-test

v1beta1Controllers

isvc-xgboost-v2

UpdateFailed

Failed to update status for InferenceService "isvc-xgboost-v2": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-xgboost-v2": the object has been modified; please apply your changes to the latest version and try again

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-runtime-predictor-6dc5954dc-mb4hl

Killing

Stopping container kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-runtime-predictor-6dc5954dc-mb4hl

Killing

Stopping container kserve-container

kserve-ci-e2e-test

deployment-controller

isvc-xgboost-v2-predictor

ScalingReplicaSet

Scaled up replica set isvc-xgboost-v2-predictor-6fcdd6977c from 0 to 1

kserve-ci-e2e-test

replicaset-controller

isvc-xgboost-v2-predictor-6fcdd6977c

SuccessfulCreate

Created pod: isvc-xgboost-v2-predictor-6fcdd6977c-2fv97

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-predictor-6fcdd6977c-2fv97

Pulled

Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:4ae4df75945dd799b0c11b5e37a48a5ba8230ff3507d345ea07c8897a8c39885" already present on machine

kserve-ci-e2e-test

multus

isvc-xgboost-v2-predictor-6fcdd6977c-2fv97

AddedInterface

Add eth0 [10.133.0.60/23] from ovn-kubernetes

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-predictor-6fcdd6977c-2fv97

Created

Created container: storage-initializer

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-predictor-6fcdd6977c-2fv97

Started

Started container storage-initializer

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-runtime-predictor-6dc5954dc-mb4hl

Unhealthy

Readiness probe failed: Get "https://10.133.0.59:8643/healthz": dial tcp 10.133.0.59:8643: connect: connection refused

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-predictor-6fcdd6977c-2fv97

Pulled

Container image "kserve/xgbserver:latest" already present on machine

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-predictor-6fcdd6977c-2fv97

Started

Started container kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-predictor-6fcdd6977c-2fv97

Created

Created container: kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-predictor-6fcdd6977c-2fv97

Pulled

Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-predictor-6fcdd6977c-2fv97

Started

Started container kserve-container

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-predictor-6fcdd6977c-2fv97

Created

Created container: kserve-container
(x2)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-xgboost-v2-predictor

FailedGetResourceMetric

failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x2)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-xgboost-v2-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x2)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-xgboost-v2-predictor

FailedGetResourceMetric

failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
(x2)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-xgboost-v2-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
(x12)

kserve-ci-e2e-test

v1beta1Controllers

isvc-xgboost-v2

VirtualServiceCRDNotFound

Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.

kserve-ci-e2e-test

v1beta1Controllers

isvc-xgboost-v2

InferenceServiceReady

InferenceService [isvc-xgboost-v2] is Ready

kserve-ci-e2e-test

multus

isvc-sklearn-s3-predictor-5bd5d9979-hvskj

AddedInterface

Add eth0 [10.133.0.61/23] from ovn-kubernetes

kserve-ci-e2e-test

v1beta1Controllers

isvc-sklearn-s3

UpdateFailed

Failed to update status for InferenceService "isvc-sklearn-s3": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-sklearn-s3": the object has been modified; please apply your changes to the latest version and try again

kserve-ci-e2e-test

deployment-controller

isvc-sklearn-s3-predictor

ScalingReplicaSet

Scaled up replica set isvc-sklearn-s3-predictor-5bd5d9979 from 0 to 1

kserve-ci-e2e-test

replicaset-controller

isvc-sklearn-s3-predictor-5bd5d9979

SuccessfulCreate

Created pod: isvc-sklearn-s3-predictor-5bd5d9979-hvskj

kserve-ci-e2e-test

kubelet

isvc-sklearn-s3-predictor-5bd5d9979-hvskj

Started

Started container storage-initializer

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-predictor-6fcdd6977c-2fv97

Killing

Stopping container kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-predictor-6fcdd6977c-2fv97

Killing

Stopping container kserve-container

kserve-ci-e2e-test

kubelet

isvc-sklearn-s3-predictor-5bd5d9979-hvskj

Created

Created container: storage-initializer

kserve-ci-e2e-test

kubelet

isvc-sklearn-s3-predictor-5bd5d9979-hvskj

Pulled

Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:4ae4df75945dd799b0c11b5e37a48a5ba8230ff3507d345ea07c8897a8c39885" already present on machine

kserve-ci-e2e-test

kubelet

isvc-sklearn-s3-predictor-5bd5d9979-hvskj

Started

Started container kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-sklearn-s3-predictor-5bd5d9979-hvskj

Pulled

Container image "quay.io/opendatahub/sklearn-serving-runtime:odh-pr-1443" already present on machine

kserve-ci-e2e-test

kubelet

isvc-sklearn-s3-predictor-5bd5d9979-hvskj

Created

Created container: kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-sklearn-s3-predictor-5bd5d9979-hvskj

Pulled

Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine

kserve-ci-e2e-test

kubelet

isvc-sklearn-s3-predictor-5bd5d9979-hvskj

Started

Started container kserve-container

kserve-ci-e2e-test

kubelet

isvc-sklearn-s3-predictor-5bd5d9979-hvskj

Created

Created container: kserve-container
(x8)

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-predictor-6fcdd6977c-2fv97

Unhealthy

Readiness probe failed: dial tcp 10.133.0.60:8080: connect: connection refused

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-predictor-6fcdd6977c-2fv97

Unhealthy

Readiness probe failed: Get "https://10.133.0.60:8643/healthz": dial tcp 10.133.0.60:8643: connect: connection refused
(x2)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-sklearn-s3-predictor

FailedGetResourceMetric

failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x2)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-sklearn-s3-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x11)

kserve-ci-e2e-test

v1beta1Controllers

isvc-sklearn-s3

VirtualServiceCRDNotFound

Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.

kserve-ci-e2e-test

v1beta1Controllers

isvc-sklearn-s3

InferenceServiceReady

InferenceService [isvc-sklearn-s3] is Ready
(x3)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-sklearn-s3-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
(x3)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-sklearn-s3-predictor

FailedGetResourceMetric

failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)

kserve-ci-e2e-test

kubelet

isvc-sklearn-s3-predictor-5bd5d9979-hvskj

Killing

Stopping container kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-sklearn-s3-tls-global-pass-predictor-786965bbc5-srlbx

FailedMount

MountVolume.SetUp failed for volume "proxy-tls" : secret "isvc-sklearn-s3-tls-global-pass-predictor-serving-cert" not found
(x9)

kserve-ci-e2e-test

kubelet

isvc-sklearn-s3-predictor-5bd5d9979-hvskj

Unhealthy

Readiness probe failed: dial tcp 10.133.0.61:8080: connect: connection refused

kserve-ci-e2e-test

kubelet

isvc-sklearn-s3-predictor-5bd5d9979-hvskj

Unhealthy

Readiness probe failed: Get "https://10.133.0.61:8643/healthz": dial tcp 10.133.0.61:8643: connect: connection refused

kserve-ci-e2e-test

kubelet

isvc-sklearn-s3-predictor-5bd5d9979-hvskj

Killing

Stopping container kserve-container

kserve-ci-e2e-test

v1beta1Controllers

isvc-sklearn-s3-tls-global-pass

UpdateFailed

Failed to update status for InferenceService "isvc-sklearn-s3-tls-global-pass": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-sklearn-s3-tls-global-pass": the object has been modified; please apply your changes to the latest version and try again

kserve-ci-e2e-test

replicaset-controller

isvc-sklearn-s3-tls-global-pass-predictor-786965bbc5

SuccessfulCreate

Created pod: isvc-sklearn-s3-tls-global-pass-predictor-786965bbc5-srlbx

kserve-ci-e2e-test

deployment-controller

isvc-sklearn-s3-tls-global-pass-predictor

ScalingReplicaSet

Scaled up replica set isvc-sklearn-s3-tls-global-pass-predictor-786965bbc5 from 0 to 1

kserve-ci-e2e-test

kubelet

isvc-sklearn-s3-tls-global-pass-predictor-786965bbc5-srlbx

Created

Created container: storage-initializer

kserve-ci-e2e-test

kubelet

isvc-sklearn-s3-tls-global-pass-predictor-786965bbc5-srlbx

Started

Started container storage-initializer

kserve-ci-e2e-test

kubelet

isvc-sklearn-s3-tls-global-pass-predictor-786965bbc5-srlbx

Pulled

Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:4ae4df75945dd799b0c11b5e37a48a5ba8230ff3507d345ea07c8897a8c39885" already present on machine

kserve-ci-e2e-test

multus

isvc-sklearn-s3-tls-global-pass-predictor-786965bbc5-srlbx

AddedInterface

Add eth0 [10.133.0.62/23] from ovn-kubernetes

kserve-ci-e2e-test

kubelet

isvc-sklearn-s3-tls-global-pass-predictor-786965bbc5-srlbx

Started

Started container kserve-container

kserve-ci-e2e-test

kubelet

isvc-sklearn-s3-tls-global-pass-predictor-786965bbc5-srlbx

Created

Created container: kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-sklearn-s3-tls-global-pass-predictor-786965bbc5-srlbx

Pulled

Container image "quay.io/opendatahub/sklearn-serving-runtime:odh-pr-1443" already present on machine

kserve-ci-e2e-test

kubelet

isvc-sklearn-s3-tls-global-pass-predictor-786965bbc5-srlbx

Created

Created container: kserve-container

kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-global-pass-predictor-786965bbc5-srlbx | Pulled | Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-global-pass-predictor-786965bbc5-srlbx | Started | Started container kube-rbac-proxy
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-sklearn-s3-tls-global-pass-predictor | FailedGetResourceMetric | failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-sklearn-s3-tls-global-pass-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API (x13)
kserve-ci-e2e-test | v1beta1Controllers | isvc-sklearn-s3-tls-global-pass | VirtualServiceCRDNotFound | Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.
kserve-ci-e2e-test | v1beta1Controllers | isvc-sklearn-s3-tls-global-pass | InferenceServiceReady | InferenceService [isvc-sklearn-s3-tls-global-pass] is Ready (x4)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-sklearn-s3-tls-global-pass-predictor | FailedGetResourceMetric | failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready) (x4)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-sklearn-s3-tls-global-pass-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-global-pass-predictor-786965bbc5-srlbx | Unhealthy | Readiness probe failed: Get "https://10.133.0.62:8643/healthz": dial tcp 10.133.0.62:8643: connect: connection refused (x9)
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-global-pass-predictor-786965bbc5-srlbx | Unhealthy | Readiness probe failed: dial tcp 10.133.0.62:8080: connect: connection refused
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-global-pass-predictor-786965bbc5-srlbx | Killing | Stopping container kserve-container
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-global-pass-predictor-786965bbc5-srlbx | Killing | Stopping container kube-rbac-proxy
kserve-ci-e2e-test | v1beta1Controllers | isvc-sklearn-s3-tls-global-fail | UpdateFailed | Failed to update status for InferenceService "isvc-sklearn-s3-tls-global-fail": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-sklearn-s3-tls-global-fail": the object has been modified; please apply your changes to the latest version and try again
kserve-ci-e2e-test | deployment-controller | isvc-sklearn-s3-tls-global-fail-predictor | ScalingReplicaSet | Scaled up replica set isvc-sklearn-s3-tls-global-fail-predictor-84c4d8bb85 from 0 to 1
kserve-ci-e2e-test | replicaset-controller | isvc-sklearn-s3-tls-global-fail-predictor-84c4d8bb85 | SuccessfulCreate | Created pod: isvc-sklearn-s3-tls-global-fail-predictor-84c4d8bb85-4kktv
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-global-fail-predictor-84c4d8bb85-4kktv | FailedMount | MountVolume.SetUp failed for volume "proxy-tls" : secret "isvc-sklearn-s3-tls-global-fail-predictor-serving-cert" not found
kserve-ci-e2e-test | multus | isvc-sklearn-s3-tls-global-fail-predictor-84c4d8bb85-4kktv | AddedInterface | Add eth0 [10.133.0.63/23] from ovn-kubernetes (x2)
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-global-fail-predictor-84c4d8bb85-4kktv | Created | Created container: storage-initializer (x2)
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-global-fail-predictor-84c4d8bb85-4kktv | Pulled | Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:4ae4df75945dd799b0c11b5e37a48a5ba8230ff3507d345ea07c8897a8c39885" already present on machine (x2)
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-global-fail-predictor-84c4d8bb85-4kktv | Started | Started container storage-initializer (x10)
kserve-ci-e2e-test | v1beta1Controllers | isvc-sklearn-s3-tls-global-fail | VirtualServiceCRDNotFound | Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-global-fail-predictor-84c4d8bb85-4kktv | Killing | Stopping container storage-initializer
kserve-ci-e2e-test | deployment-controller | isvc-sklearn-s3-tls-custom-pass-predictor | ScalingReplicaSet | Scaled up replica set isvc-sklearn-s3-tls-custom-pass-predictor-6c99d9597f from 0 to 1
kserve-ci-e2e-test | replicaset-controller | isvc-sklearn-s3-tls-custom-pass-predictor-6c99d9597f | SuccessfulCreate | Created pod: isvc-sklearn-s3-tls-custom-pass-predictor-6c99d9597f-crcgr (x2)
kserve-ci-e2e-test | v1beta1Controllers | isvc-sklearn-s3-tls-custom-pass | UpdateFailed | Failed to update status for InferenceService "isvc-sklearn-s3-tls-custom-pass": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-sklearn-s3-tls-custom-pass": the object has been modified; please apply your changes to the latest version and try again
kserve-ci-e2e-test | v1beta1Controllers | isvc-sklearn-s3-tls-custom-pass | InternalError | fails to update InferenceService status: Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-sklearn-s3-tls-custom-pass": the object has been modified; please apply your changes to the latest version and try again
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-custom-pass-predictor-6c99d9597f-crcgr | Started | Started container storage-initializer
kserve-ci-e2e-test | multus | isvc-sklearn-s3-tls-custom-pass-predictor-6c99d9597f-crcgr | AddedInterface | Add eth0 [10.133.0.64/23] from ovn-kubernetes
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-custom-pass-predictor-6c99d9597f-crcgr | Pulled | Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:4ae4df75945dd799b0c11b5e37a48a5ba8230ff3507d345ea07c8897a8c39885" already present on machine
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-custom-pass-predictor-6c99d9597f-crcgr | Created | Created container: storage-initializer
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-custom-pass-predictor-6c99d9597f-crcgr | Started | Started container kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-custom-pass-predictor-6c99d9597f-crcgr | Pulled | Container image "quay.io/opendatahub/sklearn-serving-runtime:odh-pr-1443" already present on machine
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-custom-pass-predictor-6c99d9597f-crcgr | Created | Created container: kserve-container
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-custom-pass-predictor-6c99d9597f-crcgr | Started | Started container kserve-container
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-custom-pass-predictor-6c99d9597f-crcgr | Pulled | Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-custom-pass-predictor-6c99d9597f-crcgr | Created | Created container: kube-rbac-proxy
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-sklearn-s3-tls-custom-pass-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-sklearn-s3-tls-custom-pass-predictor | FailedGetResourceMetric | failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API (x3)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-sklearn-s3-tls-custom-pass-predictor | FailedGetResourceMetric | failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready) (x8)
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-custom-pass-predictor-6c99d9597f-crcgr | Unhealthy | Readiness probe failed: dial tcp 10.133.0.64:8080: connect: connection refused (x3)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-sklearn-s3-tls-custom-pass-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
kserve-ci-e2e-test | v1beta1Controllers | isvc-sklearn-s3-tls-custom-pass | InferenceServiceReady | InferenceService [isvc-sklearn-s3-tls-custom-pass] is Ready
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-custom-pass-predictor-6c99d9597f-crcgr | Killing | Stopping container kserve-container
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-custom-pass-predictor-6c99d9597f-crcgr | Killing | Stopping container kube-rbac-proxy (x12)
kserve-ci-e2e-test | v1beta1Controllers | isvc-sklearn-s3-tls-custom-pass | VirtualServiceCRDNotFound | Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.
kserve-ci-e2e-test | deployment-controller | isvc-sklearn-s3-tls-custom-fail-predictor | ScalingReplicaSet | Scaled up replica set isvc-sklearn-s3-tls-custom-fail-predictor-6d9bf6b78f from 0 to 1
kserve-ci-e2e-test | replicaset-controller | isvc-sklearn-s3-tls-custom-fail-predictor-6d9bf6b78f | SuccessfulCreate | Created pod: isvc-sklearn-s3-tls-custom-fail-predictor-6d9bf6b78f-pmwjk
kserve-ci-e2e-test | v1beta1Controllers | isvc-sklearn-s3-tls-custom-fail | UpdateFailed | Failed to update status for InferenceService "isvc-sklearn-s3-tls-custom-fail": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-sklearn-s3-tls-custom-fail": the object has been modified; please apply your changes to the latest version and try again
kserve-ci-e2e-test | multus | isvc-sklearn-s3-tls-custom-fail-predictor-6d9bf6b78f-pmwjk | AddedInterface | Add eth0 [10.133.0.65/23] from ovn-kubernetes (x2)
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-custom-fail-predictor-6d9bf6b78f-pmwjk | Pulled | Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:4ae4df75945dd799b0c11b5e37a48a5ba8230ff3507d345ea07c8897a8c39885" already present on machine (x2)
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-custom-fail-predictor-6d9bf6b78f-pmwjk | Created | Created container: storage-initializer (x2)
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-custom-fail-predictor-6d9bf6b78f-pmwjk | Started | Started container storage-initializer
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-custom-fail-predictor-6d9bf6b78f-pmwjk | BackOff | Back-off restarting failed container storage-initializer in pod isvc-sklearn-s3-tls-custom-fail-predictor-6d9bf6b78f-pmwjk_kserve-ci-e2e-test(34ecaeee-be23-427c-977c-6305b29351ba) (x12)
kserve-ci-e2e-test | v1beta1Controllers | isvc-sklearn-s3-tls-custom-fail | VirtualServiceCRDNotFound | Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.
kserve-ci-e2e-test | deployment-controller | isvc-sklearn-s3-tls-serving-pass-predictor | ScalingReplicaSet | Scaled up replica set isvc-sklearn-s3-tls-serving-pass-predictor-69798956bb from 0 to 1
kserve-ci-e2e-test | v1beta1Controllers | isvc-sklearn-s3-tls-serving-pass | UpdateFailed | Failed to update status for InferenceService "isvc-sklearn-s3-tls-serving-pass": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-sklearn-s3-tls-serving-pass": the object has been modified; please apply your changes to the latest version and try again
kserve-ci-e2e-test | replicaset-controller | isvc-sklearn-s3-tls-serving-pass-predictor-69798956bb | SuccessfulCreate | Created pod: isvc-sklearn-s3-tls-serving-pass-predictor-69798956bb-k5lvk
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-serving-pass-predictor-69798956bb-k5lvk | Started | Started container storage-initializer
kserve-ci-e2e-test | multus | isvc-sklearn-s3-tls-serving-pass-predictor-69798956bb-k5lvk | AddedInterface | Add eth0 [10.133.0.66/23] from ovn-kubernetes
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-serving-pass-predictor-69798956bb-k5lvk | Created | Created container: storage-initializer
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-serving-pass-predictor-69798956bb-k5lvk | Pulled | Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:4ae4df75945dd799b0c11b5e37a48a5ba8230ff3507d345ea07c8897a8c39885" already present on machine
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-serving-pass-predictor-69798956bb-k5lvk | Started | Started container kserve-container
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-serving-pass-predictor-69798956bb-k5lvk | Pulled | Container image "quay.io/opendatahub/sklearn-serving-runtime:odh-pr-1443" already present on machine
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-serving-pass-predictor-69798956bb-k5lvk | Created | Created container: kserve-container
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-serving-pass-predictor-69798956bb-k5lvk | Pulled | Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-serving-pass-predictor-69798956bb-k5lvk | Created | Created container: kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-serving-pass-predictor-69798956bb-k5lvk | Started | Started container kube-rbac-proxy (x2)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-sklearn-s3-tls-serving-pass-predictor | FailedGetResourceMetric | failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API (x2)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-sklearn-s3-tls-serving-pass-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API (x2)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-sklearn-s3-tls-serving-pass-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready) (x8)
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-serving-pass-predictor-69798956bb-k5lvk | Unhealthy | Readiness probe failed: dial tcp 10.133.0.66:8080: connect: connection refused (x2)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-sklearn-s3-tls-serving-pass-predictor | FailedGetResourceMetric | failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready) (x12)
kserve-ci-e2e-test | v1beta1Controllers | isvc-sklearn-s3-tls-serving-pass | VirtualServiceCRDNotFound | Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.
kserve-ci-e2e-test | v1beta1Controllers | isvc-sklearn-s3-tls-serving-pass | InferenceServiceReady | InferenceService [isvc-sklearn-s3-tls-serving-pass] is Ready
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-serving-pass-predictor-69798956bb-k5lvk | Killing | Stopping container kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-serving-pass-predictor-69798956bb-k5lvk | Killing | Stopping container kserve-container
kserve-ci-e2e-test | v1beta1Controllers | isvc-sklearn-s3-tls-serving-fail | InternalError | fails to update InferenceService status: Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-sklearn-s3-tls-serving-fail": the object has been modified; please apply your changes to the latest version and try again (x2)
kserve-ci-e2e-test | v1beta1Controllers | isvc-sklearn-s3-tls-serving-fail | UpdateFailed | Failed to update status for InferenceService "isvc-sklearn-s3-tls-serving-fail": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-sklearn-s3-tls-serving-fail": the object has been modified; please apply your changes to the latest version and try again
kserve-ci-e2e-test | deployment-controller | isvc-sklearn-s3-tls-serving-fail-predictor | ScalingReplicaSet | Scaled up replica set isvc-sklearn-s3-tls-serving-fail-predictor-6cb95fcd5b from 0 to 1
kserve-ci-e2e-test | replicaset-controller | isvc-sklearn-s3-tls-serving-fail-predictor-6cb95fcd5b | SuccessfulCreate | Created pod: isvc-sklearn-s3-tls-serving-fail-predictor-6cb95fcd5b-8qwqn
kserve-ci-e2e-test | multus | isvc-sklearn-s3-tls-serving-fail-predictor-6cb95fcd5b-8qwqn | AddedInterface | Add eth0 [10.133.0.67/23] from ovn-kubernetes (x2)
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-serving-fail-predictor-6cb95fcd5b-8qwqn | Started | Started container storage-initializer (x2)
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-serving-fail-predictor-6cb95fcd5b-8qwqn | Created | Created container: storage-initializer (x2)
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-serving-fail-predictor-6cb95fcd5b-8qwqn | Pulled | Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:4ae4df75945dd799b0c11b5e37a48a5ba8230ff3507d345ea07c8897a8c39885" already present on machine
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-serving-fail-predictor-6cb95fcd5b-8qwqn | BackOff | Back-off restarting failed container storage-initializer in pod isvc-sklearn-s3-tls-serving-fail-predictor-6cb95fcd5b-8qwqn_kserve-ci-e2e-test(40d0928e-3147-4e4c-b8cb-05190a90be95) (x12)
kserve-ci-e2e-test | v1beta1Controllers | isvc-sklearn-s3-tls-serving-fail | VirtualServiceCRDNotFound | Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.
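
The recurring VirtualServiceCRDNotFound events name their own remediation: set `ingress.disableIstioVirtualHost=true` when Istio is not installed. As a hedged sketch only, in upstream KServe that flag lives in the JSON-valued `ingress` key of the `inferenceservice-config` ConfigMap; the namespace shown and the surrounding layout are assumptions and may differ in an ODH deployment, so verify against the running cluster before applying.

```yaml
# Sketch, not a verified manifest: where ingress.disableIstioVirtualHost
# sits in upstream KServe's inferenceservice-config ConfigMap.
# The namespace below is the upstream default and is an assumption here.
apiVersion: v1
kind: ConfigMap
metadata:
  name: inferenceservice-config
  namespace: kserve
data:
  ingress: |
    {
      "disableIstioVirtualHost": true
    }
```

With this set, the controller skips VirtualService reconciliation entirely instead of logging the CRD-not-found event on every reconcile.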