| Namespace | Component | Related Object | Reason | Message | Count |
|---|---|---|---|---|---|
| kserve-ci-e2e-test | | isvc-sklearn-s3-tls-serving-pass-predictor-654d8c6f8b-677px | Scheduled | Successfully assigned kserve-ci-e2e-test/isvc-sklearn-s3-tls-serving-pass-predictor-654d8c6f8b-677px to ip-10-0-139-5.ec2.internal | |
| kserve-ci-e2e-test | | isvc-pmml-v2-kserve-predictor-6578f8ffc7-hvjm4 | Scheduled | Successfully assigned kserve-ci-e2e-test/isvc-pmml-v2-kserve-predictor-6578f8ffc7-hvjm4 to ip-10-0-139-5.ec2.internal | |
| kserve-ci-e2e-test | | sklearn-v2-mlserver-predictor-65d8664766-746bp | Scheduled | Successfully assigned kserve-ci-e2e-test/sklearn-v2-mlserver-predictor-65d8664766-746bp to ip-10-0-139-5.ec2.internal | |
| kserve-ci-e2e-test | | isvc-mlflow-v2-runtime-predictor-5fdb47d546-8btt4 | Scheduled | Successfully assigned kserve-ci-e2e-test/isvc-mlflow-v2-runtime-predictor-5fdb47d546-8btt4 to ip-10-0-139-5.ec2.internal | |
| kserve-ci-e2e-test | | isvc-predictive-xgboost-v2-predictor-6577c65fd8-t5k85 | Scheduled | Successfully assigned kserve-ci-e2e-test/isvc-predictive-xgboost-v2-predictor-6577c65fd8-t5k85 to ip-10-0-139-5.ec2.internal | |
| kserve-ci-e2e-test | | isvc-primary-33ccfc-predictor-7689d4bb45-vh944 | Scheduled | Successfully assigned kserve-ci-e2e-test/isvc-primary-33ccfc-predictor-7689d4bb45-vh944 to ip-10-0-139-5.ec2.internal | |
| kserve-ci-e2e-test | | isvc-xgboost-v2-predictor-6fcdd6977c-4k8vt | Scheduled | Successfully assigned kserve-ci-e2e-test/isvc-xgboost-v2-predictor-6fcdd6977c-4k8vt to ip-10-0-139-5.ec2.internal | |
| kserve-ci-e2e-test | | isvc-paddle-predictor-6b8b7cfb4b-wd9c5 | Scheduled | Successfully assigned kserve-ci-e2e-test/isvc-paddle-predictor-6b8b7cfb4b-wd9c5 to ip-10-0-139-5.ec2.internal | |
| kserve-ci-e2e-test | | isvc-secondary-33ccfc-predictor-9785c9d8b-r64mt | Scheduled | Successfully assigned kserve-ci-e2e-test/isvc-secondary-33ccfc-predictor-9785c9d8b-r64mt to ip-10-0-139-5.ec2.internal | |
| kserve-ci-e2e-test | | isvc-sklearn-batcher-custom-predictor-7cdbc5689-hhwbp | Scheduled | Successfully assigned kserve-ci-e2e-test/isvc-sklearn-batcher-custom-predictor-7cdbc5689-hhwbp to ip-10-0-139-5.ec2.internal | |
| kserve-ci-e2e-test | | isvc-xgboost-v2-runtime-predictor-6dc5954dc-nk56m | Scheduled | Successfully assigned kserve-ci-e2e-test/isvc-xgboost-v2-runtime-predictor-6dc5954dc-nk56m to ip-10-0-139-5.ec2.internal | |
| kserve-ci-e2e-test | | isvc-lightgbm-predictor-bdf964bd-l5wbc | Scheduled | Successfully assigned kserve-ci-e2e-test/isvc-lightgbm-predictor-bdf964bd-l5wbc to ip-10-0-139-5.ec2.internal | |
| kserve-ci-e2e-test | | message-dumper-predictor-c7d86bcbd-wnhtc | Scheduled | Successfully assigned kserve-ci-e2e-test/message-dumper-predictor-c7d86bcbd-wnhtc to ip-10-0-139-5.ec2.internal | |
| kserve-ci-e2e-test | | isvc-predictive-sklearn-v2-predictor-b5d4f6b79-dp7nj | Scheduled | Successfully assigned kserve-ci-e2e-test/isvc-predictive-sklearn-v2-predictor-b5d4f6b79-dp7nj to ip-10-0-139-5.ec2.internal | |
| kserve-ci-e2e-test | | isvc-logger-predictor-d94d7847-vgjkb | Scheduled | Successfully assigned kserve-ci-e2e-test/isvc-logger-predictor-d94d7847-vgjkb to ip-10-0-139-5.ec2.internal | |
| kserve-ci-e2e-test | | isvc-paddle-runtime-predictor-7f4d4f9dc8-lrkjj | Scheduled | Successfully assigned kserve-ci-e2e-test/isvc-paddle-runtime-predictor-7f4d4f9dc8-lrkjj to ip-10-0-139-5.ec2.internal | |
| kserve-ci-e2e-test | | isvc-predictive-sklearn-predictor-cd7c759c9-c9l4z | Scheduled | Successfully assigned kserve-ci-e2e-test/isvc-predictive-sklearn-predictor-cd7c759c9-c9l4z to ip-10-0-139-5.ec2.internal | |
| kserve-ci-e2e-test | | isvc-lightgbm-v2-runtime-predictor-8765c9667-cnmvv | Scheduled | Successfully assigned kserve-ci-e2e-test/isvc-lightgbm-v2-runtime-predictor-8765c9667-cnmvv to ip-10-0-139-5.ec2.internal | |
| kserve-ci-e2e-test | | isvc-sklearn-batcher-predictor-66c65b668-gwk8x | Scheduled | Successfully assigned kserve-ci-e2e-test/isvc-sklearn-batcher-predictor-66c65b668-gwk8x to ip-10-0-139-5.ec2.internal | |
| kserve-ci-e2e-test | | isvc-pmml-runtime-predictor-67bc544947-kz8ss | Scheduled | Successfully assigned kserve-ci-e2e-test/isvc-pmml-runtime-predictor-67bc544947-kz8ss to ip-10-0-139-5.ec2.internal | |
| kserve-ci-e2e-test | | isvc-xgboost-v2-mlserver-predictor-67d4bc6646-7g2mc | Scheduled | Successfully assigned kserve-ci-e2e-test/isvc-xgboost-v2-mlserver-predictor-67d4bc6646-7g2mc to ip-10-0-139-5.ec2.internal | |
| kserve-ci-e2e-test | | isvc-xgboost-runtime-predictor-779db84d9-6nqmk | Scheduled | Successfully assigned kserve-ci-e2e-test/isvc-xgboost-runtime-predictor-779db84d9-6nqmk to ip-10-0-139-5.ec2.internal | |
| kserve-ci-e2e-test | | isvc-xgboost-predictor-8689c4cfcc-p4qn9 | Scheduled | Successfully assigned kserve-ci-e2e-test/isvc-xgboost-predictor-8689c4cfcc-p4qn9 to ip-10-0-139-5.ec2.internal | |
| kserve-ci-e2e-test | | isvc-triton-predictor-84bb65d94b-5qstx | Scheduled | Successfully assigned kserve-ci-e2e-test/isvc-triton-predictor-84bb65d94b-5qstx to ip-10-0-139-5.ec2.internal | |
| kserve-ci-e2e-test | | isvc-tensorflow-runtime-predictor-8699d78cf-jb9rm | Scheduled | Successfully assigned kserve-ci-e2e-test/isvc-tensorflow-runtime-predictor-8699d78cf-jb9rm to ip-10-0-139-5.ec2.internal | |
| kserve-ci-e2e-test | | isvc-init-fail-463e12-predictor-58bcf495d8-qzpv6 | Scheduled | Successfully assigned kserve-ci-e2e-test/isvc-init-fail-463e12-predictor-58bcf495d8-qzpv6 to ip-10-0-139-5.ec2.internal | |
| kserve-ci-e2e-test | | isvc-tensorflow-predictor-6756f669d7-24gzd | Scheduled | Successfully assigned kserve-ci-e2e-test/isvc-tensorflow-predictor-6756f669d7-24gzd to ip-10-0-139-5.ec2.internal | |
| kserve-ci-e2e-test | | isvc-sklearn-v2-runtime-predictor-6d84c876f4-884px | Scheduled | Successfully assigned kserve-ci-e2e-test/isvc-sklearn-v2-runtime-predictor-6d84c876f4-884px to ip-10-0-139-5.ec2.internal | |
| kserve-ci-e2e-test | | isvc-sklearn-v2-predictor-7c9dd679db-q64wc | Scheduled | Successfully assigned kserve-ci-e2e-test/isvc-sklearn-v2-predictor-7c9dd679db-q64wc to ip-10-0-139-5.ec2.internal | |
| kserve-ci-e2e-test | | isvc-sklearn-s3-tls-serving-fail-predictor-86b6454c68-f66l2 | Scheduled | Successfully assigned kserve-ci-e2e-test/isvc-sklearn-s3-tls-serving-fail-predictor-86b6454c68-f66l2 to ip-10-0-139-5.ec2.internal | |
| kserve-ci-e2e-test | | xgboost-v2-mlserver-predictor-7799869d6f-vwxgn | Scheduled | Successfully assigned kserve-ci-e2e-test/xgboost-v2-mlserver-predictor-7799869d6f-vwxgn to ip-10-0-139-5.ec2.internal | |
| kserve-ci-e2e-test | | isvc-predictive-lightgbm-v2-predictor-865b4598f7-qtvnj | Scheduled | Successfully assigned kserve-ci-e2e-test/isvc-predictive-lightgbm-v2-predictor-865b4598f7-qtvnj to ip-10-0-139-5.ec2.internal | |
| kserve-ci-e2e-test | | isvc-sklearn-v2-mixed-predictor-5d8dfb54c-n42jg | Scheduled | Successfully assigned kserve-ci-e2e-test/isvc-sklearn-v2-mixed-predictor-5d8dfb54c-n42jg to ip-10-0-139-5.ec2.internal | |
| kserve-ci-e2e-test | | isvc-sklearn-s3-tls-global-pass-predictor-666f4c58c-2rgjc | Scheduled | Successfully assigned kserve-ci-e2e-test/isvc-sklearn-s3-tls-global-pass-predictor-666f4c58c-2rgjc to ip-10-0-139-5.ec2.internal | |
| kserve-ci-e2e-test | | isvc-sklearn-s3-tls-global-fail-predictor-9c6dddd45-6bvqq | Scheduled | Successfully assigned kserve-ci-e2e-test/isvc-sklearn-s3-tls-global-fail-predictor-9c6dddd45-6bvqq to ip-10-0-139-5.ec2.internal | |
| kserve-ci-e2e-test | | isvc-lightgbm-runtime-predictor-749c4f6d58-bbswk | Scheduled | Successfully assigned kserve-ci-e2e-test/isvc-lightgbm-runtime-predictor-749c4f6d58-bbswk to ip-10-0-139-5.ec2.internal | |
| kserve-ci-e2e-test | | isvc-sklearn-s3-tls-custom-pass-predictor-7bd5b8d64c-wcjcp | Scheduled | Successfully assigned kserve-ci-e2e-test/isvc-sklearn-s3-tls-custom-pass-predictor-7bd5b8d64c-wcjcp to ip-10-0-139-5.ec2.internal | |
| kserve-ci-e2e-test | | isvc-sklearn-s3-tls-custom-fail-predictor-58f5875f45-pnb8q | Scheduled | Successfully assigned kserve-ci-e2e-test/isvc-sklearn-s3-tls-custom-fail-predictor-58f5875f45-pnb8q to ip-10-0-139-5.ec2.internal | |
| kserve-ci-e2e-test | | isvc-sklearn-s3-predictor-d954bcd99-79lq9 | Scheduled | Successfully assigned kserve-ci-e2e-test/isvc-sklearn-s3-predictor-d954bcd99-79lq9 to ip-10-0-139-5.ec2.internal | |
| kserve-ci-e2e-test | | isvc-pmml-predictor-8bb578669-qq7sw | Scheduled | Successfully assigned kserve-ci-e2e-test/isvc-pmml-predictor-8bb578669-qq7sw to ip-10-0-139-5.ec2.internal | |
| kserve-ci-e2e-test | | isvc-sklearn-runtime-predictor-7b5dc59794-c5rz5 | Scheduled | Successfully assigned kserve-ci-e2e-test/isvc-sklearn-runtime-predictor-7b5dc59794-c5rz5 to ip-10-0-139-5.ec2.internal | |
| kserve-ci-e2e-test | | isvc-sklearn-predictor-77f5c96b44-5d9wb | Scheduled | Successfully assigned kserve-ci-e2e-test/isvc-sklearn-predictor-77f5c96b44-5d9wb to ip-10-0-139-5.ec2.internal | |
| kserve-ci-e2e-test | | isvc-predictive-xgboost-predictor-7ff98fd74d-tbpm6 | Scheduled | Successfully assigned kserve-ci-e2e-test/isvc-predictive-xgboost-predictor-7ff98fd74d-tbpm6 to ip-10-0-139-5.ec2.internal | |
| kserve-ci-e2e-test | | isvc-predictive-lightgbm-predictor-75cb94f9f-92wfl | Scheduled | Successfully assigned kserve-ci-e2e-test/isvc-predictive-lightgbm-predictor-75cb94f9f-92wfl to ip-10-0-139-5.ec2.internal | |
| kserve-ci-e2e-test | | isvc-sklearn-mcp-predictor-74b9b7ddc5-lhrpk | Scheduled | Successfully assigned kserve-ci-e2e-test/isvc-sklearn-mcp-predictor-74b9b7ddc5-lhrpk to ip-10-0-139-5.ec2.internal | |
| kserve-ci-e2e-test | | isvc-lightgbm-v2-kserve-predictor-559bf6989-j269w | Scheduled | Successfully assigned kserve-ci-e2e-test/isvc-lightgbm-v2-kserve-predictor-559bf6989-j269w to ip-10-0-139-5.ec2.internal | |
| kserve-ci-e2e-test | | isvc-paddle-v2-kserve-predictor-7dbd59854-ls4w5 | Scheduled | Successfully assigned kserve-ci-e2e-test/isvc-paddle-v2-kserve-predictor-7dbd59854-ls4w5 to ip-10-0-139-5.ec2.internal | |
| Namespace | Component | Related Object | Reason | Message | Count |
|---|---|---|---|---|---|
| kserve-ci-e2e-test | deployment-controller | isvc-sklearn-batcher-predictor | ScalingReplicaSet | Scaled up replica set isvc-sklearn-batcher-predictor-66c65b668 from 0 to 1 | |
| kserve-ci-e2e-test | multus | isvc-sklearn-batcher-predictor-66c65b668-gwk8x | AddedInterface | Add eth0 [10.134.0.19/23] from ovn-kubernetes | |
| kserve-ci-e2e-test | v1beta1Controllers | isvc-sklearn-batcher | UpdateFailed | Failed to update status for InferenceService "isvc-sklearn-batcher": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-sklearn-batcher": the object has been modified; please apply your changes to the latest version and try again | x2 |
| kserve-ci-e2e-test | v1beta1Controllers | isvc-sklearn-batcher | InternalError | fails to update InferenceService status: Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-sklearn-batcher": the object has been modified; please apply your changes to the latest version and try again | x2 |
| kserve-ci-e2e-test | replicaset-controller | isvc-sklearn-batcher-predictor-66c65b668 | SuccessfulCreate | Created pod: isvc-sklearn-batcher-predictor-66c65b668-gwk8x | |
| kserve-ci-e2e-test | kubelet | isvc-sklearn-batcher-predictor-66c65b668-gwk8x | Pulling | Pulling image "quay.io/opendatahub/kserve-storage-initializer@sha256:0dd2798f9cdbffc0563bd148612201df0e589ff8c26a6b19d321fd120fc5c097" | |
| kserve-ci-e2e-test | kubelet | isvc-sklearn-batcher-predictor-66c65b668-gwk8x | Created | Created container: storage-initializer | |
| kserve-ci-e2e-test | kubelet | isvc-sklearn-batcher-predictor-66c65b668-gwk8x | Started | Started container storage-initializer | |
| kserve-ci-e2e-test | kubelet | isvc-sklearn-batcher-predictor-66c65b668-gwk8x | Pulled | Successfully pulled image "quay.io/opendatahub/kserve-storage-initializer@sha256:0dd2798f9cdbffc0563bd148612201df0e589ff8c26a6b19d321fd120fc5c097" in 3.011s (3.011s including waiting). Image size: 299845049 bytes. | |
| kserve-ci-e2e-test | kubelet | isvc-sklearn-batcher-predictor-66c65b668-gwk8x | Pulling | Pulling image "quay.io/opendatahub/sklearn-serving-runtime:odh-pr-1450" | |
| kserve-ci-e2e-test | kubelet | isvc-sklearn-batcher-predictor-66c65b668-gwk8x | Started | Started container kserve-container | |
| kserve-ci-e2e-test | kubelet | isvc-sklearn-batcher-predictor-66c65b668-gwk8x | Pulled | Successfully pulled image "quay.io/opendatahub/sklearn-serving-runtime:odh-pr-1450" in 13.445s (13.445s including waiting). Image size: 1560926126 bytes. | |
| kserve-ci-e2e-test | kubelet | isvc-sklearn-batcher-predictor-66c65b668-gwk8x | Pulling | Pulling image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" | |
| kserve-ci-e2e-test | kubelet | isvc-sklearn-batcher-predictor-66c65b668-gwk8x | Created | Created container: kserve-container | |
| kserve-ci-e2e-test | kubelet | isvc-sklearn-batcher-predictor-66c65b668-gwk8x | Pulling | Pulling image "quay.io/opendatahub/kserve-agent@sha256:f72c005cb705ef76fbeae81cf78e50cb05f1674ac01bdd8893cf8fb48213f3d0" | |
| kserve-ci-e2e-test | kubelet | isvc-sklearn-batcher-predictor-66c65b668-gwk8x | Started | Started container kube-rbac-proxy | |
| kserve-ci-e2e-test | kubelet | isvc-sklearn-batcher-predictor-66c65b668-gwk8x | Created | Created container: kube-rbac-proxy | |
| kserve-ci-e2e-test | kubelet | isvc-sklearn-batcher-predictor-66c65b668-gwk8x | Pulled | Successfully pulled image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" in 2.215s (2.215s including waiting). Image size: 211946088 bytes. | |
| kserve-ci-e2e-test | kubelet | isvc-sklearn-batcher-predictor-66c65b668-gwk8x | Started | Started container agent | |
| kserve-ci-e2e-test | kubelet | isvc-sklearn-batcher-predictor-66c65b668-gwk8x | Created | Created container: agent | |
| kserve-ci-e2e-test | kubelet | isvc-sklearn-batcher-predictor-66c65b668-gwk8x | Pulled | Successfully pulled image "quay.io/opendatahub/kserve-agent@sha256:f72c005cb705ef76fbeae81cf78e50cb05f1674ac01bdd8893cf8fb48213f3d0" in 5.81s (5.81s including waiting). Image size: 238035033 bytes. | |
| kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-sklearn-batcher-predictor | FailedGetResourceMetric | failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API | x3 |
| kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-sklearn-batcher-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API | x3 |
| kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-sklearn-batcher-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready) | x3 |
| kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-sklearn-batcher-predictor | FailedGetResourceMetric | failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready) | x3 |
| kserve-ci-e2e-test | v1beta1Controllers | isvc-sklearn-batcher | InferenceServiceReady | InferenceService [isvc-sklearn-batcher] is Ready | |
| kserve-ci-e2e-test | v1beta1Controllers | isvc-sklearn-batcher | VirtualServiceCRDNotFound | Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true. | x12 |
| kserve-ci-e2e-test | kubelet | isvc-sklearn-batcher-predictor-66c65b668-gwk8x | Killing | Stopping container kube-rbac-proxy | |
| kserve-ci-e2e-test | v1beta1Controllers | isvc-sklearn-batcher-custom | UpdateFailed | Failed to update status for InferenceService "isvc-sklearn-batcher-custom": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-sklearn-batcher-custom": the object has been modified; please apply your changes to the latest version and try again | |
| kserve-ci-e2e-test | replicaset-controller | isvc-sklearn-batcher-custom-predictor-7cdbc5689 | SuccessfulCreate | Created pod: isvc-sklearn-batcher-custom-predictor-7cdbc5689-hhwbp | |
| kserve-ci-e2e-test | deployment-controller | isvc-sklearn-batcher-custom-predictor | ScalingReplicaSet | Scaled up replica set isvc-sklearn-batcher-custom-predictor-7cdbc5689 from 0 to 1 | |
| kserve-ci-e2e-test | kubelet | isvc-sklearn-batcher-predictor-66c65b668-gwk8x | Killing | Stopping container kserve-container | |
| kserve-ci-e2e-test | kubelet | isvc-sklearn-batcher-predictor-66c65b668-gwk8x | Killing | Stopping container agent | |
| kserve-ci-e2e-test | multus | isvc-sklearn-batcher-custom-predictor-7cdbc5689-hhwbp | AddedInterface | Add eth0 [10.134.0.20/23] from ovn-kubernetes | |
| kserve-ci-e2e-test | kubelet | isvc-sklearn-batcher-custom-predictor-7cdbc5689-hhwbp | Pulled | Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:0dd2798f9cdbffc0563bd148612201df0e589ff8c26a6b19d321fd120fc5c097" already present on machine | |
| kserve-ci-e2e-test | kubelet | isvc-sklearn-batcher-custom-predictor-7cdbc5689-hhwbp | Started | Started container storage-initializer | |
| kserve-ci-e2e-test | kubelet | isvc-sklearn-batcher-custom-predictor-7cdbc5689-hhwbp | Created | Created container: storage-initializer | |
| kserve-ci-e2e-test | kubelet | isvc-sklearn-batcher-custom-predictor-7cdbc5689-hhwbp | Started | Started container kserve-container | |
| kserve-ci-e2e-test | kubelet | isvc-sklearn-batcher-custom-predictor-7cdbc5689-hhwbp | Created | Created container: agent | |
| kserve-ci-e2e-test | kubelet | isvc-sklearn-batcher-custom-predictor-7cdbc5689-hhwbp | Pulled | Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine | |
| kserve-ci-e2e-test | kubelet | isvc-sklearn-batcher-custom-predictor-7cdbc5689-hhwbp | Created | Created container: kserve-container | |
| kserve-ci-e2e-test | kubelet | isvc-sklearn-batcher-custom-predictor-7cdbc5689-hhwbp | Pulled | Container image "quay.io/opendatahub/sklearn-serving-runtime:odh-pr-1450" already present on machine | |
| kserve-ci-e2e-test | kubelet | isvc-sklearn-batcher-custom-predictor-7cdbc5689-hhwbp | Created | Created container: kube-rbac-proxy | |
| kserve-ci-e2e-test | kubelet | isvc-sklearn-batcher-custom-predictor-7cdbc5689-hhwbp | Started | Started container kube-rbac-proxy | |
| kserve-ci-e2e-test | kubelet | isvc-sklearn-batcher-custom-predictor-7cdbc5689-hhwbp | Started | Started container agent | |
| kserve-ci-e2e-test | kubelet | isvc-sklearn-batcher-custom-predictor-7cdbc5689-hhwbp | Pulled | Container image "quay.io/opendatahub/kserve-agent@sha256:f72c005cb705ef76fbeae81cf78e50cb05f1674ac01bdd8893cf8fb48213f3d0" already present on machine | |
| kserve-ci-e2e-test | kubelet | isvc-sklearn-batcher-predictor-66c65b668-gwk8x | Unhealthy | Readiness probe failed: HTTP probe failed with statuscode: 503 | x9 |
| kserve-ci-e2e-test | kubelet | isvc-sklearn-batcher-predictor-66c65b668-gwk8x | Unhealthy | Readiness probe failed: Get "https://10.134.0.19:8643/healthz": dial tcp 10.134.0.19:8643: connect: connection refused | x5 |
| kserve-ci-e2e-test | kubelet | isvc-sklearn-batcher-predictor-66c65b668-gwk8x | Unhealthy | Readiness probe failed: dial tcp 10.134.0.19:8080: connect: connection refused | x11 |
| kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-sklearn-batcher-custom-predictor | FailedGetResourceMetric | failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API | x2 |
| kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-sklearn-batcher-custom-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API | x2 |
| kserve-ci-e2e-test | v1beta1Controllers | isvc-sklearn-batcher-custom | VirtualServiceCRDNotFound | Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true. | x12 |
| kserve-ci-e2e-test | v1beta1Controllers | isvc-sklearn-batcher-custom | InferenceServiceReady | InferenceService [isvc-sklearn-batcher-custom] is Ready | |
| kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-sklearn-batcher-custom-predictor | FailedGetResourceMetric | failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready) | x3 |
| kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-sklearn-batcher-custom-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready) | x3 |
| kserve-ci-e2e-test | deployment-controller | message-dumper-predictor | ScalingReplicaSet | Scaled up replica set message-dumper-predictor-c7d86bcbd from 0 to 1 | |
| kserve-ci-e2e-test | kubelet | isvc-sklearn-batcher-custom-predictor-7cdbc5689-hhwbp | Killing | Stopping container kserve-container | |
| kserve-ci-e2e-test | kubelet | message-dumper-predictor-c7d86bcbd-wnhtc | FailedMount | MountVolume.SetUp failed for volume "proxy-tls" : secret "message-dumper-predictor-serving-cert" not found | |
| kserve-ci-e2e-test | kubelet | isvc-sklearn-batcher-custom-predictor-7cdbc5689-hhwbp | Killing | Stopping container kube-rbac-proxy | |
| kserve-ci-e2e-test | kubelet | isvc-sklearn-batcher-custom-predictor-7cdbc5689-hhwbp | Killing | Stopping container agent | |
| kserve-ci-e2e-test | replicaset-controller | message-dumper-predictor-c7d86bcbd | SuccessfulCreate | Created pod: message-dumper-predictor-c7d86bcbd-wnhtc | |
| kserve-ci-e2e-test | v1beta1Controllers | message-dumper | UpdateFailed | Failed to update status for InferenceService "message-dumper": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "message-dumper": the object has been modified; please apply your changes to the latest version and try again | |
| kserve-ci-e2e-test | kubelet | message-dumper-predictor-c7d86bcbd-wnhtc | Pulling | Pulling image "gcr.io/knative-releases/knative.dev/eventing-contrib/cmd/event_display" | |
| kserve-ci-e2e-test | multus | message-dumper-predictor-c7d86bcbd-wnhtc | AddedInterface | Add eth0 [10.134.0.21/23] from ovn-kubernetes | |
| kserve-ci-e2e-test | kubelet | message-dumper-predictor-c7d86bcbd-wnhtc | Started | Started container kserve-container | |
| kserve-ci-e2e-test | kubelet | message-dumper-predictor-c7d86bcbd-wnhtc | Pulled | Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine | |
| kserve-ci-e2e-test | kubelet | message-dumper-predictor-c7d86bcbd-wnhtc | Created | Created container: kserve-container | |
| kserve-ci-e2e-test | kubelet | message-dumper-predictor-c7d86bcbd-wnhtc | Pulled | Successfully pulled image "gcr.io/knative-releases/knative.dev/eventing-contrib/cmd/event_display" in 1.108s (1.108s including waiting). Image size: 14813193 bytes. | |
| kserve-ci-e2e-test | kubelet | message-dumper-predictor-c7d86bcbd-wnhtc | Started | Started container kube-rbac-proxy | |
| kserve-ci-e2e-test | kubelet | message-dumper-predictor-c7d86bcbd-wnhtc | Created | Created container: kube-rbac-proxy | |
| kserve-ci-e2e-test | v1beta1Controllers | message-dumper | InferenceServiceReady | InferenceService [message-dumper] is Ready | |
| kserve-ci-e2e-test | v1beta1Controllers | message-dumper | VirtualServiceCRDNotFound | Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true. | x8 |
| kserve-ci-e2e-test | kubelet | isvc-sklearn-batcher-custom-predictor-7cdbc5689-hhwbp | Unhealthy | Readiness probe failed: dial tcp 10.134.0.20:5000: connect: connection refused | x10 |
| kserve-ci-e2e-test | kubelet | isvc-sklearn-batcher-custom-predictor-7cdbc5689-hhwbp | Unhealthy | Readiness probe failed: HTTP probe failed with statuscode: 503 | x10 |
| kserve-ci-e2e-test | v1beta1Controllers | isvc-logger | UpdateFailed | Failed to update status for InferenceService "isvc-logger": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-logger": the object has been modified; please apply your changes to the latest version and try again | x2 |
| kserve-ci-e2e-test | v1beta1Controllers | isvc-logger | InternalError | fails to update InferenceService status: Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-logger": the object has been modified; please apply your changes to the latest version and try again | |
| kserve-ci-e2e-test | deployment-controller | isvc-logger-predictor | ScalingReplicaSet | Scaled up replica set isvc-logger-predictor-d94d7847 from 0 to 1 | |
| kserve-ci-e2e-test | replicaset-controller | isvc-logger-predictor-d94d7847 | SuccessfulCreate | Created pod: isvc-logger-predictor-d94d7847-vgjkb | |
| kserve-ci-e2e-test | multus | isvc-logger-predictor-d94d7847-vgjkb | AddedInterface | Add eth0 [10.134.0.22/23] from ovn-kubernetes | |
| kserve-ci-e2e-test | kubelet | isvc-logger-predictor-d94d7847-vgjkb | Pulled | Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:0dd2798f9cdbffc0563bd148612201df0e589ff8c26a6b19d321fd120fc5c097" already present on machine | |
| kserve-ci-e2e-test | kubelet | isvc-logger-predictor-d94d7847-vgjkb | Created | Created container: storage-initializer | |
| kserve-ci-e2e-test | kubelet | isvc-logger-predictor-d94d7847-vgjkb | Started | Started container storage-initializer | |
| kserve-ci-e2e-test | kubelet | isvc-sklearn-batcher-custom-predictor-7cdbc5689-hhwbp | Unhealthy | Readiness probe failed: Get "https://10.134.0.20:8643/healthz": dial tcp 10.134.0.20:8643: connect: connection refused | x5 |
| kserve-ci-e2e-test | kubelet | isvc-logger-predictor-d94d7847-vgjkb | Started | Started container kube-rbac-proxy | |
| kserve-ci-e2e-test | kubelet | isvc-logger-predictor-d94d7847-vgjkb | Pulled | Container image "quay.io/opendatahub/kserve-agent@sha256:f72c005cb705ef76fbeae81cf78e50cb05f1674ac01bdd8893cf8fb48213f3d0" already present on machine | |
| kserve-ci-e2e-test | kubelet | isvc-logger-predictor-d94d7847-vgjkb | Created | Created container: kube-rbac-proxy | |
| kserve-ci-e2e-test | kubelet | isvc-logger-predictor-d94d7847-vgjkb | Pulled | Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine | |
| kserve-ci-e2e-test | kubelet | isvc-logger-predictor-d94d7847-vgjkb | Started | Started container kserve-container | |
| kserve-ci-e2e-test | kubelet | isvc-logger-predictor-d94d7847-vgjkb | Created | Created container: kserve-container | |
| kserve-ci-e2e-test | kubelet | isvc-logger-predictor-d94d7847-vgjkb | Pulled | Container image "quay.io/opendatahub/sklearn-serving-runtime:odh-pr-1450" already present on machine | |
| kserve-ci-e2e-test | kubelet | isvc-logger-predictor-d94d7847-vgjkb | Created | Created container: agent | |
| kserve-ci-e2e-test | kubelet | isvc-logger-predictor-d94d7847-vgjkb | Started | Started container agent | |
| kserve-ci-e2e-test | horizontal-pod-autoscaler | message-dumper-predictor | FailedGetResourceMetric | failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API | x2 |
| kserve-ci-e2e-test | horizontal-pod-autoscaler | message-dumper-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API | x2 |
| kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-logger-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API | x2 |
| kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-logger-predictor | FailedGetResourceMetric | failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API | x2 |
| kserve-ci-e2e-test | v1beta1Controllers | isvc-logger | VirtualServiceCRDNotFound | Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true. | x12 |
| kserve-ci-e2e-test | v1beta1Controllers | isvc-logger | InferenceServiceReady | InferenceService [isvc-logger] is Ready | |
| kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-logger-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready) | x3 |
| kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-logger-predictor | FailedGetResourceMetric | failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready) | x3 |
| kserve-ci-e2e-test | kubelet | isvc-lightgbm-predictor-bdf964bd-l5wbc | FailedMount | MountVolume.SetUp failed for volume "proxy-tls" : secret "isvc-lightgbm-predictor-serving-cert" not found | |
| kserve-ci-e2e-test | deployment-controller | isvc-lightgbm-predictor | ScalingReplicaSet | Scaled up replica set isvc-lightgbm-predictor-bdf964bd from 0 to 1 | |
| kserve-ci-e2e-test | kubelet | message-dumper-predictor-c7d86bcbd-wnhtc | Killing | Stopping container kserve-container | |
| kserve-ci-e2e-test | kubelet | isvc-logger-predictor-d94d7847-vgjkb | Killing | Stopping container kube-rbac-proxy | |
| kserve-ci-e2e-test | replicaset-controller | isvc-lightgbm-predictor-bdf964bd | SuccessfulCreate | Created pod: isvc-lightgbm-predictor-bdf964bd-l5wbc | |
| kserve-ci-e2e-test | v1beta1Controllers | isvc-lightgbm | UpdateFailed | Failed to update status for InferenceService "isvc-lightgbm": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-lightgbm": the object has been modified; please apply your changes to the latest version and try again | |
| kserve-ci-e2e-test | kubelet | isvc-logger-predictor-d94d7847-vgjkb | Killing | Stopping container kserve-container | |
| kserve-ci-e2e-test | kubelet | isvc-logger-predictor-d94d7847-vgjkb | Killing | Stopping container agent | |
| kserve-ci-e2e-test | kubelet | message-dumper-predictor-c7d86bcbd-wnhtc | Killing | Stopping container kube-rbac-proxy | |
| kserve-ci-e2e-test | multus | isvc-lightgbm-predictor-bdf964bd-l5wbc | AddedInterface | Add eth0 [10.134.0.23/23] from ovn-kubernetes | |
| kserve-ci-e2e-test | kubelet | isvc-lightgbm-predictor-bdf964bd-l5wbc | Started | Started container storage-initializer | |
| kserve-ci-e2e-test | kubelet | isvc-lightgbm-predictor-bdf964bd-l5wbc | Created | Created container: storage-initializer | |
| kserve-ci-e2e-test | kubelet | isvc-lightgbm-predictor-bdf964bd-l5wbc | Pulled | Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:0dd2798f9cdbffc0563bd148612201df0e589ff8c26a6b19d321fd120fc5c097" already present on machine | |
| kserve-ci-e2e-test | kubelet | isvc-lightgbm-predictor-bdf964bd-l5wbc | Pulling | Pulling image "kserve/lgbserver:latest" | |
| kserve-ci-e2e-test | kubelet | isvc-lightgbm-predictor-bdf964bd-l5wbc | Started | Started container kserve-container | |
| kserve-ci-e2e-test | kubelet | isvc-lightgbm-predictor-bdf964bd-l5wbc | Created | Created container: kserve-container | |
| kserve-ci-e2e-test | kubelet | isvc-lightgbm-predictor-bdf964bd-l5wbc | Pulled | Successfully pulled image "kserve/lgbserver:latest" in 6.161s (6.161s including waiting). Image size: 606297871 bytes. | |
| kserve-ci-e2e-test | kubelet | isvc-lightgbm-predictor-bdf964bd-l5wbc | Pulled | Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine | |
| kserve-ci-e2e-test | kubelet | isvc-lightgbm-predictor-bdf964bd-l5wbc | Created | Created container: kube-rbac-proxy | |
| kserve-ci-e2e-test | kubelet | isvc-lightgbm-predictor-bdf964bd-l5wbc | Started | Started container kube-rbac-proxy | |
| kserve-ci-e2e-test | kubelet | isvc-logger-predictor-d94d7847-vgjkb | Unhealthy | Readiness probe failed: HTTP probe failed with statuscode: 503 | x10 |
| kserve-ci-e2e-test | kubelet | isvc-logger-predictor-d94d7847-vgjkb | Unhealthy | Readiness probe failed: dial tcp 10.134.0.22:8080: connect: connection refused | x10 |
| kserve-ci-e2e-test | kubelet | isvc-logger-predictor-d94d7847-vgjkb | Unhealthy | Readiness probe failed: Get "https://10.134.0.22:8643/healthz": dial tcp 10.134.0.22:8643: connect: connection refused | x5 |
| kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-lightgbm-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API | x3 |
| kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-lightgbm-predictor | FailedGetResourceMetric | failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API | x3 |
| kserve-ci-e2e-test | kubelet | isvc-lightgbm-predictor-bdf964bd-l5wbc | Unhealthy | Readiness probe failed: dial tcp 10.134.0.23:8080: connect: connection refused | x9 |
| kserve-ci-e2e-test | v1beta1Controllers | isvc-lightgbm | VirtualServiceCRDNotFound | Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true. | x13 |
| kserve-ci-e2e-test | v1beta1Controllers | isvc-lightgbm | InferenceServiceReady | InferenceService [isvc-lightgbm] is Ready | |
| kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-lightgbm-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready) | x3 |
| kserve-ci-e2e-test | kubelet | isvc-lightgbm-predictor-bdf964bd-l5wbc | Killing | Stopping container kube-rbac-proxy | |
| kserve-ci-e2e-test | kubelet | isvc-lightgbm-predictor-bdf964bd-l5wbc | Killing | Stopping container kserve-container | |
| kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-lightgbm-predictor | FailedGetResourceMetric | failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready) | x3 |
| kserve-ci-e2e-test | v1beta1Controllers | isvc-lightgbm-runtime | InternalError | fails to update InferenceService status: Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-lightgbm-runtime": the object has been modified; please apply your changes to the latest version and try again | |
| kserve-ci-e2e-test | kubelet | isvc-lightgbm-runtime-predictor-749c4f6d58-bbswk | FailedMount | MountVolume.SetUp failed for volume "proxy-tls" : secret "isvc-lightgbm-runtime-predictor-serving-cert" not found | |
| kserve-ci-e2e-test | | | | | x2 |

v1beta1Controllers

isvc-lightgbm-runtime

UpdateFailed

Failed to update status for InferenceService "isvc-lightgbm-runtime": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-lightgbm-runtime": the object has been modified; please apply your changes to the latest version and try again

kserve-ci-e2e-test

deployment-controller

isvc-lightgbm-runtime-predictor

ScalingReplicaSet

Scaled up replica set isvc-lightgbm-runtime-predictor-749c4f6d58 from 0 to 1

kserve-ci-e2e-test

replicaset-controller

isvc-lightgbm-runtime-predictor-749c4f6d58

SuccessfulCreate

Created pod: isvc-lightgbm-runtime-predictor-749c4f6d58-bbswk

kserve-ci-e2e-test

kubelet

isvc-lightgbm-runtime-predictor-749c4f6d58-bbswk

Started

Started container storage-initializer

kserve-ci-e2e-test

kubelet

isvc-lightgbm-runtime-predictor-749c4f6d58-bbswk

Created

Created container: storage-initializer

kserve-ci-e2e-test

kubelet

isvc-lightgbm-runtime-predictor-749c4f6d58-bbswk

Pulled

Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:0dd2798f9cdbffc0563bd148612201df0e589ff8c26a6b19d321fd120fc5c097" already present on machine

kserve-ci-e2e-test

multus

isvc-lightgbm-runtime-predictor-749c4f6d58-bbswk

AddedInterface

Add eth0 [10.134.0.24/23] from ovn-kubernetes

kserve-ci-e2e-test

kubelet

isvc-lightgbm-predictor-bdf964bd-l5wbc

Unhealthy

Readiness probe failed: Get "https://10.134.0.23:8643/healthz": dial tcp 10.134.0.23:8643: connect: connection refused

kserve-ci-e2e-test

kubelet

isvc-lightgbm-runtime-predictor-749c4f6d58-bbswk

Created

Created container: kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-lightgbm-runtime-predictor-749c4f6d58-bbswk

Pulled

Container image "kserve/lgbserver:latest" already present on machine

kserve-ci-e2e-test

kubelet

isvc-lightgbm-runtime-predictor-749c4f6d58-bbswk

Created

Created container: kserve-container

kserve-ci-e2e-test

kubelet

isvc-lightgbm-runtime-predictor-749c4f6d58-bbswk

Started

Started container kserve-container

kserve-ci-e2e-test

kubelet

isvc-lightgbm-runtime-predictor-749c4f6d58-bbswk

Pulled

Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine

kserve-ci-e2e-test

kubelet

isvc-lightgbm-runtime-predictor-749c4f6d58-bbswk

Started

Started container kube-rbac-proxy
(x3)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-lightgbm-runtime-predictor

FailedGetResourceMetric

failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x3)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-lightgbm-runtime-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API

kserve-ci-e2e-test

v1beta1Controllers

isvc-lightgbm-runtime

InferenceServiceReady

InferenceService [isvc-lightgbm-runtime] is Ready
(x14)

kserve-ci-e2e-test

v1beta1Controllers

isvc-lightgbm-runtime

VirtualServiceCRDNotFound

Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.

kserve-ci-e2e-test

v1beta1Controllers

isvc-lightgbm-v2-runtime

InternalError

fails to update InferenceService status: Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-lightgbm-v2-runtime": the object has been modified; please apply your changes to the latest version and try again

kserve-ci-e2e-test

replicaset-controller

isvc-lightgbm-v2-runtime-predictor-8765c9667

SuccessfulCreate

Created pod: isvc-lightgbm-v2-runtime-predictor-8765c9667-cnmvv
(x3)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-lightgbm-runtime-predictor

FailedGetResourceMetric

failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
(x3)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-lightgbm-runtime-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)

kserve-ci-e2e-test

deployment-controller

isvc-lightgbm-v2-runtime-predictor

ScalingReplicaSet

Scaled up replica set isvc-lightgbm-v2-runtime-predictor-8765c9667 from 0 to 1

kserve-ci-e2e-test

kubelet

isvc-lightgbm-runtime-predictor-749c4f6d58-bbswk

Killing

Stopping container kserve-container
(x2)

kserve-ci-e2e-test

v1beta1Controllers

isvc-lightgbm-v2-runtime

UpdateFailed

Failed to update status for InferenceService "isvc-lightgbm-v2-runtime": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-lightgbm-v2-runtime": the object has been modified; please apply your changes to the latest version and try again

kserve-ci-e2e-test

kubelet

isvc-lightgbm-runtime-predictor-749c4f6d58-bbswk

Killing

Stopping container kube-rbac-proxy

kserve-ci-e2e-test

multus

isvc-lightgbm-v2-runtime-predictor-8765c9667-cnmvv

AddedInterface

Add eth0 [10.134.0.25/23] from ovn-kubernetes

kserve-ci-e2e-test

kubelet

isvc-lightgbm-v2-runtime-predictor-8765c9667-cnmvv

Pulled

Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:0dd2798f9cdbffc0563bd148612201df0e589ff8c26a6b19d321fd120fc5c097" already present on machine

kserve-ci-e2e-test

kubelet

isvc-lightgbm-v2-runtime-predictor-8765c9667-cnmvv

Created

Created container: storage-initializer

kserve-ci-e2e-test

kubelet

isvc-lightgbm-v2-runtime-predictor-8765c9667-cnmvv

Started

Started container storage-initializer

kserve-ci-e2e-test

kubelet

isvc-lightgbm-runtime-predictor-749c4f6d58-bbswk

Unhealthy

Readiness probe failed: Get "https://10.134.0.24:8643/healthz": dial tcp 10.134.0.24:8643: connect: connection refused
(x10)

kserve-ci-e2e-test

kubelet

isvc-lightgbm-runtime-predictor-749c4f6d58-bbswk

Unhealthy

Readiness probe failed: dial tcp 10.134.0.24:8080: connect: connection refused

kserve-ci-e2e-test

kubelet

isvc-lightgbm-v2-runtime-predictor-8765c9667-cnmvv

Pulling

Pulling image "docker.io/seldonio/mlserver:1.7.1"

kserve-ci-e2e-test

kubelet

isvc-lightgbm-v2-runtime-predictor-8765c9667-cnmvv

Started

Started container kserve-container

kserve-ci-e2e-test

kubelet

isvc-lightgbm-v2-runtime-predictor-8765c9667-cnmvv

Pulled

Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine

kserve-ci-e2e-test

kubelet

isvc-lightgbm-v2-runtime-predictor-8765c9667-cnmvv

Created

Created container: kserve-container

kserve-ci-e2e-test

kubelet

isvc-lightgbm-v2-runtime-predictor-8765c9667-cnmvv

Pulled

Successfully pulled image "docker.io/seldonio/mlserver:1.7.1" in 2m7.825s (2m7.825s including waiting). Image size: 10890461297 bytes.

kserve-ci-e2e-test

kubelet

isvc-lightgbm-v2-runtime-predictor-8765c9667-cnmvv

Created

Created container: kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-lightgbm-v2-runtime-predictor-8765c9667-cnmvv

Started

Started container kube-rbac-proxy
(x11)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-lightgbm-v2-runtime-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x11)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-lightgbm-v2-runtime-predictor

FailedGetResourceMetric

failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API

kserve-ci-e2e-test

v1beta1Controllers

isvc-lightgbm-v2-runtime

InferenceServiceReady

InferenceService [isvc-lightgbm-v2-runtime] is Ready
(x12)

kserve-ci-e2e-test

v1beta1Controllers

isvc-lightgbm-v2-runtime

VirtualServiceCRDNotFound

Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.
(x2)

kserve-ci-e2e-test

v1beta1Controllers

isvc-lightgbm-v2-kserve

UpdateFailed

Failed to update status for InferenceService "isvc-lightgbm-v2-kserve": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-lightgbm-v2-kserve": the object has been modified; please apply your changes to the latest version and try again

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-lightgbm-v2-runtime-predictor

FailedGetResourceMetric

failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)

kserve-ci-e2e-test

kubelet

isvc-lightgbm-v2-runtime-predictor-8765c9667-cnmvv

Killing

Stopping container kserve-container

kserve-ci-e2e-test

kubelet

isvc-lightgbm-v2-runtime-predictor-8765c9667-cnmvv

Unhealthy

Readiness probe failed: Get "https://10.134.0.25:8643/healthz": dial tcp 10.134.0.25:8643: connect: connection refused

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-lightgbm-v2-runtime-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)

kserve-ci-e2e-test

v1beta1Controllers

isvc-lightgbm-v2-kserve

InternalError

fails to update InferenceService status: Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-lightgbm-v2-kserve": the object has been modified; please apply your changes to the latest version and try again

kserve-ci-e2e-test

deployment-controller

isvc-lightgbm-v2-kserve-predictor

ScalingReplicaSet

Scaled up replica set isvc-lightgbm-v2-kserve-predictor-559bf6989 from 0 to 1

kserve-ci-e2e-test

replicaset-controller

isvc-lightgbm-v2-kserve-predictor-559bf6989

SuccessfulCreate

Created pod: isvc-lightgbm-v2-kserve-predictor-559bf6989-j269w

kserve-ci-e2e-test

kubelet

isvc-lightgbm-v2-runtime-predictor-8765c9667-cnmvv

Killing

Stopping container kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-lightgbm-v2-kserve-predictor-559bf6989-j269w

FailedMount

MountVolume.SetUp failed for volume "proxy-tls" : secret "isvc-lightgbm-v2-kserve-predictor-serving-cert" not found

kserve-ci-e2e-test

multus

isvc-lightgbm-v2-kserve-predictor-559bf6989-j269w

AddedInterface

Add eth0 [10.134.0.26/23] from ovn-kubernetes

kserve-ci-e2e-test

kubelet

isvc-lightgbm-v2-kserve-predictor-559bf6989-j269w

Started

Started container storage-initializer

kserve-ci-e2e-test

kubelet

isvc-lightgbm-v2-kserve-predictor-559bf6989-j269w

Created

Created container: storage-initializer

kserve-ci-e2e-test

kubelet

isvc-lightgbm-v2-kserve-predictor-559bf6989-j269w

Pulled

Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:0dd2798f9cdbffc0563bd148612201df0e589ff8c26a6b19d321fd120fc5c097" already present on machine

kserve-ci-e2e-test

kubelet

isvc-lightgbm-v2-kserve-predictor-559bf6989-j269w

Created

Created container: kserve-container

kserve-ci-e2e-test

kubelet

isvc-lightgbm-v2-kserve-predictor-559bf6989-j269w

Pulled

Container image "kserve/lgbserver:latest" already present on machine

kserve-ci-e2e-test

kubelet

isvc-lightgbm-v2-kserve-predictor-559bf6989-j269w

Started

Started container kserve-container

kserve-ci-e2e-test

kubelet

isvc-lightgbm-v2-kserve-predictor-559bf6989-j269w

Pulled

Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine

kserve-ci-e2e-test

kubelet

isvc-lightgbm-v2-kserve-predictor-559bf6989-j269w

Created

Created container: kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-lightgbm-v2-kserve-predictor-559bf6989-j269w

Started

Started container kube-rbac-proxy
(x3)

kserve-ci-e2e-test

kubelet

isvc-lightgbm-v2-kserve-predictor-559bf6989-j269w

Unhealthy

Readiness probe failed: dial tcp 10.134.0.26:8080: connect: connection refused

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-lightgbm-v2-kserve-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-lightgbm-v2-kserve-predictor

FailedGetResourceMetric

failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API

kserve-ci-e2e-test

v1beta1Controllers

isvc-lightgbm-v2-kserve

InferenceServiceReady

InferenceService [isvc-lightgbm-v2-kserve] is Ready
(x13)

kserve-ci-e2e-test

v1beta1Controllers

isvc-lightgbm-v2-kserve

VirtualServiceCRDNotFound

Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-lightgbm-v2-kserve-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-lightgbm-v2-kserve-predictor

FailedGetResourceMetric

failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)

kserve-ci-e2e-test

replicaset-controller

isvc-mlflow-v2-runtime-predictor-5fdb47d546

SuccessfulCreate

Created pod: isvc-mlflow-v2-runtime-predictor-5fdb47d546-8btt4

kserve-ci-e2e-test

kubelet

isvc-lightgbm-v2-kserve-predictor-559bf6989-j269w

Killing

Stopping container kserve-container

kserve-ci-e2e-test

kubelet

isvc-mlflow-v2-runtime-predictor-5fdb47d546-8btt4

FailedMount

MountVolume.SetUp failed for volume "proxy-tls" : secret "isvc-mlflow-v2-runtime-predictor-serving-cert" not found

kserve-ci-e2e-test

v1beta1Controllers

isvc-mlflow-v2-runtime

UpdateFailed

Failed to update status for InferenceService "isvc-mlflow-v2-runtime": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-mlflow-v2-runtime": the object has been modified; please apply your changes to the latest version and try again

kserve-ci-e2e-test

kubelet

isvc-lightgbm-v2-kserve-predictor-559bf6989-j269w

Killing

Stopping container kube-rbac-proxy

kserve-ci-e2e-test

deployment-controller

isvc-mlflow-v2-runtime-predictor

ScalingReplicaSet

Scaled up replica set isvc-mlflow-v2-runtime-predictor-5fdb47d546 from 0 to 1

kserve-ci-e2e-test

multus

isvc-mlflow-v2-runtime-predictor-5fdb47d546-8btt4

AddedInterface

Add eth0 [10.134.0.27/23] from ovn-kubernetes

kserve-ci-e2e-test

kubelet

isvc-mlflow-v2-runtime-predictor-5fdb47d546-8btt4

Pulled

Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:0dd2798f9cdbffc0563bd148612201df0e589ff8c26a6b19d321fd120fc5c097" already present on machine

kserve-ci-e2e-test

kubelet

isvc-mlflow-v2-runtime-predictor-5fdb47d546-8btt4

Created

Created container: storage-initializer

kserve-ci-e2e-test

kubelet

isvc-mlflow-v2-runtime-predictor-5fdb47d546-8btt4

Started

Started container storage-initializer

kserve-ci-e2e-test

kubelet

isvc-mlflow-v2-runtime-predictor-5fdb47d546-8btt4

Created

Created container: kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-mlflow-v2-runtime-predictor-5fdb47d546-8btt4

Started

Started container kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-mlflow-v2-runtime-predictor-5fdb47d546-8btt4

Pulled

Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine

kserve-ci-e2e-test

kubelet

isvc-mlflow-v2-runtime-predictor-5fdb47d546-8btt4

Started

Started container kserve-container

kserve-ci-e2e-test

kubelet

isvc-mlflow-v2-runtime-predictor-5fdb47d546-8btt4

Created

Created container: kserve-container

kserve-ci-e2e-test

kubelet

isvc-mlflow-v2-runtime-predictor-5fdb47d546-8btt4

Pulled

Container image "docker.io/seldonio/mlserver:1.7.1" already present on machine

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-mlflow-v2-runtime-predictor

FailedGetResourceMetric

failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-mlflow-v2-runtime-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x13)

kserve-ci-e2e-test

v1beta1Controllers

isvc-mlflow-v2-runtime

VirtualServiceCRDNotFound

Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.

kserve-ci-e2e-test

v1beta1Controllers

isvc-mlflow-v2-runtime

InferenceServiceReady

InferenceService [isvc-mlflow-v2-runtime] is Ready
(x2)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-mlflow-v2-runtime-predictor

FailedGetResourceMetric

failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
(x2)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-mlflow-v2-runtime-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)

kserve-ci-e2e-test

v1beta1Controllers

isvc-sklearn-mcp

InternalError

fails to update InferenceService status: Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-sklearn-mcp": the object has been modified; please apply your changes to the latest version and try again

kserve-ci-e2e-test

kubelet

isvc-sklearn-mcp-predictor-74b9b7ddc5-lhrpk

FailedMount

MountVolume.SetUp failed for volume "proxy-tls" : secret "isvc-sklearn-mcp-predictor-serving-cert" not found

kserve-ci-e2e-test

kubelet

isvc-mlflow-v2-runtime-predictor-5fdb47d546-8btt4

Killing

Stopping container kserve-container

kserve-ci-e2e-test

kubelet

isvc-mlflow-v2-runtime-predictor-5fdb47d546-8btt4

Killing

Stopping container kube-rbac-proxy

kserve-ci-e2e-test

replicaset-controller

isvc-sklearn-mcp-predictor-74b9b7ddc5

SuccessfulCreate

Created pod: isvc-sklearn-mcp-predictor-74b9b7ddc5-lhrpk

kserve-ci-e2e-test

deployment-controller

isvc-sklearn-mcp-predictor

ScalingReplicaSet

Scaled up replica set isvc-sklearn-mcp-predictor-74b9b7ddc5 from 0 to 1
(x2)

kserve-ci-e2e-test

v1beta1Controllers

isvc-sklearn-mcp

UpdateFailed

Failed to update status for InferenceService "isvc-sklearn-mcp": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-sklearn-mcp": the object has been modified; please apply your changes to the latest version and try again

kserve-ci-e2e-test

kubelet

isvc-sklearn-mcp-predictor-74b9b7ddc5-lhrpk

Pulled

Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:0dd2798f9cdbffc0563bd148612201df0e589ff8c26a6b19d321fd120fc5c097" already present on machine

kserve-ci-e2e-test

multus

isvc-sklearn-mcp-predictor-74b9b7ddc5-lhrpk

AddedInterface

Add eth0 [10.134.0.28/23] from ovn-kubernetes

kserve-ci-e2e-test

kubelet

isvc-sklearn-mcp-predictor-74b9b7ddc5-lhrpk

Started

Started container storage-initializer

kserve-ci-e2e-test

kubelet

isvc-sklearn-mcp-predictor-74b9b7ddc5-lhrpk

Created

Created container: storage-initializer

kserve-ci-e2e-test

kubelet

isvc-sklearn-mcp-predictor-74b9b7ddc5-lhrpk

Pulled

Container image "quay.io/opendatahub/sklearn-serving-runtime:odh-pr-1450" already present on machine

kserve-ci-e2e-test

kubelet

isvc-sklearn-mcp-predictor-74b9b7ddc5-lhrpk

Created

Created container: kserve-container

kserve-ci-e2e-test

kubelet

isvc-sklearn-mcp-predictor-74b9b7ddc5-lhrpk

Started

Started container kserve-container

kserve-ci-e2e-test

kubelet

isvc-sklearn-mcp-predictor-74b9b7ddc5-lhrpk

Pulling

Pulling image "quay.io/opendatahub/kserve-agent:latest"

kserve-ci-e2e-test

kubelet

isvc-sklearn-mcp-predictor-74b9b7ddc5-lhrpk

Pulled

Successfully pulled image "quay.io/opendatahub/kserve-agent:latest" in 2.317s (2.317s including waiting). Image size: 237801512 bytes.

kserve-ci-e2e-test

kubelet

isvc-sklearn-mcp-predictor-74b9b7ddc5-lhrpk

Started

Started container kserve-agent

kserve-ci-e2e-test

kubelet

isvc-sklearn-mcp-predictor-74b9b7ddc5-lhrpk

Pulled

Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine

kserve-ci-e2e-test

kubelet

isvc-sklearn-mcp-predictor-74b9b7ddc5-lhrpk

Started

Started container kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-sklearn-mcp-predictor-74b9b7ddc5-lhrpk

Created

Created container: kserve-agent

kserve-ci-e2e-test

kubelet

isvc-sklearn-mcp-predictor-74b9b7ddc5-lhrpk

Created

Created container: kube-rbac-proxy
(x14)

kserve-ci-e2e-test

v1beta1Controllers

isvc-sklearn-mcp

VirtualServiceCRDNotFound

Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.

kserve-ci-e2e-test

v1beta1Controllers

isvc-sklearn-mcp

InferenceServiceReady

InferenceService [isvc-sklearn-mcp] is Ready

kserve-ci-e2e-test

v1beta1Controllers

isvc-paddle

UpdateFailed

Failed to update status for InferenceService "isvc-paddle": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-paddle": the object has been modified; please apply your changes to the latest version and try again

kserve-ci-e2e-test

kubelet

isvc-paddle-predictor-6b8b7cfb4b-wd9c5

Created

Created container: storage-initializer

kserve-ci-e2e-test

kubelet

isvc-paddle-predictor-6b8b7cfb4b-wd9c5

Pulled

Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:0dd2798f9cdbffc0563bd148612201df0e589ff8c26a6b19d321fd120fc5c097" already present on machine

kserve-ci-e2e-test

multus

isvc-paddle-predictor-6b8b7cfb4b-wd9c5

AddedInterface

Add eth0 [10.134.0.29/23] from ovn-kubernetes

kserve-ci-e2e-test

kubelet

isvc-sklearn-mcp-predictor-74b9b7ddc5-lhrpk

Killing

Stopping container kserve-agent

kserve-ci-e2e-test

kubelet

isvc-sklearn-mcp-predictor-74b9b7ddc5-lhrpk

Killing

Stopping container kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-sklearn-mcp-predictor-74b9b7ddc5-lhrpk

Killing

Stopping container kserve-container

kserve-ci-e2e-test

kubelet

isvc-paddle-predictor-6b8b7cfb4b-wd9c5

Started

Started container storage-initializer

kserve-ci-e2e-test

deployment-controller

isvc-paddle-predictor

ScalingReplicaSet

Scaled up replica set isvc-paddle-predictor-6b8b7cfb4b from 0 to 1

kserve-ci-e2e-test

replicaset-controller

isvc-paddle-predictor-6b8b7cfb4b

SuccessfulCreate

Created pod: isvc-paddle-predictor-6b8b7cfb4b-wd9c5

kserve-ci-e2e-test

kubelet

isvc-paddle-predictor-6b8b7cfb4b-wd9c5

Pulling

Pulling image "kserve/paddleserver:latest"

kserve-ci-e2e-test

kubelet

isvc-paddle-predictor-6b8b7cfb4b-wd9c5

Started

Started container kserve-container

kserve-ci-e2e-test

kubelet

isvc-paddle-predictor-6b8b7cfb4b-wd9c5

Started

Started container kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-paddle-predictor-6b8b7cfb4b-wd9c5

Created

Created container: kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-paddle-predictor-6b8b7cfb4b-wd9c5

Pulled

Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine

kserve-ci-e2e-test

kubelet

isvc-paddle-predictor-6b8b7cfb4b-wd9c5

Created

Created container: kserve-container

kserve-ci-e2e-test

kubelet

isvc-paddle-predictor-6b8b7cfb4b-wd9c5

Pulled

Successfully pulled image "kserve/paddleserver:latest" in 10.984s (10.984s including waiting). Image size: 1162830075 bytes.
(x4)

kserve-ci-e2e-test

kubelet

isvc-sklearn-mcp-predictor-74b9b7ddc5-lhrpk

Unhealthy

Readiness probe failed: Get "http://10.134.0.28:8080/v1/models/isvc-sklearn-mcp": dial tcp 10.134.0.28:8080: connect: connection refused
(x7)

kserve-ci-e2e-test

kubelet

isvc-sklearn-mcp-predictor-74b9b7ddc5-lhrpk

Unhealthy

Readiness probe failed: Get "https://10.134.0.28:8643/healthz": dial tcp 10.134.0.28:8643: connect: connection refused
(x3)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-paddle-predictor

FailedGetResourceMetric

failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x3)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-paddle-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-paddle-predictor

FailedGetResourceMetric

failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-paddle-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
(x12)

kserve-ci-e2e-test

v1beta1Controllers

isvc-paddle

VirtualServiceCRDNotFound

Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.

kserve-ci-e2e-test

v1beta1Controllers

isvc-paddle

InferenceServiceReady

InferenceService [isvc-paddle] is Ready

kserve-ci-e2e-test

v1beta1Controllers

isvc-paddle-runtime

InternalError

fails to update InferenceService status: Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-paddle-runtime": the object has been modified; please apply your changes to the latest version and try again
(x2)

kserve-ci-e2e-test

v1beta1Controllers

isvc-paddle-runtime

UpdateFailed

Failed to update status for InferenceService "isvc-paddle-runtime": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-paddle-runtime": the object has been modified; please apply your changes to the latest version and try again

kserve-ci-e2e-test

kubelet

isvc-paddle-predictor-6b8b7cfb4b-wd9c5

Killing

Stopping container kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-paddle-predictor-6b8b7cfb4b-wd9c5

Killing

Stopping container kserve-container

kserve-ci-e2e-test

deployment-controller

isvc-paddle-runtime-predictor

ScalingReplicaSet

Scaled up replica set isvc-paddle-runtime-predictor-7f4d4f9dc8 from 0 to 1

kserve-ci-e2e-test

replicaset-controller

isvc-paddle-runtime-predictor-7f4d4f9dc8

SuccessfulCreate

Created pod: isvc-paddle-runtime-predictor-7f4d4f9dc8-lrkjj

kserve-ci-e2e-test

kubelet

isvc-paddle-runtime-predictor-7f4d4f9dc8-lrkjj

Created

Created container: storage-initializer

kserve-ci-e2e-test

kubelet

isvc-paddle-runtime-predictor-7f4d4f9dc8-lrkjj

Pulled

Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:0dd2798f9cdbffc0563bd148612201df0e589ff8c26a6b19d321fd120fc5c097" already present on machine

kserve-ci-e2e-test

kubelet

isvc-paddle-runtime-predictor-7f4d4f9dc8-lrkjj

Started

Started container storage-initializer

kserve-ci-e2e-test

multus

isvc-paddle-runtime-predictor-7f4d4f9dc8-lrkjj

AddedInterface

Add eth0 [10.134.0.30/23] from ovn-kubernetes
(x7)

kserve-ci-e2e-test

kubelet

isvc-paddle-predictor-6b8b7cfb4b-wd9c5

Unhealthy

Readiness probe failed: dial tcp 10.134.0.29:8080: connect: connection refused

kserve-ci-e2e-test

kubelet

isvc-paddle-predictor-6b8b7cfb4b-wd9c5

Unhealthy

Readiness probe failed: Get "https://10.134.0.29:8643/healthz": dial tcp 10.134.0.29:8643: connect: connection refused

kserve-ci-e2e-test

kubelet

isvc-paddle-runtime-predictor-7f4d4f9dc8-lrkjj

Started

Started container kserve-container

kserve-ci-e2e-test

kubelet

isvc-paddle-runtime-predictor-7f4d4f9dc8-lrkjj

Pulled

Container image "kserve/paddleserver:latest" already present on machine

kserve-ci-e2e-test

kubelet

isvc-paddle-runtime-predictor-7f4d4f9dc8-lrkjj

Created

Created container: kserve-container

kserve-ci-e2e-test

kubelet

isvc-paddle-runtime-predictor-7f4d4f9dc8-lrkjj

Pulled

Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine

kserve-ci-e2e-test

kubelet

isvc-paddle-runtime-predictor-7f4d4f9dc8-lrkjj

Created

Created container: kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-paddle-runtime-predictor-7f4d4f9dc8-lrkjj

Started

Started container kube-rbac-proxy
(x2)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-paddle-runtime-predictor

FailedGetResourceMetric

failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x2)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-paddle-runtime-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API

kserve-ci-e2e-test

v1beta1Controllers

isvc-paddle-runtime

InferenceServiceReady

InferenceService [isvc-paddle-runtime] is Ready
(x12)

kserve-ci-e2e-test

v1beta1Controllers

isvc-paddle-runtime

VirtualServiceCRDNotFound

Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.
(x2)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-paddle-runtime-predictor

FailedGetResourceMetric

failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
(x2)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-paddle-runtime-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)

kserve-ci-e2e-test

replicaset-controller

isvc-paddle-v2-kserve-predictor-7dbd59854

SuccessfulCreate

Created pod: isvc-paddle-v2-kserve-predictor-7dbd59854-ls4w5

kserve-ci-e2e-test

kubelet

isvc-paddle-runtime-predictor-7f4d4f9dc8-lrkjj

Killing

Stopping container kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-paddle-v2-kserve-predictor-7dbd59854-ls4w5

Pulled

Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:0dd2798f9cdbffc0563bd148612201df0e589ff8c26a6b19d321fd120fc5c097" already present on machine

kserve-ci-e2e-test

v1beta1Controllers

isvc-paddle-v2-kserve

InternalError

fails to update InferenceService status: Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-paddle-v2-kserve": the object has been modified; please apply your changes to the latest version and try again
(x2)

kserve-ci-e2e-test

v1beta1Controllers

isvc-paddle-v2-kserve

UpdateFailed

Failed to update status for InferenceService "isvc-paddle-v2-kserve": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-paddle-v2-kserve": the object has been modified; please apply your changes to the latest version and try again

kserve-ci-e2e-test

kubelet

isvc-paddle-v2-kserve-predictor-7dbd59854-ls4w5

Created

Created container: storage-initializer

kserve-ci-e2e-test

kubelet

isvc-paddle-v2-kserve-predictor-7dbd59854-ls4w5

Started

Started container storage-initializer

kserve-ci-e2e-test

multus

isvc-paddle-v2-kserve-predictor-7dbd59854-ls4w5

AddedInterface

Add eth0 [10.134.0.31/23] from ovn-kubernetes

kserve-ci-e2e-test

kubelet

isvc-paddle-runtime-predictor-7f4d4f9dc8-lrkjj

Killing

Stopping container kserve-container

kserve-ci-e2e-test

deployment-controller

isvc-paddle-v2-kserve-predictor

ScalingReplicaSet

Scaled up replica set isvc-paddle-v2-kserve-predictor-7dbd59854 from 0 to 1

kserve-ci-e2e-test

kubelet

isvc-paddle-runtime-predictor-7f4d4f9dc8-lrkjj

Unhealthy

Readiness probe failed: Get "https://10.134.0.30:8643/healthz": dial tcp 10.134.0.30:8643: connect: connection refused
(x7)

kserve-ci-e2e-test

kubelet

isvc-paddle-runtime-predictor-7f4d4f9dc8-lrkjj

Unhealthy

Readiness probe failed: dial tcp 10.134.0.30:8080: connect: connection refused

kserve-ci-e2e-test

kubelet

isvc-paddle-v2-kserve-predictor-7dbd59854-ls4w5

Started

Started container kserve-container

kserve-ci-e2e-test

kubelet

isvc-paddle-v2-kserve-predictor-7dbd59854-ls4w5

Created

Created container: kserve-container

kserve-ci-e2e-test

kubelet

isvc-paddle-v2-kserve-predictor-7dbd59854-ls4w5

Pulled

Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine

kserve-ci-e2e-test

kubelet

isvc-paddle-v2-kserve-predictor-7dbd59854-ls4w5

Created

Created container: kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-paddle-v2-kserve-predictor-7dbd59854-ls4w5

Started

Started container kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-paddle-v2-kserve-predictor-7dbd59854-ls4w5

Pulled

Container image "kserve/paddleserver:latest" already present on machine
(x2)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-paddle-v2-kserve-predictor

FailedGetResourceMetric

failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x2)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-paddle-v2-kserve-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x12)

kserve-ci-e2e-test

v1beta1Controllers

isvc-paddle-v2-kserve

VirtualServiceCRDNotFound

Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.

kserve-ci-e2e-test

v1beta1Controllers

isvc-paddle-v2-kserve

InferenceServiceReady

InferenceService [isvc-paddle-v2-kserve] is Ready
(x2)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-paddle-v2-kserve-predictor

FailedGetResourceMetric

failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
(x2)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-paddle-v2-kserve-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)

kserve-ci-e2e-test

v1beta1Controllers

isvc-pmml

UpdateFailed

Failed to update status for InferenceService "isvc-pmml": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-pmml": the object has been modified; please apply your changes to the latest version and try again

kserve-ci-e2e-test

deployment-controller

isvc-pmml-predictor

ScalingReplicaSet

Scaled up replica set isvc-pmml-predictor-8bb578669 from 0 to 1

kserve-ci-e2e-test

replicaset-controller

isvc-pmml-predictor-8bb578669

SuccessfulCreate

Created pod: isvc-pmml-predictor-8bb578669-qq7sw

kserve-ci-e2e-test

kubelet

isvc-paddle-v2-kserve-predictor-7dbd59854-ls4w5

Killing

Stopping container kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-paddle-v2-kserve-predictor-7dbd59854-ls4w5

Killing

Stopping container kserve-container

kserve-ci-e2e-test

kubelet

isvc-pmml-predictor-8bb578669-qq7sw

Started

Started container storage-initializer

kserve-ci-e2e-test

multus

isvc-pmml-predictor-8bb578669-qq7sw

AddedInterface

Add eth0 [10.134.0.32/23] from ovn-kubernetes

kserve-ci-e2e-test

kubelet

isvc-pmml-predictor-8bb578669-qq7sw

Pulled

Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:0dd2798f9cdbffc0563bd148612201df0e589ff8c26a6b19d321fd120fc5c097" already present on machine

kserve-ci-e2e-test

kubelet

isvc-pmml-predictor-8bb578669-qq7sw

Created

Created container: storage-initializer
(x7)

kserve-ci-e2e-test

kubelet

isvc-paddle-v2-kserve-predictor-7dbd59854-ls4w5

Unhealthy

Readiness probe failed: dial tcp 10.134.0.31:8080: connect: connection refused

kserve-ci-e2e-test

kubelet

isvc-paddle-v2-kserve-predictor-7dbd59854-ls4w5

Unhealthy

Readiness probe failed: Get "https://10.134.0.31:8643/healthz": dial tcp 10.134.0.31:8643: connect: connection refused

kserve-ci-e2e-test

kubelet

isvc-pmml-predictor-8bb578669-qq7sw

Pulling

Pulling image "kserve/pmmlserver:latest"

kserve-ci-e2e-test

kubelet

isvc-pmml-predictor-8bb578669-qq7sw

Created

Created container: kserve-container

kserve-ci-e2e-test

kubelet

isvc-pmml-predictor-8bb578669-qq7sw

Pulled

Successfully pulled image "kserve/pmmlserver:latest" in 6.61s (6.61s including waiting). Image size: 800927094 bytes.

kserve-ci-e2e-test

kubelet

isvc-pmml-predictor-8bb578669-qq7sw

Started

Started container kserve-container

kserve-ci-e2e-test

kubelet

isvc-pmml-predictor-8bb578669-qq7sw

Pulled

Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine

kserve-ci-e2e-test

kubelet

isvc-pmml-predictor-8bb578669-qq7sw

Created

Created container: kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-pmml-predictor-8bb578669-qq7sw

Started

Started container kube-rbac-proxy
(x3)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-pmml-predictor

FailedGetResourceMetric

failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x3)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-pmml-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x9)

kserve-ci-e2e-test

kubelet

isvc-pmml-predictor-8bb578669-qq7sw

Unhealthy

Readiness probe failed: dial tcp 10.134.0.32:8080: connect: connection refused

kserve-ci-e2e-test

v1beta1Controllers

isvc-pmml

InferenceServiceReady

InferenceService [isvc-pmml] is Ready
(x13)

kserve-ci-e2e-test

v1beta1Controllers

isvc-pmml

VirtualServiceCRDNotFound

Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.
(x3)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-pmml-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
(x3)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-pmml-predictor

FailedGetResourceMetric

failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)

kserve-ci-e2e-test

deployment-controller

isvc-pmml-runtime-predictor

ScalingReplicaSet

Scaled up replica set isvc-pmml-runtime-predictor-67bc544947 from 0 to 1

kserve-ci-e2e-test

multus

isvc-pmml-runtime-predictor-67bc544947-kz8ss

AddedInterface

Add eth0 [10.134.0.33/23] from ovn-kubernetes

kserve-ci-e2e-test

replicaset-controller

isvc-pmml-runtime-predictor-67bc544947

SuccessfulCreate

Created pod: isvc-pmml-runtime-predictor-67bc544947-kz8ss

kserve-ci-e2e-test

kubelet

isvc-pmml-predictor-8bb578669-qq7sw

Killing

Stopping container kserve-container

kserve-ci-e2e-test

kubelet

isvc-pmml-predictor-8bb578669-qq7sw

Killing

Stopping container kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-pmml-runtime-predictor-67bc544947-kz8ss

Pulled

Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:0dd2798f9cdbffc0563bd148612201df0e589ff8c26a6b19d321fd120fc5c097" already present on machine
(x2)

kserve-ci-e2e-test

v1beta1Controllers

isvc-pmml-runtime

UpdateFailed

Failed to update status for InferenceService "isvc-pmml-runtime": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-pmml-runtime": the object has been modified; please apply your changes to the latest version and try again

kserve-ci-e2e-test

v1beta1Controllers

isvc-pmml-runtime

InternalError

fails to update InferenceService status: Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-pmml-runtime": the object has been modified; please apply your changes to the latest version and try again

kserve-ci-e2e-test

kubelet

isvc-pmml-runtime-predictor-67bc544947-kz8ss

Created

Created container: storage-initializer

kserve-ci-e2e-test

kubelet

isvc-pmml-runtime-predictor-67bc544947-kz8ss

Started

Started container storage-initializer

kserve-ci-e2e-test

kubelet

isvc-pmml-predictor-8bb578669-qq7sw

Unhealthy

Readiness probe failed: Get "https://10.134.0.32:8643/healthz": dial tcp 10.134.0.32:8643: connect: connection refused

kserve-ci-e2e-test

kubelet

isvc-pmml-runtime-predictor-67bc544947-kz8ss

Created

Created container: kserve-container

kserve-ci-e2e-test

kubelet

isvc-pmml-runtime-predictor-67bc544947-kz8ss

Pulled

Container image "kserve/pmmlserver:latest" already present on machine

kserve-ci-e2e-test

kubelet

isvc-pmml-runtime-predictor-67bc544947-kz8ss

Started

Started container kserve-container

kserve-ci-e2e-test

kubelet

isvc-pmml-runtime-predictor-67bc544947-kz8ss

Pulled

Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine

kserve-ci-e2e-test

kubelet

isvc-pmml-runtime-predictor-67bc544947-kz8ss

Created

Created container: kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-pmml-runtime-predictor-67bc544947-kz8ss

Started

Started container kube-rbac-proxy
(x2)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-pmml-runtime-predictor

FailedGetResourceMetric

failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x2)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-pmml-runtime-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x10)

kserve-ci-e2e-test

kubelet

isvc-pmml-runtime-predictor-67bc544947-kz8ss

Unhealthy

Readiness probe failed: dial tcp 10.134.0.33:8080: connect: connection refused

kserve-ci-e2e-test

v1beta1Controllers

isvc-pmml-runtime

InferenceServiceReady

InferenceService [isvc-pmml-runtime] is Ready
(x12)

kserve-ci-e2e-test

v1beta1Controllers

isvc-pmml-runtime

VirtualServiceCRDNotFound

Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.
(x4)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-pmml-runtime-predictor

FailedGetResourceMetric

failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
(x4)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-pmml-runtime-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)

kserve-ci-e2e-test

kubelet

isvc-pmml-runtime-predictor-67bc544947-kz8ss

Killing

Stopping container kserve-container

kserve-ci-e2e-test

replicaset-controller

isvc-pmml-v2-kserve-predictor-6578f8ffc7

SuccessfulCreate

Created pod: isvc-pmml-v2-kserve-predictor-6578f8ffc7-hvjm4

kserve-ci-e2e-test

kubelet

isvc-pmml-v2-kserve-predictor-6578f8ffc7-hvjm4

FailedMount

MountVolume.SetUp failed for volume "proxy-tls" : secret "isvc-pmml-v2-kserve-predictor-serving-cert" not found

kserve-ci-e2e-test

kubelet

isvc-pmml-runtime-predictor-67bc544947-kz8ss

Killing

Stopping container kube-rbac-proxy

kserve-ci-e2e-test

deployment-controller

isvc-pmml-v2-kserve-predictor

ScalingReplicaSet

Scaled up replica set isvc-pmml-v2-kserve-predictor-6578f8ffc7 from 0 to 1
(x2)

kserve-ci-e2e-test

v1beta1Controllers

isvc-pmml-v2-kserve

UpdateFailed

Failed to update status for InferenceService "isvc-pmml-v2-kserve": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-pmml-v2-kserve": the object has been modified; please apply your changes to the latest version and try again

kserve-ci-e2e-test

v1beta1Controllers

isvc-pmml-v2-kserve

InternalError

fails to update InferenceService status: Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-pmml-v2-kserve": the object has been modified; please apply your changes to the latest version and try again

kserve-ci-e2e-test

kubelet

isvc-pmml-v2-kserve-predictor-6578f8ffc7-hvjm4

Created

Created container: storage-initializer

kserve-ci-e2e-test

kubelet

isvc-pmml-v2-kserve-predictor-6578f8ffc7-hvjm4

Pulled

Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:0dd2798f9cdbffc0563bd148612201df0e589ff8c26a6b19d321fd120fc5c097" already present on machine

kserve-ci-e2e-test

kubelet

isvc-pmml-v2-kserve-predictor-6578f8ffc7-hvjm4

Started

Started container storage-initializer

kserve-ci-e2e-test

multus

isvc-pmml-v2-kserve-predictor-6578f8ffc7-hvjm4

AddedInterface

Add eth0 [10.134.0.34/23] from ovn-kubernetes

kserve-ci-e2e-test

kubelet

isvc-pmml-runtime-predictor-67bc544947-kz8ss

Unhealthy

Readiness probe failed: Get "https://10.134.0.33:8643/healthz": dial tcp 10.134.0.33:8643: connect: connection refused

kserve-ci-e2e-test

kubelet

isvc-pmml-v2-kserve-predictor-6578f8ffc7-hvjm4

Pulled

Container image "kserve/pmmlserver:latest" already present on machine

kserve-ci-e2e-test

kubelet

isvc-pmml-v2-kserve-predictor-6578f8ffc7-hvjm4

Created

Created container: kserve-container

kserve-ci-e2e-test

kubelet

isvc-pmml-v2-kserve-predictor-6578f8ffc7-hvjm4

Created

Created container: kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-pmml-v2-kserve-predictor-6578f8ffc7-hvjm4

Started

Started container kserve-container

kserve-ci-e2e-test

kubelet

isvc-pmml-v2-kserve-predictor-6578f8ffc7-hvjm4

Pulled

Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine

kserve-ci-e2e-test

kubelet

isvc-pmml-v2-kserve-predictor-6578f8ffc7-hvjm4

Started

Started container kube-rbac-proxy
(x3)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-pmml-v2-kserve-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x3)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-pmml-v2-kserve-predictor

FailedGetResourceMetric

failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x10)

kserve-ci-e2e-test

kubelet

isvc-pmml-v2-kserve-predictor-6578f8ffc7-hvjm4

Unhealthy

Readiness probe failed: dial tcp 10.134.0.34:8080: connect: connection refused
(x13)

kserve-ci-e2e-test

v1beta1Controllers

isvc-pmml-v2-kserve

VirtualServiceCRDNotFound

Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.

kserve-ci-e2e-test

v1beta1Controllers

isvc-pmml-v2-kserve

InferenceServiceReady

InferenceService [isvc-pmml-v2-kserve] is Ready
(x3)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-pmml-v2-kserve-predictor

FailedGetResourceMetric

failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
(x3)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-pmml-v2-kserve-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)

kserve-ci-e2e-test

kubelet

isvc-pmml-v2-kserve-predictor-6578f8ffc7-hvjm4

Killing

Stopping container kserve-container

kserve-ci-e2e-test

kubelet

isvc-pmml-v2-kserve-predictor-6578f8ffc7-hvjm4

Killing

Stopping container kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-primary-33ccfc-predictor-7689d4bb45-vh944

FailedMount

MountVolume.SetUp failed for volume "proxy-tls" : secret "isvc-primary-33ccfc-predictor-serving-cert" not found

kserve-ci-e2e-test

replicaset-controller

isvc-primary-33ccfc-predictor-7689d4bb45

SuccessfulCreate

Created pod: isvc-primary-33ccfc-predictor-7689d4bb45-vh944

kserve-ci-e2e-test

deployment-controller

isvc-primary-33ccfc-predictor

ScalingReplicaSet

Scaled up replica set isvc-primary-33ccfc-predictor-7689d4bb45 from 0 to 1

kserve-ci-e2e-test

v1beta1Controllers

isvc-primary-33ccfc

UpdateFailed

Failed to update status for InferenceService "isvc-primary-33ccfc": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-primary-33ccfc": the object has been modified; please apply your changes to the latest version and try again

kserve-ci-e2e-test

v1beta1Controllers

isvc-primary-33ccfc

InternalError

fails to update InferenceService status: Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-primary-33ccfc": the object has been modified; please apply your changes to the latest version and try again

kserve-ci-e2e-test

kubelet

isvc-pmml-v2-kserve-predictor-6578f8ffc7-hvjm4

Unhealthy

Readiness probe failed: Get "https://10.134.0.34:8643/healthz": dial tcp 10.134.0.34:8643: connect: connection refused

kserve-ci-e2e-test

kubelet

isvc-primary-33ccfc-predictor-7689d4bb45-vh944

Started

Started container storage-initializer

kserve-ci-e2e-test

kubelet

isvc-primary-33ccfc-predictor-7689d4bb45-vh944

Created

Created container: storage-initializer

kserve-ci-e2e-test

kubelet

isvc-primary-33ccfc-predictor-7689d4bb45-vh944

Pulled

Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:0dd2798f9cdbffc0563bd148612201df0e589ff8c26a6b19d321fd120fc5c097" already present on machine

kserve-ci-e2e-test

multus

isvc-primary-33ccfc-predictor-7689d4bb45-vh944

AddedInterface

Add eth0 [10.134.0.35/23] from ovn-kubernetes

kserve-ci-e2e-test

kubelet

isvc-primary-33ccfc-predictor-7689d4bb45-vh944

Pulled

Container image "quay.io/opendatahub/sklearn-serving-runtime:odh-pr-1450" already present on machine

kserve-ci-e2e-test

kubelet

isvc-primary-33ccfc-predictor-7689d4bb45-vh944

Started

Started container kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-primary-33ccfc-predictor-7689d4bb45-vh944

Pulled

Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine

kserve-ci-e2e-test

kubelet

isvc-primary-33ccfc-predictor-7689d4bb45-vh944

Started

Started container kserve-container

kserve-ci-e2e-test

kubelet

isvc-primary-33ccfc-predictor-7689d4bb45-vh944

Created

Created container: kserve-container

kserve-ci-e2e-test

kubelet

isvc-primary-33ccfc-predictor-7689d4bb45-vh944

Created

Created container: kube-rbac-proxy
(x3)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-primary-33ccfc-predictor

FailedGetResourceMetric

failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x3)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-primary-33ccfc-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x9)

kserve-ci-e2e-test

kubelet

isvc-primary-33ccfc-predictor-7689d4bb45-vh944

Unhealthy

Readiness probe failed: dial tcp 10.134.0.35:8080: connect: connection refused

kserve-ci-e2e-test

v1beta1Controllers

isvc-primary-33ccfc

InferenceServiceReady

InferenceService [isvc-primary-33ccfc] is Ready
(x13)

kserve-ci-e2e-test

v1beta1Controllers

isvc-primary-33ccfc

VirtualServiceCRDNotFound

Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.

kserve-ci-e2e-test

v1beta1Controllers

isvc-secondary-33ccfc

UpdateFailed

Failed to update status for InferenceService "isvc-secondary-33ccfc": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-secondary-33ccfc": the object has been modified; please apply your changes to the latest version and try again

kserve-ci-e2e-test

replicaset-controller

isvc-secondary-33ccfc-predictor-9785c9d8b

SuccessfulCreate

Created pod: isvc-secondary-33ccfc-predictor-9785c9d8b-r64mt

kserve-ci-e2e-test

deployment-controller

isvc-secondary-33ccfc-predictor

ScalingReplicaSet

Scaled up replica set isvc-secondary-33ccfc-predictor-9785c9d8b from 0 to 1

kserve-ci-e2e-test

v1beta1Controllers

isvc-secondary-33ccfc

InternalError

fails to update InferenceService status: Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-secondary-33ccfc": the object has been modified; please apply your changes to the latest version and try again

kserve-ci-e2e-test

multus

isvc-secondary-33ccfc-predictor-9785c9d8b-r64mt

AddedInterface

Add eth0 [10.134.0.36/23] from ovn-kubernetes
(x2)

kserve-ci-e2e-test

kubelet

isvc-secondary-33ccfc-predictor-9785c9d8b-r64mt

Started

Started container storage-initializer
(x2)

kserve-ci-e2e-test

kubelet

isvc-secondary-33ccfc-predictor-9785c9d8b-r64mt

Created

Created container: storage-initializer
(x2)

kserve-ci-e2e-test

kubelet

isvc-secondary-33ccfc-predictor-9785c9d8b-r64mt

Pulled

Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:0dd2798f9cdbffc0563bd148612201df0e589ff8c26a6b19d321fd120fc5c097" already present on machine
(x3)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-primary-33ccfc-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
(x3)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-primary-33ccfc-predictor

FailedGetResourceMetric

failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)

kserve-ci-e2e-test

kubelet

isvc-secondary-33ccfc-predictor-9785c9d8b-r64mt

BackOff

Back-off restarting failed container storage-initializer in pod isvc-secondary-33ccfc-predictor-9785c9d8b-r64mt_kserve-ci-e2e-test(0ed92957-a554-4f7a-b05e-757868f87520)
(x13)

kserve-ci-e2e-test

v1beta1Controllers

isvc-secondary-33ccfc

VirtualServiceCRDNotFound

Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-secondary-33ccfc-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-secondary-33ccfc-predictor

FailedGetResourceMetric

failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x2)

kserve-ci-e2e-test

v1beta1Controllers

isvc-init-fail-463e12

UpdateFailed

Failed to update status for InferenceService "isvc-init-fail-463e12": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-init-fail-463e12": the object has been modified; please apply your changes to the latest version and try again

kserve-ci-e2e-test

kubelet

isvc-primary-33ccfc-predictor-7689d4bb45-vh944

Killing

Stopping container kserve-container

kserve-ci-e2e-test

kubelet

isvc-primary-33ccfc-predictor-7689d4bb45-vh944

Killing

Stopping container kube-rbac-proxy
(x2)

kserve-ci-e2e-test

v1beta1Controllers

isvc-init-fail-463e12

InternalError

fails to update InferenceService status: Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-init-fail-463e12": the object has been modified; please apply your changes to the latest version and try again

kserve-ci-e2e-test

deployment-controller

isvc-init-fail-463e12-predictor

ScalingReplicaSet

Scaled up replica set isvc-init-fail-463e12-predictor-58bcf495d8 from 0 to 1

kserve-ci-e2e-test

replicaset-controller

isvc-init-fail-463e12-predictor-58bcf495d8

SuccessfulCreate

Created pod: isvc-init-fail-463e12-predictor-58bcf495d8-qzpv6

kserve-ci-e2e-test

multus

isvc-init-fail-463e12-predictor-58bcf495d8-qzpv6

AddedInterface

Add eth0 [10.134.0.37/23] from ovn-kubernetes

kserve-ci-e2e-test

kubelet

isvc-primary-33ccfc-predictor-7689d4bb45-vh944

Unhealthy

Readiness probe failed: Get "https://10.134.0.35:8643/healthz": dial tcp 10.134.0.35:8643: connect: connection refused
(x2)

kserve-ci-e2e-test

kubelet

isvc-init-fail-463e12-predictor-58bcf495d8-qzpv6

Started

Started container storage-initializer
(x2)

kserve-ci-e2e-test

kubelet

isvc-init-fail-463e12-predictor-58bcf495d8-qzpv6

Created

Created container: storage-initializer
(x2)

kserve-ci-e2e-test

kubelet

isvc-init-fail-463e12-predictor-58bcf495d8-qzpv6

Pulled

Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:0dd2798f9cdbffc0563bd148612201df0e589ff8c26a6b19d321fd120fc5c097" already present on machine
(x12)

kserve-ci-e2e-test

v1beta1Controllers

isvc-init-fail-463e12

VirtualServiceCRDNotFound

Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.

kserve-ci-e2e-test

deployment-controller

isvc-predictive-sklearn-predictor

ScalingReplicaSet

Scaled up replica set isvc-predictive-sklearn-predictor-cd7c759c9 from 0 to 1

kserve-ci-e2e-test

kubelet

isvc-init-fail-463e12-predictor-58bcf495d8-qzpv6

BackOff

Back-off restarting failed container storage-initializer in pod isvc-init-fail-463e12-predictor-58bcf495d8-qzpv6_kserve-ci-e2e-test(410bd2b7-bd30-4c61-97ec-32c368502e45)

kserve-ci-e2e-test

replicaset-controller

isvc-predictive-sklearn-predictor-cd7c759c9

SuccessfulCreate

Created pod: isvc-predictive-sklearn-predictor-cd7c759c9-c9l4z

kserve-ci-e2e-test

v1beta1Controllers

isvc-predictive-sklearn

UpdateFailed

Failed to update status for InferenceService "isvc-predictive-sklearn": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-predictive-sklearn": the object has been modified; please apply your changes to the latest version and try again

kserve-ci-e2e-test

kubelet

isvc-predictive-sklearn-predictor-cd7c759c9-c9l4z

FailedMount

MountVolume.SetUp failed for volume "proxy-tls" : secret "isvc-predictive-sklearn-predictor-serving-cert" not found

kserve-ci-e2e-test

multus

isvc-predictive-sklearn-predictor-cd7c759c9-c9l4z

AddedInterface

Add eth0 [10.134.0.38/23] from ovn-kubernetes

kserve-ci-e2e-test

kubelet

isvc-predictive-sklearn-predictor-cd7c759c9-c9l4z

Pulled

Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:0dd2798f9cdbffc0563bd148612201df0e589ff8c26a6b19d321fd120fc5c097" already present on machine

Count  Namespace  Component  RelatedObject  Reason  Message
-  kserve-ci-e2e-test  kubelet  isvc-predictive-sklearn-predictor-cd7c759c9-c9l4z  Created  Created container: storage-initializer
-  kserve-ci-e2e-test  kubelet  isvc-predictive-sklearn-predictor-cd7c759c9-c9l4z  Started  Started container storage-initializer
-  kserve-ci-e2e-test  kubelet  isvc-predictive-sklearn-predictor-cd7c759c9-c9l4z  Pulling  Pulling image "kserve/predictiveserver:latest"
-  kserve-ci-e2e-test  kubelet  isvc-predictive-sklearn-predictor-cd7c759c9-c9l4z  Created  Created container: kube-rbac-proxy
-  kserve-ci-e2e-test  kubelet  isvc-predictive-sklearn-predictor-cd7c759c9-c9l4z  Pulled  Successfully pulled image "kserve/predictiveserver:latest" in 17.124s (17.124s including waiting). Image size: 2324227435 bytes.
-  kserve-ci-e2e-test  kubelet  isvc-predictive-sklearn-predictor-cd7c759c9-c9l4z  Created  Created container: kserve-container
-  kserve-ci-e2e-test  kubelet  isvc-predictive-sklearn-predictor-cd7c759c9-c9l4z  Started  Started container kserve-container
-  kserve-ci-e2e-test  kubelet  isvc-predictive-sklearn-predictor-cd7c759c9-c9l4z  Pulled  Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine
-  kserve-ci-e2e-test  kubelet  isvc-predictive-sklearn-predictor-cd7c759c9-c9l4z  Started  Started container kube-rbac-proxy
(x4)  kserve-ci-e2e-test  horizontal-pod-autoscaler  isvc-predictive-sklearn-predictor  FailedComputeMetricsReplicas  invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x4)  kserve-ci-e2e-test  horizontal-pod-autoscaler  isvc-predictive-sklearn-predictor  FailedGetResourceMetric  failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x2)  kserve-ci-e2e-test  horizontal-pod-autoscaler  isvc-predictive-sklearn-predictor  FailedComputeMetricsReplicas  invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
(x9)  kserve-ci-e2e-test  kubelet  isvc-predictive-sklearn-predictor-cd7c759c9-c9l4z  Unhealthy  Readiness probe failed: dial tcp 10.134.0.38:8080: connect: connection refused
(x2)  kserve-ci-e2e-test  horizontal-pod-autoscaler  isvc-predictive-sklearn-predictor  FailedGetResourceMetric  failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
(x10)  kserve-ci-e2e-test  v1beta1Controllers  isvc-predictive-sklearn  VirtualServiceCRDNotFound  Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.
-  kserve-ci-e2e-test  v1beta1Controllers  isvc-predictive-sklearn  InferenceServiceReady  InferenceService [isvc-predictive-sklearn] is Ready
-  kserve-ci-e2e-test  multus  isvc-predictive-xgboost-predictor-7ff98fd74d-tbpm6  AddedInterface  Add eth0 [10.134.0.39/23] from ovn-kubernetes
-  kserve-ci-e2e-test  kubelet  isvc-predictive-xgboost-predictor-7ff98fd74d-tbpm6  Started  Started container storage-initializer
-  kserve-ci-e2e-test  kubelet  isvc-predictive-sklearn-predictor-cd7c759c9-c9l4z  Killing  Stopping container kube-rbac-proxy
-  kserve-ci-e2e-test  v1beta1Controllers  isvc-predictive-xgboost  InternalError  fails to update InferenceService status: Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-predictive-xgboost": the object has been modified; please apply your changes to the latest version and try again
(x2)  kserve-ci-e2e-test  v1beta1Controllers  isvc-predictive-xgboost  UpdateFailed  Failed to update status for InferenceService "isvc-predictive-xgboost": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-predictive-xgboost": the object has been modified; please apply your changes to the latest version and try again
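The InternalError/UpdateFailed events are Kubernetes optimistic-concurrency conflicts: the controller submitted a status update built from a stale resourceVersion, so the API server rejected it. These resolve themselves when the controller refetches and retries. A minimal, library-free sketch of that retry pattern; the in-memory `Store` and `Conflict` types are hypothetical stand-ins for the API server (client-go exposes the real thing as `retry.RetryOnConflict`):

```python
class Conflict(Exception):
    """Raised when the stored resourceVersion no longer matches the caller's copy."""

class Store:
    """Toy stand-in for the API server's versioned object storage."""
    def __init__(self, obj):
        self.obj = dict(obj, resourceVersion=1)

    def get(self):
        return dict(self.obj)  # return a copy, like a GET

    def update(self, obj):
        # Reject writes built from a stale version, like a PUT with a
        # mismatched resourceVersion.
        if obj["resourceVersion"] != self.obj["resourceVersion"]:
            raise Conflict("the object has been modified; please apply your "
                           "changes to the latest version and try again")
        self.obj = dict(obj, resourceVersion=self.obj["resourceVersion"] + 1)

def retry_on_conflict(store, mutate, attempts=5):
    """Refetch and reapply `mutate` until the update lands or retries run out."""
    for _ in range(attempts):
        obj = store.get()
        mutate(obj)
        try:
            store.update(obj)
            return obj
        except Conflict:
            continue  # another writer won the race; refetch and retry
    raise Conflict("out of retries")
```

Because `mutate` is reapplied to a fresh copy on every attempt, a lost race costs only an extra GET/PUT round trip, which is why these events are noisy but harmless in a CI run.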

-  kserve-ci-e2e-test  kubelet  isvc-predictive-sklearn-predictor-cd7c759c9-c9l4z  Unhealthy  Readiness probe failed: Get "https://10.134.0.38:8643/healthz": dial tcp 10.134.0.38:8643: connect: connection refused
-  kserve-ci-e2e-test  kubelet  isvc-predictive-sklearn-predictor-cd7c759c9-c9l4z  Killing  Stopping container kserve-container
-  kserve-ci-e2e-test  kubelet  isvc-predictive-xgboost-predictor-7ff98fd74d-tbpm6  Pulled  Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:0dd2798f9cdbffc0563bd148612201df0e589ff8c26a6b19d321fd120fc5c097" already present on machine
-  kserve-ci-e2e-test  kubelet  isvc-predictive-xgboost-predictor-7ff98fd74d-tbpm6  Created  Created container: storage-initializer
-  kserve-ci-e2e-test  replicaset-controller  isvc-predictive-xgboost-predictor-7ff98fd74d  SuccessfulCreate  Created pod: isvc-predictive-xgboost-predictor-7ff98fd74d-tbpm6
-  kserve-ci-e2e-test  deployment-controller  isvc-predictive-xgboost-predictor  ScalingReplicaSet  Scaled up replica set isvc-predictive-xgboost-predictor-7ff98fd74d from 0 to 1
-  kserve-ci-e2e-test  kubelet  isvc-predictive-xgboost-predictor-7ff98fd74d-tbpm6  Pulled  Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine
-  kserve-ci-e2e-test  kubelet  isvc-predictive-xgboost-predictor-7ff98fd74d-tbpm6  Pulled  Container image "kserve/predictiveserver:latest" already present on machine
-  kserve-ci-e2e-test  kubelet  isvc-predictive-xgboost-predictor-7ff98fd74d-tbpm6  Created  Created container: kserve-container
-  kserve-ci-e2e-test  kubelet  isvc-predictive-xgboost-predictor-7ff98fd74d-tbpm6  Started  Started container kserve-container
-  kserve-ci-e2e-test  kubelet  isvc-predictive-xgboost-predictor-7ff98fd74d-tbpm6  Started  Started container kube-rbac-proxy
-  kserve-ci-e2e-test  kubelet  isvc-predictive-xgboost-predictor-7ff98fd74d-tbpm6  Created  Created container: kube-rbac-proxy
(x2)  kserve-ci-e2e-test  horizontal-pod-autoscaler  isvc-predictive-xgboost-predictor  FailedGetResourceMetric  failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x2)  kserve-ci-e2e-test  horizontal-pod-autoscaler  isvc-predictive-xgboost-predictor  FailedComputeMetricsReplicas  invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x12)  kserve-ci-e2e-test  v1beta1Controllers  isvc-predictive-xgboost  VirtualServiceCRDNotFound  Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.
-  kserve-ci-e2e-test  v1beta1Controllers  isvc-predictive-xgboost  InferenceServiceReady  InferenceService [isvc-predictive-xgboost] is Ready
(x4)  kserve-ci-e2e-test  horizontal-pod-autoscaler  isvc-predictive-xgboost-predictor  FailedComputeMetricsReplicas  invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
-  kserve-ci-e2e-test  kubelet  isvc-predictive-xgboost-predictor-7ff98fd74d-tbpm6  Killing  Stopping container kserve-container
-  kserve-ci-e2e-test  kubelet  isvc-predictive-xgboost-predictor-7ff98fd74d-tbpm6  Killing  Stopping container kube-rbac-proxy
-  kserve-ci-e2e-test  multus  isvc-predictive-lightgbm-predictor-75cb94f9f-92wfl  AddedInterface  Add eth0 [10.134.0.40/23] from ovn-kubernetes
(x4)  kserve-ci-e2e-test  horizontal-pod-autoscaler  isvc-predictive-xgboost-predictor  FailedGetResourceMetric  failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
-  kserve-ci-e2e-test  v1beta1Controllers  isvc-predictive-lightgbm  UpdateFailed  Failed to update status for InferenceService "isvc-predictive-lightgbm": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-predictive-lightgbm": the object has been modified; please apply your changes to the latest version and try again
-  kserve-ci-e2e-test  deployment-controller  isvc-predictive-lightgbm-predictor  ScalingReplicaSet  Scaled up replica set isvc-predictive-lightgbm-predictor-75cb94f9f from 0 to 1
-  kserve-ci-e2e-test  replicaset-controller  isvc-predictive-lightgbm-predictor-75cb94f9f  SuccessfulCreate  Created pod: isvc-predictive-lightgbm-predictor-75cb94f9f-92wfl
-  kserve-ci-e2e-test  kubelet  isvc-predictive-xgboost-predictor-7ff98fd74d-tbpm6  Unhealthy  Readiness probe failed: Get "https://10.134.0.39:8643/healthz": dial tcp 10.134.0.39:8643: connect: connection refused
-  kserve-ci-e2e-test  kubelet  isvc-predictive-lightgbm-predictor-75cb94f9f-92wfl  Pulled  Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:0dd2798f9cdbffc0563bd148612201df0e589ff8c26a6b19d321fd120fc5c097" already present on machine
-  kserve-ci-e2e-test  kubelet  isvc-predictive-lightgbm-predictor-75cb94f9f-92wfl  Created  Created container: storage-initializer
-  kserve-ci-e2e-test  kubelet  isvc-predictive-lightgbm-predictor-75cb94f9f-92wfl  Started  Started container storage-initializer
(x10)  kserve-ci-e2e-test  kubelet  isvc-predictive-xgboost-predictor-7ff98fd74d-tbpm6  Unhealthy  Readiness probe failed: dial tcp 10.134.0.39:8080: connect: connection refused
-  kserve-ci-e2e-test  kubelet  isvc-predictive-lightgbm-predictor-75cb94f9f-92wfl  Created  Created container: kserve-container
-  kserve-ci-e2e-test  kubelet  isvc-predictive-lightgbm-predictor-75cb94f9f-92wfl  Started  Started container kube-rbac-proxy
-  kserve-ci-e2e-test  kubelet  isvc-predictive-lightgbm-predictor-75cb94f9f-92wfl  Pulled  Container image "kserve/predictiveserver:latest" already present on machine
-  kserve-ci-e2e-test  kubelet  isvc-predictive-lightgbm-predictor-75cb94f9f-92wfl  Started  Started container kserve-container
-  kserve-ci-e2e-test  kubelet  isvc-predictive-lightgbm-predictor-75cb94f9f-92wfl  Pulled  Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine
-  kserve-ci-e2e-test  kubelet  isvc-predictive-lightgbm-predictor-75cb94f9f-92wfl  Created  Created container: kube-rbac-proxy
(x2)  kserve-ci-e2e-test  horizontal-pod-autoscaler  isvc-predictive-lightgbm-predictor  FailedComputeMetricsReplicas  invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x2)  kserve-ci-e2e-test  horizontal-pod-autoscaler  isvc-predictive-lightgbm-predictor  FailedGetResourceMetric  failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
-  kserve-ci-e2e-test  v1beta1Controllers  isvc-predictive-lightgbm  InferenceServiceReady  InferenceService [isvc-predictive-lightgbm] is Ready
(x13)  kserve-ci-e2e-test  v1beta1Controllers  isvc-predictive-lightgbm  VirtualServiceCRDNotFound  Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.
-  kserve-ci-e2e-test  v1beta1Controllers  isvc-predictive-sklearn-v2  InternalError  fails to update InferenceService status: Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-predictive-sklearn-v2": the object has been modified; please apply your changes to the latest version and try again
-  kserve-ci-e2e-test  deployment-controller  isvc-predictive-sklearn-v2-predictor  ScalingReplicaSet  Scaled up replica set isvc-predictive-sklearn-v2-predictor-b5d4f6b79 from 0 to 1
(x4)  kserve-ci-e2e-test  horizontal-pod-autoscaler  isvc-predictive-lightgbm-predictor  FailedGetResourceMetric  failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
-  kserve-ci-e2e-test  replicaset-controller  isvc-predictive-sklearn-v2-predictor-b5d4f6b79  SuccessfulCreate  Created pod: isvc-predictive-sklearn-v2-predictor-b5d4f6b79-dp7nj
(x4)  kserve-ci-e2e-test  horizontal-pod-autoscaler  isvc-predictive-lightgbm-predictor  FailedComputeMetricsReplicas  invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
-  kserve-ci-e2e-test  kubelet  isvc-predictive-lightgbm-predictor-75cb94f9f-92wfl  Killing  Stopping container kube-rbac-proxy
-  kserve-ci-e2e-test  kubelet  isvc-predictive-lightgbm-predictor-75cb94f9f-92wfl  Killing  Stopping container kserve-container
(x2)  kserve-ci-e2e-test  v1beta1Controllers  isvc-predictive-sklearn-v2  UpdateFailed  Failed to update status for InferenceService "isvc-predictive-sklearn-v2": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-predictive-sklearn-v2": the object has been modified; please apply your changes to the latest version and try again
(x10)  kserve-ci-e2e-test  kubelet  isvc-predictive-lightgbm-predictor-75cb94f9f-92wfl  Unhealthy  Readiness probe failed: dial tcp 10.134.0.40:8080: connect: connection refused
-  kserve-ci-e2e-test  kubelet  isvc-predictive-sklearn-v2-predictor-b5d4f6b79-dp7nj  Pulled  Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:0dd2798f9cdbffc0563bd148612201df0e589ff8c26a6b19d321fd120fc5c097" already present on machine
-  kserve-ci-e2e-test  multus  isvc-predictive-sklearn-v2-predictor-b5d4f6b79-dp7nj  AddedInterface  Add eth0 [10.134.0.41/23] from ovn-kubernetes
-  kserve-ci-e2e-test  kubelet  isvc-predictive-sklearn-v2-predictor-b5d4f6b79-dp7nj  FailedMount  MountVolume.SetUp failed for volume "proxy-tls" : secret "isvc-predictive-sklearn-v2-predictor-serving-cert" not found
-  kserve-ci-e2e-test  kubelet  isvc-predictive-sklearn-v2-predictor-b5d4f6b79-dp7nj  Started  Started container storage-initializer
-  kserve-ci-e2e-test  kubelet  isvc-predictive-lightgbm-predictor-75cb94f9f-92wfl  Unhealthy  Readiness probe failed: Get "https://10.134.0.40:8643/healthz": dial tcp 10.134.0.40:8643: connect: connection refused
-  kserve-ci-e2e-test  kubelet  isvc-predictive-sklearn-v2-predictor-b5d4f6b79-dp7nj  Created  Created container: storage-initializer
-  kserve-ci-e2e-test  kubelet  isvc-predictive-sklearn-v2-predictor-b5d4f6b79-dp7nj  Started  Started container kube-rbac-proxy
-  kserve-ci-e2e-test  kubelet  isvc-predictive-sklearn-v2-predictor-b5d4f6b79-dp7nj  Created  Created container: kserve-container
-  kserve-ci-e2e-test  kubelet  isvc-predictive-sklearn-v2-predictor-b5d4f6b79-dp7nj  Pulled  Container image "kserve/predictiveserver:latest" already present on machine
-  kserve-ci-e2e-test  kubelet  isvc-predictive-sklearn-v2-predictor-b5d4f6b79-dp7nj  Created  Created container: kube-rbac-proxy
-  kserve-ci-e2e-test  kubelet  isvc-predictive-sklearn-v2-predictor-b5d4f6b79-dp7nj  Pulled  Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine
-  kserve-ci-e2e-test  kubelet  isvc-predictive-sklearn-v2-predictor-b5d4f6b79-dp7nj  Started  Started container kserve-container
(x2)  kserve-ci-e2e-test  horizontal-pod-autoscaler  isvc-predictive-sklearn-v2-predictor  FailedComputeMetricsReplicas  invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x2)  kserve-ci-e2e-test  horizontal-pod-autoscaler  isvc-predictive-sklearn-v2-predictor  FailedGetResourceMetric  failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
-  kserve-ci-e2e-test  v1beta1Controllers  isvc-predictive-sklearn-v2  InferenceServiceReady  InferenceService [isvc-predictive-sklearn-v2] is Ready
(x13)  kserve-ci-e2e-test  v1beta1Controllers  isvc-predictive-sklearn-v2  VirtualServiceCRDNotFound  Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.
-  kserve-ci-e2e-test  deployment-controller  isvc-predictive-xgboost-v2-predictor  ScalingReplicaSet  Scaled up replica set isvc-predictive-xgboost-v2-predictor-6577c65fd8 from 0 to 1
-  kserve-ci-e2e-test  kubelet  isvc-predictive-sklearn-v2-predictor-b5d4f6b79-dp7nj  Killing  Stopping container kserve-container
(x4)  kserve-ci-e2e-test  horizontal-pod-autoscaler  isvc-predictive-sklearn-v2-predictor  FailedComputeMetricsReplicas  invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
(x4)  kserve-ci-e2e-test  horizontal-pod-autoscaler  isvc-predictive-sklearn-v2-predictor  FailedGetResourceMetric  failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
-  kserve-ci-e2e-test  kubelet  isvc-predictive-sklearn-v2-predictor-b5d4f6b79-dp7nj  Killing  Stopping container kube-rbac-proxy
-  kserve-ci-e2e-test  kubelet  isvc-predictive-xgboost-v2-predictor-6577c65fd8-t5k85  Pulled  Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:0dd2798f9cdbffc0563bd148612201df0e589ff8c26a6b19d321fd120fc5c097" already present on machine
-  kserve-ci-e2e-test  replicaset-controller  isvc-predictive-xgboost-v2-predictor-6577c65fd8  SuccessfulCreate  Created pod: isvc-predictive-xgboost-v2-predictor-6577c65fd8-t5k85
-  kserve-ci-e2e-test  multus  isvc-predictive-xgboost-v2-predictor-6577c65fd8-t5k85  AddedInterface  Add eth0 [10.134.0.42/23] from ovn-kubernetes
-  kserve-ci-e2e-test  v1beta1Controllers  isvc-predictive-xgboost-v2  InternalError  fails to update InferenceService status: Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-predictive-xgboost-v2": the object has been modified; please apply your changes to the latest version and try again
-  kserve-ci-e2e-test  kubelet  isvc-predictive-xgboost-v2-predictor-6577c65fd8-t5k85  Started  Started container storage-initializer
-  kserve-ci-e2e-test  kubelet  isvc-predictive-xgboost-v2-predictor-6577c65fd8-t5k85  Created  Created container: storage-initializer
(x2)  kserve-ci-e2e-test  v1beta1Controllers  isvc-predictive-xgboost-v2  UpdateFailed  Failed to update status for InferenceService "isvc-predictive-xgboost-v2": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-predictive-xgboost-v2": the object has been modified; please apply your changes to the latest version and try again
-  kserve-ci-e2e-test  kubelet  isvc-predictive-sklearn-v2-predictor-b5d4f6b79-dp7nj  Unhealthy  Readiness probe failed: Get "https://10.134.0.41:8643/healthz": dial tcp 10.134.0.41:8643: connect: connection refused
(x5)  kserve-ci-e2e-test  kubelet  isvc-predictive-sklearn-v2-predictor-b5d4f6b79-dp7nj  Unhealthy  Readiness probe failed: Get "http://10.134.0.41:8080/v2/models/isvc-predictive-sklearn-v2/ready": dial tcp 10.134.0.41:8080: connect: connection refused
-  kserve-ci-e2e-test  kubelet  isvc-predictive-xgboost-v2-predictor-6577c65fd8-t5k85  Started  Started container kserve-container
-  kserve-ci-e2e-test  kubelet  isvc-predictive-xgboost-v2-predictor-6577c65fd8-t5k85  Pulled  Container image "kserve/predictiveserver:latest" already present on machine
-  kserve-ci-e2e-test  kubelet  isvc-predictive-xgboost-v2-predictor-6577c65fd8-t5k85  Created  Created container: kserve-container
-  kserve-ci-e2e-test  kubelet  isvc-predictive-xgboost-v2-predictor-6577c65fd8-t5k85  Started  Started container kube-rbac-proxy
-  kserve-ci-e2e-test  kubelet  isvc-predictive-xgboost-v2-predictor-6577c65fd8-t5k85  Created  Created container: kube-rbac-proxy
-  kserve-ci-e2e-test  kubelet  isvc-predictive-xgboost-v2-predictor-6577c65fd8-t5k85  Pulled  Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine
(x2)  kserve-ci-e2e-test  horizontal-pod-autoscaler  isvc-predictive-xgboost-v2-predictor  FailedGetResourceMetric  failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x2)  kserve-ci-e2e-test  horizontal-pod-autoscaler  isvc-predictive-xgboost-v2-predictor  FailedComputeMetricsReplicas  invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
-  kserve-ci-e2e-test  v1beta1Controllers  isvc-predictive-xgboost-v2  InferenceServiceReady  InferenceService [isvc-predictive-xgboost-v2] is Ready
(x12)  kserve-ci-e2e-test  v1beta1Controllers  isvc-predictive-xgboost-v2  VirtualServiceCRDNotFound  Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.
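The recurring VirtualServiceCRDNotFound events are benign when Istio is not installed; the event message itself points at KServe's `ingress.disableIstioVirtualHost` setting, which silences the reconciler. In an upstream KServe install that flag lives in the `ingress` key of the `inferenceservice-config` ConfigMap; a sketch assuming upstream defaults (the ConfigMap name, namespace, and surrounding keys may differ in an ODH/OpenShift AI deployment):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: inferenceservice-config
  namespace: kserve
data:
  ingress: |-
    {
      "disableIstioVirtualHost": true
    }
```

The value of `ingress` is a JSON document, so this fragment would be merged with whatever other ingress settings the deployment already carries rather than applied as-is.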

-  kserve-ci-e2e-test  replicaset-controller  isvc-predictive-lightgbm-v2-predictor-865b4598f7  SuccessfulCreate  Created pod: isvc-predictive-lightgbm-v2-predictor-865b4598f7-qtvnj
-  kserve-ci-e2e-test  deployment-controller  isvc-predictive-lightgbm-v2-predictor  ScalingReplicaSet  Scaled up replica set isvc-predictive-lightgbm-v2-predictor-865b4598f7 from 0 to 1
-  kserve-ci-e2e-test  kubelet  isvc-predictive-lightgbm-v2-predictor-865b4598f7-qtvnj  Started  Started container storage-initializer
-  kserve-ci-e2e-test  kubelet  isvc-predictive-xgboost-v2-predictor-6577c65fd8-t5k85  Killing  Stopping container kserve-container
-  kserve-ci-e2e-test  kubelet  isvc-predictive-xgboost-v2-predictor-6577c65fd8-t5k85  Killing  Stopping container kube-rbac-proxy
-  kserve-ci-e2e-test  kubelet  isvc-predictive-lightgbm-v2-predictor-865b4598f7-qtvnj  Created  Created container: storage-initializer
-  kserve-ci-e2e-test  v1beta1Controllers  isvc-predictive-lightgbm-v2  InternalError  fails to update InferenceService status: Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-predictive-lightgbm-v2": the object has been modified; please apply your changes to the latest version and try again
(x2)  kserve-ci-e2e-test  v1beta1Controllers  isvc-predictive-lightgbm-v2  UpdateFailed  Failed to update status for InferenceService "isvc-predictive-lightgbm-v2": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-predictive-lightgbm-v2": the object has been modified; please apply your changes to the latest version and try again
-  kserve-ci-e2e-test  kubelet  isvc-predictive-lightgbm-v2-predictor-865b4598f7-qtvnj  Pulled  Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:0dd2798f9cdbffc0563bd148612201df0e589ff8c26a6b19d321fd120fc5c097" already present on machine
(x4)  kserve-ci-e2e-test  horizontal-pod-autoscaler  isvc-predictive-xgboost-v2-predictor  FailedComputeMetricsReplicas  invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
-  kserve-ci-e2e-test  multus  isvc-predictive-lightgbm-v2-predictor-865b4598f7-qtvnj  AddedInterface  Add eth0 [10.134.0.43/23] from ovn-kubernetes
(x4)  kserve-ci-e2e-test  horizontal-pod-autoscaler  isvc-predictive-xgboost-v2-predictor  FailedGetResourceMetric  failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
-  kserve-ci-e2e-test  kubelet  isvc-predictive-xgboost-v2-predictor-6577c65fd8-t5k85  Unhealthy  Readiness probe failed: Get "https://10.134.0.42:8643/healthz": dial tcp 10.134.0.42:8643: connect: connection refused
(x5)  kserve-ci-e2e-test  kubelet  isvc-predictive-xgboost-v2-predictor-6577c65fd8-t5k85  Unhealthy  Readiness probe failed: Get "http://10.134.0.42:8080/v2/models/isvc-predictive-xgboost-v2/ready": dial tcp 10.134.0.42:8080: connect: connection refused
-  kserve-ci-e2e-test  kubelet  isvc-predictive-lightgbm-v2-predictor-865b4598f7-qtvnj  Started  Started container kube-rbac-proxy
-  kserve-ci-e2e-test  kubelet  isvc-predictive-lightgbm-v2-predictor-865b4598f7-qtvnj  Created  Created container: kube-rbac-proxy
-  kserve-ci-e2e-test  kubelet  isvc-predictive-lightgbm-v2-predictor-865b4598f7-qtvnj  Started  Started container kserve-container
-  kserve-ci-e2e-test  kubelet  isvc-predictive-lightgbm-v2-predictor-865b4598f7-qtvnj  Pulled  Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine
-  kserve-ci-e2e-test  kubelet  isvc-predictive-lightgbm-v2-predictor-865b4598f7-qtvnj  Pulled  Container image "kserve/predictiveserver:latest" already present on machine
-  kserve-ci-e2e-test  kubelet  isvc-predictive-lightgbm-v2-predictor-865b4598f7-qtvnj  Created  Created container: kserve-container
(x3)  kserve-ci-e2e-test  horizontal-pod-autoscaler  isvc-predictive-lightgbm-v2-predictor  FailedGetResourceMetric  failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x3)  kserve-ci-e2e-test  horizontal-pod-autoscaler  isvc-predictive-lightgbm-v2-predictor  FailedComputeMetricsReplicas  invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x12)  kserve-ci-e2e-test  v1beta1Controllers  isvc-predictive-lightgbm-v2  VirtualServiceCRDNotFound  Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.
-  kserve-ci-e2e-test  v1beta1Controllers  isvc-predictive-lightgbm-v2  InferenceServiceReady  InferenceService [isvc-predictive-lightgbm-v2] is Ready
-  kserve-ci-e2e-test  kubelet  isvc-predictive-lightgbm-v2-predictor-865b4598f7-qtvnj  Killing  Stopping container kserve-container
(x3)  kserve-ci-e2e-test  horizontal-pod-autoscaler  isvc-predictive-lightgbm-v2-predictor  FailedComputeMetricsReplicas  invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
(x5)  kserve-ci-e2e-test  v1beta1Controllers  isvc-sklearn-scheduler  VirtualServiceCRDNotFound  Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.
-  kserve-ci-e2e-test  deployment-controller  isvc-sklearn-scheduler-predictor  ScalingReplicaSet  Scaled up replica set isvc-sklearn-scheduler-predictor-d66cfffd6 from 0 to 1
-  kserve-ci-e2e-test  replicaset-controller  isvc-sklearn-scheduler-predictor-d66cfffd6  SuccessfulCreate  Created pod: isvc-sklearn-scheduler-predictor-d66cfffd6-k5blx
-  kserve-ci-e2e-test  v1beta1Controllers  isvc-sklearn-scheduler  UpdateFailed  Failed to update status for InferenceService "isvc-sklearn-scheduler": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-sklearn-scheduler": the object has been modified; please apply your changes to the latest version and try again
-  kserve-ci-e2e-test  v1beta1Controllers  isvc-sklearn-scheduler  InternalError  fails to update InferenceService status: Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-sklearn-scheduler": the object has been modified; please apply your changes to the latest version and try again
(x3)  kserve-ci-e2e-test  horizontal-pod-autoscaler  isvc-predictive-lightgbm-v2-predictor  FailedGetResourceMetric  failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
-  kserve-ci-e2e-test  kubelet  isvc-predictive-lightgbm-v2-predictor-865b4598f7-qtvnj  Killing  Stopping container kube-rbac-proxy
-  kserve-ci-e2e-test  kubelet  isvc-predictive-lightgbm-v2-predictor-865b4598f7-qtvnj  Unhealthy  Readiness probe failed: Get "https://10.134.0.43:8643/healthz": dial tcp 10.134.0.43:8643: connect: connection refused
(x5)  kserve-ci-e2e-test  kubelet  isvc-predictive-lightgbm-v2-predictor-865b4598f7-qtvnj  Unhealthy  Readiness probe failed: Get "http://10.134.0.43:8080/v2/models/isvc-predictive-lightgbm-v2/ready": dial tcp 10.134.0.43:8080: connect: connection refused
-  kserve-ci-e2e-test  deployment-controller  isvc-sklearn-predictor  ScalingReplicaSet  Scaled up replica set isvc-sklearn-predictor-77f5c96b44 from 0 to 1
-  kserve-ci-e2e-test  replicaset-controller  isvc-sklearn-predictor-77f5c96b44  SuccessfulCreate  Created pod: isvc-sklearn-predictor-77f5c96b44-5d9wb
-  kserve-ci-e2e-test  v1beta1Controllers  isvc-sklearn  UpdateFailed  Failed to update status for InferenceService "isvc-sklearn": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-sklearn": the object has been modified; please apply your changes to the latest version and try again
-  kserve-ci-e2e-test  kubelet  isvc-sklearn-predictor-77f5c96b44-5d9wb  FailedMount  MountVolume.SetUp failed for volume "proxy-tls" : secret "isvc-sklearn-predictor-serving-cert" not found
-  kserve-ci-e2e-test  kubelet  isvc-sklearn-predictor-77f5c96b44-5d9wb  Started  Started container storage-initializer
-  kserve-ci-e2e-test  kubelet  isvc-sklearn-predictor-77f5c96b44-5d9wb  Created  Created container: storage-initializer
-  kserve-ci-e2e-test  multus  isvc-sklearn-predictor-77f5c96b44-5d9wb  AddedInterface  Add eth0 [10.134.0.44/23] from ovn-kubernetes
-  kserve-ci-e2e-test  kubelet  isvc-sklearn-predictor-77f5c96b44-5d9wb  Pulled  Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:0dd2798f9cdbffc0563bd148612201df0e589ff8c26a6b19d321fd120fc5c097" already present on machine
-  kserve-ci-e2e-test  kubelet  isvc-sklearn-predictor-77f5c96b44-5d9wb  Started  Started container kserve-container
-  kserve-ci-e2e-test  kubelet  isvc-sklearn-predictor-77f5c96b44-5d9wb  Created  Created container: kserve-container
-  kserve-ci-e2e-test  kubelet  isvc-sklearn-predictor-77f5c96b44-5d9wb  Created  Created container: kube-rbac-proxy
-  kserve-ci-e2e-test  kubelet  isvc-sklearn-predictor-77f5c96b44-5d9wb  Pulled  Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine
-  kserve-ci-e2e-test  kubelet  isvc-sklearn-predictor-77f5c96b44-5d9wb  Pulled  Container image "quay.io/opendatahub/sklearn-serving-runtime:odh-pr-1450" already present on machine
-  kserve-ci-e2e-test  kubelet  isvc-sklearn-predictor-77f5c96b44-5d9wb  Started  Started container kube-rbac-proxy
-  kserve-ci-e2e-test  horizontal-pod-autoscaler  isvc-sklearn-predictor  FailedGetResourceMetric  failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
-  kserve-ci-e2e-test  horizontal-pod-autoscaler  isvc-sklearn-predictor  FailedComputeMetricsReplicas  invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
-  kserve-ci-e2e-test  v1beta1Controllers  isvc-sklearn  InferenceServiceReady  InferenceService [isvc-sklearn] is Ready
(x12)  kserve-ci-e2e-test  v1beta1Controllers  isvc-sklearn  VirtualServiceCRDNotFound  Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.
(x4)  kserve-ci-e2e-test  horizontal-pod-autoscaler  isvc-sklearn-predictor  FailedComputeMetricsReplicas  invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
(x4)  kserve-ci-e2e-test  horizontal-pod-autoscaler  isvc-sklearn-predictor  FailedGetResourceMetric  failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
-  kserve-ci-e2e-test  v1beta1Controllers  sklearn-v2-mlserver  UpdateFailed  Failed to update status for InferenceService "sklearn-v2-mlserver": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "sklearn-v2-mlserver": the object has been modified; please apply your changes to the latest version and try again
-  kserve-ci-e2e-test  deployment-controller  sklearn-v2-mlserver-predictor  ScalingReplicaSet  Scaled up replica set sklearn-v2-mlserver-predictor-65d8664766 from 0 to 1
-  kserve-ci-e2e-test  replicaset-controller  sklearn-v2-mlserver-predictor-65d8664766  SuccessfulCreate  Created pod: sklearn-v2-mlserver-predictor-65d8664766-746bp
-  kserve-ci-e2e-test  kubelet  isvc-sklearn-predictor-77f5c96b44-5d9wb  Killing  Stopping container kserve-container
-  kserve-ci-e2e-test  kubelet  isvc-sklearn-predictor-77f5c96b44-5d9wb  Killing  Stopping container kube-rbac-proxy
-  kserve-ci-e2e-test  kubelet  sklearn-v2-mlserver-predictor-65d8664766-746bp  Created  Created container: storage-initializer
-  kserve-ci-e2e-test  multus  sklearn-v2-mlserver-predictor-65d8664766-746bp  AddedInterface  Add eth0 [10.134.0.45/23] from ovn-kubernetes
-  kserve-ci-e2e-test  kubelet  sklearn-v2-mlserver-predictor-65d8664766-746bp  Pulled  Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:0dd2798f9cdbffc0563bd148612201df0e589ff8c26a6b19d321fd120fc5c097" already present on machine
-  kserve-ci-e2e-test  kubelet  sklearn-v2-mlserver-predictor-65d8664766-746bp  Started  Started container storage-initializer
-  kserve-ci-e2e-test  kubelet  isvc-sklearn-predictor-77f5c96b44-5d9wb  Unhealthy  Readiness probe failed: Get "https://10.134.0.44:8643/healthz": dial tcp 10.134.0.44:8643: connect: connection refused
(x9)  kserve-ci-e2e-test  kubelet  isvc-sklearn-predictor-77f5c96b44-5d9wb  Unhealthy  Readiness probe failed: dial tcp 10.134.0.44:8080: connect: connection refused
-  kserve-ci-e2e-test  kubelet  sklearn-v2-mlserver-predictor-65d8664766-746bp  Pulled  Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine
-  kserve-ci-e2e-test  kubelet  sklearn-v2-mlserver-predictor-65d8664766-746bp  Started  Started container kube-rbac-proxy
-  kserve-ci-e2e-test  kubelet  sklearn-v2-mlserver-predictor-65d8664766-746bp  Pulled  Container image "docker.io/seldonio/mlserver:1.7.1" already present on machine
-  kserve-ci-e2e-test  kubelet  sklearn-v2-mlserver-predictor-65d8664766-746bp  Created  Created container: kube-rbac-proxy
-  kserve-ci-e2e-test  kubelet  sklearn-v2-mlserver-predictor-65d8664766-746bp  Created  Created container: kserve-container
-  kserve-ci-e2e-test  kubelet  sklearn-v2-mlserver-predictor-65d8664766-746bp  Started  Started container kserve-container
-  kserve-ci-e2e-test  kubelet  sklearn-v2-mlserver-predictor-65d8664766-746bp  Unhealthy  Readiness probe failed: HTTP probe failed with statuscode: 400
(x2)  kserve-ci-e2e-test  horizontal-pod-autoscaler  sklearn-v2-mlserver-predictor  FailedComputeMetricsReplicas  invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x2)  kserve-ci-e2e-test  horizontal-pod-autoscaler  sklearn-v2-mlserver-predictor  FailedGetResourceMetric  failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
-  kserve-ci-e2e-test  v1beta1Controllers  sklearn-v2-mlserver  InferenceServiceReady  InferenceService [sklearn-v2-mlserver] is Ready
(x11)  kserve-ci-e2e-test  v1beta1Controllers  sklearn-v2-mlserver  VirtualServiceCRDNotFound  Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.
(x2)

kserve-ci-e2e-test

horizontal-pod-autoscaler

sklearn-v2-mlserver-predictor

FailedGetResourceMetric

failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)

kserve-ci-e2e-test

kubelet

sklearn-v2-mlserver-predictor-65d8664766-746bp

Killing

Stopping container kube-rbac-proxy

kserve-ci-e2e-test

kubelet

sklearn-v2-mlserver-predictor-65d8664766-746bp

Killing

Stopping container kserve-container
(x2)

kserve-ci-e2e-test

horizontal-pod-autoscaler

sklearn-v2-mlserver-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)

kserve-ci-e2e-test

replicaset-controller

isvc-sklearn-runtime-predictor-7b5dc59794

SuccessfulCreate

Created pod: isvc-sklearn-runtime-predictor-7b5dc59794-c5rz5

kserve-ci-e2e-test

v1beta1Controllers

isvc-sklearn-runtime

UpdateFailed

Failed to update status for InferenceService "isvc-sklearn-runtime": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-sklearn-runtime": the object has been modified; please apply your changes to the latest version and try again

kserve-ci-e2e-test

deployment-controller

isvc-sklearn-runtime-predictor

ScalingReplicaSet

Scaled up replica set isvc-sklearn-runtime-predictor-7b5dc59794 from 0 to 1

kserve-ci-e2e-test

kubelet

isvc-sklearn-runtime-predictor-7b5dc59794-c5rz5

Started

Started container storage-initializer

kserve-ci-e2e-test

kubelet

isvc-sklearn-runtime-predictor-7b5dc59794-c5rz5

Pulled

Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:0dd2798f9cdbffc0563bd148612201df0e589ff8c26a6b19d321fd120fc5c097" already present on machine

kserve-ci-e2e-test

kubelet

isvc-sklearn-runtime-predictor-7b5dc59794-c5rz5

Created

Created container: storage-initializer

kserve-ci-e2e-test

multus

isvc-sklearn-runtime-predictor-7b5dc59794-c5rz5

AddedInterface

Add eth0 [10.134.0.46/23] from ovn-kubernetes

kserve-ci-e2e-test

kubelet

sklearn-v2-mlserver-predictor-65d8664766-746bp

Unhealthy

Readiness probe failed: Get "http://10.134.0.45:8080/v2/models/sklearn-v2-mlserver/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers)

kserve-ci-e2e-test

kubelet

isvc-sklearn-runtime-predictor-7b5dc59794-c5rz5

Pulled

Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine

kserve-ci-e2e-test

kubelet

isvc-sklearn-runtime-predictor-7b5dc59794-c5rz5

Pulled

Container image "quay.io/opendatahub/sklearn-serving-runtime:odh-pr-1450" already present on machine

kserve-ci-e2e-test

kubelet

isvc-sklearn-runtime-predictor-7b5dc59794-c5rz5

Created

Created container: kserve-container

kserve-ci-e2e-test

kubelet

isvc-sklearn-runtime-predictor-7b5dc59794-c5rz5

Started

Started container kserve-container

kserve-ci-e2e-test

kubelet

isvc-sklearn-runtime-predictor-7b5dc59794-c5rz5

Started

Started container kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-sklearn-runtime-predictor-7b5dc59794-c5rz5

Created

Created container: kube-rbac-proxy
(x2)

kserve-ci-e2e-test

kubelet

sklearn-v2-mlserver-predictor-65d8664766-746bp

Unhealthy

Readiness probe failed: Get "https://10.134.0.45:8643/healthz": dial tcp 10.134.0.45:8643: connect: connection refused
(x3)

kserve-ci-e2e-test

kubelet

isvc-sklearn-runtime-predictor-7b5dc59794-c5rz5

Unhealthy

Readiness probe failed: dial tcp 10.134.0.46:8080: connect: connection refused

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-sklearn-runtime-predictor

FailedGetResourceMetric

failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-sklearn-runtime-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API

kserve-ci-e2e-test

v1beta1Controllers

isvc-sklearn-runtime

InferenceServiceReady

InferenceService [isvc-sklearn-runtime] is Ready
(x12)

kserve-ci-e2e-test

v1beta1Controllers

isvc-sklearn-runtime

VirtualServiceCRDNotFound

Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-sklearn-runtime-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-sklearn-runtime-predictor

FailedGetResourceMetric

failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)

kserve-ci-e2e-test

v1beta1Controllers

isvc-sklearn-v2-runtime

InternalError

fails to update InferenceService status: Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-sklearn-v2-runtime": the object has been modified; please apply your changes to the latest version and try again

kserve-ci-e2e-test

kubelet

isvc-sklearn-v2-runtime-predictor-6d84c876f4-884px

Created

Created container: storage-initializer

kserve-ci-e2e-test

kubelet

isvc-sklearn-v2-runtime-predictor-6d84c876f4-884px

Started

Started container storage-initializer

kserve-ci-e2e-test

deployment-controller

isvc-sklearn-v2-runtime-predictor

ScalingReplicaSet

Scaled up replica set isvc-sklearn-v2-runtime-predictor-6d84c876f4 from 0 to 1

kserve-ci-e2e-test

replicaset-controller

isvc-sklearn-v2-runtime-predictor-6d84c876f4

SuccessfulCreate

Created pod: isvc-sklearn-v2-runtime-predictor-6d84c876f4-884px

kserve-ci-e2e-test

kubelet

isvc-sklearn-runtime-predictor-7b5dc59794-c5rz5

Killing

Stopping container kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-sklearn-runtime-predictor-7b5dc59794-c5rz5

Killing

Stopping container kserve-container

kserve-ci-e2e-test

multus

isvc-sklearn-v2-runtime-predictor-6d84c876f4-884px

AddedInterface

Add eth0 [10.134.0.47/23] from ovn-kubernetes
(x2)

kserve-ci-e2e-test

v1beta1Controllers

isvc-sklearn-v2-runtime

UpdateFailed

Failed to update status for InferenceService "isvc-sklearn-v2-runtime": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-sklearn-v2-runtime": the object has been modified; please apply your changes to the latest version and try again

kserve-ci-e2e-test

kubelet

isvc-sklearn-v2-runtime-predictor-6d84c876f4-884px

Pulled

Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:0dd2798f9cdbffc0563bd148612201df0e589ff8c26a6b19d321fd120fc5c097" already present on machine

kserve-ci-e2e-test

kubelet

isvc-sklearn-v2-runtime-predictor-6d84c876f4-884px

Pulled

Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine

kserve-ci-e2e-test

kubelet

isvc-sklearn-v2-runtime-predictor-6d84c876f4-884px

Created

Created container: kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-sklearn-v2-runtime-predictor-6d84c876f4-884px

Started

Started container kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-sklearn-v2-runtime-predictor-6d84c876f4-884px

Started

Started container kserve-container

kserve-ci-e2e-test

kubelet

isvc-sklearn-v2-runtime-predictor-6d84c876f4-884px

Created

Created container: kserve-container

kserve-ci-e2e-test

kubelet

isvc-sklearn-v2-runtime-predictor-6d84c876f4-884px

Pulled

Container image "docker.io/seldonio/mlserver:1.7.1" already present on machine
(x2)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-sklearn-v2-runtime-predictor

FailedGetResourceMetric

failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x2)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-sklearn-v2-runtime-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API

kserve-ci-e2e-test

kubelet

isvc-sklearn-v2-runtime-predictor-6d84c876f4-884px

Unhealthy

Readiness probe failed: HTTP probe failed with statuscode: 400

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-sklearn-v2-runtime-predictor

FailedGetResourceMetric

failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-sklearn-v2-runtime-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
(x12)

kserve-ci-e2e-test

v1beta1Controllers

isvc-sklearn-v2-runtime

VirtualServiceCRDNotFound

Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.

kserve-ci-e2e-test

v1beta1Controllers

isvc-sklearn-v2-runtime

InferenceServiceReady

InferenceService [isvc-sklearn-v2-runtime] is Ready

kserve-ci-e2e-test

kubelet

isvc-sklearn-v2-runtime-predictor-6d84c876f4-884px

Killing

Stopping container kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-sklearn-v2-runtime-predictor-6d84c876f4-884px

Killing

Stopping container kserve-container

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-sklearn-v2-runtime-predictor

FailedGetScale

deployments.apps "isvc-sklearn-v2-runtime-predictor" not found

kserve-ci-e2e-test

deployment-controller

isvc-sklearn-v2-predictor

ScalingReplicaSet

Scaled up replica set isvc-sklearn-v2-predictor-7c9dd679db from 0 to 1

kserve-ci-e2e-test

replicaset-controller

isvc-sklearn-v2-predictor-7c9dd679db

SuccessfulCreate

Created pod: isvc-sklearn-v2-predictor-7c9dd679db-q64wc

kserve-ci-e2e-test

kubelet

isvc-sklearn-v2-predictor-7c9dd679db-q64wc

FailedMount

MountVolume.SetUp failed for volume "proxy-tls" : secret "isvc-sklearn-v2-predictor-serving-cert" not found

kserve-ci-e2e-test

v1beta1Controllers

isvc-sklearn-v2

UpdateFailed

Failed to update status for InferenceService "isvc-sklearn-v2": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-sklearn-v2": the object has been modified; please apply your changes to the latest version and try again

kserve-ci-e2e-test

kubelet

isvc-sklearn-v2-predictor-7c9dd679db-q64wc

Started

Started container storage-initializer

kserve-ci-e2e-test

multus

isvc-sklearn-v2-predictor-7c9dd679db-q64wc

AddedInterface

Add eth0 [10.134.0.48/23] from ovn-kubernetes

kserve-ci-e2e-test

kubelet

isvc-sklearn-v2-predictor-7c9dd679db-q64wc

Pulled

Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:0dd2798f9cdbffc0563bd148612201df0e589ff8c26a6b19d321fd120fc5c097" already present on machine

kserve-ci-e2e-test

kubelet

isvc-sklearn-v2-predictor-7c9dd679db-q64wc

Created

Created container: storage-initializer

kserve-ci-e2e-test

kubelet

isvc-sklearn-v2-runtime-predictor-6d84c876f4-884px

Unhealthy

Readiness probe failed: Get "http://10.134.0.47:8080/v2/models/isvc-sklearn-v2-runtime/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers)

kserve-ci-e2e-test

kubelet

isvc-sklearn-v2-predictor-7c9dd679db-q64wc

Pulled

Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine

kserve-ci-e2e-test

kubelet

isvc-sklearn-v2-predictor-7c9dd679db-q64wc

Started

Started container kserve-container

kserve-ci-e2e-test

kubelet

isvc-sklearn-v2-predictor-7c9dd679db-q64wc

Created

Created container: kserve-container

kserve-ci-e2e-test

kubelet

isvc-sklearn-v2-predictor-7c9dd679db-q64wc

Pulled

Container image "quay.io/opendatahub/sklearn-serving-runtime:odh-pr-1450" already present on machine

kserve-ci-e2e-test

kubelet

isvc-sklearn-v2-predictor-7c9dd679db-q64wc

Started

Started container kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-sklearn-v2-predictor-7c9dd679db-q64wc

Created

Created container: kube-rbac-proxy
(x2)

kserve-ci-e2e-test

kubelet

isvc-sklearn-v2-runtime-predictor-6d84c876f4-884px

Unhealthy

Readiness probe failed: Get "https://10.134.0.47:8643/healthz": dial tcp 10.134.0.47:8643: connect: connection refused
(x2)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-sklearn-v2-predictor

FailedGetResourceMetric

failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x2)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-sklearn-v2-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API

kserve-ci-e2e-test

v1beta1Controllers

isvc-sklearn-v2

InferenceServiceReady

InferenceService [isvc-sklearn-v2] is Ready
(x12)

kserve-ci-e2e-test

v1beta1Controllers

isvc-sklearn-v2

VirtualServiceCRDNotFound

Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.
(x3)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-sklearn-v2-predictor

FailedGetResourceMetric

failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
(x3)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-sklearn-v2-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)

kserve-ci-e2e-test

kubelet

isvc-sklearn-v2-predictor-7c9dd679db-q64wc

Killing

Stopping container kube-rbac-proxy

kserve-ci-e2e-test

replicaset-controller

isvc-sklearn-v2-mixed-predictor-5d8dfb54c

SuccessfulCreate

Created pod: isvc-sklearn-v2-mixed-predictor-5d8dfb54c-n42jg

kserve-ci-e2e-test

deployment-controller

isvc-sklearn-v2-mixed-predictor

ScalingReplicaSet

Scaled up replica set isvc-sklearn-v2-mixed-predictor-5d8dfb54c from 0 to 1

kserve-ci-e2e-test

v1beta1Controllers

isvc-sklearn-v2-mixed

UpdateFailed

Failed to update status for InferenceService "isvc-sklearn-v2-mixed": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-sklearn-v2-mixed": the object has been modified; please apply your changes to the latest version and try again

kserve-ci-e2e-test

kubelet

isvc-sklearn-v2-mixed-predictor-5d8dfb54c-n42jg

FailedMount

MountVolume.SetUp failed for volume "proxy-tls" : secret "isvc-sklearn-v2-mixed-predictor-serving-cert" not found

kserve-ci-e2e-test

kubelet

isvc-sklearn-v2-predictor-7c9dd679db-q64wc

Killing

Stopping container kserve-container

kserve-ci-e2e-test

multus

isvc-sklearn-v2-mixed-predictor-5d8dfb54c-n42jg

AddedInterface

Add eth0 [10.134.0.49/23] from ovn-kubernetes

kserve-ci-e2e-test

kubelet

isvc-sklearn-v2-mixed-predictor-5d8dfb54c-n42jg

Pulled

Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:0dd2798f9cdbffc0563bd148612201df0e589ff8c26a6b19d321fd120fc5c097" already present on machine

kserve-ci-e2e-test

kubelet

isvc-sklearn-v2-mixed-predictor-5d8dfb54c-n42jg

Created

Created container: storage-initializer

kserve-ci-e2e-test

kubelet

isvc-sklearn-v2-mixed-predictor-5d8dfb54c-n42jg

Started

Started container storage-initializer

kserve-ci-e2e-test

kubelet

isvc-sklearn-v2-predictor-7c9dd679db-q64wc

Unhealthy

Readiness probe failed: Get "https://10.134.0.48:8643/healthz": dial tcp 10.134.0.48:8643: connect: connection refused
(x9)

kserve-ci-e2e-test

kubelet

isvc-sklearn-v2-predictor-7c9dd679db-q64wc

Unhealthy

Readiness probe failed: dial tcp 10.134.0.48:8080: connect: connection refused

kserve-ci-e2e-test

kubelet

isvc-sklearn-v2-mixed-predictor-5d8dfb54c-n42jg

Created

Created container: kserve-container

kserve-ci-e2e-test

kubelet

isvc-sklearn-v2-mixed-predictor-5d8dfb54c-n42jg

Pulled

Container image "quay.io/opendatahub/sklearn-serving-runtime:odh-pr-1450" already present on machine

kserve-ci-e2e-test

kubelet

isvc-sklearn-v2-mixed-predictor-5d8dfb54c-n42jg

Started

Started container kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-sklearn-v2-mixed-predictor-5d8dfb54c-n42jg

Created

Created container: kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-sklearn-v2-mixed-predictor-5d8dfb54c-n42jg

Pulled

Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine

kserve-ci-e2e-test

kubelet

isvc-sklearn-v2-mixed-predictor-5d8dfb54c-n42jg

Started

Started container kserve-container

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-sklearn-v2-mixed-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-sklearn-v2-mixed-predictor

FailedGetResourceMetric

failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API

kserve-ci-e2e-test

v1beta1Controllers

isvc-sklearn-v2-mixed

InferenceServiceReady

InferenceService [isvc-sklearn-v2-mixed] is Ready
(x13)

kserve-ci-e2e-test

v1beta1Controllers

isvc-sklearn-v2-mixed

VirtualServiceCRDNotFound

Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.
(x4)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-sklearn-v2-mixed-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
(x4)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-sklearn-v2-mixed-predictor

FailedGetResourceMetric

failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)

kserve-ci-e2e-test

deployment-controller

isvc-tensorflow-predictor

ScalingReplicaSet

Scaled up replica set isvc-tensorflow-predictor-6756f669d7 from 0 to 1

kserve-ci-e2e-test

replicaset-controller

isvc-tensorflow-predictor-6756f669d7

SuccessfulCreate

Created pod: isvc-tensorflow-predictor-6756f669d7-24gzd

kserve-ci-e2e-test

v1beta1Controllers

isvc-tensorflow

UpdateFailed

Failed to update status for InferenceService "isvc-tensorflow": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-tensorflow": the object has been modified; please apply your changes to the latest version and try again

kserve-ci-e2e-test

kubelet

isvc-sklearn-v2-mixed-predictor-5d8dfb54c-n42jg

Killing

Stopping container kserve-container

kserve-ci-e2e-test

kubelet

isvc-sklearn-v2-mixed-predictor-5d8dfb54c-n42jg

Killing

Stopping container kube-rbac-proxy

kserve-ci-e2e-test

multus

isvc-tensorflow-predictor-6756f669d7-24gzd

AddedInterface

Add eth0 [10.134.0.50/23] from ovn-kubernetes

kserve-ci-e2e-test

kubelet

isvc-tensorflow-predictor-6756f669d7-24gzd

Started

Started container storage-initializer

kserve-ci-e2e-test

kubelet

isvc-tensorflow-predictor-6756f669d7-24gzd

Created

Created container: storage-initializer

kserve-ci-e2e-test

kubelet

isvc-tensorflow-predictor-6756f669d7-24gzd

Pulled

Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:0dd2798f9cdbffc0563bd148612201df0e589ff8c26a6b19d321fd120fc5c097" already present on machine

kserve-ci-e2e-test

kubelet

isvc-sklearn-v2-mixed-predictor-5d8dfb54c-n42jg

Unhealthy

Readiness probe failed: Get "https://10.134.0.49:8643/healthz": dial tcp 10.134.0.49:8643: connect: connection refused
(x9)

kserve-ci-e2e-test

kubelet

isvc-sklearn-v2-mixed-predictor-5d8dfb54c-n42jg

Unhealthy

Readiness probe failed: dial tcp 10.134.0.49:8080: connect: connection refused

kserve-ci-e2e-test

kubelet

isvc-tensorflow-predictor-6756f669d7-24gzd

Pulling

Pulling image "tensorflow/serving:2.6.2"

kserve-ci-e2e-test

kubelet

isvc-tensorflow-predictor-6756f669d7-24gzd

Started

Started container kserve-container

kserve-ci-e2e-test

kubelet

isvc-tensorflow-predictor-6756f669d7-24gzd

Pulled

Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine

kserve-ci-e2e-test

kubelet

isvc-tensorflow-predictor-6756f669d7-24gzd

Pulled

Successfully pulled image "tensorflow/serving:2.6.2" in 3.699s (3.699s including waiting). Image size: 425873876 bytes.

kserve-ci-e2e-test

kubelet

isvc-tensorflow-predictor-6756f669d7-24gzd

Created

Created container: kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-tensorflow-predictor-6756f669d7-24gzd

Started

Started container kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-tensorflow-predictor-6756f669d7-24gzd

Created

Created container: kserve-container
(x3)

kserve-ci-e2e-test

kubelet

isvc-tensorflow-predictor-6756f669d7-24gzd

Unhealthy

Readiness probe failed: dial tcp 10.134.0.50:8080: connect: connection refused
(x12)

kserve-ci-e2e-test

v1beta1Controllers

isvc-tensorflow

VirtualServiceCRDNotFound

Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.

kserve-ci-e2e-test

v1beta1Controllers

isvc-tensorflow

InferenceServiceReady

InferenceService [isvc-tensorflow] is Ready
(x2)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-tensorflow-predictor

FailedGetResourceMetric

failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x2)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-tensorflow-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x2)

kserve-ci-e2e-test

v1beta1Controllers

isvc-tensorflow-runtime

UpdateFailed

Failed to update status for InferenceService "isvc-tensorflow-runtime": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-tensorflow-runtime": the object has been modified; please apply your changes to the latest version and try again

kserve-ci-e2e-test

kubelet

isvc-tensorflow-runtime-predictor-8699d78cf-jb9rm

Pulled

Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:0dd2798f9cdbffc0563bd148612201df0e589ff8c26a6b19d321fd120fc5c097" already present on machine

kserve-ci-e2e-test

kubelet

isvc-tensorflow-predictor-6756f669d7-24gzd

Killing

Stopping container kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-tensorflow-predictor-6756f669d7-24gzd

Killing

Stopping container kserve-container

kserve-ci-e2e-test

multus

isvc-tensorflow-runtime-predictor-8699d78cf-jb9rm

AddedInterface

Add eth0 [10.134.0.51/23] from ovn-kubernetes

kserve-ci-e2e-test

v1beta1Controllers

isvc-tensorflow-runtime

InternalError

fails to update InferenceService status: Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-tensorflow-runtime": the object has been modified; please apply your changes to the latest version and try again

kserve-ci-e2e-test

replicaset-controller

isvc-tensorflow-runtime-predictor-8699d78cf

SuccessfulCreate

Created pod: isvc-tensorflow-runtime-predictor-8699d78cf-jb9rm

kserve-ci-e2e-test

deployment-controller

isvc-tensorflow-runtime-predictor

ScalingReplicaSet

Scaled up replica set isvc-tensorflow-runtime-predictor-8699d78cf from 0 to 1

kserve-ci-e2e-test

kubelet

isvc-tensorflow-runtime-predictor-8699d78cf-jb9rm

Created

Created container: storage-initializer

kserve-ci-e2e-test

kubelet

isvc-tensorflow-runtime-predictor-8699d78cf-jb9rm

Started

Started container storage-initializer

kserve-ci-e2e-test

kubelet

isvc-tensorflow-runtime-predictor-8699d78cf-jb9rm

Started

Started container kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-tensorflow-runtime-predictor-8699d78cf-jb9rm

Pulled

Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine

kserve-ci-e2e-test

kubelet

isvc-tensorflow-runtime-predictor-8699d78cf-jb9rm

Pulled

Container image "tensorflow/serving:2.6.2" already present on machine

kserve-ci-e2e-test

kubelet

isvc-tensorflow-runtime-predictor-8699d78cf-jb9rm

Created

Created container: kserve-container

kserve-ci-e2e-test

kubelet

isvc-tensorflow-runtime-predictor-8699d78cf-jb9rm

Created

Created container: kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-tensorflow-runtime-predictor-8699d78cf-jb9rm

Started

Started container kserve-container
(x3)

kserve-ci-e2e-test

kubelet

isvc-tensorflow-runtime-predictor-8699d78cf-jb9rm

Unhealthy

Readiness probe failed: dial tcp 10.134.0.51:8080: connect: connection refused
(x12)

kserve-ci-e2e-test

v1beta1Controllers

isvc-tensorflow-runtime

VirtualServiceCRDNotFound

Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.

kserve-ci-e2e-test

v1beta1Controllers

isvc-tensorflow-runtime

InferenceServiceReady

InferenceService [isvc-tensorflow-runtime] is Ready
(x6)

kserve-ci-e2e-test

kubelet

isvc-tensorflow-predictor-6756f669d7-24gzd

Unhealthy

Readiness probe failed: Get "https://10.134.0.50:8643/healthz": dial tcp 10.134.0.50:8643: connect: connection refused
(x2)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-tensorflow-runtime-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x2)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-tensorflow-runtime-predictor

FailedGetResourceMetric

failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API

kserve-ci-e2e-test

kubelet

isvc-tensorflow-runtime-predictor-8699d78cf-jb9rm

Killing

Stopping container kube-rbac-proxy

kserve-ci-e2e-test

v1beta1Controllers

isvc-triton

UpdateFailed

Failed to update status for InferenceService "isvc-triton": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-triton": the object has been modified; please apply your changes to the latest version and try again

kserve-ci-e2e-test

kubelet

isvc-triton-predictor-84bb65d94b-5qstx

FailedMount

MountVolume.SetUp failed for volume "proxy-tls" : secret "isvc-triton-predictor-serving-cert" not found

kserve-ci-e2e-test

deployment-controller

isvc-triton-predictor

ScalingReplicaSet

Scaled up replica set isvc-triton-predictor-84bb65d94b from 0 to 1

kserve-ci-e2e-test

replicaset-controller

isvc-triton-predictor-84bb65d94b

SuccessfulCreate

Created pod: isvc-triton-predictor-84bb65d94b-5qstx

kserve-ci-e2e-test

kubelet

isvc-tensorflow-runtime-predictor-8699d78cf-jb9rm

Killing

Stopping container kserve-container

kserve-ci-e2e-test

multus

isvc-triton-predictor-84bb65d94b-5qstx

AddedInterface

Add eth0 [10.134.0.52/23] from ovn-kubernetes

kserve-ci-e2e-test

kubelet

isvc-triton-predictor-84bb65d94b-5qstx

Pulled

Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:0dd2798f9cdbffc0563bd148612201df0e589ff8c26a6b19d321fd120fc5c097" already present on machine

kserve-ci-e2e-test

kubelet

isvc-triton-predictor-84bb65d94b-5qstx

Created

Created container: storage-initializer

kserve-ci-e2e-test

kubelet

isvc-triton-predictor-84bb65d94b-5qstx

Started

Started container storage-initializer

kserve-ci-e2e-test

kubelet

isvc-triton-predictor-84bb65d94b-5qstx

Pulling

Pulling image "nvcr.io/nvidia/tritonserver:23.05-py3"
(x6)

kserve-ci-e2e-test

kubelet

isvc-tensorflow-runtime-predictor-8699d78cf-jb9rm

Unhealthy

Readiness probe failed: Get "https://10.134.0.51:8643/healthz": dial tcp 10.134.0.51:8643: connect: connection refused

kserve-ci-e2e-test

kubelet

isvc-triton-predictor-84bb65d94b-5qstx

Pulled

Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine
(x8)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-triton-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API

kserve-ci-e2e-test

kubelet

isvc-triton-predictor-84bb65d94b-5qstx

Created

Created container: kserve-container

kserve-ci-e2e-test

kubelet

isvc-triton-predictor-84bb65d94b-5qstx

Pulled

Successfully pulled image "nvcr.io/nvidia/tritonserver:23.05-py3" in 1m54.734s (1m54.734s including waiting). Image size: 12907074623 bytes.
(x8)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-triton-predictor

FailedGetResourceMetric

failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API

kserve-ci-e2e-test

kubelet

isvc-triton-predictor-84bb65d94b-5qstx

Started

Started container kserve-container

kserve-ci-e2e-test

kubelet

isvc-triton-predictor-84bb65d94b-5qstx

Created

Created container: kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-triton-predictor-84bb65d94b-5qstx

Started

Started container kube-rbac-proxy
(x2)

kserve-ci-e2e-test

kubelet

isvc-triton-predictor-84bb65d94b-5qstx

Unhealthy

Readiness probe failed: dial tcp 10.134.0.52:8080: connect: connection refused
(x13)

kserve-ci-e2e-test

v1beta1Controllers

isvc-triton

VirtualServiceCRDNotFound

Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.

kserve-ci-e2e-test

v1beta1Controllers

isvc-triton

InferenceServiceReady

InferenceService [isvc-triton] is Ready

kserve-ci-e2e-test

deployment-controller

isvc-xgboost-predictor

ScalingReplicaSet

Scaled up replica set isvc-xgboost-predictor-8689c4cfcc from 0 to 1
(x2)

kserve-ci-e2e-test

v1beta1Controllers

isvc-xgboost

UpdateFailed

Failed to update status for InferenceService "isvc-xgboost": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-xgboost": the object has been modified; please apply your changes to the latest version and try again

kserve-ci-e2e-test

v1beta1Controllers

isvc-xgboost

InternalError

fails to update InferenceService status: Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-xgboost": the object has been modified; please apply your changes to the latest version and try again

kserve-ci-e2e-test

kubelet

isvc-triton-predictor-84bb65d94b-5qstx

Killing

Stopping container kube-rbac-proxy

kserve-ci-e2e-test

replicaset-controller

isvc-xgboost-predictor-8689c4cfcc

SuccessfulCreate

Created pod: isvc-xgboost-predictor-8689c4cfcc-p4qn9

kserve-ci-e2e-test

kubelet

isvc-triton-predictor-84bb65d94b-5qstx

Killing

Stopping container kserve-container

kserve-ci-e2e-test

kubelet

isvc-xgboost-predictor-8689c4cfcc-p4qn9

Created

Created container: storage-initializer

kserve-ci-e2e-test

kubelet

isvc-xgboost-predictor-8689c4cfcc-p4qn9

Started

Started container storage-initializer

kserve-ci-e2e-test

kubelet

isvc-xgboost-predictor-8689c4cfcc-p4qn9

Pulled

Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:0dd2798f9cdbffc0563bd148612201df0e589ff8c26a6b19d321fd120fc5c097" already present on machine

kserve-ci-e2e-test

multus

isvc-xgboost-predictor-8689c4cfcc-p4qn9

AddedInterface

Add eth0 [10.134.0.53/23] from ovn-kubernetes

kserve-ci-e2e-test

kubelet

isvc-xgboost-predictor-8689c4cfcc-p4qn9

Pulling

Pulling image "kserve/xgbserver:latest"

kserve-ci-e2e-test

kubelet

isvc-xgboost-predictor-8689c4cfcc-p4qn9

Created

Created container: kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-xgboost-predictor-8689c4cfcc-p4qn9

Pulled

Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine

kserve-ci-e2e-test

kubelet

isvc-xgboost-predictor-8689c4cfcc-p4qn9

Pulled

Successfully pulled image "kserve/xgbserver:latest" in 21.652s (21.652s including waiting). Image size: 1306417402 bytes.

kserve-ci-e2e-test

kubelet

isvc-xgboost-predictor-8689c4cfcc-p4qn9

Created

Created container: kserve-container

kserve-ci-e2e-test

kubelet

isvc-xgboost-predictor-8689c4cfcc-p4qn9

Started

Started container kserve-container

kserve-ci-e2e-test

kubelet

isvc-xgboost-predictor-8689c4cfcc-p4qn9

Started

Started container kube-rbac-proxy
(x4)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-xgboost-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x4)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-xgboost-predictor

FailedGetResourceMetric

failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x8)

kserve-ci-e2e-test

kubelet

isvc-xgboost-predictor-8689c4cfcc-p4qn9

Unhealthy

Readiness probe failed: dial tcp 10.134.0.53:8080: connect: connection refused
(x2)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-xgboost-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
(x2)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-xgboost-predictor

FailedGetResourceMetric

failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
(x12)

kserve-ci-e2e-test

v1beta1Controllers

isvc-xgboost

VirtualServiceCRDNotFound

Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.

kserve-ci-e2e-test

v1beta1Controllers

isvc-xgboost

InferenceServiceReady

InferenceService [isvc-xgboost] is Ready

kserve-ci-e2e-test

kubelet

isvc-xgboost-predictor-8689c4cfcc-p4qn9

Killing

Stopping container kube-rbac-proxy

kserve-ci-e2e-test

multus

isvc-xgboost-v2-mlserver-predictor-67d4bc6646-7g2mc

AddedInterface

Add eth0 [10.134.0.54/23] from ovn-kubernetes

kserve-ci-e2e-test

kubelet

isvc-xgboost-predictor-8689c4cfcc-p4qn9

Killing

Stopping container kserve-container

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-mlserver-predictor-67d4bc6646-7g2mc

Started

Started container storage-initializer

kserve-ci-e2e-test

replicaset-controller

isvc-xgboost-v2-mlserver-predictor-67d4bc6646

SuccessfulCreate

Created pod: isvc-xgboost-v2-mlserver-predictor-67d4bc6646-7g2mc

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-mlserver-predictor-67d4bc6646-7g2mc

Created

Created container: storage-initializer

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-mlserver-predictor-67d4bc6646-7g2mc

Pulled

Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:0dd2798f9cdbffc0563bd148612201df0e589ff8c26a6b19d321fd120fc5c097" already present on machine

kserve-ci-e2e-test

deployment-controller

isvc-xgboost-v2-mlserver-predictor

ScalingReplicaSet

Scaled up replica set isvc-xgboost-v2-mlserver-predictor-67d4bc6646 from 0 to 1
(x2)

kserve-ci-e2e-test

v1beta1Controllers

isvc-xgboost-v2-mlserver

UpdateFailed

Failed to update status for InferenceService "isvc-xgboost-v2-mlserver": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-xgboost-v2-mlserver": the object has been modified; please apply your changes to the latest version and try again

kserve-ci-e2e-test

v1beta1Controllers

isvc-xgboost-v2-mlserver

InternalError

fails to update InferenceService status: Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-xgboost-v2-mlserver": the object has been modified; please apply your changes to the latest version and try again

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-mlserver-predictor-67d4bc6646-7g2mc

Created

Created container: kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-mlserver-predictor-67d4bc6646-7g2mc

Started

Started container kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-mlserver-predictor-67d4bc6646-7g2mc

Created

Created container: kserve-container

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-mlserver-predictor-67d4bc6646-7g2mc

Pulled

Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-mlserver-predictor-67d4bc6646-7g2mc

Started

Started container kserve-container

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-mlserver-predictor-67d4bc6646-7g2mc

Pulled

Container image "docker.io/seldonio/mlserver:1.7.1" already present on machine

kserve-ci-e2e-test

v1beta1Controllers

isvc-xgboost-v2-mlserver

InferenceServiceReady

InferenceService [isvc-xgboost-v2-mlserver] is Ready
(x13)

kserve-ci-e2e-test

v1beta1Controllers

isvc-xgboost-v2-mlserver

VirtualServiceCRDNotFound

Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.
(x3)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-xgboost-v2-mlserver-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x3)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-xgboost-v2-mlserver-predictor

FailedGetResourceMetric

failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API

kserve-ci-e2e-test

kubelet

xgboost-v2-mlserver-predictor-7799869d6f-vwxgn

FailedMount

MountVolume.SetUp failed for volume "proxy-tls" : secret "xgboost-v2-mlserver-predictor-serving-cert" not found

kserve-ci-e2e-test

deployment-controller

xgboost-v2-mlserver-predictor

ScalingReplicaSet

Scaled up replica set xgboost-v2-mlserver-predictor-7799869d6f from 0 to 1

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-mlserver-predictor-67d4bc6646-7g2mc

Killing

Stopping container kube-rbac-proxy

kserve-ci-e2e-test

replicaset-controller

xgboost-v2-mlserver-predictor-7799869d6f

SuccessfulCreate

Created pod: xgboost-v2-mlserver-predictor-7799869d6f-vwxgn

kserve-ci-e2e-test

v1beta1Controllers

xgboost-v2-mlserver

InternalError

fails to update InferenceService status: Operation cannot be fulfilled on inferenceservices.serving.kserve.io "xgboost-v2-mlserver": the object has been modified; please apply your changes to the latest version and try again

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-mlserver-predictor-67d4bc6646-7g2mc

Killing

Stopping container kserve-container
(x2)

kserve-ci-e2e-test

v1beta1Controllers

xgboost-v2-mlserver

UpdateFailed

Failed to update status for InferenceService "xgboost-v2-mlserver": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "xgboost-v2-mlserver": the object has been modified; please apply your changes to the latest version and try again

kserve-ci-e2e-test

kubelet

xgboost-v2-mlserver-predictor-7799869d6f-vwxgn

Started

Started container storage-initializer

kserve-ci-e2e-test

multus

xgboost-v2-mlserver-predictor-7799869d6f-vwxgn

AddedInterface

Add eth0 [10.134.0.55/23] from ovn-kubernetes

kserve-ci-e2e-test

kubelet

xgboost-v2-mlserver-predictor-7799869d6f-vwxgn

Pulled

Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:0dd2798f9cdbffc0563bd148612201df0e589ff8c26a6b19d321fd120fc5c097" already present on machine

kserve-ci-e2e-test

kubelet

xgboost-v2-mlserver-predictor-7799869d6f-vwxgn

Created

Created container: storage-initializer

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-mlserver-predictor-67d4bc6646-7g2mc

Unhealthy

Readiness probe failed: Get "http://10.134.0.54:8080/v2/models/isvc-xgboost-v2-mlserver/ready": dial tcp 10.134.0.54:8080: connect: connection refused

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-mlserver-predictor-67d4bc6646-7g2mc

Unhealthy

Readiness probe failed: Get "https://10.134.0.54:8643/healthz": dial tcp 10.134.0.54:8643: connect: connection refused

kserve-ci-e2e-test

kubelet

xgboost-v2-mlserver-predictor-7799869d6f-vwxgn

Started

Started container kserve-container

kserve-ci-e2e-test

kubelet

xgboost-v2-mlserver-predictor-7799869d6f-vwxgn

Started

Started container kube-rbac-proxy

kserve-ci-e2e-test

kubelet

xgboost-v2-mlserver-predictor-7799869d6f-vwxgn

Pulled

Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine

kserve-ci-e2e-test

kubelet

xgboost-v2-mlserver-predictor-7799869d6f-vwxgn

Created

Created container: kube-rbac-proxy

kserve-ci-e2e-test

kubelet

xgboost-v2-mlserver-predictor-7799869d6f-vwxgn

Created

Created container: kserve-container

kserve-ci-e2e-test

kubelet

xgboost-v2-mlserver-predictor-7799869d6f-vwxgn

Pulled

Container image "docker.io/seldonio/mlserver:1.7.1" already present on machine
(x13)

kserve-ci-e2e-test

v1beta1Controllers

xgboost-v2-mlserver

VirtualServiceCRDNotFound

Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.

kserve-ci-e2e-test

v1beta1Controllers

xgboost-v2-mlserver

InferenceServiceReady

InferenceService [xgboost-v2-mlserver] is Ready
(x3)

kserve-ci-e2e-test

horizontal-pod-autoscaler

xgboost-v2-mlserver-predictor

FailedGetResourceMetric

failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x3)

kserve-ci-e2e-test

horizontal-pod-autoscaler

xgboost-v2-mlserver-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API

kserve-ci-e2e-test

multus

isvc-xgboost-runtime-predictor-779db84d9-6nqmk

AddedInterface

Add eth0 [10.134.0.56/23] from ovn-kubernetes

kserve-ci-e2e-test

deployment-controller

isvc-xgboost-runtime-predictor

ScalingReplicaSet

Scaled up replica set isvc-xgboost-runtime-predictor-779db84d9 from 0 to 1

kserve-ci-e2e-test

kubelet

isvc-xgboost-runtime-predictor-779db84d9-6nqmk

Created

Created container: storage-initializer

kserve-ci-e2e-test

kubelet

isvc-xgboost-runtime-predictor-779db84d9-6nqmk

Pulled

Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:0dd2798f9cdbffc0563bd148612201df0e589ff8c26a6b19d321fd120fc5c097" already present on machine

kserve-ci-e2e-test

kubelet

xgboost-v2-mlserver-predictor-7799869d6f-vwxgn

Killing

Stopping container kube-rbac-proxy

kserve-ci-e2e-test

kubelet

xgboost-v2-mlserver-predictor-7799869d6f-vwxgn

Killing

Stopping container kserve-container

kserve-ci-e2e-test

replicaset-controller

isvc-xgboost-runtime-predictor-779db84d9

SuccessfulCreate

Created pod: isvc-xgboost-runtime-predictor-779db84d9-6nqmk

kserve-ci-e2e-test

kubelet

isvc-xgboost-runtime-predictor-779db84d9-6nqmk

Started

Started container storage-initializer

kserve-ci-e2e-test

v1beta1Controllers

isvc-xgboost-runtime

UpdateFailed

Failed to update status for InferenceService "isvc-xgboost-runtime": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-xgboost-runtime": the object has been modified; please apply your changes to the latest version and try again

kserve-ci-e2e-test

kubelet

xgboost-v2-mlserver-predictor-7799869d6f-vwxgn

Unhealthy

Readiness probe failed: Get "https://10.134.0.55:8643/healthz": dial tcp 10.134.0.55:8643: connect: connection refused

kserve-ci-e2e-test

kubelet

isvc-xgboost-runtime-predictor-779db84d9-6nqmk

Pulled

Container image "kserve/xgbserver:latest" already present on machine

kserve-ci-e2e-test

kubelet

isvc-xgboost-runtime-predictor-779db84d9-6nqmk

Created

Created container: kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-xgboost-runtime-predictor-779db84d9-6nqmk

Pulled

Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine

kserve-ci-e2e-test

kubelet

isvc-xgboost-runtime-predictor-779db84d9-6nqmk

Started

Started container kserve-container

kserve-ci-e2e-test

kubelet

isvc-xgboost-runtime-predictor-779db84d9-6nqmk

Created

Created container: kserve-container

kserve-ci-e2e-test

kubelet

isvc-xgboost-runtime-predictor-779db84d9-6nqmk

Started

Started container kube-rbac-proxy
(x2)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-xgboost-runtime-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x2)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-xgboost-runtime-predictor

FailedGetResourceMetric

failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x2)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-xgboost-runtime-predictor

FailedGetResourceMetric

failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
(x2)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-xgboost-runtime-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)

kserve-ci-e2e-test

v1beta1Controllers

isvc-xgboost-runtime

InferenceServiceReady

InferenceService [isvc-xgboost-runtime] is Ready
(x12)

kserve-ci-e2e-test

v1beta1Controllers

isvc-xgboost-runtime

VirtualServiceCRDNotFound

Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.

kserve-ci-e2e-test

kubelet

isvc-xgboost-runtime-predictor-779db84d9-6nqmk

Killing

Stopping container kserve-container

kserve-ci-e2e-test

kubelet

isvc-xgboost-runtime-predictor-779db84d9-6nqmk

Killing

Stopping container kube-rbac-proxy

kserve-ci-e2e-test

v1beta1Controllers

isvc-xgboost-v2-runtime

InternalError

fails to update InferenceService status: Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-xgboost-v2-runtime": the object has been modified; please apply your changes to the latest version and try again
(x2)

kserve-ci-e2e-test

v1beta1Controllers

isvc-xgboost-v2-runtime

UpdateFailed

Failed to update status for InferenceService "isvc-xgboost-v2-runtime": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-xgboost-v2-runtime": the object has been modified; please apply your changes to the latest version and try again

kserve-ci-e2e-test

replicaset-controller

isvc-xgboost-v2-runtime-predictor-6dc5954dc

SuccessfulCreate

Created pod: isvc-xgboost-v2-runtime-predictor-6dc5954dc-nk56m

kserve-ci-e2e-test

deployment-controller

isvc-xgboost-v2-runtime-predictor

ScalingReplicaSet

Scaled up replica set isvc-xgboost-v2-runtime-predictor-6dc5954dc from 0 to 1
(x2)

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-runtime-predictor-6dc5954dc-nk56m

FailedMount

MountVolume.SetUp failed for volume "proxy-tls" : secret "isvc-xgboost-v2-runtime-predictor-serving-cert" not found

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-runtime-predictor-6dc5954dc-nk56m

Started

Started container storage-initializer

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-runtime-predictor-6dc5954dc-nk56m

Created

Created container: storage-initializer

kserve-ci-e2e-test

multus

isvc-xgboost-v2-runtime-predictor-6dc5954dc-nk56m

AddedInterface

Add eth0 [10.134.0.57/23] from ovn-kubernetes

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-runtime-predictor-6dc5954dc-nk56m

Pulled

Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:0dd2798f9cdbffc0563bd148612201df0e589ff8c26a6b19d321fd120fc5c097" already present on machine

kserve-ci-e2e-test

kubelet

isvc-xgboost-runtime-predictor-779db84d9-6nqmk

Unhealthy

Readiness probe failed: Get "https://10.134.0.56:8643/healthz": dial tcp 10.134.0.56:8643: connect: connection refused
(x8)

kserve-ci-e2e-test

kubelet

isvc-xgboost-runtime-predictor-779db84d9-6nqmk

Unhealthy

Readiness probe failed: dial tcp 10.134.0.56:8080: connect: connection refused

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-runtime-predictor-6dc5954dc-nk56m

Created

Created container: kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-runtime-predictor-6dc5954dc-nk56m

Pulled

Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-runtime-predictor-6dc5954dc-nk56m

Started

Started container kserve-container

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-runtime-predictor-6dc5954dc-nk56m

Created

Created container: kserve-container

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-runtime-predictor-6dc5954dc-nk56m

Pulled

Container image "docker.io/seldonio/mlserver:1.7.1" already present on machine

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-runtime-predictor-6dc5954dc-nk56m

Started

Started container kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-runtime-predictor-6dc5954dc-nk56m

Unhealthy

Readiness probe failed: HTTP probe failed with statuscode: 400
(x2)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-xgboost-v2-runtime-predictor

FailedGetResourceMetric

failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x2)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-xgboost-v2-runtime-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x14)

kserve-ci-e2e-test

v1beta1Controllers

isvc-xgboost-v2-runtime

VirtualServiceCRDNotFound

Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.

kserve-ci-e2e-test

v1beta1Controllers

isvc-xgboost-v2-runtime

InferenceServiceReady

InferenceService [isvc-xgboost-v2-runtime] is Ready

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-runtime-predictor-6dc5954dc-nk56m

Killing

Stopping container kube-rbac-proxy

kserve-ci-e2e-test

v1beta1Controllers

isvc-xgboost-v2

UpdateFailed

Failed to update status for InferenceService "isvc-xgboost-v2": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-xgboost-v2": the object has been modified; please apply your changes to the latest version and try again
(x2)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-xgboost-v2-runtime-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
(x2)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-xgboost-v2-runtime-predictor

FailedGetResourceMetric

failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)

kserve-ci-e2e-test

replicaset-controller

isvc-xgboost-v2-predictor-6fcdd6977c

SuccessfulCreate

Created pod: isvc-xgboost-v2-predictor-6fcdd6977c-4k8vt

kserve-ci-e2e-test

deployment-controller

isvc-xgboost-v2-predictor

ScalingReplicaSet

Scaled up replica set isvc-xgboost-v2-predictor-6fcdd6977c from 0 to 1

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-runtime-predictor-6dc5954dc-nk56m

Killing

Stopping container kserve-container

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-predictor-6fcdd6977c-4k8vt

Created

Created container: storage-initializer

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-predictor-6fcdd6977c-4k8vt

Started

Started container storage-initializer

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-predictor-6fcdd6977c-4k8vt

FailedMount

MountVolume.SetUp failed for volume "proxy-tls" : secret "isvc-xgboost-v2-predictor-serving-cert" not found

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-predictor-6fcdd6977c-4k8vt

Pulled

Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:0dd2798f9cdbffc0563bd148612201df0e589ff8c26a6b19d321fd120fc5c097" already present on machine

kserve-ci-e2e-test

multus

isvc-xgboost-v2-predictor-6fcdd6977c-4k8vt

AddedInterface

Add eth0 [10.134.0.58/23] from ovn-kubernetes

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-runtime-predictor-6dc5954dc-nk56m

Unhealthy

Readiness probe failed: Get "https://10.134.0.57:8643/healthz": dial tcp 10.134.0.57:8643: connect: connection refused

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-runtime-predictor-6dc5954dc-nk56m

Unhealthy

Readiness probe failed: Get "http://10.134.0.57:8080/v2/models/isvc-xgboost-v2-runtime/ready": dial tcp 10.134.0.57:8080: connect: connection refused

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-predictor-6fcdd6977c-4k8vt

Started

Started container kserve-container

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-predictor-6fcdd6977c-4k8vt

Pulled

Container image "kserve/xgbserver:latest" already present on machine

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-predictor-6fcdd6977c-4k8vt

Started

Started container kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-predictor-6fcdd6977c-4k8vt

Created

Created container: kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-predictor-6fcdd6977c-4k8vt

Pulled

Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-predictor-6fcdd6977c-4k8vt

Created

Created container: kserve-container
(x2)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-xgboost-v2-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x2)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-xgboost-v2-predictor

FailedGetResourceMetric

failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x13)

kserve-ci-e2e-test

v1beta1Controllers

isvc-xgboost-v2

VirtualServiceCRDNotFound

Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.

kserve-ci-e2e-test

v1beta1Controllers

isvc-xgboost-v2

InferenceServiceReady

InferenceService [isvc-xgboost-v2] is Ready
(x3)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-xgboost-v2-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
(x3)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-xgboost-v2-predictor

FailedGetResourceMetric

failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)

kserve-ci-e2e-test

v1beta1Controllers

isvc-sklearn-s3

UpdateFailed

Failed to update status for InferenceService "isvc-sklearn-s3": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-sklearn-s3": the object has been modified; please apply your changes to the latest version and try again

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-predictor-6fcdd6977c-4k8vt

Killing

Stopping container kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-predictor-6fcdd6977c-4k8vt

Killing

Stopping container kserve-container

kserve-ci-e2e-test

deployment-controller

isvc-sklearn-s3-predictor

ScalingReplicaSet

Scaled up replica set isvc-sklearn-s3-predictor-d954bcd99 from 0 to 1

kserve-ci-e2e-test

replicaset-controller

isvc-sklearn-s3-predictor-d954bcd99

SuccessfulCreate

Created pod: isvc-sklearn-s3-predictor-d954bcd99-79lq9

kserve-ci-e2e-test

kubelet

isvc-sklearn-s3-predictor-d954bcd99-79lq9

FailedMount

MountVolume.SetUp failed for volume "proxy-tls" : secret "isvc-sklearn-s3-predictor-serving-cert" not found

kserve-ci-e2e-test

kubelet

isvc-sklearn-s3-predictor-d954bcd99-79lq9

Pulled

Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:0dd2798f9cdbffc0563bd148612201df0e589ff8c26a6b19d321fd120fc5c097" already present on machine

kserve-ci-e2e-test

kubelet

isvc-sklearn-s3-predictor-d954bcd99-79lq9

Created

Created container: storage-initializer

kserve-ci-e2e-test

kubelet

isvc-sklearn-s3-predictor-d954bcd99-79lq9

Started

Started container storage-initializer

kserve-ci-e2e-test

multus

isvc-sklearn-s3-predictor-d954bcd99-79lq9

AddedInterface

Add eth0 [10.134.0.59/23] from ovn-kubernetes

kserve-ci-e2e-test

kubelet

isvc-sklearn-s3-predictor-d954bcd99-79lq9

Pulled

Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine

kserve-ci-e2e-test

kubelet

isvc-sklearn-s3-predictor-d954bcd99-79lq9

Started

Started container kserve-container
(x9)

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-predictor-6fcdd6977c-4k8vt

Unhealthy

Readiness probe failed: dial tcp 10.134.0.58:8080: connect: connection refused

kserve-ci-e2e-test

kubelet

isvc-xgboost-v2-predictor-6fcdd6977c-4k8vt

Unhealthy

Readiness probe failed: Get "https://10.134.0.58:8643/healthz": dial tcp 10.134.0.58:8643: connect: connection refused

kserve-ci-e2e-test

kubelet

isvc-sklearn-s3-predictor-d954bcd99-79lq9

Pulled

Container image "quay.io/opendatahub/sklearn-serving-runtime:odh-pr-1450" already present on machine

kserve-ci-e2e-test

kubelet

isvc-sklearn-s3-predictor-d954bcd99-79lq9

Created

Created container: kserve-container

kserve-ci-e2e-test

kubelet

isvc-sklearn-s3-predictor-d954bcd99-79lq9

Created

Created container: kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-sklearn-s3-predictor-d954bcd99-79lq9

Started

Started container kube-rbac-proxy
(x2)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-sklearn-s3-predictor

FailedGetResourceMetric

failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x2)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-sklearn-s3-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
(x8)

kserve-ci-e2e-test

kubelet

isvc-sklearn-s3-predictor-d954bcd99-79lq9

Unhealthy

Readiness probe failed: dial tcp 10.134.0.59:8080: connect: connection refused
(x12)

kserve-ci-e2e-test

v1beta1Controllers

isvc-sklearn-s3

VirtualServiceCRDNotFound

Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true.

kserve-ci-e2e-test

v1beta1Controllers

isvc-sklearn-s3

InferenceServiceReady

InferenceService [isvc-sklearn-s3] is Ready
(x3)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-sklearn-s3-predictor

FailedGetResourceMetric

failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
(x3)

kserve-ci-e2e-test

horizontal-pod-autoscaler

isvc-sklearn-s3-predictor

FailedComputeMetricsReplicas

invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)

kserve-ci-e2e-test

kubelet

isvc-sklearn-s3-tls-global-pass-predictor-666f4c58c-2rgjc

Created

Created container: storage-initializer
(x2)

kserve-ci-e2e-test

v1beta1Controllers

isvc-sklearn-s3-tls-global-pass

UpdateFailed

Failed to update status for InferenceService "isvc-sklearn-s3-tls-global-pass": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-sklearn-s3-tls-global-pass": the object has been modified; please apply your changes to the latest version and try again

kserve-ci-e2e-test

kubelet

isvc-sklearn-s3-predictor-d954bcd99-79lq9

Killing

Stopping container kube-rbac-proxy

kserve-ci-e2e-test

kubelet

isvc-sklearn-s3-predictor-d954bcd99-79lq9

Killing

Stopping container kserve-container

kserve-ci-e2e-test | replicaset-controller | isvc-sklearn-s3-tls-global-pass-predictor-666f4c58c | SuccessfulCreate | Created pod: isvc-sklearn-s3-tls-global-pass-predictor-666f4c58c-2rgjc
kserve-ci-e2e-test | deployment-controller | isvc-sklearn-s3-tls-global-pass-predictor | ScalingReplicaSet | Scaled up replica set isvc-sklearn-s3-tls-global-pass-predictor-666f4c58c from 0 to 1
kserve-ci-e2e-test | multus | isvc-sklearn-s3-tls-global-pass-predictor-666f4c58c-2rgjc | AddedInterface | Add eth0 [10.134.0.60/23] from ovn-kubernetes
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-global-pass-predictor-666f4c58c-2rgjc | Pulled | Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:0dd2798f9cdbffc0563bd148612201df0e589ff8c26a6b19d321fd120fc5c097" already present on machine
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-global-pass-predictor-666f4c58c-2rgjc | Started | Started container storage-initializer
kserve-ci-e2e-test | v1beta1Controllers | isvc-sklearn-s3-tls-global-pass | InternalError | fails to update InferenceService status: Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-sklearn-s3-tls-global-pass": the object has been modified; please apply your changes to the latest version and try again
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-global-pass-predictor-666f4c58c-2rgjc | Created | Created container: kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-global-pass-predictor-666f4c58c-2rgjc | Created | Created container: kserve-container
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-global-pass-predictor-666f4c58c-2rgjc | Started | Started container kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-global-pass-predictor-666f4c58c-2rgjc | Pulled | Container image "quay.io/opendatahub/sklearn-serving-runtime:odh-pr-1450" already present on machine
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-global-pass-predictor-666f4c58c-2rgjc | Pulled | Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-global-pass-predictor-666f4c58c-2rgjc | Started | Started container kserve-container
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-sklearn-s3-tls-global-pass-predictor | FailedGetResourceMetric | failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API (x2)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-sklearn-s3-tls-global-pass-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API (x2)
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-global-pass-predictor-666f4c58c-2rgjc | Unhealthy | Readiness probe failed: dial tcp 10.134.0.60:8080: connect: connection refused (x8)
kserve-ci-e2e-test | v1beta1Controllers | isvc-sklearn-s3-tls-global-pass | VirtualServiceCRDNotFound | Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true. (x13)
kserve-ci-e2e-test | v1beta1Controllers | isvc-sklearn-s3-tls-global-pass | InferenceServiceReady | InferenceService [isvc-sklearn-s3-tls-global-pass] is Ready
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-sklearn-s3-tls-global-pass-predictor | FailedGetResourceMetric | failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready) (x3)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-sklearn-s3-tls-global-pass-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready) (x3)
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-global-pass-predictor-666f4c58c-2rgjc | Killing | Stopping container kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-global-pass-predictor-666f4c58c-2rgjc | Killing | Stopping container kserve-container

kserve-ci-e2e-test | v1beta1Controllers | isvc-sklearn-s3-tls-global-fail | UpdateFailed | Failed to update status for InferenceService "isvc-sklearn-s3-tls-global-fail": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-sklearn-s3-tls-global-fail": the object has been modified; please apply your changes to the latest version and try again
kserve-ci-e2e-test | deployment-controller | isvc-sklearn-s3-tls-global-fail-predictor | ScalingReplicaSet | Scaled up replica set isvc-sklearn-s3-tls-global-fail-predictor-9c6dddd45 from 0 to 1
kserve-ci-e2e-test | replicaset-controller | isvc-sklearn-s3-tls-global-fail-predictor-9c6dddd45 | SuccessfulCreate | Created pod: isvc-sklearn-s3-tls-global-fail-predictor-9c6dddd45-6bvqq
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-global-fail-predictor-9c6dddd45-6bvqq | FailedMount | MountVolume.SetUp failed for volume "proxy-tls" : secret "isvc-sklearn-s3-tls-global-fail-predictor-serving-cert" not found
kserve-ci-e2e-test | multus | isvc-sklearn-s3-tls-global-fail-predictor-9c6dddd45-6bvqq | AddedInterface | Add eth0 [10.134.0.61/23] from ovn-kubernetes
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-global-fail-predictor-9c6dddd45-6bvqq | Started | Started container storage-initializer (x2)
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-global-fail-predictor-9c6dddd45-6bvqq | Pulled | Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:0dd2798f9cdbffc0563bd148612201df0e589ff8c26a6b19d321fd120fc5c097" already present on machine (x2)
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-global-fail-predictor-9c6dddd45-6bvqq | Created | Created container: storage-initializer (x2)
kserve-ci-e2e-test | v1beta1Controllers | isvc-sklearn-s3-tls-global-fail | VirtualServiceCRDNotFound | Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true. (x12)
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-global-fail-predictor-9c6dddd45-6bvqq | BackOff | Back-off restarting failed container storage-initializer in pod isvc-sklearn-s3-tls-global-fail-predictor-9c6dddd45-6bvqq_kserve-ci-e2e-test(dfa7b3af-7f5f-4e0d-bfc5-34fc4c2a10c8)

kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-custom-pass-predictor-7bd5b8d64c-wcjcp | Started | Started container storage-initializer
kserve-ci-e2e-test | deployment-controller | isvc-sklearn-s3-tls-custom-pass-predictor | ScalingReplicaSet | Scaled up replica set isvc-sklearn-s3-tls-custom-pass-predictor-7bd5b8d64c from 0 to 1
kserve-ci-e2e-test | v1beta1Controllers | isvc-sklearn-s3-tls-custom-pass | UpdateFailed | Failed to update status for InferenceService "isvc-sklearn-s3-tls-custom-pass": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-sklearn-s3-tls-custom-pass": the object has been modified; please apply your changes to the latest version and try again
kserve-ci-e2e-test | replicaset-controller | isvc-sklearn-s3-tls-custom-pass-predictor-7bd5b8d64c | SuccessfulCreate | Created pod: isvc-sklearn-s3-tls-custom-pass-predictor-7bd5b8d64c-wcjcp
kserve-ci-e2e-test | multus | isvc-sklearn-s3-tls-custom-pass-predictor-7bd5b8d64c-wcjcp | AddedInterface | Add eth0 [10.134.0.62/23] from ovn-kubernetes
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-custom-pass-predictor-7bd5b8d64c-wcjcp | Pulled | Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:0dd2798f9cdbffc0563bd148612201df0e589ff8c26a6b19d321fd120fc5c097" already present on machine
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-custom-pass-predictor-7bd5b8d64c-wcjcp | Created | Created container: storage-initializer
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-custom-pass-predictor-7bd5b8d64c-wcjcp | Created | Created container: kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-custom-pass-predictor-7bd5b8d64c-wcjcp | Created | Created container: kserve-container
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-custom-pass-predictor-7bd5b8d64c-wcjcp | Started | Started container kserve-container
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-custom-pass-predictor-7bd5b8d64c-wcjcp | Pulled | Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-custom-pass-predictor-7bd5b8d64c-wcjcp | Started | Started container kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-custom-pass-predictor-7bd5b8d64c-wcjcp | Pulled | Container image "quay.io/opendatahub/sklearn-serving-runtime:odh-pr-1450" already present on machine
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-sklearn-s3-tls-custom-pass-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API (x2)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-sklearn-s3-tls-custom-pass-predictor | FailedGetResourceMetric | failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API (x2)
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-custom-pass-predictor-7bd5b8d64c-wcjcp | Unhealthy | Readiness probe failed: dial tcp 10.134.0.62:8080: connect: connection refused (x8)
kserve-ci-e2e-test | v1beta1Controllers | isvc-sklearn-s3-tls-custom-pass | InferenceServiceReady | InferenceService [isvc-sklearn-s3-tls-custom-pass] is Ready
kserve-ci-e2e-test | v1beta1Controllers | isvc-sklearn-s3-tls-custom-pass | VirtualServiceCRDNotFound | Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true. (x12)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-sklearn-s3-tls-custom-pass-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready) (x3)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-sklearn-s3-tls-custom-pass-predictor | FailedGetResourceMetric | failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready) (x3)
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-custom-pass-predictor-7bd5b8d64c-wcjcp | Killing | Stopping container kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-custom-pass-predictor-7bd5b8d64c-wcjcp | Killing | Stopping container kserve-container

kserve-ci-e2e-test | v1beta1Controllers | isvc-sklearn-s3-tls-custom-fail | UpdateFailed | Failed to update status for InferenceService "isvc-sklearn-s3-tls-custom-fail": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-sklearn-s3-tls-custom-fail": the object has been modified; please apply your changes to the latest version and try again
kserve-ci-e2e-test | replicaset-controller | isvc-sklearn-s3-tls-custom-fail-predictor-58f5875f45 | SuccessfulCreate | Created pod: isvc-sklearn-s3-tls-custom-fail-predictor-58f5875f45-pnb8q
kserve-ci-e2e-test | deployment-controller | isvc-sklearn-s3-tls-custom-fail-predictor | ScalingReplicaSet | Scaled up replica set isvc-sklearn-s3-tls-custom-fail-predictor-58f5875f45 from 0 to 1
kserve-ci-e2e-test | multus | isvc-sklearn-s3-tls-custom-fail-predictor-58f5875f45-pnb8q | AddedInterface | Add eth0 [10.134.0.63/23] from ovn-kubernetes
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-custom-fail-predictor-58f5875f45-pnb8q | Pulled | Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:0dd2798f9cdbffc0563bd148612201df0e589ff8c26a6b19d321fd120fc5c097" already present on machine (x2)
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-custom-fail-predictor-58f5875f45-pnb8q | Created | Created container: storage-initializer (x2)
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-custom-fail-predictor-58f5875f45-pnb8q | Started | Started container storage-initializer (x2)
kserve-ci-e2e-test | v1beta1Controllers | isvc-sklearn-s3-tls-custom-fail | VirtualServiceCRDNotFound | Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true. (x11)
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-custom-fail-predictor-58f5875f45-pnb8q | Killing | Stopping container storage-initializer

kserve-ci-e2e-test | deployment-controller | isvc-sklearn-s3-tls-serving-pass-predictor | ScalingReplicaSet | Scaled up replica set isvc-sklearn-s3-tls-serving-pass-predictor-654d8c6f8b from 0 to 1
kserve-ci-e2e-test | v1beta1Controllers | isvc-sklearn-s3-tls-serving-pass | UpdateFailed | Failed to update status for InferenceService "isvc-sklearn-s3-tls-serving-pass": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-sklearn-s3-tls-serving-pass": the object has been modified; please apply your changes to the latest version and try again
kserve-ci-e2e-test | replicaset-controller | isvc-sklearn-s3-tls-serving-pass-predictor-654d8c6f8b | SuccessfulCreate | Created pod: isvc-sklearn-s3-tls-serving-pass-predictor-654d8c6f8b-677px
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-serving-pass-predictor-654d8c6f8b-677px | Started | Started container storage-initializer
kserve-ci-e2e-test | multus | isvc-sklearn-s3-tls-serving-pass-predictor-654d8c6f8b-677px | AddedInterface | Add eth0 [10.134.0.64/23] from ovn-kubernetes
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-serving-pass-predictor-654d8c6f8b-677px | Created | Created container: storage-initializer
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-serving-pass-predictor-654d8c6f8b-677px | Pulled | Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:0dd2798f9cdbffc0563bd148612201df0e589ff8c26a6b19d321fd120fc5c097" already present on machine
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-serving-pass-predictor-654d8c6f8b-677px | Started | Started container kserve-container
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-serving-pass-predictor-654d8c6f8b-677px | Pulled | Container image "quay.io/opendatahub/sklearn-serving-runtime:odh-pr-1450" already present on machine
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-serving-pass-predictor-654d8c6f8b-677px | Created | Created container: kserve-container
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-serving-pass-predictor-654d8c6f8b-677px | Pulled | Container image "quay.io/opendatahub/odh-kube-auth-proxy@sha256:dcb09fbabd8811f0956ef612a0c9ddd5236804b9bd6548a0647d2b531c9d01b3" already present on machine
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-serving-pass-predictor-654d8c6f8b-677px | Created | Created container: kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-serving-pass-predictor-654d8c6f8b-677px | Started | Started container kube-rbac-proxy
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-sklearn-s3-tls-serving-pass-predictor | FailedGetResourceMetric | failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API (x2)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-sklearn-s3-tls-serving-pass-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API (x2)
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-serving-pass-predictor-654d8c6f8b-677px | Unhealthy | Readiness probe failed: dial tcp 10.134.0.64:8080: connect: connection refused (x8)
kserve-ci-e2e-test | v1beta1Controllers | isvc-sklearn-s3-tls-serving-pass | VirtualServiceCRDNotFound | Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true. (x13)
kserve-ci-e2e-test | v1beta1Controllers | isvc-sklearn-s3-tls-serving-pass | InferenceServiceReady | InferenceService [isvc-sklearn-s3-tls-serving-pass] is Ready
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-sklearn-s3-tls-serving-pass-predictor | FailedComputeMetricsReplicas | invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready) (x3)
kserve-ci-e2e-test | horizontal-pod-autoscaler | isvc-sklearn-s3-tls-serving-pass-predictor | FailedGetResourceMetric | failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready) (x3)
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-serving-pass-predictor-654d8c6f8b-677px | Killing | Stopping container kube-rbac-proxy
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-serving-pass-predictor-654d8c6f8b-677px | Killing | Stopping container kserve-container

kserve-ci-e2e-test | v1beta1Controllers | isvc-sklearn-s3-tls-serving-fail | UpdateFailed | Failed to update status for InferenceService "isvc-sklearn-s3-tls-serving-fail": Operation cannot be fulfilled on inferenceservices.serving.kserve.io "isvc-sklearn-s3-tls-serving-fail": the object has been modified; please apply your changes to the latest version and try again
kserve-ci-e2e-test | deployment-controller | isvc-sklearn-s3-tls-serving-fail-predictor | ScalingReplicaSet | Scaled up replica set isvc-sklearn-s3-tls-serving-fail-predictor-86b6454c68 from 0 to 1
kserve-ci-e2e-test | replicaset-controller | isvc-sklearn-s3-tls-serving-fail-predictor-86b6454c68 | SuccessfulCreate | Created pod: isvc-sklearn-s3-tls-serving-fail-predictor-86b6454c68-f66l2
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-serving-fail-predictor-86b6454c68-f66l2 | FailedMount | MountVolume.SetUp failed for volume "proxy-tls" : secret "isvc-sklearn-s3-tls-serving-fail-predictor-serving-cert" not found
kserve-ci-e2e-test | multus | isvc-sklearn-s3-tls-serving-fail-predictor-86b6454c68-f66l2 | AddedInterface | Add eth0 [10.134.0.65/23] from ovn-kubernetes
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-serving-fail-predictor-86b6454c68-f66l2 | Started | Started container storage-initializer (x2)
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-serving-fail-predictor-86b6454c68-f66l2 | Created | Created container: storage-initializer (x2)
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-serving-fail-predictor-86b6454c68-f66l2 | Pulled | Container image "quay.io/opendatahub/kserve-storage-initializer@sha256:0dd2798f9cdbffc0563bd148612201df0e589ff8c26a6b19d321fd120fc5c097" already present on machine (x2)
kserve-ci-e2e-test | v1beta1Controllers | isvc-sklearn-s3-tls-serving-fail | VirtualServiceCRDNotFound | Istio VirtualService CRD not present; VirtualService reconciliation skipped. If you do not use Istio, set ingress.disableIstioVirtualHost=true. (x11)
kserve-ci-e2e-test | kubelet | isvc-sklearn-s3-tls-serving-fail-predictor-86b6454c68-f66l2 | Killing | Stopping container storage-initializer