Namespace | Component | RelatedObject | Reason | Message
openshift-multus | - | network-metrics-daemon-8hzhw | Scheduled | Successfully assigned openshift-multus/network-metrics-daemon-8hzhw to ip-10-0-141-167.ec2.internal
openshift-multus | - | multus-94cqk | Scheduled | Successfully assigned openshift-multus/multus-94cqk to ip-10-0-137-228.ec2.internal
openshift-ovn-kubernetes | - | ovnkube-node-42xf8 | Scheduled | Successfully assigned openshift-ovn-kubernetes/ovnkube-node-42xf8 to ip-10-0-137-228.ec2.internal
openshift-cluster-node-tuning-operator | - | tuned-85mbp | Scheduled | Successfully assigned openshift-cluster-node-tuning-operator/tuned-85mbp to ip-10-0-141-167.ec2.internal
openshift-monitoring | - | alertmanager-main-0 | Scheduled | Successfully assigned openshift-monitoring/alertmanager-main-0 to ip-10-0-137-228.ec2.internal
openshift-insights | - | insights-runtime-extractor-j9pck | Scheduled | Successfully assigned openshift-insights/insights-runtime-extractor-j9pck to ip-10-0-134-217.ec2.internal
openshift-insights | - | insights-runtime-extractor-g58gj | Scheduled | Successfully assigned openshift-insights/insights-runtime-extractor-g58gj to ip-10-0-141-167.ec2.internal
openshift-cluster-node-tuning-operator | - | tuned-g4dbx | Scheduled | Successfully assigned openshift-cluster-node-tuning-operator/tuned-g4dbx to ip-10-0-134-217.ec2.internal
openshift-ingress | - | router-default-589c889464-99f7x | Scheduled | Successfully assigned openshift-ingress/router-default-589c889464-99f7x to ip-10-0-141-167.ec2.internal
openshift-multus | - | multus-xsl6l | Scheduled | Successfully assigned openshift-multus/multus-xsl6l to ip-10-0-141-167.ec2.internal
openshift-ovn-kubernetes | - | ovnkube-node-4bfdn | Scheduled | Successfully assigned openshift-ovn-kubernetes/ovnkube-node-4bfdn to ip-10-0-141-167.ec2.internal
openshift-monitoring | - | cluster-monitoring-operator-75587bd455-6p57k | Scheduled | Successfully assigned openshift-monitoring/cluster-monitoring-operator-75587bd455-6p57k to ip-10-0-137-228.ec2.internal
openshift-network-operator | - | iptables-alerter-x467p | Scheduled | Successfully assigned openshift-network-operator/iptables-alerter-x467p to ip-10-0-134-217.ec2.internal
openshift-network-operator | - | iptables-alerter-pppj5 | Scheduled | Successfully assigned openshift-network-operator/iptables-alerter-pppj5 to ip-10-0-141-167.ec2.internal
openshift-multus | - | multus-additional-cni-plugins-xrffc | Scheduled | Successfully assigned openshift-multus/multus-additional-cni-plugins-xrffc to ip-10-0-134-217.ec2.internal
openshift-kube-storage-version-migrator-operator | - | kube-storage-version-migrator-operator-6769c5d45-gjmkn | Scheduled | Successfully assigned openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6769c5d45-gjmkn to ip-10-0-137-228.ec2.internal
openshift-console-operator | - | console-operator-9d4b6777b-jztj7 | Scheduled | Successfully assigned openshift-console-operator/console-operator-9d4b6777b-jztj7 to ip-10-0-137-228.ec2.internal
openshift-dns | - | dns-default-9thxk | Scheduled | Successfully assigned openshift-dns/dns-default-9thxk to ip-10-0-137-228.ec2.internal
openshift-network-operator | - | iptables-alerter-p49tx | Scheduled | Successfully assigned openshift-network-operator/iptables-alerter-p49tx to ip-10-0-137-228.ec2.internal
openshift-ovn-kubernetes | - | ovnkube-node-lt5hd | Scheduled | Successfully assigned openshift-ovn-kubernetes/ovnkube-node-lt5hd to ip-10-0-134-217.ec2.internal
openshift-console | - | downloads-6bcc868b7-7knnb | Scheduled | Successfully assigned openshift-console/downloads-6bcc868b7-7knnb to ip-10-0-137-228.ec2.internal
openshift-kube-storage-version-migrator | - | migrator-74bb7799d9-ngnjp | Scheduled | Successfully assigned openshift-kube-storage-version-migrator/migrator-74bb7799d9-ngnjp to ip-10-0-141-167.ec2.internal
openshift-jobset-operator | - | jobset-controller-manager-5d86bd95b-82mcg | Scheduled | Successfully assigned openshift-jobset-operator/jobset-controller-manager-5d86bd95b-82mcg to ip-10-0-141-167.ec2.internal
openshift-dns | - | dns-default-cmsst | Scheduled | Successfully assigned openshift-dns/dns-default-cmsst to ip-10-0-141-167.ec2.internal
openshift-jobset-operator | - | jobset-operator-747c5859c7-jjsvm | Scheduled | Successfully assigned openshift-jobset-operator/jobset-operator-747c5859c7-jjsvm to ip-10-0-141-167.ec2.internal
openshift-multus | - | multus-additional-cni-plugins-sg7kx | Scheduled | Successfully assigned openshift-multus/multus-additional-cni-plugins-sg7kx to ip-10-0-137-228.ec2.internal
openshift-network-diagnostics | - | network-check-target-w22tj | Scheduled | Successfully assigned openshift-network-diagnostics/network-check-target-w22tj to ip-10-0-141-167.ec2.internal
openshift-multus | - | multus-additional-cni-plugins-2scbl | Scheduled | Successfully assigned openshift-multus/multus-additional-cni-plugins-2scbl to ip-10-0-141-167.ec2.internal
openshift-console | - | console-5575fcffc4-cjbgc | Scheduled | Successfully assigned openshift-console/console-5575fcffc4-cjbgc to ip-10-0-137-228.ec2.internal
openshift-dns | - | dns-default-rb7d6 | Scheduled | Successfully assigned openshift-dns/dns-default-rb7d6 to ip-10-0-134-217.ec2.internal
openshift-multus | - | multus-4rqkv | Scheduled | Successfully assigned openshift-multus/multus-4rqkv to ip-10-0-134-217.ec2.internal
openshift-cluster-node-tuning-operator | - | tuned-s5jjg | Scheduled | Successfully assigned openshift-cluster-node-tuning-operator/tuned-s5jjg to ip-10-0-137-228.ec2.internal
openshift-cluster-csi-drivers | - | aws-ebs-csi-driver-node-zhm4z | Scheduled | Successfully assigned openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-zhm4z to ip-10-0-137-228.ec2.internal

default | apiserver | kube-system | TerminationStart | Received signal to terminate, becoming unready, but keeping serving
default | apiserver | kube-system | TerminationMinimalShutdownDurationFinished | The minimal shutdown duration of 15s finished
default | apiserver | kube-system | TerminationStoppedServing | Server has stopped listening
default | apiserver | kube-system | TerminationPreShutdownHooksFinished | All pre-shutdown hooks have been finished
default | apiserver | kube-system | TerminationGracefulTerminationFinished | All pending requests processed

openshift-monitoring | - | telemeter-client-5f5f55ddc7-66h44 | Scheduled | Successfully assigned openshift-monitoring/telemeter-client-5f5f55ddc7-66h44 to ip-10-0-137-228.ec2.internal
openshift-dns | - | node-resolver-4sxvb | Scheduled | Successfully assigned openshift-dns/node-resolver-4sxvb to ip-10-0-137-228.ec2.internal
openshift-dns | - | node-resolver-hqq4l | Scheduled | Successfully assigned openshift-dns/node-resolver-hqq4l to ip-10-0-134-217.ec2.internal
kube-system | - | global-pull-secret-syncer-6fwnt | Scheduled | Successfully assigned kube-system/global-pull-secret-syncer-6fwnt to ip-10-0-134-217.ec2.internal
openshift-monitoring | - | prometheus-operator-admission-webhook-57cf98b594-mdwnn | Scheduled | Successfully assigned openshift-monitoring/prometheus-operator-admission-webhook-57cf98b594-mdwnn to ip-10-0-141-167.ec2.internal
openshift-monitoring | - | prometheus-operator-5676c8c784-qmc6p | Scheduled | Successfully assigned openshift-monitoring/prometheus-operator-5676c8c784-qmc6p to ip-10-0-137-228.ec2.internal
openshift-dns | - | node-resolver-p6k5v | Scheduled | Successfully assigned openshift-dns/node-resolver-p6k5v to ip-10-0-141-167.ec2.internal
openshift-monitoring | - | prometheus-k8s-0 | Scheduled | Successfully assigned openshift-monitoring/prometheus-k8s-0 to ip-10-0-137-228.ec2.internal
openshift-monitoring | - | openshift-state-metrics-9d44df66c-l7k5k | Scheduled | Successfully assigned openshift-monitoring/openshift-state-metrics-9d44df66c-l7k5k to ip-10-0-137-228.ec2.internal
openshift-monitoring | - | node-exporter-hbqsr | Scheduled | Successfully assigned openshift-monitoring/node-exporter-hbqsr to ip-10-0-141-167.ec2.internal
openshift-monitoring | - | node-exporter-75wkc | Scheduled | Successfully assigned openshift-monitoring/node-exporter-75wkc to ip-10-0-137-228.ec2.internal
kube-system | - | global-pull-secret-syncer-76ngx | Scheduled | Successfully assigned kube-system/global-pull-secret-syncer-76ngx to ip-10-0-141-167.ec2.internal
openshift-monitoring | - | node-exporter-295ld | Scheduled | Successfully assigned openshift-monitoring/node-exporter-295ld to ip-10-0-134-217.ec2.internal
openshift-monitoring | - | monitoring-plugin-7dccd58f55-r9cgr | Scheduled | Successfully assigned openshift-monitoring/monitoring-plugin-7dccd58f55-r9cgr to ip-10-0-137-228.ec2.internal
openshift-image-registry | - | image-registry-66f5f8d5cd-rgqhw | FailedScheduling | 0/2 nodes are available: 2 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.
openshift-insights | - | insights-runtime-extractor-2l9ld | Scheduled | Successfully assigned openshift-insights/insights-runtime-extractor-2l9ld to ip-10-0-137-228.ec2.internal
openshift-cluster-csi-drivers | - | aws-ebs-csi-driver-node-kmhfn | Scheduled | Successfully assigned openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-kmhfn to ip-10-0-141-167.ec2.internal
openshift-cluster-storage-operator | - | volume-data-source-validator-7c6cbb6c87-m7bf5 | Scheduled | Successfully assigned openshift-cluster-storage-operator/volume-data-source-validator-7c6cbb6c87-m7bf5 to ip-10-0-137-228.ec2.internal
kube-system | - | global-pull-secret-syncer-t86n6 | Scheduled | Successfully assigned kube-system/global-pull-secret-syncer-t86n6 to ip-10-0-137-228.ec2.internal
openshift-ingress-canary | - | ingress-canary-4lslj | Scheduled | Successfully assigned openshift-ingress-canary/ingress-canary-4lslj to ip-10-0-134-217.ec2.internal
openshift-monitoring | - | metrics-server-576679f874-8p2ck | Scheduled | Successfully assigned openshift-monitoring/metrics-server-576679f874-8p2ck to ip-10-0-141-167.ec2.internal
openshift-cluster-samples-operator | - | cluster-samples-operator-6dc5bdb6b4-qm2z5 | Scheduled | Successfully assigned openshift-cluster-samples-operator/cluster-samples-operator-6dc5bdb6b4-qm2z5 to ip-10-0-141-167.ec2.internal
openshift-multus | - | network-metrics-daemon-9nk69 | Scheduled | Successfully assigned openshift-multus/network-metrics-daemon-9nk69 to ip-10-0-137-228.ec2.internal
openshift-monitoring | - | thanos-querier-79b5647b94-kphgj | Scheduled | Successfully assigned openshift-monitoring/thanos-querier-79b5647b94-kphgj to ip-10-0-141-167.ec2.internal
openshift-insights | - | insights-operator-585dfdc468-w75nz | Scheduled | Successfully assigned openshift-insights/insights-operator-585dfdc468-w75nz to ip-10-0-137-228.ec2.internal
openshift-image-registry | - | image-registry-66f5f8d5cd-rgqhw | Scheduled | Successfully assigned openshift-image-registry/image-registry-66f5f8d5cd-rgqhw to ip-10-0-134-217.ec2.internal
openshift-multus | - | network-metrics-daemon-b6hrq | Scheduled | Successfully assigned openshift-multus/network-metrics-daemon-b6hrq to ip-10-0-134-217.ec2.internal

openshift-monitoring | - | kube-state-metrics-69db897b98-g982r | Scheduled | Successfully assigned openshift-monitoring/kube-state-metrics-69db897b98-g982r to ip-10-0-141-167.ec2.internal
kube-system | - | konnectivity-agent-7lh45 | Scheduled | Successfully assigned kube-system/konnectivity-agent-7lh45 to ip-10-0-141-167.ec2.internal
openshift-network-diagnostics | - | network-check-target-j6s9c | Scheduled | Successfully assigned openshift-network-diagnostics/network-check-target-j6s9c to ip-10-0-134-217.ec2.internal
openshift-ingress-canary | - | ingress-canary-nlf5r | Scheduled | Successfully assigned openshift-ingress-canary/ingress-canary-nlf5r to ip-10-0-137-228.ec2.internal
openshift-cluster-csi-drivers | - | aws-ebs-csi-driver-node-gqldx | Scheduled | Successfully assigned openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-gqldx to ip-10-0-134-217.ec2.internal
openshift-network-console | - | networking-console-plugin-cb95c66f6-7htkt | Scheduled | Successfully assigned openshift-network-console/networking-console-plugin-cb95c66f6-7htkt to ip-10-0-137-228.ec2.internal
kube-system | - | konnectivity-agent-p49zs | Scheduled | Successfully assigned kube-system/konnectivity-agent-p49zs to ip-10-0-137-228.ec2.internal
openshift-image-registry | - | image-registry-6fd4d896fc-ltlnc | Scheduled | Successfully assigned openshift-image-registry/image-registry-6fd4d896fc-ltlnc to ip-10-0-137-228.ec2.internal
openshift-ingress-canary | - | ingress-canary-ffzqh | Scheduled | Successfully assigned openshift-ingress-canary/ingress-canary-ffzqh to ip-10-0-141-167.ec2.internal
openshift-console | - | console-74748b6745-5hk4w | Scheduled | Successfully assigned openshift-console/console-74748b6745-5hk4w to ip-10-0-137-228.ec2.internal
openshift-console | - | console-9bc45bfd4-mxvcv | Scheduled | Successfully assigned openshift-console/console-9bc45bfd4-mxvcv to ip-10-0-137-228.ec2.internal
kube-system | - | konnectivity-agent-qxtv4 | Scheduled | Successfully assigned kube-system/konnectivity-agent-qxtv4 to ip-10-0-134-217.ec2.internal
openshift-network-diagnostics | - | network-check-target-bvrrk | Scheduled | Successfully assigned openshift-network-diagnostics/network-check-target-bvrrk to ip-10-0-137-228.ec2.internal
openshift-service-ca-operator | - | service-ca-operator-d6fc45fc5-f2jrk | Scheduled | Successfully assigned openshift-service-ca-operator/service-ca-operator-d6fc45fc5-f2jrk to ip-10-0-141-167.ec2.internal
openshift-image-registry | - | node-ca-vrpzt | Scheduled | Successfully assigned openshift-image-registry/node-ca-vrpzt to ip-10-0-137-228.ec2.internal
openshift-image-registry | - | node-ca-tqshj | Scheduled | Successfully assigned openshift-image-registry/node-ca-tqshj to ip-10-0-141-167.ec2.internal
openshift-image-registry | - | image-registry-f6dccbfd7-k4p6p | Scheduled | Successfully assigned openshift-image-registry/image-registry-f6dccbfd7-k4p6p to ip-10-0-141-167.ec2.internal
openshift-image-registry | - | node-ca-rw9wr | Scheduled | Successfully assigned openshift-image-registry/node-ca-rw9wr to ip-10-0-134-217.ec2.internal
openshift-console | - | console-755cd4b745-k4bj5 | Scheduled | Successfully assigned openshift-console/console-755cd4b745-k4bj5 to ip-10-0-137-228.ec2.internal
openshift-console | - | console-758f9c8856-gpqgw | Scheduled | Successfully assigned openshift-console/console-758f9c8856-gpqgw to ip-10-0-137-228.ec2.internal
openshift-service-ca | - | service-ca-865cb79987-fj94h | Scheduled | Successfully assigned openshift-service-ca/service-ca-865cb79987-fj94h to ip-10-0-141-167.ec2.internal
openshift-network-diagnostics | - | network-check-source-8894fc9bd-g7h5c | Scheduled | Successfully assigned openshift-network-diagnostics/network-check-source-8894fc9bd-g7h5c to ip-10-0-137-228.ec2.internal

default | apiserver | openshift-kube-apiserver | KubeAPIReadyz | readyz=true
kube-system | default-scheduler | kube-scheduler | LeaderElection | kube-scheduler-67487494f8-hqjkr_9bbe41c8-ed56-454a-b8e8-d4af2c07d2fe became leader
kube-system | kube-controller-manager | kube-controller-manager | LeaderElection | kube-controller-manager-5d4757858f-4nfl7_3109ae67-73dc-4f90-ad39-a111131a5a8d became leader
openshift-cluster-version | openshift-cluster-version | version | LeaderElection | cluster-version-operator-ddb4b78c4-7v2rx_62442d32-9a61-4a22-9e48-728ef4405c07 became leader
openshift-cluster-version | openshift-cluster-version | version | LoadPayload | Loading payload version="4.20.19" image="quay.io/openshift-release-dev/ocp-release@sha256:67dd1e75af12ace38763eaa96c9c9911fc1cb11b9cd2c961d805b0a987b30b52"
openshift-cluster-version | openshift-cluster-version | version | RetrievePayload | Retrieving and verifying payload version="4.20.19" image="quay.io/openshift-release-dev/ocp-release@sha256:67dd1e75af12ace38763eaa96c9c9911fc1cb11b9cd2c961d805b0a987b30b52"
openshift-cluster-version | openshift-cluster-version | version | PayloadLoaded | Payload loaded version="4.20.19" image="quay.io/openshift-release-dev/ocp-release@sha256:67dd1e75af12ace38763eaa96c9c9911fc1cb11b9cd2c961d805b0a987b30b52" architecture="Multi"

openshift-cluster-storage-operator | csi-snapshot-controller-status-controller-statussyncer_csi-snapshot-controller | csi-snapshot-controller-operator | OperatorStatusChanged | Status for clusteroperator/csi-snapshot-controller changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [] to [{"" "namespaces" "" "openshift-cluster-storage-operator"} {"operator.openshift.io" "csisnapshotcontrollers" "" "cluster"}]
openshift-cluster-storage-operator | csi-snapshot-controller-csisnapshotgueststaticresourcecontroller-csisnapshotgueststaticresourcecontroller-staticresources | csi-snapshot-controller-operator | CustomResourceDefinitionCreated | Created CustomResourceDefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io because it was missing
openshift-cluster-storage-operator | csi-snapshot-controller-csisnapshotstaticresourcecontroller-csisnapshotstaticresourcecontroller-staticresources | csi-snapshot-controller-operator | ServiceAccountCreated | Created ServiceAccount/csi-snapshot-controller -n clusters-db24fc8e-a688-4988-83ac-51abadbf06a4 because it was missing
openshift-cluster-storage-operator | csi-snapshot-controller-status-controller-statussyncer_csi-snapshot-controller | csi-snapshot-controller-operator | OperatorStatusChanged | Status for clusteroperator/csi-snapshot-controller changed: Degraded changed from Unknown to False ("WebhookRemovalControllerDegraded: csisnapshotcontrollers.operator.openshift.io \"cluster\" not found")
openshift-cluster-storage-operator | csi-snapshot-controller-status-controller-statussyncer_csi-snapshot-controller | csi-snapshot-controller-operator | OperatorStatusChanged | Status for clusteroperator/csi-snapshot-controller changed: Progressing changed from Unknown to True ("CSISnapshotControllerProgressing: Waiting for Deployment to act on changes"),Available changed from Unknown to False ("CSISnapshotControllerAvailable: Waiting for Deployment")
openshift-cluster-storage-operator | csi-snapshot-controller-csisnapshotcontroller-deployment-controller--csisnapshotcontroller | csi-snapshot-controller-operator | DeploymentCreated | Created Deployment.apps/csi-snapshot-controller -n clusters-db24fc8e-a688-4988-83ac-51abadbf06a4 because it was missing
openshift-cluster-storage-operator | csi-snapshot-controller | csi-snapshot-controller-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdditionalRoutingCapabilities", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BuildCSIVolumes", "CPMSMachineNamePrefix", "ConsolePluginContentSecurityPolicy", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GatewayAPI", "GatewayAPIController", "HighlyAvailableArbiter", "ImageVolume", "IngressControllerLBSubnetsAWS", "KMSv1", "MachineConfigNodes", "ManagedBootImages", "ManagedBootImagesAWS", "MetricsCollectionProfiles", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "PinnedImages", "PreconfiguredUDNAddresses", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SetEIPForNLBIngressController", "SigstoreImageVerification", "StoragePerformantSecurityPolicy", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiDisk", "VSphereMultiNetworks"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AWSClusterHostedDNSInstall", "AWSDedicatedHosts", "AWSServiceLBNetworkSecurityGroup", "AutomatedEtcdBackup", "AzureClusterHostedDNSInstall", "AzureDedicatedHosts", "AzureMultiDisk", "BootImageSkewEnforcement", "BootcNodeManagement", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ClusterVersionOperatorConfiguration", "DNSNameResolver", "DualReplica", "DyanmicServiceEndpointIBMCloud", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "Example2", "ExternalSnapshotMetadata", "GCPClusterHostedDNS", "GCPClusterHostedDNSInstall", "GCPCustomAPIEndpoints", "GCPCustomAPIEndpointsInstall", "ImageModeStatusReporting", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "IrreconcilableMachineConfig", "KMSEncryptionProvider", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "ManagedBootImagesAzure", "ManagedBootImagesvSphere", "MaxUnavailableStatefulSet", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "MultiDiskSetup", "MutatingAdmissionPolicy", "NewOLM", "NewOLMCatalogdAPIV1Metas", "NewOLMOwnSingleNamespace", "NewOLMPreflightPermissionChecks", "NewOLMWebhookProviderOpenshiftServiceCA", "NoRegistryClusterOperations", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "SELinuxMount", "ShortCertRotation", "SignatureStores", "SigstoreImageVerificationPKI", "TranslateStreamCloseWebsocketRequests", "VSphereConfigurableMaxAllowedBlockVolumesPerNode", "VSphereHostVMGroupZonal", "VSphereMixedNodeEnv", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-cluster-storage-operator | csi-snapshot-controller-csisnapshotgueststaticresourcecontroller-csisnapshotgueststaticresourcecontroller-staticresources | csi-snapshot-controller-operator | CustomResourceDefinitionCreated | Created CustomResourceDefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io because it was missing
openshift-cluster-storage-operator | csi-snapshot-controller-status-controller-statussyncer_csi-snapshot-controller | csi-snapshot-controller-operator | OperatorStatusChanged | Status for clusteroperator/csi-snapshot-controller changed: Upgradeable changed from Unknown to True ("All is well")
openshift-cluster-storage-operator | csi-snapshot-controller-csisnapshotgueststaticresourcecontroller-csisnapshotgueststaticresourcecontroller-staticresources | csi-snapshot-controller-operator | CustomResourceDefinitionCreated | Created CustomResourceDefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io because it was missing
openshift-cluster-storage-operator | csi-snapshot-controller-csisnapshotstaticresourcecontroller-csisnapshotstaticresourcecontroller-staticresources | csi-snapshot-controller-operator | PodDisruptionBudgetCreated | Created PodDisruptionBudget.policy/csi-snapshot-controller-pdb -n clusters-db24fc8e-a688-4988-83ac-51abadbf06a4 because it was missing
openshift-cluster-storage-operator | csi-snapshot-controller-status-controller-statussyncer_csi-snapshot-controller | csi-snapshot-controller-operator | OperatorStatusChanged | Status for clusteroperator/csi-snapshot-controller changed: Degraded message changed from "WebhookRemovalControllerDegraded: csisnapshotcontrollers.operator.openshift.io \"cluster\" not found" to "All is well"
openshift-cluster-storage-operator | storage | aws-cloud-controller-manager | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdditionalRoutingCapabilities", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BuildCSIVolumes", "CPMSMachineNamePrefix", "ConsolePluginContentSecurityPolicy", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GatewayAPI", "GatewayAPIController", "HighlyAvailableArbiter", "ImageVolume", "IngressControllerLBSubnetsAWS", "KMSv1", "MachineConfigNodes", "ManagedBootImages", "ManagedBootImagesAWS", "MetricsCollectionProfiles", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "PinnedImages", "PreconfiguredUDNAddresses", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SetEIPForNLBIngressController", "SigstoreImageVerification", "StoragePerformantSecurityPolicy", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiDisk", "VSphereMultiNetworks"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AWSClusterHostedDNSInstall", "AWSDedicatedHosts", "AWSServiceLBNetworkSecurityGroup", "AutomatedEtcdBackup", "AzureClusterHostedDNSInstall", "AzureDedicatedHosts", "AzureMultiDisk", "BootImageSkewEnforcement", "BootcNodeManagement", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ClusterVersionOperatorConfiguration", "DNSNameResolver", "DualReplica", "DyanmicServiceEndpointIBMCloud", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "Example2", "ExternalSnapshotMetadata", "GCPClusterHostedDNS", "GCPClusterHostedDNSInstall", "GCPCustomAPIEndpoints", "GCPCustomAPIEndpointsInstall", "ImageModeStatusReporting", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "IrreconcilableMachineConfig", "KMSEncryptionProvider", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "ManagedBootImagesAzure", "ManagedBootImagesvSphere", "MaxUnavailableStatefulSet", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "MultiDiskSetup", "MutatingAdmissionPolicy", "NewOLM", "NewOLMCatalogdAPIV1Metas", "NewOLMOwnSingleNamespace", "NewOLMPreflightPermissionChecks", "NewOLMWebhookProviderOpenshiftServiceCA", "NoRegistryClusterOperations", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "SELinuxMount", "ShortCertRotation", "SignatureStores", "SigstoreImageVerificationPKI", "TranslateStreamCloseWebsocketRequests", "VSphereConfigurableMaxAllowedBlockVolumesPerNode", "VSphereHostVMGroupZonal", "VSphereMixedNodeEnv", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-cluster-storage-operator | storage-VolumeDataSourceValidatorStarter-volumedatasourcevalidatorstaticcontroller-volumedatasourcevalidatorstaticcontroller-staticresources | aws-cloud-controller-manager | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/volume-data-source-validator because it was missing
openshift-cluster-storage-operator | storage-status-controller-statussyncer_storage | aws-cloud-controller-manager | OperatorStatusChanged | Status for clusteroperator/storage changed: Degraded changed from Unknown to False ("All is well")
openshift-cluster-storage-operator | storage-VolumeDataSourceValidatorStarter-volumedatasourcevalidatorstaticcontroller-volumedatasourcevalidatorstaticcontroller-staticresources | aws-cloud-controller-manager | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/volume-data-source-validator because it was missing
openshift-cluster-storage-operator | storage-status-controller-statussyncer_storage | aws-cloud-controller-manager | OperatorStatusChanged | Status for clusteroperator/storage changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [] to [{"" "namespaces" "" "openshift-cluster-storage-operator"} {"" "namespaces" "" "openshift-cluster-csi-drivers"} {"operator.openshift.io" "storages" "" "cluster"} {"rbac.authorization.k8s.io" "clusterrolebindings" "" "cluster-storage-operator-role"}],status.versions changed from [] to [{"operator" "4.20.19"}]
openshift-cluster-storage-operator | storage-status-controller-statussyncer_storage | aws-cloud-controller-manager | OperatorStatusChanged | Status for clusteroperator/storage changed: Progressing changed from Unknown to True ("DefaultStorageClassControllerProgressing: infrastructure.config.openshift.io \"cluster\" not found"),Available changed from Unknown to False ("DefaultStorageClassControllerAvailable: infrastructure.config.openshift.io \"cluster\" not found")
openshift-cluster-storage-operator | csi-snapshot-controller-status-controller-statussyncer_csi-snapshot-controller | csi-snapshot-controller-operator | OperatorStatusChanged | Status for clusteroperator/csi-snapshot-controller changed: Progressing message changed from "CSISnapshotControllerProgressing: Waiting for Deployment to act on changes" to "CSISnapshotControllerProgressing: Waiting for Deployment to deploy pods"
openshift-cluster-storage-operator | storage-status-controller-statussyncer_storage | aws-cloud-controller-manager | OperatorStatusChanged | Status for clusteroperator/storage changed: Degraded message changed from "All is well" to "CSIDriverStarterDegraded: infrastructure.config.openshift.io \"cluster\" not found"
openshift-cluster-storage-operator | storage-VolumeDataSourceValidatorStarter-volumedatasourcevalidatorstaticcontroller-volumedatasourcevalidatorstaticcontroller-staticresources | aws-cloud-controller-manager | ServiceAccountCreated | Created ServiceAccount/volume-data-source-validator -n openshift-cluster-storage-operator because it was missing
openshift-cluster-storage-operator | storage-status-controller-statussyncer_storage | aws-cloud-controller-manager | OperatorVersionChanged | clusteroperator/storage version "operator" changed from "" to "4.20.19"
openshift-cluster-storage-operator | storage-VolumeDataSourceValidatorStarter-volumedatasourcevalidatorstaticcontroller-volumedatasourcevalidatorstaticcontroller-staticresources | aws-cloud-controller-manager | CustomResourceDefinitionCreated | Created CustomResourceDefinition.apiextensions.k8s.io/volumepopulators.populator.storage.k8s.io because it was missing

openshift-cluster-storage-operator | storage-status-controller-statussyncer_storage | aws-cloud-controller-manager | OperatorStatusChanged | Status for clusteroperator/storage changed: Degraded message changed from "CSIDriverStarterDegraded: infrastructure.config.openshift.io \"cluster\" not found" to "CSIDriverStarterDegraded: infrastructure.config.openshift.io \"cluster\" not found\nDefaultStorageClassControllerDegraded: infrastructure.config.openshift.io \"cluster\" not found" (x2)
openshift-cluster-storage-operator | csi-snapshot-controller-status-controller-statussyncer_csi-snapshot-controller | csi-snapshot-controller-operator | OperatorVersionChanged | clusteroperator/csi-snapshot-controller version "csi-snapshot-controller" changed from "" to "4.20.19"
openshift-cluster-storage-operator | csi-snapshot-controller-status-controller-statussyncer_csi-snapshot-controller | csi-snapshot-controller-operator | OperatorStatusChanged | Status for clusteroperator/csi-snapshot-controller changed: Progressing changed from True to False ("CSISnapshotControllerProgressing: Deployment is not progressing"),Available changed from False to True ("CSISnapshotControllerAvailable: Deployment is available"),status.versions changed from [] to [{"operator" "4.20.19"} {"csi-snapshot-controller" "4.20.19"}] (x2)
openshift-cluster-storage-operator | csi-snapshot-controller-status-controller-statussyncer_csi-snapshot-controller | csi-snapshot-controller-operator | OperatorVersionChanged | clusteroperator/csi-snapshot-controller version "operator" changed from "" to "4.20.19"
openshift-cluster-storage-operator | snapshot-controller-leader/csi-snapshot-controller-657985985d-5s5bj | snapshot-controller-leader | LeaderElection | csi-snapshot-controller-657985985d-5s5bj became leader
openshift-dns-operator | cluster-dns-operator | dns-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdditionalRoutingCapabilities", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BuildCSIVolumes", "CPMSMachineNamePrefix", "ConsolePluginContentSecurityPolicy", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GatewayAPI", "GatewayAPIController", "HighlyAvailableArbiter", "ImageVolume", "IngressControllerLBSubnetsAWS", "KMSv1", "MachineConfigNodes", "ManagedBootImages", "ManagedBootImagesAWS", "MetricsCollectionProfiles", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "PinnedImages", "PreconfiguredUDNAddresses", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SetEIPForNLBIngressController", "SigstoreImageVerification", "StoragePerformantSecurityPolicy", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiDisk", "VSphereMultiNetworks"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AWSClusterHostedDNSInstall", "AWSDedicatedHosts", "AWSServiceLBNetworkSecurityGroup", "AutomatedEtcdBackup", "AzureClusterHostedDNSInstall", "AzureDedicatedHosts", "AzureMultiDisk", "BootImageSkewEnforcement", "BootcNodeManagement", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ClusterVersionOperatorConfiguration", "DNSNameResolver", "DualReplica", "DyanmicServiceEndpointIBMCloud", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "Example2", "ExternalSnapshotMetadata", "GCPClusterHostedDNS", "GCPClusterHostedDNSInstall", "GCPCustomAPIEndpoints", "GCPCustomAPIEndpointsInstall", "ImageModeStatusReporting", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "IrreconcilableMachineConfig", "KMSEncryptionProvider", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "ManagedBootImagesAzure", "ManagedBootImagesvSphere", "MaxUnavailableStatefulSet", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "MultiDiskSetup", "MutatingAdmissionPolicy", "NewOLM", "NewOLMCatalogdAPIV1Metas", "NewOLMOwnSingleNamespace", "NewOLMPreflightPermissionChecks", "NewOLMWebhookProviderOpenshiftServiceCA", "NoRegistryClusterOperations", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "SELinuxMount", "ShortCertRotation", "SignatureStores", "SigstoreImageVerificationPKI", "TranslateStreamCloseWebsocketRequests", "VSphereConfigurableMaxAllowedBlockVolumesPerNode", "VSphereHostVMGroupZonal", "VSphereMixedNodeEnv", "VolumeAttributesClass", "VolumeGroupSnapshot"}} (x2)

openshift-operator-lifecycle-manager

controllermanager

packageserver-pdb

NoPods

No matching pods found

openshift-cluster-storage-operator

storage-status-controller-statussyncer_storage

aws-cloud-controller-manager

OperatorStatusChanged

Status for clusteroperator/storage changed: Upgradeable changed from Unknown to True ("All is well")

openshift-cluster-storage-operator

storage-status-controller-statussyncer_storage

aws-cloud-controller-manager

OperatorStatusChanged

Status for clusteroperator/storage changed: Degraded message changed from "DefaultStorageClassControllerDegraded: infrastructure.config.openshift.io \"cluster\" not found" to "All is well"

openshift-cluster-storage-operator

storage-status-controller-statussyncer_storage

aws-cloud-controller-manager

OperatorStatusChanged

Status for clusteroperator/storage changed: Progressing changed from True to False ("All is well"),Available changed from False to True ("DefaultStorageClassControllerAvailable: No default StorageClass for this platform")

openshift-cluster-storage-operator

storage-status-controller-statussyncer_storage

aws-cloud-controller-manager

OperatorStatusChanged

Status for clusteroperator/storage changed: Degraded message changed from "CSIDriverStarterDegraded: infrastructure.config.openshift.io \"cluster\" not found\nDefaultStorageClassControllerDegraded: infrastructure.config.openshift.io \"cluster\" not found" to "DefaultStorageClassControllerDegraded: infrastructure.config.openshift.io \"cluster\" not found"

openshift-cluster-storage-operator

storage-VolumeDataSourceValidatorStarter-volumedatasourcevalidatordeploymentcontroller-deployment-controller--volumedatasourcevalidatordeploymentcontroller

aws-cloud-controller-manager

DeploymentCreated

Created Deployment.apps/volume-data-source-validator -n openshift-cluster-storage-operator because it was missing
(x14)

openshift-cluster-storage-operator

storage-config-observer-controller--config-observer-configobserver

aws-cloud-controller-manager

ObserveProxyConfig

proxy.config.openshift.io/cluster not found

openshift-cluster-storage-operator

storage-CSIDriverStarter-awsebscsidriveroperatormgmtstaticcontroller-awsebscsidriveroperatormgmtstaticcontroller-staticresources

aws-cloud-controller-manager

ServiceAccountCreated

Created ServiceAccount/aws-ebs-csi-driver-operator -n clusters-db24fc8e-a688-4988-83ac-51abadbf06a4 because it was missing

openshift-cluster-storage-operator

storage-CSIDriverStarter-awsebscsidriveroperatormgmtstaticcontroller-awsebscsidriveroperatormgmtstaticcontroller-staticresources

aws-cloud-controller-manager

RoleCreated

Created Role.rbac.authorization.k8s.io/aws-ebs-csi-driver-operator-role -n clusters-db24fc8e-a688-4988-83ac-51abadbf06a4 because it was missing

openshift-cluster-storage-operator

storage-CSIDriverStarter-AWSEBS

aws-cloud-controller-manager

DeploymentCreated

Created Deployment.apps/aws-ebs-csi-driver-operator -n clusters-db24fc8e-a688-4988-83ac-51abadbf06a4 because it was missing

openshift-cluster-storage-operator

storage-CSIDriverStarter-awsebscsidriveroperatorstaticcontroller-awsebscsidriveroperatorstaticcontroller-staticresources

aws-cloud-controller-manager

ServiceAccountCreated

Created ServiceAccount/aws-ebs-csi-driver-operator -n openshift-cluster-csi-drivers because it was missing

openshift-cluster-storage-operator

storage-CSIDriverStarter-AWSEBS

aws-cloud-controller-manager

ClusterCSIDriverCreated

Created ClusterCSIDriver.operator.openshift.io/ebs.csi.aws.com -n openshift-cluster-csi-drivers because it was missing

openshift-cluster-node-tuning-operator

cluster-node-tuning-operator-696666cd89-2tl7q_75691b68-12e2-4ac1-8c68-719a007e20b4

node-tuning-operator-lock

LeaderElection

cluster-node-tuning-operator-696666cd89-2tl7q_75691b68-12e2-4ac1-8c68-719a007e20b4 became leader

openshift-cluster-storage-operator

storage-CSIDriverStarter-awsebscsidriveroperatormgmtstaticcontroller-awsebscsidriveroperatormgmtstaticcontroller-staticresources

aws-cloud-controller-manager

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/aws-ebs-csi-driver-operator-rolebinding -n clusters-db24fc8e-a688-4988-83ac-51abadbf06a4 because it was missing

openshift-cluster-storage-operator

storage-CSIDriverStarter-awsebscsidriveroperatorstaticcontroller-awsebscsidriveroperatorstaticcontroller-staticresources

aws-cloud-controller-manager

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/aws-ebs-csi-driver-operator-rolebinding -n openshift-cluster-csi-drivers because it was missing

openshift-cluster-storage-operator

storage-CSIDriverStarter-awsebscsidriveroperatorstaticcontroller-awsebscsidriveroperatorstaticcontroller-staticresources

aws-cloud-controller-manager

RoleCreated

Created Role.rbac.authorization.k8s.io/aws-ebs-csi-driver-operator-role -n openshift-cluster-csi-drivers because it was missing

openshift-image-registry

image-registry-operator

openshift-image-registry

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{…} (full Enabled/Disabled feature-gate list identical to the FeatureGatesInitialized event above)

openshift-cluster-storage-operator

storage-CSIDriverStarter-awsebscsidriveroperatorstaticcontroller-awsebscsidriveroperatorstaticcontroller-staticresources

aws-cloud-controller-manager

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/aws-ebs-csi-driver-operator-clusterrole because it was missing

openshift-image-registry

image-registry-operator

openshift-master-controllers

LeaderElection

cluster-image-registry-operator-c7b96d85-m88c5_3a23b3e9-a10f-4397-9659-a0b30d237f6a became leader

openshift-cluster-storage-operator

storage-CSIDriverStarter-awsebscsidriveroperatorstaticcontroller-awsebscsidriveroperatorstaticcontroller-staticresources

aws-cloud-controller-manager

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/aws-ebs-csi-driver-operator-clusterrolebinding because it was missing

openshift-cluster-csi-drivers

aws-ebs-csi-driver-operator

aws-ebs-csi-driver-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{…} (full Enabled/Disabled feature-gate list identical to the FeatureGatesInitialized event above)

openshift-cluster-csi-drivers

aws-ebs-csi-driver-operator-awsebsdrivercontrolplanestaticresourcescontroller-awsebsdrivercontrolplanestaticresourcescontroller-staticresources

aws-ebs-csi-driver-operator

ServiceCreated

Created Service/aws-ebs-csi-driver-controller-metrics -n clusters-db24fc8e-a688-4988-83ac-51abadbf06a4 because it was missing

openshift-cluster-csi-drivers

aws-ebs-csi-driver-operator-awsebsdrivergueststaticresourcescontroller-awsebsdrivergueststaticresourcescontroller-staticresources

aws-ebs-csi-driver-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/ebs-csi-main-provisioner-binding because it was missing

openshift-cluster-csi-drivers

aws-ebs-csi-driver-operator-csi-config-observer-controller-awsebsdrivercsiconfigobservercontroller-config-observer-configobserver

aws-ebs-csi-driver-operator

ObserveTLSSecurityProfile

cipherSuites changed to ["TLS_AES_128_GCM_SHA256" "TLS_AES_256_GCM_SHA384" "TLS_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"]

openshift-cluster-csi-drivers

aws-ebs-csi-driver-operator-csi-config-observer-controller-awsebsdrivercsiconfigobservercontroller-config-observer-configobserver

aws-ebs-csi-driver-operator

ObservedConfigChanged

Writing updated observed config: map[string]any{
+ "targetcsiconfig": map[string]any{
+   "servingInfo": map[string]any{
+     "cipherSuites": []any{
+       string("TLS_AES_128_GCM_SHA256"), string("TLS_AES_256_GCM_SHA384"),
+       string("TLS_CHACHA20_POLY1305_SHA256"),
+       string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM"...), ...,
+     },
+     "minTLSVersion": string("VersionTLS12"),
+   },
+ },
}

openshift-cluster-csi-drivers

aws-ebs-csi-driver-operator

aws-ebs-csi-driver-operator

StorageClassCreated

Created StorageClass.storage.k8s.io/gp2-csi because it was missing

openshift-cluster-csi-drivers

aws-ebs-csi-driver-operator-awsebsdrivergueststaticresourcescontroller-awsebsdrivergueststaticresourcescontroller-staticresources

aws-ebs-csi-driver-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/aws-ebs-csi-driver-lease-leader-election -n openshift-cluster-csi-drivers because it was missing

openshift-cluster-csi-drivers

aws-ebs-csi-driver-operator-awsebsdrivercontrolplanestaticresourcescontroller-awsebsdrivercontrolplanestaticresourcescontroller-staticresources

aws-ebs-csi-driver-operator

PodDisruptionBudgetCreated

Created PodDisruptionBudget.policy/aws-ebs-csi-driver-controller-pdb -n clusters-db24fc8e-a688-4988-83ac-51abadbf06a4 because it was missing

openshift-cluster-csi-drivers

aws-ebs-csi-driver-operator

aws-ebs-csi-driver-operator

StorageClassCreated

Created StorageClass.storage.k8s.io/gp3-csi because it was missing

openshift-cluster-csi-drivers

aws-ebs-csi-driver-operator-awsebsdrivergueststaticresourcescontroller-awsebsdrivergueststaticresourcescontroller-staticresources

aws-ebs-csi-driver-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/aws-ebs-csi-driver-prometheus -n openshift-cluster-csi-drivers because it was missing

openshift-cluster-csi-drivers

aws-ebs-csi-driver-operator-awsebsdrivercontrolplanestaticresourcescontroller-awsebsdrivercontrolplanestaticresourcescontroller-staticresources

aws-ebs-csi-driver-operator

ServiceAccountCreated

Created ServiceAccount/aws-ebs-csi-driver-controller-sa -n clusters-db24fc8e-a688-4988-83ac-51abadbf06a4 because it was missing

openshift-monitoring

deployment-controller

cluster-monitoring-operator

ScalingReplicaSet

Scaled up replica set cluster-monitoring-operator-75587bd455 from 0 to 1

openshift-cluster-csi-drivers

aws-ebs-csi-driver-operator-csi-driver-node-service_awsebsdrivernodeservicecontroller-awsebsdrivernodeservicecontroller

aws-ebs-csi-driver-operator

DaemonSetCreated

Created DaemonSet.apps/aws-ebs-csi-driver-node -n openshift-cluster-csi-drivers because it was missing

openshift-cluster-csi-drivers

aws-ebs-csi-driver-operator-awsebsdrivergueststaticresourcescontroller-awsebsdrivergueststaticresourcescontroller-staticresources

aws-ebs-csi-driver-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/ebs-kube-rbac-proxy-role because it was missing

openshift-cluster-csi-drivers

aws-ebs-csi-driver-operator-awsebsdrivercontrollerservicecontroller-deployment-controller--awsebsdrivercontrollerservicecontroller

aws-ebs-csi-driver-operator

DeploymentCreated

Created Deployment.apps/aws-ebs-csi-driver-controller -n clusters-db24fc8e-a688-4988-83ac-51abadbf06a4 because it was missing

openshift-cluster-csi-drivers

aws-ebs-csi-driver-operator-awsebsdrivercontrollerservicecontroller-deployment-controller--awsebsdrivercontrollerservicecontroller

aws-ebs-csi-driver-operator

DeploymentUpdated

Updated Deployment.apps/aws-ebs-csi-driver-controller -n clusters-db24fc8e-a688-4988-83ac-51abadbf06a4 because it changed

openshift-cluster-csi-drivers

aws-ebs-csi-driver-operator-csi-config-observer-controller-awsebsdrivercsiconfigobservercontroller-config-observer-configobserver

aws-ebs-csi-driver-operator

ObserveTLSSecurityProfile

minTLSVersion changed to VersionTLS12

openshift-cluster-csi-drivers

aws-ebs-csi-driver-operator-awsebsdrivercontrolplanestaticresourcescontroller-awsebsdrivercontrolplanestaticresourcescontroller-staticresources

aws-ebs-csi-driver-operator

ConfigMapCreated

Created ConfigMap/aws-ebs-csi-driver-trusted-ca-bundle -n clusters-db24fc8e-a688-4988-83ac-51abadbf06a4 because it was missing

openshift-cluster-csi-drivers

aws-ebs-csi-driver-operator-awsebsdrivergueststaticresourcescontroller-awsebsdrivergueststaticresourcescontroller-staticresources

aws-ebs-csi-driver-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/ebs-privileged-role because it was missing

openshift-ingress-operator

cluster-ingress-operator

ingress-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{…} (full Enabled/Disabled feature-gate list identical to the FeatureGatesInitialized event above)

openshift-cluster-csi-drivers

aws-ebs-csi-driver-operator-awsebsdrivergueststaticresourcescontroller-awsebsdrivergueststaticresourcescontroller-staticresources

aws-ebs-csi-driver-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/aws-ebs-csi-driver-lease-leader-election -n openshift-cluster-csi-drivers because it was missing

openshift-cluster-csi-drivers

aws-ebs-csi-driver-operator-awsebsdrivergueststaticresourcescontroller-awsebsdrivergueststaticresourcescontroller-staticresources

aws-ebs-csi-driver-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/aws-ebs-csi-driver-prometheus -n openshift-cluster-csi-drivers because it was missing

openshift-cluster-csi-drivers

aws-ebs-csi-driver-operator-awsebsdrivergueststaticresourcescontroller-awsebsdrivergueststaticresourcescontroller-staticresources

aws-ebs-csi-driver-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/ebs-csi-main-snapshotter-binding because it was missing

openshift-cluster-csi-drivers

aws-ebs-csi-driver-operator-awsebsdrivergueststaticresourcescontroller-awsebsdrivergueststaticresourcescontroller-staticresources

aws-ebs-csi-driver-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/ebs-csi-main-attacher-binding because it was missing

openshift-cluster-csi-drivers

aws-ebs-csi-driver-operator-awsebsdrivergueststaticresourcescontroller-awsebsdrivergueststaticresourcescontroller-staticresources

aws-ebs-csi-driver-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/ebs-csi-storageclass-reader-resizer-binding because it was missing

openshift-config-managed

certificate_publisher_controller

default-ingress-cert

PublishedRouterCA

Published "default-ingress-cert" in "openshift-config-managed"

openshift-image-registry

image-registry-operator

openshift-image-registry

DaemonSetCreated

Created DaemonSet.apps/node-ca -n openshift-image-registry because it was missing

openshift-cluster-csi-drivers

aws-ebs-csi-driver-operator-awsebsdrivergueststaticresourcescontroller-awsebsdrivergueststaticresourcescontroller-staticresources

aws-ebs-csi-driver-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/ebs-csi-volumesnapshot-reader-provisioner-binding because it was missing

openshift-ingress-operator

ingress_controller

default

Admitted

ingresscontroller passed validation

openshift-cluster-csi-drivers

aws-ebs-csi-driver-operator-awsebsdrivergueststaticresourcescontroller-awsebsdrivergueststaticresourcescontroller-staticresources

aws-ebs-csi-driver-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/ebs-csi-main-resizer-binding because it was missing

openshift-cluster-csi-drivers

aws-ebs-csi-driver-operator-awsebsdrivergueststaticresourcescontroller-awsebsdrivergueststaticresourcescontroller-staticresources

aws-ebs-csi-driver-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/ebs-kube-rbac-proxy-binding because it was missing

openshift-config-managed

certificate_publisher_controller

router-certs

PublishedRouterCertificates

Published router certificates

openshift-ingress

deployment-controller

router-default

ScalingReplicaSet

Scaled up replica set router-default-589c889464 from 0 to 1

openshift-ingress-operator

certificate_controller

router-ca

CreatedWildcardCACert

Created a default wildcard CA certificate

openshift-cluster-csi-drivers

aws-ebs-csi-driver-operator-awsebsdrivergueststaticresourcescontroller-awsebsdrivergueststaticresourcescontroller-staticresources

aws-ebs-csi-driver-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/ebs-node-privileged-binding because it was missing

openshift-cluster-csi-drivers

aws-ebs-csi-driver-operator-awsebsdrivergueststaticresourcescontroller-awsebsdrivergueststaticresourcescontroller-staticresources

aws-ebs-csi-driver-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/ebs-csi-volumeattributesclass-reader-resizer-binding because it was missing

openshift-cluster-csi-drivers

aws-ebs-csi-driver-operator-awsebsdrivergueststaticresourcescontroller-awsebsdrivergueststaticresourcescontroller-staticresources

aws-ebs-csi-driver-operator

CSIDriverCreated

Created CSIDriver.storage.k8s.io/ebs.csi.aws.com because it was missing

openshift-insights

deployment-controller

insights-operator

ScalingReplicaSet

Scaled up replica set insights-operator-585dfdc468 from 0 to 1

openshift-cluster-csi-drivers

aws-ebs-csi-driver-operator-awsebsdrivergueststaticresourcescontroller-awsebsdrivergueststaticresourcescontroller-staticresources

aws-ebs-csi-driver-operator

ServiceAccountCreated

Created ServiceAccount/aws-ebs-csi-driver-node-sa -n openshift-cluster-csi-drivers because it was missing

openshift-cluster-csi-drivers

aws-ebs-csi-driver-operator-awsebsdrivergueststaticresourcescontroller-awsebsdrivergueststaticresourcescontroller-staticresources

aws-ebs-csi-driver-operator

ClusterRoleBindingCreated

(combined from similar events): Created ClusterRoleBinding.rbac.authorization.k8s.io/ebs-csi-volumeattributesclass-reader-provisioner-binding because it was missing

openshift-network-operator

network-operator

network-operator-lock

LeaderElection

cluster-network-operator-85c96df97-7gtx6_572e657f-b1bd-4946-98ce-3426961e6ffe became leader

openshift-cluster-storage-operator

deployment-controller

volume-data-source-validator

ReplicaSetCreateError

Failed to create new replica set "volume-data-source-validator-7c6cbb6c87": Internal error occurred: admission plugin "PodSecurity" failed to complete validation in 13s

openshift-network-operator

network-operator

openshift-network-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{…} (full Enabled/Disabled feature-gate list identical to the FeatureGatesInitialized event above)

openshift-cluster-csi-drivers

ebs.csi.aws.com/1776870271999-2456-ebs.csi.aws.com

ebs-csi-aws-com

LeaderElection

1776870271999-2456-ebs-csi-aws-com became leader

openshift-controller-manager

openshift-controller-manager

openshift-master-controllers

LeaderElection

openshift-controller-manager-67ffccbd7b-nwtmc_b8b00da6-cc0c-487f-a824-199b66c7022a became leader

openshift-cluster-csi-drivers

external-attacher-leader-ebs.csi.aws.com/aws-ebs-csi-driver-controller-76cff676d6-wk4wj

external-attacher-leader-ebs-csi-aws-com

LeaderElection

aws-ebs-csi-driver-controller-76cff676d6-wk4wj became leader

openshift-cluster-csi-drivers | external-snapshotter-leader-ebs.csi.aws.com/aws-ebs-csi-driver-controller-76cff676d6-wk4wj | external-snapshotter-leader-ebs-csi-aws-com | LeaderElection | aws-ebs-csi-driver-controller-76cff676d6-wk4wj became leader
openshift-cluster-csi-drivers | external-resizer-ebs-csi-aws-com/aws-ebs-csi-driver-controller-76cff676d6-wk4wj | external-resizer-ebs-csi-aws-com | LeaderElection | aws-ebs-csi-driver-controller-76cff676d6-wk4wj became leader
openshift-kube-storage-version-migrator-operator | deployment-controller | kube-storage-version-migrator-operator | ReplicaSetCreateError | Failed to create new replica set "kube-storage-version-migrator-operator-6769c5d45": Internal error occurred: admission plugin "PodSecurity" failed to complete validation in 13s
openshift-service-ca-operator | deployment-controller | service-ca-operator | ReplicaSetCreateError | Failed to create new replica set "service-ca-operator-d6fc45fc5": Internal error occurred: admission plugin "PodSecurity" failed to complete validation in 13s
openshift-ingress | service-controller | router-default | EnsuringLoadBalancer | Ensuring load balancer
openshift-ingress | service-controller | router-default | UnAvailableLoadBalancer | There are no available nodes for LoadBalancer
openshift-kube-controller-manager | cluster-policy-controller | cluster-policy-controller-lock | LeaderElection | cluster-policy-controller-867bc84fdd-rzvwc_7e3acd90-a92d-4a2d-b637-813cb8cd608e became leader
openshift-cluster-storage-operator | deployment-controller | volume-data-source-validator | ScalingReplicaSet | Scaled up replica set volume-data-source-validator-7c6cbb6c87 from 0 to 1
openshift-ingress | service-controller | router-default | EnsuredLoadBalancer | Ensured load balancer
openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | - | CreatedSCCRanges | created SCC ranges for openshift namespace
openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | - | CreatedSCCRanges | created SCC ranges for kube-system namespace
openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | - | CreatedSCCRanges | created SCC ranges for openshift-authentication namespace
openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | - | CreatedSCCRanges | created SCC ranges for openshift-apiserver-operator namespace
openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | - | CreatedSCCRanges | created SCC ranges for openshift-authentication-operator namespace
openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | - | CreatedSCCRanges | created SCC ranges for default namespace
openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | - | CreatedSCCRanges | created SCC ranges for openshift-cloud-controller-manager namespace
openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | - | CreatedSCCRanges | created SCC ranges for kube-node-lease namespace
openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | - | CreatedSCCRanges | created SCC ranges for kube-public namespace
openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | - | CreatedSCCRanges | created SCC ranges for openshift-apiserver namespace
openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | - | CreatedSCCRanges | created SCC ranges for openshift-cluster-csi-drivers namespace
openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | - | CreatedSCCRanges | created SCC ranges for openshift-cloud-network-config-controller namespace
openshift-monitoring | replicaset-controller | cluster-monitoring-operator-75587bd455 | FailedCreate | Error creating: pods "cluster-monitoring-operator-75587bd455-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x13)
openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | - | CreatedSCCRanges | created SCC ranges for openshift-cluster-node-tuning-operator namespace
openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | - | CreatedSCCRanges | created SCC ranges for openshift-cluster-machine-approver namespace
openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | - | CreatedSCCRanges | created SCC ranges for openshift-cloud-credential-operator namespace
openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | - | CreatedSCCRanges | created SCC ranges for openshift-config-managed namespace
openshift-cluster-samples-operator | deployment-controller | cluster-samples-operator | ScalingReplicaSet | Scaled up replica set cluster-samples-operator-6dc5bdb6b4 from 0 to 1
openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | - | CreatedSCCRanges | created SCC ranges for openshift-cluster-samples-operator namespace
openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | - | CreatedSCCRanges | created SCC ranges for openshift-cluster-version namespace
openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | - | CreatedSCCRanges | created SCC ranges for openshift-config namespace
openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | - | CreatedSCCRanges | created SCC ranges for openshift-cluster-storage-operator namespace
openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | - | CreatedSCCRanges | created SCC ranges for openshift-console namespace
openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | - | CreatedSCCRanges | created SCC ranges for openshift-console-user-settings namespace
openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | - | CreatedSCCRanges | created SCC ranges for openshift-console-operator namespace
openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | - | CreatedSCCRanges | created SCC ranges for openshift-controller-manager namespace
openshift-service-ca-operator | deployment-controller | service-ca-operator | ScalingReplicaSet | Scaled up replica set service-ca-operator-d6fc45fc5 from 0 to 1
openshift-console-operator | deployment-controller | console-operator | ScalingReplicaSet | Scaled up replica set console-operator-9d4b6777b from 0 to 1
openshift-kube-storage-version-migrator-operator | deployment-controller | kube-storage-version-migrator-operator | ScalingReplicaSet | Scaled up replica set kube-storage-version-migrator-operator-6769c5d45 from 0 to 1
openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | - | CreatedSCCRanges | created SCC ranges for openshift-config-operator namespace
openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | - | CreatedSCCRanges | created SCC ranges for openshift-dns namespace
openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | - | CreatedSCCRanges | created SCC ranges for openshift-dns-operator namespace
openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | - | CreatedSCCRanges | created SCC ranges for openshift-etcd namespace
openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | - | CreatedSCCRanges | created SCC ranges for openshift-image-registry namespace
openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | - | CreatedSCCRanges | created SCC ranges for openshift-controller-manager-operator namespace
openshift-image-registry | deployment-controller | image-registry | ScalingReplicaSet | Scaled up replica set image-registry-6fd4d896fc from 0 to 1
openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | - | CreatedSCCRanges | created SCC ranges for openshift-infra namespace
openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | - | CreatedSCCRanges | created SCC ranges for openshift-insights namespace
openshift-ingress | replicaset-controller | router-default-589c889464 | FailedCreate | Error creating: pods "router-default-589c889464-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x13)
openshift-image-registry | image-registry-operator | openshift-image-registry | DeploymentCreated | Created Deployment.apps/image-registry -n openshift-image-registry because it was missing
openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | - | CreatedSCCRanges | created SCC ranges for openshift-ingress-canary namespace
openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | - | CreatedSCCRanges | created SCC ranges for openshift-ingress namespace
openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | - | CreatedSCCRanges | created SCC ranges for openshift-ingress-operator namespace
openshift-image-registry | controllermanager | image-registry | NoPods | No matching pods found (x2)
openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | - | CreatedSCCRanges | created SCC ranges for openshift-kube-controller-manager-operator namespace
openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | - | CreatedSCCRanges | created SCC ranges for openshift-kube-scheduler namespace
openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | - | CreatedSCCRanges | created SCC ranges for openshift-kube-apiserver namespace
openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | - | CreatedSCCRanges | created SCC ranges for openshift-kube-apiserver-operator namespace
openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | - | CreatedSCCRanges | created SCC ranges for openshift-kube-controller-manager namespace
openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | - | CreatedSCCRanges | created SCC ranges for openshift-machine-config-operator namespace
openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | - | CreatedSCCRanges | created SCC ranges for openshift-kube-scheduler-operator namespace
openshift-insights | replicaset-controller | insights-operator-585dfdc468 | FailedCreate | Error creating: pods "insights-operator-585dfdc468-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x13)
openshift-cluster-samples-operator | replicaset-controller | cluster-samples-operator-6dc5bdb6b4 | FailedCreate | Error creating: pods "cluster-samples-operator-6dc5bdb6b4-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x11)
openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | - | CreatedSCCRanges | created SCC ranges for openshift-machine-api namespace
openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | - | CreatedSCCRanges | created SCC ranges for openshift-marketplace namespace
openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | - | CreatedSCCRanges | created SCC ranges for openshift-kube-storage-version-migrator-operator namespace
openshift-cluster-storage-operator | replicaset-controller | volume-data-source-validator-7c6cbb6c87 | FailedCreate | Error creating: pods "volume-data-source-validator-7c6cbb6c87-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x12)
openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | - | CreatedSCCRanges | created SCC ranges for openshift-network-operator namespace
openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | - | CreatedSCCRanges | created SCC ranges for openshift-multus namespace
openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | - | CreatedSCCRanges | created SCC ranges for openshift-operator-lifecycle-manager namespace
openshift-console-operator | replicaset-controller | console-operator-9d4b6777b | FailedCreate | Error creating: pods "console-operator-9d4b6777b-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x11)
openshift-kube-storage-version-migrator-operator | replicaset-controller | kube-storage-version-migrator-operator-6769c5d45 | FailedCreate | Error creating: pods "kube-storage-version-migrator-operator-6769c5d45-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x11)
openshift-image-registry | replicaset-controller | image-registry-6fd4d896fc | FailedCreate | Error creating: pods "image-registry-6fd4d896fc-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x10)
openshift-service-ca-operator | replicaset-controller | service-ca-operator-d6fc45fc5 | FailedCreate | Error creating: pods "service-ca-operator-d6fc45fc5-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x11)
openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | - | CreatedSCCRanges | created SCC ranges for openshift-node namespace
openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | - | CreatedSCCRanges | created SCC ranges for openshift-monitoring namespace
openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | - | CreatedSCCRanges | created SCC ranges for openshift-user-workload-monitoring namespace
openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | - | CreatedSCCRanges | created SCC ranges for openshift-operators namespace
openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | - | CreatedSCCRanges | created SCC ranges for openshift-ovn-kubernetes namespace
openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | - | CreatedSCCRanges | created SCC ranges for openshift-service-ca-operator namespace
openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | - | CreatedSCCRanges | created SCC ranges for openshift-route-controller-manager namespace
openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | - | CreatedSCCRanges | created SCC ranges for openshift-network-diagnostics namespace
openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | - | CreatedSCCRanges | created SCC ranges for openshift-host-network namespace
openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | - | CreatedSCCRanges | created SCC ranges for openshift-network-node-identity namespace

openshift-ovn-kubernetes | ovnk-controlplane | ovn-kubernetes-master | LeaderElection | ovnkube-control-plane-6954bb965f-prm8g became leader
openshift-network-node-identity | network-node-identity-6c74646c9-zh4jw_aaa47194-f98e-4974-a489-f81a245ab38c | ovnkube-identity | LeaderElection | network-node-identity-6c74646c9-zh4jw_aaa47194-f98e-4974-a489-f81a245ab38c became leader
openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | - | CreatedSCCRanges | created SCC ranges for open-cluster-management-agent-addon namespace
openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | - | CreatedSCCRanges | created SCC ranges for open-cluster-management-db24fc8e-a688-4988-83ac-51abadbf0 namespace
openshift-network-diagnostics | deployment-controller | network-check-source | ScalingReplicaSet | Scaled up replica set network-check-source-8894fc9bd from 0 to 1
openshift-operator-lifecycle-manager | controllermanager | packageserver-pdb | NoPods | No matching pods found
openshift-image-registry | controllermanager | image-registry | NoPods | No matching pods found
kube-system | kube-controller-manager | kube-controller-manager | LeaderElection | kube-controller-manager-54d97cdc67-djw8t_034c0164-23d1-4380-a920-43203e968cc2 became leader
openshift-network-console | deployment-controller | networking-console-plugin | ScalingReplicaSet | Scaled up replica set networking-console-plugin-cb95c66f6 from 0 to 1
openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | - | CreatedSCCRanges | created SCC ranges for openshift-network-console namespace
openshift-network-diagnostics | replicaset-controller | network-check-source-8894fc9bd | FailedCreate | Error creating: pods "network-check-source-8894fc9bd-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x16)
openshift-cluster-samples-operator | replicaset-controller | cluster-samples-operator-6dc5bdb6b4 | FailedCreate | Error creating: pods "cluster-samples-operator-6dc5bdb6b4-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x16)
openshift-monitoring | replicaset-controller | cluster-monitoring-operator-75587bd455 | FailedCreate | Error creating: pods "cluster-monitoring-operator-75587bd455-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x16)
openshift-service-ca-operator | replicaset-controller | service-ca-operator-d6fc45fc5 | FailedCreate | Error creating: pods "service-ca-operator-d6fc45fc5-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x16)
openshift-cluster-storage-operator | replicaset-controller | volume-data-source-validator-7c6cbb6c87 | FailedCreate | Error creating: pods "volume-data-source-validator-7c6cbb6c87-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x16)
openshift-image-registry | replicaset-controller | image-registry-6fd4d896fc | FailedCreate | Error creating: pods "image-registry-6fd4d896fc-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x16)
openshift-ingress | replicaset-controller | router-default-589c889464 | FailedCreate | Error creating: pods "router-default-589c889464-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x16)
openshift-kube-storage-version-migrator-operator | replicaset-controller | kube-storage-version-migrator-operator-6769c5d45 | FailedCreate | Error creating: pods "kube-storage-version-migrator-operator-6769c5d45-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x16)
openshift-insights | replicaset-controller | insights-operator-585dfdc468 | FailedCreate | Error creating: pods "insights-operator-585dfdc468-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x16)
openshift-console-operator | replicaset-controller | console-operator-9d4b6777b | FailedCreate | Error creating: pods "console-operator-9d4b6777b-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x16)
openshift-network-console | replicaset-controller | networking-console-plugin-cb95c66f6 | FailedCreate | Error creating: pods "networking-console-plugin-cb95c66f6-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x16)

openshift-ovn-kubernetes | daemonset-controller | ovnkube-node | SuccessfulCreate | Created pod: ovnkube-node-lt5hd
openshift-multus | daemonset-controller | multus | SuccessfulCreate | Created pod: multus-4rqkv
default | cloud-node-controller | ip-10-0-134-217.ec2.internal | Synced | Node synced successfully
default | node-controller | ip-10-0-134-217.ec2.internal | RegisteredNode | Node ip-10-0-134-217.ec2.internal event: Registered Node ip-10-0-134-217.ec2.internal in Controller
default | kubelet | ip-10-0-134-217.ec2.internal | NodeHasSufficientPID | Node ip-10-0-134-217.ec2.internal status is now: NodeHasSufficientPID (x6)
openshift-cluster-node-tuning-operator | daemonset-controller | tuned | SuccessfulCreate | Created pod: tuned-g4dbx
openshift-network-diagnostics | daemonset-controller | network-check-target | SuccessfulCreate | Created pod: network-check-target-j6s9c
default | kubelet | ip-10-0-134-217.ec2.internal | NodeAllocatableEnforced | Updated Node Allocatable limit across pods
openshift-network-operator | daemonset-controller | iptables-alerter | SuccessfulCreate | Created pod: iptables-alerter-x467p
openshift-dns | daemonset-controller | node-resolver | SuccessfulCreate | Created pod: node-resolver-hqq4l
default | kubelet | ip-10-0-134-217.ec2.internal | NodeHasNoDiskPressure | Node ip-10-0-134-217.ec2.internal status is now: NodeHasNoDiskPressure (x6)
openshift-multus | daemonset-controller | multus-additional-cni-plugins | SuccessfulCreate | Created pod: multus-additional-cni-plugins-xrffc
kube-system | daemonset-controller | konnectivity-agent | SuccessfulCreate | Created pod: konnectivity-agent-qxtv4
default | kubelet | ip-10-0-134-217.ec2.internal | NodeHasSufficientMemory | Node ip-10-0-134-217.ec2.internal status is now: NodeHasSufficientMemory (x6)
openshift-image-registry | daemonset-controller | node-ca | SuccessfulCreate | Created pod: node-ca-rw9wr
openshift-cluster-csi-drivers | daemonset-controller | aws-ebs-csi-driver-node | SuccessfulCreate | Created pod: aws-ebs-csi-driver-node-gqldx
openshift-multus | daemonset-controller | network-metrics-daemon | SuccessfulCreate | Created pod: network-metrics-daemon-b6hrq
kube-system | kubelet | kube-apiserver-proxy-ip-10-0-134-217.ec2.internal | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:294af5c64228434d1ed6ee8ea3ac802e3c999aa847223e3b2efa18425a9fe421"
openshift-cluster-node-tuning-operator | kubelet | tuned-g4dbx | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8f2123db9953f98da0e43c266e6a8a070bf221533b995f87b7e358cc7498ca6d"
openshift-ovn-kubernetes | kubelet | ovnkube-node-lt5hd | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c303211b1d12d2daa9d18ed470194f6ce86e46174cc35d12b38f06834cb65cb0"
openshift-cluster-csi-drivers | kubelet | aws-ebs-csi-driver-node-gqldx | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:abb4d487159fbc9b7148c690cd3a6ee638680f8f879ff6195ca1be5b393705b0"
openshift-image-registry | kubelet | node-ca-rw9wr | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:44136b348d0b6fc6ca07bd61902f4901177c2185045a72b42b2a1d2ad050e0f9"
openshift-multus | kubelet | multus-additional-cni-plugins-xrffc | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f7e28d9938f33314269d8079649373b4befb36895b8636c4c2ec3c0fee47c91c"

kube-system | kubelet | kube-apiserver-proxy-ip-10-0-134-217.ec2.internal | Started | Started container haproxy
openshift-network-operator | kubelet | iptables-alerter-x467p | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:331caf7efdfbf739b1585570e0004ebd8b5301a6977fbc7b2c64a07475354bc8"
kube-system | kubelet | konnectivity-agent-qxtv4 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2f9b333f47df911dd05a9a10cb8ea988e97d3174490b348b8e46a161d9581d64"
openshift-dns | kubelet | node-resolver-hqq4l | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:331caf7efdfbf739b1585570e0004ebd8b5301a6977fbc7b2c64a07475354bc8"
kube-system | kubelet | kube-apiserver-proxy-ip-10-0-134-217.ec2.internal | Created | Created container: haproxy
kube-system | kubelet | kube-apiserver-proxy-ip-10-0-134-217.ec2.internal | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:294af5c64228434d1ed6ee8ea3ac802e3c999aa847223e3b2efa18425a9fe421" in 1.705s (1.705s including waiting). Image size: 488332864 bytes.
openshift-multus | kubelet | multus-4rqkv | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a61807a85f6a37f17eb42644bbc038bc04fc489626435da28b0d2c4a30f7e02a"
openshift-network-operator | daemonset-controller | iptables-alerter | SuccessfulCreate | Created pod: iptables-alerter-p49tx
openshift-cluster-node-tuning-operator | daemonset-controller | tuned | SuccessfulCreate | Created pod: tuned-s5jjg
openshift-image-registry | daemonset-controller | node-ca | SuccessfulCreate | Created pod: node-ca-vrpzt
openshift-cluster-csi-drivers | daemonset-controller | aws-ebs-csi-driver-node | SuccessfulCreate | Created pod: aws-ebs-csi-driver-node-zhm4z
openshift-ovn-kubernetes | daemonset-controller | ovnkube-node | SuccessfulCreate | Created pod: ovnkube-node-42xf8
default | kubelet | ip-10-0-137-228.ec2.internal | NodeHasSufficientMemory | Node ip-10-0-137-228.ec2.internal status is now: NodeHasSufficientMemory (x6)
openshift-multus | daemonset-controller | multus | SuccessfulCreate | Created pod: multus-94cqk
default | kubelet | ip-10-0-137-228.ec2.internal | NodeAllocatableEnforced | Updated Node Allocatable limit across pods
default | kubelet | ip-10-0-137-228.ec2.internal | NodeHasNoDiskPressure | Node ip-10-0-137-228.ec2.internal status is now: NodeHasNoDiskPressure (x6)
default | kubelet | ip-10-0-137-228.ec2.internal | NodeHasSufficientPID | Node ip-10-0-137-228.ec2.internal status is now: NodeHasSufficientPID (x6)
openshift-multus | daemonset-controller | network-metrics-daemon | SuccessfulCreate | Created pod: network-metrics-daemon-9nk69
kube-system | daemonset-controller | konnectivity-agent | SuccessfulCreate | Created pod: konnectivity-agent-p49zs
openshift-network-diagnostics | daemonset-controller | network-check-target | SuccessfulCreate | Created pod: network-check-target-bvrrk
openshift-dns | daemonset-controller | node-resolver | SuccessfulCreate | Created pod: node-resolver-4sxvb
kube-system | daemonset-controller | global-pull-secret-syncer | SuccessfulCreate | Created pod: global-pull-secret-syncer-6fwnt
kube-system | kubelet | kube-apiserver-proxy-ip-10-0-137-228.ec2.internal | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:294af5c64228434d1ed6ee8ea3ac802e3c999aa847223e3b2efa18425a9fe421"
default | cloud-node-controller | ip-10-0-137-228.ec2.internal | Synced | Node synced successfully
openshift-multus | daemonset-controller | multus-additional-cni-plugins | SuccessfulCreate | Created pod: multus-additional-cni-plugins-sg7kx

openshift-multus | kubelet | multus-additional-cni-plugins-sg7kx | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f7e28d9938f33314269d8079649373b4befb36895b8636c4c2ec3c0fee47c91c"
openshift-cluster-csi-drivers | kubelet | aws-ebs-csi-driver-node-zhm4z | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:abb4d487159fbc9b7148c690cd3a6ee638680f8f879ff6195ca1be5b393705b0"
openshift-multus | kubelet | multus-94cqk | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a61807a85f6a37f17eb42644bbc038bc04fc489626435da28b0d2c4a30f7e02a"
kube-system | kubelet | konnectivity-agent-p49zs | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2f9b333f47df911dd05a9a10cb8ea988e97d3174490b348b8e46a161d9581d64"
kube-system | kubelet | kube-apiserver-proxy-ip-10-0-137-228.ec2.internal | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:294af5c64228434d1ed6ee8ea3ac802e3c999aa847223e3b2efa18425a9fe421" in 1.709s (1.709s including waiting). Image size: 488332864 bytes.
kube-system | kubelet | kube-apiserver-proxy-ip-10-0-137-228.ec2.internal | Created | Created container: haproxy
kube-system | kubelet | kube-apiserver-proxy-ip-10-0-137-228.ec2.internal | Started | Started container haproxy
openshift-image-registry | kubelet | node-ca-vrpzt | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:44136b348d0b6fc6ca07bd61902f4901177c2185045a72b42b2a1d2ad050e0f9"
openshift-dns | kubelet | node-resolver-4sxvb | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:331caf7efdfbf739b1585570e0004ebd8b5301a6977fbc7b2c64a07475354bc8"
openshift-network-operator | kubelet | iptables-alerter-p49tx | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:331caf7efdfbf739b1585570e0004ebd8b5301a6977fbc7b2c64a07475354bc8"
openshift-cluster-node-tuning-operator | kubelet | tuned-s5jjg | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8f2123db9953f98da0e43c266e6a8a070bf221533b995f87b7e358cc7498ca6d"
openshift-ovn-kubernetes | kubelet | ovnkube-node-42xf8 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c303211b1d12d2daa9d18ed470194f6ce86e46174cc35d12b38f06834cb65cb0"
openshift-network-operator | kubelet | iptables-alerter-x467p | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:331caf7efdfbf739b1585570e0004ebd8b5301a6977fbc7b2c64a07475354bc8" in 17.578s (17.578s including waiting). Image size: 534708291 bytes.
openshift-image-registry | kubelet | node-ca-rw9wr | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:44136b348d0b6fc6ca07bd61902f4901177c2185045a72b42b2a1d2ad050e0f9" in 17.576s (17.576s including waiting). Image size: 480736321 bytes.
openshift-dns | kubelet | node-resolver-hqq4l | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:331caf7efdfbf739b1585570e0004ebd8b5301a6977fbc7b2c64a07475354bc8" in 17.575s (17.575s including waiting). Image size: 534708291 bytes.
openshift-multus | kubelet | multus-additional-cni-plugins-xrffc | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f7e28d9938f33314269d8079649373b4befb36895b8636c4c2ec3c0fee47c91c" in 17.591s (17.591s including waiting). Image size: 533474192 bytes.
openshift-ovn-kubernetes | kubelet | ovnkube-node-lt5hd | Created | Created container: ovn-controller
openshift-ovn-kubernetes | kubelet | ovnkube-node-lt5hd | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c303211b1d12d2daa9d18ed470194f6ce86e46174cc35d12b38f06834cb65cb0" in 17.659s (17.659s including waiting). Image size: 1592330346 bytes.
default | node-controller | ip-10-0-137-228.ec2.internal | RegisteredNode | Node ip-10-0-137-228.ec2.internal event: Registered Node ip-10-0-137-228.ec2.internal in Controller
openshift-multus | kubelet | multus-4rqkv | Started | Started container kube-multus
openshift-multus | kubelet | multus-4rqkv | Created | Created container: kube-multus
openshift-multus | kubelet | multus-4rqkv | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a61807a85f6a37f17eb42644bbc038bc04fc489626435da28b0d2c4a30f7e02a" in 17.599s (17.599s including waiting). Image size: 1267137864 bytes.
openshift-cluster-node-tuning-operator | kubelet | tuned-g4dbx | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8f2123db9953f98da0e43c266e6a8a070bf221533b995f87b7e358cc7498ca6d" in 17.577s (17.577s including waiting). Image size: 701151772 bytes.
openshift-cluster-node-tuning-operator | kubelet | tuned-g4dbx | Created | Created container: tuned
openshift-cluster-csi-drivers | kubelet | aws-ebs-csi-driver-node-gqldx | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:abb4d487159fbc9b7148c690cd3a6ee638680f8f879ff6195ca1be5b393705b0" in 17.579s (17.579s including waiting). Image size: 514965743 bytes.
kube-system | kubelet | konnectivity-agent-qxtv4 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2f9b333f47df911dd05a9a10cb8ea988e97d3174490b348b8e46a161d9581d64" in 17.249s (17.249s including waiting). Image size: 474198918 bytes.
openshift-cluster-csi-drivers | kubelet | aws-ebs-csi-driver-node-gqldx | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49ae6b645f147135aff6d0f464fcd64972abf57733f027318ca7376691eece01"

openshift-cluster-csi-drivers | kubelet | aws-ebs-csi-driver-node-gqldx | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:232019e2eb8e13139570277b223ff822086fa83edc73958cbf919d6b57118068"
openshift-ovn-kubernetes | kubelet | ovnkube-node-lt5hd | Started | Started container kube-rbac-proxy-node
openshift-dns | kubelet | node-resolver-hqq4l | Created | Created container: dns-node-resolver
openshift-image-registry | kubelet | node-ca-rw9wr | Started | Started container node-ca
openshift-image-registry | kubelet | node-ca-rw9wr | Created | Created container: node-ca
openshift-ovn-kubernetes | kubelet | ovnkube-node-lt5hd | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8e1f4bf2daa69c9e69766cebcfced43fa0ee926d4479becdc8a5b05b93ca6e81" already present on machine
openshift-ovn-kubernetes | kubelet | ovnkube-node-lt5hd | Started | Started container ovn-acl-logging
kube-system | kubelet | konnectivity-agent-qxtv4 | Created | Created container: konnectivity-agent
kube-system | kubelet | konnectivity-agent-qxtv4 | Started | Started container konnectivity-agent
openshift-ovn-kubernetes | kubelet | ovnkube-node-lt5hd | Created | Created container: ovn-acl-logging
openshift-ovn-kubernetes | kubelet | ovnkube-node-lt5hd | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c303211b1d12d2daa9d18ed470194f6ce86e46174cc35d12b38f06834cb65cb0" already present on machine
openshift-multus | kubelet | multus-additional-cni-plugins-xrffc | Started | Started container egress-router-binary-copy
openshift-multus | kubelet | multus-additional-cni-plugins-xrffc | Created | Created container: egress-router-binary-copy
openshift-cluster-csi-drivers | kubelet | aws-ebs-csi-driver-node-gqldx | Created | Created container: csi-driver
openshift-cluster-csi-drivers | kubelet | aws-ebs-csi-driver-node-gqldx | Started | Started container csi-driver
openshift-ovn-kubernetes | kubelet | ovnkube-node-lt5hd | Created | Created container: kube-rbac-proxy-node
openshift-dns | kubelet | node-resolver-hqq4l | Started | Started container dns-node-resolver
openshift-cluster-csi-drivers | kubelet | aws-ebs-csi-driver-node-gqldx | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49ae6b645f147135aff6d0f464fcd64972abf57733f027318ca7376691eece01" in 726ms (726ms including waiting). Image size: 426505480 bytes.
openshift-cluster-csi-drivers | kubelet | aws-ebs-csi-driver-node-gqldx | Created | Created container: csi-node-driver-registrar
openshift-cluster-csi-drivers | kubelet | aws-ebs-csi-driver-node-gqldx | Started | Started container csi-node-driver-registrar
openshift-cluster-node-tuning-operator | kubelet | tuned-g4dbx | Started | Started container tuned
openshift-ovn-kubernetes | kubelet | ovnkube-node-lt5hd | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8e1f4bf2daa69c9e69766cebcfced43fa0ee926d4479becdc8a5b05b93ca6e81" already present on machine
openshift-ovn-kubernetes

kubelet

ovnkube-node-lt5hd

Started

Started container ovn-controller

openshift-ovn-kubernetes

kubelet

ovnkube-node-lt5hd

Created

Created container: kube-rbac-proxy-ovn-metrics

openshift-image-registry

deployment-controller

image-registry

ScalingReplicaSet

Scaled up replica set image-registry-66f5f8d5cd from 0 to 1

openshift-ovn-kubernetes

kubelet

ovnkube-node-lt5hd

Started

Started container kube-rbac-proxy-ovn-metrics

openshift-ovn-kubernetes

kubelet

ovnkube-node-lt5hd

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c303211b1d12d2daa9d18ed470194f6ce86e46174cc35d12b38f06834cb65cb0" already present on machine

openshift-ovn-kubernetes

kubelet

ovnkube-node-lt5hd

Created

Created container: northd

openshift-ovn-kubernetes

kubelet

ovnkube-node-lt5hd

Started

Started container northd

openshift-ovn-kubernetes

kubelet

ovnkube-node-lt5hd

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c303211b1d12d2daa9d18ed470194f6ce86e46174cc35d12b38f06834cb65cb0" already present on machine

openshift-image-registry

replicaset-controller

image-registry-66f5f8d5cd

SuccessfulCreate

Created pod: image-registry-66f5f8d5cd-rgqhw

openshift-ovn-kubernetes

kubelet

ovnkube-node-lt5hd

Created

Created container: nbdb

openshift-multus

kubelet

multus-additional-cni-plugins-xrffc

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:77f7e33000484395db2216d431d7f91158cbb0ddee564b65288f4ec47e3188b8"

openshift-ovn-kubernetes

kubelet

ovnkube-node-lt5hd

Started

Started container nbdb

openshift-network-operator

kubelet

iptables-alerter-x467p

Started

Started container iptables-alerter

openshift-network-operator

kubelet

iptables-alerter-x467p

Created

Created container: iptables-alerter

openshift-cluster-csi-drivers

kubelet

aws-ebs-csi-driver-node-gqldx

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:232019e2eb8e13139570277b223ff822086fa83edc73958cbf919d6b57118068" in 1.022s (1.022s including waiting). Image size: 426337527 bytes.

openshift-ovn-kubernetes

kubelet

ovnkube-node-lt5hd

Started

Started container sbdb

openshift-ovn-kubernetes

kubelet

ovnkube-node-lt5hd

Created

Created container: sbdb

openshift-ovn-kubernetes

kubelet

ovnkube-node-lt5hd

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c303211b1d12d2daa9d18ed470194f6ce86e46174cc35d12b38f06834cb65cb0" already present on machine

openshift-cluster-csi-drivers

kubelet

aws-ebs-csi-driver-node-gqldx

Started

Started container csi-liveness-probe

openshift-cluster-csi-drivers

kubelet

aws-ebs-csi-driver-node-gqldx

Created

Created container: csi-liveness-probe

openshift-multus

kubelet

multus-additional-cni-plugins-xrffc

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:77f7e33000484395db2216d431d7f91158cbb0ddee564b65288f4ec47e3188b8" in 4.292s (4.292s including waiting). Image size: 727300480 bytes.

kube-system

daemonset-controller

konnectivity-agent

SuccessfulCreate

Created pod: konnectivity-agent-7lh45

openshift-multus

daemonset-controller

multus

SuccessfulCreate

Created pod: multus-xsl6l

openshift-image-registry

daemonset-controller

node-ca

SuccessfulCreate

Created pod: node-ca-tqshj

default

ovnkube-csr-approver-controller

csr-w7nn8

CSRApproved

CSR "csr-w7nn8" has been approved

default

kubelet

ip-10-0-141-167.ec2.internal

NodeAllocatableEnforced

Updated Node Allocatable limit across pods

openshift-multus

daemonset-controller

multus-additional-cni-plugins

SuccessfulCreate

Created pod: multus-additional-cni-plugins-2scbl

openshift-cluster-csi-drivers

daemonset-controller

aws-ebs-csi-driver-node

SuccessfulCreate

Created pod: aws-ebs-csi-driver-node-kmhfn
(x6)

default

kubelet

ip-10-0-141-167.ec2.internal

NodeHasSufficientPID

Node ip-10-0-141-167.ec2.internal status is now: NodeHasSufficientPID

openshift-network-operator

daemonset-controller

iptables-alerter

SuccessfulCreate

Created pod: iptables-alerter-pppj5

openshift-ovn-kubernetes

kubelet

ovnkube-node-lt5hd

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c303211b1d12d2daa9d18ed470194f6ce86e46174cc35d12b38f06834cb65cb0" already present on machine

openshift-ovn-kubernetes

kubelet

ovnkube-node-lt5hd

Created

Created container: ovnkube-controller
(x6)

default

kubelet

ip-10-0-141-167.ec2.internal

NodeHasSufficientMemory

Node ip-10-0-141-167.ec2.internal status is now: NodeHasSufficientMemory

openshift-cluster-node-tuning-operator

daemonset-controller

tuned

SuccessfulCreate

Created pod: tuned-85mbp
(x6)

default

kubelet

ip-10-0-141-167.ec2.internal

NodeHasNoDiskPressure

Node ip-10-0-141-167.ec2.internal status is now: NodeHasNoDiskPressure

default

node-controller

ip-10-0-141-167.ec2.internal

RegisteredNode

Node ip-10-0-141-167.ec2.internal event: Registered Node ip-10-0-141-167.ec2.internal in Controller

openshift-multus

kubelet

multus-additional-cni-plugins-xrffc

Created

Created container: cni-plugins

openshift-multus

kubelet

multus-additional-cni-plugins-xrffc

Started

Started container cni-plugins

openshift-ovn-kubernetes

daemonset-controller

ovnkube-node

SuccessfulCreate

Created pod: ovnkube-node-4bfdn

openshift-ovn-kubernetes

kubelet

ovnkube-node-lt5hd

Started

Started container ovnkube-controller

openshift-multus

kubelet

multus-additional-cni-plugins-2scbl

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f7e28d9938f33314269d8079649373b4befb36895b8636c4c2ec3c0fee47c91c"

default

cloud-node-controller

ip-10-0-141-167.ec2.internal

Synced

Node synced successfully

kube-system

kubelet

kube-apiserver-proxy-ip-10-0-141-167.ec2.internal

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:294af5c64228434d1ed6ee8ea3ac802e3c999aa847223e3b2efa18425a9fe421"

openshift-multus

kubelet

multus-additional-cni-plugins-xrffc

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91034a2e8fa729a060ad18831a3c6e5de5d2b7b3de437b198ddc24fcb724dcf6"

openshift-dns

daemonset-controller

node-resolver

SuccessfulCreate

Created pod: node-resolver-p6k5v

openshift-network-operator

kubelet

iptables-alerter-pppj5

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:331caf7efdfbf739b1585570e0004ebd8b5301a6977fbc7b2c64a07475354bc8"

openshift-cluster-csi-drivers

kubelet

aws-ebs-csi-driver-node-kmhfn

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:abb4d487159fbc9b7148c690cd3a6ee638680f8f879ff6195ca1be5b393705b0"

openshift-ovn-kubernetes

kubelet

ovnkube-node-4bfdn

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c303211b1d12d2daa9d18ed470194f6ce86e46174cc35d12b38f06834cb65cb0"

openshift-cluster-node-tuning-operator

kubelet

tuned-85mbp

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8f2123db9953f98da0e43c266e6a8a070bf221533b995f87b7e358cc7498ca6d"

openshift-network-diagnostics

daemonset-controller

network-check-target

SuccessfulCreate

Created pod: network-check-target-w22tj

kube-system

kubelet

konnectivity-agent-7lh45

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2f9b333f47df911dd05a9a10cb8ea988e97d3174490b348b8e46a161d9581d64"

openshift-multus

kubelet

multus-xsl6l

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a61807a85f6a37f17eb42644bbc038bc04fc489626435da28b0d2c4a30f7e02a"

openshift-multus

daemonset-controller

network-metrics-daemon

SuccessfulCreate

Created pod: network-metrics-daemon-8hzhw

kube-system

daemonset-controller

global-pull-secret-syncer

SuccessfulCreate

Created pod: global-pull-secret-syncer-t86n6

openshift-image-registry

kubelet

node-ca-tqshj

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:44136b348d0b6fc6ca07bd61902f4901177c2185045a72b42b2a1d2ad050e0f9"
(x3)

openshift-ingress

service-controller

router-default

UpdatedLoadBalancer

Updated load balancer with new hosts

openshift-multus

kubelet

multus-additional-cni-plugins-xrffc

Started

Started container bond-cni-plugin

openshift-dns

kubelet

node-resolver-p6k5v

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:331caf7efdfbf739b1585570e0004ebd8b5301a6977fbc7b2c64a07475354bc8"

default

ovnk-controlplane

ip-10-0-134-217.ec2.internal

ErrorAddingResource

[k8s.ovn.org/node-chassis-id annotation not found for node ip-10-0-134-217.ec2.internal, error getting gateway config for node ip-10-0-134-217.ec2.internal: k8s.ovn.org/l3-gateway-config annotation not found for node "ip-10-0-134-217.ec2.internal", failed to update chassis to local for local node ip-10-0-134-217.ec2.internal, error: failed to parse node chassis-id for node - ip-10-0-134-217.ec2.internal, error: k8s.ovn.org/node-chassis-id annotation not found for node ip-10-0-134-217.ec2.internal]

openshift-multus

kubelet

multus-additional-cni-plugins-xrffc

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91034a2e8fa729a060ad18831a3c6e5de5d2b7b3de437b198ddc24fcb724dcf6" in 996ms (996ms including waiting). Image size: 412926967 bytes.

openshift-multus

kubelet

multus-additional-cni-plugins-xrffc

Created

Created container: bond-cni-plugin

openshift-multus

kubelet

multus-additional-cni-plugins-xrffc

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:18d788b8deb049d24dfdb101371f6d2211a5e731bacd64b08adb97f66b6c4eb1"

openshift-multus

kubelet

multus-additional-cni-plugins-xrffc

Started

Started container routeoverride-cni

openshift-multus

kubelet

multus-additional-cni-plugins-xrffc

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:18d788b8deb049d24dfdb101371f6d2211a5e731bacd64b08adb97f66b6c4eb1" in 988ms (988ms including waiting). Image size: 408523640 bytes.

openshift-multus

kubelet

multus-additional-cni-plugins-xrffc

Created

Created container: routeoverride-cni

openshift-multus

kubelet

multus-additional-cni-plugins-xrffc

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c4d9f8fd250636b548d533d8f0af8bd5494ad6e5026569cefc634f0283d50df"

openshift-ovn-kubernetes

kubelet

ovnkube-node-lt5hd

Unhealthy

Readiness probe failed:

openshift-image-registry

kubelet

node-ca-vrpzt

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:44136b348d0b6fc6ca07bd61902f4901177c2185045a72b42b2a1d2ad050e0f9" in 12.208s (12.208s including waiting). Image size: 480736321 bytes.
(x6)

kube-system

kubelet

global-pull-secret-syncer-6fwnt

FailedMount

MountVolume.SetUp failed for volume "original-pull-secret" : object "kube-system"/"original-pull-secret" not registered
(x17)

openshift-network-diagnostics

kubelet

network-check-target-j6s9c

NetworkNotReady

network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?
(x18)

openshift-multus

kubelet

network-metrics-daemon-b6hrq

NetworkNotReady

network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?
(x7)

openshift-multus

kubelet

network-metrics-daemon-b6hrq

FailedMount

MountVolume.SetUp failed for volume "metrics-certs" : object "openshift-multus"/"metrics-daemon-secret" not registered
(x10)

kube-system

kubelet

global-pull-secret-syncer-6fwnt

NetworkNotReady

network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?

default

ovnkube-csr-approver-controller

csr-fx8ng

CSRApproved

CSR "csr-fx8ng" has been approved
(x7)

openshift-network-diagnostics

kubelet

network-check-target-j6s9c

FailedMount

MountVolume.SetUp failed for volume "kube-api-access-zhkz8" : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]

openshift-dns

daemonset-controller

dns-default

SuccessfulCreate

Created pod: dns-default-rb7d6
(x6)

openshift-multus

kubelet

network-metrics-daemon-9nk69

FailedMount

MountVolume.SetUp failed for volume "metrics-certs" : object "openshift-multus"/"metrics-daemon-secret" not registered

default

kubelet

ip-10-0-134-217.ec2.internal

NodeReady

Node ip-10-0-134-217.ec2.internal status is now: NodeReady

openshift-ingress-canary

daemonset-controller

ingress-canary

SuccessfulCreate

Created pod: ingress-canary-4lslj
(x6)

openshift-network-diagnostics

kubelet

network-check-target-bvrrk

FailedMount

MountVolume.SetUp failed for volume "kube-api-access-fh8bz" : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]

openshift-multus

kubelet

multus-additional-cni-plugins-xrffc

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c4d9f8fd250636b548d533d8f0af8bd5494ad6e5026569cefc634f0283d50df" in 5.957s (5.957s including waiting). Image size: 974678236 bytes.

openshift-multus

kubelet

multus-additional-cni-plugins-xrffc

Created

Created container: whereabouts-cni-bincopy

openshift-dns

kubelet

node-resolver-4sxvb

Created

Created container: dns-node-resolver

openshift-dns

kubelet

node-resolver-4sxvb

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:331caf7efdfbf739b1585570e0004ebd8b5301a6977fbc7b2c64a07475354bc8" in 16.835s (16.835s including waiting). Image size: 534708291 bytes.

openshift-image-registry

kubelet

node-ca-vrpzt

Created

Created container: node-ca

kube-system

kubelet

konnectivity-agent-p49zs

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2f9b333f47df911dd05a9a10cb8ea988e97d3174490b348b8e46a161d9581d64" in 16.515s (16.515s including waiting). Image size: 474198918 bytes.

kube-system

kubelet

konnectivity-agent-p49zs

Created

Created container: konnectivity-agent

kube-system

kubelet

konnectivity-agent-p49zs

Started

Started container konnectivity-agent

openshift-multus

kubelet

multus-additional-cni-plugins-sg7kx

Started

Started container egress-router-binary-copy

openshift-multus

kubelet

multus-additional-cni-plugins-sg7kx

Created

Created container: egress-router-binary-copy

openshift-multus

kubelet

multus-additional-cni-plugins-sg7kx

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f7e28d9938f33314269d8079649373b4befb36895b8636c4c2ec3c0fee47c91c" in 16.514s (16.514s including waiting). Image size: 533474192 bytes.

openshift-network-operator

kubelet

iptables-alerter-p49tx

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:331caf7efdfbf739b1585570e0004ebd8b5301a6977fbc7b2c64a07475354bc8" in 16.834s (16.834s including waiting). Image size: 534708291 bytes.

openshift-image-registry

kubelet

node-ca-vrpzt

Started

Started container node-ca

openshift-ovn-kubernetes

kubelet

ovnkube-node-42xf8

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8e1f4bf2daa69c9e69766cebcfced43fa0ee926d4479becdc8a5b05b93ca6e81" already present on machine

openshift-ovn-kubernetes

kubelet

ovnkube-node-42xf8

Started

Started container kube-rbac-proxy-node

openshift-dns

kubelet

node-resolver-4sxvb

Started

Started container dns-node-resolver

openshift-cluster-node-tuning-operator

kubelet

tuned-s5jjg

Started

Started container tuned

openshift-ovn-kubernetes

kubelet

ovnkube-node-42xf8

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c303211b1d12d2daa9d18ed470194f6ce86e46174cc35d12b38f06834cb65cb0" already present on machine

openshift-multus

kubelet

multus-additional-cni-plugins-xrffc

Started

Started container whereabouts-cni-bincopy

openshift-cluster-node-tuning-operator

kubelet

tuned-s5jjg

Created

Created container: tuned

openshift-ovn-kubernetes

kubelet

ovnkube-node-42xf8

Created

Created container: kube-rbac-proxy-node

openshift-ovn-kubernetes

kubelet

ovnkube-node-42xf8

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8e1f4bf2daa69c9e69766cebcfced43fa0ee926d4479becdc8a5b05b93ca6e81" already present on machine

openshift-ovn-kubernetes

kubelet

ovnkube-node-42xf8

Started

Started container ovn-acl-logging

openshift-multus

kubelet

multus-94cqk

Started

Started container kube-multus

openshift-multus

kubelet

multus-94cqk

Created

Created container: kube-multus

openshift-multus

kubelet

multus-94cqk

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a61807a85f6a37f17eb42644bbc038bc04fc489626435da28b0d2c4a30f7e02a" in 16.876s (16.876s including waiting). Image size: 1267137864 bytes.

openshift-ovn-kubernetes

kubelet

ovnkube-node-42xf8

Created

Created container: ovn-acl-logging

openshift-cluster-node-tuning-operator

kubelet

tuned-s5jjg

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8f2123db9953f98da0e43c266e6a8a070bf221533b995f87b7e358cc7498ca6d" in 16.844s (16.844s including waiting). Image size: 701151772 bytes.

openshift-cluster-csi-drivers

kubelet

aws-ebs-csi-driver-node-zhm4z

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49ae6b645f147135aff6d0f464fcd64972abf57733f027318ca7376691eece01"

openshift-cluster-csi-drivers

kubelet

aws-ebs-csi-driver-node-zhm4z

Started

Started container csi-driver

openshift-cluster-csi-drivers

kubelet

aws-ebs-csi-driver-node-zhm4z

Created

Created container: csi-driver

openshift-cluster-csi-drivers

kubelet

aws-ebs-csi-driver-node-zhm4z

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:abb4d487159fbc9b7148c690cd3a6ee638680f8f879ff6195ca1be5b393705b0" in 16.843s (16.843s including waiting). Image size: 514965743 bytes.

openshift-ovn-kubernetes

kubelet

ovnkube-node-42xf8

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c303211b1d12d2daa9d18ed470194f6ce86e46174cc35d12b38f06834cb65cb0" in 16.927s (16.927s including waiting). Image size: 1592330346 bytes.

openshift-ovn-kubernetes

kubelet

ovnkube-node-42xf8

Created

Created container: ovn-controller

openshift-ovn-kubernetes

kubelet

ovnkube-node-42xf8

Started

Started container ovn-controller

openshift-network-operator

kubelet

iptables-alerter-p49tx

Started

Started container iptables-alerter

openshift-network-operator

kubelet

iptables-alerter-p49tx

Created

Created container: iptables-alerter

openshift-cluster-csi-drivers

kubelet

aws-ebs-csi-driver-node-zhm4z

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49ae6b645f147135aff6d0f464fcd64972abf57733f027318ca7376691eece01" in 1.162s (1.162s including waiting). Image size: 426505480 bytes.

openshift-multus

kubelet

multus-additional-cni-plugins-sg7kx

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:77f7e33000484395db2216d431d7f91158cbb0ddee564b65288f4ec47e3188b8"

openshift-multus

kubelet

multus-additional-cni-plugins-xrffc

Started

Started container whereabouts-cni

openshift-multus

kubelet

multus-additional-cni-plugins-xrffc

Created

Created container: whereabouts-cni

openshift-multus

kubelet

multus-additional-cni-plugins-xrffc

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c4d9f8fd250636b548d533d8f0af8bd5494ad6e5026569cefc634f0283d50df" already present on machine

openshift-ovn-kubernetes

kubelet

ovnkube-node-42xf8

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c303211b1d12d2daa9d18ed470194f6ce86e46174cc35d12b38f06834cb65cb0" already present on machine

openshift-ovn-kubernetes

kubelet

ovnkube-node-42xf8

Started

Started container nbdb

openshift-ovn-kubernetes

kubelet

ovnkube-node-42xf8

Created

Created container: nbdb

openshift-ovn-kubernetes

kubelet

ovnkube-node-42xf8

Created

Created container: kube-rbac-proxy-ovn-metrics

openshift-ovn-kubernetes

kubelet

ovnkube-node-42xf8

Started

Started container kube-rbac-proxy-ovn-metrics

openshift-ovn-kubernetes

kubelet

ovnkube-node-42xf8

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c303211b1d12d2daa9d18ed470194f6ce86e46174cc35d12b38f06834cb65cb0" already present on machine

openshift-ovn-kubernetes

kubelet

ovnkube-node-42xf8

Created

Created container: northd

openshift-ovn-kubernetes

kubelet

ovnkube-node-42xf8

Started

Started container northd

openshift-cluster-csi-drivers

kubelet

aws-ebs-csi-driver-node-zhm4z

Created

Created container: csi-node-driver-registrar

openshift-cluster-csi-drivers

kubelet

aws-ebs-csi-driver-node-zhm4z

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:232019e2eb8e13139570277b223ff822086fa83edc73958cbf919d6b57118068"

openshift-cluster-csi-drivers

kubelet

aws-ebs-csi-driver-node-zhm4z

Started

Started container csi-node-driver-registrar

openshift-multus

kubelet

multus-additional-cni-plugins-xrffc

Created

Created container: kube-multus-additional-cni-plugins

openshift-multus

kubelet

multus-additional-cni-plugins-xrffc

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a61807a85f6a37f17eb42644bbc038bc04fc489626435da28b0d2c4a30f7e02a" already present on machine

openshift-ovn-kubernetes

kubelet

ovnkube-node-42xf8

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c303211b1d12d2daa9d18ed470194f6ce86e46174cc35d12b38f06834cb65cb0" already present on machine

openshift-ovn-kubernetes

kubelet

ovnkube-node-42xf8

Created

Created container: sbdb

openshift-ovn-kubernetes

kubelet

ovnkube-node-42xf8

Started

Started container sbdb

openshift-cluster-csi-drivers

kubelet

aws-ebs-csi-driver-node-zhm4z

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:232019e2eb8e13139570277b223ff822086fa83edc73958cbf919d6b57118068" in 1.289s (1.289s including waiting). Image size: 426337527 bytes.

openshift-cluster-csi-drivers

kubelet

aws-ebs-csi-driver-node-zhm4z

Created

Created container: csi-liveness-probe

openshift-cluster-csi-drivers

kubelet

aws-ebs-csi-driver-node-zhm4z

Started

Started container csi-liveness-probe

kube-system

kubelet

kube-apiserver-proxy-ip-10-0-141-167.ec2.internal

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:294af5c64228434d1ed6ee8ea3ac802e3c999aa847223e3b2efa18425a9fe421" in 14.059s (14.059s including waiting). Image size: 488332864 bytes.
(x6)

kube-system

kubelet

global-pull-secret-syncer-t86n6

FailedMount

MountVolume.SetUp failed for volume "original-pull-secret" : object "kube-system"/"original-pull-secret" not registered
(x6)

openshift-multus

kubelet

network-metrics-daemon-8hzhw

FailedMount

MountVolume.SetUp failed for volume "metrics-certs" : object "openshift-multus"/"metrics-daemon-secret" not registered

openshift-ovn-kubernetes

kubelet

ovnkube-node-42xf8

Started

Started container ovnkube-controller
(x6)

openshift-network-diagnostics

kubelet

network-check-target-w22tj

FailedMount

MountVolume.SetUp failed for volume "kube-api-access-jkzp9" : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]

openshift-multus

kubelet

multus-additional-cni-plugins-sg7kx

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91034a2e8fa729a060ad18831a3c6e5de5d2b7b3de437b198ddc24fcb724dcf6"

openshift-multus

kubelet

multus-additional-cni-plugins-sg7kx

Started

Started container cni-plugins

default

ovnkube-csr-approver-controller

csr-hg4sw

CSRApproved

CSR "csr-hg4sw" has been approved

openshift-multus

kubelet

multus-additional-cni-plugins-sg7kx

Created

Created container: cni-plugins

openshift-multus

kubelet

multus-additional-cni-plugins-sg7kx

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:77f7e33000484395db2216d431d7f91158cbb0ddee564b65288f4ec47e3188b8" in 4.572s (4.572s including waiting). Image size: 727300480 bytes.

openshift-ovn-kubernetes

kubelet

ovnkube-node-42xf8

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c303211b1d12d2daa9d18ed470194f6ce86e46174cc35d12b38f06834cb65cb0" already present on machine

openshift-ovn-kubernetes

kubelet

ovnkube-node-42xf8

Created

Created container: ovnkube-controller

openshift-multus

kubelet

multus-additional-cni-plugins-sg7kx

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91034a2e8fa729a060ad18831a3c6e5de5d2b7b3de437b198ddc24fcb724dcf6" in 1.051s (1.051s including waiting). Image size: 412926967 bytes.

openshift-ovn-kubernetes

kubelet

ovnkube-node-4bfdn

Created

Created container: kube-rbac-proxy-ovn-metrics

openshift-ovn-kubernetes

kubelet

ovnkube-node-4bfdn

Started

Started container kube-rbac-proxy-node

openshift-image-registry

kubelet

node-ca-tqshj

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:44136b348d0b6fc6ca07bd61902f4901177c2185045a72b42b2a1d2ad050e0f9" in 17.533s (17.533s including waiting). Image size: 480736321 bytes.

openshift-multus

kubelet

multus-xsl6l

Started

Started container kube-multus

openshift-multus

kubelet

multus-xsl6l

Created

Created container: kube-multus

openshift-multus

kubelet

multus-xsl6l

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a61807a85f6a37f17eb42644bbc038bc04fc489626435da28b0d2c4a30f7e02a" in 17.787s (17.787s including waiting). Image size: 1267137864 bytes.

openshift-multus

kubelet

multus-additional-cni-plugins-sg7kx

Created

Created container: bond-cni-plugin

openshift-cluster-node-tuning-operator

kubelet

tuned-85mbp

Started

Started container tuned

kube-system

kubelet

konnectivity-agent-7lh45

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2f9b333f47df911dd05a9a10cb8ea988e97d3174490b348b8e46a161d9581d64" in 17.547s (17.547s including waiting). Image size: 474198918 bytes.

openshift-cluster-node-tuning-operator

kubelet

tuned-85mbp

Created

Created container: tuned

openshift-multus

kubelet

multus-additional-cni-plugins-2scbl

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f7e28d9938f33314269d8079649373b4befb36895b8636c4c2ec3c0fee47c91c" in 17.559s (17.559s including waiting). Image size: 533474192 bytes.

openshift-ovn-kubernetes

kubelet

ovnkube-node-4bfdn

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c303211b1d12d2daa9d18ed470194f6ce86e46174cc35d12b38f06834cb65cb0" already present on machine

openshift-ovn-kubernetes

kubelet

ovnkube-node-4bfdn

Started

Started container kube-rbac-proxy-ovn-metrics

openshift-dns

kubelet

node-resolver-p6k5v

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:331caf7efdfbf739b1585570e0004ebd8b5301a6977fbc7b2c64a07475354bc8" in 17.541s (17.541s including waiting). Image size: 534708291 bytes.

openshift-ovn-kubernetes

kubelet

ovnkube-node-4bfdn

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8e1f4bf2daa69c9e69766cebcfced43fa0ee926d4479becdc8a5b05b93ca6e81" already present on machine

openshift-multus | kubelet | multus-additional-cni-plugins-sg7kx | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:18d788b8deb049d24dfdb101371f6d2211a5e731bacd64b08adb97f66b6c4eb1"
openshift-ovn-kubernetes | kubelet | ovnkube-node-4bfdn | Created | Created container: kube-rbac-proxy-node
openshift-ovn-kubernetes | kubelet | ovnkube-node-4bfdn | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8e1f4bf2daa69c9e69766cebcfced43fa0ee926d4479becdc8a5b05b93ca6e81" already present on machine
openshift-ovn-kubernetes | kubelet | ovnkube-node-4bfdn | Started | Started container ovn-acl-logging
openshift-ovn-kubernetes | kubelet | ovnkube-node-4bfdn | Created | Created container: ovn-acl-logging
openshift-network-operator | kubelet | iptables-alerter-pppj5 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:331caf7efdfbf739b1585570e0004ebd8b5301a6977fbc7b2c64a07475354bc8" in 17.619s (17.619s including waiting). Image size: 534708291 bytes.
kube-system | kubelet | kube-apiserver-proxy-ip-10-0-141-167.ec2.internal | Created | Created container: haproxy
kube-system | kubelet | kube-apiserver-proxy-ip-10-0-141-167.ec2.internal | Started | Started container haproxy
openshift-ovn-kubernetes | kubelet | ovnkube-node-4bfdn | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c303211b1d12d2daa9d18ed470194f6ce86e46174cc35d12b38f06834cb65cb0" already present on machine
openshift-ovn-kubernetes | kubelet | ovnkube-node-4bfdn | Started | Started container ovn-controller
openshift-ovn-kubernetes | kubelet | ovnkube-node-4bfdn | Created | Created container: ovn-controller
openshift-ovn-kubernetes | kubelet | ovnkube-node-4bfdn | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c303211b1d12d2daa9d18ed470194f6ce86e46174cc35d12b38f06834cb65cb0" in 17.565s (17.565s including waiting). Image size: 1592330346 bytes.
openshift-cluster-node-tuning-operator | kubelet | tuned-85mbp | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8f2123db9953f98da0e43c266e6a8a070bf221533b995f87b7e358cc7498ca6d" in 17.591s (17.591s including waiting). Image size: 701151772 bytes.
openshift-multus | kubelet | multus-additional-cni-plugins-sg7kx | Started | Started container bond-cni-plugin
default | ovnk-controlplane | ip-10-0-137-228.ec2.internal | ErrorAddingResource | [k8s.ovn.org/node-chassis-id annotation not found for node ip-10-0-137-228.ec2.internal, error getting gateway config for node ip-10-0-137-228.ec2.internal: k8s.ovn.org/l3-gateway-config annotation not found for node "ip-10-0-137-228.ec2.internal", failed to update chassis to local for local node ip-10-0-137-228.ec2.internal, error: failed to parse node chassis-id for node - ip-10-0-137-228.ec2.internal, error: k8s.ovn.org/node-chassis-id annotation not found for node ip-10-0-137-228.ec2.internal]
openshift-cluster-csi-drivers | kubelet | aws-ebs-csi-driver-node-kmhfn | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:abb4d487159fbc9b7148c690cd3a6ee638680f8f879ff6195ca1be5b393705b0" in 17.592s (17.592s including waiting). Image size: 514965743 bytes.
openshift-multus | kubelet | multus-additional-cni-plugins-sg7kx | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:18d788b8deb049d24dfdb101371f6d2211a5e731bacd64b08adb97f66b6c4eb1" in 1.012s (1.012s including waiting). Image size: 408523640 bytes.
openshift-network-operator | kubelet | iptables-alerter-pppj5 | Created | Created container: iptables-alerter
openshift-cluster-csi-drivers | kubelet | aws-ebs-csi-driver-node-kmhfn | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49ae6b645f147135aff6d0f464fcd64972abf57733f027318ca7376691eece01"
openshift-dns | kubelet | node-resolver-p6k5v | Created | Created container: dns-node-resolver
openshift-dns | kubelet | node-resolver-p6k5v | Started | Started container dns-node-resolver
openshift-image-registry | kubelet | node-ca-tqshj | Started | Started container node-ca
openshift-image-registry | kubelet | node-ca-tqshj | Created | Created container: node-ca
openshift-cluster-csi-drivers | kubelet | aws-ebs-csi-driver-node-kmhfn | Created | Created container: csi-driver
openshift-ovn-kubernetes | kubelet | ovnkube-node-4bfdn | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c303211b1d12d2daa9d18ed470194f6ce86e46174cc35d12b38f06834cb65cb0" already present on machine
openshift-ovn-kubernetes | kubelet | ovnkube-node-4bfdn | Created | Created container: nbdb
openshift-ovn-kubernetes | kubelet | ovnkube-node-4bfdn | Started | Started container nbdb
openshift-ovn-kubernetes | kubelet | ovnkube-node-4bfdn | Started | Started container northd
openshift-network-operator | kubelet | iptables-alerter-pppj5 | Started | Started container iptables-alerter
openshift-cluster-csi-drivers | kubelet | aws-ebs-csi-driver-node-kmhfn | Started | Started container csi-driver
openshift-ovn-kubernetes | kubelet | ovnkube-node-4bfdn | Created | Created container: northd
kube-system | kubelet | konnectivity-agent-7lh45 | Started | Started container konnectivity-agent
openshift-multus | kubelet | multus-additional-cni-plugins-2scbl | Created | Created container: egress-router-binary-copy
openshift-multus | kubelet | multus-additional-cni-plugins-2scbl | Started | Started container egress-router-binary-copy
openshift-multus | kubelet | multus-additional-cni-plugins-2scbl | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:77f7e33000484395db2216d431d7f91158cbb0ddee564b65288f4ec47e3188b8"
kube-system | kubelet | konnectivity-agent-7lh45 | Created | Created container: konnectivity-agent
openshift-multus | kubelet | multus-additional-cni-plugins-sg7kx | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c4d9f8fd250636b548d533d8f0af8bd5494ad6e5026569cefc634f0283d50df"
openshift-multus | kubelet | multus-additional-cni-plugins-sg7kx | Started | Started container routeoverride-cni
openshift-multus | kubelet | multus-additional-cni-plugins-sg7kx | Created | Created container: routeoverride-cni
openshift-cluster-csi-drivers | kubelet | aws-ebs-csi-driver-node-kmhfn | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:232019e2eb8e13139570277b223ff822086fa83edc73958cbf919d6b57118068"
openshift-cluster-csi-drivers | kubelet | aws-ebs-csi-driver-node-kmhfn | Started | Started container csi-node-driver-registrar
openshift-cluster-csi-drivers | kubelet | aws-ebs-csi-driver-node-kmhfn | Created | Created container: csi-node-driver-registrar
openshift-cluster-csi-drivers | kubelet | aws-ebs-csi-driver-node-kmhfn | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49ae6b645f147135aff6d0f464fcd64972abf57733f027318ca7376691eece01" in 984ms (984ms including waiting). Image size: 426505480 bytes.
openshift-cluster-storage-operator | storage-status-controller-statussyncer_storage | aws-cloud-controller-manager | OperatorStatusChanged | (combined from similar events): Status for clusteroperator/storage changed: Progressing message changed from "AWSEBSCSIDriverOperatorCRProgressing: AWSEBSDriverNodeServiceControllerProgressing: Waiting for DaemonSet to deploy node pods\nVolumeDataSourceValidatorDeploymentControllerProgressing: Waiting for Deployment to deploy pods" to "VolumeDataSourceValidatorDeploymentControllerProgressing: Waiting for Deployment to deploy pods" (x21)
openshift-cluster-csi-drivers | kubelet | aws-ebs-csi-driver-node-kmhfn | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:232019e2eb8e13139570277b223ff822086fa83edc73958cbf919d6b57118068" in 1.013s (1.013s including waiting). Image size: 426337527 bytes.
openshift-network-diagnostics | kubelet | network-check-target-bvrrk | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? (x16)
openshift-cluster-csi-drivers | kubelet | aws-ebs-csi-driver-node-kmhfn | Created | Created container: csi-liveness-probe
openshift-ovn-kubernetes | kubelet | ovnkube-node-4bfdn | Created | Created container: sbdb
openshift-cluster-csi-drivers | kubelet | aws-ebs-csi-driver-node-kmhfn | Started | Started container csi-liveness-probe
openshift-ovn-kubernetes | kubelet | ovnkube-node-4bfdn | Started | Started container sbdb
openshift-multus | kubelet | network-metrics-daemon-9nk69 | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? (x16)
openshift-ovn-kubernetes | kubelet | ovnkube-node-4bfdn | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c303211b1d12d2daa9d18ed470194f6ce86e46174cc35d12b38f06834cb65cb0" already present on machine
default | ovnkube-csr-approver-controller | csr-8p6h9 | CSRApproved | CSR "csr-8p6h9" has been approved
kube-system | kubelet | global-pull-secret-syncer-t86n6 | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? (x13)
kube-system | kubelet | global-pull-secret-syncer-6fwnt | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a86fd02d09596be124562146df9ab5bf33cd7cdfde29f701524b250a0e8beec0"
openshift-ingress-canary | daemonset-controller | ingress-canary | SuccessfulCreate | Created pod: ingress-canary-nlf5r
openshift-dns | daemonset-controller | dns-default | SuccessfulCreate | Created pod: dns-default-9thxk
kube-system | multus | global-pull-secret-syncer-6fwnt | AddedInterface | Add eth0 [10.132.0.4/23] from ovn-kubernetes
default | kubelet | ip-10-0-137-228.ec2.internal | NodeReady | Node ip-10-0-137-228.ec2.internal status is now: NodeReady
openshift-ovn-kubernetes | kubelet | ovnkube-node-4bfdn | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c303211b1d12d2daa9d18ed470194f6ce86e46174cc35d12b38f06834cb65cb0" already present on machine
openshift-multus | kubelet | multus-additional-cni-plugins-2scbl | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:77f7e33000484395db2216d431d7f91158cbb0ddee564b65288f4ec47e3188b8" in 4.414s (4.414s including waiting). Image size: 727300480 bytes.
openshift-multus | kubelet | multus-additional-cni-plugins-2scbl | Created | Created container: cni-plugins
openshift-multus | kubelet | multus-additional-cni-plugins-2scbl | Started | Started container cni-plugins
default | ovnkube-csr-approver-controller | csr-gqfsc | CSRApproved | CSR "csr-gqfsc" has been approved
openshift-multus | kubelet | multus-additional-cni-plugins-2scbl | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91034a2e8fa729a060ad18831a3c6e5de5d2b7b3de437b198ddc24fcb724dcf6"
openshift-ovn-kubernetes | kubelet | ovnkube-node-4bfdn | Created | Created container: ovnkube-controller
openshift-network-diagnostics | kubelet | network-check-target-bvrrk | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:321285dca5a2f9b911e21badd1e51ce49841ddc45c5c859b3a29f7982d7376cb"
openshift-network-diagnostics | multus | network-check-target-bvrrk | AddedInterface | Add eth0 [10.133.0.3/23] from ovn-kubernetes
openshift-ovn-kubernetes | kubelet | ovnkube-node-4bfdn | Started | Started container ovnkube-controller
openshift-multus | kubelet | multus-additional-cni-plugins-2scbl | Created | Created container: bond-cni-plugin
openshift-multus | kubelet | multus-additional-cni-plugins-2scbl | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91034a2e8fa729a060ad18831a3c6e5de5d2b7b3de437b198ddc24fcb724dcf6" in 809ms (809ms including waiting). Image size: 412926967 bytes.
openshift-multus | kubelet | multus-additional-cni-plugins-2scbl | Started | Started container bond-cni-plugin
openshift-multus | kubelet | multus-additional-cni-plugins-2scbl | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:18d788b8deb049d24dfdb101371f6d2211a5e731bacd64b08adb97f66b6c4eb1"
openshift-multus | kubelet | multus-additional-cni-plugins-2scbl | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:18d788b8deb049d24dfdb101371f6d2211a5e731bacd64b08adb97f66b6c4eb1" in 605ms (605ms including waiting). Image size: 408523640 bytes.
openshift-multus | kubelet | multus-additional-cni-plugins-2scbl | Created | Created container: routeoverride-cni
openshift-multus | kubelet | multus-additional-cni-plugins-2scbl | Started | Started container routeoverride-cni
openshift-multus | kubelet | multus-additional-cni-plugins-2scbl | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c4d9f8fd250636b548d533d8f0af8bd5494ad6e5026569cefc634f0283d50df"
openshift-multus | kubelet | multus-additional-cni-plugins-sg7kx | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c4d9f8fd250636b548d533d8f0af8bd5494ad6e5026569cefc634f0283d50df" already present on machine
openshift-multus | kubelet | multus-additional-cni-plugins-sg7kx | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c4d9f8fd250636b548d533d8f0af8bd5494ad6e5026569cefc634f0283d50df" in 6.646s (6.646s including waiting). Image size: 974678236 bytes.
openshift-multus | kubelet | multus-additional-cni-plugins-sg7kx | Created | Created container: whereabouts-cni-bincopy
openshift-multus | kubelet | multus-additional-cni-plugins-sg7kx | Started | Started container whereabouts-cni-bincopy
kube-system | kubelet | global-pull-secret-syncer-6fwnt | Started | Started container global-pull-secret-syncer
openshift-multus | kubelet | multus-additional-cni-plugins-sg7kx | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a61807a85f6a37f17eb42644bbc038bc04fc489626435da28b0d2c4a30f7e02a" already present on machine
kube-system | kubelet | global-pull-secret-syncer-6fwnt | Created | Created container: global-pull-secret-syncer
openshift-multus | kubelet | multus-additional-cni-plugins-sg7kx | Created | Created container: whereabouts-cni
openshift-multus | kubelet | multus-additional-cni-plugins-sg7kx | Started | Started container whereabouts-cni
kube-system | kubelet | global-pull-secret-syncer-6fwnt | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a86fd02d09596be124562146df9ab5bf33cd7cdfde29f701524b250a0e8beec0" in 5.303s (5.303s including waiting). Image size: 753864795 bytes.
openshift-network-diagnostics | kubelet | network-check-target-w22tj | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? (x16)
openshift-multus | kubelet | multus-additional-cni-plugins-sg7kx | Created | Created container: kube-multus-additional-cni-plugins
openshift-multus | kubelet | network-metrics-daemon-8hzhw | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? (x16)
default | ovnkube-csr-approver-controller | csr-8frr5 | CSRApproved | CSR "csr-8frr5" has been approved
openshift-dns | daemonset-controller | dns-default | SuccessfulCreate | Created pod: dns-default-cmsst
default | kubelet | ip-10-0-141-167.ec2.internal | NodeReady | Node ip-10-0-141-167.ec2.internal status is now: NodeReady
openshift-network-diagnostics | kubelet | network-check-target-bvrrk | Started | Started container network-check-target-container
openshift-ingress-canary | daemonset-controller | ingress-canary | SuccessfulCreate | Created pod: ingress-canary-ffzqh
openshift-network-diagnostics | kubelet | network-check-target-bvrrk | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:321285dca5a2f9b911e21badd1e51ce49841ddc45c5c859b3a29f7982d7376cb" in 6.242s (6.242s including waiting). Image size: 644526840 bytes.
openshift-network-diagnostics | kubelet | network-check-target-bvrrk | Created | Created container: network-check-target-container
kube-system | kubelet | global-pull-secret-syncer-t86n6 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a86fd02d09596be124562146df9ab5bf33cd7cdfde29f701524b250a0e8beec0"
kube-system | multus | global-pull-secret-syncer-t86n6 | AddedInterface | Add eth0 [10.133.0.5/23] from ovn-kubernetes
openshift-network-diagnostics | multus | network-check-target-w22tj | AddedInterface | Add eth0 [10.134.0.4/23] from ovn-kubernetes
openshift-network-diagnostics | kubelet | network-check-target-w22tj | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:321285dca5a2f9b911e21badd1e51ce49841ddc45c5c859b3a29f7982d7376cb"
openshift-multus | kubelet | multus-additional-cni-plugins-2scbl | Started | Started container whereabouts-cni-bincopy
openshift-multus | kubelet | multus-additional-cni-plugins-2scbl | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c4d9f8fd250636b548d533d8f0af8bd5494ad6e5026569cefc634f0283d50df" in 6.089s (6.089s including waiting). Image size: 974678236 bytes.
openshift-multus | kubelet | multus-additional-cni-plugins-2scbl | Created | Created container: whereabouts-cni-bincopy
openshift-multus | kubelet | multus-additional-cni-plugins-2scbl | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c4d9f8fd250636b548d533d8f0af8bd5494ad6e5026569cefc634f0283d50df" already present on machine
openshift-multus | kubelet | multus-additional-cni-plugins-2scbl | Started | Started container whereabouts-cni
openshift-multus | kubelet | multus-additional-cni-plugins-2scbl | Created | Created container: whereabouts-cni
openshift-multus | kubelet | multus-additional-cni-plugins-2scbl | Created | Created container: kube-multus-additional-cni-plugins
openshift-multus | kubelet | multus-additional-cni-plugins-2scbl | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a61807a85f6a37f17eb42644bbc038bc04fc489626435da28b0d2c4a30f7e02a" already present on machine
kube-system | kubelet | global-pull-secret-syncer-t86n6 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a86fd02d09596be124562146df9ab5bf33cd7cdfde29f701524b250a0e8beec0" in 4.412s (4.412s including waiting). Image size: 753864795 bytes.
openshift-network-diagnostics | kubelet | network-check-target-w22tj | Created | Created container: network-check-target-container
kube-system | kubelet | global-pull-secret-syncer-t86n6 | Started | Started container global-pull-secret-syncer
kube-system | kubelet | global-pull-secret-syncer-t86n6 | Created | Created container: global-pull-secret-syncer
openshift-network-diagnostics | kubelet | network-check-target-w22tj | Started | Started container network-check-target-container
openshift-network-diagnostics | kubelet | network-check-target-w22tj | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:321285dca5a2f9b911e21badd1e51ce49841ddc45c5c859b3a29f7982d7376cb" in 3.018s (3.018s including waiting). Image size: 644526840 bytes.
openshift-network-diagnostics | kubelet | network-check-target-j6s9c | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:321285dca5a2f9b911e21badd1e51ce49841ddc45c5c859b3a29f7982d7376cb"
openshift-network-diagnostics | multus | network-check-target-j6s9c | AddedInterface | Add eth0 [10.132.0.3/23] from ovn-kubernetes
openshift-network-diagnostics | kubelet | network-check-target-j6s9c | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:321285dca5a2f9b911e21badd1e51ce49841ddc45c5c859b3a29f7982d7376cb" in 2.586s (2.586s including waiting). Image size: 644526840 bytes.
openshift-network-diagnostics | kubelet | network-check-target-j6s9c | Started | Started container network-check-target-container
openshift-network-diagnostics | kubelet | network-check-target-j6s9c | Created | Created container: network-check-target-container
openshift-dns | kubelet | dns-default-rb7d6 | FailedMount | MountVolume.SetUp failed for volume "metrics-tls" : secret "dns-default-metrics-tls" not found (x8)
openshift-ingress-canary | kubelet | ingress-canary-4lslj | FailedMount | MountVolume.SetUp failed for volume "cert" : secret "canary-serving-cert" not found (x8)
openshift-image-registry | kubelet | image-registry-66f5f8d5cd-rgqhw | FailedMount | MountVolume.SetUp failed for volume "registry-tls" : secret "image-registry-tls" not found (x8)
openshift-dns | kubelet | dns-default-9thxk | FailedMount | MountVolume.SetUp failed for volume "metrics-tls" : secret "dns-default-metrics-tls" not found (x8)
openshift-ingress-canary | kubelet | ingress-canary-nlf5r | FailedMount | MountVolume.SetUp failed for volume "cert" : secret "canary-serving-cert" not found (x8)
openshift-ingress-canary | kubelet | ingress-canary-ffzqh | FailedMount | MountVolume.SetUp failed for volume "cert" : secret "canary-serving-cert" not found (x8)
openshift-dns | kubelet | dns-default-cmsst | FailedMount | MountVolume.SetUp failed for volume "metrics-tls" : secret "dns-default-metrics-tls" not found (x8)
openshift-monitoring | replicaset-controller | cluster-monitoring-operator-75587bd455 | SuccessfulCreate | Created pod: cluster-monitoring-operator-75587bd455-6p57k
openshift-insights | replicaset-controller | insights-operator-585dfdc468 | SuccessfulCreate | Created pod: insights-operator-585dfdc468-w75nz
openshift-ingress | replicaset-controller | router-default-589c889464 | SuccessfulCreate | Created pod: router-default-589c889464-99f7x
openshift-kube-storage-version-migrator-operator | replicaset-controller | kube-storage-version-migrator-operator-6769c5d45 | SuccessfulCreate | Created pod: kube-storage-version-migrator-operator-6769c5d45-gjmkn
openshift-cluster-storage-operator | replicaset-controller | volume-data-source-validator-7c6cbb6c87 | SuccessfulCreate | Created pod: volume-data-source-validator-7c6cbb6c87-m7bf5
openshift-cluster-samples-operator | replicaset-controller | cluster-samples-operator-6dc5bdb6b4 | SuccessfulCreate | Created pod: cluster-samples-operator-6dc5bdb6b4-qm2z5
openshift-image-registry | replicaset-controller | image-registry-6fd4d896fc | SuccessfulCreate | Created pod: image-registry-6fd4d896fc-ltlnc
openshift-network-diagnostics | replicaset-controller | network-check-source-8894fc9bd | SuccessfulCreate | Created pod: network-check-source-8894fc9bd-g7h5c
openshift-service-ca-operator | replicaset-controller | service-ca-operator-d6fc45fc5 | SuccessfulCreate | Created pod: service-ca-operator-d6fc45fc5-f2jrk
openshift-console-operator | replicaset-controller | console-operator-9d4b6777b | SuccessfulCreate | Created pod: console-operator-9d4b6777b-jztj7
openshift-cluster-storage-operator | kubelet | volume-data-source-validator-7c6cbb6c87-m7bf5 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c723e377c3175f59c650292a466b33d439addfb0d2b23b3bc11a8b8ddacac301"
openshift-network-diagnostics | kubelet | network-check-source-8894fc9bd-g7h5c | Started | Started container check-endpoints
openshift-console-operator | kubelet | console-operator-9d4b6777b-jztj7 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9266efef4f547d3c51222430c7e0c69f1db8790c8bb649c08a211afecb9045d9"
openshift-service-ca-operator | kubelet | service-ca-operator-d6fc45fc5-f2jrk | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9daea498e11f31b7b26c352f210883c28fd26c3fe4913f264f18c2debd6e7fa9"
openshift-network-diagnostics | multus | network-check-source-8894fc9bd-g7h5c | AddedInterface | Add eth0 [10.133.0.10/23] from ovn-kubernetes
openshift-service-ca-operator | multus | service-ca-operator-d6fc45fc5-f2jrk | AddedInterface | Add eth0 [10.134.0.9/23] from ovn-kubernetes
openshift-console-operator | multus | console-operator-9d4b6777b-jztj7 | AddedInterface | Add eth0 [10.133.0.9/23] from ovn-kubernetes
openshift-insights | kubelet | insights-operator-585dfdc468-w75nz | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:657372851aabfea0afc7153b56cc92699f5f4fd3602d02e0536b7cf6db5a3003"
openshift-insights | multus | insights-operator-585dfdc468-w75nz | AddedInterface | Add eth0 [10.133.0.12/23] from ovn-kubernetes
openshift-network-diagnostics | kubelet | network-check-source-8894fc9bd-g7h5c | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:321285dca5a2f9b911e21badd1e51ce49841ddc45c5c859b3a29f7982d7376cb" already present on machine
openshift-network-diagnostics | kubelet | network-check-source-8894fc9bd-g7h5c | Created | Created container: check-endpoints
openshift-kube-storage-version-migrator-operator | multus | kube-storage-version-migrator-operator-6769c5d45-gjmkn | AddedInterface | Add eth0 [10.133.0.11/23] from ovn-kubernetes
openshift-cluster-storage-operator | multus | volume-data-source-validator-7c6cbb6c87-m7bf5 | AddedInterface | Add eth0 [10.133.0.8/23] from ovn-kubernetes
openshift-kube-storage-version-migrator-operator | kubelet | kube-storage-version-migrator-operator-6769c5d45-gjmkn | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f76a74458025ce3949072ad4a42dff7b49b25dcf13de81204e47f697e9cb8523"
openshift-cluster-storage-operator | data-source-validator-leader/volume-data-source-validator-7c6cbb6c87-m7bf5 | data-source-validator-leader | LeaderElection | volume-data-source-validator-7c6cbb6c87-m7bf5 became leader
openshift-cluster-storage-operator | kubelet | volume-data-source-validator-7c6cbb6c87-m7bf5 | Created | Created container: volume-data-source-validator
openshift-cluster-storage-operator | kubelet | volume-data-source-validator-7c6cbb6c87-m7bf5 | Started | Started container volume-data-source-validator
openshift-cluster-storage-operator | kubelet | volume-data-source-validator-7c6cbb6c87-m7bf5 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c723e377c3175f59c650292a466b33d439addfb0d2b23b3bc11a8b8ddacac301" in 1.514s (1.514s including waiting). Image size: 466390858 bytes.
openshift-service-ca-operator | kubelet | service-ca-operator-d6fc45fc5-f2jrk | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9daea498e11f31b7b26c352f210883c28fd26c3fe4913f264f18c2debd6e7fa9" in 2.184s (2.184s including waiting). Image size: 525417835 bytes.
openshift-service-ca-operator | service-ca-operator | service-ca-operator-lock | LeaderElection | service-ca-operator-d6fc45fc5-f2jrk_f7ab75d6-5eaa-4a67-a8fa-9781c25b53a0 became leader
openshift-insights | kubelet | insights-operator-585dfdc468-w75nz | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:657372851aabfea0afc7153b56cc92699f5f4fd3602d02e0536b7cf6db5a3003" in 3.013s (3.013s including waiting). Image size: 523364984 bytes.
openshift-service-ca-operator | service-ca-operator-status-controller-statussyncer_service-ca | service-ca-operator | OperatorStatusChanged | Status for clusteroperator/service-ca changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [] to [{"operator.openshift.io" "servicecas" "" "cluster"} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-service-ca-operator"} {"" "namespaces" "" "openshift-service-ca"}]
openshift-service-ca-operator | service-ca-operator | service-ca-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdditionalRoutingCapabilities", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BuildCSIVolumes", "CPMSMachineNamePrefix", "ConsolePluginContentSecurityPolicy", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GatewayAPI", "GatewayAPIController", "HighlyAvailableArbiter", "ImageVolume", "IngressControllerLBSubnetsAWS", "KMSv1", "MachineConfigNodes", "ManagedBootImages", "ManagedBootImagesAWS", "MetricsCollectionProfiles", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "PinnedImages", "PreconfiguredUDNAddresses", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SetEIPForNLBIngressController", "SigstoreImageVerification", "StoragePerformantSecurityPolicy", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiDisk", "VSphereMultiNetworks"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AWSClusterHostedDNSInstall", "AWSDedicatedHosts", "AWSServiceLBNetworkSecurityGroup", "AutomatedEtcdBackup", "AzureClusterHostedDNSInstall", "AzureDedicatedHosts", "AzureMultiDisk", "BootImageSkewEnforcement", "BootcNodeManagement", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ClusterVersionOperatorConfiguration", "DNSNameResolver", "DualReplica", "DyanmicServiceEndpointIBMCloud", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "Example2", "ExternalSnapshotMetadata", "GCPClusterHostedDNS", "GCPClusterHostedDNSInstall", "GCPCustomAPIEndpoints", "GCPCustomAPIEndpointsInstall", "ImageModeStatusReporting", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "IrreconcilableMachineConfig", "KMSEncryptionProvider", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "ManagedBootImagesAzure", "ManagedBootImagesvSphere", "MaxUnavailableStatefulSet", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "MultiDiskSetup", "MutatingAdmissionPolicy", "NewOLM", "NewOLMCatalogdAPIV1Metas", "NewOLMOwnSingleNamespace", "NewOLMPreflightPermissionChecks", "NewOLMWebhookProviderOpenshiftServiceCA", "NoRegistryClusterOperations", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "SELinuxMount", "ShortCertRotation", "SignatureStores", "SigstoreImageVerificationPKI", "TranslateStreamCloseWebsocketRequests", "VSphereConfigurableMaxAllowedBlockVolumesPerNode", "VSphereHostVMGroupZonal", "VSphereMixedNodeEnv", "VolumeAttributesClass", "VolumeGroupSnapshot"}}
openshift-service-ca-operator | service-ca-operator-status-controller-statussyncer_service-ca | service-ca-operator | OperatorStatusChanged | Status for clusteroperator/service-ca changed: Degraded changed from Unknown to False ("All is well")
openshift-kube-storage-version-migrator-operator | kubelet | kube-storage-version-migrator-operator-6769c5d45-gjmkn | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f76a74458025ce3949072ad4a42dff7b49b25dcf13de81204e47f697e9cb8523" in 3.021s (3.021s including waiting). Image size: 516739625 bytes.
openshift-console-operator | kubelet | console-operator-9d4b6777b-jztj7 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9266efef4f547d3c51222430c7e0c69f1db8790c8bb649c08a211afecb9045d9" in 3.33s (3.33s including waiting). Image size: 523904184 bytes.
openshift-service-ca-operator | service-ca-operator | service-ca-operator | NamespaceCreated | Created Namespace/openshift-service-ca because it was missing
openshift-service-ca-operator | service-ca-operator | service-ca-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/system:openshift:controller:service-ca -n openshift-service-ca because it was missing
openshift-service-ca-operator | service-ca-operator | service-ca-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:controller:service-ca because it was missing
openshift-service-ca-operator | service-ca-operator | service-ca-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/system:openshift:controller:service-ca -n openshift-service-ca because it was missing
openshift-service-ca-operator | service-ca-operator | service-ca-operator | ServiceAccountCreated | Created ServiceAccount/service-ca -n openshift-service-ca because it was missing
openshift-insights | openshift-insights-operator | insights-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdditionalRoutingCapabilities", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BuildCSIVolumes", "CPMSMachineNamePrefix", "ConsolePluginContentSecurityPolicy", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GatewayAPI", "GatewayAPIController", "HighlyAvailableArbiter", "ImageVolume", "IngressControllerLBSubnetsAWS", "KMSv1", "MachineConfigNodes", "ManagedBootImages", "ManagedBootImagesAWS", "MetricsCollectionProfiles", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "PinnedImages", "PreconfiguredUDNAddresses", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SetEIPForNLBIngressController", "SigstoreImageVerification", "StoragePerformantSecurityPolicy", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiDisk", "VSphereMultiNetworks"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AWSClusterHostedDNSInstall", "AWSDedicatedHosts", "AWSServiceLBNetworkSecurityGroup", "AutomatedEtcdBackup", "AzureClusterHostedDNSInstall", "AzureDedicatedHosts", "AzureMultiDisk", "BootImageSkewEnforcement", "BootcNodeManagement", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ClusterVersionOperatorConfiguration", "DNSNameResolver", "DualReplica", "DyanmicServiceEndpointIBMCloud", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "Example2", "ExternalSnapshotMetadata", "GCPClusterHostedDNS", "GCPClusterHostedDNSInstall", "GCPCustomAPIEndpoints", "GCPCustomAPIEndpointsInstall", "ImageModeStatusReporting", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "IrreconcilableMachineConfig", "KMSEncryptionProvider", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "ManagedBootImagesAzure", "ManagedBootImagesvSphere", "MaxUnavailableStatefulSet", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "MultiDiskSetup", "MutatingAdmissionPolicy", "NewOLM", "NewOLMCatalogdAPIV1Metas", "NewOLMOwnSingleNamespace", "NewOLMPreflightPermissionChecks", "NewOLMWebhookProviderOpenshiftServiceCA", "NoRegistryClusterOperations", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "SELinuxMount", "ShortCertRotation", "SignatureStores", "SigstoreImageVerificationPKI", "TranslateStreamCloseWebsocketRequests", "VSphereConfigurableMaxAllowedBlockVolumesPerNode", "VSphereHostVMGroupZonal", "VSphereMixedNodeEnv", "VolumeAttributesClass", "VolumeGroupSnapshot"}}
openshift-service-ca-operator | service-ca-operator | service-ca-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/system:openshift:controller:service-ca because it was missing
openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator-lock | LeaderElection | kube-storage-version-migrator-operator-6769c5d45-gjmkn_4b1a36a6-325c-4eb3-8b84-85115f05299d became leader
openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | | CreatedSCCRanges | created SCC ranges for openshift-service-ca namespace
openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator-kubestorageversionmigratorstaticresources-kubestorageversionmigratorstaticresources-staticresources | kube-storage-version-migrator-operator | ServiceAccountCreated | Created ServiceAccount/kube-storage-version-migrator-sa -n openshift-kube-storage-version-migrator because it was missing
openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | | CreatedSCCRanges | created SCC ranges for openshift-kube-storage-version-migrator namespace

openshift-service-ca-operator

service-ca-operator

service-ca-operator

SecretCreated

Created Secret/signing-key -n openshift-service-ca because it was missing

openshift-service-ca-operator

service-ca-operator

service-ca-operator

ConfigMapCreated

Created ConfigMap/signing-cabundle -n openshift-service-ca because it was missing

openshift-service-ca-operator

service-ca-operator

service-ca-operator

DeploymentCreated

Created Deployment.apps/service-ca -n openshift-service-ca because it was missing

openshift-service-ca-operator

service-ca-operator-status-controller-statussyncer_service-ca

service-ca-operator

OperatorStatusChanged

Status for clusteroperator/service-ca changed: Progressing changed from Unknown to True ("Progressing: \nProgressing: service-ca does not have available replicas"),Available changed from Unknown to True ("All is well"),Upgradeable changed from Unknown to True ("All is well")

openshift-service-ca

replicaset-controller

service-ca-865cb79987

SuccessfulCreate

Created pod: service-ca-865cb79987-fj94h

openshift-kube-storage-version-migrator-operator

openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator

kube-storage-version-migrator-operator

OperatorStatusChanged

Status for clusteroperator/kube-storage-version-migrator changed: Degraded changed from Unknown to False ("All is well")
(x2)

openshift-kube-storage-version-migrator-operator

openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator

kube-storage-version-migrator-operator

OperatorVersionChanged

clusteroperator/kube-storage-version-migrator version "operator" changed from "" to "4.20.19"

openshift-service-ca

deployment-controller

service-ca

ScalingReplicaSet

Scaled up replica set service-ca-865cb79987 from 0 to 1

openshift-kube-storage-version-migrator-operator

openshift-kube-storage-version-migrator-operator-kubestorageversionmigratorstaticresources-kubestorageversionmigratorstaticresources-staticresources

kube-storage-version-migrator-operator

NamespaceCreated

Created Namespace/openshift-kube-storage-version-migrator because it was missing

openshift-kube-storage-version-migrator-operator

openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator

kube-storage-version-migrator-operator

OperatorStatusChanged

Status for clusteroperator/kube-storage-version-migrator changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.versions changed from [] to [{"operator" "4.20.19"}]

openshift-kube-storage-version-migrator-operator

openshift-kube-storage-version-migrator-operator-kubestorageversionmigratorstaticresources-kubestorageversionmigratorstaticresources-staticresources

kube-storage-version-migrator-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/storage-version-migration-migrator because it was missing

openshift-kube-storage-version-migrator-operator

openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator

kube-storage-version-migrator-operator

OperatorStatusChanged

Status for clusteroperator/kube-storage-version-migrator changed: Upgradeable changed from Unknown to True ("All is well")

openshift-service-ca

kubelet

service-ca-865cb79987-fj94h

Created

Created container: service-ca-controller

openshift-kube-storage-version-migrator-operator

openshift-kube-storage-version-migrator-operator-kubestorageversionmigrator-deployment-controller--kubestorageversionmigrator

kube-storage-version-migrator-operator

DeploymentCreated

Created Deployment.apps/migrator -n openshift-kube-storage-version-migrator because it was missing

openshift-service-ca

kubelet

service-ca-865cb79987-fj94h

Started

Started container service-ca-controller

openshift-kube-storage-version-migrator

multus

migrator-74bb7799d9-ngnjp

AddedInterface

Add eth0 [10.134.0.11/23] from ovn-kubernetes

openshift-kube-storage-version-migrator

kubelet

migrator-74bb7799d9-ngnjp

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:14318c2321b81771440ed4021e547da23e03f4b3d00957941d2293ccaac35e47"

openshift-service-ca

multus

service-ca-865cb79987-fj94h

AddedInterface

Add eth0 [10.134.0.10/23] from ovn-kubernetes

openshift-kube-storage-version-migrator

replicaset-controller

migrator-74bb7799d9

SuccessfulCreate

Created pod: migrator-74bb7799d9-ngnjp

openshift-service-ca

kubelet

service-ca-865cb79987-fj94h

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9daea498e11f31b7b26c352f210883c28fd26c3fe4913f264f18c2debd6e7fa9" already present on machine

openshift-kube-storage-version-migrator

deployment-controller

migrator

ScalingReplicaSet

Scaled up replica set migrator-74bb7799d9 from 0 to 1

openshift-service-ca-operator

service-ca-operator

service-ca-operator

DeploymentUpdated

Updated Deployment.apps/service-ca -n openshift-service-ca because it changed

openshift-kube-storage-version-migrator-operator

openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator

kube-storage-version-migrator-operator

OperatorStatusChanged

Status for clusteroperator/kube-storage-version-migrator changed: Progressing message changed from "KubeStorageVersionMigratorProgressing: Waiting for Deployment to act on changes" to "KubeStorageVersionMigratorProgressing: Waiting for Deployment to deploy pods"

openshift-service-ca-operator

service-ca-operator-resource-sync-controller-resourcesynccontroller

service-ca-operator

ConfigMapCreated

Created ConfigMap/service-ca -n openshift-config-managed because it was missing

openshift-kube-storage-version-migrator-operator

openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator

kube-storage-version-migrator-operator

OperatorStatusChanged

Status for clusteroperator/kube-storage-version-migrator changed: Progressing changed from Unknown to True ("KubeStorageVersionMigratorProgressing: Waiting for Deployment to act on changes"),Available changed from Unknown to False ("KubeStorageVersionMigratorAvailable: Waiting for Deployment")

openshift-kube-storage-version-migrator

kubelet

migrator-74bb7799d9-ngnjp

Created

Created container: migrator
(x5)

openshift-ingress

kubelet

router-default-589c889464-99f7x

FailedMount

MountVolume.SetUp failed for volume "service-ca-bundle" : configmap references non-existent config key: service-ca.crt

openshift-kube-storage-version-migrator

kubelet

migrator-74bb7799d9-ngnjp

Started

Started container migrator
(x5)

openshift-cluster-samples-operator

kubelet

cluster-samples-operator-6dc5bdb6b4-qm2z5

FailedMount

MountVolume.SetUp failed for volume "samples-operator-tls" : secret "samples-operator-tls" not found

openshift-service-ca

service-ca-controller

service-ca-controller-lock

LeaderElection

service-ca-865cb79987-fj94h_7ef1b360-b55b-4377-a74f-80e7467f0fc4 became leader

openshift-kube-storage-version-migrator

kubelet

migrator-74bb7799d9-ngnjp

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:14318c2321b81771440ed4021e547da23e03f4b3d00957941d2293ccaac35e47" already present on machine
(x3)

openshift-multus

kubelet

network-metrics-daemon-9nk69

FailedMount

MountVolume.SetUp failed for volume "metrics-certs" : secret "metrics-daemon-secret" not found
(x5)

openshift-image-registry

kubelet

image-registry-6fd4d896fc-ltlnc

FailedMount

MountVolume.SetUp failed for volume "registry-tls" : secret "image-registry-tls" not found
(x5)

openshift-ingress

kubelet

router-default-589c889464-99f7x

FailedMount

MountVolume.SetUp failed for volume "metrics-certs" : secret "router-metrics-certs-default" not found
(x5)

openshift-monitoring

kubelet

cluster-monitoring-operator-75587bd455-6p57k

FailedMount

MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" : secret "cluster-monitoring-operator-tls" not found

openshift-kube-storage-version-migrator

kubelet

migrator-74bb7799d9-ngnjp

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:14318c2321b81771440ed4021e547da23e03f4b3d00957941d2293ccaac35e47" in 1.378s (1.378s including waiting). Image size: 444987648 bytes.

openshift-kube-storage-version-migrator

kubelet

migrator-74bb7799d9-ngnjp

Created

Created container: graceful-termination

openshift-kube-storage-version-migrator

kubelet

migrator-74bb7799d9-ngnjp

Started

Started container graceful-termination
(x3)

openshift-console-operator

kubelet

console-operator-9d4b6777b-jztj7

BackOff

Back-off restarting failed container console-operator in pod console-operator-9d4b6777b-jztj7_openshift-console-operator(8fe0a454-c595-4c12-b2ff-afc448fddec1)
(x6)

openshift-image-registry

image-registry-operator

openshift-image-registry

DeploymentUpdated

Updated Deployment.apps/image-registry -n openshift-image-registry because it changed

openshift-cluster-samples-operator

kubelet

cluster-samples-operator-6dc5bdb6b4-qm2z5

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc03fe096bd1c553d4fe69a2c0126a6b27d5f02ffd7696ada6c30e9e8d1c5e71"

openshift-cluster-samples-operator

multus

cluster-samples-operator-6dc5bdb6b4-qm2z5

AddedInterface

Add eth0 [10.134.0.7/23] from ovn-kubernetes
(x3)

openshift-multus

kubelet

network-metrics-daemon-8hzhw

FailedMount

MountVolume.SetUp failed for volume "metrics-certs" : secret "metrics-daemon-secret" not found

openshift-image-registry

multus

image-registry-6fd4d896fc-ltlnc

AddedInterface

Add eth0 [10.133.0.14/23] from ovn-kubernetes

openshift-monitoring

kubelet

cluster-monitoring-operator-75587bd455-6p57k

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:44f128d9f1230c738ba97316ddfb90ef7edb28f481c500334984c1058922a2fc"

openshift-ingress

multus

router-default-589c889464-99f7x

AddedInterface

Add eth0 [10.134.0.8/23] from ovn-kubernetes

openshift-monitoring

multus

cluster-monitoring-operator-75587bd455-6p57k

AddedInterface

Add eth0 [10.133.0.13/23] from ovn-kubernetes

openshift-ingress

kubelet

router-default-589c889464-99f7x

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:294af5c64228434d1ed6ee8ea3ac802e3c999aa847223e3b2efa18425a9fe421" already present on machine

openshift-ingress

kubelet

router-default-589c889464-99f7x

Created

Created container: router

openshift-ingress

kubelet

router-default-589c889464-99f7x

Started

Started container router

openshift-cluster-samples-operator

kubelet

cluster-samples-operator-6dc5bdb6b4-qm2z5

Started

Started container cluster-samples-operator-watch

openshift-monitoring

cluster-monitoring-operator-openshiftmonitoringmetricsserverclientcertrequester

cluster-monitoring-operator

NoValidCertificateFound

No valid client certificate for OpenShiftMonitoringMetricsServerClientCertRequester is found: unable to parse certificate: data does not contain any valid RSA or ECDSA certificates

openshift-kube-controller-manager

cluster-policy-controller-webhook-authenticator-cert-approver-csr-approver-controller-webhookauthenticatorcertapprover_csr-approver-controller

CSRApproval

The CSR "system:openshift:openshift-monitoring-tw2hv" has been approved

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

SecretCreated

Created Secret/alert-relabel-configs -n openshift-monitoring because it was missing

openshift-kube-controller-manager

cluster-policy-controller-webhook-authenticator-cert-approver-csr-approver-controller-webhookauthenticatorcertapprover_csr-approver-controller

CSRApproval

The CSR "system:openshift:openshift-monitoring-gg9v9" has been approved

openshift-kube-controller-manager

cluster-policy-controller-webhook-authenticator-cert-approver-csr-approver-controller-webhookauthenticatorcertapprover_csr-approver-controller

CSRApproval

The CSR "system:openshift:openshift-monitoring-5cmvk" has been approved

openshift-cluster-samples-operator

kubelet

cluster-samples-operator-6dc5bdb6b4-qm2z5

Started

Started container cluster-samples-operator

openshift-cluster-samples-operator

kubelet

cluster-samples-operator-6dc5bdb6b4-qm2z5

Created

Created container: cluster-samples-operator

openshift-cluster-samples-operator

kubelet

cluster-samples-operator-6dc5bdb6b4-qm2z5

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc03fe096bd1c553d4fe69a2c0126a6b27d5f02ffd7696ada6c30e9e8d1c5e71" in 1.817s (1.817s including waiting). Image size: 495982242 bytes.

openshift-monitoring

kubelet

cluster-monitoring-operator-75587bd455-6p57k

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:44f128d9f1230c738ba97316ddfb90ef7edb28f481c500334984c1058922a2fc" in 1.59s (1.59s including waiting). Image size: 499266277 bytes.

openshift-monitoring

kubelet

cluster-monitoring-operator-75587bd455-6p57k

Created

Created container: cluster-monitoring-operator

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/prometheus-operator because it was missing

openshift-monitoring

kubelet

cluster-monitoring-operator-75587bd455-6p57k

Started

Started container cluster-monitoring-operator

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{…} (same Enabled and Disabled feature-gate sets as in the first FeatureGatesInitialized event above)

openshift-monitoring

cluster-monitoring-operator-openshiftmonitoringclientcertrequester

cluster-monitoring-operator

NoValidCertificateFound

No valid client certificate for OpenShiftMonitoringClientCertRequester is found: unable to parse certificate: data does not contain any valid RSA or ECDSA certificates

openshift-monitoring

cluster-monitoring-operator-openshiftmonitoringtelemeterclientcertrequester

cluster-monitoring-operator

NoValidCertificateFound

No valid client certificate for OpenShiftMonitoringTelemeterClientCertRequester is found: unable to parse certificate: data does not contain any valid RSA or ECDSA certificates

openshift-cluster-samples-operator

kubelet

cluster-samples-operator-6dc5bdb6b4-qm2z5

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc03fe096bd1c553d4fe69a2c0126a6b27d5f02ffd7696ada6c30e9e8d1c5e71" already present on machine

openshift-cluster-samples-operator

kubelet

cluster-samples-operator-6dc5bdb6b4-qm2z5

Created

Created container: cluster-samples-operator-watch

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceAccountCreated

Created ServiceAccount/prometheus-operator-admission-webhook -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator-openshiftmonitoringclientcertrequester

cluster-monitoring-operator

CSRCreated

A csr "system:openshift:openshift-monitoring-5cmvk" is created for OpenShiftMonitoringClientCertRequester

openshift-monitoring

cluster-monitoring-operator-openshiftmonitoringtelemeterclientcertrequester

cluster-monitoring-operator

CSRCreated

A csr "system:openshift:openshift-monitoring-tw2hv" is created for OpenShiftMonitoringTelemeterClientCertRequester

openshift-monitoring

cluster-monitoring-operator-openshiftmonitoringmetricsserverclientcertrequester

cluster-monitoring-operator

CSRCreated

A csr "system:openshift:openshift-monitoring-gg9v9" is created for OpenShiftMonitoringMetricsServerClientCertRequester

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceCreated

Created Service/prometheus-operator-admission-webhook -n openshift-monitoring because it was missing

openshift-cluster-samples-operator

file-change-watchdog

cluster-samples-operator

FileChangeWatchdogStarted

Started watching files for process cluster-samples-operator[2]

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/prometheus-operator because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ConfigMapCreated

Created ConfigMap/metrics-client-ca -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceAccountCreated

Created ServiceAccount/prometheus-operator -n openshift-monitoring because it was missing
(x3)

openshift-console-operator

kubelet

console-operator-9d4b6777b-jztj7

Created

Created container: console-operator
(x2)

openshift-console-operator

kubelet

console-operator-9d4b6777b-jztj7

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9266efef4f547d3c51222430c7e0c69f1db8790c8bb649c08a211afecb9045d9" already present on machine
(x3)

openshift-console-operator

kubelet

console-operator-9d4b6777b-jztj7

Started

Started container console-operator

openshift-console-operator

console-operator-console-downloads-deployment-controller-consoledownloadsdeploymentsynccontroller

console-operator

DeploymentUpdated

Updated Deployment.apps/downloads -n openshift-console because it changed

openshift-console-operator

console-operator-console-pdb-controller-poddisruptionbudgetcontroller

console-operator

PodDisruptionBudgetCreated

Created PodDisruptionBudget.policy/console -n openshift-console because it was missing

openshift-console-operator

console-operator

console-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{…} (same Enabled and Disabled feature-gate sets as in the first FeatureGatesInitialized event above)
(x2)

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorVersionChanged

clusteroperator/console version "operator" changed from "" to "4.20.19"

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Degraded set to False ("All is well"),Progressing set to False ("All is well"),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [] to [{"console.openshift.io" "consoleplugins" "" "networking-console-plugin"} {"" "namespaces" "" "openshift-network-console"} {"operator.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "infrastructures" "" "cluster"} {"config.openshift.io" "proxies" "" "cluster"} {"config.openshift.io" "oauths" "" "cluster"} {"oauth.openshift.io" "oauthclients" "" "console"} {"" "namespaces" "" "openshift-console-operator"} {"" "namespaces" "" "openshift-console"} {"" "configmaps" "openshift-config-managed" "console-public"}],status.versions changed from [] to [{"operator" "4.20.19"}]

openshift-console-operator

console-operator

console-operator-lock

LeaderElection

console-operator-9d4b6777b-jztj7_1b974b0e-42d5-4725-ada1-55174d0658e7 became leader

openshift-console-operator

console-operator-downloads-pdb-controller-poddisruptionbudgetcontroller

console-operator

PodDisruptionBudgetCreated

Created PodDisruptionBudget.policy/downloads -n openshift-console because it was missing

openshift-console-operator

console-operator-resource-sync-controller-resourcesynccontroller

console-operator

ConfigMapCreated

Created ConfigMap/oauth-serving-cert -n openshift-console because it was missing

openshift-console-operator

console-operator-console-downloads-deployment-controller-consoledownloadsdeploymentsynccontroller

console-operator

DeploymentCreated

Created Deployment.apps/downloads -n openshift-console because it was missing

openshift-console-operator

console-operator-health-check-controller-healthcheckcontroller

console-operator

FastControllerResync

Controller "HealthCheckController" resync interval is set to 30s which might lead to client request throttling

openshift-console-operator

console-operator-console-service-controller-consoleservicecontroller

console-operator

ServiceCreated

Created Service/console -n openshift-console because it was missing

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Degraded message changed from "All is well" to "RouteHealthDegraded: route.route.openshift.io \"console\" not found\nOAuthClientsControllerDegraded: route.route.openshift.io \"console\" not found",Upgradeable changed from Unknown to True ("All is well")

openshift-console-operator

console-operator-resource-sync-controller-resourcesynccontroller

console-operator

ConfigMapCreated

Created ConfigMap/default-ingress-cert -n openshift-console because it was missing

openshift-console-operator

console-operator-oauthclient-secret-controller-oauthclientsecretcontroller

console-operator

SecretCreated

Created Secret/console-oauth-config -n openshift-console because it was missing

openshift-console-operator

console-operator-console-service-controller-consoleservicecontroller

console-operator

ServiceCreated

Created Service/downloads -n openshift-console because it was missing

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Degraded message changed from "RouteHealthDegraded: route.route.openshift.io \"console\" not found\nOAuthClientsControllerDegraded: route.route.openshift.io \"console\" not found" to "RouteHealthDegraded: route.route.openshift.io \"console\" not found\nOAuthClientsControllerDegraded: secret \"console-oauth-config\" not found"

openshift-image-registry

multus

image-registry-66f5f8d5cd-rgqhw

AddedInterface

Add eth0 [10.132.0.9/23] from ovn-kubernetes

openshift-dns

kubelet

dns-default-rb7d6

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c862e4b31529b530f930a8d0e2a75b53d2092f392a29d037d0312169e1d4a1ac"

openshift-dns

multus

dns-default-rb7d6

AddedInterface

Add eth0 [10.132.0.10/23] from ovn-kubernetes

openshift-ingress-canary

kubelet

ingress-canary-4lslj

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98c903cc8c2ea492f4c9047febaa42525c20c5b414d4de2ce3df5eb65ef899e2"

openshift-ingress-canary

multus

ingress-canary-4lslj

AddedInterface

Add eth0 [10.132.0.11/23] from ovn-kubernetes

openshift-dns

kubelet

dns-default-rb7d6

Created

Created container: dns

openshift-dns

kubelet

dns-default-rb7d6

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8e1f4bf2daa69c9e69766cebcfced43fa0ee926d4479becdc8a5b05b93ca6e81" already present on machine

openshift-dns

kubelet

dns-default-rb7d6

Created

Created container: kube-rbac-proxy

openshift-dns

kubelet

dns-default-rb7d6

Started

Started container kube-rbac-proxy

openshift-dns

kubelet

dns-default-rb7d6

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c862e4b31529b530f930a8d0e2a75b53d2092f392a29d037d0312169e1d4a1ac" in 1.972s (1.972s including waiting). Image size: 480938200 bytes.

openshift-dns

kubelet

dns-default-rb7d6

Started

Started container dns

openshift-ingress-canary

kubelet

ingress-canary-4lslj

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98c903cc8c2ea492f4c9047febaa42525c20c5b414d4de2ce3df5eb65ef899e2" in 1.906s (1.906s including waiting). Image size: 514858876 bytes.

openshift-ingress-canary

kubelet

ingress-canary-4lslj

Created

Created container: serve-healthcheck-canary

openshift-ingress-canary

kubelet

ingress-canary-4lslj

Started

Started container serve-healthcheck-canary

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Degraded message changed from "RouteHealthDegraded: route.route.openshift.io \"console\" not found\nOAuthClientsControllerDegraded: secret \"console-oauth-config\" not found" to "RouteHealthDegraded: route.route.openshift.io \"console\" not found\nOAuthClientsControllerDegraded: secret \"console-oauth-config\" not found\nDownloadsDefaultRouteSyncDegraded: no ingress for host downloads-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com in route downloads in namespace openshift-console",Upgradeable changed from True to False ("DownloadsDefaultRouteSyncUpgradeable: no ingress for host downloads-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com in route downloads in namespace openshift-console")

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Degraded message changed from "RouteHealthDegraded: route.route.openshift.io \"console\" not found\nOAuthClientsControllerDegraded: secret \"console-oauth-config\" not found\nDownloadsDefaultRouteSyncDegraded: no ingress for host downloads-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com in route downloads in namespace openshift-console" to "RouteHealthDegraded: route.route.openshift.io \"console\" not found\nOAuthClientsControllerDegraded: secret \"console-oauth-config\" not found\nDownloadsDefaultRouteSyncDegraded: no ingress for host downloads-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com in route downloads in namespace openshift-console\nDownloadsCustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom)",Upgradeable message changed from "DownloadsDefaultRouteSyncUpgradeable: no ingress for host downloads-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com in route downloads in namespace openshift-console" to "DownloadsCustomRouteSyncUpgradeable: the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom)\nDownloadsDefaultRouteSyncUpgradeable: no ingress for host downloads-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com in route downloads in namespace openshift-console"

openshift-console-operator

console-operator-console-operator-consoleoperator

console-operator

ConfigMapCreated

Created ConfigMap/console-config -n openshift-console because it was missing

openshift-console-operator

console-operator-console-operator-consoleoperator

console-operator

ConfigMapCreated

Created ConfigMap/console-public -n openshift-config-managed because it was missing

openshift-console-operator

console-operator-console-operator-consoleoperator

console-operator

DeploymentCreated

Created Deployment.apps/console -n openshift-console because it was missing

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Degraded message changed from "RouteHealthDegraded: route.route.openshift.io \"console\" not found\nOAuthClientsControllerDegraded: secret \"console-oauth-config\" not found\nDownloadsDefaultRouteSyncDegraded: no ingress for host downloads-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com in route downloads in namespace openshift-console\nDownloadsCustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom)" to "RouteHealthDegraded: route.route.openshift.io \"console\" not found\nOAuthClientsControllerDegraded: secret \"console-oauth-config\" not found\nDownloadsDefaultRouteSyncDegraded: no ingress for host downloads-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com in route downloads in namespace openshift-console\nDownloadsCustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom)\nConsoleCustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom)",Upgradeable message changed from "DownloadsCustomRouteSyncUpgradeable: the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom)\nDownloadsDefaultRouteSyncUpgradeable: no ingress for host downloads-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com in route downloads in namespace openshift-console" to "ConsoleCustomRouteSyncUpgradeable: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom)\nDownloadsCustomRouteSyncUpgradeable: the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom)\nDownloadsDefaultRouteSyncUpgradeable: no ingress for host downloads-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com in route downloads in namespace openshift-console"

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Degraded message changed from "RouteHealthDegraded: route.route.openshift.io \"console\" not found\nOAuthClientsControllerDegraded: secret \"console-oauth-config\" not found\nDownloadsDefaultRouteSyncDegraded: no ingress for host downloads-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com in route downloads in namespace openshift-console\nDownloadsCustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom)\nConsoleCustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom)" to "RouteHealthDegraded: route.route.openshift.io \"console\" not found\nOAuthClientsControllerDegraded: Operation cannot be fulfilled on consoles.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nDownloadsDefaultRouteSyncDegraded: no ingress for host downloads-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com in route downloads in namespace openshift-console\nDownloadsCustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom)\nConsoleCustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom)"

openshift-dns

multus

dns-default-9thxk

AddedInterface

Add eth0 [10.133.0.6/23] from ovn-kubernetes

openshift-dns

kubelet

dns-default-9thxk

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c862e4b31529b530f930a8d0e2a75b53d2092f392a29d037d0312169e1d4a1ac"

openshift-dns

kubelet

dns-default-9thxk

Created

Created container: kube-rbac-proxy

openshift-dns

kubelet

dns-default-9thxk

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8e1f4bf2daa69c9e69766cebcfced43fa0ee926d4479becdc8a5b05b93ca6e81" already present on machine

openshift-dns

kubelet

dns-default-9thxk

Started

Started container kube-rbac-proxy

openshift-dns

kubelet

dns-default-9thxk

Started

Started container dns

openshift-dns

kubelet

dns-default-9thxk

Created

Created container: dns

openshift-dns

kubelet

dns-default-9thxk

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c862e4b31529b530f930a8d0e2a75b53d2092f392a29d037d0312169e1d4a1ac" in 1.47s (1.47s including waiting). Image size: 480938200 bytes.

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Degraded message changed from "RouteHealthDegraded: route.route.openshift.io \"console\" not found\nOAuthClientsControllerDegraded: Operation cannot be fulfilled on consoles.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nDownloadsDefaultRouteSyncDegraded: no ingress for host downloads-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com in route downloads in namespace openshift-console\nDownloadsCustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom)\nConsoleCustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom)" to "RouteHealthDegraded: route.route.openshift.io \"console\" not found\nDownloadsDefaultRouteSyncDegraded: no ingress for host downloads-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com in route downloads in namespace openshift-console\nDownloadsCustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom)\nConsoleCustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom)"

openshift-dns

kubelet

dns-default-cmsst

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c862e4b31529b530f930a8d0e2a75b53d2092f392a29d037d0312169e1d4a1ac"

openshift-dns

multus

dns-default-cmsst

AddedInterface

Add eth0 [10.134.0.5/23] from ovn-kubernetes

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Progressing changed from False to True ("SyncLoopRefreshProgressing: changes made during sync updates, additional sync expected"),Available changed from Unknown to False ("DeploymentAvailable: 0 replicas available for console deployment")

openshift-dns

kubelet

dns-default-cmsst

Created

Created container: dns

openshift-ingress-canary

kubelet

ingress-canary-nlf5r

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98c903cc8c2ea492f4c9047febaa42525c20c5b414d4de2ce3df5eb65ef899e2"

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Progressing message changed from "SyncLoopRefreshProgressing: changes made during sync updates, additional sync expected" to "SyncLoopRefreshProgressing: working toward version 4.20.19, 0 replicas available"

openshift-ingress-canary

multus

ingress-canary-nlf5r

AddedInterface

Add eth0 [10.133.0.7/23] from ovn-kubernetes

openshift-dns

kubelet

dns-default-cmsst

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c862e4b31529b530f930a8d0e2a75b53d2092f392a29d037d0312169e1d4a1ac" in 1.321s (1.321s including waiting). Image size: 480938200 bytes.

openshift-dns

kubelet

dns-default-cmsst

Started

Started container dns

openshift-dns

kubelet

dns-default-cmsst

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8e1f4bf2daa69c9e69766cebcfced43fa0ee926d4479becdc8a5b05b93ca6e81" already present on machine

openshift-dns

kubelet

dns-default-cmsst

Created

Created container: kube-rbac-proxy

openshift-dns

kubelet

dns-default-cmsst

Started

Started container kube-rbac-proxy

openshift-ingress-canary

kubelet

ingress-canary-nlf5r

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98c903cc8c2ea492f4c9047febaa42525c20c5b414d4de2ce3df5eb65ef899e2" in 1.471s (1.471s including waiting). Image size: 514858876 bytes.

openshift-ingress-canary

kubelet

ingress-canary-nlf5r

Started

Started container serve-healthcheck-canary

openshift-ingress-canary

kubelet

ingress-canary-nlf5r

Created

Created container: serve-healthcheck-canary

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Degraded message changed from "RouteHealthDegraded: route.route.openshift.io \"console\" not found\nDownloadsDefaultRouteSyncDegraded: no ingress for host downloads-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com in route downloads in namespace openshift-console\nDownloadsCustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom)\nConsoleCustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom)" to "RouteHealthDegraded: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:49016->172.31.0.10:53: read: connection refused\nDownloadsDefaultRouteSyncDegraded: no ingress for host downloads-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com in route downloads in namespace openshift-console\nDownloadsCustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom)\nConsoleCustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom)",Available message changed from "DeploymentAvailable: 0 replicas available for console deployment" to "DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:49016->172.31.0.10:53: read: connection refused"

openshift-ingress-canary

multus

ingress-canary-ffzqh

AddedInterface

Add eth0 [10.134.0.6/23] from ovn-kubernetes

openshift-ingress-canary

kubelet

ingress-canary-ffzqh

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98c903cc8c2ea492f4c9047febaa42525c20c5b414d4de2ce3df5eb65ef899e2"

openshift-ingress-canary

kubelet

ingress-canary-ffzqh

Started

Started container serve-healthcheck-canary

openshift-ingress-canary

kubelet

ingress-canary-ffzqh

Created

Created container: serve-healthcheck-canary

openshift-ingress-canary

kubelet

ingress-canary-ffzqh

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98c903cc8c2ea492f4c9047febaa42525c20c5b414d4de2ce3df5eb65ef899e2" in 1.429s (1.429s including waiting). Image size: 514858876 bytes.

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Degraded message changed from "RouteHealthDegraded: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:49016->172.31.0.10:53: read: connection refused\nDownloadsDefaultRouteSyncDegraded: no ingress for host downloads-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com in route downloads in namespace openshift-console\nDownloadsCustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom)\nConsoleCustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom)" to "RouteHealthDegraded: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:36393->172.31.0.10:53: read: connection refused\nDownloadsDefaultRouteSyncDegraded: no ingress for host downloads-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com in route downloads in namespace openshift-console\nDownloadsCustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom)\nConsoleCustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom)",Available message changed from "DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:49016->172.31.0.10:53: read: connection refused" to "DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:36393->172.31.0.10:53: read: connection refused"
(x3)

openshift-image-registry

kubelet

image-registry-66f5f8d5cd-rgqhw

Unhealthy

Liveness probe failed: HTTP probe failed with statuscode: 503

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Degraded message changed from "RouteHealthDegraded: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:36393->172.31.0.10:53: read: connection refused\nDownloadsDefaultRouteSyncDegraded: no ingress for host downloads-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com in route downloads in namespace openshift-console\nDownloadsCustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom)\nConsoleCustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom)" to "RouteHealthDegraded: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:60424->172.31.0.10:53: read: connection refused\nDownloadsDefaultRouteSyncDegraded: no ingress for host downloads-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com in route downloads in namespace openshift-console\nDownloadsCustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom)\nConsoleCustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom)",Available message changed from "DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:36393->172.31.0.10:53: read: connection refused" to "DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:60424->172.31.0.10:53: read: connection refused"
(x2)

openshift-service-ca-operator

kubelet

service-ca-operator-d6fc45fc5-f2jrk

Created

Created container: service-ca-operator
(x2)

openshift-service-ca-operator

kubelet

service-ca-operator-d6fc45fc5-f2jrk

Started

Started container service-ca-operator

openshift-service-ca-operator

kubelet

service-ca-operator-d6fc45fc5-f2jrk

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9daea498e11f31b7b26c352f210883c28fd26c3fe4913f264f18c2debd6e7fa9" already present on machine

openshift-service-ca-operator

service-ca-operator

service-ca-operator-lock

LeaderElection

service-ca-operator-d6fc45fc5-f2jrk_321a0a72-406d-4df4-9edd-4430112ad14d became leader

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Degraded message changed from "RouteHealthDegraded: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:60424->172.31.0.10:53: read: connection refused\nDownloadsDefaultRouteSyncDegraded: no ingress for host downloads-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com in route downloads in namespace openshift-console\nDownloadsCustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom)\nConsoleCustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom)" to "RouteHealthDegraded: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:57397->172.31.0.10:53: read: connection refused\nDownloadsDefaultRouteSyncDegraded: no ingress for host downloads-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com in route downloads in namespace openshift-console\nDownloadsCustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom)\nConsoleCustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom)",Available message changed from "DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:60424->172.31.0.10:53: read: connection refused" to "DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:57397->172.31.0.10:53: read: connection refused"

openshift-service-ca-operator

service-ca-operator

service-ca-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdditionalRoutingCapabilities", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BuildCSIVolumes", "CPMSMachineNamePrefix", "ConsolePluginContentSecurityPolicy", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GatewayAPI", "GatewayAPIController", "HighlyAvailableArbiter", "ImageVolume", "IngressControllerLBSubnetsAWS", "KMSv1", "MachineConfigNodes", "ManagedBootImages", "ManagedBootImagesAWS", "MetricsCollectionProfiles", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "PinnedImages", "PreconfiguredUDNAddresses", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SetEIPForNLBIngressController", "SigstoreImageVerification", "StoragePerformantSecurityPolicy", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiDisk", "VSphereMultiNetworks"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AWSClusterHostedDNSInstall", "AWSDedicatedHosts", "AWSServiceLBNetworkSecurityGroup", "AutomatedEtcdBackup", "AzureClusterHostedDNSInstall", "AzureDedicatedHosts", "AzureMultiDisk", "BootImageSkewEnforcement", "BootcNodeManagement", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ClusterVersionOperatorConfiguration", "DNSNameResolver", "DualReplica", "DyanmicServiceEndpointIBMCloud", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "Example2", "ExternalSnapshotMetadata", "GCPClusterHostedDNS", "GCPClusterHostedDNSInstall", "GCPCustomAPIEndpoints", "GCPCustomAPIEndpointsInstall", "ImageModeStatusReporting", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "IrreconcilableMachineConfig", "KMSEncryptionProvider", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "ManagedBootImagesAzure", "ManagedBootImagesvSphere", "MaxUnavailableStatefulSet", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "MultiDiskSetup", "MutatingAdmissionPolicy", "NewOLM", "NewOLMCatalogdAPIV1Metas", "NewOLMOwnSingleNamespace", "NewOLMPreflightPermissionChecks", "NewOLMWebhookProviderOpenshiftServiceCA", "NoRegistryClusterOperations", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "SELinuxMount", "ShortCertRotation", "SignatureStores", "SigstoreImageVerificationPKI", "TranslateStreamCloseWebsocketRequests", "VSphereConfigurableMaxAllowedBlockVolumesPerNode", "VSphereHostVMGroupZonal", "VSphereMixedNodeEnv", "VolumeAttributesClass", "VolumeGroupSnapshot"}}
(x5)

openshift-image-registry

kubelet

image-registry-66f5f8d5cd-rgqhw

Unhealthy

Readiness probe failed: HTTP probe failed with statuscode: 503
(x2)

openshift-kube-storage-version-migrator-operator

kubelet

kube-storage-version-migrator-operator-6769c5d45-gjmkn

Started

Started container kube-storage-version-migrator-operator
(x2)

openshift-kube-storage-version-migrator-operator

kubelet

kube-storage-version-migrator-operator-6769c5d45-gjmkn

Created

Created container: kube-storage-version-migrator-operator

openshift-kube-storage-version-migrator-operator

kubelet

kube-storage-version-migrator-operator-6769c5d45-gjmkn

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f76a74458025ce3949072ad4a42dff7b49b25dcf13de81204e47f697e9cb8523" already present on machine

openshift-kube-storage-version-migrator-operator

openshift-kube-storage-version-migrator-operator

openshift-kube-storage-version-migrator-operator-lock

LeaderElection

kube-storage-version-migrator-operator-6769c5d45-gjmkn_017df804-f938-4080-8684-720ad2011b7e became leader

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Degraded message changed from "RouteHealthDegraded: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:57397->172.31.0.10:53: read: connection refused\nDownloadsDefaultRouteSyncDegraded: no ingress for host downloads-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com in route downloads in namespace openshift-console\nDownloadsCustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom)\nConsoleCustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom)" to "RouteHealthDegraded: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:52012->172.31.0.10:53: read: connection refused\nDownloadsDefaultRouteSyncDegraded: no ingress for host downloads-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com in route downloads in namespace openshift-console\nDownloadsCustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom)\nConsoleCustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom)",Available message changed from "DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:57397->172.31.0.10:53: read: connection refused" to "DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:52012->172.31.0.10:53: read: connection refused"

openshift-insights

openshift-insights-operator

insights-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdditionalRoutingCapabilities", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BuildCSIVolumes", "CPMSMachineNamePrefix", "ConsolePluginContentSecurityPolicy", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GatewayAPI", "GatewayAPIController", "HighlyAvailableArbiter", "ImageVolume", "IngressControllerLBSubnetsAWS", "KMSv1", "MachineConfigNodes", "ManagedBootImages", "ManagedBootImagesAWS", "MetricsCollectionProfiles", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "PinnedImages", "PreconfiguredUDNAddresses", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SetEIPForNLBIngressController", "SigstoreImageVerification", "StoragePerformantSecurityPolicy", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiDisk", "VSphereMultiNetworks"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AWSClusterHostedDNSInstall", "AWSDedicatedHosts", "AWSServiceLBNetworkSecurityGroup", "AutomatedEtcdBackup", "AzureClusterHostedDNSInstall", "AzureDedicatedHosts", "AzureMultiDisk", "BootImageSkewEnforcement", "BootcNodeManagement", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ClusterVersionOperatorConfiguration", "DNSNameResolver", "DualReplica", "DyanmicServiceEndpointIBMCloud", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "Example2", "ExternalSnapshotMetadata", "GCPClusterHostedDNS", "GCPClusterHostedDNSInstall", "GCPCustomAPIEndpoints", "GCPCustomAPIEndpointsInstall", "ImageModeStatusReporting", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "IrreconcilableMachineConfig", "KMSEncryptionProvider", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "ManagedBootImagesAzure", "ManagedBootImagesvSphere", "MaxUnavailableStatefulSet", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "MultiDiskSetup", "MutatingAdmissionPolicy", "NewOLM", "NewOLMCatalogdAPIV1Metas", "NewOLMOwnSingleNamespace", "NewOLMPreflightPermissionChecks", "NewOLMWebhookProviderOpenshiftServiceCA", "NoRegistryClusterOperations", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "SELinuxMount", "ShortCertRotation", "SignatureStores", "SigstoreImageVerificationPKI", "TranslateStreamCloseWebsocketRequests", "VSphereConfigurableMaxAllowedBlockVolumesPerNode", "VSphereHostVMGroupZonal", "VSphereMixedNodeEnv", "VolumeAttributesClass", "VolumeGroupSnapshot"}}
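The FeatureGates message above is a verbatim Go struct dump (`featuregates.Features{Enabled:[...], Disabled:[...]}`). As a minimal sketch, assuming the message keeps exactly this layout, the gate names can be pulled out with a couple of regexes (the `parse_feature_gates` helper is illustrative, not part of any OpenShift tooling):

```python
import re

def parse_feature_gates(message: str) -> dict:
    """Split a featuregates.Features{...} dump into enabled/disabled gate-name lists."""
    gates = {}
    for state in ("Enabled", "Disabled"):
        # Grab the quoted names inside e.g. Enabled:[]v1.FeatureGateName{...}
        m = re.search(state + r':\[\]v1\.FeatureGateName\{([^}]*)\}', message)
        gates[state.lower()] = re.findall(r'"([^"]+)"', m.group(1)) if m else []
    return gates

msg = ('FeatureGates updated to featuregates.Features{'
       'Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "GatewayAPI"}, '
       'Disabled:[]v1.FeatureGateName{"NodeSwap", "EventedPLEG"}}')
print(parse_feature_gates(msg))
```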

openshift-insights

kubelet

insights-operator-585dfdc468-w75nz

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:657372851aabfea0afc7153b56cc92699f5f4fd3602d02e0536b7cf6db5a3003" already present on machine
(x2)

openshift-insights

kubelet

insights-operator-585dfdc468-w75nz

Started

Started container insights-operator
(x2)

openshift-insights

kubelet

insights-operator-585dfdc468-w75nz

Created

Created container: insights-operator
(x4)

openshift-image-registry

kubelet

image-registry-6fd4d896fc-ltlnc

ProbeError

Liveness probe error: HTTP probe failed with statuscode: 503 body: {"errors":[{"code":"UNAVAILABLE","message":"service unavailable","detail":"health check failed: please see /debug/health"}]}
(x4)
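Kubernetes treats an HTTP probe as successful only when the handler returns a status code in the 200–399 range, so the registry's 503 above counts as a liveness failure; enough consecutive failures and kubelet restarts the container. A minimal sketch of that classification rule (illustrative only, not kubelet's actual code):

```python
def http_probe_result(status_code: int) -> str:
    """Kubernetes HTTP probe rule: any status in [200, 400) is success, everything else fails."""
    return "success" if 200 <= status_code < 400 else "failure"

print(http_probe_result(200))  # success
print(http_probe_result(503))  # failure -- the registry's "service unavailable" above
```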

openshift-image-registry

kubelet

image-registry-6fd4d896fc-ltlnc

Unhealthy

Liveness probe failed: HTTP probe failed with statuscode: 503

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Degraded message changed from "RouteHealthDegraded: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:52012->172.31.0.10:53: read: connection refused\nDownloadsDefaultRouteSyncDegraded: no ingress for host downloads-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com in route downloads in namespace openshift-console\nDownloadsCustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom)\nConsoleCustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom)" to "RouteHealthDegraded: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:48736->172.31.0.10:53: read: connection refused\nDownloadsDefaultRouteSyncDegraded: no ingress for host downloads-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com in route downloads in namespace openshift-console\nDownloadsCustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom)\nConsoleCustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom)",Available message changed from "DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:52012->172.31.0.10:53: read: connection refused" to "DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:48736->172.31.0.10:53: read: connection refused"
(x6)
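The repeated OperatorStatusChanged events in this dump carry the same underlying DNS failure; only the ephemeral UDP source port of the failed lookup (52012, 48736, ...) changes between retries. A small sketch, with a hypothetical `normalize_dns_error` helper and an assumed `read udp IP:PORT->IP:53` message shape, that masks the port so retries of the same failure compare equal:

```python
import re

def normalize_dns_error(message: str) -> str:
    """Mask the ephemeral source port in 'read udp IP:PORT->...' so retries compare equal."""
    return re.sub(r'(read udp \d+\.\d+\.\d+\.\d+):\d+(->)', r'\1:*\2', message)

a = 'read udp 10.133.0.9:52012->172.31.0.10:53: read: connection refused'
b = 'read udp 10.133.0.9:48736->172.31.0.10:53: read: connection refused'
print(normalize_dns_error(a) == normalize_dns_error(b))  # True
```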

openshift-image-registry

kubelet

image-registry-6fd4d896fc-ltlnc

Unhealthy

Readiness probe failed: HTTP probe failed with statuscode: 503
(x4)

openshift-image-registry

kubelet

image-registry-66f5f8d5cd-rgqhw

ProbeError

Liveness probe error: HTTP probe failed with statuscode: 503 body: {"errors":[{"code":"UNAVAILABLE","message":"service unavailable","detail":"health check failed: please see /debug/health"}]}
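The probe error above embeds the registry's JSON health-check response after "body: ". A minimal sketch (the `probe_error_codes` helper is hypothetical) that extracts the error codes from such an event message:

```python
import json

def probe_error_codes(event_message: str) -> list:
    """Split off the JSON body of a '... probe error: ... body: {...}' event and list its error codes."""
    body = event_message.split("body: ", 1)[1]
    return [err["code"] for err in json.loads(body)["errors"]]

msg = ('Liveness probe error: HTTP probe failed with statuscode: 503 body: '
       '{"errors":[{"code":"UNAVAILABLE","message":"service unavailable",'
       '"detail":"health check failed: please see /debug/health"}]}')
print(probe_error_codes(msg))  # ['UNAVAILABLE']
```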

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Degraded message changed from "RouteHealthDegraded: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:45595->172.31.0.10:53: read: connection refused\nDownloadsDefaultRouteSyncDegraded: no ingress for host downloads-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com in route downloads in namespace openshift-console\nDownloadsCustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom)\nConsoleCustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom)" to "RouteHealthDegraded: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:45595->172.31.0.10:53: read: connection refused",Upgradeable changed from False to True ("All is well")

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Degraded message changed from "RouteHealthDegraded: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:48736->172.31.0.10:53: read: connection refused\nDownloadsDefaultRouteSyncDegraded: no ingress for host downloads-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com in route downloads in namespace openshift-console\nDownloadsCustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom)\nConsoleCustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom)" to "RouteHealthDegraded: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:45595->172.31.0.10:53: read: connection refused\nDownloadsDefaultRouteSyncDegraded: no ingress for host downloads-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com in route downloads in namespace openshift-console\nDownloadsCustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom)\nConsoleCustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom)",Available message changed from "DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:48736->172.31.0.10:53: read: connection refused" to "DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:45595->172.31.0.10:53: read: connection refused"

openshift-multus

multus

network-metrics-daemon-b6hrq

AddedInterface

Add eth0 [10.132.0.5/23] from ovn-kubernetes

openshift-multus

kubelet

network-metrics-daemon-b6hrq

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:58fb4327e682995ba0eee3fff6d4f4e228f8c4ff1d08b2822b54dee2db41ebf8"

openshift-multus

kubelet

network-metrics-daemon-b6hrq

Started

Started container network-metrics-daemon

openshift-multus

kubelet

network-metrics-daemon-b6hrq

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:58fb4327e682995ba0eee3fff6d4f4e228f8c4ff1d08b2822b54dee2db41ebf8" in 1.182s (1.182s including waiting). Image size: 450507899 bytes.

openshift-multus

kubelet

network-metrics-daemon-b6hrq

Created

Created container: network-metrics-daemon

openshift-multus

kubelet

network-metrics-daemon-b6hrq

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8e1f4bf2daa69c9e69766cebcfced43fa0ee926d4479becdc8a5b05b93ca6e81" already present on machine

openshift-multus

kubelet

network-metrics-daemon-b6hrq

Created

Created container: kube-rbac-proxy

openshift-multus

kubelet

network-metrics-daemon-b6hrq

Started

Started container kube-rbac-proxy

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Degraded message changed from "RouteHealthDegraded: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:45595->172.31.0.10:53: read: connection refused" to "RouteHealthDegraded: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:41378->172.31.0.10:53: read: connection refused",Available message changed from "DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:45595->172.31.0.10:53: read: connection refused" to "DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:41378->172.31.0.10:53: read: connection refused"

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Degraded message changed from "RouteHealthDegraded: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:41378->172.31.0.10:53: read: connection refused" to "RouteHealthDegraded: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:57164->172.31.0.10:53: read: connection refused",Available message changed from "DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:41378->172.31.0.10:53: read: connection refused" to "DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:57164->172.31.0.10:53: read: connection refused"

openshift-multus

multus

network-metrics-daemon-9nk69

AddedInterface

Add eth0 [10.133.0.4/23] from ovn-kubernetes

openshift-multus

kubelet

network-metrics-daemon-9nk69

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:58fb4327e682995ba0eee3fff6d4f4e228f8c4ff1d08b2822b54dee2db41ebf8"

openshift-multus

kubelet

network-metrics-daemon-9nk69

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8e1f4bf2daa69c9e69766cebcfced43fa0ee926d4479becdc8a5b05b93ca6e81" already present on machine

openshift-multus

kubelet

network-metrics-daemon-9nk69

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:58fb4327e682995ba0eee3fff6d4f4e228f8c4ff1d08b2822b54dee2db41ebf8" in 1.032s (1.032s including waiting). Image size: 450507899 bytes.

openshift-multus

kubelet

network-metrics-daemon-9nk69

Created

Created container: network-metrics-daemon

openshift-multus

kubelet

network-metrics-daemon-9nk69

Started

Started container network-metrics-daemon

openshift-multus

kubelet

network-metrics-daemon-9nk69

Started

Started container kube-rbac-proxy

openshift-multus

kubelet

network-metrics-daemon-9nk69

Created

Created container: kube-rbac-proxy

openshift-multus

multus

network-metrics-daemon-8hzhw

AddedInterface

Add eth0 [10.134.0.3/23] from ovn-kubernetes

openshift-multus

kubelet

network-metrics-daemon-8hzhw

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:58fb4327e682995ba0eee3fff6d4f4e228f8c4ff1d08b2822b54dee2db41ebf8"

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Degraded message changed from "RouteHealthDegraded: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:57164->172.31.0.10:53: read: connection refused" to "RouteHealthDegraded: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:56933->172.31.0.10:53: read: connection refused",Available message changed from "DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:57164->172.31.0.10:53: read: connection refused" to "DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:56933->172.31.0.10:53: read: connection refused"

openshift-multus

kubelet

network-metrics-daemon-8hzhw

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:58fb4327e682995ba0eee3fff6d4f4e228f8c4ff1d08b2822b54dee2db41ebf8" in 971ms (971ms including waiting). Image size: 450507899 bytes.

openshift-multus

kubelet

network-metrics-daemon-8hzhw

Started

Started container network-metrics-daemon

openshift-multus

kubelet

network-metrics-daemon-8hzhw

Created

Created container: kube-rbac-proxy

openshift-multus

kubelet

network-metrics-daemon-8hzhw

Started

Started container kube-rbac-proxy

openshift-multus

kubelet

network-metrics-daemon-8hzhw

Created

Created container: network-metrics-daemon

openshift-multus

kubelet

network-metrics-daemon-8hzhw

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8e1f4bf2daa69c9e69766cebcfced43fa0ee926d4479becdc8a5b05b93ca6e81" already present on machine

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Degraded changed from False to True ("RouteHealthDegraded: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:56933->172.31.0.10:53: read: connection refused")

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Degraded message changed from "RouteHealthDegraded: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:56933->172.31.0.10:53: read: connection refused" to "RouteHealthDegraded: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:43158->172.31.0.10:53: read: connection refused",Available message changed from "DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:56933->172.31.0.10:53: read: connection refused" to "DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:43158->172.31.0.10:53: read: connection refused"

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Degraded message changed from "RouteHealthDegraded: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:43158->172.31.0.10:53: read: connection refused" to "RouteHealthDegraded: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:46546->172.31.0.10:53: read: connection refused",Available message changed from "DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:43158->172.31.0.10:53: read: connection refused" to "DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:46546->172.31.0.10:53: read: connection refused"

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Degraded message changed from "RouteHealthDegraded: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:46546->172.31.0.10:53: read: connection refused" to "RouteHealthDegraded: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:34156->172.31.0.10:53: read: connection refused",Available message changed from "DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:46546->172.31.0.10:53: read: connection refused" to "DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:34156->172.31.0.10:53: read: connection refused"

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Degraded message changed from "RouteHealthDegraded: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:34156->172.31.0.10:53: read: connection refused" to "RouteHealthDegraded: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:35402->172.31.0.10:53: read: connection refused",Available message changed from "DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:34156->172.31.0.10:53: read: connection refused" to "DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:35402->172.31.0.10:53: read: connection refused"

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Degraded message changed from "RouteHealthDegraded: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:35402->172.31.0.10:53: read: connection refused" to "RouteHealthDegraded: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:57629->172.31.0.10:53: read: connection refused",Available message changed from "DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:35402->172.31.0.10:53: read: connection refused" to "DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:57629->172.31.0.10:53: read: connection refused"

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Degraded message changed from "RouteHealthDegraded: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:57629->172.31.0.10:53: read: connection refused" to "RouteHealthDegraded: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:59669->172.31.0.10:53: read: connection refused",Available message changed from "DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:57629->172.31.0.10:53: read: connection refused" to "DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:59669->172.31.0.10:53: read: connection refused"
(x15)

openshift-image-registry

kubelet

image-registry-66f5f8d5cd-rgqhw

ProbeError

Readiness probe error: HTTP probe failed with statuscode: 503 body: {"errors":[{"code":"UNAVAILABLE","message":"service unavailable","detail":"health check failed: please see /debug/health"}]}

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Degraded message changed from "RouteHealthDegraded: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:59669->172.31.0.10:53: read: connection refused" to "RouteHealthDegraded: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:56150->172.31.0.10:53: read: connection refused",Available message changed from "DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:59669->172.31.0.10:53: read: connection refused" to "DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:56150->172.31.0.10:53: read: connection refused"

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Degraded message changed from "RouteHealthDegraded: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:56150->172.31.0.10:53: read: connection refused" to "RouteHealthDegraded: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:56110->172.31.0.10:53: read: connection refused",Available message changed from "DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:56150->172.31.0.10:53: read: connection refused" to "DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:56110->172.31.0.10:53: read: connection refused"

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Degraded message changed from "RouteHealthDegraded: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:56110->172.31.0.10:53: read: connection refused" to "RouteHealthDegraded: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:48087->172.31.0.10:53: read: connection refused",Available message changed from "DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:56110->172.31.0.10:53: read: connection refused" to "DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:48087->172.31.0.10:53: read: connection refused"

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Degraded message changed from "RouteHealthDegraded: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:48087->172.31.0.10:53: read: connection refused" to "RouteHealthDegraded: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:59429->172.31.0.10:53: read: connection refused",Available message changed from "DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:48087->172.31.0.10:53: read: connection refused" to "DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:59429->172.31.0.10:53: read: connection refused"

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Degraded message changed from "RouteHealthDegraded: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:59429->172.31.0.10:53: read: connection refused" to "RouteHealthDegraded: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:60062->172.31.0.10:53: read: connection refused",Available message changed from "DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:59429->172.31.0.10:53: read: connection refused" to "DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:60062->172.31.0.10:53: read: connection refused"

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Degraded message changed from "RouteHealthDegraded: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:60062->172.31.0.10:53: read: connection refused" to "RouteHealthDegraded: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:51320->172.31.0.10:53: read: connection refused",Available message changed from "DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:60062->172.31.0.10:53: read: connection refused" to "DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:51320->172.31.0.10:53: read: connection refused"

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Degraded message changed from "RouteHealthDegraded: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:51320->172.31.0.10:53: read: connection refused" to "RouteHealthDegraded: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:58160->172.31.0.10:53: read: connection refused",Available message changed from "DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:51320->172.31.0.10:53: read: connection refused" to "DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:58160->172.31.0.10:53: read: connection refused"

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Degraded message changed from "RouteHealthDegraded: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:58160->172.31.0.10:53: read: connection refused" to "RouteHealthDegraded: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:52986->172.31.0.10:53: read: connection refused",Available message changed from "DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:58160->172.31.0.10:53: read: connection refused" to "DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:52986->172.31.0.10:53: read: connection refused"

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Degraded message changed from "RouteHealthDegraded: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:52986->172.31.0.10:53: read: connection refused" to "RouteHealthDegraded: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:35028->172.31.0.10:53: read: connection refused",Available message changed from "DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:52986->172.31.0.10:53: read: connection refused" to "DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:35028->172.31.0.10:53: read: connection refused"

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Degraded message changed from "RouteHealthDegraded: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:35028->172.31.0.10:53: read: connection refused" to "RouteHealthDegraded: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:41433->172.31.0.10:53: read: connection refused",Available message changed from "DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:35028->172.31.0.10:53: read: connection refused" to "DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:41433->172.31.0.10:53: read: connection refused"

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Degraded message changed from "RouteHealthDegraded: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:41433->172.31.0.10:53: read: connection refused" to "RouteHealthDegraded: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:45615->172.31.0.10:53: read: connection refused",Available message changed from "DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:41433->172.31.0.10:53: read: connection refused" to "DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:45615->172.31.0.10:53: read: connection refused"
(x24)

openshift-image-registry

kubelet

image-registry-6fd4d896fc-ltlnc

ProbeError

Readiness probe error: HTTP probe failed with statuscode: 503 body: {"errors":[{"code":"UNAVAILABLE","message":"service unavailable","detail":"health check failed: please see /debug/health"}]}

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Degraded message changed from "RouteHealthDegraded: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:45615->172.31.0.10:53: read: connection refused" to "RouteHealthDegraded: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:40543->172.31.0.10:53: read: connection refused",Available message changed from "DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:45615->172.31.0.10:53: read: connection refused" to "DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:40543->172.31.0.10:53: read: connection refused"

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Degraded message changed from "RouteHealthDegraded: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:40543->172.31.0.10:53: read: connection refused" to "RouteHealthDegraded: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:39549->172.31.0.10:53: read: connection refused",Available message changed from "DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:40543->172.31.0.10:53: read: connection refused" to "DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:39549->172.31.0.10:53: read: connection refused"
(x6)

openshift-image-registry

kubelet

image-registry-6fd4d896fc-ltlnc

Created

Created container: registry
(x6)

openshift-image-registry

kubelet

image-registry-6fd4d896fc-ltlnc

Started

Started container registry
(x6)

openshift-image-registry

kubelet

image-registry-6fd4d896fc-ltlnc

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:44136b348d0b6fc6ca07bd61902f4901177c2185045a72b42b2a1d2ad050e0f9" already present on machine

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Degraded message changed from "RouteHealthDegraded: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:39549->172.31.0.10:53: read: connection refused" to "RouteHealthDegraded: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:45407->172.31.0.10:53: read: connection refused",Available message changed from "DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:39549->172.31.0.10:53: read: connection refused" to "DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:45407->172.31.0.10:53: read: connection refused"
(x6)

openshift-image-registry

kubelet

image-registry-66f5f8d5cd-rgqhw

Started

Started container registry
(x6)

openshift-image-registry

kubelet

image-registry-66f5f8d5cd-rgqhw

Created

Created container: registry
(x6)

openshift-image-registry

kubelet

image-registry-66f5f8d5cd-rgqhw

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:44136b348d0b6fc6ca07bd61902f4901177c2185045a72b42b2a1d2ad050e0f9" already present on machine

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Degraded message changed from "RouteHealthDegraded: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:45407->172.31.0.10:53: read: connection refused" to "RouteHealthDegraded: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:59096->172.31.0.10:53: read: connection refused",Available message changed from "DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:45407->172.31.0.10:53: read: connection refused" to "DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:59096->172.31.0.10:53: read: connection refused"

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Degraded message changed from "RouteHealthDegraded: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:59096->172.31.0.10:53: read: connection refused" to "RouteHealthDegraded: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:43319->172.31.0.10:53: read: connection refused",Available message changed from "DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:59096->172.31.0.10:53: read: connection refused" to "DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:43319->172.31.0.10:53: read: connection refused"
(x6)

openshift-image-registry

kubelet

image-registry-6fd4d896fc-ltlnc

Killing

Container registry failed liveness probe, will be restarted

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Degraded message changed from "RouteHealthDegraded: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:43319->172.31.0.10:53: read: connection refused" to "RouteHealthDegraded: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:34027->172.31.0.10:53: read: connection refused",Available message changed from "DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:43319->172.31.0.10:53: read: connection refused" to "DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:34027->172.31.0.10:53: read: connection refused"
(x6)

openshift-image-registry

kubelet

image-registry-66f5f8d5cd-rgqhw

Killing

Container registry failed liveness probe, will be restarted

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Degraded message changed from "RouteHealthDegraded: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:34027->172.31.0.10:53: read: connection refused" to "RouteHealthDegraded: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:46667->172.31.0.10:53: read: connection refused",Available message changed from "DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:34027->172.31.0.10:53: read: connection refused" to "DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:46667->172.31.0.10:53: read: connection refused"

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Degraded message changed from "RouteHealthDegraded: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:46667->172.31.0.10:53: read: connection refused" to "RouteHealthDegraded: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:44221->172.31.0.10:53: read: connection refused",Available message changed from "DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:46667->172.31.0.10:53: read: connection refused" to "DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:44221->172.31.0.10:53: read: connection refused"

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Degraded message changed from "RouteHealthDegraded: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:44221->172.31.0.10:53: read: connection refused" to "RouteHealthDegraded: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:51490->172.31.0.10:53: read: connection refused",Available message changed from "DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:44221->172.31.0.10:53: read: connection refused" to "DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:51490->172.31.0.10:53: read: connection refused"

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Degraded message changed from "RouteHealthDegraded: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:51490->172.31.0.10:53: read: connection refused" to "RouteHealthDegraded: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:57729->172.31.0.10:53: read: connection refused",Available message changed from "DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:51490->172.31.0.10:53: read: connection refused" to "DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:57729->172.31.0.10:53: read: connection refused"

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Degraded message changed from "RouteHealthDegraded: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:57729->172.31.0.10:53: read: connection refused" to "RouteHealthDegraded: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:44102->172.31.0.10:53: read: connection refused",Available message changed from "DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:57729->172.31.0.10:53: read: connection refused" to "DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:44102->172.31.0.10:53: read: connection refused"

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Degraded message changed from "RouteHealthDegraded: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:44102->172.31.0.10:53: read: connection refused" to "RouteHealthDegraded: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:38276->172.31.0.10:53: read: connection refused",Available message changed from "DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:44102->172.31.0.10:53: read: connection refused" to "DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:38276->172.31.0.10:53: read: connection refused"

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Degraded message changed from "RouteHealthDegraded: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:38276->172.31.0.10:53: read: connection refused" to "RouteHealthDegraded: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:52565->172.31.0.10:53: read: connection refused",Available message changed from "DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:38276->172.31.0.10:53: read: connection refused" to "DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:52565->172.31.0.10:53: read: connection refused"

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Degraded message changed from "RouteHealthDegraded: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:52565->172.31.0.10:53: read: connection refused" to "RouteHealthDegraded: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:32826->172.31.0.10:53: read: connection refused",Available message changed from "DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:52565->172.31.0.10:53: read: connection refused" to "DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:32826->172.31.0.10:53: read: connection refused"

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Degraded message changed from "RouteHealthDegraded: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:32826->172.31.0.10:53: read: connection refused" to "RouteHealthDegraded: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:37594->172.31.0.10:53: read: connection refused",Available message changed from "DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:32826->172.31.0.10:53: read: connection refused" to "DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:37594->172.31.0.10:53: read: connection refused"
(x8)

openshift-cluster-samples-operator

kubelet

cluster-samples-operator-6dc5bdb6b4-qm2z5

FailedMount

MountVolume.SetUp failed for volume "kube-api-access-86nvs" : [configmap "kube-root-ca.crt" not found, configmap "openshift-service-ca.crt" not found]

kube-system

kube-controller-manager

kube-controller-manager

LeaderElection

kube-controller-manager-6b4c4757d-lsrm6_73a51fba-6870-4eda-9e83-355e0ac77604 became leader

openshift-insights

kubelet

insights-runtime-extractor-2l9ld

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8e1f4bf2daa69c9e69766cebcfced43fa0ee926d4479becdc8a5b05b93ca6e81" already present on machine

openshift-console

replicaset-controller

downloads-6bcc868b7

SuccessfulCreate

Created pod: downloads-6bcc868b7-7knnb

openshift-insights

daemonset-controller

insights-runtime-extractor

SuccessfulCreate

Created pod: insights-runtime-extractor-g58gj

openshift-insights

daemonset-controller

insights-runtime-extractor

SuccessfulCreate

Created pod: insights-runtime-extractor-2l9ld

openshift-insights

kubelet

insights-runtime-extractor-j9pck

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8f3fc18a785145e05d543030637e66298ab943dae18b52aa448f6023bd0d8151"

openshift-insights

kubelet

insights-runtime-extractor-j9pck

Started

Started container kube-rbac-proxy

openshift-insights

kubelet

insights-runtime-extractor-j9pck

Created

Created container: kube-rbac-proxy

openshift-insights

kubelet

insights-runtime-extractor-j9pck

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8e1f4bf2daa69c9e69766cebcfced43fa0ee926d4479becdc8a5b05b93ca6e81" already present on machine

openshift-insights

multus

insights-runtime-extractor-j9pck

AddedInterface

Add eth0 [10.132.0.12/23] from ovn-kubernetes

openshift-insights

kubelet

insights-runtime-extractor-g58gj

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8f3fc18a785145e05d543030637e66298ab943dae18b52aa448f6023bd0d8151"

openshift-insights

kubelet

insights-runtime-extractor-g58gj

Started

Started container kube-rbac-proxy

openshift-insights

kubelet

insights-runtime-extractor-g58gj

Created

Created container: kube-rbac-proxy

openshift-insights

kubelet

insights-runtime-extractor-g58gj

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8e1f4bf2daa69c9e69766cebcfced43fa0ee926d4479becdc8a5b05b93ca6e81" already present on machine

openshift-insights

multus

insights-runtime-extractor-g58gj

AddedInterface

Add eth0 [10.134.0.12/23] from ovn-kubernetes

openshift-monitoring

deployment-controller

prometheus-operator-admission-webhook

ScalingReplicaSet

Scaled up replica set prometheus-operator-admission-webhook-57cf98b594 from 0 to 1

openshift-monitoring

replicaset-controller

prometheus-operator-admission-webhook-57cf98b594

SuccessfulCreate

Created pod: prometheus-operator-admission-webhook-57cf98b594-mdwnn

openshift-monitoring

kubelet

prometheus-operator-admission-webhook-57cf98b594-mdwnn

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f5619914e3382d3af1528734314e50825c6f630262a2ffe2e32def74d81bff56"

openshift-monitoring

multus

prometheus-operator-admission-webhook-57cf98b594-mdwnn

AddedInterface

Add eth0 [10.134.0.13/23] from ovn-kubernetes

openshift-insights

kubelet

insights-runtime-extractor-2l9ld

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8f3fc18a785145e05d543030637e66298ab943dae18b52aa448f6023bd0d8151"

openshift-insights

kubelet

insights-runtime-extractor-2l9ld

Started

Started container kube-rbac-proxy

openshift-insights

kubelet

insights-runtime-extractor-2l9ld

Created

Created container: kube-rbac-proxy

openshift-insights

multus

insights-runtime-extractor-2l9ld

AddedInterface

Add eth0 [10.133.0.15/23] from ovn-kubernetes

openshift-service-ca-operator

service-ca-operator-status-controller-statussyncer_service-ca

service-ca-operator

OperatorStatusChanged

Status for clusteroperator/service-ca changed: Progressing changed from True to False ("Progressing: All service-ca-operator deployments updated")

openshift-service-ca-operator

service-ca-operator-status-controller-statussyncer_service-ca

service-ca-operator

OperatorStatusChanged

Status for clusteroperator/service-ca changed: status.versions changed from [] to [{"operator" "4.20.19"}]

openshift-service-ca-operator

service-ca-operator-status-controller-statussyncer_service-ca

service-ca-operator

OperatorVersionChanged

clusteroperator/service-ca version "operator" changed from "" to "4.20.19"

default

node-controller

ip-10-0-134-217.ec2.internal

RegisteredNode

Node ip-10-0-134-217.ec2.internal event: Registered Node ip-10-0-134-217.ec2.internal in Controller

openshift-monitoring

cluster-monitoring-operator-openshiftmonitoringtelemeterclientcertrequester

cluster-monitoring-operator

ClientCertificateCreated

A new client certificate for OpenShiftMonitoringTelemeterClientCertRequester is available

openshift-monitoring

cluster-monitoring-operator-openshiftmonitoringmetricsserverclientcertrequester

cluster-monitoring-operator

ClientCertificateCreated

A new client certificate for OpenShiftMonitoringMetricsServerClientCertRequester is available

default

node-controller

ip-10-0-137-228.ec2.internal

RegisteredNode

Node ip-10-0-137-228.ec2.internal event: Registered Node ip-10-0-137-228.ec2.internal in Controller

default

node-controller

ip-10-0-141-167.ec2.internal

RegisteredNode

Node ip-10-0-141-167.ec2.internal event: Registered Node ip-10-0-141-167.ec2.internal in Controller

openshift-monitoring

cluster-monitoring-operator-openshiftmonitoringclientcertrequester

cluster-monitoring-operator

ClientCertificateCreated

A new client certificate for OpenShiftMonitoringClientCertRequester is available

openshift-insights

daemonset-controller

insights-runtime-extractor

SuccessfulCreate

Created pod: insights-runtime-extractor-j9pck

openshift-image-registry

replicaset-controller

image-registry-6fd4d896fc

SuccessfulDelete

Deleted pod: image-registry-6fd4d896fc-ltlnc

openshift-image-registry | multus | image-registry-f6dccbfd7-k4p6p | AddedInterface | Add eth0 [10.134.0.14/23] from ovn-kubernetes
openshift-image-registry | kubelet | image-registry-f6dccbfd7-k4p6p | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:44136b348d0b6fc6ca07bd61902f4901177c2185045a72b42b2a1d2ad050e0f9" already present on machine
openshift-image-registry | kubelet | image-registry-f6dccbfd7-k4p6p | Created | Created container: registry
openshift-image-registry | kubelet | image-registry-f6dccbfd7-k4p6p | Started | Started container registry
openshift-image-registry | replicaset-controller | image-registry-f6dccbfd7 | SuccessfulCreate | Created pod: image-registry-f6dccbfd7-k4p6p
openshift-image-registry | deployment-controller | image-registry | ScalingReplicaSet | Scaled down replica set image-registry-6fd4d896fc from 1 to 0
openshift-image-registry | deployment-controller | image-registry | ScalingReplicaSet | Scaled up replica set image-registry-f6dccbfd7 from 0 to 1
openshift-console | multus | console-758f9c8856-gpqgw | AddedInterface | Add eth0 [10.133.0.17/23] from ovn-kubernetes
openshift-network-console | replicaset-controller | networking-console-plugin-cb95c66f6 | SuccessfulCreate | Created pod: networking-console-plugin-cb95c66f6-7htkt
openshift-console | kubelet | console-758f9c8856-gpqgw | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d0cc2dc261f075be17ea31eb148cce7fc0b11a4dc06add53d19e4f39df155ba0"
openshift-console | replicaset-controller | console-758f9c8856 | SuccessfulCreate | Created pod: console-758f9c8856-gpqgw (x2)
openshift-console | controllermanager | console | NoPods | No matching pods found
openshift-network-console | kubelet | networking-console-plugin-cb95c66f6-7htkt | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5d1004cb2c8bf8e26abd9b096907f2773938fb541870db93ff3e40a4b524f31b"
openshift-network-console | multus | networking-console-plugin-cb95c66f6-7htkt | AddedInterface | Add eth0 [10.133.0.18/23] from ovn-kubernetes
openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator | kube-storage-version-migrator-operator | OperatorStatusChanged | Status for clusteroperator/kube-storage-version-migrator changed: Progressing changed from True to False ("All is well"),Available changed from False to True ("All is well")
openshift-console | deployment-controller | downloads | ScalingReplicaSet | Scaled up replica set downloads-6bcc868b7 from 0 to 1 (x2)
openshift-console | controllermanager | downloads | NoPods | No matching pods found
openshift-console | deployment-controller | console | ScalingReplicaSet | Scaled up replica set console-758f9c8856 from 0 to 1
openshift-console | multus | downloads-6bcc868b7-7knnb | AddedInterface | Add eth0 [10.133.0.16/23] from ovn-kubernetes
openshift-console | kubelet | downloads-6bcc868b7-7knnb | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7945fa555c35f23e52e2bfb6550375d92f5dff043ecca99b7aa3383339ba0d91"

openshift-monitoring | kubelet | prometheus-operator-admission-webhook-57cf98b594-mdwnn | Created | Created container: prometheus-operator-admission-webhook
openshift-insights | kubelet | insights-runtime-extractor-g58gj | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8f3fc18a785145e05d543030637e66298ab943dae18b52aa448f6023bd0d8151" in 673ms (673ms including waiting). Image size: 405607150 bytes.
openshift-monitoring | kubelet | prometheus-operator-admission-webhook-57cf98b594-mdwnn | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f5619914e3382d3af1528734314e50825c6f630262a2ffe2e32def74d81bff56" in 914ms (914ms including waiting). Image size: 440805257 bytes.
openshift-monitoring | kubelet | prometheus-operator-admission-webhook-57cf98b594-mdwnn | Started | Started container prometheus-operator-admission-webhook
openshift-insights | kubelet | insights-runtime-extractor-2l9ld | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8f3fc18a785145e05d543030637e66298ab943dae18b52aa448f6023bd0d8151" in 826ms (826ms including waiting). Image size: 405607150 bytes.
openshift-insights | kubelet | insights-runtime-extractor-2l9ld | Created | Created container: exporter
openshift-insights | kubelet | insights-runtime-extractor-2l9ld | Started | Started container exporter
openshift-insights | kubelet | insights-runtime-extractor-2l9ld | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c12cfce95cb7b52d129eec71dec91a2cd7e820895ce1b9a111d68e3e3909aa78"
openshift-insights | kubelet | insights-runtime-extractor-g58gj | Created | Created container: exporter
openshift-insights | kubelet | insights-runtime-extractor-g58gj | Started | Started container exporter
openshift-insights | kubelet | insights-runtime-extractor-g58gj | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c12cfce95cb7b52d129eec71dec91a2cd7e820895ce1b9a111d68e3e3909aa78"
openshift-insights | kubelet | insights-runtime-extractor-j9pck | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c12cfce95cb7b52d129eec71dec91a2cd7e820895ce1b9a111d68e3e3909aa78"
openshift-insights | kubelet | insights-runtime-extractor-j9pck | Started | Started container exporter
openshift-insights | kubelet | insights-runtime-extractor-j9pck | Created | Created container: exporter
openshift-insights | kubelet | insights-runtime-extractor-j9pck | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8f3fc18a785145e05d543030637e66298ab943dae18b52aa448f6023bd0d8151" in 596ms (596ms including waiting). Image size: 405607150 bytes.
openshift-insights | kubelet | insights-runtime-extractor-j9pck | Started | Started container extractor
openshift-network-console | kubelet | networking-console-plugin-cb95c66f6-7htkt | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5d1004cb2c8bf8e26abd9b096907f2773938fb541870db93ff3e40a4b524f31b" in 1.256s (1.256s including waiting). Image size: 435916423 bytes.
openshift-insights | kubelet | insights-runtime-extractor-g58gj | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c12cfce95cb7b52d129eec71dec91a2cd7e820895ce1b9a111d68e3e3909aa78" in 1.253s (1.253s including waiting). Image size: 480669231 bytes.
openshift-network-console | kubelet | networking-console-plugin-cb95c66f6-7htkt | Started | Started container networking-console-plugin
openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ValidatingWebhookConfigurationCreated | Created ValidatingWebhookConfiguration.admissionregistration.k8s.io/alertmanagerconfigs.openshift.io because it was missing
openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ValidatingWebhookConfigurationCreated | Created ValidatingWebhookConfiguration.admissionregistration.k8s.io/prometheusrules.openshift.io because it was missing
openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceCreated | Created Service/prometheus-operator -n openshift-monitoring because it was missing
openshift-insights | kubelet | insights-runtime-extractor-j9pck | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c12cfce95cb7b52d129eec71dec91a2cd7e820895ce1b9a111d68e3e3909aa78" in 1.221s (1.221s including waiting). Image size: 480669231 bytes.
openshift-insights | kubelet | insights-runtime-extractor-g58gj | Created | Created container: extractor
openshift-insights | kubelet | insights-runtime-extractor-g58gj | Started | Started container extractor
openshift-insights | kubelet | insights-runtime-extractor-j9pck | Created | Created container: extractor
openshift-network-console | kubelet | networking-console-plugin-cb95c66f6-7htkt | Created | Created container: networking-console-plugin
openshift-monitoring | deployment-controller | prometheus-operator | ScalingReplicaSet | Scaled up replica set prometheus-operator-5676c8c784 from 0 to 1
openshift-monitoring | replicaset-controller | prometheus-operator-5676c8c784 | SuccessfulCreate | Created pod: prometheus-operator-5676c8c784-qmc6p
openshift-insights | kubelet | insights-runtime-extractor-2l9ld | Started | Started container extractor
openshift-console | kubelet | console-758f9c8856-gpqgw | Created | Created container: console
openshift-insights | kubelet | insights-runtime-extractor-2l9ld | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c12cfce95cb7b52d129eec71dec91a2cd7e820895ce1b9a111d68e3e3909aa78" in 2.82s (2.82s including waiting). Image size: 480669231 bytes.
openshift-insights | kubelet | insights-runtime-extractor-2l9ld | Created | Created container: extractor
openshift-monitoring | kubelet | prometheus-operator-5676c8c784-qmc6p | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:53c1f2b5271f98ba295bffe3d340dae30b7adbab811abfbbb4e2962c828de4a6"
openshift-monitoring | multus | prometheus-operator-5676c8c784-qmc6p | AddedInterface | Add eth0 [10.133.0.19/23] from ovn-kubernetes
openshift-console | kubelet | console-758f9c8856-gpqgw | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d0cc2dc261f075be17ea31eb148cce7fc0b11a4dc06add53d19e4f39df155ba0" in 3.694s (3.694s including waiting). Image size: 622989096 bytes.
openshift-console | kubelet | console-758f9c8856-gpqgw | Started | Started container console
openshift-monitoring | kubelet | prometheus-operator-5676c8c784-qmc6p | Started | Started container prometheus-operator
openshift-monitoring | kubelet | prometheus-operator-5676c8c784-qmc6p | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8e1f4bf2daa69c9e69766cebcfced43fa0ee926d4479becdc8a5b05b93ca6e81" already present on machine
openshift-monitoring | kubelet | prometheus-operator-5676c8c784-qmc6p | Created | Created container: prometheus-operator
openshift-monitoring | kubelet | prometheus-operator-5676c8c784-qmc6p | Created | Created container: kube-rbac-proxy
openshift-monitoring | kubelet | prometheus-operator-5676c8c784-qmc6p | Started | Started container kube-rbac-proxy
openshift-monitoring | kubelet | prometheus-operator-5676c8c784-qmc6p | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:53c1f2b5271f98ba295bffe3d340dae30b7adbab811abfbbb4e2962c828de4a6" in 1.18s (1.18s including waiting). Image size: 460015067 bytes.
openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Degraded message changed from "RouteHealthDegraded: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:37594->172.31.0.10:53: read: connection refused" to "RouteHealthDegraded: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": EOF",Available message changed from "DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": dial tcp: lookup console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com on 172.31.0.10:53: read udp 10.133.0.9:37594->172.31.0.10:53: read: connection refused" to "DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": EOF"

openshift-monitoring | kubelet | node-exporter-75wkc | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8b4514b20ab8125dc4f2ee9661f8363c837031926b40e7c54a36d1efa08456d1"
openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceAccountCreated | Created ServiceAccount/telemeter-client -n openshift-monitoring because it was missing
openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/alert-routing-edit because it was missing
openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/prometheus-k8s because it was missing
openshift-monitoring | kubelet | kube-state-metrics-69db897b98-g982r | FailedMount | MountVolume.SetUp failed for volume "kube-state-metrics-tls" : secret "kube-state-metrics-tls" not found
openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/cluster-monitoring-view because it was missing
openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator because it was missing
openshift-monitoring | kubelet | node-exporter-295ld | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8b4514b20ab8125dc4f2ee9661f8363c837031926b40e7c54a36d1efa08456d1"
openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/system:aggregated-metrics-reader because it was missing
openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/monitoring-rules-view because it was missing
openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/metrics-server-auth-reader -n kube-system because it was missing
openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/node-exporter because it was missing
openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceCreated | Created Service/metrics-server -n openshift-monitoring because it was missing
openshift-monitoring | replicaset-controller | kube-state-metrics-69db897b98 | SuccessfulCreate | Created pod: kube-state-metrics-69db897b98-g982r
openshift-monitoring | deployment-controller | kube-state-metrics | ScalingReplicaSet | Scaled up replica set kube-state-metrics-69db897b98 from 0 to 1
openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/user-workload-monitoring-config-edit -n openshift-user-workload-monitoring because it was missing
openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/monitoring-alertmanager-api-reader -n openshift-user-workload-monitoring because it was missing
openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:metrics-server because it was missing
openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/node-exporter because it was missing
openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/telemeter-client because it was missing
openshift-monitoring | kubelet | openshift-state-metrics-9d44df66c-l7k5k | FailedMount | MountVolume.SetUp failed for volume "openshift-state-metrics-tls" : secret "openshift-state-metrics-tls" not found
openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/system:metrics-server because it was missing
openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/kube-state-metrics because it was missing
openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceAccountCreated | Created ServiceAccount/openshift-state-metrics -n openshift-monitoring because it was missing
openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceAccountCreated | Created ServiceAccount/kube-state-metrics -n openshift-monitoring because it was missing
openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/openshift-state-metrics because it was missing
openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceAccountCreated | Created ServiceAccount/node-exporter -n openshift-monitoring because it was missing
openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceCreated | Created Service/thanos-querier -n openshift-monitoring because it was missing
openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/monitoring-edit because it was missing
openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ConfigMapCreated | Created ConfigMap/kube-state-metrics-custom-resource-state-configmap -n openshift-monitoring because it was missing
openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ConfigMapCreated | Created ConfigMap/node-exporter-accelerators-collector-config -n openshift-monitoring because it was missing
openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceCreated | Created Service/kube-state-metrics -n openshift-monitoring because it was missing
openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceCreated | Created Service/node-exporter -n openshift-monitoring because it was missing
openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceAccountCreated | Created ServiceAccount/metrics-server -n openshift-monitoring because it was missing
openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceCreated | Created Service/openshift-state-metrics -n openshift-monitoring because it was missing
openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ConfigMapCreated | Created ConfigMap/kubelet-serving-ca-bundle -n openshift-monitoring because it was missing
openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/pod-metrics-reader because it was missing
openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/monitoring-rules-edit because it was missing
openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/telemeter-client because it was missing
openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/kube-state-metrics because it was missing
openshift-monitoring | deployment-controller | openshift-state-metrics | ScalingReplicaSet | Scaled up replica set openshift-state-metrics-9d44df66c from 0 to 1
openshift-monitoring | replicaset-controller | openshift-state-metrics-9d44df66c | SuccessfulCreate | Created pod: openshift-state-metrics-9d44df66c-l7k5k
openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleBindingCreateFailed | Failed to create ClusterRoleBinding.rbac.authorization.k8s.io/telemeter-client-view: clusterrolebindings.rbac.authorization.k8s.io "cluster-monitoring-view" not found

openshift-monitoring | kubelet | node-exporter-hbqsr | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8b4514b20ab8125dc4f2ee9661f8363c837031926b40e7c54a36d1efa08456d1"
openshift-monitoring | daemonset-controller | node-exporter | SuccessfulCreate | Created pod: node-exporter-75wkc
openshift-monitoring | daemonset-controller | node-exporter | SuccessfulCreate | Created pod: node-exporter-hbqsr
openshift-monitoring | daemonset-controller | node-exporter | SuccessfulCreate | Created pod: node-exporter-295ld
openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/openshift-state-metrics because it was missing
openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/prometheus-k8s because it was missing
openshift-monitoring | kubelet | node-exporter-295ld | Created | Created container: init-textfile
openshift-monitoring | kubelet | openshift-state-metrics-9d44df66c-l7k5k | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8029cdbdb141db727b89ca3f252f0878cf7f2ff2710e3865c52051bffef7e7ab"
openshift-monitoring | statefulset-controller | alertmanager-main | SuccessfulCreate | create Pod alertmanager-main-0 in StatefulSet alertmanager-main successful
openshift-monitoring | kubelet | alertmanager-main-0 | FailedMount | MountVolume.SetUp failed for volume "alertmanager-trusted-ca-bundle" : configmap references non-existent config key: ca-bundle.crt
openshift-monitoring | kubelet | alertmanager-main-0 | FailedMount | MountVolume.SetUp failed for volume "secret-alertmanager-main-tls" : secret "alertmanager-main-tls" not found
openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/monitoring-alertmanager-api-writer -n openshift-user-workload-monitoring because it was missing
openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/monitoring-alertmanager-edit -n openshift-monitoring because it was missing
openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | SecretCreated | Created Secret/thanos-querier-kube-rbac-proxy-web -n openshift-monitoring because it was missing
openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceAccountCreated | Created ServiceAccount/thanos-querier -n openshift-monitoring because it was missing
openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/alertmanager-main because it was missing
openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/thanos-querier because it was missing
openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/monitoring-alertmanager-view -n openshift-monitoring because it was missing
openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/thanos-querier because it was missing
openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/alertmanager-main because it was missing
openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceAccountCreated | Created ServiceAccount/alertmanager-main -n openshift-monitoring because it was missing
openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/cluster-monitoring-metrics-api -n openshift-monitoring because it was missing
openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceCreated | Created Service/alertmanager-main -n openshift-monitoring because it was missing
openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ConfigMapCreated | Created ConfigMap/alertmanager-trusted-ca-bundle -n openshift-monitoring because it was missing
openshift-monitoring | multus | kube-state-metrics-69db897b98-g982r | AddedInterface | Add eth0 [10.134.0.15/23] from ovn-kubernetes
openshift-monitoring | kubelet | kube-state-metrics-69db897b98-g982r | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:043b48f312aaf6b02960d8ada763ddb9d3f90b1baa6c2e7cf98cd33f5a6e2795"
openshift-monitoring | kubelet | node-exporter-295ld | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8b4514b20ab8125dc4f2ee9661f8363c837031926b40e7c54a36d1efa08456d1" in 721ms (721ms including waiting). Image size: 420585449 bytes.
openshift-monitoring | kubelet | node-exporter-295ld | Started | Started container init-textfile
openshift-monitoring | kubelet | node-exporter-75wkc | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8b4514b20ab8125dc4f2ee9661f8363c837031926b40e7c54a36d1efa08456d1" in 731ms (731ms including waiting). Image size: 420585449 bytes.
openshift-monitoring | kubelet | node-exporter-75wkc | Created | Created container: init-textfile
openshift-monitoring | kubelet | node-exporter-75wkc | Started | Started container init-textfile
openshift-monitoring | kubelet | node-exporter-hbqsr | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8b4514b20ab8125dc4f2ee9661f8363c837031926b40e7c54a36d1efa08456d1" in 703ms (703ms including waiting). Image size: 420585449 bytes.
openshift-monitoring | kubelet | node-exporter-hbqsr | Created | Created container: init-textfile
openshift-monitoring | kubelet | node-exporter-hbqsr | Started | Started container init-textfile
openshift-monitoring | multus | openshift-state-metrics-9d44df66c-l7k5k | AddedInterface | Add eth0 [10.133.0.20/23] from ovn-kubernetes
openshift-monitoring | kubelet | openshift-state-metrics-9d44df66c-l7k5k | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8e1f4bf2daa69c9e69766cebcfced43fa0ee926d4479becdc8a5b05b93ca6e81" already present on machine
openshift-monitoring | kubelet | openshift-state-metrics-9d44df66c-l7k5k | Created | Created container: kube-rbac-proxy-main
openshift-monitoring | kubelet | openshift-state-metrics-9d44df66c-l7k5k | Started | Started container kube-rbac-proxy-main
openshift-monitoring | kubelet | openshift-state-metrics-9d44df66c-l7k5k | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8e1f4bf2daa69c9e69766cebcfced43fa0ee926d4479becdc8a5b05b93ca6e81" already present on machine
openshift-monitoring | kubelet | openshift-state-metrics-9d44df66c-l7k5k | Created | Created container: kube-rbac-proxy-self
openshift-monitoring | kubelet | openshift-state-metrics-9d44df66c-l7k5k | Started | Started container kube-rbac-proxy-self
openshift-monitoring | kubelet | node-exporter-295ld | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8b4514b20ab8125dc4f2ee9661f8363c837031926b40e7c54a36d1efa08456d1" already present on machine
openshift-monitoring | kubelet | openshift-state-metrics-9d44df66c-l7k5k | Started | Started container openshift-state-metrics
openshift-monitoring | kubelet | alertmanager-main-0 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6e5c2bd9ad6303f25b717fb59bbd0872fad142656c8c5d2d2629d0f66b19a5b9"
openshift-monitoring | multus | alertmanager-main-0 | AddedInterface | Add eth0 [10.133.0.21/23] from ovn-kubernetes
openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | SecretCreated | Created Secret/kube-rbac-proxy -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

SecretCreated

Created Secret/prometheus-k8s-kube-rbac-proxy-web -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceAccountCreated

Created ServiceAccount/prometheus-k8s -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/alertmanager-prometheusk8s -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/prometheus-k8s-config -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/prometheus-k8s -n default because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

SecretCreated

Created Secret/grpc-tls -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/prometheus-k8s -n kube-system because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/prometheus-k8s -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/prometheus-k8s -n openshift-user-workload-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/prometheus-k8s -n default because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/prometheus-k8s -n kube-system because it was missing

openshift-monitoring

kubelet

kube-state-metrics-69db897b98-g982r

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:043b48f312aaf6b02960d8ada763ddb9d3f90b1baa6c2e7cf98cd33f5a6e2795" in 1.117s (1.117s including waiting). Image size: 455965021 bytes.

openshift-monitoring

kubelet

kube-state-metrics-69db897b98-g982r

Created

Created container: kube-state-metrics

openshift-monitoring

kubelet

kube-state-metrics-69db897b98-g982r

Started

Started container kube-state-metrics

openshift-monitoring

kubelet

kube-state-metrics-69db897b98-g982r

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8e1f4bf2daa69c9e69766cebcfced43fa0ee926d4479becdc8a5b05b93ca6e81" already present on machine

openshift-monitoring

kubelet

kube-state-metrics-69db897b98-g982r

Created

Created container: kube-rbac-proxy-main

openshift-monitoring

kubelet

kube-state-metrics-69db897b98-g982r

Started

Started container kube-rbac-proxy-main

openshift-monitoring

kubelet

kube-state-metrics-69db897b98-g982r

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8e1f4bf2daa69c9e69766cebcfced43fa0ee926d4479becdc8a5b05b93ca6e81" already present on machine

openshift-monitoring | kubelet | kube-state-metrics-69db897b98-g982r | Created | Created container: kube-rbac-proxy-self
openshift-monitoring | kubelet | kube-state-metrics-69db897b98-g982r | Started | Started container kube-rbac-proxy-self
openshift-monitoring | kubelet | node-exporter-hbqsr | Started | Started container kube-rbac-proxy
openshift-monitoring | kubelet | node-exporter-295ld | Created | Created container: kube-rbac-proxy
openshift-monitoring | kubelet | node-exporter-295ld | Created | Created container: node-exporter
openshift-monitoring | kubelet | node-exporter-295ld | Started | Started container node-exporter
openshift-monitoring | kubelet | node-exporter-295ld | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8e1f4bf2daa69c9e69766cebcfced43fa0ee926d4479becdc8a5b05b93ca6e81" already present on machine
openshift-monitoring | kubelet | node-exporter-295ld | Started | Started container kube-rbac-proxy
openshift-monitoring | kubelet | node-exporter-75wkc | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8b4514b20ab8125dc4f2ee9661f8363c837031926b40e7c54a36d1efa08456d1" already present on machine
openshift-monitoring | kubelet | node-exporter-75wkc | Created | Created container: node-exporter
openshift-monitoring | kubelet | node-exporter-75wkc | Started | Started container node-exporter
openshift-monitoring | kubelet | node-exporter-75wkc | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8e1f4bf2daa69c9e69766cebcfced43fa0ee926d4479becdc8a5b05b93ca6e81" already present on machine
openshift-monitoring | kubelet | node-exporter-75wkc | Created | Created container: kube-rbac-proxy
openshift-monitoring | kubelet | node-exporter-75wkc | Started | Started container kube-rbac-proxy
openshift-monitoring | kubelet | node-exporter-hbqsr | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8b4514b20ab8125dc4f2ee9661f8363c837031926b40e7c54a36d1efa08456d1" already present on machine
openshift-monitoring | kubelet | node-exporter-hbqsr | Created | Created container: node-exporter
openshift-monitoring | kubelet | node-exporter-hbqsr | Started | Started container node-exporter
openshift-monitoring | kubelet | node-exporter-hbqsr | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8e1f4bf2daa69c9e69766cebcfced43fa0ee926d4479becdc8a5b05b93ca6e81" already present on machine
openshift-monitoring | kubelet | node-exporter-hbqsr | Created | Created container: kube-rbac-proxy
openshift-monitoring | kubelet | openshift-state-metrics-9d44df66c-l7k5k | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8029cdbdb141db727b89ca3f252f0878cf7f2ff2710e3865c52051bffef7e7ab" in 879ms (879ms including waiting). Image size: 433217797 bytes.
openshift-monitoring | kubelet | openshift-state-metrics-9d44df66c-l7k5k | Created | Created container: openshift-state-metrics
openshift-monitoring | replicaset-controller | thanos-querier-79b5647b94 | SuccessfulCreate | Created pod: thanos-querier-79b5647b94-kphgj
openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/prometheus-k8s -n openshift-user-workload-monitoring because it was missing
openshift-monitoring | kubelet | alertmanager-main-0 | Started | Started container init-config-reloader
openshift-monitoring | kubelet | alertmanager-main-0 | Created | Created container: init-config-reloader
openshift-monitoring | kubelet | thanos-querier-79b5647b94-kphgj | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2dc1a8dd650246763b70016e6a9b993684cd895b641698b318e0651843be46a4"
openshift-monitoring | multus | thanos-querier-79b5647b94-kphgj | AddedInterface | Add eth0 [10.134.0.16/23] from ovn-kubernetes
openshift-monitoring | kubelet | alertmanager-main-0 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6e5c2bd9ad6303f25b717fb59bbd0872fad142656c8c5d2d2629d0f66b19a5b9" in 1.047s (1.047s including waiting). Image size: 440292722 bytes.
openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Progressing changed from True to False ("All is well"),Available message changed from "DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": EOF" to "RouteHealthAvailable: failed to GET route (https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com): Get \"https://console-openshift-console.apps.db24fc8e-a688-4988-83ac-51abadbf06a4.prod.konfluxeaas.com\": EOF"
openshift-monitoring | deployment-controller | thanos-querier | ScalingReplicaSet | Scaled up replica set thanos-querier-79b5647b94 from 0 to 1
openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/prometheus-k8s -n openshift-monitoring because it was missing
openshift-monitoring | kubelet | alertmanager-main-0 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0db64b64455cbdced82f3ce13f5de55d5c61203b153f4330ada518d98f15a521"
openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | SecretCreated | Created Secret/metrics-server-523mmfh8ume0a -n openshift-monitoring because it was missing
openshift-monitoring | replicaset-controller | metrics-server-576679f874 | SuccessfulCreate | Created pod: metrics-server-576679f874-8p2ck
openshift-monitoring | deployment-controller | metrics-server | ScalingReplicaSet | Scaled up replica set metrics-server-576679f874 from 0 to 1
openshift-monitoring | kubelet | thanos-querier-79b5647b94-kphgj | Started | Started container kube-rbac-proxy-web
openshift-monitoring | kubelet | thanos-querier-79b5647b94-kphgj | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8e1f4bf2daa69c9e69766cebcfced43fa0ee926d4479becdc8a5b05b93ca6e81" already present on machine
openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Progressing message changed from "SyncLoopRefreshProgressing: changes made during sync updates, additional sync expected" to "SyncLoopRefreshProgressing: working toward version 4.20.19, 1 replicas available"
openshift-monitoring | kubelet | thanos-querier-79b5647b94-kphgj | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5522f37104f3fac57567fa2e9ec65601f60b8cea3603b12dcda26db8c481f404"
openshift-console | deployment-controller | console | ScalingReplicaSet | Scaled up replica set console-74748b6745 from 0 to 1
openshift-monitoring | kubelet | thanos-querier-79b5647b94-kphgj | Started | Started container thanos-query
openshift-monitoring | kubelet | thanos-querier-79b5647b94-kphgj | Created | Created container: kube-rbac-proxy-web
openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceAccountCreated | Created ServiceAccount/monitoring-plugin -n openshift-monitoring because it was missing
openshift-monitoring | multus | metrics-server-576679f874-8p2ck | AddedInterface | Add eth0 [10.134.0.17/23] from ovn-kubernetes
openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: status.relatedObjects changed from [{"console.openshift.io" "consoleplugins" "" "networking-console-plugin"} {"" "namespaces" "" "openshift-network-console"} {"operator.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "infrastructures" "" "cluster"} {"config.openshift.io" "proxies" "" "cluster"} {"config.openshift.io" "oauths" "" "cluster"} {"oauth.openshift.io" "oauthclients" "" "console"} {"" "namespaces" "" "openshift-console-operator"} {"" "namespaces" "" "openshift-console"} {"" "configmaps" "openshift-config-managed" "console-public"}] to [{"console.openshift.io" "consoleplugins" "" "monitoring-plugin"} {"" "namespaces" "" "openshift-monitoring"} {"console.openshift.io" "consoleplugins" "" "networking-console-plugin"} {"" "namespaces" "" "openshift-network-console"} {"operator.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "infrastructures" "" "cluster"} {"config.openshift.io" "proxies" "" "cluster"} {"config.openshift.io" "oauths" "" "cluster"} {"oauth.openshift.io" "oauthclients" "" "console"} {"" "namespaces" "" "openshift-console-operator"} {"" "namespaces" "" "openshift-console"} {"" "configmaps" "openshift-config-managed" "console-public"}]
openshift-monitoring | kubelet | metrics-server-576679f874-8p2ck | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9742eb7f19f176ef0413fab6a0a8fb33b12972fad6305c921ce7ec2160e6f62a"
openshift-monitoring | kubelet | thanos-querier-79b5647b94-kphgj | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8e1f4bf2daa69c9e69766cebcfced43fa0ee926d4479becdc8a5b05b93ca6e81" already present on machine
openshift-monitoring | kubelet | thanos-querier-79b5647b94-kphgj | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2dc1a8dd650246763b70016e6a9b993684cd895b641698b318e0651843be46a4" in 1.754s (1.754s including waiting). Image size: 515257905 bytes.
openshift-monitoring | kubelet | thanos-querier-79b5647b94-kphgj | Created | Created container: thanos-query
openshift-monitoring | replicaset-controller | monitoring-plugin-7dccd58f55 | SuccessfulCreate | Created pod: monitoring-plugin-7dccd58f55-r9cgr
openshift-monitoring | deployment-controller | monitoring-plugin | ScalingReplicaSet | Scaled up replica set monitoring-plugin-7dccd58f55 from 0 to 1
openshift-monitoring | kubelet | thanos-querier-79b5647b94-kphgj | Created | Created container: kube-rbac-proxy
openshift-monitoring | kubelet | thanos-querier-79b5647b94-kphgj | Started | Started container kube-rbac-proxy
openshift-console | replicaset-controller | console-74748b6745 | SuccessfulCreate | Created pod: console-74748b6745-5hk4w
openshift-monitoring | statefulset-controller | prometheus-k8s | SuccessfulCreate | create Pod prometheus-k8s-0 in StatefulSet prometheus-k8s successful
openshift-monitoring | kubelet | thanos-querier-79b5647b94-kphgj | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5522f37104f3fac57567fa2e9ec65601f60b8cea3603b12dcda26db8c481f404" in 557ms (557ms including waiting). Image size: 415229828 bytes.
openshift-monitoring | kubelet | thanos-querier-79b5647b94-kphgj | Created | Created container: kube-rbac-proxy-rules
openshift-monitoring | kubelet | thanos-querier-79b5647b94-kphgj | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8e1f4bf2daa69c9e69766cebcfced43fa0ee926d4479becdc8a5b05b93ca6e81" already present on machine
openshift-monitoring | kubelet | thanos-querier-79b5647b94-kphgj | Started | Started container prom-label-proxy
openshift-monitoring | kubelet | thanos-querier-79b5647b94-kphgj | Created | Created container: prom-label-proxy
openshift-monitoring | kubelet | thanos-querier-79b5647b94-kphgj | Started | Started container kube-rbac-proxy-metrics
openshift-monitoring | kubelet | thanos-querier-79b5647b94-kphgj | Started | Started container kube-rbac-proxy-rules
openshift-monitoring | kubelet | thanos-querier-79b5647b94-kphgj | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8e1f4bf2daa69c9e69766cebcfced43fa0ee926d4479becdc8a5b05b93ca6e81" already present on machine
openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | SecretCreated | Created Secret/prometheus-k8s-grpc-tls-7f9nk2qgmde3q -n openshift-monitoring because it was missing
openshift-monitoring | kubelet | thanos-querier-79b5647b94-kphgj | Created | Created container: kube-rbac-proxy-metrics
openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Degraded changed from True to False ("All is well"),Available changed from False to True ("All is well")
openshift-monitoring | kubelet | metrics-server-576679f874-8p2ck | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9742eb7f19f176ef0413fab6a0a8fb33b12972fad6305c921ce7ec2160e6f62a" in 1.255s (1.255s including waiting). Image size: 478464565 bytes.
openshift-monitoring | kubelet | metrics-server-576679f874-8p2ck | Started | Started container metrics-server
openshift-monitoring | kubelet | metrics-server-576679f874-8p2ck | Created | Created container: metrics-server
openshift-monitoring | kubelet | alertmanager-main-0 | Started | Started container alertmanager
openshift-console | kubelet | console-74748b6745-5hk4w | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d0cc2dc261f075be17ea31eb148cce7fc0b11a4dc06add53d19e4f39df155ba0" already present on machine
openshift-monitoring | multus | prometheus-k8s-0 | AddedInterface | Add eth0 [10.133.0.24/23] from ovn-kubernetes
openshift-monitoring | kubelet | prometheus-k8s-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6e5c2bd9ad6303f25b717fb59bbd0872fad142656c8c5d2d2629d0f66b19a5b9" already present on machine
openshift-monitoring | kubelet | alertmanager-main-0 | Started | Started container kube-rbac-proxy-metric
openshift-monitoring | kubelet | alertmanager-main-0 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5522f37104f3fac57567fa2e9ec65601f60b8cea3603b12dcda26db8c481f404"
openshift-monitoring | kubelet | alertmanager-main-0 | Started | Started container kube-rbac-proxy-web
openshift-monitoring | kubelet | monitoring-plugin-7dccd58f55-r9cgr | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9cabb511cb310970571a48bdd4c1f2ef763ce529a5a919e0a64772c73f2813f9"
openshift-monitoring | kubelet | alertmanager-main-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8e1f4bf2daa69c9e69766cebcfced43fa0ee926d4479becdc8a5b05b93ca6e81" already present on machine
openshift-console | kubelet | downloads-6bcc868b7-7knnb | Started | Started container download-server
openshift-console | kubelet | downloads-6bcc868b7-7knnb | Created | Created container: download-server
openshift-console | kubelet | downloads-6bcc868b7-7knnb | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7945fa555c35f23e52e2bfb6550375d92f5dff043ecca99b7aa3383339ba0d91" in 19.645s (19.645s including waiting). Image size: 2219245123 bytes.
openshift-monitoring | multus | monitoring-plugin-7dccd58f55-r9cgr | AddedInterface | Add eth0 [10.133.0.23/23] from ovn-kubernetes
openshift-monitoring | kubelet | prometheus-k8s-0 | Created | Created container: init-config-reloader
openshift-monitoring | kubelet | prometheus-k8s-0 | Started | Started container init-config-reloader
openshift-monitoring | kubelet | alertmanager-main-0 | Started | Started container config-reloader
openshift-monitoring | kubelet | alertmanager-main-0 | Created | Created container: config-reloader
openshift-monitoring | kubelet | alertmanager-main-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8e1f4bf2daa69c9e69766cebcfced43fa0ee926d4479becdc8a5b05b93ca6e81" already present on machine
openshift-console | kubelet | console-74748b6745-5hk4w | Started | Started container console
openshift-monitoring | kubelet | alertmanager-main-0 | Created | Created container: kube-rbac-proxy
openshift-console | kubelet | console-74748b6745-5hk4w | Created | Created container: console
openshift-console | multus | console-74748b6745-5hk4w | AddedInterface | Add eth0 [10.133.0.22/23] from ovn-kubernetes
openshift-monitoring | kubelet | alertmanager-main-0 | Created | Created container: kube-rbac-proxy-web
openshift-monitoring | kubelet | alertmanager-main-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8e1f4bf2daa69c9e69766cebcfced43fa0ee926d4479becdc8a5b05b93ca6e81" already present on machine
openshift-monitoring | kubelet | alertmanager-main-0 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0db64b64455cbdced82f3ce13f5de55d5c61203b153f4330ada518d98f15a521" in 8.077s (8.077s including waiting). Image size: 468536373 bytes.
openshift-monitoring | kubelet | alertmanager-main-0 | Created | Created container: kube-rbac-proxy-metric
openshift-monitoring | kubelet | alertmanager-main-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6e5c2bd9ad6303f25b717fb59bbd0872fad142656c8c5d2d2629d0f66b19a5b9" already present on machine
openshift-monitoring | kubelet | alertmanager-main-0 | Created | Created container: alertmanager
openshift-monitoring | kubelet | alertmanager-main-0 | Started | Started container kube-rbac-proxy
openshift-monitoring | kubelet | alertmanager-main-0 | Created | Created container: prom-label-proxy
openshift-monitoring | kubelet | alertmanager-main-0 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5522f37104f3fac57567fa2e9ec65601f60b8cea3603b12dcda26db8c481f404" in 805ms (805ms including waiting). Image size: 415229828 bytes.
openshift-monitoring | kubelet | prometheus-k8s-0 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a21eac79f5f40f67e0dd4f95643b2ec4e9e5766091790fcf0f3d1f4fbd7181f2"
openshift-monitoring | kubelet | alertmanager-main-0 | Started | Started container prom-label-proxy
openshift-image-registry | deployment-controller | image-registry | ScalingReplicaSet | Scaled down replica set image-registry-66f5f8d5cd from 1 to 0
openshift-monitoring | kubelet | monitoring-plugin-7dccd58f55-r9cgr | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9cabb511cb310970571a48bdd4c1f2ef763ce529a5a919e0a64772c73f2813f9" in 2.213s (2.213s including waiting). Image size: 456696728 bytes.
openshift-image-registry | replicaset-controller | image-registry-66f5f8d5cd | SuccessfulDelete | Deleted pod: image-registry-66f5f8d5cd-rgqhw
openshift-monitoring | kubelet | monitoring-plugin-7dccd58f55-r9cgr | Started | Started container monitoring-plugin
openshift-monitoring | kubelet | monitoring-plugin-7dccd58f55-r9cgr | Created | Created container: monitoring-plugin
openshift-console | replicaset-controller | console-758f9c8856 | SuccessfulDelete | Deleted pod: console-758f9c8856-gpqgw
openshift-monitoring | kubelet | prometheus-k8s-0 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a21eac79f5f40f67e0dd4f95643b2ec4e9e5766091790fcf0f3d1f4fbd7181f2" in 3.496s (3.496s including waiting). Image size: 617255526 bytes.
openshift-monitoring | kubelet | prometheus-k8s-0 | Created | Created container: prometheus
openshift-console | deployment-controller | console | ScalingReplicaSet | Scaled down replica set console-758f9c8856 from 1 to 0
openshift-console | kubelet | console-758f9c8856-gpqgw | Killing | Stopping container console
openshift-monitoring | kubelet | prometheus-k8s-0 | Started | Started container config-reloader
openshift-monitoring | kubelet | prometheus-k8s-0 | Started | Started container prometheus
openshift-monitoring | kubelet | prometheus-k8s-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6e5c2bd9ad6303f25b717fb59bbd0872fad142656c8c5d2d2629d0f66b19a5b9" already present on machine
openshift-monitoring | kubelet | prometheus-k8s-0 | Created | Created container: config-reloader
openshift-monitoring | kubelet | prometheus-k8s-0 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2dc1a8dd650246763b70016e6a9b993684cd895b641698b318e0651843be46a4"
openshift-monitoring | kubelet | prometheus-k8s-0 | Created | Created container: kube-rbac-proxy-web
openshift-monitoring | kubelet | prometheus-k8s-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8e1f4bf2daa69c9e69766cebcfced43fa0ee926d4479becdc8a5b05b93ca6e81" already present on machine
openshift-monitoring | kubelet | prometheus-k8s-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8e1f4bf2daa69c9e69766cebcfced43fa0ee926d4479becdc8a5b05b93ca6e81" already present on machine
openshift-monitoring | kubelet | prometheus-k8s-0 | Created | Created container: kube-rbac-proxy
openshift-monitoring | kubelet | prometheus-k8s-0 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2dc1a8dd650246763b70016e6a9b993684cd895b641698b318e0651843be46a4" in 2.356s (2.356s including waiting). Image size: 515257905 bytes.
openshift-monitoring | kubelet | prometheus-k8s-0 | Started | Started container kube-rbac-proxy
openshift-monitoring | kubelet | prometheus-k8s-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8e1f4bf2daa69c9e69766cebcfced43fa0ee926d4479becdc8a5b05b93ca6e81" already present on machine
openshift-monitoring | kubelet | prometheus-k8s-0 | Created | Created container: thanos-sidecar
openshift-monitoring | kubelet | prometheus-k8s-0 | Started | Started container thanos-sidecar
openshift-monitoring | kubelet | prometheus-k8s-0 | Started | Started container kube-rbac-proxy-web
openshift-monitoring | kubelet | prometheus-k8s-0 | Started | Started container kube-rbac-proxy-thanos
openshift-monitoring | kubelet | prometheus-k8s-0 | Created | Created container: kube-rbac-proxy-thanos
openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | APIServiceCreated | Created APIService.apiregistration.k8s.io/v1beta1.metrics.k8s.io because it was missing
openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ValidatingWebhookConfigurationUpdated | Updated ValidatingWebhookConfiguration.admissionregistration.k8s.io/prometheusrules.openshift.io because it changed
openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ValidatingWebhookConfigurationUpdated | Updated ValidatingWebhookConfiguration.admissionregistration.k8s.io/alertmanagerconfigs.openshift.io because it changed
openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceCreated | Created Service/telemeter-client -n openshift-monitoring because it was missing
openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/telemeter-client-view because it was missing
openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | SecretCreated | Created Secret/telemeter-client -n openshift-monitoring because it was missing
openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | SecretCreated | Created Secret/telemeter-client-kube-rbac-proxy-config -n openshift-monitoring because it was missing
openshift-monitoring | replicaset-controller | telemeter-client-5f5f55ddc7 | SuccessfulCreate | Created pod: telemeter-client-5f5f55ddc7-66h44
openshift-monitoring | deployment-controller | telemeter-client | ScalingReplicaSet | Scaled up replica set telemeter-client-5f5f55ddc7 from 0 to 1
openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ConfigMapCreated | Created ConfigMap/telemeter-trusted-ca-bundle-8i12ta5c71j38 -n openshift-monitoring because it was missing
openshift-monitoring | multus | telemeter-client-5f5f55ddc7-66h44 | AddedInterface | Add eth0 [10.133.0.25/23] from ovn-kubernetes
openshift-monitoring | kubelet | telemeter-client-5f5f55ddc7-66h44 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0a111be13165f4ef25fdd79d316f876e3aeb09c5add81f6e145111759417b435"
openshift-monitoring | kubelet | telemeter-client-5f5f55ddc7-66h44 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0a111be13165f4ef25fdd79d316f876e3aeb09c5add81f6e145111759417b435" in 1.477s (1.477s including waiting). Image size: 487477363 bytes.
openshift-monitoring | kubelet | telemeter-client-5f5f55ddc7-66h44 | Started | Started container kube-rbac-proxy
openshift-monitoring | kubelet | telemeter-client-5f5f55ddc7-66h44 | Created | Created container: telemeter-client
openshift-monitoring | kubelet | telemeter-client-5f5f55ddc7-66h44 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6e5c2bd9ad6303f25b717fb59bbd0872fad142656c8c5d2d2629d0f66b19a5b9" already present on machine
openshift-monitoring | kubelet | telemeter-client-5f5f55ddc7-66h44 | Started | Started container telemeter-client
openshift-monitoring | kubelet | telemeter-client-5f5f55ddc7-66h44 | Created | Created container: kube-rbac-proxy
openshift-monitoring | kubelet | telemeter-client-5f5f55ddc7-66h44 | Created | Created container: reload
openshift-monitoring | kubelet | telemeter-client-5f5f55ddc7-66h44 | Started | Started container reload
openshift-monitoring | kubelet | telemeter-client-5f5f55ddc7-66h44 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8e1f4bf2daa69c9e69766cebcfced43fa0ee926d4479becdc8a5b05b93ca6e81" already present on machine
openshift-console | replicaset-controller | console-755cd4b745 | SuccessfulCreate | Created pod: console-755cd4b745-k4bj5
openshift-console | kubelet | console-755cd4b745-k4bj5 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d0cc2dc261f075be17ea31eb148cce7fc0b11a4dc06add53d19e4f39df155ba0" already present on machine
openshift-console | multus | console-755cd4b745-k4bj5 | AddedInterface | Add eth0 [10.133.0.26/23] from ovn-kubernetes
openshift-console | deployment-controller | console | ScalingReplicaSet | Scaled up replica set console-755cd4b745 from 0 to 1
openshift-console | kubelet | console-755cd4b745-k4bj5 | Started | Started container console
openshift-console | kubelet | console-755cd4b745-k4bj5 | Created | Created container: console
openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ConfigMapCreated | Created ConfigMap/monitoring-shared-config -n openshift-config-managed because it was missing
openshift-console | deployment-controller | console | ScalingReplicaSet | Scaled down replica set console-755cd4b745 from 1 to 0
openshift-console | replicaset-controller | console-5575fcffc4 | SuccessfulCreate | Created pod: console-5575fcffc4-cjbgc
openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Progressing message changed from "SyncLoopRefreshProgressing: working toward version 4.20.19, 1 replicas available" to "SyncLoopRefreshProgressing: changes made during sync updates, additional sync expected"
openshift-console | kubelet | console-5575fcffc4-cjbgc | Started | Started container console
openshift-console | replicaset-controller | console-755cd4b745 | SuccessfulDelete | Deleted pod: console-755cd4b745-k4bj5
openshift-console | deployment-controller | console | ScalingReplicaSet | Scaled up replica set console-5575fcffc4 from 0 to 1
openshift-console | kubelet | console-5575fcffc4-cjbgc | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d0cc2dc261f075be17ea31eb148cce7fc0b11a4dc06add53d19e4f39df155ba0" already present on machine
openshift-console | kubelet | console-5575fcffc4-cjbgc | Created | Created container: console
openshift-console | kubelet | console-755cd4b745-k4bj5 | Killing | Stopping container console
openshift-console | multus | console-5575fcffc4-cjbgc | AddedInterface | Add eth0 [10.133.0.27/23] from ovn-kubernetes
openshift-console | deployment-controller | console | ScalingReplicaSet | Scaled down replica set console-74748b6745 from 1 to 0
openshift-console | replicaset-controller | console-74748b6745 | SuccessfulDelete | Deleted pod: console-74748b6745-5hk4w
openshift-console | kubelet | console-74748b6745-5hk4w | Killing | Stopping container console
kube-system | daemonset-controller | global-pull-secret-syncer | SuccessfulCreate | Created pod: global-pull-secret-syncer-76ngx
kube-system | kubelet | global-pull-secret-syncer-76ngx | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a86fd02d09596be124562146df9ab5bf33cd7cdfde29f701524b250a0e8beec0"
kube-system | multus | global-pull-secret-syncer-76ngx | AddedInterface | Add eth0 [10.134.0.18/23] from ovn-kubernetes
kube-system | kubelet | global-pull-secret-syncer-76ngx | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a86fd02d09596be124562146df9ab5bf33cd7cdfde29f701524b250a0e8beec0" in 4.003s (4.003s including waiting). Image size: 753864795 bytes.
kube-system | kubelet | global-pull-secret-syncer-76ngx | Created | Created container: global-pull-secret-syncer
kube-system | kubelet | global-pull-secret-syncer-76ngx | Started | Started container global-pull-secret-syncer
openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | | CreatedSCCRanges | created SCC ranges for cert-manager-operator namespace
default | operator-lifecycle-manager | cert-manager-operator | ResolutionFailed | error using catalogsource openshift-marketplace/redhat-operators: error encountered while listing bundles: rpc error: code = DeadlineExceeded desc = context deadline exceeded
openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | | CreatedSCCRanges | created SCC ranges for cert-manager namespace
kube-system | cert-manager-cainjector-68b757865b-2nd8r_6cbcb6dd-545d-4ba4-bdb3-3685b9676623 | cert-manager-cainjector-leader-election | LeaderElection | cert-manager-cainjector-68b757865b-2nd8r_6cbcb6dd-545d-4ba4-bdb3-3685b9676623 became leader
kube-system | cert-manager-leader-election | cert-manager-controller | LeaderElection | cert-manager-79c8d999ff-4zhq7-external-cert-manager-controller became leader
openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | | CreatedSCCRanges | created SCC ranges for openshift-jobset-operator namespace
openshift-jobset-operator | operator-lifecycle-manager | jobset-operator.v1.0.0 | RequirementsUnknown | requirements not yet checked
openshift-jobset-operator | operator-lifecycle-manager | jobset-operator.v1.0.0 | RequirementsNotMet | one or more requirements couldn't be found (x2)
openshift-jobset-operator | operator-lifecycle-manager | jobset-operator.v1.0.0 | InstallWaiting | installing: waiting for deployment jobset-operator to become ready: deployment "jobset-operator" not available: Deployment does not have minimum availability.
openshift-jobset-operator | deployment-controller | jobset-operator | ScalingReplicaSet | Scaled up replica set jobset-operator-747c5859c7 from 0 to 1
openshift-jobset-operator | replicaset-controller | jobset-operator-747c5859c7 | SuccessfulCreate | Created pod: jobset-operator-747c5859c7-jjsvm
openshift-jobset-operator | operator-lifecycle-manager | jobset-operator.v1.0.0 | AllRequirementsMet | all requirements found, attempting install (x2)
openshift-jobset-operator | operator-lifecycle-manager | jobset-operator.v1.0.0 | InstallSucceeded | waiting for install components to report healthy
openshift-jobset-operator | kubelet | jobset-operator-747c5859c7-jjsvm | Pulling | Pulling image "registry.redhat.io/job-set/jobset-rhel9-operator@sha256:2d4920bf64a24ebf9ee726b363b0db54c5d14ec37935770766458b09e4661ba0"
openshift-jobset-operator | multus | jobset-operator-747c5859c7-jjsvm | AddedInterface | Add eth0 [10.134.0.23/23] from ovn-kubernetes
openshift-jobset-operator | kubelet | jobset-operator-747c5859c7-jjsvm | Started | Started container jobset-operator
openshift-jobset-operator | kubelet | jobset-operator-747c5859c7-jjsvm | Created | Created container: jobset-operator
openshift-jobset-operator | kubelet | jobset-operator-747c5859c7-jjsvm | Pulled | Successfully pulled image "registry.redhat.io/job-set/jobset-rhel9-operator@sha256:2d4920bf64a24ebf9ee726b363b0db54c5d14ec37935770766458b09e4661ba0" in 1.846s (1.846s including waiting). Image size: 223169552 bytes.
openshift-jobset-operator | operator-lifecycle-manager | jobset-operator.v1.0.0 | InstallSucceeded | install strategy completed with no errors
openshift-jobset-operator | openshift-jobset-operator | openshift-jobset-operator-lock | LeaderElection | jobset-operator-747c5859c7-jjsvm_ebfb88d1-3618-4308-a51e-e5d0279982ea became leader
openshift-jobset-operator | openshift-jobset-operator | jobset-operator | CertificateCreated | Created Certificate.cert-manager.io/jobset-serving-cert -n openshift-jobset-operator because it was missing
openshift-jobset-operator | openshift-jobset-operator-jobsetoperatorstaticresources-jobsetoperatorstaticresources-staticresources | jobset-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/jobset-manager-rolebinding because it was missing
openshift-jobset-operator | cert-manager-certificaterequests-approver | jobset-serving-cert-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openshift-jobset-operator | openshift-jobset-operator-jobsetoperatorstaticresources-jobsetoperatorstaticresources-staticresources | jobset-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/jobset-metrics-reader-rolebinding because it was missing
openshift-jobset-operator | cert-manager-certificaterequests-issuer-acme | jobset-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openshift-jobset-operator | openshift-jobset-operator | jobset-operator | IssuerCreated | Created Issuer.cert-manager.io/jobset-selfsigned-issuer -n openshift-jobset-operator because it was missing
openshift-jobset-operator | openshift-jobset-operator | jobset-operator | ServiceCreated | Created Service/jobset-webhook-service -n openshift-jobset-operator because it was missing
openshift-jobset-operator | openshift-jobset-operator-jobsetoperatorstaticresources-jobsetoperatorstaticresources-staticresources | jobset-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/jobset-manager-role because it was missing
openshift-jobset-operator | openshift-jobset-operator | jobset-operator | MutatingWebhookConfigurationCreated | Created MutatingWebhookConfiguration.admissionregistration.k8s.io/jobset-mutating-webhook-configuration because it was missing
openshift-jobset-operator | cert-manager-certificaterequests-issuer-selfsigned | jobset-serving-cert-1 | CertificateIssued | Certificate fetched from issuer successfully
openshift-jobset-operator | cert-manager-certificaterequests-issuer-selfsigned | jobset-metrics-cert-1 | CertificateIssued | Certificate fetched from issuer successfully
openshift-jobset-operator | cert-manager-certificaterequests-approver | jobset-metrics-cert-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openshift-jobset-operator | cert-manager-certificaterequests-issuer-vault | jobset-metrics-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openshift-jobset-operator | cert-manager-certificates-trigger | jobset-serving-cert | Issuing | Issuing certificate as Secret does not exist
openshift-jobset-operator | cert-manager-certificates-key-manager | jobset-serving-cert | Generated | Stored new private key in temporary Secret resource "jobset-serving-cert-htpqp"
openshift-jobset-operator | cert-manager-certificaterequests-issuer-venafi | jobset-metrics-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openshift-jobset-operator | cert-manager-certificaterequests-issuer-acme | jobset-metrics-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openshift-jobset-operator

cert-manager-certificaterequests-issuer-ca

jobset-metrics-cert-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openshift-jobset-operator

openshift-jobset-operator

jobset-operator

CertificateCreated

Created Certificate.cert-manager.io/jobset-metrics-cert -n openshift-jobset-operator because it was missing

openshift-jobset-operator

cert-manager-certificaterequests-issuer-ca

jobset-serving-cert-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openshift-jobset-operator

cert-manager-certificaterequests-issuer-selfsigned

jobset-serving-cert-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openshift-jobset-operator

cert-manager-certificaterequests-issuer-vault

jobset-serving-cert-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openshift-jobset-operator

openshift-jobset-operator-jobsetoperatorstaticresources-jobsetoperatorstaticresources-staticresources

jobset-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/jobset-proxy-rolebinding because it was missing

openshift-jobset-operator

openshift-jobset-operator

jobset-operator

CustomResourceDefinitionCreated

Created CustomResourceDefinition.apiextensions.k8s.io/jobsets.jobset.x-k8s.io because it was missing

openshift-jobset-operator

openshift-jobset-operator-jobsetoperatorstaticresources-jobsetoperatorstaticresources-staticresources

jobset-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/jobset-leader-election-role -n openshift-jobset-operator because it was missing

openshift-jobset-operator

cert-manager-certificates-request-manager

jobset-serving-cert

Requested

Created new CertificateRequest resource "jobset-serving-cert-1"

openshift-jobset-operator

openshift-jobset-operator-jobsetoperatorstaticresources-jobsetoperatorstaticresources-staticresources

jobset-operator

ServiceAccountCreated

Created ServiceAccount/jobset-controller-manager -n openshift-jobset-operator because it was missing

openshift-jobset-operator

openshift-jobset-operator-jobsetoperatorstaticresources-jobsetoperatorstaticresources-staticresources

jobset-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/jobset-metrics-reader because it was missing

openshift-jobset-operator

openshift-jobset-operator-jobsetoperatorstaticresources-jobsetoperatorstaticresources-staticresources

jobset-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/jobset-proxy-role because it was missing

openshift-jobset-operator

cert-manager-certificaterequests-issuer-venafi

jobset-serving-cert-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openshift-jobset-operator

cert-manager-certificaterequests-issuer-selfsigned

jobset-metrics-cert-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openshift-jobset-operator

openshift-jobset-operator

jobset-operator

ValidatingWebhookConfigurationCreated

Created ValidatingWebhookConfiguration.admissionregistration.k8s.io/jobset-validating-webhook-configuration because it was missing

openshift-jobset-operator

cert-manager-certificates-issuing

jobset-serving-cert

Issuing

The certificate has been successfully issued

openshift-jobset-operator

cert-manager-certificates-trigger

jobset-metrics-cert

Issuing

Issuing certificate as Secret does not exist

openshift-jobset-operator

cert-manager-certificates-key-manager

jobset-metrics-cert

Generated

Stored new private key in temporary Secret resource "jobset-metrics-cert-b6cj6"

openshift-jobset-operator

cert-manager-certificates-request-manager

jobset-metrics-cert

Requested

Created new CertificateRequest resource "jobset-metrics-cert-1"

openshift-jobset-operator

cert-manager-certificates-issuing

jobset-metrics-cert

Issuing

The certificate has been successfully issued

openshift-jobset-operator

openshift-jobset-operator-jobsetoperatorstaticresources-jobsetoperatorstaticresources-staticresources

jobset-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/jobset-manager-secrets-role -n openshift-jobset-operator because it was missing

openshift-jobset-operator

openshift-jobset-operator

jobset-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/jobset-prometheus-k8s -n openshift-jobset-operator because it was missing

openshift-jobset-operator

openshift-jobset-operator

jobset-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/jobset-prometheus-k8s -n openshift-jobset-operator because it was missing

openshift-jobset-operator

openshift-jobset-operator-jobsetoperatorstaticresources-jobsetoperatorstaticresources-staticresources

jobset-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/jobset-leader-election-rolebinding -n openshift-jobset-operator because it was missing

openshift-jobset-operator

kubelet

jobset-controller-manager-5d86bd95b-82mcg

Pulling

Pulling image "registry.redhat.io/job-set/jobset-rhel9@sha256:8a0ce916ed17d4244f97ee967d341532365cbab4b4287639509dee914f50c8a1"

openshift-jobset-operator

multus

jobset-controller-manager-5d86bd95b-82mcg

AddedInterface

Add eth0 [10.134.0.24/23] from ovn-kubernetes

openshift-jobset-operator

openshift-jobset-operator

jobset-operator

ValidatingWebhookConfigurationUpdated

Updated ValidatingWebhookConfiguration.admissionregistration.k8s.io/jobset-validating-webhook-configuration because it changed

openshift-jobset-operator

openshift-jobset-operator

jobset-operator

DeploymentCreated

Created Deployment.apps/jobset-controller-manager -n openshift-jobset-operator because it was missing

openshift-jobset-operator

openshift-jobset-operator

jobset-operator

MutatingWebhookConfigurationUpdated

Updated MutatingWebhookConfiguration.admissionregistration.k8s.io/jobset-mutating-webhook-configuration because it changed

openshift-jobset-operator

replicaset-controller

jobset-controller-manager-5d86bd95b

SuccessfulCreate

Created pod: jobset-controller-manager-5d86bd95b-82mcg

openshift-jobset-operator

deployment-controller

jobset-controller-manager

ScalingReplicaSet

Scaled up replica set jobset-controller-manager-5d86bd95b from 0 to 1

openshift-jobset-operator

openshift-jobset-operator

jobset-operator

ServiceMonitorCreated

Created ServiceMonitor.monitoring.coreos.com/jobset-controller-manager-metrics-monitor -n openshift-jobset-operator because it was missing

openshift-jobset-operator

openshift-jobset-operator

jobset-operator

ConfigMapCreated

Created ConfigMap/jobset-manager-config -n openshift-jobset-operator because it was missing

openshift-jobset-operator

openshift-jobset-operator-jobsetoperatorstaticresources-jobsetoperatorstaticresources-staticresources

jobset-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/jobset-manager-secrets-rolebinding -n openshift-jobset-operator because it was missing

openshift-jobset-operator

openshift-jobset-operator-jobsetoperatorstaticresources-jobsetoperatorstaticresources-staticresources

jobset-operator

ServiceCreated

Created Service/jobset-controller-manager-metrics-service -n openshift-jobset-operator because it was missing

openshift-jobset-operator

kubelet

jobset-controller-manager-5d86bd95b-82mcg

Pulled

Successfully pulled image "registry.redhat.io/job-set/jobset-rhel9@sha256:8a0ce916ed17d4244f97ee967d341532365cbab4b4287639509dee914f50c8a1" in 4.011s (4.011s including waiting). Image size: 182033318 bytes.

openshift-jobset-operator

kubelet

jobset-controller-manager-5d86bd95b-82mcg

Created

Created container: manager

openshift-jobset-operator

jobset-controller-manager-5d86bd95b-82mcg_34adf681-6d31-4573-8b30-5c048fdf487a

6d4f6a47.jobset.x-k8s.io

LeaderElection

jobset-controller-manager-5d86bd95b-82mcg_34adf681-6d31-4573-8b30-5c048fdf487a became leader

openshift-jobset-operator

kubelet

jobset-controller-manager-5d86bd95b-82mcg

Started

Started container manager

openshift-kube-controller-manager

cluster-policy-controller-namespace-security-allocation-controller

CreatedSCCRanges

created SCC ranges for opendatahub namespace
(x5)

openshift-console-operator

console-operator-console-operator-consoleoperator

console-operator

DeploymentUpdated

Updated Deployment.apps/console -n openshift-console because it changed
(x4)

openshift-console-operator

console-operator-console-operator-consoleoperator

console-operator

ConfigMapUpdated

Updated ConfigMap/console-config -n openshift-console: cause by changes in data.console-config.yaml

openshift-console

deployment-controller

console

ScalingReplicaSet

Scaled up replica set console-9bc45bfd4 from 0 to 1
(x3)

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Progressing changed from False to True ("SyncLoopRefreshProgressing: changes made during sync updates, additional sync expected")
(x3)

openshift-console-operator

console-operator-console-operator-consoleoperator

console-operator

DeploymentUpdateFailed

Failed to update Deployment.apps/console -n openshift-console: Operation cannot be fulfilled on deployments.apps "console": the object has been modified; please apply your changes to the latest version and try again

openshift-console

replicaset-controller

console-9bc45bfd4

SuccessfulCreate

Created pod: console-9bc45bfd4-mxvcv
(x3)

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Degraded message changed from "All is well" to "DeploymentSyncDegraded: Operation cannot be fulfilled on deployments.apps \"console\": the object has been modified; please apply your changes to the latest version and try again",Progressing changed from True to False ("All is well")

openshift-console

multus

console-9bc45bfd4-mxvcv

AddedInterface

Add eth0 [10.133.0.29/23] from ovn-kubernetes

openshift-console

kubelet

console-9bc45bfd4-mxvcv

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d0cc2dc261f075be17ea31eb148cce7fc0b11a4dc06add53d19e4f39df155ba0" already present on machine

openshift-console

kubelet

console-9bc45bfd4-mxvcv

Created

Created container: console

openshift-console

kubelet

console-9bc45bfd4-mxvcv

Started

Started container console
(x3)

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Degraded message changed from "DeploymentSyncDegraded: Operation cannot be fulfilled on deployments.apps \"console\": the object has been modified; please apply your changes to the latest version and try again" to "All is well",Progressing changed from False to True ("SyncLoopRefreshProgressing: working toward version 4.20.19, 1 replicas available")
(x3)

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Progressing message changed from "SyncLoopRefreshProgressing: working toward version 4.20.19, 1 replicas available" to "SyncLoopRefreshProgressing: working toward version 4.20.19, 2 replicas available"
(x3)

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Progressing changed from True to False ("All is well")

openshift-console

replicaset-controller

console-5575fcffc4

SuccessfulDelete

Deleted pod: console-5575fcffc4-cjbgc

openshift-console

kubelet

console-5575fcffc4-cjbgc

Killing

Stopping container console

openshift-console

deployment-controller

console

ScalingReplicaSet

Scaled down replica set console-5575fcffc4 from 1 to 0

openshift-console

kubelet

console-5575fcffc4-cjbgc

Unhealthy

Readiness probe failed: Get "https://10.133.0.27:8443/health": dial tcp 10.133.0.27:8443: connect: connection refused

openshift-console

kubelet

console-5575fcffc4-cjbgc

ProbeError

Readiness probe error: Get "https://10.133.0.27:8443/health": dial tcp 10.133.0.27:8443: connect: connection refused body:

openshift-kube-controller-manager

cluster-policy-controller-namespace-security-allocation-controller

CreatedSCCRanges

created SCC ranges for rhai-e2e-progression-tdbgv namespace
(x2)

openshift-operator-lifecycle-manager

controllermanager

packageserver-pdb

NoPods

No matching pods found

openshift-kube-controller-manager

cluster-policy-controller-namespace-security-allocation-controller

CreatedSCCRanges

created SCC ranges for openshift-must-gather-sg7ld namespace

openshift-kube-controller-manager

cluster-policy-controller-namespace-security-allocation-controller

CreatedSCCRanges

created SCC ranges for openshift-must-gather-nrz24 namespace
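Each event in this dump is a (Namespace, Component, RelatedObject, Reason, Message) tuple, which lends itself to quick triage by Reason. A minimal Python sketch of that kind of tally follows; the pipe-delimited row encoding and the `tally_reasons` helper are illustrative assumptions, though the sample values are re-encoded from entries in this dump:

```python
from collections import Counter

# Sample event rows re-encoded as "Namespace | Component | RelatedObject | Reason | Message".
rows = [
    "openshift-jobset-operator | kubelet | jobset-controller-manager-5d86bd95b-82mcg | Pulled | Successfully pulled image ...",
    "openshift-console | kubelet | console-5575fcffc4-cjbgc | Unhealthy | Readiness probe failed ...",
    "openshift-console | kubelet | console-5575fcffc4-cjbgc | Killing | Stopping container console",
    "openshift-jobset-operator | cert-manager-certificaterequests-issuer-acme | jobset-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved",
]

def tally_reasons(lines):
    """Count events per Reason (the 4th pipe-delimited field)."""
    reasons = Counter()
    for line in lines:
        # maxsplit=4 keeps any "|" inside the Message field intact.
        fields = [f.strip() for f in line.split("|", 4)]
        if len(fields) == 5:  # skip malformed rows
            reasons[fields[3]] += 1
    return reasons

print(tally_reasons(rows))
```

Sorting such a tally (e.g. `tally_reasons(rows).most_common()`) surfaces noisy reasons like the repeated WaitingForApproval and OperatorStatusChanged events first.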