I0128 12:56:35.287474 1 profiler.go:21] Starting profiling endpoint at http://127.0.0.1:6060/debug/pprof/
I0128 12:56:35.287591 1 observer_polling.go:159] Starting file observer
I0128 12:56:35.548988 1 cmd.go:253] Using service-serving-cert provided certificates
I0128 12:56:35.705334 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is 26s.
I0128 12:56:35.705716 1 observer_polling.go:159] Starting file observer
I0128 12:56:38.483725 1 builder.go:304] openshift-cluster-etcd-operator version 4.19.0-202511260712.p2.g10416b8.assembly.stream.el9-10416b8-10416b858f836add882417cdf67314f9368f1f8b
I0128 12:56:44.979542 1 secure_serving.go:57] Forcing use of http/1.1 only
W0128 12:56:44.979616 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.
W0128 12:56:44.979636 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.
W0128 12:56:44.979677 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.
W0128 12:56:44.979693 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.
W0128 12:56:44.979710 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.
W0128 12:56:44.979725 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.
I0128 12:56:45.281607 1 secure_serving.go:213] Serving securely on [::]:8443
I0128 12:56:45.282401 1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
I0128 12:56:45.282465 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController
I0128 12:56:45.282511 1 dynamic_serving_content.go:135] "Starting controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key"
I0128 12:56:45.282632 1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
I0128 12:56:45.282852 1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
I0128 12:56:45.282909 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I0128 12:56:45.282940 1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I0128 12:56:45.282957 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0128 12:56:45.383965 1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0128 12:56:45.384022 1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I0128 12:56:45.385196 1 shared_informer.go:320] Caches are synced for RequestHeaderAuthRequestController
I0128 12:56:45.738126 1 leaderelection.go:257] attempting to acquire leader lease openshift-etcd-operator/openshift-cluster-etcd-operator-lock...
I0128 12:59:12.094542 1 leaderelection.go:271] successfully acquired lease openshift-etcd-operator/openshift-cluster-etcd-operator-lock
I0128 12:59:12.094725 1 event.go:377] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-etcd-operator", Name:"openshift-cluster-etcd-operator-lock", UID:"0686aef9-a9aa-4af1-85cb-399c450abc45", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"37668", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' etcd-operator-7cf5697694-b478l_adcc1340-48e6-40a0-8f8e-fc0b22fc74a1 became leader
I0128 12:59:12.104948 1 starter.go:190] recorded cluster versions: map[etcd:4.19.21 operator:4.19.21 raw-internal:4.19.21]
I0128 12:59:12.109673 1 simple_featuregate_reader.go:171] Starting feature-gate-detector
I0128 12:59:12.112706 1 event.go:377] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"78bf9b93-4c56-4dba-91d7-42dba1335a31", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'FeatureGatesInitialized' FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdditionalRoutingCapabilities", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CPMSMachineNamePrefix", "ChunkSizeMiB", "CloudDualStackNodeIPs", "ConsolePluginContentSecurityPolicy", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "GatewayAPI", "GatewayAPIController", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "MachineConfigNodes", "ManagedBootImages", "ManagedBootImagesAWS", "MetricsCollectionProfiles", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PinnedImages", "PrivateHostedZoneAWS", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SetEIPForNLBIngressController", "VSphereDriverConfiguration", "VSphereMultiVCenters", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AutomatedEtcdBackup", "BootcNodeManagement", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ClusterVersionOperatorConfiguration", "DNSNameResolver", "DualReplica", "DyanmicServiceEndpointIBMCloud", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "Example2", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GCPCustomAPIEndpoints", "HighlyAvailableArbiter", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "KMSEncryptionProvider", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MaxUnavailableStatefulSet", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NewOLMCatalogdAPIV1Metas", "NewOLMOwnSingleNamespace", "NewOLMPreflightPermissionChecks", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PlatformOperators", "ProcMountType", "SELinuxChangePolicy", "SELinuxMount", "ShortCertRotation", "SignatureStores", "SigstoreImageVerification", "SigstoreImageVerificationPKI", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereConfigurableMaxAllowedBlockVolumesPerNode", "VSphereHostVMGroupZonal", "VSphereMultiDisk", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}
I0128 12:59:12.112692 1 starter.go:476] FeatureGates initialized: enabled=[AWSEFSDriverVolumeMetrics AdditionalRoutingCapabilities AdminNetworkPolicy AlibabaPlatform AzureWorkloadIdentity BareMetalLoadBalancer BuildCSIVolumes CPMSMachineNamePrefix ChunkSizeMiB CloudDualStackNodeIPs ConsolePluginContentSecurityPolicy DisableKubeletCloudCredentialProviders GCPLabelsTags GatewayAPI GatewayAPIController HardwareSpeed IngressControllerLBSubnetsAWS KMSv1 MachineConfigNodes ManagedBootImages ManagedBootImagesAWS MetricsCollectionProfiles MultiArchInstallAWS MultiArchInstallGCP NetworkDiagnosticsConfig NetworkLiveMigration NetworkSegmentation NewOLM NodeDisruptionPolicy OnClusterBuild PersistentIPsForVirtualization PinnedImages PrivateHostedZoneAWS RouteAdvertisements RouteExternalCertificate ServiceAccountTokenNodeBinding SetEIPForNLBIngressController VSphereDriverConfiguration VSphereMultiVCenters ValidatingAdmissionPolicy] disabled=[AWSClusterHostedDNS AutomatedEtcdBackup BootcNodeManagement ClusterAPIInstall ClusterAPIInstallIBMCloud ClusterMonitoringConfig ClusterVersionOperatorConfiguration DNSNameResolver DualReplica DyanmicServiceEndpointIBMCloud DynamicResourceAllocation EtcdBackendQuota EventedPLEG Example Example2 ExternalOIDC ExternalOIDCWithUIDAndExtraClaimMappings GCPClusterHostedDNS GCPCustomAPIEndpoints HighlyAvailableArbiter ImageStreamImportMode IngressControllerDynamicConfigurationManager InsightsConfig InsightsConfigAPI InsightsOnDemandDataGather InsightsRuntimeExtractor KMSEncryptionProvider MachineAPIMigration MachineAPIOperatorDisableMachineHealthCheckController MachineAPIProviderOpenStack MaxUnavailableStatefulSet MinimumKubeletVersion MixedCPUsAllocation MultiArchInstallAzure NewOLMCatalogdAPIV1Metas NewOLMOwnSingleNamespace NewOLMPreflightPermissionChecks NodeSwap NutanixMultiSubnets OVNObservability OpenShiftPodSecurityAdmission PlatformOperators ProcMountType SELinuxChangePolicy SELinuxMount ShortCertRotation SignatureStores SigstoreImageVerification SigstoreImageVerificationPKI StreamingCollectionEncodingToJSON StreamingCollectionEncodingToProtobuf TranslateStreamCloseWebsocketRequests UpgradeStatus UserNamespacesPodSecurityStandards UserNamespacesSupport VSphereConfigurableMaxAllowedBlockVolumesPerNode VSphereHostVMGroupZonal VSphereMultiDisk VSphereMultiNetworks VolumeAttributesClass VolumeGroupSnapshot]
I0128 12:59:12.112743 1 starter.go:531] waiting for cluster version informer sync...
I0128 12:59:12.123623 1 starter.go:554] Detected available machine API, starting vertical scaling related controllers and informers...
I0128 12:59:12.123975 1 base_controller.go:76] Waiting for caches to sync for ClusterMemberRemovalController
I0128 12:59:12.123975 1 base_controller.go:76] Waiting for caches to sync for MachineDeletionHooksController
I0128 12:59:12.124079 1 base_controller.go:76] Waiting for caches to sync for MissingStaticPodController
I0128 12:59:12.124134 1 base_controller.go:76] Waiting for caches to sync for PruneController
I0128 12:59:12.124151 1 base_controller.go:76] Waiting for caches to sync for ScriptController
I0128 12:59:12.124175 1 base_controller.go:76] Waiting for caches to sync for DefragController
I0128 12:59:12.124181 1 base_controller.go:76] Waiting for caches to sync for FSyncController
I0128 12:59:12.124187 1 base_controller.go:82] Caches are synced for FSyncController
I0128 12:59:12.124196 1 base_controller.go:119] Starting #1 worker of FSyncController controller ...
I0128 12:59:12.124206 1 base_controller.go:76] Waiting for caches to sync for RevisionController
I0128 12:59:12.124231 1 base_controller.go:76] Waiting for caches to sync for etcd-InstallerState
I0128 12:59:12.124241 1 base_controller.go:76] Waiting for caches to sync for EtcdCertSignerController
I0128 12:59:12.124252 1 base_controller.go:76] Waiting for caches to sync for EtcdMembersController
I0128 12:59:12.124261 1 base_controller.go:82] Caches are synced for EtcdMembersController
I0128 12:59:12.124271 1 base_controller.go:119] Starting #1 worker of EtcdMembersController controller ...
I0128 12:59:12.124196 1 envvarcontroller.go:236] Starting EnvVarController
I0128 12:59:12.124299 1 base_controller.go:76] Waiting for caches to sync for EtcdCertCleanerController
I0128 12:59:12.124308 1 base_controller.go:82] Caches are synced for EtcdCertCleanerController
I0128 12:59:12.124312 1 base_controller.go:119] Starting #1 worker of EtcdCertCleanerController controller ...
I0128 12:59:12.124332 1 base_controller.go:76] Waiting for caches to sync for EtcdEndpointsController
I0128 12:59:12.124350 1 base_controller.go:76] Waiting for caches to sync for etcd
I0128 12:59:12.124375 1 base_controller.go:76] Waiting for caches to sync for ConfigObserver
I0128 12:59:12.124397 1 base_controller.go:76] Waiting for caches to sync for ClusterMemberController
I0128 12:59:12.124255 1 base_controller.go:76] Waiting for caches to sync for BootstrapTeardownController
I0128 12:59:12.124457 1 base_controller.go:76] Waiting for caches to sync for etcd-operator-UnsupportedConfigOverrides
I0128 12:59:12.124475 1 base_controller.go:76] Waiting for caches to sync for LoggingSyncer
I0128 12:59:12.124492 1 base_controller.go:76] Waiting for caches to sync for GuardController
I0128 12:59:12.124093 1 base_controller.go:76] Waiting for caches to sync for etcd-UnsupportedConfigOverrides
I0128 12:59:12.124127 1 base_controller.go:76] Waiting for caches to sync for etcd-StaticPodState
E0128 12:59:12.124588 1 base_controller.go:279] "Unhandled Error" err="EtcdMembersController reconciliation failed: getting cache client could not retrieve endpoints: node lister not synced"
I0128 12:59:12.124632 1 base_controller.go:76] Waiting for caches to sync for StatusSyncer_etcd
I0128 12:59:12.124723 1 base_controller.go:76] Waiting for caches to sync for Installer
I0128 12:59:12.124445 1 base_controller.go:76] Waiting for caches to sync for BackingResourceController-StaticResources
I0128 12:59:12.124176 1 base_controller.go:76] Waiting for caches to sync for etcd-Node
I0128 12:59:12.124232 1 base_controller.go:76] Waiting for caches to sync for TargetConfigController
I0128 12:59:12.124844 1 base_controller.go:76] Waiting for caches to sync for EtcdStaticResources-StaticResources
I0128 12:59:12.124866 1 event.go:377] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"78bf9b93-4c56-4dba-91d7-42dba1335a31", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ReportEtcdMembersErrorUpdatingStatus' etcds.operator.openshift.io "cluster" not found
E0128 12:59:12.130920 1 base_controller.go:279] "Unhandled Error" err="EtcdMembersController reconciliation failed: getting cache client could not retrieve endpoints: node lister not synced"
I0128 12:59:12.130945 1 event.go:377] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"78bf9b93-4c56-4dba-91d7-42dba1335a31", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ReportEtcdMembersErrorUpdatingStatus' etcds.operator.openshift.io "cluster" not found
E0128 12:59:12.160972 1 base_controller.go:279] "Unhandled Error" err="EtcdMembersController reconciliation failed: getting cache client could not retrieve endpoints: node lister not synced"
E0128 12:59:12.199572 1 base_controller.go:279] "Unhandled Error" err="EtcdMembersController reconciliation failed: getting cache client could not retrieve endpoints: configmaps lister not synced"
I0128 12:59:12.224822 1 base_controller.go:82] Caches are synced for ScriptController
I0128 12:59:12.224837 1 base_controller.go:119] Starting #1 worker of ScriptController controller ...
I0128 12:59:12.224858 1 base_controller.go:82] Caches are synced for etcd-operator-UnsupportedConfigOverrides
I0128 12:59:12.224890 1 base_controller.go:119] Starting #1 worker of etcd-operator-UnsupportedConfigOverrides controller ...
I0128 12:59:12.224828 1 base_controller.go:82] Caches are synced for DefragController
I0128 12:59:12.224938 1 base_controller.go:119] Starting #1 worker of DefragController controller ...
I0128 12:59:12.224896 1 base_controller.go:82] Caches are synced for LoggingSyncer
I0128 12:59:12.225006 1 base_controller.go:119] Starting #1 worker of LoggingSyncer controller ...
E0128 12:59:12.225208 1 base_controller.go:279] "Unhandled Error" err="DefragController reconciliation failed: getting cache client could not retrieve endpoints: configmaps lister not synced"
I0128 12:59:12.224912 1 base_controller.go:82] Caches are synced for StatusSyncer_etcd
I0128 12:59:12.225274 1 base_controller.go:119] Starting #1 worker of StatusSyncer_etcd controller ...
I0128 12:59:12.224900 1 base_controller.go:82] Caches are synced for etcd-UnsupportedConfigOverrides
I0128 12:59:12.225326 1 base_controller.go:119] Starting #1 worker of etcd-UnsupportedConfigOverrides controller ...
I0128 12:59:12.225243 1 base_controller.go:82] Caches are synced for etcd-Node
I0128 12:59:12.225977 1 base_controller.go:119] Starting #1 worker of etcd-Node controller ...
I0128 12:59:12.226275 1 status_controller.go:229] clusteroperator/etcd diff {"status":{"conditions":[{"lastTransitionTime":"2026-01-28T09:58:23Z","message":"EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: configmaps lister not synced\nNodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2026-01-28T10:13:59Z","message":"NodeInstallerProgressing: 3 nodes are at revision 8\nEtcdMembersProgressing: No unstarted etcd members found","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2026-01-28T09:56:51Z","message":"StaticPodsAvailable: 3 nodes are active; 3 nodes are at revision 8\nEtcdMembersAvailable: 3 members are available","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2026-01-28T09:53:21Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2026-01-28T09:53:21Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}}
E0128 12:59:12.231669 1 base_controller.go:279] "Unhandled Error" err="DefragController reconciliation failed: getting cache client could not retrieve endpoints: configmaps lister not synced"
I0128 12:59:12.237015 1 event.go:377] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"78bf9b93-4c56-4dba-91d7-42dba1335a31", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/etcd changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found" to "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: configmaps lister not synced\nNodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found"
I0128 12:59:12.237598 1 status_controller.go:229] clusteroperator/etcd diff {"status":{"conditions":[{"lastTransitionTime":"2026-01-28T09:58:23Z","message":"EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: configmaps lister not synced\nNodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2026-01-28T10:13:59Z","message":"NodeInstallerProgressing: 3 nodes are at revision 8\nEtcdMembersProgressing: No unstarted etcd members found","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2026-01-28T09:56:51Z","message":"StaticPodsAvailable: 3 nodes are active; 3 nodes are at revision 8\nEtcdMembersAvailable: 3 members are available","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2026-01-28T09:53:21Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2026-01-28T09:53:21Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}}
E0128 12:59:12.241111 1 base_controller.go:279] "Unhandled Error" err="EtcdMembersController reconciliation failed: getting cache client could not retrieve endpoints: configmaps lister not synced"
E0128 12:59:12.244025 1 base_controller.go:279] "Unhandled Error" err="DefragController reconciliation failed: getting cache client could not retrieve endpoints: configmaps lister not synced"
E0128 12:59:12.246237 1 base_controller.go:279] "Unhandled Error" err="StatusSyncer_etcd reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io \"etcd\": the object has been modified; please apply your changes to the latest version and try again"
E0128 12:59:12.248829 1 base_controller.go:279] "Unhandled Error" err="ScriptController reconciliation failed: \"configmap/etcd-pod\": missing env var values"
E0128 12:59:12.249810 1 base_controller.go:279] "Unhandled Error" err="DefragController reconciliation failed: getting cache client could not retrieve endpoints: configmaps lister not synced"
E0128 12:59:12.250321 1 base_controller.go:279] "Unhandled Error" err="ScriptController reconciliation failed: \"configmap/etcd-pod\": missing env var values"
I0128 12:59:12.250330 1 status_controller.go:229] clusteroperator/etcd diff {"status":{"conditions":[{"lastTransitionTime":"2026-01-28T09:58:23Z","message":"EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: configmaps lister not synced\nNodeControllerDegraded: All master nodes are ready\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nEtcdMembersDegraded: No unhealthy members found","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2026-01-28T10:13:59Z","message":"NodeInstallerProgressing: 3 nodes are at revision 8\nEtcdMembersProgressing: No unstarted etcd members found","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2026-01-28T09:56:51Z","message":"StaticPodsAvailable: 3 nodes are active; 3 nodes are at revision 8\nEtcdMembersAvailable: 3 members are available","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2026-01-28T09:53:21Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2026-01-28T09:53:21Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}}
E0128 12:59:12.255681 1 base_controller.go:279] "Unhandled Error" err="ScriptController reconciliation failed: \"configmap/etcd-pod\": missing env var values"
E0128 12:59:12.266065 1 base_controller.go:279] "Unhandled Error" err="DefragController reconciliation failed: getting cache client could not retrieve endpoints: configmaps lister not synced"
E0128 12:59:12.277611 1 base_controller.go:279] "Unhandled Error" err="ScriptController reconciliation failed: \"configmap/etcd-pod\": missing env var values"
I0128 12:59:12.306827 1 event.go:377] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"78bf9b93-4c56-4dba-91d7-42dba1335a31", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/etcd changed: Degraded message changed from "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: configmaps lister not synced\nNodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found" to "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: configmaps lister not synced\nNodeControllerDegraded: All master nodes are ready\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nEtcdMembersDegraded: No unhealthy members found"
I0128 12:59:12.306852 1 status_controller.go:229] clusteroperator/etcd diff {"status":{"conditions":[{"lastTransitionTime":"2026-01-28T09:58:23Z","message":"EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: configmaps lister not synced\nNodeControllerDegraded: All master nodes are ready\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nEtcdMembersDegraded: No unhealthy members found","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2026-01-28T10:13:59Z","message":"NodeInstallerProgressing: 3 nodes are at revision 8\nEtcdMembersProgressing: No unstarted etcd members found","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2026-01-28T09:56:51Z","message":"StaticPodsAvailable: 3 nodes are active; 3 nodes are at revision 8\nEtcdMembersAvailable: 3 members are available","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2026-01-28T09:53:21Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2026-01-28T09:53:21Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}}
E0128 12:59:12.319131 1 base_controller.go:279] "Unhandled Error" err="ScriptController reconciliation failed: \"configmap/etcd-pod\": missing env var values"
E0128 12:59:12.323598 1 base_controller.go:279] "Unhandled Error" err="EtcdMembersController reconciliation failed: getting cache client could not retrieve endpoints: configmaps lister not synced"
I0128 12:59:12.328530 1 reflector.go:376] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.32.1/tools/cache/reflector.go:251
I0128 12:59:12.348089 1 etcdcli_pool.go:70] creating a new cached client
I0128 12:59:12.362216 1 etcdcli_pool.go:70] creating a new cached client
I0128 12:59:12.362239 1 etcdcli_pool.go:70] creating a new cached client
I0128 12:59:12.362668 1 etcdcli_pool.go:70] creating a new cached client
E0128 12:59:12.401122 1 base_controller.go:279] "Unhandled Error" err="ScriptController reconciliation failed: \"configmap/etcd-pod\": missing env var values"
I0128 12:59:12.414072 1 defragcontroller.go:302] etcd member "ip-10-0-49-9.ec2.internal" backend store fragmented: 0.06 %, dbSize: 72654848
I0128 12:59:12.414087 1 defragcontroller.go:302] etcd member "ip-10-0-66-141.ec2.internal" backend store fragmented: 33.64 %, dbSize: 112267264
I0128 12:59:12.414092 1 defragcontroller.go:302] etcd member "ip-10-0-21-80.ec2.internal" backend store fragmented: 33.82 %, dbSize: 112631808
I0128 12:59:12.425094 1 envvarcontroller.go:242] caches synced
I0128 12:59:12.425131 1 base_controller.go:82] Caches are synced for RevisionController
I0128 12:59:12.425142 1 base_controller.go:82] Caches are synced for PruneController
I0128 12:59:12.425145 1 base_controller.go:119] Starting #1 worker of RevisionController controller ...
I0128 12:59:12.425151 1 base_controller.go:119] Starting #1 worker of PruneController controller ...
I0128 12:59:12.425192 1 base_controller.go:82] Caches are synced for TargetConfigController
I0128 12:59:12.425215 1 base_controller.go:119] Starting #1 worker of TargetConfigController controller ...
E0128 12:59:12.425275 1 base_controller.go:279] "Unhandled Error" err="TargetConfigController reconciliation failed: TargetConfigController missing env var values"
E0128 12:59:12.501031 1 base_controller.go:279] "Unhandled Error" err="StatusSyncer_etcd reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io \"etcd\": the object has been modified; please apply your changes to the latest version and try again"
I0128 12:59:12.504212 1 status_controller.go:229] clusteroperator/etcd diff {"status":{"conditions":[{"lastTransitionTime":"2026-01-28T09:58:23Z","message":"NodeControllerDegraded: All master nodes are ready\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nEtcdMembersDegraded: No unhealthy members found","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2026-01-28T10:13:59Z","message":"NodeInstallerProgressing: 3 nodes are at revision 8\nEtcdMembersProgressing: No unstarted etcd members found","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2026-01-28T09:56:51Z","message":"StaticPodsAvailable: 3 nodes are active; 3 nodes are at revision 8\nEtcdMembersAvailable: 3 members are available","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2026-01-28T09:53:21Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2026-01-28T09:53:21Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}}
I0128 12:59:12.527029 1 reflector.go:376] Caches populated for *v1.Namespace from k8s.io/client-go@v0.32.1/tools/cache/reflector.go:251
I0128 12:59:12.531260 1 defragcontroller.go:302] etcd member "ip-10-0-49-9.ec2.internal" backend store fragmented: 0.05 %, dbSize: 72654848
I0128 12:59:12.531273 1 defragcontroller.go:302] etcd member "ip-10-0-66-141.ec2.internal" backend store fragmented: 33.64 %, dbSize: 112267264
I0128 12:59:12.531277 1 defragcontroller.go:302] etcd member "ip-10-0-21-80.ec2.internal" backend store fragmented: 33.82 %, dbSize: 112631808
I0128 12:59:12.704870 1 status_controller.go:229] clusteroperator/etcd diff {"status":{"conditions":[{"lastTransitionTime":"2026-01-28T09:58:23Z","message":"NodeControllerDegraded: All master nodes are ready\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nEtcdMembersDegraded: No unhealthy members found","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2026-01-28T10:13:59Z","message":"NodeInstallerProgressing: 3 nodes are at revision 8\nEtcdMembersProgressing: No unstarted etcd members found","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2026-01-28T09:56:51Z","message":"StaticPodsAvailable: 3 nodes are active; 3 nodes are at revision 8\nEtcdMembersAvailable: 3 members are available","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2026-01-28T09:53:21Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2026-01-28T09:53:21Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}}
I0128 12:59:12.705076 1 event.go:377] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"78bf9b93-4c56-4dba-91d7-42dba1335a31", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/etcd changed: Degraded message changed from "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: configmaps lister not synced\nNodeControllerDegraded: All master nodes are ready\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nEtcdMembersDegraded: No unhealthy members found" to "NodeControllerDegraded: All master nodes are ready\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nEtcdMembersDegraded: No unhealthy members found"
I0128 12:59:12.727052 1 reflector.go:376] Caches populated for *v1.Secret from k8s.io/client-go@v0.32.1/tools/cache/reflector.go:251
E0128 12:59:12.902649 1 base_controller.go:279] "Unhandled Error" err="StatusSyncer_etcd reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io \"etcd\": the object has been modified; please apply your changes to the latest version and try again"
I0128 12:59:12.926919 1 reflector.go:376] Caches populated for *v1.Service from k8s.io/client-go@v0.32.1/tools/cache/reflector.go:251
I0128 12:59:13.129487 1 reflector.go:376] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.32.1/tools/cache/reflector.go:251
I0128 12:59:13.224086 1 base_controller.go:82] Caches are synced for MachineDeletionHooksController
I0128 12:59:13.224088 1 base_controller.go:82] Caches are synced for ClusterMemberRemovalController
I0128 12:59:13.224110 1 base_controller.go:119] Starting #1 worker of MachineDeletionHooksController controller ...
I0128 12:59:13.224113 1 base_controller.go:119] Starting #1 worker of ClusterMemberRemovalController controller ...
I0128 12:59:13.224285 1 base_controller.go:82] Caches are synced for EtcdCertSignerController
I0128 12:59:13.224297 1 base_controller.go:119] Starting #1 worker of EtcdCertSignerController controller ...
I0128 12:59:13.224376 1 base_controller.go:82] Caches are synced for EtcdEndpointsController
I0128 12:59:13.224420 1 base_controller.go:119] Starting #1 worker of EtcdEndpointsController controller ...
I0128 12:59:13.224522 1 base_controller.go:82] Caches are synced for BootstrapTeardownController
I0128 12:59:13.224532 1 base_controller.go:119] Starting #1 worker of BootstrapTeardownController controller ...
I0128 12:59:13.324449 1 request.go:729] Waited for 1.198827696s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/api/v1/namespaces/kube-system/secrets?limit=500&resourceVersion=0
I0128 12:59:13.328216 1 reflector.go:376] Caches populated for *v1.Secret from k8s.io/client-go@v0.32.1/tools/cache/reflector.go:251
I0128 12:59:13.424918 1 base_controller.go:82] Caches are synced for etcd
I0128 12:59:13.424954 1 base_controller.go:119] Starting #1 worker of etcd controller ...
I0128 12:59:13.535638 1 reflector.go:376] Caches populated for *v1.Pod from k8s.io/client-go@v0.32.1/tools/cache/reflector.go:251
I0128 12:59:13.624273 1 base_controller.go:82] Caches are synced for MissingStaticPodController
I0128 12:59:13.624291 1 base_controller.go:119] Starting #1 worker of MissingStaticPodController controller ...
I0128 12:59:13.624522 1 base_controller.go:82] Caches are synced for GuardController
I0128 12:59:13.624550 1 base_controller.go:82] Caches are synced for etcd-InstallerState
I0128 12:59:13.624610 1 base_controller.go:119] Starting #1 worker of etcd-InstallerState controller ...
I0128 12:59:13.624556 1 base_controller.go:119] Starting #1 worker of GuardController controller ...
I0128 12:59:13.624536 1 base_controller.go:82] Caches are synced for ClusterMemberController
I0128 12:59:13.624769 1 base_controller.go:119] Starting #1 worker of ClusterMemberController controller ...
I0128 12:59:13.624573 1 base_controller.go:82] Caches are synced for ConfigObserver
I0128 12:59:13.624868 1 base_controller.go:119] Starting #1 worker of ConfigObserver controller ...
I0128 12:59:13.624585 1 base_controller.go:82] Caches are synced for etcd-StaticPodState
I0128 12:59:13.625040 1 base_controller.go:119] Starting #1 worker of etcd-StaticPodState controller ...
I0128 12:59:13.625746 1 base_controller.go:82] Caches are synced for Installer
I0128 12:59:13.625762 1 base_controller.go:119] Starting #1 worker of Installer controller ...
I0128 12:59:13.727097 1 reflector.go:376] Caches populated for *v1.ServiceAccount from k8s.io/client-go@v0.32.1/tools/cache/reflector.go:251
I0128 12:59:13.825817 1 base_controller.go:82] Caches are synced for EtcdStaticResources-StaticResources
I0128 12:59:13.825894 1 base_controller.go:119] Starting #1 worker of EtcdStaticResources-StaticResources controller ...
I0128 12:59:13.825836 1 base_controller.go:82] Caches are synced for BackingResourceController-StaticResources
I0128 12:59:13.825938 1 base_controller.go:119] Starting #1 worker of BackingResourceController-StaticResources controller ...
I0128 12:59:13.947577 1 status_controller.go:229] clusteroperator/etcd diff {"status":{"conditions":[{"lastTransitionTime":"2026-01-28T09:58:23Z","message":"NodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2026-01-28T10:13:59Z","message":"NodeInstallerProgressing: 3 nodes are at revision 8\nEtcdMembersProgressing: No unstarted etcd members found","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2026-01-28T09:56:51Z","message":"StaticPodsAvailable: 3 nodes are active; 3 nodes are at revision 8\nEtcdMembersAvailable: 3 members are available","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2026-01-28T09:53:21Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2026-01-28T09:53:21Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}}
I0128 12:59:13.959891 1 etcdcli_pool.go:70] creating a new cached client
I0128 12:59:13.960719 1 event.go:377] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"78bf9b93-4c56-4dba-91d7-42dba1335a31", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/etcd changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nEtcdMembersDegraded: No unhealthy members found" to "NodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found"
I0128 12:59:13.987294 1 defragcontroller.go:302] etcd member "ip-10-0-49-9.ec2.internal" backend store fragmented: 0.01 %, dbSize: 72716288
I0128 12:59:13.987341 1 defragcontroller.go:302] etcd member "ip-10-0-66-141.ec2.internal" backend store fragmented: 33.55 %, dbSize: 112267264
I0128 12:59:13.987351 1 defragcontroller.go:302] etcd member "ip-10-0-21-80.ec2.internal" backend store fragmented: 33.71 %, dbSize: 112631808
I0128 12:59:14.325055 1 request.go:729] Waited for 1.897420305s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-pod
I0128 12:59:15.524921 1 request.go:729] Waited for 1.898783389s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/pods/etcd-ip-10-0-21-80.ec2.internal
I0128 12:59:16.724920 1 request.go:729] Waited for 2.196820937s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/secrets/etcd-all-certs
I0128 12:59:17.925109 1 request.go:729] Waited for 1.990771007s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/installer-sa
I0128 12:59:19.124373 1 request.go:729] Waited for 1.7952326s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/pods/etcd-ip-10-0-66-141.ec2.internal
I0128 12:59:20.124605 1 request.go:729] Waited for 1.39650068s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/secrets/etcd-signer
I0128 12:59:21.124860 1 request.go:729] Waited for 1.394491601s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/pods/revision-pruner-8-ip-10-0-49-9.ec2.internal
I0128 12:59:22.324698 1 request.go:729] Waited for 1.176807823s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/pods/revision-pruner-8-ip-10-0-66-141.ec2.internal
I0128 12:59:23.324880 1 request.go:729] Waited for 1.18860179s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/etcd-sa
I0128 13:00:06.126394 1 request.go:729] Waited for 1.164079758s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/pods/etcd-ip-10-0-21-80.ec2.internal
I0128 13:00:07.325267 1 request.go:729] Waited for 1.192658454s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/pods/etcd-ip-10-0-49-9.ec2.internal
I0128 13:00:07.765658 1 status_controller.go:229] clusteroperator/etcd diff {"status":{"conditions":[{"lastTransitionTime":"2026-01-28T09:58:23Z","message":"NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/etcd-ip-10-0-21-80.ec2.internal container \"etcd\" started at 2026-01-28 12:55:20 +0000 UTC is still not ready\nEtcdMembersDegraded: No unhealthy members found","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2026-01-28T10:13:59Z","message":"NodeInstallerProgressing: 3 nodes are at revision 8\nEtcdMembersProgressing: No unstarted etcd members found","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2026-01-28T09:56:51Z","message":"StaticPodsAvailable: 3 nodes are active; 3 nodes are at revision 8\nEtcdMembersAvailable: 3 members are available","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2026-01-28T09:53:21Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2026-01-28T09:53:21Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}}
I0128 13:00:07.776396 1 event.go:377] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"78bf9b93-4c56-4dba-91d7-42dba1335a31", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/etcd changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found" to "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/etcd-ip-10-0-21-80.ec2.internal container \"etcd\" started at 2026-01-28 12:55:20 +0000 UTC is still not ready\nEtcdMembersDegraded: No unhealthy members found"
I0128 13:00:07.827420 1 defragcontroller.go:302] etcd member "ip-10-0-66-141.ec2.internal" backend store fragmented: 15.23 %, dbSize: 112267264
I0128 13:00:07.827436 1 defragcontroller.go:302] etcd member "ip-10-0-21-80.ec2.internal" backend store fragmented: 16.26 %, dbSize: 113721344
I0128 13:00:08.524866 1 request.go:729] Waited for 1.194825035s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/pods/etcd-ip-10-0-66-141.ec2.internal
I0128 13:00:09.525034 1 request.go:729] Waited for 1.761064828s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/pods/revision-pruner-8-ip-10-0-21-80.ec2.internal
I0128 13:00:10.525164 1 request.go:729] Waited for 1.976820902s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/pods/etcd-ip-10-0-21-80.ec2.internal
I0128 13:00:11.724905 1 request.go:729] Waited for 1.386865961s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-pod
I0128 13:00:12.924921 1 request.go:729] Waited for 1.396244871s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/pods/etcd-guard-ip-10-0-66-141.ec2.internal
I0128 13:00:14.124688 1 request.go:729] Waited for 1.789683106s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/pods/etcd-ip-10-0-21-80.ec2.internal
I0128 13:00:15.324814 1 request.go:729] Waited for 1.794961391s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller
I0128 13:00:16.324935 1 request.go:729] Waited for 1.795926526s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/pods/etcd-guard-ip-10-0-21-80.ec2.internal
I0128 13:00:17.525426 1 request.go:729] Waited for 1.59477023s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/pods/etcd-ip-10-0-66-141.ec2.internal
W0128 13:00:17.534517 1 dynamic_operator_client.go:352] .status.conditions["StaticPodsDegraded"].reason is missing; this will eventually be fatal
W0128 13:00:17.534593 1 dynamic_operator_client.go:355] .status.conditions["StaticPodsDegraded"].message is missing; this will eventually be fatal
I0128 13:00:17.570875 1 status_controller.go:229] clusteroperator/etcd diff {"status":{"conditions":[{"lastTransitionTime":"2026-01-28T09:58:23Z","message":"NodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2026-01-28T10:13:59Z","message":"NodeInstallerProgressing: 3 nodes are at revision 8\nEtcdMembersProgressing: No unstarted etcd members found","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2026-01-28T09:56:51Z","message":"StaticPodsAvailable: 3 nodes are active; 3 nodes are at revision 8\nEtcdMembersAvailable: 3 members are available","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2026-01-28T09:53:21Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2026-01-28T09:53:21Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}}
I0128 13:00:17.594185 1 event.go:377] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"78bf9b93-4c56-4dba-91d7-42dba1335a31", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/etcd changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/etcd-ip-10-0-21-80.ec2.internal container \"etcd\" started at 2026-01-28 12:55:20 +0000 UTC is still not ready\nEtcdMembersDegraded: No unhealthy members found" to "NodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found"
I0128 13:00:17.674670 1 defragcontroller.go:302] etcd member "ip-10-0-66-141.ec2.internal" backend store fragmented: 12.47 %, dbSize: 112267264
I0128 13:00:17.674731 1 defragcontroller.go:302] etcd member "ip-10-0-21-80.ec2.internal" backend store fragmented: 13.75 %, dbSize: 113721344
I0128 13:00:18.724417 1 request.go:729] Waited for 1.380727298s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/installer-sa
I0128 13:00:19.924809 1 request.go:729] Waited for 1.995484416s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/pods/etcd-guard-ip-10-0-49-9.ec2.internal
I0128 13:00:20.924952 1 request.go:729] Waited for 1.895960096s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/pods/etcd-ip-10-0-49-9.ec2.internal
I0128 13:00:21.926022 1 request.go:729] Waited for 1.578172092s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/pods/etcd-ip-10-0-49-9.ec2.internal
I0128 13:00:23.125167 1 request.go:729] Waited for 1.395125262s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/configmaps/restore-etcd-pod
I0128 13:00:24.326540 1 request.go:729] Waited for 1.389066157s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/pods/etcd-guard-ip-10-0-66-141.ec2.internal
I0128 13:00:25.525304 1 request.go:729] Waited for 1.391211641s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/services/etcd
I0128 13:00:26.724256 1 request.go:729] Waited for 1.391453092s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/installer-sa
I0128 13:00:53.056747 1 etcdcli_pool.go:70] creating a new cached client
I0128 13:01:14.724721 1 request.go:729] Waited for 1.09458156s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller
I0128 13:01:22.129031 1 request.go:729] Waited for 1.039714548s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/pods/etcd-ip-10-0-21-80.ec2.internal
I0128 13:01:23.338516 1 request.go:729] Waited for 1.264314944s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-etcd
I0128 13:01:24.526301 1 request.go:729] Waited for 1.588987043s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/pods/etcd-ip-10-0-66-141.ec2.internal
I0128 13:01:24.601984 1 status_controller.go:229] clusteroperator/etcd diff {"status":{"conditions":[{"lastTransitionTime":"2026-01-28T09:58:23Z","message":"NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/etcd-ip-10-0-49-9.ec2.internal container \"etcd\" started at 2026-01-28 12:55:51 +0000 UTC is still not ready\nEtcdMembersDegraded: No unhealthy members found","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2026-01-28T10:13:59Z","message":"NodeInstallerProgressing: 3 nodes are at revision 8\nEtcdMembersProgressing: No unstarted etcd members found","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2026-01-28T09:56:51Z","message":"StaticPodsAvailable: 3 nodes are active; 3 nodes are at revision 8\nEtcdMembersAvailable: 3 members are available","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2026-01-28T09:53:21Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2026-01-28T09:53:21Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}}
I0128 13:01:24.630747 1 event.go:377] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"78bf9b93-4c56-4dba-91d7-42dba1335a31", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/etcd changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found" to "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/etcd-ip-10-0-49-9.ec2.internal container \"etcd\" started at 2026-01-28 12:55:51 +0000 UTC is still not ready\nEtcdMembersDegraded: No unhealthy members found"
I0128 13:01:24.650659 1 etcdcli_pool.go:70] creating a new cached client
I0128 13:01:24.652471 1 etcdcli_pool.go:70] creating a new cached client
I0128 13:01:24.840186 1 defragcontroller.go:302] etcd member "ip-10-0-49-9.ec2.internal" backend store fragmented: 0.05 %, dbSize: 197836800
I0128 13:01:24.840263 1 defragcontroller.go:302] etcd member "ip-10-0-21-80.ec2.internal" backend store fragmented: 0.02 %, dbSize: 199929856
I0128 13:01:25.534429 1 request.go:729] Waited for 1.557373184s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/pods/etcd-guard-ip-10-0-66-141.ec2.internal
I0128 13:01:26.538517 1 request.go:729] Waited for 1.908852913s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/pods/etcd-ip-10-0-21-80.ec2.internal
I0128 13:01:27.726123 1 request.go:729] Waited for 2.177800988s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/pods/etcd-guard-ip-10-0-21-80.ec2.internal
I0128 13:01:28.926584 1 request.go:729] Waited for 1.593917629s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/pods/etcd-ip-10-0-49-9.ec2.internal
W0128 13:01:29.730825 1 dynamic_operator_client.go:352] .status.conditions["StaticPodsDegraded"].reason is missing; this will eventually be fatal
W0128 13:01:29.730845 1 dynamic_operator_client.go:355] .status.conditions["StaticPodsDegraded"].message is missing; this will eventually be fatal
I0128 13:01:29.772906 1 status_controller.go:229] clusteroperator/etcd diff {"status":{"conditions":[{"lastTransitionTime":"2026-01-28T09:58:23Z","message":"NodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2026-01-28T10:13:59Z","message":"NodeInstallerProgressing: 3 nodes are at revision 8\nEtcdMembersProgressing: No unstarted etcd members found","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2026-01-28T09:56:51Z","message":"StaticPodsAvailable: 3 nodes are active; 3 nodes are at revision 8\nEtcdMembersAvailable: 3 members are available","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2026-01-28T09:53:21Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2026-01-28T09:53:21Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}}
I0128 13:01:29.812978 1 event.go:377] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"78bf9b93-4c56-4dba-91d7-42dba1335a31", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/etcd changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/etcd-ip-10-0-49-9.ec2.internal container \"etcd\" started at 2026-01-28 12:55:51 +0000 UTC is still not ready\nEtcdMembersDegraded: No unhealthy members found" to "NodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found"
I0128 13:01:29.923030 1 defragcontroller.go:302] etcd member "ip-10-0-49-9.ec2.internal" backend store fragmented: 0.04 %, dbSize: 206315520
I0128 13:01:29.923047 1 defragcontroller.go:302] etcd member "ip-10-0-66-141.ec2.internal" backend store fragmented: 0.03 %, dbSize: 208269312
I0128 13:01:29.923053 1 defragcontroller.go:302] etcd member "ip-10-0-21-80.ec2.internal" backend store fragmented: 0.01 %, dbSize: 208175104
I0128 13:01:30.124229 1 request.go:729] Waited for 1.365998205s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-etcd
I0128 13:01:31.126533 1 request.go:729] Waited for 1.368346297s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/installer-sa
I0128 13:01:32.324701 1 request.go:729] Waited for 1.968391822s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/pods/etcd-ip-10-0-21-80.ec2.internal
I0128 13:01:33.527848 1 request.go:729] Waited for 1.59850741s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/configmaps/restore-etcd-pod
I0128 13:01:34.724829 1 request.go:729] Waited for 1.395105621s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/pods/revision-pruner-8-ip-10-0-66-141.ec2.internal
I0128 13:01:35.727091 1 request.go:729] Waited for 1.592805159s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/pods/etcd-guard-ip-10-0-66-141.ec2.internal
I0128 13:01:36.924378 1 request.go:729] Waited for 1.184631603s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/pods/etcd-guard-ip-10-0-49-9.ec2.internal
I0128 13:02:09.924270 1 request.go:729] Waited for 1.170784095s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-pod
I0128 13:02:10.960873 1 request.go:729] Waited for 1.170796232s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/pods/etcd-ip-10-0-49-9.ec2.internal
I0128 13:02:11.838824 1 status_controller.go:229] clusteroperator/etcd diff {"status":{"conditions":[{"lastTransitionTime":"2026-01-28T09:58:23Z","message":"NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/etcd-ip-10-0-66-141.ec2.internal container \"etcd\" started at 2026-01-28 12:55:37 +0000 UTC is still not ready\nEtcdMembersDegraded: No unhealthy members found","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2026-01-28T10:13:59Z","message":"NodeInstallerProgressing: 3 nodes are at revision 8\nEtcdMembersProgressing: No unstarted etcd members found","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2026-01-28T09:56:51Z","message":"StaticPodsAvailable: 3 nodes are active; 3 nodes are at revision 8\nEtcdMembersAvailable: 3 members are available","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2026-01-28T09:53:21Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2026-01-28T09:53:21Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}}
I0128 13:02:11.970724 1 event.go:377] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"78bf9b93-4c56-4dba-91d7-42dba1335a31", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/etcd changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found" to "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/etcd-ip-10-0-66-141.ec2.internal container \"etcd\" started at 2026-01-28 12:55:37 +0000 UTC is still not ready\nEtcdMembersDegraded: No unhealthy members found"
I0128 13:02:12.002709 1 request.go:729] Waited for 1.082049086s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller
I0128 13:02:12.315212 1 defragcontroller.go:302] etcd member "ip-10-0-49-9.ec2.internal" backend store fragmented: 0.02 %, dbSize: 262320128
I0128 13:02:12.315234 1 defragcontroller.go:302] etcd member "ip-10-0-66-141.ec2.internal" backend store fragmented: 0.04 %, dbSize: 264339456
I0128 13:02:12.315239 1 defragcontroller.go:302] etcd member "ip-10-0-21-80.ec2.internal" backend store fragmented: 0.02 %, dbSize: 264540160
I0128 13:02:13.125967 1 request.go:729] Waited for 1.230721798s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-etcd
I0128 13:02:14.325806 1 request.go:729] Waited for 2.154876359s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/pods/etcd-ip-10-0-21-80.ec2.internal
I0128 13:02:15.524649 1 request.go:729] Waited for 2.189150783s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts
I0128 13:02:16.724905 1 request.go:729] Waited for 2.182043913s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/pods/etcd-guard-ip-10-0-66-141.ec2.internal
W0128 13:02:17.187458 1 dynamic_operator_client.go:352] .status.conditions["StaticPodsDegraded"].reason is missing; this will eventually be fatal
W0128 13:02:17.187475 1 dynamic_operator_client.go:355] .status.conditions["StaticPodsDegraded"].message is missing; this will eventually be fatal
I0128 13:02:17.283729 1 status_controller.go:229] clusteroperator/etcd diff {"status":{"conditions":[{"lastTransitionTime":"2026-01-28T09:58:23Z","message":"NodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2026-01-28T10:13:59Z","message":"NodeInstallerProgressing: 3 nodes are at revision 8\nEtcdMembersProgressing: No unstarted etcd members found","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2026-01-28T09:56:51Z","message":"StaticPodsAvailable: 3 nodes are active; 3 nodes are at revision 8\nEtcdMembersAvailable: 3 members are available","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2026-01-28T09:53:21Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2026-01-28T09:53:21Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}}
I0128 13:02:17.329155 1 defragcontroller.go:302] etcd member "ip-10-0-49-9.ec2.internal" backend store fragmented: 0.18 %, dbSize: 273268736
I0128 13:02:17.329248 1 defragcontroller.go:302] etcd member "ip-10-0-66-141.ec2.internal" backend store fragmented: 0.22 %, dbSize: 275329024
I0128 13:02:17.329274 1 defragcontroller.go:302] etcd member "ip-10-0-21-80.ec2.internal" backend store fragmented: 0.11 %, dbSize: 274948096
I0128 13:02:17.411848 1 event.go:377] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"78bf9b93-4c56-4dba-91d7-42dba1335a31", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/etcd changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/etcd-ip-10-0-66-141.ec2.internal container \"etcd\" started at 2026-01-28 12:55:37 +0000 UTC is still not ready\nEtcdMembersDegraded: No unhealthy members found" to "NodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found"
I0128 13:02:17.924181 1 request.go:729] Waited for 1.970185256s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/pods/revision-pruner-8-ip-10-0-66-141.ec2.internal
I0128 13:02:18.924552 1 request.go:729] Waited for 1.636327466s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-endpoints
I0128 13:02:19.924778 1 request.go:729] Waited for 1.959421455s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/pods/revision-pruner-8-ip-10-0-21-80.ec2.internal
I0128 13:02:20.925880 1 request.go:729] Waited for 1.593019871s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-pod
I0128 13:02:22.124861 1 request.go:729] Waited for 1.574649979s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/secrets/etcd-serving-metrics-ip-10-0-66-141.ec2.internal
I0128 13:02:23.128284 1 request.go:729] Waited for 1.598618078s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/pods/etcd-ip-10-0-66-141.ec2.internal
I0128 13:03:14.724213 1 request.go:729] Waited for 1.091159978s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller
I0128 13:03:15.724828 1 request.go:729] Waited for 1.196377626s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/pods/etcd-ip-10-0-49-9.ec2.internal
I0128 13:04:14.761208 1 request.go:729] Waited for 1.124587201s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller
I0128 13:06:13.518219 1 request.go:729] Waited for 1.0696577s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-pod
I0128 13:06:14.718719 1 request.go:729] Waited for 1.087127306s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/pods/etcd-ip-10-0-21-80.ec2.internal
I0128 13:06:15.918849 1 request.go:729] Waited for 1.196218217s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/pods/etcd-ip-10-0-49-9.ec2.internal
I0128 13:09:13.321518 1 request.go:729] Waited for 1.101656409s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/etcd-backup-sa
I0128 13:09:14.520824 1 request.go:729] Waited for 1.594886376s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/secrets/etcd-signer
I0128 13:09:15.521580 1 request.go:729] Waited for 1.797055978s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/pods/etcd-guard-ip-10-0-66-141.ec2.internal
I0128 13:09:16.721138 1 request.go:729] Waited for 1.995914356s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/pods/etcd-ip-10-0-66-141.ec2.internal
I0128 13:09:17.921545 1 request.go:729] Waited for 1.994296254s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/pods/revision-pruner-8-ip-10-0-66-141.ec2.internal
I0128 13:09:19.121196 1 request.go:729] Waited for 1.595464573s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/pods/etcd-guard-ip-10-0-21-80.ec2.internal
I0128 13:09:20.121498 1 request.go:729] Waited for 1.19377283s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/pods/etcd-ip-10-0-21-80.ec2.internal
I0128 13:09:21.320857 1 request.go:729] Waited for 1.19373568s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/pods/etcd-ip-10-0-49-9.ec2.internal
I0128 13:09:22.321119 1 request.go:729] Waited for 1.193098694s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/etcd-sa
I0128 13:10:12.270017 1 defragcontroller.go:302] etcd member "ip-10-0-49-9.ec2.internal" backend store fragmented: 18.61 %, dbSize: 312360960
I0128 13:10:12.270034 1 defragcontroller.go:302] etcd member "ip-10-0-66-141.ec2.internal" backend store fragmented: 18.95 %, dbSize: 315736064
I0128 13:10:12.270039 1 defragcontroller.go:302] etcd member "ip-10-0-21-80.ec2.internal" backend store fragmented: 19.05 %, dbSize: 316108800
I0128 13:12:14.704507 1 request.go:729] Waited for 1.064041188s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller
I0128 13:15:15.243847 1 request.go:729] Waited for 1.007838217s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-pod
I0128 13:18:14.845795 1 request.go:729] Waited for 1.008566545s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/installer-sa
I0128 13:19:13.315364 1 request.go:729] Waited for 1.133632117s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/pods/etcd-ip-10-0-21-80.ec2.internal
I0128 13:19:14.515698 1 request.go:729] Waited for 1.589389852s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-etcd
I0128 13:19:15.715356 1 request.go:729] Waited for 1.986425292s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/installer-sa
I0128 13:19:16.715460 1 request.go:729] Waited for 1.996291143s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/pods/etcd-guard-ip-10-0-66-141.ec2.internal
I0128 13:19:17.715821 1 request.go:729] Waited for 1.992093274s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/installer-sa
I0128 13:19:18.915520 1 request.go:729] Waited for 1.795987103s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/pods/etcd-ip-10-0-66-141.ec2.internal
I0128 13:19:20.115414 1 request.go:729] Waited for 1.396061313s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/pods/etcd-ip-10-0-49-9.ec2.internal
I0128 13:19:21.315498 1 request.go:729] Waited for 1.195792589s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/pods/etcd-ip-10-0-66-141.ec2.internal
I0128 13:19:22.515535 1 request.go:729] Waited for 1.18884057s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/pods/etcd-ip-10-0-21-80.ec2.internal
I0128 13:21:12.253368 1 defragcontroller.go:302] etcd member "ip-10-0-49-9.ec2.internal" backend store fragmented: 56.97 %, dbSize: 312360960
I0128 13:21:12.253435 1 event.go:377] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"78bf9b93-4c56-4dba-91d7-42dba1335a31", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'DefragControllerDefragmentAttempt' Attempting defrag on member: ip-10-0-49-9.ec2.internal, memberID: acc0d9e1a0b7c947, dbSize: 312360960, dbInUse: 134418432, leader ID: 15124431850926810165
I0128 13:21:12.811424 1 event.go:377] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"78bf9b93-4c56-4dba-91d7-42dba1335a31", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'DefragControllerDefragmentSuccess' etcd member has been defragmented: ip-10-0-49-9.ec2.internal, memberID: 12448188933139319111
I0128 13:21:50.821948 1 defragcontroller.go:302] etcd member "ip-10-0-66-141.ec2.internal" backend store fragmented: 56.86 %, dbSize: 315736064
I0128 13:21:50.822455 1 event.go:377] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"78bf9b93-4c56-4dba-91d7-42dba1335a31", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason:
'DefragControllerDefragmentAttempt' Attempting defrag on member: ip-10-0-66-141.ec2.internal, memberID: cacaf85de04d99cd, dbSize: 315736064, dbInUse: 136204288, leader ID: 15124431850926810165 I0128 13:21:51.436896 1 event.go:377] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"78bf9b93-4c56-4dba-91d7-42dba1335a31", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'DefragControllerDefragmentSuccess' etcd member has been defragmented: ip-10-0-66-141.ec2.internal, memberID: 14612765023035824589 I0128 13:22:29.437634 1 defragcontroller.go:302] etcd member "ip-10-0-21-80.ec2.internal" backend store fragmented: 56.93 %, dbSize: 316108800 I0128 13:22:29.438235 1 event.go:377] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"78bf9b93-4c56-4dba-91d7-42dba1335a31", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'DefragControllerDefragmentAttempt' Attempting defrag on member: ip-10-0-21-80.ec2.internal, memberID: d1e4c6a2c018a435, dbSize: 316108800, dbInUse: 136159232, leader ID: 15124431850926810165 I0128 13:22:30.082304 1 event.go:377] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"78bf9b93-4c56-4dba-91d7-42dba1335a31", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'DefragControllerDefragmentSuccess' etcd member has been defragmented: ip-10-0-21-80.ec2.internal, memberID: 15124431850926810165 I0128 13:24:15.463918 1 request.go:729] Waited for 1.090400937s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-pod I0128 13:25:14.943316 1 request.go:729] Waited for 1.102131598s due to client-side throttling, not priority and fairness, request: 
GET:https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/installer-sa I0128 13:29:13.315639 1 request.go:729] Waited for 1.134478217s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/pods/etcd-ip-10-0-21-80.ec2.internal I0128 13:29:14.316086 1 request.go:729] Waited for 1.796244656s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-pod I0128 13:29:15.316224 1 request.go:729] Waited for 1.777760925s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller I0128 13:29:16.515233 1 request.go:729] Waited for 1.99600066s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/etcd-sa I0128 13:29:17.515521 1 request.go:729] Waited for 1.995620192s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/secrets/etcd-metric-signer I0128 13:29:18.715685 1 request.go:729] Waited for 1.595920256s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/pods/etcd-ip-10-0-66-141.ec2.internal I0128 13:29:19.715926 1 request.go:729] Waited for 1.19652697s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/pods/etcd-ip-10-0-49-9.ec2.internal I0128 13:29:20.915719 1 request.go:729] Waited for 1.195788259s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/pods/etcd-ip-10-0-66-141.ec2.internal I0128 13:29:22.115416 1 request.go:729] Waited for 1.394847198s due to client-side throttling, not priority and fairness, 
request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/pods/etcd-guard-ip-10-0-49-9.ec2.internal I0128 13:29:23.115828 1 request.go:729] Waited for 1.196900755s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/services/etcd I0128 13:30:14.850485 1 request.go:729] Waited for 1.005620093s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-etcd I0128 13:32:12.254778 1 defragcontroller.go:302] etcd member "ip-10-0-49-9.ec2.internal" backend store fragmented: 49.15 %, dbSize: 214204416 I0128 13:32:12.254913 1 event.go:377] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"78bf9b93-4c56-4dba-91d7-42dba1335a31", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'DefragControllerDefragmentAttempt' Attempting defrag on member: ip-10-0-49-9.ec2.internal, memberID: acc0d9e1a0b7c947, dbSize: 214204416, dbInUse: 108933120, leader ID: 15124431850926810165 I0128 13:32:12.771340 1 event.go:377] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"78bf9b93-4c56-4dba-91d7-42dba1335a31", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'DefragControllerDefragmentSuccess' etcd member has been defragmented: ip-10-0-49-9.ec2.internal, memberID: 12448188933139319111 I0128 13:32:14.850657 1 request.go:729] Waited for 1.004547437s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/installer-sa I0128 13:32:50.783062 1 defragcontroller.go:302] etcd member "ip-10-0-66-141.ec2.internal" backend store fragmented: 49.17 %, dbSize: 214167552 I0128 13:32:50.783264 1 event.go:377] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", 
Name:"etcd-operator", UID:"78bf9b93-4c56-4dba-91d7-42dba1335a31", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'DefragControllerDefragmentAttempt' Attempting defrag on member: ip-10-0-66-141.ec2.internal, memberID: cacaf85de04d99cd, dbSize: 214167552, dbInUse: 108859392, leader ID: 15124431850926810165 I0128 13:32:51.290535 1 event.go:377] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"78bf9b93-4c56-4dba-91d7-42dba1335a31", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'DefragControllerDefragmentSuccess' etcd member has been defragmented: ip-10-0-66-141.ec2.internal, memberID: 14612765023035824589