W0320 12:43:23.334850 1 cmd.go:257] Using insecure, self-signed certificates
I0320 12:43:23.685193 1 start.go:138] Unable to read service ca bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory
I0320 12:43:23.685638 1 observer_polling.go:159] Starting file observer
I0320 12:43:24.320234 1 operator.go:60] Starting insights-operator v0.0.0-master+$Format:%H$
I0320 12:43:24.320532 1 legacy_config.go:327] Current config: {"report":false,"storagePath":"/var/lib/insights-operator","interval":"2h","endpoint":"https://console.redhat.com/api/ingress/v1/upload","conditionalGathererEndpoint":"https://console.redhat.com/api/gathering/v2/%s/gathering_rules","pull_report":{"endpoint":"https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/reports","delay":"60s","timeout":"3000s","min_retry":"30s"},"impersonate":"system:serviceaccount:openshift-insights:gather","enableGlobalObfuscation":false,"ocm":{"scaEndpoint":"https://api.openshift.com/api/accounts_mgmt/v1/entitlement_certificates","scaInterval":"8h","scaDisabled":false,"clusterTransferEndpoint":"https://api.openshift.com/api/accounts_mgmt/v1/cluster_transfers/","clusterTransferInterval":"12h"},"disableInsightsAlerts":false,"processingStatusEndpoint":"https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/request/%s/status","reportEndpointTechPreview":"https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/request/%s/report"}
I0320 12:43:24.321357 1 simple_featuregate_reader.go:171] Starting feature-gate-detector
I0320 12:43:24.321445 1 secure_serving.go:57] Forcing use of http/1.1 only
W0320 12:43:24.321469 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.
W0320 12:43:24.321478 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.
W0320 12:43:24.321484 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.
W0320 12:43:24.321487 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.
W0320 12:43:24.321491 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.
W0320 12:43:24.321495 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.
I0320 12:43:24.325091 1 operator.go:125] FeatureGates initialized: knownFeatureGates=[AdditionalRoutingCapabilities AdminNetworkPolicy AlibabaPlatform AzureWorkloadIdentity BuildCSIVolumes CPMSMachineNamePrefix ConsolePluginContentSecurityPolicy ExternalOIDC ExternalOIDCWithUIDAndExtraClaimMappings GatewayAPI GatewayAPIController HighlyAvailableArbiter ImageVolume IngressControllerLBSubnetsAWS KMSv1 MachineConfigNodes ManagedBootImages ManagedBootImagesAWS MetricsCollectionProfiles NetworkDiagnosticsConfig NetworkLiveMigration NetworkSegmentation PinnedImages ProcMountType RouteAdvertisements RouteExternalCertificate ServiceAccountTokenNodeBinding SetEIPForNLBIngressController SigstoreImageVerification StoragePerformantSecurityPolicy UpgradeStatus UserNamespacesPodSecurityStandards UserNamespacesSupport VSphereMultiDisk VSphereMultiNetworks AWSClusterHostedDNS AWSClusterHostedDNSInstall AWSDedicatedHosts AWSServiceLBNetworkSecurityGroup AutomatedEtcdBackup AzureClusterHostedDNSInstall AzureDedicatedHosts AzureMultiDisk BootImageSkewEnforcement BootcNodeManagement ClusterAPIInstall ClusterAPIInstallIBMCloud ClusterMonitoringConfig ClusterVersionOperatorConfiguration DNSNameResolver DualReplica DyanmicServiceEndpointIBMCloud DynamicResourceAllocation EtcdBackendQuota EventedPLEG Example Example2 ExternalSnapshotMetadata GCPClusterHostedDNS GCPClusterHostedDNSInstall GCPCustomAPIEndpoints GCPCustomAPIEndpointsInstall ImageModeStatusReporting ImageStreamImportMode IngressControllerDynamicConfigurationManager InsightsConfig InsightsConfigAPI InsightsOnDemandDataGather IrreconcilableMachineConfig KMSEncryptionProvider MachineAPIMigration MachineAPIOperatorDisableMachineHealthCheckController ManagedBootImagesAzure ManagedBootImagesvSphere MaxUnavailableStatefulSet MinimumKubeletVersion MixedCPUsAllocation MultiArchInstallAzure MultiDiskSetup MutatingAdmissionPolicy NewOLM NewOLMCatalogdAPIV1Metas NewOLMOwnSingleNamespace NewOLMPreflightPermissionChecks NewOLMWebhookProviderOpenshiftServiceCA NoRegistryClusterOperations NodeSwap NutanixMultiSubnets OVNObservability OpenShiftPodSecurityAdmission PreconfiguredUDNAddresses SELinuxMount ShortCertRotation SignatureStores SigstoreImageVerificationPKI TranslateStreamCloseWebsocketRequests VSphereConfigurableMaxAllowedBlockVolumesPerNode VSphereHostVMGroupZonal VSphereMixedNodeEnv VolumeAttributesClass VolumeGroupSnapshot]
I0320 12:43:24.325124 1 event.go:377] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-insights", Name:"insights-operator", UID:"81ba52b2-c914-4261-a7ef-d7b861149b11", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'FeatureGatesInitialized' FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdditionalRoutingCapabilities", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BuildCSIVolumes", "CPMSMachineNamePrefix", "ConsolePluginContentSecurityPolicy", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GatewayAPI", "GatewayAPIController", "HighlyAvailableArbiter", "ImageVolume", "IngressControllerLBSubnetsAWS", "KMSv1", "MachineConfigNodes", "ManagedBootImages", "ManagedBootImagesAWS", "MetricsCollectionProfiles", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "PinnedImages", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SetEIPForNLBIngressController", "SigstoreImageVerification", "StoragePerformantSecurityPolicy", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiDisk", "VSphereMultiNetworks"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AWSClusterHostedDNSInstall", "AWSDedicatedHosts", "AWSServiceLBNetworkSecurityGroup", "AutomatedEtcdBackup", "AzureClusterHostedDNSInstall", "AzureDedicatedHosts", "AzureMultiDisk", "BootImageSkewEnforcement", "BootcNodeManagement", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ClusterVersionOperatorConfiguration", "DNSNameResolver", "DualReplica", "DyanmicServiceEndpointIBMCloud", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "Example2", "ExternalSnapshotMetadata", "GCPClusterHostedDNS", "GCPClusterHostedDNSInstall", "GCPCustomAPIEndpoints", "GCPCustomAPIEndpointsInstall", "ImageModeStatusReporting", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "IrreconcilableMachineConfig", "KMSEncryptionProvider", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "ManagedBootImagesAzure", "ManagedBootImagesvSphere", "MaxUnavailableStatefulSet", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "MultiDiskSetup", "MutatingAdmissionPolicy", "NewOLM", "NewOLMCatalogdAPIV1Metas", "NewOLMOwnSingleNamespace", "NewOLMPreflightPermissionChecks", "NewOLMWebhookProviderOpenshiftServiceCA", "NoRegistryClusterOperations", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PreconfiguredUDNAddresses", "SELinuxMount", "ShortCertRotation", "SignatureStores", "SigstoreImageVerificationPKI", "TranslateStreamCloseWebsocketRequests", "VSphereConfigurableMaxAllowedBlockVolumesPerNode", "VSphereHostVMGroupZonal", "VSphereMixedNodeEnv", "VolumeAttributesClass", "VolumeGroupSnapshot"}}
I0320 12:43:24.326070 1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
I0320 12:43:24.326097 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController
I0320 12:43:24.326111 1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I0320 12:43:24.326122 1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
I0320 12:43:24.326129 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0320 12:43:24.326136 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I0320 12:43:24.326408 1 dynamic_serving_content.go:135] "Starting controller" name="serving-cert::/tmp/serving-cert-3170075934/tls.crt::/tmp/serving-cert-3170075934/tls.key"
I0320 12:43:24.326809 1 secure_serving.go:213] Serving securely on [::]:8443
I0320 12:43:24.326850 1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
W0320 12:43:24.330037 1 configmapobserver.go:64] Cannot get the configuration config map: configmaps "insights-config" not found. Default configuration is used.
I0320 12:43:24.330063 1 secretconfigobserver.go:216] Legacy configuration set: enabled=false endpoint=https://console.redhat.com/api/ingress/v1/upload conditional_gatherer_endpoint=https://console.redhat.com/api/gathering/v2/%s/gathering_rules interval=2h0m0s token=false reportEndpoint=https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/reports initialPollingDelay=1m0s minRetryTime=30s pollingTimeout=50m0s processingStatusEndpoint=https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/request/%s/status
I0320 12:43:24.330164 1 base_controller.go:76] Waiting for caches to sync for ConfigController
I0320 12:43:24.334634 1 secretconfigobserver.go:249] Found cloud.openshift.com token
I0320 12:43:24.334654 1 secretconfigobserver.go:204] Legacy configuration updated: enabled=true endpoint=https://console.redhat.com/api/ingress/v1/upload conditional_gatherer_endpoint=https://console.redhat.com/api/gathering/v2/%s/gathering_rules interval=2h0m0s token=true reportEndpoint=https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/reports initialPollingDelay=1m0s minRetryTime=30s pollingTimeout=50m0s processingStatusEndpoint=https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/request/%s/status
I0320 12:43:24.339944 1 secretconfigobserver.go:119] support secret does not exist
I0320 12:43:24.344423 1 secretconfigobserver.go:249] Found cloud.openshift.com token
I0320 12:43:24.348250 1 secretconfigobserver.go:119] support secret does not exist
I0320 12:43:24.350137 1 recorder.go:161] Pruning old reports every 6h5m5s, max age is 288h0m0s
I0320 12:43:24.355157 1 controllerstatus.go:80] name=insightsuploader healthy=true reason= message=
I0320 12:43:24.355181 1 insightsuploader.go:86] Reporting status periodically to https://console.redhat.com/api/ingress/v1/upload every 2h0m0s, starting in 1m30s
I0320 12:43:24.355186 1 controllerstatus.go:80] name=insightsreport healthy=true reason= message=
I0320 12:43:24.355197 1 insightsreport.go:296] Starting report retriever
I0320 12:43:24.355197 1 periodic.go:209] Running clusterconfig gatherer
I0320 12:43:24.355204 1 insightsreport.go:298] Insights analysis reports will be downloaded from the https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/reports endpoint with a delay of 1m0s
I0320 12:43:24.355245 1 tasks_processing.go:45] number of workers: 32
I0320 12:43:24.355280 1 tasks_processing.go:69] worker 11 listening for tasks.
I0320 12:43:24.355343 1 tasks_processing.go:69] worker 25 listening for tasks.
I0320 12:43:24.355390 1 tasks_processing.go:71] worker 25 working on image_pruners task.
I0320 12:43:24.355282 1 tasks_processing.go:69] worker 31 listening for tasks.
I0320 12:43:24.355279 1 tasks_processing.go:69] worker 1 listening for tasks.
I0320 12:43:24.355290 1 tasks_processing.go:69] worker 6 listening for tasks.
I0320 12:43:24.355290 1 tasks_processing.go:69] worker 12 listening for tasks.
I0320 12:43:24.355292 1 tasks_processing.go:69] worker 0 listening for tasks.
I0320 12:43:24.355299 1 tasks_processing.go:69] worker 13 listening for tasks.
I0320 12:43:24.355302 1 tasks_processing.go:69] worker 22 listening for tasks.
I0320 12:43:24.355307 1 tasks_processing.go:69] worker 27 listening for tasks.
I0320 12:43:24.355308 1 tasks_processing.go:69] worker 2 listening for tasks.
I0320 12:43:24.355310 1 tasks_processing.go:69] worker 14 listening for tasks.
I0320 12:43:24.355432 1 tasks_processing.go:71] worker 2 working on sap_pods task.
I0320 12:43:24.355447 1 tasks_processing.go:71] worker 12 working on machine_autoscalers task.
I0320 12:43:24.355453 1 tasks_processing.go:71] worker 14 working on oauths task.
I0320 12:43:24.355495 1 tasks_processing.go:71] worker 13 working on storage_cluster task.
I0320 12:43:24.355553 1 tasks_processing.go:71] worker 22 working on machine_healthchecks task.
I0320 12:43:24.355316 1 tasks_processing.go:69] worker 3 listening for tasks.
I0320 12:43:24.355907 1 tasks_processing.go:71] worker 3 working on pdbs task.
I0320 12:43:24.355318 1 tasks_processing.go:69] worker 15 listening for tasks.
I0320 12:43:24.355321 1 tasks_processing.go:69] worker 29 listening for tasks.
I0320 12:43:24.355321 1 tasks_processing.go:69] worker 26 listening for tasks.
I0320 12:43:24.355324 1 tasks_processing.go:69] worker 16 listening for tasks.
I0320 12:43:24.355323 1 tasks_processing.go:69] worker 5 listening for tasks.
I0320 12:43:24.355328 1 tasks_processing.go:69] worker 30 listening for tasks.
I0320 12:43:24.355331 1 tasks_processing.go:69] worker 17 listening for tasks.
I0320 12:43:24.355332 1 tasks_processing.go:69] worker 19 listening for tasks.
I0320 12:43:24.355332 1 tasks_processing.go:69] worker 23 listening for tasks.
I0320 12:43:24.355339 1 tasks_processing.go:69] worker 20 listening for tasks.
I0320 12:43:24.355339 1 tasks_processing.go:69] worker 24 listening for tasks.
I0320 12:43:24.355340 1 tasks_processing.go:69] worker 18 listening for tasks.
I0320 12:43:24.355346 1 tasks_processing.go:69] worker 21 listening for tasks.
I0320 12:43:24.355350 1 tasks_processing.go:71] worker 11 working on openshift_machine_api_events task.
I0320 12:43:24.356084 1 tasks_processing.go:71] worker 21 working on networks task.
I0320 12:43:24.356129 1 tasks_processing.go:71] worker 19 working on image_registries task.
I0320 12:43:24.355349 1 tasks_processing.go:69] worker 9 listening for tasks.
I0320 12:43:24.356181 1 tasks_processing.go:71] worker 9 working on number_of_pods_and_netnamespaces_with_sdn_annotations task.
I0320 12:43:24.356129 1 tasks_processing.go:71] worker 17 working on authentication task.
I0320 12:43:24.356104 1 tasks_processing.go:71] worker 30 working on crds task.
I0320 12:43:24.355344 1 tasks_processing.go:69] worker 4 listening for tasks.
I0320 12:43:24.355355 1 tasks_processing.go:69] worker 7 listening for tasks.
I0320 12:43:24.355359 1 tasks_processing.go:69] worker 10 listening for tasks.
I0320 12:43:24.355314 1 tasks_processing.go:69] worker 28 listening for tasks.
I0320 12:43:24.355563 1 tasks_processing.go:71] worker 27 working on operators task.
I0320 12:43:24.356373 1 tasks_processing.go:71] worker 28 working on pod_network_connectivity_checks task.
I0320 12:43:24.355349 1 tasks_processing.go:69] worker 8 listening for tasks.
I0320 12:43:24.356453 1 tasks_processing.go:71] worker 8 working on qemu_kubevirt_launcher_logs task.
I0320 12:43:24.355577 1 tasks_processing.go:71] worker 31 working on nodes task.
I0320 12:43:24.356541 1 tasks_processing.go:71] worker 7 working on overlapping_namespace_uids task.
I0320 12:43:24.356663 1 tasks_processing.go:71] worker 10 working on openstack_dataplanedeployments task.
I0320 12:43:24.356706 1 tasks_processing.go:71] worker 4 working on openstack_controlplanes task.
I0320 12:43:24.355572 1 tasks_processing.go:71] worker 1 working on nodenetworkconfigurationpolicies task.
I0320 12:43:24.355583 1 tasks_processing.go:71] worker 0 working on machine_configs task.
I0320 12:43:24.355588 1 tasks_processing.go:71] worker 6 working on version task.
I0320 12:43:24.356115 1 tasks_processing.go:71] worker 5 working on config_maps task.
I0320 12:43:24.356121 1 tasks_processing.go:71] worker 29 working on machine_config_pools task.
I0320 12:43:24.356111 1 tasks_processing.go:71] worker 16 working on service_accounts task.
I0320 12:43:24.356119 1 tasks_processing.go:71] worker 15 working on metrics task.
W0320 12:43:24.357224 1 gather_most_recent_metrics.go:64] Unable to load metrics client, no metrics will be collected: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory
I0320 12:43:24.357241 1 tasks_processing.go:71] worker 15 working on dvo_metrics task.
I0320 12:43:24.356138 1 tasks_processing.go:71] worker 24 working on openstack_dataplanenodesets task.
I0320 12:43:24.357388 1 gather.go:177] gatherer "clusterconfig" function "metrics" took 37.547µs to process 0 records
I0320 12:43:24.356141 1 tasks_processing.go:71] worker 20 working on validating_webhook_configurations task.
I0320 12:43:24.356147 1 tasks_processing.go:71] worker 18 working on machines task.
I0320 12:43:24.356147 1 tasks_processing.go:71] worker 26 working on node_logs task.
I0320 12:43:24.356126 1 tasks_processing.go:71] worker 23 working on feature_gates task.
I0320 12:43:24.358368 1 tasks_processing.go:71] worker 12 working on infrastructures task.
I0320 12:43:24.358386 1 gather.go:177] gatherer "clusterconfig" function "machine_autoscalers" took 2.905326ms to process 0 records
I0320 12:43:24.358598 1 tasks_processing.go:71] worker 2 working on ingress task.
I0320 12:43:24.358659 1 gather.go:177] gatherer "clusterconfig" function "sap_pods" took 3.154902ms to process 0 records
I0320 12:43:24.358671 1 gather.go:177] gatherer "clusterconfig" function "storage_cluster" took 3.056764ms to process 0 records
I0320 12:43:24.358728 1 tasks_processing.go:71] worker 22 working on ingress_certificates task.
E0320 12:43:24.358732 1 gather.go:140] gatherer "clusterconfig" function "machine_healthchecks" failed with the error: machinehealthchecks.machine.openshift.io is forbidden: User "system:serviceaccount:openshift-insights:gather" cannot list resource "machinehealthchecks" in API group "machine.openshift.io" at the cluster scope
I0320 12:43:24.358746 1 gather.go:177] gatherer "clusterconfig" function "machine_healthchecks" took 2.93976ms to process 0 records
I0320 12:43:24.358730 1 tasks_processing.go:71] worker 13 working on ceph_cluster task.
I0320 12:43:24.363048 1 tasks_processing.go:71] worker 4 working on openshift_logging task.
I0320 12:43:24.363070 1 gather.go:177] gatherer "clusterconfig" function "openstack_controlplanes" took 6.321534ms to process 0 records
E0320 12:43:24.363290 1 gather.go:140] gatherer "clusterconfig" function "pod_network_connectivity_checks" failed with the error: the server could not find the requested resource (get podnetworkconnectivitychecks.controlplane.operator.openshift.io)
I0320 12:43:24.363340 1 gather.go:177] gatherer "clusterconfig" function "pod_network_connectivity_checks" took 6.760163ms to process 0 records
I0320 12:43:24.363293 1 controller.go:128] Initializing last reported time to 0001-01-01T00:00:00Z
I0320 12:43:24.363383 1 controller.go:212] Source periodic-clusterconfig *controllerstatus.Simple is not ready
I0320 12:43:24.363389 1 controller.go:212] Source periodic-conditional *controllerstatus.Simple is not ready
I0320 12:43:24.363407 1 controller.go:212] Source periodic-workloads *controllerstatus.Simple is not ready
I0320 12:43:24.363429 1 controller.go:489] The operator is still being initialized
I0320 12:43:24.363459 1 controller.go:512] The operator is healthy
I0320 12:43:24.363304 1 tasks_processing.go:71] worker 28 working on tsdb_status task.
W0320 12:43:24.363581 1 gather_prometheus_tsdb_status.go:38] Unable to load metrics client, tsdb status cannot be collected: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory
I0320 12:43:24.363613 1 tasks_processing.go:71] worker 28 working on install_plans task.
I0320 12:43:24.363768 1 gather.go:177] gatherer "clusterconfig" function "tsdb_status" took 52.205µs to process 0 records
I0320 12:43:24.364930 1 tasks_processing.go:71] worker 11 working on monitoring_persistent_volumes task.
I0320 12:43:24.364948 1 gather.go:177] gatherer "clusterconfig" function "openshift_machine_api_events" took 8.919204ms to process 0 records
I0320 12:43:24.365221 1 tasks_processing.go:71] worker 14 working on lokistack task.
I0320 12:43:24.365487 1 recorder.go:75] Recording config/oauth with fingerprint=4b4760e002ddd750ff6f97bd98e2ab9a9c472a5e2037b8476f506f2b47a427d4
I0320 12:43:24.365504 1 gather.go:177] gatherer "clusterconfig" function "oauths" took 9.74996ms to process 1 records
I0320 12:43:24.368937 1 tasks_processing.go:71] worker 24 working on proxies task.
I0320 12:43:24.369193 1 gather.go:177] gatherer "clusterconfig" function "openstack_dataplanenodesets" took 11.595672ms to process 0 records
I0320 12:43:24.369464 1 tasks_processing.go:71] worker 21 working on olm_operators task.
I0320 12:43:24.369895 1 recorder.go:75] Recording config/network with fingerprint=cd2eabf74da5764b29da3d185a818451870394e6267b426a32104ce907ec4646
I0320 12:43:24.369955 1 gather.go:177] gatherer "clusterconfig" function "networks" took 13.32824ms to process 1 records
I0320 12:43:24.371155 1 tasks_processing.go:71] worker 3 working on cluster_apiserver task.
I0320 12:43:24.371200 1 gather_logs.go:145] no pods in namespace were found
I0320 12:43:24.371382 1 recorder.go:75] Recording config/pdbs/openshift-image-registry/image-registry with fingerprint=33195e8ab6b7e087d113f481c930ad17195d36ea7ff76a118adcb3bfd79b3a8d
I0320 12:43:24.371452 1 recorder.go:75] Recording config/pdbs/openshift-ingress/router-default with fingerprint=6805c8ceb646898b197faca15045b2f65ca94eba9812a2b3adff778377f078b5
I0320 12:43:24.371490 1 recorder.go:75] Recording config/pdbs/openshift-operator-lifecycle-manager/packageserver-pdb with fingerprint=938e1bba5d233e0a8c3bd58a8e9e5ca404a4edeb49ff91823c9bdd6e974bf244
I0320 12:43:24.371529 1 gather.go:177] gatherer "clusterconfig" function "pdbs" took 15.228797ms to process 3 records
I0320 12:43:24.371565 1 gather.go:177] gatherer "clusterconfig" function "qemu_kubevirt_launcher_logs" took 14.751427ms to process 0 records
I0320 12:43:24.371628 1 tasks_processing.go:71] worker 8 working on container_images task.
I0320 12:43:24.371873 1 recorder.go:75] Recording config/node/ip-10-0-0-106.ec2.internal with fingerprint=a75348b01471973972c93a05580d7a758333531baacc38e4faa37e67686b90b5
I0320 12:43:24.371942 1 recorder.go:75] Recording config/node/ip-10-0-1-250.ec2.internal with fingerprint=204553909ee9b06aa4b7ce541433ec1a84ff3b16633a388029747ca470c533ca
I0320 12:43:24.372004 1 recorder.go:75] Recording config/node/ip-10-0-2-74.ec2.internal with fingerprint=d40548f1d2978b0af0f3e47fc4284e9ed862d9548cb07570e292bb264993f09c
I0320 12:43:24.372013 1 gather.go:177] gatherer "clusterconfig" function "nodes" took 14.819195ms to process 3 records
I0320 12:43:24.372066 1 tasks_processing.go:71] worker 31 working on active_alerts task.
W0320 12:43:24.372091 1 gather_active_alerts.go:54] Unable to load alerts client, no alerts will be collected: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory
I0320 12:43:24.372168 1 tasks_processing.go:71] worker 25 working on nodenetworkstates task.
I0320 12:43:24.372228 1 recorder.go:75] Recording config/clusteroperator/imageregistry.operator.openshift.io/imagepruner/cluster with fingerprint=a851b2119afa22b8aa32c03052447660b69dbd8074fd11f4d4bc23e2d222e21c
I0320 12:43:24.372243 1 gather.go:177] gatherer "clusterconfig" function "image_pruners" took 16.160521ms to process 1 records
I0320 12:43:24.372252 1 gather.go:177] gatherer "clusterconfig" function "active_alerts" took 28.871µs to process 0 records
I0320 12:43:24.372290 1 tasks_processing.go:71] worker 31 working on storage_classes task.
I0320 12:43:24.373771 1 tasks_processing.go:71] worker 19 working on jaegers task.
I0320 12:43:24.374470 1 recorder.go:75] Recording config/clusteroperator/imageregistry.operator.openshift.io/config/cluster with fingerprint=6d181aea005dabaa2d06d2727549d27363ad06903ce88443f921944f1b57c555
I0320 12:43:24.374494 1 gather.go:177] gatherer "clusterconfig" function "image_registries" took 17.596063ms to process 1 records
I0320 12:43:24.381091 1 tasks_processing.go:71] worker 1 working on operators_pods_and_events task.
I0320 12:43:24.381098 1 gather.go:177] gatherer "clusterconfig" function "nodenetworkconfigurationpolicies" took 24.278409ms to process 0 records
E0320 12:43:24.381334 1 gather.go:140] gatherer "clusterconfig" function "machines" failed with the error: machines.machine.openshift.io is forbidden: User "system:serviceaccount:openshift-insights:gather" cannot list resource "machines" in API group "machine.openshift.io" at the cluster scope
I0320 12:43:24.381345 1 gather.go:177] gatherer "clusterconfig" function "machines" took 23.506077ms to process 0 records
I0320 12:43:24.381354 1 gather.go:177] gatherer "clusterconfig" function "lokistack" took 16.047103ms to process 0 records
I0320 12:43:24.381365 1 tasks_processing.go:71] worker 14 working on certificate_signing_requests task.
I0320 12:43:24.381668 1 tasks_processing.go:71] worker 18 working on container_runtime_configs task.
I0320 12:43:24.381778 1 recorder.go:75] Recording config/authentication with fingerprint=3148bf0095bdf9b2e49678a9fabfb2fdc0871441badc56ea5d0b5ce2583ee277
I0320 12:43:24.381800 1 gather.go:177] gatherer "clusterconfig" function "authentication" took 25.349742ms to process 1 records
I0320 12:43:24.381687 1 tasks_processing.go:71] worker 17 working on silenced_alerts task.
W0320 12:43:24.381839 1 gather_silenced_alerts.go:38] Unable to load alerts client, no alerts will be collected: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory
I0320 12:43:24.381854 1 tasks_processing.go:71] worker 17 working on cost_management_metrics_configs task.
I0320 12:43:24.381989 1 gather.go:177] gatherer "clusterconfig" function "silenced_alerts" took 34.237µs to process 0 records
I0320 12:43:24.382054 1 gather.go:177] gatherer "clusterconfig" function "openstack_dataplanedeployments" took 25.283808ms to process 0 records
I0320 12:43:24.382099 1 tasks_processing.go:71] worker 10 working on openstack_version task.
I0320 12:43:24.382740 1 tasks_processing.go:71] worker 13 working on sap_datahubs task.
I0320 12:43:24.382896 1 gather.go:177] gatherer "clusterconfig" function "ceph_cluster" took 23.979603ms to process 0 records
I0320 12:43:24.383238 1 tasks_processing.go:71] worker 30 working on sap_config task.
I0320 12:43:24.383653 1 recorder.go:75] Recording config/crd/volumesnapshots.snapshot.storage.k8s.io with fingerprint=b642bca76aa8237e01c1fd9ec96a922419ff23027a2ec56afcbe6ebc7711d376
I0320 12:43:24.383992 1 recorder.go:75] Recording config/crd/volumesnapshotcontents.snapshot.storage.k8s.io with fingerprint=d7a20a0b739a89d4a96ff94a960d6bbdcfa08b67dcfcb90ffefc2e26b0f5fc6d
I0320 12:43:24.384009 1 gather.go:177] gatherer "clusterconfig" function "crds" took 26.578485ms to process 2 records
I0320 12:43:24.384016 1 gather.go:177] gatherer "clusterconfig" function "openshift_logging" took 19.779441ms to process 0 records
I0320 12:43:24.384026 1 tasks_processing.go:71] worker 4 working on clusterroles task.
W0320 12:43:24.387436 1 gather_dvo_metrics.go:210] Failed to read the DVO metrics. Trying again.
I0320 12:43:24.389211 1 tasks_processing.go:71] worker 11 working on machine_sets task.
I0320 12:43:24.389311 1 gather.go:177] gatherer "clusterconfig" function "monitoring_persistent_volumes" took 24.262485ms to process 0 records
I0320 12:43:24.391596 1 tasks_processing.go:71] worker 19 working on schedulers task.
I0320 12:43:24.391771 1 gather.go:177] gatherer "clusterconfig" function "jaegers" took 17.809297ms to process 0 records
I0320 12:43:24.398079 1 tasks_processing.go:71] worker 3 working on support_secret task.
I0320 12:43:24.399958 1 recorder.go:75] Recording config/apiserver with fingerprint=f95509b2e4fe7b8992638cdc4b9bd54f471899193448b3b468f72c7494731fa2
I0320 12:43:24.400039 1 gather.go:177] gatherer "clusterconfig" function "cluster_apiserver" took 26.903037ms to process 1 records
I0320 12:43:24.400183 1 recorder.go:75] Recording config/proxy with fingerprint=840f2a0f0def068cfca1d3c736f11cf0b0f3eb7f880cfa91314ce793d111a08a
I0320 12:43:24.400224 1 gather.go:177] gatherer "clusterconfig" function "proxies" took 29.971502ms to process 1 records
I0320 12:43:24.400268 1 gather.go:177] gatherer "clusterconfig" function "nodenetworkstates" took 26.782573ms to process 0 records
I0320 12:43:24.400307 1 recorder.go:75] Recording config/namespaces_with_overlapping_uids with fingerprint=4f53cda18c2baa0c0354bb5f9a3ecbe5ed12ab4d8e11ba873c2f11161202b945
I0320 12:43:24.400349 1 gather.go:177] gatherer "clusterconfig" function "overlapping_namespace_uids" took 43.153647ms to process 1 records
I0320 12:43:24.400375 1 gather.go:177] gatherer "clusterconfig" function "sap_datahubs" took 17.270402ms to process 0 records
I0320 12:43:24.400399 1 gather.go:177] gatherer "clusterconfig" function "openstack_version" took 17.961394ms to process 0 records
I0320 12:43:24.400442 1 gather.go:177] gatherer "clusterconfig" function "sap_config" took 16.811559ms to process 0 records
I0320 12:43:24.400469 1 gather.go:177] gatherer "clusterconfig" function "cost_management_metrics_configs" took 18.242192ms to process 0 records
I0320 12:43:24.400497 1 gather.go:177] gatherer "clusterconfig" function "container_runtime_configs" took 18.374194ms to process 0 records
I0320 12:43:24.400707 1 recorder.go:75] Recording config/storage/storageclasses/gp2-csi with fingerprint=b43736825a1c274f9f17c65600cf8c0df52f51d93dedb7a57b8352db32f9a80d
I0320 12:43:24.400853 1 recorder.go:75] Recording config/storage/storageclasses/gp3-csi with fingerprint=24be705da17c1ad0f6d124e464b74c59f3dbb34d036abe45711f9097cccbfed4
I0320 12:43:24.400892 1 gather.go:177] gatherer "clusterconfig" function "storage_classes" took 28.026321ms to process 2 records
I0320 12:43:24.401016 1 gather.go:177] gatherer "clusterconfig" function "machine_sets" took 11.109664ms to process 0 records
I0320 12:43:24.401050 1 tasks_processing.go:74] worker 11 stopped.
I0320 12:43:24.400767 1 tasks_processing.go:71] worker 13 working on image task.
I0320 12:43:24.400776 1 tasks_processing.go:71] worker 24 working on aggregated_monitoring_cr_names task.
I0320 12:43:24.400781 1 tasks_processing.go:71] worker 25 working on mutating_webhook_configurations task.
I0320 12:43:24.400786 1 tasks_processing.go:74] worker 7 stopped.
I0320 12:43:24.400792 1 tasks_processing.go:74] worker 30 stopped.
I0320 12:43:24.400796 1 tasks_processing.go:74] worker 10 stopped.
I0320 12:43:24.400802 1 tasks_processing.go:74] worker 17 stopped.
I0320 12:43:24.400807 1 tasks_processing.go:74] worker 18 stopped.
I0320 12:43:24.401721 1 tasks_processing.go:74] worker 2 stopped.
I0320 12:43:24.401936 1 recorder.go:75] Recording config/ingress with fingerprint=e85eea23efc68937f15fdc1a9db92bcf050794f86176b03af8fa9ff1417b78d3
W0320 12:43:24.402983 1 operator.go:288] started
I0320 12:43:24.403007 1 base_controller.go:76] Waiting for caches to sync for LoggingSyncer
I0320 12:43:24.403141 1 sca.go:136] Pulling SCA certificates from https://api.openshift.com/api/accounts_mgmt/v1/entitlement_certificates. Next check is in 8h0m0s
I0320 12:43:24.403158 1 cluster_transfer.go:83] checking the availability of cluster transfer. Next check is in 12h0m0s
I0320 12:43:24.403645 1 tasks_processing.go:74] worker 31 stopped.
I0320 12:43:24.405730 1 gather.go:177] gatherer "clusterconfig" function "ingress" took 43.106872ms to process 1 records
I0320 12:43:24.407147 1 recorder.go:75] Recording config/infrastructure with fingerprint=b7ab82a24e0caf75a3a53d9ec00607d5d3dd79cb41b43163fbae2b803ea9b456
I0320 12:43:24.407173 1 gather.go:177] gatherer "clusterconfig" function "infrastructures" took 44.078993ms to process 1 records
I0320 12:43:24.407450 1 recorder.go:75] Recording config/validatingwebhookconfigurations/multus.openshift.io with fingerprint=82478048e35465a22d917e3cac2cac2064767661dbf147c40f50e3021ccdf6f3
I0320 12:43:24.407595 1 recorder.go:75] Recording config/validatingwebhookconfigurations/network-node-identity.openshift.io with fingerprint=66bb2756fa5b8bf1486af75b21c10cc8750d2ad5583cb4004d6e3f3fc04bc526
I0320 12:43:24.407641 1 recorder.go:75] Recording config/validatingwebhookconfigurations/performance-addon-operator with fingerprint=2e285783629ac844b616a491ae63f6576b1bf7e79880a6d6cf16950c5aec35e1
I0320 12:43:24.407686 1 recorder.go:75] Recording config/validatingwebhookconfigurations/sre-clusterrolebindings-validation with fingerprint=1a188cacdb4b847de82e768f262a5805a6eeec1db145259da5abf266d8c18e5c
I0320 12:43:24.407729 1 recorder.go:75] Recording config/validatingwebhookconfigurations/sre-clusterroles-validation with fingerprint=3f5c8dfb9ef0c58ba1a8746e8a51b5985748b556a5635bfd107a182d3814d621
I0320 12:43:24.407778 1 recorder.go:75] Recording config/validatingwebhookconfigurations/sre-ingress-config-validation with fingerprint=cfb90a360ceb2e8422c7f49a70cf9b4460f761105a7c1f89d73071f8c1ba35d8
I0320 12:43:24.407821 1 recorder.go:75] Recording config/validatingwebhookconfigurations/sre-network-operator-validation with fingerprint=f0a0429c8ed8e9a7f01d0a918fac26791ee44e6a94f4a9e36230cad87993f8ff
I0320 12:43:24.407890 1 recorder.go:75] Recording config/validatingwebhookconfigurations/sre-regular-user-validation with fingerprint=c419daf14ee54f675f244be99038aebbd97abe3b25ae1578f46c20de3440dd73
I0320 12:43:24.407934 1 recorder.go:75] Recording config/validatingwebhookconfigurations/sre-scc-validation with fingerprint=6c005fc4bccee2be5ad3d4d30fdc5706e6d065080c94de8e5d1a03547d491e1d
I0320 12:43:24.407995 1 recorder.go:75] Recording config/validatingwebhookconfigurations/sre-serviceaccount-validation with fingerprint=df310a53222b39a2796b26a9fca40a5e0e50fd50fbf1be8ceb547eff1baa252f
I0320 12:43:24.408043 1 recorder.go:75] Recording config/validatingwebhookconfigurations/sre-techpreviewnoupgrade-validation with fingerprint=ab1925a2cd8bbe2091ec045d2eed90893661e2766964511711313ad42687a498
I0320 12:43:24.408056 1 gather.go:177] gatherer "clusterconfig" function "validating_webhook_configurations" took 45.483674ms to process 11 records
I0320 12:43:24.408610 1 recorder.go:75] Recording config/schedulers/cluster with fingerprint=719bbe8c2497e7db5333c34f00157560d6f20dd69ee8c1bcf334a39f7676dbe1
I0320 12:43:24.408627 1 gather.go:177] gatherer "clusterconfig" function "schedulers" took 11.220316ms to process 1 records
I0320 12:43:24.408787 1 recorder.go:75] Recording config/featuregate with fingerprint=63d050101323868a9a65f3e35253c2224d84f4ab17fc6a89c230608d0f1a6689
I0320 12:43:24.408806 1 gather.go:177] gatherer "clusterconfig" function "feature_gates" took 45.931442ms to process 1 records
I0320 12:43:24.408819 1 tasks_processing.go:74] worker 23 stopped.
I0320 12:43:24.408827 1 tasks_processing.go:74] worker 12 stopped.
I0320 12:43:24.408832 1 tasks_processing.go:74] worker 20 stopped.
I0320 12:43:24.408838 1 tasks_processing.go:74] worker 19 stopped.
I0320 12:43:24.408935 1 tasks_processing.go:74] worker 3 stopped.
E0320 12:43:24.408949 1 gather.go:140] gatherer "clusterconfig" function "support_secret" failed with the error: secrets "support" not found
I0320 12:43:24.408961 1 gather.go:177] gatherer "clusterconfig" function "support_secret" took 10.783062ms to process 0 records
I0320 12:43:24.409231 1 tasks_processing.go:74] worker 4 stopped.
I0320 12:43:24.409495 1 recorder.go:75] Recording cluster-scoped-resources/rbac.authorization.k8s.io/clusterroles/admin with fingerprint=5cea41789094087f52f496860a7486486884b444add6068a0de54e4199920717
I0320 12:43:24.409659 1 recorder.go:75] Recording cluster-scoped-resources/rbac.authorization.k8s.io/clusterroles/edit with fingerprint=1d8118ab38bf0e7d6527311efb712002e0bd0224fcfc46a86a8554d9404a9f13
I0320 12:43:24.409674 1 gather.go:177] gatherer "clusterconfig" function "clusterroles" took 25.194205ms to process 2 records
I0320 12:43:24.414909 1 tasks_processing.go:74] worker 14 stopped.
I0320 12:43:24.414982 1 gather.go:177] gatherer "clusterconfig" function "certificate_signing_requests" took 33.526443ms to process 0 records
I0320 12:43:24.415786 1 prometheus_rules.go:88] Prometheus rules successfully created
I0320 12:43:24.415906 1 tasks_processing.go:74] worker 13 stopped.
I0320 12:43:24.416072 1 recorder.go:75] Recording config/image with fingerprint=a46c6f7f5c54de0405169ae4adb98f2e3734f8574b1aea89c0cee673d0ddb868
I0320 12:43:24.416130 1 gather.go:177] gatherer "clusterconfig" function "image" took 14.791999ms to process 1 records
I0320 12:43:24.416232 1 tasks_processing.go:74] worker 25 stopped.
I0320 12:43:24.416373 1 recorder.go:75] Recording config/mutatingwebhookconfigurations/aws-pod-identity with fingerprint=b66ecd7bc1a418fc756040547a28fd40692b7a118c5194ebdc7a4066b70be8f0
I0320 12:43:24.416455 1 recorder.go:75] Recording config/mutatingwebhookconfigurations/sre-podimagespec-mutation with fingerprint=4403d6041610b16e7cb0cee33be2334ef12fb880c10672ca3fa632a542f4e87b
I0320 12:43:24.416545 1 recorder.go:75] Recording config/mutatingwebhookconfigurations/sre-service-mutation with fingerprint=9c1e76e41a60bdf4a81f6330305021cfc40539f7bd41393c1ac98e49b2c3395b
I0320 12:43:24.416607 1 gather.go:177] gatherer "clusterconfig" function "mutating_webhook_configurations" took 12.304122ms to process 3 records
I0320 12:43:24.416948 1 controller.go:212] Source periodic-clusterconfig *controllerstatus.Simple is not ready
I0320 12:43:24.416961 1 controller.go:212] Source periodic-conditional *controllerstatus.Simple is not ready
I0320 12:43:24.416965 1 controller.go:212] Source periodic-workloads *controllerstatus.Simple is not ready
I0320 12:43:24.416971 1 controller.go:212] Source scaController *sca.Controller is not ready
I0320 12:43:24.416977 1 controller.go:212] Source clusterTransferController *clustertransfer.Controller is not ready
I0320 12:43:24.417006 1 controller.go:489] The operator is still being initialized
I0320 12:43:24.417030 1 controller.go:512] The operator is healthy
I0320 12:43:24.417589 1 tasks_processing.go:74] worker 21 stopped.
I0320 12:43:24.417603 1 gather.go:177] gatherer "clusterconfig" function "olm_operators" took 48.052365ms to process 0 records
I0320 12:43:24.417761 1 tasks_processing.go:74] worker 9 stopped.
I0320 12:43:24.417778 1 gather.go:177] gatherer "clusterconfig" function "number_of_pods_and_netnamespaces_with_sdn_annotations" took 61.563812ms to process 0 records
E0320 12:43:24.424261 1 cluster_transfer.go:95] failed to pull cluster transfer: unable to retrieve cluster transfer data from https://api.openshift.com/api/accounts_mgmt/v1/cluster_transfers/: Get "https://api.openshift.com/api/accounts_mgmt/v1/cluster_transfers/?search=cluster_uuid+is+%2753bb959e-8d08-404e-905f-814a194a789a%27+and+status+is+%27accepted%27": dial tcp: lookup api.openshift.com on 172.30.0.10:53: read udp 10.129.0.11:34607->172.30.0.10:53: read: connection refused
I0320 12:43:24.424327 1 controllerstatus.go:80] name=clusterTransferController healthy=true reason=Disconnected message=failed to pull cluster transfer: unable to retrieve cluster transfer data from https://api.openshift.com/api/accounts_mgmt/v1/cluster_transfers/: Get "https://api.openshift.com/api/accounts_mgmt/v1/cluster_transfers/?search=cluster_uuid+is+%2753bb959e-8d08-404e-905f-814a194a789a%27+and+status+is+%27accepted%27": dial tcp: lookup api.openshift.com on 172.30.0.10:53: read udp 10.129.0.11:34607->172.30.0.10:53: read: connection refused
I0320 12:43:24.424484 1 tasks_processing.go:74] worker 5 stopped.
E0320 12:43:24.424525 1 gather.go:140] gatherer "clusterconfig" function "config_maps" failed with the error: configmaps "cluster-monitoring-config" not found
E0320 12:43:24.424534 1 gather.go:140] gatherer "clusterconfig" function "config_maps" failed with the error: configmaps "gateway-mode-config" not found
E0320 12:43:24.424540 1 gather.go:140] gatherer "clusterconfig" function "config_maps" failed with the error: configmaps "insights-config" not found
I0320 12:43:24.424552 1 recorder.go:75] Recording config/configmaps/openshift-config/installer-images/images.json with fingerprint=26b6661162b099a0f5a279859b4f46c867929a79d9a4a41fde4be4e6fe138018
I0320 12:43:24.424596 1 recorder.go:75] Recording config/configmaps/openshift-config/kube-root-ca.crt/ca.crt with fingerprint=d476c7d3f5b104863f08f481b1264dcc68cc272ecefb0ecb709b18a6afab034d
I0320 12:43:24.424606 1 recorder.go:75] Recording config/configmaps/openshift-config/openshift-install/version with fingerprint=c93090eb0d2a4736885abeb79c91680cfd01fda46464f83456b085d4dc8239f0
I0320 12:43:24.424613 1 recorder.go:75] Recording config/configmaps/openshift-config/openshift-install/invoker with fingerprint=76b482f683cd3ef9da02debac5b26080a5aeb06ff768ee5c21117514dff29d8a
I0320 12:43:24.424620 1 recorder.go:75] Recording config/configmaps/openshift-config/openshift-service-ca.crt/service-ca.crt with fingerprint=e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
I0320 12:43:24.424664 1 recorder.go:75] Recording config/configmaps/openshift-config/rosa-brand-logo/rosa-brand-logo.svg with fingerprint=6ed8ca4dd7a8eee7249182bc006e9649ce84d76c551ddfaaa33e55d8c4cc1ed0
I0320 12:43:24.424675 1 recorder.go:75] Recording config/configmaps/kube-system/cluster-config-v1/install-config with fingerprint=ab3811c6b83fd7b8e920094cfa3080d1b4ee3c35ec4c8379437b21d27bd6608d
I0320 12:43:24.424684 1 gather.go:177] gatherer "clusterconfig" function "config_maps" took 67.495659ms to process 7 records
I0320 12:43:24.427138 1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0320 12:43:24.427807 1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I0320 12:43:24.427831 1 shared_informer.go:320] Caches are synced for RequestHeaderAuthRequestController
I0320 12:43:24.431721 1 base_controller.go:82] Caches are synced for ConfigController
I0320 12:43:24.431790 1 base_controller.go:119] Starting #1 worker of ConfigController controller ...
I0320 12:43:24.432184 1 tasks_processing.go:74] worker 22 stopped.
E0320 12:43:24.432205 1 gather.go:140] gatherer "clusterconfig" function "ingress_certificates" failed with the error: failed to get secret 'router-certs-default' in namespace 'openshift-ingress': secrets "router-certs-default" not found
E0320 12:43:24.432257 1 gather.go:140] gatherer "clusterconfig" function "ingress_certificates" failed with the error: failed to get secret '2p5etjv5l9s11l5f27t83q5nukucgejn-primary-cert-bundle-secret' in namespace 'openshift-ingress-operator': secrets "2p5etjv5l9s11l5f27t83q5nukucgejn-primary-cert-bundle-secret" not found
I0320 12:43:24.432328 1 recorder.go:75] Recording aggregated/ingress_controllers_certs with fingerprint=084238f49aa33835fdecbd7bd856a688dc1426a5afee650f30147989e624a489
I0320 12:43:24.432345 1 gather.go:177] gatherer "clusterconfig" function "ingress_certificates" took 73.435989ms to process 1 records
I0320 12:43:24.436203 1 configmapobserver.go:84] configmaps "insights-config" not found
I0320 12:43:24.465554 1 tasks_processing.go:74] worker 26 stopped.
I0320 12:43:24.465576 1 gather.go:177] gatherer "clusterconfig" function "node_logs" took 107.808065ms to process 0 records
I0320 12:43:24.465696 1 tasks_processing.go:74] worker 6 stopped.
I0320 12:43:24.466031 1 recorder.go:75] Recording config/version with fingerprint=b9906c6cfdee1bae22e0f043aba946c1a0f76ebfb106150cff392161aac5c807
I0320 12:43:24.466050 1 recorder.go:75] Recording config/id with fingerprint=69dc31c8b121eaf5c98db7c58aaf6f636fb21bada5a158560252a2dd190eae54
I0320 12:43:24.466059 1 gather.go:177] gatherer "clusterconfig" function "version" took 108.739299ms to process 2 records
I0320 12:43:24.468944 1 tasks_processing.go:74] worker 0 stopped.
I0320 12:43:24.468972 1 recorder.go:75] Recording aggregated/unused_machine_configs_count with fingerprint=4bfc9fa984e5dfcd45848faaf05269de7619bf42edf9f781751af5ee05c1a499
I0320 12:43:24.468981 1 gather.go:177] gatherer "clusterconfig" function "machine_configs" took 111.99446ms to process 1 records
I0320 12:43:24.474693 1 requests.go:205] Asking for SCA certificate with "{"arch": ["x86_64"]}" payload
I0320 12:43:24.480831 1 tasks_processing.go:74] worker 8 stopped.
W0320 12:43:24.482387 1 sca.go:161] Failed to pull SCA certs from https://api.openshift.com/api/accounts_mgmt/v1/entitlement_certificates: unable to retrieve SCA certs data from https://api.openshift.com/api/accounts_mgmt/v1/entitlement_certificates: Post "https://api.openshift.com/api/accounts_mgmt/v1/entitlement_certificates": dial tcp: lookup api.openshift.com on 172.30.0.10:53: read udp 10.129.0.11:53180->172.30.0.10:53: read: connection refused
I0320 12:43:24.482403 1 controllerstatus.go:80] name=scaController healthy=true reason=NonHTTPError message=Failed to pull SCA certs from https://api.openshift.com/api/accounts_mgmt/v1/entitlement_certificates: unable to retrieve SCA certs data from https://api.openshift.com/api/accounts_mgmt/v1/entitlement_certificates: Post "https://api.openshift.com/api/accounts_mgmt/v1/entitlement_certificates": dial tcp: lookup api.openshift.com on 172.30.0.10:53: read udp 10.129.0.11:53180->172.30.0.10:53: read: connection refused
I0320 12:43:24.482739 1 recorder.go:75] Recording config/pod/openshift-console-operator/console-operator-575cd97545-4r77l with fingerprint=0523b49b7499f41fc04b56c7529fa43f9dc93a75dee305d928223a983e11233f
I0320 12:43:24.483171 1 recorder.go:75] Recording config/pod/openshift-ovn-kubernetes/ovnkube-node-8jnct with fingerprint=b9a272e5e81252ca822d1826237660c6bb6a2b62940ec018cf6d6db5489372d4
I0320 12:43:24.483246 1 recorder.go:75] Recording config/running_containers with fingerprint=de485260e91c38d75a3ff8140f8936612b80d7bf260f824ced7420682e88652a
I0320 12:43:24.483260 1 gather.go:177] gatherer "clusterconfig" function "container_images" took 109.177581ms to process 3 records
I0320 12:43:24.504489 1 base_controller.go:82] Caches are synced for LoggingSyncer
I0320 12:43:24.504504 1 base_controller.go:119] Starting #1 worker of LoggingSyncer controller ...
I0320 12:43:24.506170 1 tasks_processing.go:74] worker 24 stopped.
I0320 12:43:24.506188 1 gather.go:177] gatherer "clusterconfig" function "aggregated_monitoring_cr_names" took 102.461319ms to process 0 records
W0320 12:43:25.387529 1 gather_dvo_metrics.go:210] Failed to read the DVO metrics. Trying again.
I0320 12:43:25.422874 1 tasks_processing.go:74] worker 29 stopped.
I0320 12:43:25.422891 1 gather.go:177] gatherer "clusterconfig" function "machine_config_pools" took 1.065826357s to process 0 records
I0320 12:43:25.675708 1 gather_cluster_operator_pods_and_events.go:121] Found 20 pods with 24 containers
I0320 12:43:25.675728 1 gather_cluster_operator_pods_and_events.go:235] Maximum buffer size: 1048576 bytes
I0320 12:43:25.676100 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for dns container dns-default-6j2zx pod in namespace openshift-dns (previous: false).
I0320 12:43:25.836248 1 gather_cluster_operators.go:184] Unable to get configs.samples.operator.openshift.io resource due to: configs.samples.operator.openshift.io "cluster" not found
I0320 12:43:25.901279 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for dns-default-6j2zx pod in namespace openshift-dns for failing operator dns (previous: false): "container \"dns\" in pod \"dns-default-6j2zx\" is waiting to start: ContainerCreating"
I0320 12:43:25.901305 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"dns\" in pod \"dns-default-6j2zx\" is waiting to start: ContainerCreating"
I0320 12:43:25.901316 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for kube-rbac-proxy container dns-default-6j2zx pod in namespace openshift-dns (previous: false).
I0320 12:43:26.082188 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for dns-default-6j2zx pod in namespace openshift-dns for failing operator kube-rbac-proxy (previous: false): "container \"kube-rbac-proxy\" in pod \"dns-default-6j2zx\" is waiting to start: ContainerCreating"
I0320 12:43:26.082209 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"kube-rbac-proxy\" in pod \"dns-default-6j2zx\" is waiting to start: ContainerCreating"
I0320 12:43:26.082220 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for dns container dns-default-gxg9g pod in namespace openshift-dns (previous: false).
I0320 12:43:26.303343 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for dns-default-gxg9g pod in namespace openshift-dns for failing operator dns (previous: false): "container \"dns\" in pod \"dns-default-gxg9g\" is waiting to start: ContainerCreating"
I0320 12:43:26.303361 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"dns\" in pod \"dns-default-gxg9g\" is waiting to start: ContainerCreating"
I0320 12:43:26.303372 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for kube-rbac-proxy container dns-default-gxg9g pod in namespace openshift-dns (previous: false).
W0320 12:43:26.386384 1 gather_dvo_metrics.go:210] Failed to read the DVO metrics. Trying again.
I0320 12:43:26.421931 1 tasks_processing.go:74] worker 27 stopped.
I0320 12:43:26.421978 1 recorder.go:75] Recording config/clusteroperator/console with fingerprint=83315c1175c9404378aaf7f2da2767733da2fe62d1902a6c610c7c3233e2a214
I0320 12:43:26.422003 1 recorder.go:75] Recording config/clusteroperator/csi-snapshot-controller with fingerprint=3da12ac1a4adb871a70f4a62332fbf4f64880720e8bdcc044fccf2512064e05c
I0320 12:43:26.422035 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/csisnapshotcontroller/cluster with fingerprint=5adc514f4b63e2f1ecc68bf6f9c0af70c5eea04522a49524e102721b1c41f80e
I0320 12:43:26.422060 1 recorder.go:75] Recording config/clusteroperator/dns with fingerprint=a9a784c0ee4cb7b82652b5f4287d388afa28362106ef03e234f2b6473c56de5b
I0320 12:43:26.422092 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/dns/default with fingerprint=9e7b4ce029030d3d8c3b49af92c556acdcc415000b40d3f969dbdc42c432b47f
I0320 12:43:26.422116 1 recorder.go:75] Recording config/clusteroperator/image-registry with fingerprint=9c63a693696d31a9f4ce69f33ab54ae38a29cc6237483fd9fcaeca2f046e595c
I0320 12:43:26.422147 1 recorder.go:75] Recording config/clusteroperator/ingress with fingerprint=29a52ae5892a73c174d4519676f604a7842d583d69f21c2596b146041697ca6a
I0320 12:43:26.422172 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/ingresscontroller/openshift-ingress-operator/default with fingerprint=8b7f55a5622da7e64c43de0a47e272c8e43e78c3199c6e4cce8dee375c8dde34
I0320 12:43:26.422186 1 recorder.go:75] Recording config/clusteroperator/insights with fingerprint=73083584ae36f6f90d7dd302a030a71d72c62ef974d700ef91f7fedda41669dd
I0320 12:43:26.422204 1 recorder.go:75] Recording config/clusteroperator/kube-apiserver with fingerprint=d3b4698a311657e784533325d5ed594c34e388d7a5a50cb46e02e06f2cd7b980
I0320 12:43:26.422214 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/kubeapiserver/cluster with fingerprint=51503bf0b784fcf65ea46bcaf1f72ac1a5c4d5dc211934f18f27871efed05762
I0320 12:43:26.422229 1 recorder.go:75] Recording config/clusteroperator/kube-controller-manager with fingerprint=571e3dd7fde06cdb55d744ff7cc4a7fd893e8b30422b7d109f95e5f96fa92929
I0320 12:43:26.422238 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/kubecontrollermanager/cluster with fingerprint=ce90c0d4f367d7da085074268031798382ae7c54fdcb0a21f15a4818fe308c11
I0320 12:43:26.422257 1 recorder.go:75] Recording config/clusteroperator/kube-scheduler with fingerprint=81fe0f4ea9775530f165a999c57d362efcd9d0f1acdcda5fcc1d467a63c5d5e5
I0320 12:43:26.422265 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/kubescheduler/cluster with fingerprint=f2940fb9fd20c19951dfc295eb363b7fba0c505f5ae61f01967a063099e6b60a
I0320 12:43:26.422287 1 recorder.go:75] Recording config/clusteroperator/kube-storage-version-migrator with fingerprint=06fa4b8c62fa9ebe6e392bb4a59139e07082b8e13478d4979ff37474d45b97c9
I0320 12:43:26.422296 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/kubestorageversionmigrator/cluster with fingerprint=9351181aa7e6ada41ef581ab31e13516c6b934cc95710154bafb2eb222cb58db
I0320 12:43:26.422310 1 recorder.go:75] Recording config/clusteroperator/monitoring with fingerprint=b85afcef475e3e8bf9dc36ef13fbff4a73cea08b152154ae6d1e2f0b401ea79c
I0320 12:43:26.422429 1 recorder.go:75] Recording config/clusteroperator/network with fingerprint=4b21e176c5173f00587e80eff4a221ca1309d0e5e5b8c506fbda9e4bd3add9d5
I0320 12:43:26.422439 1 recorder.go:75] Recording config/clusteroperator/network.operator.openshift.io/operatorpki/openshift-ovn-kubernetes/ovn with fingerprint=626a89d20e0deaed5b6dfb533acfe65f4bb1618bd200a703b62e60c5d16d94ab
I0320 12:43:26.422445 1 recorder.go:75] Recording config/clusteroperator/network.operator.openshift.io/operatorpki/openshift-ovn-kubernetes/signer with fingerprint=90410b16914712b85b3c4578716ad8c0ae072e688f4cd1e022bf76f20da3506d
I0320 12:43:26.422465 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/network/cluster with fingerprint=a93d15eaecb455a0e40ecb2826eeecc1533899204ddd3c3921d15ab70af7ae75
I0320 12:43:26.422486 1 recorder.go:75] Recording config/clusteroperator/node-tuning with fingerprint=e94fdd5b30844b74df05ee97b633e6f6466bcc169a476b7cfdef59e8b1f44bdd
I0320 12:43:26.422522 1 recorder.go:75] Recording config/clusteroperator/openshift-apiserver with fingerprint=ca44630f5ac6aee65036dd5895b163b246034645ef4deb8aaffbdb9a1cc8d466
I0320 12:43:26.422533 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/openshiftapiserver/cluster with fingerprint=e712e6cf27339b441e4ed1f4cde91dbde7e952698ba93407e4457db63a4a4c76
I0320 12:43:26.422548 1 recorder.go:75] Recording config/clusteroperator/openshift-controller-manager with fingerprint=b2ee35433372f7cf74b848b3fbdbbb6f15cd67462bd9a5ca8ac037ed8550ee06
I0320 12:43:26.422559 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/openshiftcontrollermanager/cluster with fingerprint=d71a0f4672f9b45d9fc8293bf1687afc650fd28d32e2e30de27523fe7b4eadf7
I0320 12:43:26.422572 1 recorder.go:75] Recording config/clusteroperator/openshift-samples with fingerprint=7d21cbd5b8a8b1a2381437a103aadf636bba5c7bd3a97f2cb7ab3107b35d3a19
I0320 12:43:26.422586 1 recorder.go:75] Recording config/clusteroperator/operator-lifecycle-manager with fingerprint=a943820a7722e6d510c3ffa736e59bfb66d55b826603abef46f0f901c94ef0f6
I0320 12:43:26.422599 1 recorder.go:75] Recording config/clusteroperator/operator-lifecycle-manager-catalog with fingerprint=565cfc3e4186bfc746bf3f1fd130b7390a63ba8974e65dd962ac83635ac13d4c
I0320 12:43:26.422614 1 recorder.go:75] Recording config/clusteroperator/operator-lifecycle-manager-packageserver with fingerprint=f033d2581c39b51c029f18e2e8a855ab7511adc41bbce52778f338bb9b8d99e1
I0320 12:43:26.422634 1 recorder.go:75] Recording config/clusteroperator/service-ca with fingerprint=89ecd97626a4f1d3074206c3b06cf8fe533645b191f30f3f947994e0be0caa5d
I0320 12:43:26.422643 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/serviceca/cluster with fingerprint=812f7edc2cdb30e61e7f2b29454357a40b1a507a4b0c2b7729193b67f0e3b4aa
I0320 12:43:26.422667 1 recorder.go:75] Recording config/clusteroperator/storage with fingerprint=ba1cf0cc271f7cd0c8cdf7d5e6c845936e3d279b5b644dbb37af33f303fdad8d
I0320 12:43:26.422683 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/clustercsidriver/ebs.csi.aws.com with fingerprint=510064d6f6bcced87ab5bd2ddaff3d0edd7f93f4a4f7af2641f29fc53ffab21e
I0320 12:43:26.422691 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/storage/cluster with fingerprint=8e480f8c1ce1b39baac42d8ec780c57c2592929ae0c801b61ffad49ba13f33ad
I0320 12:43:26.422699 1 gather.go:177] gatherer "clusterconfig" function "operators" took 2.065575486s to process 36 records
I0320 12:43:26.483400 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for dns-default-gxg9g pod in namespace openshift-dns for failing operator kube-rbac-proxy (previous: false): "container \"kube-rbac-proxy\" in pod \"dns-default-gxg9g\" is waiting to start: ContainerCreating"
I0320 12:43:26.483418 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"kube-rbac-proxy\" in pod \"dns-default-gxg9g\" is waiting to start: ContainerCreating"
I0320 12:43:26.483434 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for dns container dns-default-wp5nn pod in namespace openshift-dns (previous: false).
I0320 12:43:26.705329 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for dns-default-wp5nn pod in namespace openshift-dns for failing operator dns (previous: false): "container \"dns\" in pod \"dns-default-wp5nn\" is waiting to start: ContainerCreating"
I0320 12:43:26.705350 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"dns\" in pod \"dns-default-wp5nn\" is waiting to start: ContainerCreating"
I0320 12:43:26.705361 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for kube-rbac-proxy container dns-default-wp5nn pod in namespace openshift-dns (previous: false).
I0320 12:43:26.881291 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for dns-default-wp5nn pod in namespace openshift-dns for failing operator kube-rbac-proxy (previous: false): "container \"kube-rbac-proxy\" in pod \"dns-default-wp5nn\" is waiting to start: ContainerCreating"
I0320 12:43:26.881311 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"kube-rbac-proxy\" in pod \"dns-default-wp5nn\" is waiting to start: ContainerCreating"
I0320 12:43:26.881321 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for dns-node-resolver container node-resolver-9ttp5 pod in namespace openshift-dns (previous: false).
I0320 12:43:27.091874 1 gather_cluster_operator_pods_and_events.go:280] Error: "log buffer is empty"
I0320 12:43:27.091896 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for dns-node-resolver container node-resolver-wr9hz pod in namespace openshift-dns (previous: false).
I0320 12:43:27.292208 1 gather_cluster_operator_pods_and_events.go:280] Error: "log buffer is empty"
I0320 12:43:27.292232 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for dns-node-resolver container node-resolver-xwhh4 pod in namespace openshift-dns (previous: false).
W0320 12:43:27.387680 1 gather_dvo_metrics.go:210] Failed to read the DVO metrics. Trying again.
I0320 12:43:27.487742 1 gather_cluster_operator_pods_and_events.go:280] Error: "log buffer is empty"
I0320 12:43:27.487815 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for registry container image-registry-55f6d59d8b-spcwc pod in namespace openshift-image-registry (previous: false).
I0320 12:43:27.681147 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for image-registry-55f6d59d8b-spcwc pod in namespace openshift-image-registry for failing operator registry (previous: false): "container \"registry\" in pod \"image-registry-55f6d59d8b-spcwc\" is waiting to start: ContainerCreating"
I0320 12:43:27.681166 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"registry\" in pod \"image-registry-55f6d59d8b-spcwc\" is waiting to start: ContainerCreating"
I0320 12:43:27.681201 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for registry container image-registry-55f6d59d8b-vk9kj pod in namespace openshift-image-registry (previous: false).
I0320 12:43:27.880156 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for image-registry-55f6d59d8b-vk9kj pod in namespace openshift-image-registry for failing operator registry (previous: false): "container \"registry\" in pod \"image-registry-55f6d59d8b-vk9kj\" is waiting to start: ContainerCreating"
I0320 12:43:27.880173 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"registry\" in pod \"image-registry-55f6d59d8b-vk9kj\" is waiting to start: ContainerCreating"
I0320 12:43:27.880216 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for registry container image-registry-5d7bbcc7fc-9lpl2 pod in namespace openshift-image-registry (previous: false).
I0320 12:43:28.082079 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for image-registry-5d7bbcc7fc-9lpl2 pod in namespace openshift-image-registry for failing operator registry (previous: false): "container \"registry\" in pod \"image-registry-5d7bbcc7fc-9lpl2\" is waiting to start: ContainerCreating"
I0320 12:43:28.082098 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"registry\" in pod \"image-registry-5d7bbcc7fc-9lpl2\" is waiting to start: ContainerCreating"
I0320 12:43:28.082109 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for node-ca container node-ca-2jx4r pod in namespace openshift-image-registry (previous: false).
I0320 12:43:28.284622 1 gather_cluster_operator_pods_and_events.go:280] Error: "log buffer is empty"
I0320 12:43:28.284659 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for node-ca container node-ca-gsjgv pod in namespace openshift-image-registry (previous: false).
W0320 12:43:28.387020 1 gather_dvo_metrics.go:210] Failed to read the DVO metrics. Trying again.
I0320 12:43:28.481317 1 gather_cluster_operator_pods_and_events.go:280] Error: "log buffer is empty"
I0320 12:43:28.481336 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for node-ca container node-ca-s2kmt pod in namespace openshift-image-registry (previous: false).
I0320 12:43:28.680645 1 gather_cluster_operator_pods_and_events.go:280] Error: "log buffer is empty"
I0320 12:43:28.680701 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for router container router-default-5754988699-jcb8f pod in namespace openshift-ingress (previous: false).
I0320 12:43:28.881499 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for router-default-5754988699-jcb8f pod in namespace openshift-ingress for failing operator router (previous: false): "container \"router\" in pod \"router-default-5754988699-jcb8f\" is waiting to start: ContainerCreating"
I0320 12:43:28.881570 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"router\" in pod \"router-default-5754988699-jcb8f\" is waiting to start: ContainerCreating"
I0320 12:43:28.881587 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for router container router-default-5754988699-lsx4w pod in namespace openshift-ingress (previous: false).
I0320 12:43:29.081487 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for router-default-5754988699-lsx4w pod in namespace openshift-ingress for failing operator router (previous: false): "container \"router\" in pod \"router-default-5754988699-lsx4w\" is waiting to start: ContainerCreating"
I0320 12:43:29.081527 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"router\" in pod \"router-default-5754988699-lsx4w\" is waiting to start: ContainerCreating"
I0320 12:43:29.081561 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for router container router-default-6f94cd84c7-h5d7l pod in namespace openshift-ingress (previous: false).
I0320 12:43:29.281594 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for router-default-6f94cd84c7-h5d7l pod in namespace openshift-ingress for failing operator router (previous: false): "container \"router\" in pod \"router-default-6f94cd84c7-h5d7l\" is waiting to start: ContainerCreating"
I0320 12:43:29.281611 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"router\" in pod \"router-default-6f94cd84c7-h5d7l\" is waiting to start: ContainerCreating"
I0320 12:43:29.281623 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for serve-healthcheck-canary container ingress-canary-6tlh8 pod in namespace openshift-ingress-canary (previous: false).
W0320 12:43:29.382198 1 gather_dvo_metrics.go:117] Unable to read metrics from endpoint "http://deployment-validation-operator-metrics.openshift-deployment-validation-operator.svc:8383": DVO metrics service was not available within the 5s timeout: context deadline exceeded
I0320 12:43:29.382235 1 tasks_processing.go:74] worker 15 stopped.
E0320 12:43:29.382246 1 gather.go:140] gatherer "clusterconfig" function "dvo_metrics" failed with the error: DVO metrics service was not available within the 5s timeout: context deadline exceeded
I0320 12:43:29.382256 1 recorder.go:75] Recording config/dvo_metrics with fingerprint=e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
W0320 12:43:29.382270 1 gather.go:155] issue recording gatherer "clusterconfig" function "dvo_metrics" result "config/dvo_metrics" because of the warning: warning: the record with the same fingerprint "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855" was already recorded at path "config/configmaps/openshift-config/openshift-service-ca.crt/service-ca.crt", recording another one with a different path "config/dvo_metrics"
I0320 12:43:29.382279 1 gather.go:177] gatherer "clusterconfig" function "dvo_metrics" took 5.024976872s to process 1 records
I0320 12:43:29.481897 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for ingress-canary-6tlh8 pod in namespace openshift-ingress-canary for failing operator serve-healthcheck-canary (previous: false): "container \"serve-healthcheck-canary\" in pod \"ingress-canary-6tlh8\" is waiting to start: ContainerCreating"
I0320 12:43:29.481913 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"serve-healthcheck-canary\" in pod \"ingress-canary-6tlh8\" is waiting to start: ContainerCreating"
I0320 12:43:29.481926 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for serve-healthcheck-canary container ingress-canary-84z4p pod in namespace openshift-ingress-canary (previous: false).
I0320 12:43:29.682080 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for ingress-canary-84z4p pod in namespace openshift-ingress-canary for failing operator serve-healthcheck-canary (previous: false): "container \"serve-healthcheck-canary\" in pod \"ingress-canary-84z4p\" is waiting to start: ContainerCreating"
I0320 12:43:29.682099 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"serve-healthcheck-canary\" in pod \"ingress-canary-84z4p\" is waiting to start: ContainerCreating"
I0320 12:43:29.682115 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for serve-healthcheck-canary container ingress-canary-swvfk pod in namespace openshift-ingress-canary (previous: false).
I0320 12:43:29.881019 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for ingress-canary-swvfk pod in namespace openshift-ingress-canary for failing operator serve-healthcheck-canary (previous: false): "container \"serve-healthcheck-canary\" in pod \"ingress-canary-swvfk\" is waiting to start: ContainerCreating"
I0320 12:43:29.881037 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"serve-healthcheck-canary\" in pod \"ingress-canary-swvfk\" is waiting to start: ContainerCreating"
I0320 12:43:29.881048 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for migrator container migrator-7d5f866c57-fdmtp pod in namespace openshift-kube-storage-version-migrator (previous: false).
I0320 12:43:30.081728 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for graceful-termination container migrator-7d5f866c57-fdmtp pod in namespace openshift-kube-storage-version-migrator (previous: false).
I0320 12:43:30.281264 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for kube-storage-version-migrator-operator container kube-storage-version-migrator-operator-74848b4cb9-2x2t8 pod in namespace openshift-kube-storage-version-migrator-operator (previous: false).
I0320 12:43:30.482711 1 tasks_processing.go:74] worker 1 stopped.
I0320 12:43:30.482836 1 recorder.go:75] Recording events/openshift-dns with fingerprint=b8e12b68635c5474cde5d6197ec35b65c2163fe1f0b7da3e9ad0cb1081259009
I0320 12:43:30.482954 1 recorder.go:75] Recording events/openshift-image-registry with fingerprint=e3b0d165506f3448fc29b5427f984cc5cef112c90f2efeac6451b10ecf179c36
I0320 12:43:30.482993 1 recorder.go:75] Recording events/openshift-ingress-operator with fingerprint=5c4c2716a13477ed94b88810f1520986f067f151cfc6b4d54c65e9244b28edf7
I0320 12:43:30.483041 1 recorder.go:75] Recording events/openshift-ingress with fingerprint=d7f371c1494f8ceace2af483193b39530ff43d65c864347a757bb758cbd414e9
I0320 12:43:30.483063 1 recorder.go:75] Recording events/openshift-ingress-canary with fingerprint=a8f73d50a48c41dd1531116b851b58d24659bd2a0fad965e196621e326559c70
I0320 12:43:30.483080 1 recorder.go:75] Recording events/openshift-kube-storage-version-migrator with fingerprint=f5950bc64094e4f2d9cebd124ef6ba868aaaa85244a93313c9070eb098d66aef
I0320 12:43:30.483136 1 recorder.go:75] Recording events/openshift-kube-storage-version-migrator-operator with fingerprint=2ab2de9d51bb29d399d640bb6cd24a47f3edbc9cc3593c62663c614a607eaee6
I0320 12:43:30.483342 1 recorder.go:75] Recording config/pod/openshift-image-registry/image-registry-55f6d59d8b-spcwc with fingerprint=e95043944a45282349cbd37affa5d4fb8c087ffd008989875442974cfbc17512
I0320 12:43:30.483485 1 recorder.go:75] Recording config/pod/openshift-image-registry/image-registry-55f6d59d8b-vk9kj with fingerprint=a9ed18907007c43957b5fb2b0fa7c7526565fbccb7f7535f7f63dc389a498e43
I0320 12:43:30.483606 1 recorder.go:75] Recording config/pod/openshift-image-registry/image-registry-5d7bbcc7fc-9lpl2 with fingerprint=14b8642d080db9537bbbfb2c0f5b637d291cc4225bf7308f5f003012fc30b821
I0320 12:43:30.483625 1 recorder.go:75] Recording config/pod/openshift-kube-storage-version-migrator/logs/migrator-7d5f866c57-fdmtp/migrator_current.log with fingerprint=a19c090ba8917efe9b28523bd2e4b5d094ef35baf82252608040dd0443cbb2ad
I0320 12:43:30.483631 1 recorder.go:75] Recording config/pod/openshift-kube-storage-version-migrator/logs/migrator-7d5f866c57-fdmtp/graceful-termination_current.log with fingerprint=fa56f0efbffa0f19deee0fd81d2e0422fe55b140179d2150a3266664d2134f19
I0320 12:43:30.483713 1 recorder.go:75] Recording config/pod/openshift-kube-storage-version-migrator-operator/logs/kube-storage-version-migrator-operator-74848b4cb9-2x2t8/kube-storage-version-migrator-operator_current.log with fingerprint=c88d0700b06f9a4cf22ac8530c11e042760a7ec0eaa21da5639061369a385258
I0320 12:43:30.483723 1 gather.go:177] gatherer "clusterconfig" function "operators_pods_and_events" took 6.101555482s to process 13 records
I0320 12:43:36.110837 1 configmapobserver.go:84] configmaps "insights-config" not found
I0320 12:43:36.313014 1 configmapobserver.go:84] configmaps "insights-config" not found
I0320 12:43:36.463639 1 configmapobserver.go:84] configmaps "insights-config" not found
I0320 12:43:37.209697 1 tasks_processing.go:74] worker 28 stopped.
I0320 12:43:37.209732 1 recorder.go:75] Recording config/installplans with fingerprint=7b887df561a3a9e6ef0dc672845aa5d56e348505006b7496d3a2f83892b0c95b
I0320 12:43:37.209744 1 gather.go:177] gatherer "clusterconfig" function "install_plans" took 12.846050417s to process 1 records
I0320 12:43:37.963445 1 tasks_processing.go:74] worker 16 stopped.
I0320 12:43:37.963749 1 recorder.go:75] Recording config/serviceaccounts with fingerprint=47b39ab864f25dd48b61886a58a6dff3dd3b9b1a6e773424ea57a4ca9b5f498a
I0320 12:43:37.963766 1 gather.go:177] gatherer "clusterconfig" function "service_accounts" took 13.60630142s to process 1 records
E0320 12:43:37.963822 1 periodic.go:247] "Unhandled Error" err="clusterconfig failed after 13.608s with: function \"machine_healthchecks\" failed with an error, function \"pod_network_connectivity_checks\" failed with an error, function \"machines\" failed with an error, function \"support_secret\" failed with an error, function \"config_maps\" failed with an error, function \"ingress_certificates\" failed with an error, function \"dvo_metrics\" failed with an error"
I0320 12:43:37.964933 1 controllerstatus.go:89] name=periodic-clusterconfig healthy=false reason=PeriodicGatherFailed message=Source clusterconfig could not be retrieved: function "machine_healthchecks" failed with an error, function "pod_network_connectivity_checks" failed with an error, function "machines" failed with an error, function "support_secret" failed with an error, function "config_maps" failed with an error, function "ingress_certificates" failed with an error, function "dvo_metrics" failed with an error
I0320 12:43:37.964948 1 periodic.go:209] Running workloads gatherer
I0320 12:43:37.964965 1 tasks_processing.go:45] number of workers: 2
I0320 12:43:37.964971 1 tasks_processing.go:69] worker 1 listening for tasks.
I0320 12:43:37.964977 1 tasks_processing.go:71] worker 1 working on workload_info task.
I0320 12:43:37.964983 1 tasks_processing.go:69] worker 0 listening for tasks.
I0320 12:43:37.964996 1 tasks_processing.go:71] worker 0 working on helmchart_info task.
I0320 12:43:37.989391 1 gather_workloads_info.go:278] Loaded pods in 0s, will wait 22s for image data
I0320 12:43:37.993977 1 tasks_processing.go:74] worker 0 stopped.
I0320 12:43:37.993991 1 gather.go:177] gatherer "workloads" function "helmchart_info" took 28.965447ms to process 0 records
I0320 12:43:37.996377 1 gather_workloads_info.go:387] No image sha256:04c87c054a3f366a7dfbe0a93ebb0c80a098ee16842c5794b67c1202eec61996 (8ms)
I0320 12:43:38.003922 1 gather_workloads_info.go:387] No image sha256:765f0d23b637f685f98a31bd47c131b03cf72a40761a3f9a9d6320faa3c33733 (8ms)
I0320 12:43:38.010982 1 gather_workloads_info.go:387] No image sha256:2904a78e2eb73fd6a9bb94c105c2a056831fb4113fbb7b0607c50adc9d879c9b (7ms)
I0320 12:43:38.018214 1 gather_workloads_info.go:387] No image sha256:c940ea87e7d133d75ba0002ef00c0806825eed3db8094cdb260d1bac18127733 (7ms)
I0320 12:43:38.025702 1 gather_workloads_info.go:387] No image sha256:03cf4cd7ef1518610c6c7b3ad27d1622d82e98e3dc6e3f8e5d0fceb5c8d3786e (7ms)
I0320 12:43:38.033350 1 gather_workloads_info.go:387] No image sha256:2e564f336c77116053f34d4201d364d8da04e789cfffa0ea422574c95f2d6404 (8ms)
I0320 12:43:38.040731 1 gather_workloads_info.go:387] No image sha256:c15ca0c0ad60fe8757c2d5d1723fcdd7a1ed6c0251a90d22a7e6cae6811d01aa (7ms)
I0320 12:43:38.048212 1 gather_workloads_info.go:387] No image sha256:1a2532940843248c57d52141185dd71fbc393ab28b65d48f682038632c1dbbad (7ms)
I0320 12:43:38.055592 1 gather_workloads_info.go:387] No image sha256:5f0b67cfbbc381243fb91ccc17345b56d05f4d717c667e8c644e5bf05633ba71 (7ms)
I0320 12:43:38.062577 1 gather_workloads_info.go:387] No image sha256:e84cb128d930bd1ab867cc89b7b7bf2b2c0e41105ab93b5381069945b3ee9c57 (7ms)
I0320 12:43:38.096461 1 gather_workloads_info.go:387] No image sha256:47154813651033d59751fb655a384dbffb64dd26f10bd7f3be0c3128d0486356 (34ms)
I0320 12:43:38.197816 1 gather_workloads_info.go:387] No image sha256:653c666f842c13e0baae2e89a9b1efe0e2ef56f621ffb5b32005115d2a26ab8c (101ms)
I0320 12:43:38.297269 1 gather_workloads_info.go:387] No image sha256:b3909bf664c77097f75b3768830863d642eed3815dab2bfb4415c771ca2d5007 (99ms)
I0320 12:43:38.397802 1 gather_workloads_info.go:387] No image sha256:2e57e192c3c1240fd935dcd55c8fde5e70e78bf81d6176c96edf21fafe59f8ba (101ms)
I0320 12:43:38.497679 1 gather_workloads_info.go:387] No image sha256:36b9e89c3cfcf1ab9ae500486e38afb6862cba48cb0b4d84a09508ab8f3d299f (100ms)
I0320 12:43:38.597798 1 gather_workloads_info.go:387] No image sha256:a498046d64605bcccee2440aa4f04a4602baaae263cf01d977ec5208e876b1fd (100ms)
I0320 12:43:38.698043 1 gather_workloads_info.go:387] No image sha256:943018739e3db1763c3184b460dbc409e058abbac76d57b9927faad317be85e4 (100ms)
I0320 12:43:38.797548 1 gather_workloads_info.go:387] No image sha256:521712486e2c6e3c020dad6a1cb340db8e55665b69f7c208fab9cd9e965fd588 (99ms)
I0320 12:43:38.897786 1 gather_workloads_info.go:387] No image sha256:56a85660a445eced5c79a595a0eccf590087c5672d50f49d4c25ad52f9a44f04 (100ms)
I0320 12:43:38.997037 1 gather_workloads_info.go:387] No image sha256:4556896f77307821531ef91b7b7faccb82b824ea695693b2989f597f0deca038 (99ms)
I0320 12:43:39.097427 1 gather_workloads_info.go:387] No image sha256:5a95c19d82767e0235b4edb4a0536482c816904897aae1dc3eb255cb52b87a9f (100ms)
I0320 12:43:39.197572 1 gather_workloads_info.go:387] No image sha256:ca1344cb64140188b7cae7bbc51fb751566c0b0c97d5e39b5850e628032c4a5e (100ms)
I0320 12:43:39.296960 1 gather_workloads_info.go:387] No image sha256:2598489729a4b258e4ecda4a06f6875133f2a10ced5c5241f8a57a8a05418e36 (99ms)
I0320 12:43:39.397665 1 gather_workloads_info.go:387] No image sha256:7b31223098f08328f5ddea8e5b871dbbd5f5a61ec550e8956f66793c0c6031a9 (101ms)
I0320 12:43:39.497207 1 gather_workloads_info.go:387] No image sha256:695cf2f0cc07683c2a3ce1eaf3e56fe18abc6e2bac716f7d9843f5d173b9df52 (100ms)
I0320 12:43:39.597691 1 gather_workloads_info.go:387] No image sha256:289816958633a763a72dbc44e1dad40466223164e7e253039514f0d974ea5d21 (100ms)
I0320 12:43:39.697379 1 gather_workloads_info.go:387] No image sha256:7adc1eab05d6724c76ba751f6df816b08d6e70b78dee9eb94fa6fd9690542c98 (100ms)
I0320 12:43:39.798601 1 gather_workloads_info.go:387] No image sha256:91828234f107c068c8a4966d08370ae7b73e637651dbc6d92c18c4553402c22c (101ms)
I0320 12:43:39.897854 1 gather_workloads_info.go:387] No image sha256:a56211d075aa43cbb491f669a5b2e46ee023dc95b7d51dbac28f463948c5ad61 (99ms)
I0320 12:43:39.997315 1 gather_workloads_info.go:387] No image sha256:0a99240166165eb5718e7516a43282fe32df9c7c5e809b31b58abe44e42ff94d (99ms)
I0320 12:43:40.098362 1 gather_workloads_info.go:387] No image sha256:a258c226562adb14e3a163a1940938526ee6a0928982a7667d85d9a7334ce639 (101ms)
I0320 12:43:40.198248 1 gather_workloads_info.go:387] No image sha256:a0105d1eb62cf6ac9e5e2ef28d3e89bf6dc514bc594fc7090fe5a5ee18a09c87 (100ms)
I0320 12:43:40.198284 1 tasks_processing.go:74] worker 1 stopped.
E0320 12:43:40.198294 1 gather.go:140] gatherer "workloads" function "workload_info" failed with the error: no running pods found for the insights-runtime-extractor statefulset
I0320 12:43:40.198621 1 recorder.go:75] Recording config/workload_info with fingerprint=b29795a34cb63832e9c0413464f95fb56471de3a716798e040608945d04f09c9
I0320 12:43:40.198636 1 gather.go:177] gatherer "workloads" function "workload_info" took 2.233300461s to process 1 records
E0320 12:43:40.198663 1 periodic.go:247] "Unhandled Error" err="workloads failed after 2.233s with: function \"workload_info\" failed with an error"
I0320 12:43:40.199759 1 controllerstatus.go:89] name=periodic-workloads healthy=false reason=PeriodicGatherFailed message=Source workloads could not be retrieved: function "workload_info" failed with an error
I0320 12:43:40.199772 1 periodic.go:209] Running conditional gatherer
I0320 12:43:40.205885 1 requests.go:294] Making HTTP GET request at: https://console.redhat.com/api/gathering/v2/4.20.8/gathering_rules
I0320 12:43:40.212239 1 conditional_gatherer.go:107] Get "https://console.redhat.com/api/gathering/v2/4.20.8/gathering_rules": dial tcp: lookup console.redhat.com on 172.30.0.10:53: read udp 10.129.0.11:59324->172.30.0.10:53: read: connection refused
E0320 12:43:40.212489 1 conditional_gatherer.go:322] unable to update alerts cache: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory
I0320 12:43:40.212580 1 conditional_gatherer.go:384] updating version cache for conditional gatherer
I0320 12:43:40.217854 1 conditional_gatherer.go:392] cluster version is '4.20.8'
E0320 12:43:40.217869 1 conditional_gatherer.go:209] error checking conditions for a gathering rule: alerts cache is missing
E0320 12:43:40.217876 1 conditional_gatherer.go:209] error checking conditions for a gathering rule: alerts cache is missing
E0320 12:43:40.217881 1 conditional_gatherer.go:209] error checking conditions for a gathering rule: alerts cache is missing
E0320 12:43:40.217885 1 conditional_gatherer.go:209] error checking conditions for a gathering rule: alerts cache is missing
E0320 12:43:40.217890 1 conditional_gatherer.go:209] error checking conditions for a gathering rule: alerts cache is missing
E0320 12:43:40.217894 1 conditional_gatherer.go:209] error checking conditions for a gathering rule: alerts cache is missing
E0320 12:43:40.217898 1 conditional_gatherer.go:209] error checking conditions for a gathering rule: alerts cache is missing
E0320 12:43:40.217903 1 conditional_gatherer.go:209] error checking conditions for a gathering rule: alerts cache is missing
E0320 12:43:40.217907 1 conditional_gatherer.go:209] error checking conditions for a gathering rule: alerts cache is missing
I0320 12:43:40.217926 1 tasks_processing.go:45] number of workers: 3
I0320 12:43:40.217943 1 tasks_processing.go:69] worker 2 listening for tasks.
I0320 12:43:40.217951 1 tasks_processing.go:71] worker 2 working on conditional_gatherer_rules task.
I0320 12:43:40.217950 1 tasks_processing.go:69] worker 0 listening for tasks.
I0320 12:43:40.217963 1 tasks_processing.go:71] worker 0 working on remote_configuration task.
I0320 12:43:40.217964 1 tasks_processing.go:69] worker 1 listening for tasks.
I0320 12:43:40.217975 1 tasks_processing.go:74] worker 1 stopped.
I0320 12:43:40.217976 1 tasks_processing.go:71] worker 2 working on rapid_container_logs task.
I0320 12:43:40.218041 1 recorder.go:75] Recording insights-operator/conditional-gatherer-rules with fingerprint=7034af97e7e41c22e4b775abdd4b9066c8ebb19da33eb7f69f39bfd2eb5f6406
I0320 12:43:40.218058 1 gather.go:177] gatherer "conditional" function "conditional_gatherer_rules" took 1.164µs to process 1 records
I0320 12:43:40.218332 1 recorder.go:75] Recording insights-operator/remote-configuration with fingerprint=0394430c431eec4d48bb1811a90918e95161d2282c59af26f2473613cc0959db
I0320 12:43:40.218348 1 gather.go:177] gatherer "conditional" function "remote_configuration" took 1.012µs to process 1 records
I0320 12:43:40.218404 1 tasks_processing.go:74] worker 0 stopped.
I0320 12:43:40.218669 1 tasks_processing.go:74] worker 2 stopped.
I0320 12:43:40.218721 1 gather.go:177] gatherer "conditional" function "rapid_container_logs" took 682.799µs to process 0 records
I0320 12:43:40.218767 1 controllerstatus.go:89] name=periodic-conditional healthy=false reason=NotAvailable message=Get "https://console.redhat.com/api/gathering/v2/4.20.8/gathering_rules": dial tcp: lookup console.redhat.com on 172.30.0.10:53: read udp 10.129.0.11:59324->172.30.0.10:53: read: connection refused
I0320 12:43:40.218815 1 recorder.go:75] Recording insights-operator/remote-configuration.json with fingerprint=359de9c990c741675cec72fda96b5c3682221efdb4799f5eaa6e9805bcd3b5c1
I0320 12:43:40.242453 1 recorder.go:75] Recording insights-operator/gathers with fingerprint=b49419ee6a9b34d57aee018d1834196860a783e126aea26c847d4e72a013379c
I0320 12:43:40.242590 1 diskrecorder.go:70] Writing 109 records to /var/lib/insights-operator/insights-2026-03-20-124340.tar.gz
I0320 12:43:40.250747 1 diskrecorder.go:51] Wrote 109 records to disk in 8ms
I0320 12:43:40.250773 1 periodic.go:278] Gathering cluster info every 2h0m0s
I0320 12:43:40.250787 1 periodic.go:279] Configuration is dataReporting: interval: 2h0m0s, uploadEndpoint: https://console.redhat.com/api/ingress/v1/upload, storagePath: /var/lib/insights-operator, downloadEndpoint: https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/reports, conditionalGathererEndpoint: https://console.redhat.com/api/gathering/v2/%s/gathering_rules, obfuscation: [] sca: disabled: false, endpoint: https://api.openshift.com/api/accounts_mgmt/v1/entitlement_certificates, interval: 8h0m0s alerting: disabled: false clusterTransfer: endpoint: https://api.openshift.com/api/accounts_mgmt/v1/cluster_transfers/, interval: 12h0m0s proxy: httpProxy: , httpsProxy: , noProxy:
I0320 12:43:50.343688 1 configmapobserver.go:84] configmaps "insights-config" not found
I0320 12:44:33.686217 1 observer_polling.go:111] Observed file "/var/run/secrets/serving-cert/tls.crt" has been created (hash="cba213525ab2e5df58e2e79e76596fafeb6107ae5e88e32110600aafa4d12c80")
W0320 12:44:33.686263 1 builder.go:160] Restart triggered because of file /var/run/secrets/serving-cert/tls.crt was created
I0320 12:44:33.686326 1 simple_featuregate_reader.go:177] Shutting down feature-gate-detector
I0320 12:44:33.686365 1 genericapiserver.go:693] "[graceful-termination] pre-shutdown hooks completed" name="PreShutdownHooksStopped"
I0320 12:44:33.686331 1 observer_polling.go:111] Observed file "/var/run/secrets/serving-cert/tls.key" has been created (hash="ea9562f2363cf582eb04daacb695f81c34942431894ecd336892e98f155c7abe")
I0320 12:44:33.686399 1 base_controller.go:181] Shutting down ConfigController ...
I0320 12:44:33.686416 1 genericapiserver.go:548] "[graceful-termination] shutdown event" name="ShutdownInitiated"
I0320 12:44:33.686598 1 object_count_tracker.go:151] "StorageObjectCountTracker pruner is exiting"
I0320 12:44:33.686460 1 base_controller.go:181] Shutting down LoggingSyncer ...
I0320 12:44:33.686472 1 periodic.go:170] Shutting down