W0427 16:15:30.174471 1 cmd.go:257] Using insecure, self-signed certificates
I0427 16:15:30.585757 1 start.go:138] Unable to read service ca bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory
I0427 16:15:30.586061 1 observer_polling.go:159] Starting file observer
I0427 16:15:31.242937 1 operator.go:60] Starting insights-operator v0.0.0-master+$Format:%H$
I0427 16:15:31.243244 1 legacy_config.go:327] Current config: {"report":false,"storagePath":"/var/lib/insights-operator","interval":"2h","endpoint":"https://console.redhat.com/api/ingress/v1/upload","conditionalGathererEndpoint":"https://console.redhat.com/api/gathering/v2/%s/gathering_rules","pull_report":{"endpoint":"https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/reports","delay":"60s","timeout":"3000s","min_retry":"30s"},"impersonate":"system:serviceaccount:openshift-insights:gather","enableGlobalObfuscation":false,"ocm":{"scaEndpoint":"https://api.openshift.com/api/accounts_mgmt/v1/entitlement_certificates","scaInterval":"8h","scaDisabled":false,"clusterTransferEndpoint":"https://api.openshift.com/api/accounts_mgmt/v1/cluster_transfers/","clusterTransferInterval":"12h"},"disableInsightsAlerts":false,"processingStatusEndpoint":"https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/request/%s/status","reportEndpointTechPreview":"https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/request/%s/report"}
I0427 16:15:31.244166 1 secure_serving.go:57] Forcing use of http/1.1 only
W0427 16:15:31.244192 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.
I0427 16:15:31.244189 1 simple_featuregate_reader.go:171] Starting feature-gate-detector
W0427 16:15:31.244201 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.
W0427 16:15:31.244223 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.
W0427 16:15:31.244227 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.
W0427 16:15:31.244230 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.
W0427 16:15:31.244233 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.
I0427 16:15:31.250041 1 operator.go:125] FeatureGates initialized: knownFeatureGates=[AdditionalRoutingCapabilities AdminNetworkPolicy AlibabaPlatform AzureWorkloadIdentity BuildCSIVolumes CPMSMachineNamePrefix ConsolePluginContentSecurityPolicy ExternalOIDC ExternalOIDCWithUIDAndExtraClaimMappings GatewayAPI GatewayAPIController HighlyAvailableArbiter ImageVolume IngressControllerLBSubnetsAWS KMSv1 MachineConfigNodes ManagedBootImages ManagedBootImagesAWS MetricsCollectionProfiles NetworkDiagnosticsConfig NetworkLiveMigration NetworkSegmentation PinnedImages ProcMountType RouteAdvertisements RouteExternalCertificate ServiceAccountTokenNodeBinding SetEIPForNLBIngressController SigstoreImageVerification StoragePerformantSecurityPolicy UpgradeStatus UserNamespacesPodSecurityStandards UserNamespacesSupport VSphereMultiDisk VSphereMultiNetworks AWSClusterHostedDNS AWSClusterHostedDNSInstall AWSDedicatedHosts AWSServiceLBNetworkSecurityGroup AutomatedEtcdBackup AzureClusterHostedDNSInstall AzureDedicatedHosts AzureMultiDisk BootImageSkewEnforcement BootcNodeManagement ClusterAPIInstall ClusterAPIInstallIBMCloud ClusterMonitoringConfig ClusterVersionOperatorConfiguration DNSNameResolver DualReplica DyanmicServiceEndpointIBMCloud DynamicResourceAllocation EtcdBackendQuota EventedPLEG Example Example2 ExternalSnapshotMetadata GCPClusterHostedDNS GCPClusterHostedDNSInstall GCPCustomAPIEndpoints GCPCustomAPIEndpointsInstall ImageModeStatusReporting ImageStreamImportMode IngressControllerDynamicConfigurationManager InsightsConfig InsightsConfigAPI InsightsOnDemandDataGather IrreconcilableMachineConfig KMSEncryptionProvider MachineAPIMigration MachineAPIOperatorDisableMachineHealthCheckController ManagedBootImagesAzure ManagedBootImagesvSphere MaxUnavailableStatefulSet MinimumKubeletVersion MixedCPUsAllocation MultiArchInstallAzure MultiDiskSetup MutatingAdmissionPolicy NewOLM NewOLMCatalogdAPIV1Metas NewOLMOwnSingleNamespace NewOLMPreflightPermissionChecks NewOLMWebhookProviderOpenshiftServiceCA NoRegistryClusterOperations NodeSwap NutanixMultiSubnets OVNObservability OpenShiftPodSecurityAdmission PreconfiguredUDNAddresses SELinuxMount ShortCertRotation SignatureStores SigstoreImageVerificationPKI TranslateStreamCloseWebsocketRequests VSphereConfigurableMaxAllowedBlockVolumesPerNode VSphereHostVMGroupZonal VSphereMixedNodeEnv VolumeAttributesClass VolumeGroupSnapshot]
I0427 16:15:31.250068 1 event.go:377] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-insights", Name:"insights-operator", UID:"a76c0bcd-10cc-4ed2-8f0b-4224169b7eb0", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'FeatureGatesInitialized' FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdditionalRoutingCapabilities", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BuildCSIVolumes", "CPMSMachineNamePrefix", "ConsolePluginContentSecurityPolicy", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GatewayAPI", "GatewayAPIController", "HighlyAvailableArbiter", "ImageVolume", "IngressControllerLBSubnetsAWS", "KMSv1", "MachineConfigNodes", "ManagedBootImages", "ManagedBootImagesAWS", "MetricsCollectionProfiles", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "PinnedImages", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SetEIPForNLBIngressController", "SigstoreImageVerification", "StoragePerformantSecurityPolicy", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiDisk", "VSphereMultiNetworks"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AWSClusterHostedDNSInstall", "AWSDedicatedHosts", "AWSServiceLBNetworkSecurityGroup", "AutomatedEtcdBackup", "AzureClusterHostedDNSInstall", "AzureDedicatedHosts", "AzureMultiDisk", "BootImageSkewEnforcement", "BootcNodeManagement", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ClusterVersionOperatorConfiguration", "DNSNameResolver", "DualReplica", "DyanmicServiceEndpointIBMCloud", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "Example2", "ExternalSnapshotMetadata", "GCPClusterHostedDNS", "GCPClusterHostedDNSInstall", "GCPCustomAPIEndpoints", "GCPCustomAPIEndpointsInstall", "ImageModeStatusReporting", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "IrreconcilableMachineConfig", "KMSEncryptionProvider", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "ManagedBootImagesAzure", "ManagedBootImagesvSphere", "MaxUnavailableStatefulSet", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "MultiDiskSetup", "MutatingAdmissionPolicy", "NewOLM", "NewOLMCatalogdAPIV1Metas", "NewOLMOwnSingleNamespace", "NewOLMPreflightPermissionChecks", "NewOLMWebhookProviderOpenshiftServiceCA", "NoRegistryClusterOperations", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PreconfiguredUDNAddresses", "SELinuxMount", "ShortCertRotation", "SignatureStores", "SigstoreImageVerificationPKI", "TranslateStreamCloseWebsocketRequests", "VSphereConfigurableMaxAllowedBlockVolumesPerNode", "VSphereHostVMGroupZonal", "VSphereMixedNodeEnv", "VolumeAttributesClass", "VolumeGroupSnapshot"}}
I0427 16:15:31.250961 1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
I0427 16:15:31.250990 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController
I0427 16:15:31.250986 1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I0427 16:15:31.250989 1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
I0427 16:15:31.251030 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I0427 16:15:31.251035 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0427 16:15:31.251323 1 dynamic_serving_content.go:135] "Starting controller" name="serving-cert::/tmp/serving-cert-306982061/tls.crt::/tmp/serving-cert-306982061/tls.key"
I0427 16:15:31.251610 1 secure_serving.go:213] Serving securely on [::]:8443
I0427 16:15:31.251638 1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
W0427 16:15:31.259444 1 configmapobserver.go:64] Cannot get the configuration config map: configmaps "insights-config" not found. Default configuration is used.
I0427 16:15:31.259470 1 secretconfigobserver.go:216] Legacy configuration set: enabled=false endpoint=https://console.redhat.com/api/ingress/v1/upload conditional_gatherer_endpoint=https://console.redhat.com/api/gathering/v2/%s/gathering_rules interval=2h0m0s token=false reportEndpoint=https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/reports initialPollingDelay=1m0s minRetryTime=30s pollingTimeout=50m0s processingStatusEndpoint=https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/request/%s/status
I0427 16:15:31.259506 1 base_controller.go:76] Waiting for caches to sync for ConfigController
I0427 16:15:31.268113 1 secretconfigobserver.go:249] Found cloud.openshift.com token
I0427 16:15:31.268138 1 secretconfigobserver.go:204] Legacy configuration updated: enabled=true endpoint=https://console.redhat.com/api/ingress/v1/upload conditional_gatherer_endpoint=https://console.redhat.com/api/gathering/v2/%s/gathering_rules interval=2h0m0s token=true reportEndpoint=https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/reports initialPollingDelay=1m0s minRetryTime=30s pollingTimeout=50m0s processingStatusEndpoint=https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/request/%s/status
I0427 16:15:31.274145 1 secretconfigobserver.go:119] support secret does not exist
I0427 16:15:31.280082 1 secretconfigobserver.go:249] Found cloud.openshift.com token
I0427 16:15:31.285944 1 secretconfigobserver.go:119] support secret does not exist
I0427 16:15:31.288609 1 recorder.go:161] Pruning old reports every 7h3m37s, max age is 288h0m0s
I0427 16:15:31.294917 1 controllerstatus.go:80] name=insightsuploader healthy=true reason= message=
I0427 16:15:31.294938 1 insightsuploader.go:86] Reporting status periodically to https://console.redhat.com/api/ingress/v1/upload every 2h0m0s, starting in 1m30s
I0427 16:15:31.294920 1 periodic.go:209] Running clusterconfig gatherer
I0427 16:15:31.294991 1 controllerstatus.go:80] name=insightsreport healthy=true reason= message=
I0427 16:15:31.295001 1 tasks_processing.go:45] number of workers: 64
I0427 16:15:31.295006 1 insightsreport.go:296] Starting report retriever
I0427 16:15:31.295015 1 insightsreport.go:298] Insights analysis reports will be downloaded from the https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/reports endpoint with a delay of 1m0s
I0427 16:15:31.295029 1 tasks_processing.go:69] worker 2 listening for tasks.
I0427 16:15:31.295033 1 tasks_processing.go:69] worker 0 listening for tasks.
I0427 16:15:31.295042 1 tasks_processing.go:69] worker 1 listening for tasks.
I0427 16:15:31.295052 1 tasks_processing.go:69] worker 5 listening for tasks.
I0427 16:15:31.295054 1 tasks_processing.go:69] worker 9 listening for tasks.
I0427 16:15:31.295058 1 tasks_processing.go:69] worker 8 listening for tasks.
I0427 16:15:31.295059 1 tasks_processing.go:69] worker 3 listening for tasks.
I0427 16:15:31.295064 1 tasks_processing.go:69] worker 4 listening for tasks.
I0427 16:15:31.295065 1 tasks_processing.go:69] worker 12 listening for tasks.
I0427 16:15:31.295071 1 tasks_processing.go:69] worker 11 listening for tasks.
I0427 16:15:31.295073 1 tasks_processing.go:69] worker 10 listening for tasks.
I0427 16:15:31.295067 1 tasks_processing.go:69] worker 7 listening for tasks.
I0427 16:15:31.295076 1 tasks_processing.go:69] worker 14 listening for tasks.
I0427 16:15:31.295081 1 tasks_processing.go:69] worker 16 listening for tasks.
I0427 16:15:31.295091 1 tasks_processing.go:69] worker 26 listening for tasks.
I0427 16:15:31.295098 1 tasks_processing.go:69] worker 22 listening for tasks.
I0427 16:15:31.295107 1 tasks_processing.go:69] worker 23 listening for tasks.
I0427 16:15:31.295120 1 tasks_processing.go:69] worker 29 listening for tasks.
I0427 16:15:31.295124 1 tasks_processing.go:71] worker 23 working on validating_webhook_configurations task.
I0427 16:15:31.295138 1 tasks_processing.go:69] worker 15 listening for tasks.
I0427 16:15:31.295143 1 tasks_processing.go:71] worker 15 working on openshift_machine_api_events task.
I0427 16:15:31.295143 1 tasks_processing.go:69] worker 27 listening for tasks.
I0427 16:15:31.295148 1 tasks_processing.go:69] worker 54 listening for tasks.
I0427 16:15:31.295153 1 tasks_processing.go:69] worker 28 listening for tasks.
I0427 16:15:31.295164 1 tasks_processing.go:69] worker 30 listening for tasks.
I0427 16:15:31.295163 1 tasks_processing.go:69] worker 24 listening for tasks.
I0427 16:15:31.295165 1 tasks_processing.go:69] worker 17 listening for tasks.
I0427 16:15:31.295161 1 tasks_processing.go:69] worker 25 listening for tasks.
I0427 16:15:31.295168 1 tasks_processing.go:69] worker 18 listening for tasks.
I0427 16:15:31.295176 1 tasks_processing.go:69] worker 19 listening for tasks.
I0427 16:15:31.295179 1 tasks_processing.go:69] worker 32 listening for tasks.
I0427 16:15:31.295164 1 tasks_processing.go:71] worker 5 working on container_images task.
I0427 16:15:31.295184 1 tasks_processing.go:69] worker 55 listening for tasks.
I0427 16:15:31.295181 1 tasks_processing.go:69] worker 60 listening for tasks.
I0427 16:15:31.295186 1 tasks_processing.go:69] worker 21 listening for tasks.
I0427 16:15:31.295192 1 tasks_processing.go:69] worker 62 listening for tasks.
I0427 16:15:31.295172 1 tasks_processing.go:69] worker 31 listening for tasks.
I0427 16:15:31.295201 1 tasks_processing.go:69] worker 20 listening for tasks.
I0427 16:15:31.295201 1 tasks_processing.go:71] worker 12 working on machine_config_pools task.
I0427 16:15:31.295197 1 tasks_processing.go:69] worker 58 listening for tasks.
I0427 16:15:31.295207 1 tasks_processing.go:69] worker 44 listening for tasks.
I0427 16:15:31.295211 1 tasks_processing.go:71] worker 2 working on node_logs task.
I0427 16:15:31.295213 1 tasks_processing.go:69] worker 36 listening for tasks.
I0427 16:15:31.295214 1 tasks_processing.go:69] worker 35 listening for tasks.
I0427 16:15:31.295218 1 tasks_processing.go:69] worker 37 listening for tasks.
I0427 16:15:31.295183 1 tasks_processing.go:69] worker 6 listening for tasks.
I0427 16:15:31.295223 1 tasks_processing.go:69] worker 42 listening for tasks.
I0427 16:15:31.295225 1 tasks_processing.go:69] worker 43 listening for tasks.
I0427 16:15:31.295226 1 tasks_processing.go:71] worker 0 working on openshift_logging task.
I0427 16:15:31.295194 1 tasks_processing.go:71] worker 8 working on operators task.
I0427 16:15:31.295230 1 tasks_processing.go:71] worker 1 working on clusterroles task.
I0427 16:15:31.295237 1 tasks_processing.go:69] worker 49 listening for tasks.
I0427 16:15:31.295239 1 tasks_processing.go:69] worker 47 listening for tasks.
I0427 16:15:31.295237 1 tasks_processing.go:71] worker 10 working on networks task.
I0427 16:15:31.295247 1 tasks_processing.go:69] worker 48 listening for tasks.
I0427 16:15:31.295246 1 tasks_processing.go:69] worker 45 listening for tasks.
I0427 16:15:31.295176 1 tasks_processing.go:69] worker 57 listening for tasks.
I0427 16:15:31.295258 1 tasks_processing.go:69] worker 50 listening for tasks.
I0427 16:15:31.295261 1 tasks_processing.go:69] worker 52 listening for tasks.
I0427 16:15:31.295263 1 tasks_processing.go:71] worker 26 working on image_registries task.
I0427 16:15:31.295185 1 tasks_processing.go:71] worker 11 working on ingress_certificates task.
I0427 16:15:31.295273 1 tasks_processing.go:69] worker 53 listening for tasks.
I0427 16:15:31.295280 1 tasks_processing.go:71] worker 7 working on jaegers task.
I0427 16:15:31.295200 1 tasks_processing.go:69] worker 34 listening for tasks.
I0427 16:15:31.295191 1 tasks_processing.go:69] worker 33 listening for tasks.
I0427 16:15:31.295193 1 tasks_processing.go:69] worker 59 listening for tasks.
I0427 16:15:31.295200 1 tasks_processing.go:71] worker 4 working on aggregated_monitoring_cr_names task.
I0427 16:15:31.295182 1 tasks_processing.go:69] worker 56 listening for tasks.
I0427 16:15:31.295452 1 tasks_processing.go:71] worker 16 working on openstack_version task.
I0427 16:15:31.295466 1 tasks_processing.go:69] worker 51 listening for tasks.
I0427 16:15:31.295474 1 tasks_processing.go:71] worker 22 working on lokistack task.
I0427 16:15:31.295205 1 tasks_processing.go:69] worker 63 listening for tasks.
I0427 16:15:31.295225 1 tasks_processing.go:69] worker 38 listening for tasks.
I0427 16:15:31.295230 1 tasks_processing.go:69] worker 41 listening for tasks.
I0427 16:15:31.295231 1 tasks_processing.go:71] worker 14 working on monitoring_persistent_volumes task.
I0427 16:15:31.295606 1 tasks_processing.go:71] worker 41 working on container_runtime_configs task.
I0427 16:15:31.295219 1 tasks_processing.go:69] worker 40 listening for tasks.
I0427 16:15:31.295626 1 tasks_processing.go:71] worker 40 working on machine_configs task.
I0427 16:15:31.295127 1 tasks_processing.go:71] worker 29 working on storage_cluster task.
I0427 16:15:31.295643 1 tasks_processing.go:69] worker 13 listening for tasks.
I0427 16:15:31.295646 1 tasks_processing.go:71] worker 58 working on overlapping_namespace_uids task.
I0427 16:15:31.295653 1 tasks_processing.go:71] worker 32 working on sap_pods task.
I0427 16:15:31.295656 1 tasks_processing.go:71] worker 50 working on qemu_kubevirt_launcher_logs task.
I0427 16:15:31.295204 1 tasks_processing.go:71] worker 3 working on olm_operators task.
I0427 16:15:31.296578 1 tasks_processing.go:71] worker 31 working on mutating_webhook_configurations task.
I0427 16:15:31.295181 1 tasks_processing.go:69] worker 61 listening for tasks.
I0427 16:15:31.296609 1 tasks_processing.go:71] worker 61 working on nodes task.
I0427 16:15:31.295189 1 tasks_processing.go:71] worker 9 working on oauths task.
I0427 16:15:31.296633 1 tasks_processing.go:71] worker 52 working on ceph_cluster task.
I0427 16:15:31.295251 1 tasks_processing.go:69] worker 46 listening for tasks.
I0427 16:15:31.296649 1 tasks_processing.go:71] worker 49 working on machine_autoscalers task.
I0427 16:15:31.296654 1 tasks_processing.go:71] worker 46 working on config_maps task.
I0427 16:15:31.295210 1 tasks_processing.go:69] worker 39 listening for tasks.
I0427 16:15:31.296657 1 tasks_processing.go:71] worker 44 working on infrastructures task.
I0427 16:15:31.296670 1 tasks_processing.go:71] worker 24 working on nodenetworkconfigurationpolicies task.
I0427 16:15:31.296677 1 tasks_processing.go:71] worker 56 working on openstack_dataplanedeployments task.
I0427 16:15:31.296701 1 tasks_processing.go:71] worker 25 working on storage_classes task.
I0427 16:15:31.296711 1 tasks_processing.go:71] worker 48 working on number_of_pods_and_netnamespaces_with_sdn_annotations task.
I0427 16:15:31.296735 1 tasks_processing.go:71] worker 39 working on support_secret task.
I0427 16:15:31.296746 1 tasks_processing.go:71] worker 17 working on authentication task.
I0427 16:15:31.296753 1 tasks_processing.go:71] worker 45 working on machine_healthchecks task.
I0427 16:15:31.296802 1 tasks_processing.go:71] worker 37 working on nodenetworkstates task.
I0427 16:15:31.296837 1 tasks_processing.go:71] worker 36 working on pdbs task.
I0427 16:15:31.296883 1 tasks_processing.go:71] worker 59 working on openstack_controlplanes task.
I0427 16:15:31.296705 1 tasks_processing.go:71] worker 47 working on install_plans task.
I0427 16:15:31.296850 1 tasks_processing.go:71] worker 35 working on sap_config task.
I0427 16:15:31.296857 1 tasks_processing.go:71] worker 27 working on version task.
I0427 16:15:31.296861 1 tasks_processing.go:71] worker 33 working on ingress task.
I0427 16:15:31.296862 1 tasks_processing.go:71] worker 54 working on silenced_alerts task.
W0427 16:15:31.297315 1 gather_silenced_alerts.go:38] Unable to load alerts client, no alerts will be collected: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory
I0427 16:15:31.297339 1 tasks_processing.go:71] worker 54 working on proxies task.
I0427 16:15:31.297707 1 gather.go:177] gatherer "clusterconfig" function "silenced_alerts" took 39.728µs to process 0 records
I0427 16:15:31.296865 1 tasks_processing.go:71] worker 53 working on crds task.
I0427 16:15:31.296866 1 tasks_processing.go:71] worker 28 working on tsdb_status task.
W0427 16:15:31.298120 1 gather_prometheus_tsdb_status.go:38] Unable to load metrics client, tsdb status cannot be collected: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory
I0427 16:15:31.298134 1 tasks_processing.go:74] worker 28 stopped.
I0427 16:15:31.298144 1 gather.go:177] gatherer "clusterconfig" function "tsdb_status" took 83.439µs to process 0 records
I0427 16:15:31.296869 1 tasks_processing.go:71] worker 34 working on metrics task.
W0427 16:15:31.298167 1 gather_most_recent_metrics.go:64] Unable to load metrics client, no metrics will be collected: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory
I0427 16:15:31.296871 1 tasks_processing.go:71] worker 30 working on active_alerts task.
I0427 16:15:31.296874 1 tasks_processing.go:71] worker 20 working on cluster_apiserver task.
I0427 16:15:31.296877 1 tasks_processing.go:71] worker 57 working on openstack_dataplanenodesets task.
I0427 16:15:31.296881 1 tasks_processing.go:71] worker 18 working on certificate_signing_requests task.
I0427 16:15:31.296887 1 tasks_processing.go:71] worker 63 working on service_accounts task.
I0427 16:15:31.296892 1 tasks_processing.go:71] worker 51 working on sap_datahubs task.
I0427 16:15:31.296897 1 tasks_processing.go:71] worker 21 working on dvo_metrics task.
I0427 16:15:31.296899 1 tasks_processing.go:71] worker 60 working on pod_network_connectivity_checks task.
I0427 16:15:31.296903 1 tasks_processing.go:71] worker 42 working on image_pruners task.
I0427 16:15:31.296905 1 tasks_processing.go:71] worker 55 working on machines task.
I0427 16:15:31.296907 1 tasks_processing.go:71] worker 6 working on cost_management_metrics_configs task.
I0427 16:15:31.296913 1 tasks_processing.go:71] worker 38 working on schedulers task.
I0427 16:15:31.296914 1 tasks_processing.go:71] worker 43 working on feature_gates task.
I0427 16:15:31.296921 1 tasks_processing.go:71] worker 19 working on operators_pods_and_events task.
I0427 16:15:31.296931 1 tasks_processing.go:71] worker 13 working on image task.
I0427 16:15:31.296939 1 tasks_processing.go:71] worker 62 working on machine_sets task.
I0427 16:15:31.298215 1 tasks_processing.go:74] worker 34 stopped.
I0427 16:15:31.298226 1 gather.go:177] gatherer "clusterconfig" function "metrics" took 54.681µs to process 0 records
W0427 16:15:31.298356 1 gather_active_alerts.go:54] Unable to load alerts client, no alerts will be collected: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory
I0427 16:15:31.298612 1 tasks_processing.go:74] worker 30 stopped.
I0427 16:15:31.298642 1 gather.go:177] gatherer "clusterconfig" function "active_alerts" took 376.228µs to process 0 records
I0427 16:15:31.305566 1 tasks_processing.go:74] worker 0 stopped.
I0427 16:15:31.305584 1 gather.go:177] gatherer "clusterconfig" function "openshift_logging" took 10.32612ms to process 0 records
I0427 16:15:31.309374 1 tasks_processing.go:74] worker 16 stopped.
I0427 16:15:31.309384 1 gather.go:177] gatherer "clusterconfig" function "openstack_version" took 13.904975ms to process 0 records
I0427 16:15:31.310134 1 controller.go:128] Initializing last reported time to 0001-01-01T00:00:00Z
I0427 16:15:31.310151 1 controller.go:212] Source periodic-clusterconfig *controllerstatus.Simple is not ready
I0427 16:15:31.310155 1 controller.go:212] Source periodic-conditional *controllerstatus.Simple is not ready
I0427 16:15:31.310158 1 controller.go:212] Source periodic-workloads *controllerstatus.Simple is not ready
I0427 16:15:31.310172 1 controller.go:489] The operator is still being initialized
I0427 16:15:31.310180 1 controller.go:512] The operator is healthy
I0427 16:15:31.318915 1 tasks_processing.go:74] worker 7 stopped.
I0427 16:15:31.318924 1 gather.go:177] gatherer "clusterconfig" function "jaegers" took 23.62857ms to process 0 records
I0427 16:15:31.318930 1 gather.go:177] gatherer "clusterconfig" function "lokistack" took 23.448436ms to process 0 records
I0427 16:15:31.318936 1 tasks_processing.go:74] worker 22 stopped.
I0427 16:15:31.323738 1 tasks_processing.go:74] worker 41 stopped.
I0427 16:15:31.323751 1 gather.go:177] gatherer "clusterconfig" function "container_runtime_configs" took 28.116045ms to process 0 records
I0427 16:15:31.323759 1 gather.go:177] gatherer "clusterconfig" function "sap_config" took 26.622363ms to process 0 records
I0427 16:15:31.323765 1 tasks_processing.go:74] worker 35 stopped.
I0427 16:15:31.323784 1 tasks_processing.go:74] worker 15 stopped.
I0427 16:15:31.323794 1 gather.go:177] gatherer "clusterconfig" function "openshift_machine_api_events" took 28.633072ms to process 0 records
I0427 16:15:31.324055 1 tasks_processing.go:74] worker 10 stopped.
I0427 16:15:31.324287 1 recorder.go:75] Recording config/network with fingerprint=0a1faadfc8a9cfe192cd49a9f09b2a2b3d9ad9ba2cd3bc6fad7673be8b52a1f6
I0427 16:15:31.324302 1 gather.go:177] gatherer "clusterconfig" function "networks" took 28.797761ms to process 1 records
I0427 16:15:31.324393 1 tasks_processing.go:74] worker 44 stopped.
I0427 16:15:31.324890 1 recorder.go:75] Recording config/infrastructure with fingerprint=4143e0b0e6d5a4feda331d5f00314093ab715fa1fc093fe55ab2d3c3d577c5b7
I0427 16:15:31.324903 1 gather.go:177] gatherer "clusterconfig" function "infrastructures" took 27.631495ms to process 1 records
I0427 16:15:31.324909 1 gather.go:177] gatherer "clusterconfig" function "sap_datahubs" took 26.247289ms to process 0 records
I0427 16:15:31.324914 1 gather.go:177] gatherer "clusterconfig" function "sap_pods" took 29.028855ms to process 0 records
I0427 16:15:31.324918 1 gather.go:177] gatherer "clusterconfig" function "machine_config_pools" took 29.633006ms to process 0 records
I0427 16:15:31.324933 1 tasks_processing.go:74] worker 32 stopped.
I0427 16:15:31.324941 1 tasks_processing.go:74] worker 51 stopped.
I0427 16:15:31.324945 1 tasks_processing.go:74] worker 12 stopped.
I0427 16:15:31.324960 1 recorder.go:75] Recording config/proxy with fingerprint=a3e910f47d80caa1d8f7db55c729008b0bb4682463b7cfdea764c035bf121d8d
I0427 16:15:31.324984 1 gather.go:177] gatherer "clusterconfig" function "proxies" took 27.343723ms to process 1 records
I0427 16:15:31.324985 1 tasks_processing.go:74] worker 54 stopped.
I0427 16:15:31.324988 1 gather.go:177] gatherer "clusterconfig" function "nodenetworkconfigurationpolicies" took 28.203689ms to process 0 records
I0427 16:15:31.324993 1 tasks_processing.go:74] worker 24 stopped.
I0427 16:15:31.325060 1 tasks_processing.go:74] worker 9 stopped.
I0427 16:15:31.325186 1 recorder.go:75] Recording config/oauth with fingerprint=49c77256e769f4c9dbcfb6f77dfa3b2fe23ef0faa160cad54905c78713c9683b
I0427 16:15:31.325198 1 gather.go:177] gatherer "clusterconfig" function "oauths" took 28.311152ms to process 1 records
I0427 16:15:31.330022 1 tasks_processing.go:74] worker 37 stopped.
I0427 16:15:31.330032 1 gather.go:177] gatherer "clusterconfig" function "nodenetworkstates" took 33.206825ms to process 0 records
I0427 16:15:31.330042 1 gather.go:177] gatherer "clusterconfig" function "storage_cluster" took 34.382825ms to process 0 records
I0427 16:15:31.330046 1 gather.go:177] gatherer "clusterconfig" function "openstack_dataplanenodesets" took 31.500027ms to process 0 records
I0427 16:15:31.330051 1 tasks_processing.go:74] worker 57 stopped.
I0427 16:15:31.330054 1 tasks_processing.go:74] worker 29 stopped.
I0427 16:15:31.330089 1 tasks_processing.go:74] worker 39 stopped.
E0427 16:15:31.330104 1 gather.go:140] gatherer "clusterconfig" function "support_secret" failed with the error: secrets "support" not found
I0427 16:15:31.330115 1 gather.go:177] gatherer "clusterconfig" function "support_secret" took 33.336882ms to process 0 records
I0427 16:15:31.330134 1 tasks_processing.go:74] worker 45 stopped.
E0427 16:15:31.330143 1 gather.go:140] gatherer "clusterconfig" function "machine_healthchecks" failed with the error: machinehealthchecks.machine.openshift.io is forbidden: User "system:serviceaccount:openshift-insights:gather" cannot list resource "machinehealthchecks" in API group "machine.openshift.io" at the cluster scope
I0427 16:15:31.330151 1 gather.go:177] gatherer "clusterconfig" function "machine_healthchecks" took 33.369202ms to process 0 records
I0427 16:15:31.330225 1 tasks_processing.go:74] worker 23 stopped.
I0427 16:15:31.330404 1 recorder.go:75] Recording config/validatingwebhookconfigurations/multus.openshift.io with fingerprint=844c12a763cc813f10771c8809e1ec4ee6ced68c52124fc547af0d44e8ca804f
I0427 16:15:31.330494 1 recorder.go:75] Recording config/validatingwebhookconfigurations/network-node-identity.openshift.io with fingerprint=19ebb449bd27940d504ee86a7e2342ec2bf7a1aa77ceda6af9687118e17bc1e6
I0427 16:15:31.330527 1 recorder.go:75] Recording config/validatingwebhookconfigurations/performance-addon-operator with fingerprint=e0a1bdacba5503cc55b4373f56b31f5d7dcfd03223226f35cb0cdc90d3bfd4c1
I0427 16:15:31.330571 1 recorder.go:75] Recording config/validatingwebhookconfigurations/sre-clusterrolebindings-validation with fingerprint=2a2d396e0af5014a7b35ad57de9fed7a5ffd141e376507c38014baf37a2084a3
I0427 16:15:31.330614 1 recorder.go:75] Recording config/validatingwebhookconfigurations/sre-clusterroles-validation with fingerprint=14c699767cc87adfc6501a8c866438bb6616873d365e1391a3d002170fa5aa00
I0427 16:15:31.330658 1 recorder.go:75] Recording config/validatingwebhookconfigurations/sre-ingress-config-validation with fingerprint=ba23fcf62bbddc147ebc2e34d1042de58e6ba298c59ea959773c181d6a63ce84
I0427 16:15:31.330699 1 recorder.go:75] Recording config/validatingwebhookconfigurations/sre-network-operator-validation with fingerprint=69dab557388b99b5dfa94fa3ba55b563b302a09b4abf2e046ec5eeb6671bbb4c
I0427 16:15:31.330747 1 recorder.go:75] Recording config/validatingwebhookconfigurations/sre-regular-user-validation with fingerprint=e8e76b8648ce077c69e238d81b40122e9fb8770c6981ac4bc356b565634febf8
I0427 16:15:31.330802 1 recorder.go:75] Recording config/validatingwebhookconfigurations/sre-scc-validation with fingerprint=973459bf0d5546577ef641275b134ab4712be6e54545369ea53b5f0c0cf57d16
I0427 16:15:31.330843 1 recorder.go:75] Recording config/validatingwebhookconfigurations/sre-serviceaccount-validation with fingerprint=0c585576353eecd83bb6b41c9b699b6d679559a6ef295f3009f3c44b33c99ea5
I0427 16:15:31.330883 1 recorder.go:75] Recording config/validatingwebhookconfigurations/sre-techpreviewnoupgrade-validation with fingerprint=d4a5d8f5551c1da54d80d57e0ad1bcf314b8243773899106e453ec4f31a33f40
I0427 16:15:31.330898 1 gather.go:177] gatherer "clusterconfig" function "validating_webhook_configurations" took 35.083082ms to process 11 records
I0427 16:15:31.330909 1 gather.go:177] gatherer "clusterconfig" function "openstack_dataplanedeployments" took 34.021142ms to process 0 records
I0427 16:15:31.330916 1 gather.go:177] gatherer "clusterconfig" function "machine_autoscalers" took 34.217001ms to process 0 records
I0427 16:15:31.330984 1 tasks_processing.go:74] worker 56 stopped.
I0427 16:15:31.330998 1 tasks_processing.go:74] worker 49 stopped.
I0427 16:15:31.331068 1 recorder.go:75] Recording config/apiserver with fingerprint=c14800576928fab555261f998089665b3763bb2e714a51ea4740c617c80f4539
I0427 16:15:31.331080 1 gather.go:177] gatherer "clusterconfig" function "cluster_apiserver" took 32.495345ms to process 1 records
I0427 16:15:31.331089 1 gather.go:177] gatherer "clusterconfig" function "openstack_controlplanes" took 34.049916ms to process 0 records
I0427 16:15:31.331096 1 gather.go:177] gatherer "clusterconfig" function "ceph_cluster" took 34.089927ms to process 0 records
I0427 16:15:31.331102 1 gather.go:177] gatherer "clusterconfig" function "monitoring_persistent_volumes" took 35.488551ms to process 0 records
I0427 16:15:31.331106 1 tasks_processing.go:74] worker 59 stopped.
I0427 16:15:31.331115 1 tasks_processing.go:74] worker 20 stopped.
I0427 16:15:31.331120 1 tasks_processing.go:74] worker 52 stopped.
I0427 16:15:31.331121 1 tasks_processing.go:74] worker 14 stopped.
I0427 16:15:31.331198 1 tasks_processing.go:74] worker 33 stopped.
I0427 16:15:31.331216 1 recorder.go:75] Recording config/ingress with fingerprint=7d9fe97f10fa12feb79490f92846925811b9a111d8a43efe884ec58bf46cb5f9
I0427 16:15:31.331225 1 gather.go:177] gatherer "clusterconfig" function "ingress" took 33.894438ms to process 1 records
I0427 16:15:31.331368 1 tasks_processing.go:74] worker 17 stopped.
I0427 16:15:31.331601 1 recorder.go:75] Recording config/authentication with fingerprint=3f2e8bf9daa52ab52481c148d145d29ea5e0b5b5deb9f9dfd14179806c7ffcc1
I0427 16:15:31.331618 1 gather.go:177] gatherer "clusterconfig" function "authentication" took 34.606215ms to process 1 records
I0427 16:15:31.331706 1 tasks_processing.go:74] worker 26 stopped.
I0427 16:15:31.332259 1 gather_logs.go:145] no pods in namespace were found
I0427 16:15:31.332314 1 recorder.go:75] Recording config/clusteroperator/imageregistry.operator.openshift.io/config/cluster with fingerprint=269f0b8019ee10834ee9cbdd5c9607c8e1ca141f370df494ed55c03c41d4a332
I0427 16:15:31.332330 1 gather.go:177] gatherer "clusterconfig" function "image_registries" took 36.263417ms to process 1 records
I0427 16:15:31.332338 1 gather.go:177] gatherer "clusterconfig" function "qemu_kubevirt_launcher_logs" took 36.607351ms to process 0 records
I0427 16:15:31.332346 1 tasks_processing.go:74] worker 50 stopped.
I0427 16:15:31.332377 1 tasks_processing.go:74] worker 38 stopped.
I0427 16:15:31.332440 1 recorder.go:75] Recording config/schedulers/cluster with fingerprint=75ade3a64e05cf137df54310c14367a41c33fad1d44bcc19f698d91a723831f1
I0427 16:15:31.332456 1 gather.go:177] gatherer "clusterconfig" function "schedulers" took 33.701268ms to process 1 records
I0427 16:15:31.332903 1 tasks_processing.go:74] worker 60 stopped.
E0427 16:15:31.332963 1 gather.go:140] gatherer "clusterconfig" function "pod_network_connectivity_checks" failed with the error: the server could not find the requested resource (get podnetworkconnectivitychecks.controlplane.operator.openshift.io)
I0427 16:15:31.332995 1 gather.go:177] gatherer "clusterconfig" function "pod_network_connectivity_checks" took 34.117824ms to process 0 records
I0427 16:15:31.333115 1 tasks_processing.go:74] worker 43 stopped.
I0427 16:15:31.333179 1 recorder.go:75] Recording config/featuregate with fingerprint=50426d73018bdf344c4d057dd70452cb6f4a7ac2a4861991bf20bcf811e906ca
I0427 16:15:31.333189 1 gather.go:177] gatherer "clusterconfig" function "feature_gates" took 34.316389ms to process 1 records
I0427 16:15:31.333329 1 tasks_processing.go:74] worker 6 stopped.
I0427 16:15:31.333345 1 gather.go:177] gatherer "clusterconfig" function "cost_management_metrics_configs" took 34.658443ms to process 0 records
I0427 16:15:31.333681 1 tasks_processing.go:74] worker 3 stopped.
I0427 16:15:31.333702 1 gather.go:177] gatherer "clusterconfig" function "olm_operators" took 37.886144ms to process 0 records
I0427 16:15:31.334331 1 tasks_processing.go:74] worker 55 stopped.
E0427 16:15:31.334347 1 gather.go:140] gatherer "clusterconfig" function "machines" failed with the error: machines.machine.openshift.io is forbidden: User "system:serviceaccount:openshift-insights:gather" cannot list resource "machines" in API group "machine.openshift.io" at the cluster scope
I0427 16:15:31.334359 1 gather.go:177] gatherer "clusterconfig" function "machines" took 35.491804ms to process 0 records
I0427 16:15:31.338607 1 tasks_processing.go:74] worker 31 stopped.
I0427 16:15:31.338770 1 recorder.go:75] Recording config/mutatingwebhookconfigurations/aws-pod-identity with fingerprint=ec8eecd25b248e70b470f232c4c3cd80a61998914c4184e8267aeb867452e7f3
I0427 16:15:31.338826 1 recorder.go:75] Recording config/mutatingwebhookconfigurations/sre-podimagespec-mutation with fingerprint=2711983159e8447faf9e2f5617c19ce19d34413ee2a543999d972024506d0436
I0427 16:15:31.338876 1 recorder.go:75] Recording config/mutatingwebhookconfigurations/sre-service-mutation with fingerprint=1e084de5a3a12c7678ff7574a4d907c61c2c0ee97dfdd99064639a6556e09a0c
I0427 16:15:31.338887 1 gather.go:177] gatherer "clusterconfig" function "mutating_webhook_configurations" took 42.010744ms to process 3 records
I0427 16:15:31.338900 1 gather.go:177] gatherer "clusterconfig" function "node_logs" took 43.407137ms to process 0 records
I0427 16:15:31.338913 1 gather.go:177] gatherer "clusterconfig" function "machine_sets" took 39.829433ms to process 0 records
I0427 16:15:31.338918 1 tasks_processing.go:74] worker 2 stopped.
I0427 16:15:31.338927 1 tasks_processing.go:74] worker 62 stopped.
I0427 16:15:31.338985 1 tasks_processing.go:74] worker 13 stopped.
I0427 16:15:31.339030 1 recorder.go:75] Recording config/image with fingerprint=7db59de6500fe5d4c063234064901c429f05dac9c64d668ad4d2e7c44ffbb154
I0427 16:15:31.339040 1 gather.go:177] gatherer "clusterconfig" function "image" took 40.024096ms to process 1 records
I0427 16:15:31.339172 1 tasks_processing.go:74] worker 58 stopped.
I0427 16:15:31.339197 1 recorder.go:75] Recording config/namespaces_with_overlapping_uids with fingerprint=4f53cda18c2baa0c0354bb5f9a3ecbe5ed12ab4d8e11ba873c2f11161202b945
I0427 16:15:31.339207 1 gather.go:177] gatherer "clusterconfig" function "overlapping_namespace_uids" took 43.517017ms to process 1 records
I0427 16:15:31.344387 1 tasks_processing.go:74] worker 36 stopped.
I0427 16:15:31.344469 1 recorder.go:75] Recording config/pdbs/openshift-image-registry/image-registry with fingerprint=d04e4f0755e4bff422d12f28a35e1f55fb73bde6e2bcb3d48ea34886f6a385f0
I0427 16:15:31.344489 1 recorder.go:75] Recording config/pdbs/openshift-ingress/router-default with fingerprint=49563547dbd5f7764ca48d4555029cab53a5f8c38d68097a0239a33ecc4ecf5e
I0427 16:15:31.344506 1 recorder.go:75] Recording config/pdbs/openshift-operator-lifecycle-manager/packageserver-pdb with fingerprint=dd8440c6040d3fd51d01161f301b8557e7e615ac422cbf0747ab3363a7639cc9
I0427 16:15:31.344518 1 gather.go:177] gatherer "clusterconfig" function "pdbs" took 47.498212ms to process 3 records
I0427 16:15:31.344624 1 tasks_processing.go:74] worker 53 stopped.
I0427 16:15:31.345350 1 recorder.go:75] Recording config/crd/volumesnapshots.snapshot.storage.k8s.io with fingerprint=9d1ef999d1d99ebf9e7a4617a7c8769dd99ee27bcdd788cfb4bfc6671262da2c
I0427 16:15:31.345687 1 recorder.go:75] Recording config/crd/volumesnapshotcontents.snapshot.storage.k8s.io with fingerprint=f594d455ca36cd2a10989c08896f2e75aa54ad5fba2cc94cbfc262fccfd07574
I0427 16:15:31.345701 1 gather.go:177] gatherer "clusterconfig" function "crds" took 46.84671ms to process 2 records
I0427 16:15:31.345782 1 tasks_processing.go:74] worker 42 stopped.
I0427 16:15:31.345804 1 recorder.go:75] Recording config/clusteroperator/imageregistry.operator.openshift.io/imagepruner/cluster with fingerprint=b1fe90e553e01741448ba6dba01f9a00801ab55bd721ca8ff953a0c5ef15d408
I0427 16:15:31.345814 1 gather.go:177] gatherer "clusterconfig" function "image_pruners" took 45.861393ms to process 1 records
I0427 16:15:31.345886 1 tasks_processing.go:74] worker 61 stopped.
I0427 16:15:31.346214 1 recorder.go:75] Recording config/node/ip-10-0-0-46.ec2.internal with fingerprint=ac1a698aaf62a551b8e53986590a572e957fbb76f0601e649c3dcdda9de6f411
I0427 16:15:31.346276 1 sca.go:136] Pulling SCA certificates from https://api.openshift.com/api/accounts_mgmt/v1/entitlement_certificates. Next check is in 8h0m0s
I0427 16:15:31.346282 1 cluster_transfer.go:83] checking the availability of cluster transfer. Next check is in 12h0m0s
I0427 16:15:31.346307 1 recorder.go:75] Recording config/node/ip-10-0-1-184.ec2.internal with fingerprint=638997a9cb637870830fceea7729c1958f95d4034c406886639e93a5ce6503b6
I0427 16:15:31.346390 1 recorder.go:75] Recording config/node/ip-10-0-2-136.ec2.internal with fingerprint=7d18d25cb460d292906a3dc9e89b6cc2a8bd3b09a8336e4f7d32420fb0f6ab89
I0427 16:15:31.346401 1 gather.go:177] gatherer "clusterconfig" function "nodes" took 48.11404ms to process 3 records
W0427 16:15:31.346664 1 operator.go:288] started
I0427 16:15:31.346744 1 base_controller.go:76] Waiting for caches to sync for LoggingSyncer
I0427 16:15:31.349370 1 tasks_processing.go:74] worker 25 stopped.
I0427 16:15:31.349460 1 recorder.go:75] Recording config/storage/storageclasses/gp2-csi with fingerprint=a0699edda96265e55f77be24da508bd95e2a65af0c66dc9130d48f8884e5f74d
I0427 16:15:31.349479 1 recorder.go:75] Recording config/storage/storageclasses/gp3-csi with fingerprint=3b4a72fcabe24e6a934aa1ffdc4ab6df1d324ec1e1b97140ce82c39b5a648b02
I0427 16:15:31.349485 1 gather.go:177] gatherer "clusterconfig" function "storage_classes" took 52.651598ms to process 2 records
I0427 16:15:31.349492 1 gather.go:177] gatherer "clusterconfig" function "aggregated_monitoring_cr_names" took 54.087318ms to process 0 records
I0427 16:15:31.349497 1 tasks_processing.go:74] worker 4 stopped.
I0427 16:15:31.351024 1 shared_informer.go:320] Caches are synced for RequestHeaderAuthRequestController
I0427 16:15:31.351073 1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0427 16:15:31.351113 1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I0427 16:15:31.352252 1 tasks_processing.go:74] worker 18 stopped.
I0427 16:15:31.352270 1 gather.go:177] gatherer "clusterconfig" function "certificate_signing_requests" took 53.881812ms to process 0 records
W0427 16:15:31.353762 1 gather_dvo_metrics.go:210] Failed to read the DVO metrics. Trying again.
I0427 16:15:31.359541 1 base_controller.go:82] Caches are synced for ConfigController
I0427 16:15:31.359562 1 base_controller.go:119] Starting #1 worker of ConfigController controller ...
I0427 16:15:31.360598 1 tasks_processing.go:74] worker 40 stopped.
I0427 16:15:31.360629 1 recorder.go:75] Recording aggregated/unused_machine_configs_count with fingerprint=4bfc9fa984e5dfcd45848faaf05269de7619bf42edf9f781751af5ee05c1a499
I0427 16:15:31.360638 1 gather.go:177] gatherer "clusterconfig" function "machine_configs" took 64.944525ms to process 1 records
I0427 16:15:31.360781 1 tasks_processing.go:74] worker 1 stopped.
I0427 16:15:31.360945 1 recorder.go:75] Recording cluster-scoped-resources/rbac.authorization.k8s.io/clusterroles/admin with fingerprint=ecfd79fa3bbfbcd9a81fd7cb9dd3ada0d6dd664ef95fbcb88eb35b22890c215f
I0427 16:15:31.361046 1 recorder.go:75] Recording cluster-scoped-resources/rbac.authorization.k8s.io/clusterroles/edit with fingerprint=8358f039d1b4467c8f1a3e9da8532f4441dad1a69be28164cfbfa587fff8fd07
I0427 16:15:31.361056 1 gather.go:177] gatherer "clusterconfig" function "clusterroles" took 65.535209ms to process 2 records
I0427 16:15:31.362752 1 tasks_processing.go:74] worker 5 stopped.
I0427 16:15:31.364009 1 recorder.go:75] Recording config/pod/openshift-multus/multus-6gjsz with fingerprint=3b23d91f68e8cf8f7e22583a9191c3b4d6f49019fca6ba7b868c288b741c7e59
I0427 16:15:31.364111 1 recorder.go:75] Recording config/pod/openshift-multus/multus-7x265 with fingerprint=5a7efd5706c1b979a9687b696eff1e771a93dc5d71244c3505c59ea4119da6a0
I0427 16:15:31.364208 1 recorder.go:75] Recording config/pod/openshift-multus/multus-gswlm with fingerprint=0a80524693195293a7550786551d98b39c179be912f0939a844066226e18a4fe
I0427 16:15:31.364438 1 recorder.go:75] Recording config/pod/openshift-ovn-kubernetes/ovnkube-node-frxv9 with fingerprint=74288eb29d48685af131edf2e0ec624dfd43aae07f708a22f78a399cdd2d0c12
I0427 16:15:31.364651 1 recorder.go:75] Recording config/pod/openshift-ovn-kubernetes/ovnkube-node-z88g7 with fingerprint=f2575dd1b857defb5cb14873022fd16d4a6d7b56fd82910d73f3022e021a2771
I0427 16:15:31.364699 1 recorder.go:75] Recording config/running_containers with fingerprint=8e812c73238404b9d19be7b759ba7179ef92d1712e3551cdfcdf6d94927627fe
I0427 16:15:31.364708 1 gather.go:177] gatherer "clusterconfig" function "container_images" took 67.552737ms to process 6 records
I0427 16:15:31.370306 1 configmapobserver.go:84] configmaps "insights-config" not found
I0427 16:15:31.370456 1 prometheus_rules.go:88] Prometheus rules successfully created
I0427 16:15:31.370778 1 controller.go:212] Source clusterTransferController *clustertransfer.Controller is not ready
I0427 16:15:31.370787 1 controller.go:212] Source periodic-clusterconfig *controllerstatus.Simple is not ready
I0427 16:15:31.370790 1 controller.go:212] Source periodic-conditional *controllerstatus.Simple is not ready
I0427 16:15:31.370793 1 controller.go:212] Source periodic-workloads *controllerstatus.Simple is not ready
I0427 16:15:31.370797 1 controller.go:212] Source scaController *sca.Controller is not ready
I0427 16:15:31.370812 1 controller.go:489] The operator is still being initialized
I0427 16:15:31.370817 1 controller.go:512] The operator is healthy
I0427 16:15:31.372592 1 tasks_processing.go:74] worker 48 stopped.
I0427 16:15:31.372605 1 gather.go:177] gatherer "clusterconfig" function "number_of_pods_and_netnamespaces_with_sdn_annotations" took 75.870909ms to process 0 records
E0427 16:15:31.378594 1 cluster_transfer.go:95] failed to pull cluster transfer: unable to retrieve cluster transfer data from https://api.openshift.com/api/accounts_mgmt/v1/cluster_transfers/: Get "https://api.openshift.com/api/accounts_mgmt/v1/cluster_transfers/?search=cluster_uuid+is+%27d1f6d41b-b6b8-4ad8-839b-fb58abc045f1%27+and+status+is+%27accepted%27": dial tcp: lookup api.openshift.com on 172.30.0.10:53: read udp 10.130.0.6:39403->172.30.0.10:53: read: connection refused
I0427 16:15:31.378607 1 controllerstatus.go:80] name=clusterTransferController healthy=true reason=Disconnected message=failed to pull cluster transfer: unable to retrieve cluster transfer data from https://api.openshift.com/api/accounts_mgmt/v1/cluster_transfers/: Get "https://api.openshift.com/api/accounts_mgmt/v1/cluster_transfers/?search=cluster_uuid+is+%27d1f6d41b-b6b8-4ad8-839b-fb58abc045f1%27+and+status+is+%27accepted%27": dial tcp: lookup api.openshift.com on 172.30.0.10:53: read udp 10.130.0.6:39403->172.30.0.10:53: read: connection refused
I0427 16:15:31.395508 1 tasks_processing.go:74] worker 11 stopped.
E0427 16:15:31.395523 1 gather.go:140] gatherer "clusterconfig" function "ingress_certificates" failed with the error: failed to get secret 'router-certs-default' in namespace 'openshift-ingress': secrets "router-certs-default" not found
E0427 16:15:31.395534 1 gather.go:140] gatherer "clusterconfig" function "ingress_certificates" failed with the error: failed to get secret '2pujfmsfl53b5g7k4p2o3ab2nokd74cv-primary-cert-bundle-secret' in namespace 'openshift-ingress-operator': secrets "2pujfmsfl53b5g7k4p2o3ab2nokd74cv-primary-cert-bundle-secret" not found
I0427 16:15:31.395581 1 recorder.go:75] Recording aggregated/ingress_controllers_certs with fingerprint=8b790fa2e15797bd0b6616056a100d589175c5584ab96046fda5727edd715bd7
I0427 16:15:31.395599 1 gather.go:177] gatherer "clusterconfig" function "ingress_certificates" took 100.224742ms to process 1 records
I0427 16:15:31.404285 1 tasks_processing.go:74] worker 46 stopped.
E0427 16:15:31.404285 1 gather.go:140] gatherer "clusterconfig" function "config_maps" failed with the error: configmaps "cluster-monitoring-config" not found
E0427 16:15:31.404308 1 gather.go:140] gatherer "clusterconfig" function "config_maps" failed with the error: configmaps "gateway-mode-config" not found
E0427 16:15:31.404317 1 gather.go:140] gatherer "clusterconfig" function "config_maps" failed with the error: configmaps "insights-config" not found
I0427 16:15:31.404330 1 recorder.go:75] Recording config/configmaps/openshift-config/installer-images/images.json with fingerprint=26b6661162b099a0f5a279859b4f46c867929a79d9a4a41fde4be4e6fe138018
I0427 16:15:31.404358 1 recorder.go:75] Recording config/configmaps/openshift-config/kube-root-ca.crt/ca.crt with fingerprint=d476c7d3f5b104863f08f481b1264dcc68cc272ecefb0ecb709b18a6afab034d
I0427 16:15:31.404366 1 recorder.go:75] Recording config/configmaps/openshift-config/openshift-install/invoker with fingerprint=76b482f683cd3ef9da02debac5b26080a5aeb06ff768ee5c21117514dff29d8a
I0427 16:15:31.404381 1 recorder.go:75] Recording config/configmaps/openshift-config/openshift-install/version with fingerprint=c93090eb0d2a4736885abeb79c91680cfd01fda46464f83456b085d4dc8239f0
I0427 16:15:31.404391 1 recorder.go:75] Recording config/configmaps/openshift-config/openshift-service-ca.crt/service-ca.crt with fingerprint=e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
I0427 16:15:31.404447 1 recorder.go:75] Recording config/configmaps/openshift-config/rosa-brand-logo/rosa-brand-logo.svg with fingerprint=6ed8ca4dd7a8eee7249182bc006e9649ce84d76c551ddfaaa33e55d8c4cc1ed0
I0427 16:15:31.404499 1 recorder.go:75] Recording config/configmaps/kube-system/cluster-config-v1/install-config with fingerprint=ab3811c6b83fd7b8e920094cfa3080d1b4ee3c35ec4c8379437b21d27bd6608d
I0427 16:15:31.404512 1 gather.go:177] gatherer "clusterconfig" function "config_maps" took 107.60816ms to process 7 records
I0427 16:15:31.407305 1 tasks_processing.go:74] worker 27 stopped.
I0427 16:15:31.407717 1 recorder.go:75] Recording config/version with fingerprint=e88bb1536857d18fd6ab2f39f8d6f199c59931a8b1af23c6872e29c400ebd874
I0427 16:15:31.407744 1 recorder.go:75] Recording config/id with fingerprint=cc985f960e37bc21ecd3c16b09f8dff15a806c129606e2737306c84ecc14bee7
I0427 16:15:31.407771 1 gather.go:177] gatherer "clusterconfig" function "version" took 110.091837ms to process 2 records
I0427 16:15:31.442910 1 requests.go:205] Asking for SCA certificate with "{"arch": ["x86_64"]}" payload
W0427 16:15:31.446088 1 sca.go:161] Failed to pull SCA certs from https://api.openshift.com/api/accounts_mgmt/v1/entitlement_certificates: unable to retrieve SCA certs data from https://api.openshift.com/api/accounts_mgmt/v1/entitlement_certificates: Post "https://api.openshift.com/api/accounts_mgmt/v1/entitlement_certificates": dial tcp: lookup api.openshift.com on 172.30.0.10:53: read udp 10.130.0.6:48070->172.30.0.10:53: read: connection refused
I0427 16:15:31.446100 1 controllerstatus.go:80] name=scaController healthy=true reason=NonHTTPError message=Failed to pull SCA certs from https://api.openshift.com/api/accounts_mgmt/v1/entitlement_certificates: unable to retrieve SCA certs data from https://api.openshift.com/api/accounts_mgmt/v1/entitlement_certificates: Post "https://api.openshift.com/api/accounts_mgmt/v1/entitlement_certificates": dial tcp: lookup api.openshift.com on 172.30.0.10:53: read udp 10.130.0.6:48070->172.30.0.10:53: read: connection refused
I0427 16:15:31.446909 1 base_controller.go:82] Caches are synced for LoggingSyncer
I0427 16:15:31.446923 1 base_controller.go:119] Starting #1 worker of LoggingSyncer controller ...
W0427 16:15:32.353244 1 gather_dvo_metrics.go:210] Failed to read the DVO metrics. Trying again.
I0427 16:15:32.781009 1 gather_cluster_operators.go:184] Unable to get configs.samples.operator.openshift.io resource due to: configs.samples.operator.openshift.io "cluster" not found
W0427 16:15:33.353778 1 gather_dvo_metrics.go:210] Failed to read the DVO metrics. Trying again.
I0427 16:15:33.382789 1 tasks_processing.go:74] worker 8 stopped.
I0427 16:15:33.382834 1 recorder.go:75] Recording config/clusteroperator/console with fingerprint=c983522af3279363d846b4d4c8b6b3dd0fde827d3974042b5ad65d21c52ef219
I0427 16:15:33.382884 1 recorder.go:75] Recording config/clusteroperator/csi-snapshot-controller with fingerprint=8a696bd17fffc1a916d10310f79466ed13681f9fac85b315a7fba7358d7f852a
I0427 16:15:33.382910 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/csisnapshotcontroller/cluster with fingerprint=5adc514f4b63e2f1ecc68bf6f9c0af70c5eea04522a49524e102721b1c41f80e
I0427 16:15:33.382936 1 recorder.go:75] Recording config/clusteroperator/dns with fingerprint=0e2fd54a8d4e1311db408426e952951926e31fda1a5ae2808ae8b13757f203f1
I0427 16:15:33.382952 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/dns/default with fingerprint=9e7b4ce029030d3d8c3b49af92c556acdcc415000b40d3f969dbdc42c432b47f
I0427 16:15:33.382988 1 recorder.go:75] Recording config/clusteroperator/image-registry with fingerprint=36acc08448c7811ae52ecf3ae95614edeb901da6c60a6beb1a669bcedb10d849
I0427 16:15:33.383023 1 recorder.go:75] Recording config/clusteroperator/ingress with fingerprint=c8dea105362991458f519b7b9494a7fad8a7649e2f50f73e0e2c7d364136c6f3
I0427 16:15:33.383047 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/ingresscontroller/openshift-ingress-operator/default with fingerprint=a360b91c2bb024431d8d721d013a432719bd1d2fe07250c4def12ec018a97526
I0427 16:15:33.383065 1 recorder.go:75] Recording config/clusteroperator/insights with fingerprint=265dc9abe9486078162e9f31202edc9fbcbf6f587cf166832ccbe810cedc1652
I0427 16:15:33.383084 1 recorder.go:75] Recording config/clusteroperator/kube-apiserver with fingerprint=4e22bc6208779eecd5fb10691c7e63e6ba9e9331cc215fcd30a34f6bf20a9278
I0427 16:15:33.383093 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/kubeapiserver/cluster with fingerprint=51503bf0b784fcf65ea46bcaf1f72ac1a5c4d5dc211934f18f27871efed05762
I0427 16:15:33.383108 1 recorder.go:75] Recording config/clusteroperator/kube-controller-manager with fingerprint=947ff480e8610472f0a9d6b21db13bad5ca9d7027dab3407eaa66c811a930f0c
I0427 16:15:33.383119 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/kubecontrollermanager/cluster with fingerprint=ce90c0d4f367d7da085074268031798382ae7c54fdcb0a21f15a4818fe308c11
I0427 16:15:33.383135 1 recorder.go:75] Recording config/clusteroperator/kube-scheduler with fingerprint=b1aa1c884030474be6610fb445d00f5db449a0029e5ba7b3f3c42db913632e01
I0427 16:15:33.383143 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/kubescheduler/cluster with fingerprint=f2940fb9fd20c19951dfc295eb363b7fba0c505f5ae61f01967a063099e6b60a
I0427 16:15:33.383155 1 recorder.go:75] Recording config/clusteroperator/kube-storage-version-migrator with fingerprint=e2de0a0017d08985a9c509dee9bc0de282134fc2b7148555ab6d0cfd96b522c4
I0427 16:15:33.383162 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/kubestorageversionmigrator/cluster with fingerprint=9351181aa7e6ada41ef581ab31e13516c6b934cc95710154bafb2eb222cb58db
I0427 16:15:33.383178 1 recorder.go:75] Recording config/clusteroperator/monitoring with fingerprint=9a116a4b93d543ef62ac3b9222c73d3c9f37ffb2fc9b68306e2155f27de52c69
I0427 16:15:33.383301 1 recorder.go:75] Recording config/clusteroperator/network with fingerprint=556fa229dbdbb9d61f6dfd39a6e9a10c6e78d353de8bd5bff8e5e3956d87ae0e
I0427 16:15:33.383310 1 recorder.go:75] Recording config/clusteroperator/network.operator.openshift.io/operatorpki/openshift-ovn-kubernetes/ovn with fingerprint=626a89d20e0deaed5b6dfb533acfe65f4bb1618bd200a703b62e60c5d16d94ab
I0427 16:15:33.383317 1 recorder.go:75] Recording config/clusteroperator/network.operator.openshift.io/operatorpki/openshift-ovn-kubernetes/signer with fingerprint=90410b16914712b85b3c4578716ad8c0ae072e688f4cd1e022bf76f20da3506d
I0427 16:15:33.383338 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/network/cluster with fingerprint=a93d15eaecb455a0e40ecb2826eeecc1533899204ddd3c3921d15ab70af7ae75
I0427 16:15:33.383359 1 recorder.go:75] Recording config/clusteroperator/node-tuning with fingerprint=d314ca68878faf511ba954d880fc6a2079755c57640952078c10af5feb5043fc
I0427 16:15:33.383382 1 recorder.go:75] Recording config/clusteroperator/openshift-apiserver with fingerprint=e7d7da390c20f60823e05880113d38ad2776fd44616939b2e86d7729320f182b
I0427 16:15:33.383392 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/openshiftapiserver/cluster with fingerprint=e712e6cf27339b441e4ed1f4cde91dbde7e952698ba93407e4457db63a4a4c76
I0427 16:15:33.383409 1 recorder.go:75] Recording config/clusteroperator/openshift-controller-manager with fingerprint=3832b3cce3b2c1469b0da097cb3fdd806c9ae4962a2a38590efc70d80d1fd501
I0427 16:15:33.383418 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/openshiftcontrollermanager/cluster with fingerprint=d71a0f4672f9b45d9fc8293bf1687afc650fd28d32e2e30de27523fe7b4eadf7
I0427 16:15:33.383430 1 recorder.go:75] Recording config/clusteroperator/openshift-samples with fingerprint=717ae3f1f7b4997466d51b1d668b051b2503c910e341d8e32887ba2d6638cd8e
I0427 16:15:33.383443 1 recorder.go:75] Recording config/clusteroperator/operator-lifecycle-manager with fingerprint=3455676fd8f6c99c68b14dbe6364d48f31e6633583a37aa88bcf069bdfebc992
I0427 16:15:33.383459 1 recorder.go:75] Recording config/clusteroperator/operator-lifecycle-manager-catalog with fingerprint=ed1358b530303531e6ec00cd1681beae774ac84b353ca8e442f5c547618be540
I0427 16:15:33.383474 1 recorder.go:75] Recording config/clusteroperator/operator-lifecycle-manager-packageserver with fingerprint=8cf4f9f4eecdd06356d0ccc05da86678cde13cd694da2309bfd9d3e227865e45
I0427 16:15:33.383494 1 recorder.go:75] Recording config/clusteroperator/service-ca with fingerprint=4939f8a08bccb1dd8cd995b6c84cfd8b41ad1f77518b5f19bb2900d67fb8cc06
I0427 16:15:33.383503 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/serviceca/cluster with fingerprint=812f7edc2cdb30e61e7f2b29454357a40b1a507a4b0c2b7729193b67f0e3b4aa
I0427 16:15:33.383530 1 recorder.go:75] Recording config/clusteroperator/storage with fingerprint=e2d22c2d579bdbae15dd68fba72f9e000c975d0cdf294e15e3721d0b0f52bcfb
I0427 16:15:33.383545 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/clustercsidriver/ebs.csi.aws.com with fingerprint=510064d6f6bcced87ab5bd2ddaff3d0edd7f93f4a4f7af2641f29fc53ffab21e
I0427 16:15:33.383553 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/storage/cluster with fingerprint=8e480f8c1ce1b39baac42d8ec780c57c2592929ae0c801b61ffad49ba13f33ad
I0427 16:15:33.383560 1 gather.go:177] gatherer "clusterconfig" function "operators" took 2.087543258s to process 36 records
W0427 16:15:34.353248 1 gather_dvo_metrics.go:210] Failed to read the DVO metrics. Trying again.
I0427 16:15:34.956267 1 gather_cluster_operator_pods_and_events.go:121] Found 35 pods with 78 containers
I0427 16:15:34.956281 1 gather_cluster_operator_pods_and_events.go:235] Maximum buffer size: 322638 bytes
I0427 16:15:34.956444 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for dns container dns-default-5v7gf pod in namespace openshift-dns (previous: false).
I0427 16:15:35.184630 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for dns-default-5v7gf pod in namespace openshift-dns for failing operator dns (previous: false): "container \"dns\" in pod \"dns-default-5v7gf\" is waiting to start: ContainerCreating"
I0427 16:15:35.184646 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"dns\" in pod \"dns-default-5v7gf\" is waiting to start: ContainerCreating"
I0427 16:15:35.184654 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for kube-rbac-proxy container dns-default-5v7gf pod in namespace openshift-dns (previous: false).
W0427 16:15:35.355054 1 gather_dvo_metrics.go:210] Failed to read the DVO metrics. Trying again.
I0427 16:15:35.370328 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for dns-default-5v7gf pod in namespace openshift-dns for failing operator kube-rbac-proxy (previous: false): "container \"kube-rbac-proxy\" in pod \"dns-default-5v7gf\" is waiting to start: ContainerCreating"
I0427 16:15:35.370345 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"kube-rbac-proxy\" in pod \"dns-default-5v7gf\" is waiting to start: ContainerCreating"
I0427 16:15:35.370357 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for dns-node-resolver container node-resolver-4pnbp pod in namespace openshift-dns (previous: false).
I0427 16:15:35.564032 1 gather_cluster_operator_pods_and_events.go:280] Error: "log buffer is empty"
I0427 16:15:35.564052 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for dns-node-resolver container node-resolver-qq8tk pod in namespace openshift-dns (previous: false).
I0427 16:15:35.782168 1 gather_cluster_operator_pods_and_events.go:280] Error: "log buffer is empty"
I0427 16:15:35.782185 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for dns-node-resolver container node-resolver-s4b4s pod in namespace openshift-dns (previous: false).
I0427 16:15:36.022735 1 gather_cluster_operator_pods_and_events.go:280] Error: "log buffer is empty"
I0427 16:15:36.022828 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for registry container image-registry-559586dcb5-vwrqm pod in namespace openshift-image-registry (previous: false).
I0427 16:15:36.163640 1 gather_cluster_operator_pods_and_events.go:280] Error: "log buffer is empty"
I0427 16:15:36.163691 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for registry container image-registry-7fb4cb8694-8np7m pod in namespace openshift-image-registry (previous: false).
W0427 16:15:36.353235 1 gather_dvo_metrics.go:210] Failed to read the DVO metrics. Trying again.
W0427 16:15:36.353260 1 gather_dvo_metrics.go:117] Unable to read metrics from endpoint "http://deployment-validation-operator-metrics.openshift-deployment-validation-operator.svc:8383": DVO metrics service was not available within the 5s timeout: context deadline exceeded
I0427 16:15:36.353278 1 tasks_processing.go:74] worker 21 stopped.
E0427 16:15:36.353290 1 gather.go:140] gatherer "clusterconfig" function "dvo_metrics" failed with the error: DVO metrics service was not available within the 5s timeout: context deadline exceeded
I0427 16:15:36.353305 1 recorder.go:75] Recording config/dvo_metrics with fingerprint=e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
W0427 16:15:36.353325 1 gather.go:155] issue recording gatherer "clusterconfig" function "dvo_metrics" result "config/dvo_metrics" because of the warning: warning: the record with the same fingerprint "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855" was already recorded at path "config/configmaps/openshift-config/openshift-service-ca.crt/service-ca.crt", recording another one with a different path "config/dvo_metrics"
I0427 16:15:36.353340 1 gather.go:177] gatherer "clusterconfig" function "dvo_metrics" took 5.054452996s to process 1 records
I0427 16:15:36.356251 1 gather_cluster_operator_pods_and_events.go:280] Error: "log buffer is empty"
I0427 16:15:36.356305 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for registry container image-registry-7fb4cb8694-kql8m pod in namespace openshift-image-registry (previous: false).
I0427 16:15:36.560729 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for image-registry-7fb4cb8694-kql8m pod in namespace openshift-image-registry for failing operator registry (previous: false): "container \"registry\" in pod \"image-registry-7fb4cb8694-kql8m\" is waiting to start: ContainerCreating"
I0427 16:15:36.560743 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"registry\" in pod \"image-registry-7fb4cb8694-kql8m\" is waiting to start: ContainerCreating"
I0427 16:15:36.560752 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for node-ca container node-ca-428ck pod in namespace openshift-image-registry (previous: false).
I0427 16:15:36.761559 1 gather_cluster_operator_pods_and_events.go:280] Error: "log buffer is empty"
I0427 16:15:36.761573 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for node-ca container node-ca-6424v pod in namespace openshift-image-registry (previous: false).
I0427 16:15:36.961101 1 gather_cluster_operator_pods_and_events.go:280] Error: "log buffer is empty"
I0427 16:15:36.961115 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for node-ca container node-ca-99r95 pod in namespace openshift-image-registry (previous: false).
I0427 16:15:37.161951 1 gather_cluster_operator_pods_and_events.go:280] Error: "log buffer is empty"
I0427 16:15:37.162013 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for router container router-default-68d758b845-m72wn pod in namespace openshift-ingress (previous: false).
I0427 16:15:37.360148 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for router-default-68d758b845-m72wn pod in namespace openshift-ingress for failing operator router (previous: false): "container \"router\" in pod \"router-default-68d758b845-m72wn\" is waiting to start: ContainerCreating"
I0427 16:15:37.360162 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"router\" in pod \"router-default-68d758b845-m72wn\" is waiting to start: ContainerCreating"
I0427 16:15:37.360190 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for router container router-default-7b96bdc6c7-5kn6c pod in namespace openshift-ingress (previous: false).
I0427 16:15:37.560675 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for router-default-7b96bdc6c7-5kn6c pod in namespace openshift-ingress for failing operator router (previous: false): "container \"router\" in pod \"router-default-7b96bdc6c7-5kn6c\" is waiting to start: ContainerCreating"
I0427 16:15:37.560689 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"router\" in pod \"router-default-7b96bdc6c7-5kn6c\" is waiting to start: ContainerCreating"
I0427 16:15:37.560736 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for router container router-default-7b96bdc6c7-gdfhn pod in namespace openshift-ingress (previous: false).
I0427 16:15:37.755929 1 gather_cluster_operator_pods_and_events.go:280] Error: "log buffer is empty"
I0427 16:15:37.755944 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for serve-healthcheck-canary container ingress-canary-rgcxp pod in namespace openshift-ingress-canary (previous: false).
I0427 16:15:37.960291 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for ingress-canary-rgcxp pod in namespace openshift-ingress-canary for failing operator serve-healthcheck-canary (previous: false): "container \"serve-healthcheck-canary\" in pod \"ingress-canary-rgcxp\" is waiting to start: ContainerCreating"
I0427 16:15:37.960305 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"serve-healthcheck-canary\" in pod \"ingress-canary-rgcxp\" is waiting to start: ContainerCreating"
I0427 16:15:37.960334 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for kube-multus container multus-6gjsz pod in namespace openshift-multus (previous: true).
I0427 16:15:38.161622 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for kube-multus container multus-6gjsz pod in namespace openshift-multus (previous: false).
I0427 16:15:38.361541 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for kube-multus container multus-7x265 pod in namespace openshift-multus (previous: true).
I0427 16:15:38.562253 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for kube-multus container multus-7x265 pod in namespace openshift-multus (previous: false).
I0427 16:15:38.762556 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for egress-router-binary-copy container multus-additional-cni-plugins-87c64 pod in namespace openshift-multus (previous: false).
I0427 16:15:38.960880 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for cni-plugins container multus-additional-cni-plugins-87c64 pod in namespace openshift-multus (previous: false).
I0427 16:15:39.160412 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for bond-cni-plugin container multus-additional-cni-plugins-87c64 pod in namespace openshift-multus (previous: false).
I0427 16:15:39.361508 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for routeoverride-cni container multus-additional-cni-plugins-87c64 pod in namespace openshift-multus (previous: false).
I0427 16:15:39.560842 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for whereabouts-cni-bincopy container multus-additional-cni-plugins-87c64 pod in namespace openshift-multus (previous: false).
I0427 16:15:39.762299 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for whereabouts-cni container multus-additional-cni-plugins-87c64 pod in namespace openshift-multus (previous: false).
I0427 16:15:39.961110 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for kube-multus-additional-cni-plugins container multus-additional-cni-plugins-87c64 pod in namespace openshift-multus (previous: false).
I0427 16:15:40.160541 1 gather_cluster_operator_pods_and_events.go:280] Error: "log buffer is empty"
I0427 16:15:40.160560 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for egress-router-binary-copy container multus-additional-cni-plugins-8sxrc pod in namespace openshift-multus (previous: false).
I0427 16:15:40.361738 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for cni-plugins container multus-additional-cni-plugins-8sxrc pod in namespace openshift-multus (previous: false).
I0427 16:15:40.561839 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for bond-cni-plugin container multus-additional-cni-plugins-8sxrc pod in namespace openshift-multus (previous: false).
I0427 16:15:40.761145 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for routeoverride-cni container multus-additional-cni-plugins-8sxrc pod in namespace openshift-multus (previous: false).
I0427 16:15:40.961303 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for whereabouts-cni-bincopy container multus-additional-cni-plugins-8sxrc pod in namespace openshift-multus (previous: false).
I0427 16:15:41.162309 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for whereabouts-cni container multus-additional-cni-plugins-8sxrc pod in namespace openshift-multus (previous: false).
I0427 16:15:41.361660 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for kube-multus-additional-cni-plugins container multus-additional-cni-plugins-8sxrc pod in namespace openshift-multus (previous: false).
I0427 16:15:41.561581 1 gather_cluster_operator_pods_and_events.go:280] Error: "log buffer is empty"
I0427 16:15:41.561604 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for egress-router-binary-copy container multus-additional-cni-plugins-9gvrd pod in namespace openshift-multus (previous: false).
I0427 16:15:41.761758 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for cni-plugins container multus-additional-cni-plugins-9gvrd pod in namespace openshift-multus (previous: false).
I0427 16:15:41.962633 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for bond-cni-plugin container multus-additional-cni-plugins-9gvrd pod in namespace openshift-multus (previous: false).
I0427 16:15:42.162651 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for routeoverride-cni container multus-additional-cni-plugins-9gvrd pod in namespace openshift-multus (previous: false).
I0427 16:15:42.364798 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for whereabouts-cni-bincopy container multus-additional-cni-plugins-9gvrd pod in namespace openshift-multus (previous: false).
I0427 16:15:42.561825 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for whereabouts-cni container multus-additional-cni-plugins-9gvrd pod in namespace openshift-multus (previous: false).
I0427 16:15:42.765921 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for kube-multus-additional-cni-plugins container multus-additional-cni-plugins-9gvrd pod in namespace openshift-multus (previous: false).
I0427 16:15:42.961960 1 gather_cluster_operator_pods_and_events.go:280] Error: "log buffer is empty"
I0427 16:15:42.962012 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for kube-multus container multus-gswlm pod in namespace openshift-multus (previous: true).
I0427 16:15:43.160952 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for kube-multus container multus-gswlm pod in namespace openshift-multus (previous: false).
I0427 16:15:43.370663 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for network-metrics-daemon container network-metrics-daemon-mxqv4 pod in namespace openshift-multus (previous: false).
I0427 16:15:43.566256 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for network-metrics-daemon-mxqv4 pod in namespace openshift-multus for failing operator network-metrics-daemon (previous: false): "container \"network-metrics-daemon\" in pod \"network-metrics-daemon-mxqv4\" is waiting to start: ContainerCreating"
I0427 16:15:43.566282 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"network-metrics-daemon\" in pod \"network-metrics-daemon-mxqv4\" is waiting to start: ContainerCreating"
I0427 16:15:43.566339 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for kube-rbac-proxy container network-metrics-daemon-mxqv4 pod in namespace openshift-multus (previous: false).
I0427 16:15:43.746634 1 tasks_processing.go:74] worker 47 stopped.
I0427 16:15:43.746727 1 recorder.go:75] Recording config/installplans with fingerprint=7b887df561a3a9e6ef0dc672845aa5d56e348505006b7496d3a2f83892b0c95b
I0427 16:15:43.746876 1 gather.go:177] gatherer "clusterconfig" function "install_plans" took 12.449583103s to process 1 records
I0427 16:15:43.759939 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for network-metrics-daemon-mxqv4 pod in namespace openshift-multus for failing operator kube-rbac-proxy (previous: false): "container \"kube-rbac-proxy\" in pod \"network-metrics-daemon-mxqv4\" is waiting to start: ContainerCreating"
I0427 16:15:43.759954 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"kube-rbac-proxy\" in pod \"network-metrics-daemon-mxqv4\" is waiting to start: ContainerCreating"
I0427 16:15:43.759999 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for network-metrics-daemon container network-metrics-daemon-v757n pod in namespace openshift-multus (previous: false).
I0427 16:15:43.963009 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for network-metrics-daemon-v757n pod in namespace openshift-multus for failing operator network-metrics-daemon (previous: false): "container \"network-metrics-daemon\" in pod \"network-metrics-daemon-v757n\" is waiting to start: ContainerCreating"
I0427 16:15:43.963027 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"network-metrics-daemon\" in pod \"network-metrics-daemon-v757n\" is waiting to start: ContainerCreating"
I0427 16:15:43.963035 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for kube-rbac-proxy container network-metrics-daemon-v757n pod in namespace openshift-multus (previous: false).
I0427 16:15:44.161197 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for network-metrics-daemon-v757n pod in namespace openshift-multus for failing operator kube-rbac-proxy (previous: false): "container \"kube-rbac-proxy\" in pod \"network-metrics-daemon-v757n\" is waiting to start: ContainerCreating"
I0427 16:15:44.161214 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"kube-rbac-proxy\" in pod \"network-metrics-daemon-v757n\" is waiting to start: ContainerCreating"
I0427 16:15:44.161245 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for network-metrics-daemon container network-metrics-daemon-w7pl5 pod in namespace openshift-multus (previous: false).
I0427 16:15:44.208615 1 configmapobserver.go:84] configmaps "insights-config" not found
I0427 16:15:44.361192 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for network-metrics-daemon-w7pl5 pod in namespace openshift-multus for failing operator network-metrics-daemon (previous: false): "container \"network-metrics-daemon\" in pod \"network-metrics-daemon-w7pl5\" is waiting to start: ContainerCreating"
I0427 16:15:44.361205 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"network-metrics-daemon\" in pod \"network-metrics-daemon-w7pl5\" is waiting to start: ContainerCreating"
I0427 16:15:44.361213 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for kube-rbac-proxy container network-metrics-daemon-w7pl5 pod in namespace openshift-multus (previous: false).
I0427 16:15:44.409884 1 configmapobserver.go:84] configmaps "insights-config" not found
I0427 16:15:44.504686 1 configmapobserver.go:84] configmaps "insights-config" not found
I0427 16:15:44.508236 1 tasks_processing.go:74] worker 63 stopped.
I0427 16:15:44.508504 1 recorder.go:75] Recording config/serviceaccounts with fingerprint=6f02bbed9776fbc53f719d04062abe458034f9ac95b2963e439b1381b6c8e3c4
I0427 16:15:44.508525 1 gather.go:177] gatherer "clusterconfig" function "service_accounts" took 13.20957986s to process 1 records
I0427 16:15:44.562527 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for network-metrics-daemon-w7pl5 pod in namespace openshift-multus for failing operator kube-rbac-proxy (previous: false): "container \"kube-rbac-proxy\" in pod \"network-metrics-daemon-w7pl5\" is waiting to start: ContainerCreating"
I0427 16:15:44.562543 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"kube-rbac-proxy\" in pod \"network-metrics-daemon-w7pl5\" is waiting to start: ContainerCreating"
I0427 16:15:44.562583 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for ovn-controller container ovnkube-node-frxv9 pod in namespace openshift-ovn-kubernetes (previous: true).
I0427 16:15:44.761982 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for ovnkube-node-frxv9 pod in namespace openshift-ovn-kubernetes for failing operator ovn-controller (previous: true): "previous terminated container \"ovn-controller\" in pod \"ovnkube-node-frxv9\" not found"
I0427 16:15:44.762002 1 gather_cluster_operator_pods_and_events.go:280] Error: "previous terminated container \"ovn-controller\" in pod \"ovnkube-node-frxv9\" not found"
I0427 16:15:44.762014 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for ovn-acl-logging container ovnkube-node-frxv9 pod in namespace openshift-ovn-kubernetes (previous: true).
I0427 16:15:44.961789 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for ovnkube-node-frxv9 pod in namespace openshift-ovn-kubernetes for failing operator ovn-acl-logging (previous: true): "previous terminated container \"ovn-acl-logging\" in pod \"ovnkube-node-frxv9\" not found"
I0427 16:15:44.961806 1 gather_cluster_operator_pods_and_events.go:280] Error: "previous terminated container \"ovn-acl-logging\" in pod \"ovnkube-node-frxv9\" not found"
I0427 16:15:44.961818 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for kube-rbac-proxy-node container ovnkube-node-frxv9 pod in namespace openshift-ovn-kubernetes (previous: true).
I0427 16:15:45.161062 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for ovnkube-node-frxv9 pod in namespace openshift-ovn-kubernetes for failing operator kube-rbac-proxy-node (previous: true): "previous terminated container \"kube-rbac-proxy-node\" in pod \"ovnkube-node-frxv9\" not found"
I0427 16:15:45.161077 1 gather_cluster_operator_pods_and_events.go:280] Error: "previous terminated container \"kube-rbac-proxy-node\" in pod \"ovnkube-node-frxv9\" not found"
I0427 16:15:45.161090 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for kube-rbac-proxy-ovn-metrics container ovnkube-node-frxv9 pod in namespace openshift-ovn-kubernetes (previous: true).
I0427 16:15:45.361124 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for ovnkube-node-frxv9 pod in namespace openshift-ovn-kubernetes for failing operator kube-rbac-proxy-ovn-metrics (previous: true): "previous terminated container \"kube-rbac-proxy-ovn-metrics\" in pod \"ovnkube-node-frxv9\" not found"
I0427 16:15:45.361140 1 gather_cluster_operator_pods_and_events.go:280] Error: "previous terminated container \"kube-rbac-proxy-ovn-metrics\" in pod \"ovnkube-node-frxv9\" not found"
I0427 16:15:45.361152 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for northd container ovnkube-node-frxv9 pod in namespace openshift-ovn-kubernetes (previous: true).
I0427 16:15:45.560829 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for ovnkube-node-frxv9 pod in namespace openshift-ovn-kubernetes for failing operator northd (previous: true): "previous terminated container \"northd\" in pod \"ovnkube-node-frxv9\" not found"
I0427 16:15:45.560844 1 gather_cluster_operator_pods_and_events.go:280] Error: "previous terminated container \"northd\" in pod \"ovnkube-node-frxv9\" not found"
I0427 16:15:45.560853 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for nbdb container ovnkube-node-frxv9 pod in namespace openshift-ovn-kubernetes (previous: true).
I0427 16:15:45.761723 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for ovnkube-node-frxv9 pod in namespace openshift-ovn-kubernetes for failing operator nbdb (previous: true): "previous terminated container \"nbdb\" in pod \"ovnkube-node-frxv9\" not found"
I0427 16:15:45.761737 1 gather_cluster_operator_pods_and_events.go:280] Error: "previous terminated container \"nbdb\" in pod \"ovnkube-node-frxv9\" not found"
I0427 16:15:45.761745 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for sbdb container ovnkube-node-frxv9 pod in namespace openshift-ovn-kubernetes (previous: true).
I0427 16:15:45.961615 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for ovnkube-node-frxv9 pod in namespace openshift-ovn-kubernetes for failing operator sbdb (previous: true): "previous terminated container \"sbdb\" in pod \"ovnkube-node-frxv9\" not found"
I0427 16:15:45.961629 1 gather_cluster_operator_pods_and_events.go:280] Error: "previous terminated container \"sbdb\" in pod \"ovnkube-node-frxv9\" not found"
I0427 16:15:45.961638 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for ovnkube-controller container ovnkube-node-frxv9 pod in namespace openshift-ovn-kubernetes (previous: true).
I0427 16:15:46.164076 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for ovn-controller container ovnkube-node-frxv9 pod in namespace openshift-ovn-kubernetes (previous: false).
I0427 16:15:46.363665 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for ovn-acl-logging container ovnkube-node-frxv9 pod in namespace openshift-ovn-kubernetes (previous: false).
I0427 16:15:46.565501 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for kube-rbac-proxy-node container ovnkube-node-frxv9 pod in namespace openshift-ovn-kubernetes (previous: false).
I0427 16:15:46.764863 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for kube-rbac-proxy-ovn-metrics container ovnkube-node-frxv9 pod in namespace openshift-ovn-kubernetes (previous: false).
I0427 16:15:46.965192 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for northd container ovnkube-node-frxv9 pod in namespace openshift-ovn-kubernetes (previous: false).
I0427 16:15:47.162276 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for nbdb container ovnkube-node-frxv9 pod in namespace openshift-ovn-kubernetes (previous: false).
I0427 16:15:47.362420 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for sbdb container ovnkube-node-frxv9 pod in namespace openshift-ovn-kubernetes (previous: false).
I0427 16:15:47.562120 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for ovnkube-controller container ovnkube-node-frxv9 pod in namespace openshift-ovn-kubernetes (previous: false).
I0427 16:15:47.763668 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for ovn-controller container ovnkube-node-pdllw pod in namespace openshift-ovn-kubernetes (previous: false).
I0427 16:15:47.964824 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for ovn-acl-logging container ovnkube-node-pdllw pod in namespace openshift-ovn-kubernetes (previous: false).
I0427 16:15:48.165440 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for kube-rbac-proxy-node container ovnkube-node-pdllw pod in namespace openshift-ovn-kubernetes (previous: false).
I0427 16:15:48.363360 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for kube-rbac-proxy-ovn-metrics container ovnkube-node-pdllw pod in namespace openshift-ovn-kubernetes (previous: false).
I0427 16:15:48.563676 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for northd container ovnkube-node-pdllw pod in namespace openshift-ovn-kubernetes (previous: false).
I0427 16:15:48.762277 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for nbdb container ovnkube-node-pdllw pod in namespace openshift-ovn-kubernetes (previous: false).
I0427 16:15:48.961537 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for sbdb container ovnkube-node-pdllw pod in namespace openshift-ovn-kubernetes (previous: false).
I0427 16:15:49.161917 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for ovnkube-controller container ovnkube-node-pdllw pod in namespace openshift-ovn-kubernetes (previous: false).
I0427 16:15:49.363636 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for ovn-controller container ovnkube-node-z88g7 pod in namespace openshift-ovn-kubernetes (previous: true).
I0427 16:15:49.561706 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for ovnkube-node-z88g7 pod in namespace openshift-ovn-kubernetes for failing operator ovn-controller (previous: true): "previous terminated container \"ovn-controller\" in pod \"ovnkube-node-z88g7\" not found"
I0427 16:15:49.561721 1 gather_cluster_operator_pods_and_events.go:280] Error: "previous terminated container \"ovn-controller\" in pod \"ovnkube-node-z88g7\" not found"
I0427 16:15:49.561733 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for ovn-acl-logging container ovnkube-node-z88g7 pod in namespace openshift-ovn-kubernetes (previous: true).
I0427 16:15:49.761875 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for ovnkube-node-z88g7 pod in namespace openshift-ovn-kubernetes for failing operator ovn-acl-logging (previous: true): "previous terminated container \"ovn-acl-logging\" in pod \"ovnkube-node-z88g7\" not found"
I0427 16:15:49.761890 1 gather_cluster_operator_pods_and_events.go:280] Error: "previous terminated container \"ovn-acl-logging\" in pod \"ovnkube-node-z88g7\" not found"
I0427 16:15:49.761898 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for kube-rbac-proxy-node container ovnkube-node-z88g7 pod in namespace openshift-ovn-kubernetes (previous: true).
I0427 16:15:49.962392 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for ovnkube-node-z88g7 pod in namespace openshift-ovn-kubernetes for failing operator kube-rbac-proxy-node (previous: true): "previous terminated container \"kube-rbac-proxy-node\" in pod \"ovnkube-node-z88g7\" not found"
I0427 16:15:49.962406 1 gather_cluster_operator_pods_and_events.go:280] Error: "previous terminated container \"kube-rbac-proxy-node\" in pod \"ovnkube-node-z88g7\" not found"
I0427 16:15:49.962415 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for kube-rbac-proxy-ovn-metrics container ovnkube-node-z88g7 pod in namespace openshift-ovn-kubernetes (previous: true).
I0427 16:15:50.161678 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for ovnkube-node-z88g7 pod in namespace openshift-ovn-kubernetes for failing operator kube-rbac-proxy-ovn-metrics (previous: true): "previous terminated container \"kube-rbac-proxy-ovn-metrics\" in pod \"ovnkube-node-z88g7\" not found"
I0427 16:15:50.161691 1 gather_cluster_operator_pods_and_events.go:280] Error: "previous terminated container \"kube-rbac-proxy-ovn-metrics\" in pod \"ovnkube-node-z88g7\" not found"
I0427 16:15:50.161700 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for northd container ovnkube-node-z88g7 pod in namespace openshift-ovn-kubernetes (previous: true).
I0427 16:15:50.365753 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for ovnkube-node-z88g7 pod in namespace openshift-ovn-kubernetes for failing operator northd (previous: true): "previous terminated container \"northd\" in pod \"ovnkube-node-z88g7\" not found"
I0427 16:15:50.365766 1 gather_cluster_operator_pods_and_events.go:280] Error: "previous terminated container \"northd\" in pod \"ovnkube-node-z88g7\" not found"
I0427 16:15:50.365775 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for nbdb container ovnkube-node-z88g7 pod in namespace openshift-ovn-kubernetes (previous: true).
I0427 16:15:50.561714 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for ovnkube-node-z88g7 pod in namespace openshift-ovn-kubernetes for failing operator nbdb (previous: true): "previous terminated container \"nbdb\" in pod \"ovnkube-node-z88g7\" not found"
I0427 16:15:50.561726 1 gather_cluster_operator_pods_and_events.go:280] Error: "previous terminated container \"nbdb\" in pod \"ovnkube-node-z88g7\" not found"
I0427 16:15:50.561734 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for sbdb container ovnkube-node-z88g7 pod in namespace openshift-ovn-kubernetes (previous: true).
I0427 16:15:50.763472 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for ovnkube-node-z88g7 pod in namespace openshift-ovn-kubernetes for failing operator sbdb (previous: true): "previous terminated container \"sbdb\" in pod \"ovnkube-node-z88g7\" not found"
I0427 16:15:50.763485 1 gather_cluster_operator_pods_and_events.go:280] Error: "previous terminated container \"sbdb\" in pod \"ovnkube-node-z88g7\" not found"
I0427 16:15:50.763493 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for ovnkube-controller container ovnkube-node-z88g7 pod in namespace openshift-ovn-kubernetes (previous: true).
I0427 16:15:50.963911 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for ovn-controller container ovnkube-node-z88g7 pod in namespace openshift-ovn-kubernetes (previous: false).
I0427 16:15:51.164554 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for ovn-acl-logging container ovnkube-node-z88g7 pod in namespace openshift-ovn-kubernetes (previous: false).
I0427 16:15:51.363579 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for kube-rbac-proxy-node container ovnkube-node-z88g7 pod in namespace openshift-ovn-kubernetes (previous: false).
I0427 16:15:51.565390 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for kube-rbac-proxy-ovn-metrics container ovnkube-node-z88g7 pod in namespace openshift-ovn-kubernetes (previous: false).
I0427 16:15:51.764099 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for northd container ovnkube-node-z88g7 pod in namespace openshift-ovn-kubernetes (previous: false).
I0427 16:15:51.962292 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for nbdb container ovnkube-node-z88g7 pod in namespace openshift-ovn-kubernetes (previous: false).
I0427 16:15:52.162328 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for sbdb container ovnkube-node-z88g7 pod in namespace openshift-ovn-kubernetes (previous: false).
I0427 16:15:52.363840 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for ovnkube-controller container ovnkube-node-z88g7 pod in namespace openshift-ovn-kubernetes (previous: false).
I0427 16:15:52.564321 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for check-endpoints container network-check-source-6b8cd5b79b-mpprf pod in namespace openshift-network-diagnostics (previous: false).
I0427 16:15:52.761590 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for network-check-target-container container network-check-target-6m6gk pod in namespace openshift-network-diagnostics (previous: false).
I0427 16:15:52.961135 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for network-check-target-6m6gk pod in namespace openshift-network-diagnostics for failing operator network-check-target-container (previous: false): "container \"network-check-target-container\" in pod \"network-check-target-6m6gk\" is waiting to start: ContainerCreating"
I0427 16:15:52.961148 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"network-check-target-container\" in pod \"network-check-target-6m6gk\" is waiting to start: ContainerCreating"
I0427 16:15:52.961176 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for network-check-target-container container network-check-target-kj26v pod in namespace openshift-network-diagnostics (previous: false).
I0427 16:15:53.159933 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for network-check-target-kj26v pod in namespace openshift-network-diagnostics for failing operator network-check-target-container (previous: false): "container \"network-check-target-container\" in pod \"network-check-target-kj26v\" is waiting to start: ContainerCreating"
I0427 16:15:53.159950 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"network-check-target-container\" in pod \"network-check-target-kj26v\" is waiting to start: ContainerCreating"
I0427 16:15:53.159991 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for network-check-target-container container network-check-target-xlmvm pod in namespace openshift-network-diagnostics (previous: false).
I0427 16:15:53.361615 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for network-check-target-xlmvm pod in namespace openshift-network-diagnostics for failing operator network-check-target-container (previous: false): "container \"network-check-target-container\" in pod \"network-check-target-xlmvm\" is waiting to start: ContainerCreating"
I0427 16:15:53.361633 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"network-check-target-container\" in pod \"network-check-target-xlmvm\" is waiting to start: ContainerCreating"
I0427 16:15:53.361661 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for networking-console-plugin container networking-console-plugin-6ddbfdf749-6577r pod in namespace openshift-network-console (previous: false).
I0427 16:15:53.561042 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for networking-console-plugin-6ddbfdf749-6577r pod in namespace openshift-network-console for failing operator networking-console-plugin (previous: false): "container \"networking-console-plugin\" in pod \"networking-console-plugin-6ddbfdf749-6577r\" is waiting to start: ContainerCreating"
I0427 16:15:53.561059 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"networking-console-plugin\" in pod \"networking-console-plugin-6ddbfdf749-6577r\" is waiting to start: ContainerCreating"
I0427 16:15:53.561088 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for networking-console-plugin container networking-console-plugin-6ddbfdf749-wcwwv pod in namespace openshift-network-console (previous: false).
I0427 16:15:53.760910 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for networking-console-plugin-6ddbfdf749-wcwwv pod in namespace openshift-network-console for failing operator networking-console-plugin (previous: false): "container \"networking-console-plugin\" in pod \"networking-console-plugin-6ddbfdf749-wcwwv\" is waiting to start: ContainerCreating"
I0427 16:15:53.760925 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"networking-console-plugin\" in pod \"networking-console-plugin-6ddbfdf749-wcwwv\" is waiting to start: ContainerCreating"
I0427 16:15:53.760936 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for iptables-alerter container iptables-alerter-g55hx pod in namespace openshift-network-operator (previous: false).
I0427 16:15:53.961337 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for iptables-alerter container iptables-alerter-j7s86 pod in namespace openshift-network-operator (previous: false).
I0427 16:15:54.160829 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for iptables-alerter container iptables-alerter-vmcwr pod in namespace openshift-network-operator (previous: false).
I0427 16:15:54.361374 1 tasks_processing.go:74] worker 19 stopped.
I0427 16:15:54.361458 1 recorder.go:75] Recording events/openshift-dns-operator with fingerprint=85b7c1795f091653f36a28901b0aa54afe5ec26a36f0babd3470419a7dca872c I0427 16:15:54.361499 1 recorder.go:75] Recording events/openshift-dns with fingerprint=23aeb5bc38ac06ad48a4ab50b4e4aadb1ba65834f31af8b7cd6d6fd31b639890 I0427 16:15:54.361576 1 recorder.go:75] Recording events/openshift-image-registry with fingerprint=2aacbde4a099d09f41079835b5fe163d0970b841f1f37db316447c1ca6e51156 I0427 16:15:54.361603 1 recorder.go:75] Recording events/openshift-ingress-operator with fingerprint=67983172ca5396f32a92be26b8e1a8cfaa253a8b059cb2cf2dbfa3fd974da21c I0427 16:15:54.361645 1 recorder.go:75] Recording events/openshift-ingress with fingerprint=1e7352adff24f34721c7a733e85867dcf2af7ac48a99cb9c349b254fbfbf6ea5 I0427 16:15:54.361656 1 recorder.go:75] Recording events/openshift-ingress-canary with fingerprint=2e83c97580877657b4586160fab7130f2e910571439d8961aa56f44514e7ee29 I0427 16:15:54.361839 1 recorder.go:75] Recording events/openshift-multus with fingerprint=f97a3a544c6f3c4b2cd89d2ed8a95b94d9142d6baedbd7e7a81b547b347da540 I0427 16:15:54.361962 1 recorder.go:75] Recording events/openshift-ovn-kubernetes with fingerprint=5530f5a3bb56af173b3c969bfbec88675502a86b113860fd2f3c0d2cd96f7db0 I0427 16:15:54.362022 1 recorder.go:75] Recording events/openshift-network-diagnostics with fingerprint=6119a9708c233b4a429fa75d13dbf774dcd5ec58ab1cc72953c2a4f21b5fc657 I0427 16:15:54.362030 1 recorder.go:75] Recording events/openshift-network-node-identity with fingerprint=267a6cba7aaab250561a4be8b267f4cdc9c010735fc47bba6d6235166c436278 I0427 16:15:54.362047 1 recorder.go:75] Recording events/openshift-network-console with fingerprint=c98e9f10b45561faf15e7d15f54db4666b2971c05f8514d758f545ababfa0abd I0427 16:15:54.362097 1 recorder.go:75] Recording events/openshift-network-operator with fingerprint=f980cd94935a926671b0a808c852c505c22520564bc1e62f07c615490ee363f7 I0427 16:15:54.362234 1 recorder.go:75] 
Recording config/pod/openshift-image-registry/image-registry-559586dcb5-vwrqm with fingerprint=7185cd032edd9e2a903021350df3bb3a34944039e12cb31be6d41b50229f205a I0427 16:15:54.362321 1 recorder.go:75] Recording config/pod/openshift-image-registry/image-registry-7fb4cb8694-8np7m with fingerprint=e1c23c641353587fa2cb7bd7947c75a11e39a7e7abc44042b971f7438276c944 I0427 16:15:54.362416 1 recorder.go:75] Recording config/pod/openshift-image-registry/image-registry-7fb4cb8694-kql8m with fingerprint=643329a28d02822180c19b1eaa3a02a5ff027e055c02d964d3645c77ab6b62dc I0427 16:15:54.362514 1 recorder.go:75] Recording config/pod/openshift-ingress/router-default-68d758b845-m72wn with fingerprint=cf82560033720c18c804fe26fe38bf371bd29cdf8c2383ea1a090c5f02cd90f6 I0427 16:15:54.362600 1 recorder.go:75] Recording config/pod/openshift-ingress/router-default-7b96bdc6c7-5kn6c with fingerprint=1bf0b917a0acf4e15ac48f8037be70d617228fd78dbfa83ff11e00a508f7d1a5 I0427 16:15:54.362669 1 recorder.go:75] Recording config/pod/openshift-ingress/router-default-7b96bdc6c7-gdfhn with fingerprint=a13172742bb9f6c4540151b60ddfa3ae3676b12eaa16727cd6bb7242b3395b48 I0427 16:15:54.362785 1 recorder.go:75] Recording config/pod/openshift-multus/multus-6gjsz with fingerprint=3b23d91f68e8cf8f7e22583a9191c3b4d6f49019fca6ba7b868c288b741c7e59 E0427 16:15:54.362801 1 gather.go:161] error recording gatherer "clusterconfig" function "operators_pods_and_events" result "config/pod/openshift-multus/multus-6gjsz.json" because of the error: the record with the same name "config/pod/openshift-multus/multus-6gjsz.json" was already recorded and had the fingerprint "3b23d91f68e8cf8f7e22583a9191c3b4d6f49019fca6ba7b868c288b741c7e59", overwriting with the record having fingerprint "3b23d91f68e8cf8f7e22583a9191c3b4d6f49019fca6ba7b868c288b741c7e59" W0427 16:15:54.362812 1 gather.go:155] issue recording gatherer "clusterconfig" function "operators_pods_and_events" result "config/pod/openshift-multus/multus-6gjsz.json" because of the 
warning: warning: the record with the same fingerprint "3b23d91f68e8cf8f7e22583a9191c3b4d6f49019fca6ba7b868c288b741c7e59" was already recorded at path "config/pod/openshift-multus/multus-6gjsz.json", recording another one with a different path "config/pod/openshift-multus/multus-6gjsz.json" I0427 16:15:54.362828 1 recorder.go:75] Recording config/pod/openshift-multus/logs/multus-6gjsz/kube-multus_previous.log with fingerprint=b4227e2729860e74a62dfb67d5b3df1d1212aaa20e9a842d15b9a8b8557e82db I0427 16:15:54.362849 1 recorder.go:75] Recording config/pod/openshift-multus/logs/multus-6gjsz/kube-multus_current.log with fingerprint=b4227e2729860e74a62dfb67d5b3df1d1212aaa20e9a842d15b9a8b8557e82db W0427 16:15:54.362861 1 gather.go:155] issue recording gatherer "clusterconfig" function "operators_pods_and_events" result "config/pod/openshift-multus/logs/multus-6gjsz/kube-multus_current.log" because of the warning: warning: the record with the same fingerprint "b4227e2729860e74a62dfb67d5b3df1d1212aaa20e9a842d15b9a8b8557e82db" was already recorded at path "config/pod/openshift-multus/logs/multus-6gjsz/kube-multus_previous.log", recording another one with a different path "config/pod/openshift-multus/logs/multus-6gjsz/kube-multus_current.log" I0427 16:15:54.362956 1 recorder.go:75] Recording config/pod/openshift-multus/multus-7x265 with fingerprint=5a7efd5706c1b979a9687b696eff1e771a93dc5d71244c3505c59ea4119da6a0 E0427 16:15:54.362981 1 gather.go:161] error recording gatherer "clusterconfig" function "operators_pods_and_events" result "config/pod/openshift-multus/multus-7x265.json" because of the error: the record with the same name "config/pod/openshift-multus/multus-7x265.json" was already recorded and had the fingerprint "5a7efd5706c1b979a9687b696eff1e771a93dc5d71244c3505c59ea4119da6a0", overwriting with the record having fingerprint "5a7efd5706c1b979a9687b696eff1e771a93dc5d71244c3505c59ea4119da6a0" W0427 16:15:54.362989 1 gather.go:155] issue recording gatherer 
"clusterconfig" function "operators_pods_and_events" result "config/pod/openshift-multus/multus-7x265.json" because of the warning: warning: the record with the same fingerprint "5a7efd5706c1b979a9687b696eff1e771a93dc5d71244c3505c59ea4119da6a0" was already recorded at path "config/pod/openshift-multus/multus-7x265.json", recording another one with a different path "config/pod/openshift-multus/multus-7x265.json" I0427 16:15:54.362997 1 recorder.go:75] Recording config/pod/openshift-multus/logs/multus-7x265/kube-multus_previous.log with fingerprint=fc42d7c90dc7714b7ef5c8c0f2bbd1ae94de041bf0a00748d4118bcec86fe89f I0427 16:15:54.363004 1 recorder.go:75] Recording config/pod/openshift-multus/logs/multus-7x265/kube-multus_current.log with fingerprint=fc42d7c90dc7714b7ef5c8c0f2bbd1ae94de041bf0a00748d4118bcec86fe89f W0427 16:15:54.363011 1 gather.go:155] issue recording gatherer "clusterconfig" function "operators_pods_and_events" result "config/pod/openshift-multus/logs/multus-7x265/kube-multus_current.log" because of the warning: warning: the record with the same fingerprint "fc42d7c90dc7714b7ef5c8c0f2bbd1ae94de041bf0a00748d4118bcec86fe89f" was already recorded at path "config/pod/openshift-multus/logs/multus-7x265/kube-multus_previous.log", recording another one with a different path "config/pod/openshift-multus/logs/multus-7x265/kube-multus_current.log" I0427 16:15:54.363017 1 recorder.go:75] Recording config/pod/openshift-multus/logs/multus-additional-cni-plugins-87c64/egress-router-binary-copy_current.log with fingerprint=a9a82ec9a26183a9251651226961a3cace7ac20f6406e04585c577e11249f809 I0427 16:15:54.363025 1 recorder.go:75] Recording config/pod/openshift-multus/logs/multus-additional-cni-plugins-87c64/cni-plugins_current.log with fingerprint=f560ce0dc0a4cef0a7d13399ef061fe358246376f4686b20af5b497c6657dd85 I0427 16:15:54.363030 1 recorder.go:75] Recording config/pod/openshift-multus/logs/multus-additional-cni-plugins-87c64/bond-cni-plugin_current.log with 
fingerprint=f4b0f37cc75c5f19193ea9cac06e08b19ba3c207651b1b260c0c7d214eb32935 I0427 16:15:54.363035 1 recorder.go:75] Recording config/pod/openshift-multus/logs/multus-additional-cni-plugins-87c64/routeoverride-cni_current.log with fingerprint=de0fbd527f20d1d9bf4c2e4f72bd92e673d022b8ad34c784148bf16134c14865 I0427 16:15:54.363039 1 recorder.go:75] Recording config/pod/openshift-multus/logs/multus-additional-cni-plugins-87c64/whereabouts-cni-bincopy_current.log with fingerprint=d9e3342b411efcfc85b0dafae67a9e36ae1f722c51a94b593c64570703a88ef5 I0427 16:15:54.363043 1 recorder.go:75] Recording config/pod/openshift-multus/logs/multus-additional-cni-plugins-87c64/whereabouts-cni_current.log with fingerprint=43876f5200e74299b27d2f1d5831ad6f03107b0a1e1814dca9a3712b81b0f1b1 I0427 16:15:54.363048 1 recorder.go:75] Recording config/pod/openshift-multus/logs/multus-additional-cni-plugins-8sxrc/egress-router-binary-copy_current.log with fingerprint=529b5f0e6c31be5c018b76a9c6c428f910c2210096a0c8c92ae4a63376810445 I0427 16:15:54.363053 1 recorder.go:75] Recording config/pod/openshift-multus/logs/multus-additional-cni-plugins-8sxrc/cni-plugins_current.log with fingerprint=a326ad30a625d2cd52882e3b9a9d5ea536fce6f7f55a3e7c37e4e9c0e07db73d I0427 16:15:54.363057 1 recorder.go:75] Recording config/pod/openshift-multus/logs/multus-additional-cni-plugins-8sxrc/bond-cni-plugin_current.log with fingerprint=ae77e94931401224d1d91c5c6671524a7fa6dad87e231e93e78ba9cd04a46d13 I0427 16:15:54.363062 1 recorder.go:75] Recording config/pod/openshift-multus/logs/multus-additional-cni-plugins-8sxrc/routeoverride-cni_current.log with fingerprint=a954720570278bdbbfd02ab0a1e7824d2d2a50bb1f305c7383810aa1b0ee6bb6 I0427 16:15:54.363067 1 recorder.go:75] Recording config/pod/openshift-multus/logs/multus-additional-cni-plugins-8sxrc/whereabouts-cni-bincopy_current.log with fingerprint=ccc7cec800735731d5e543cd32146746086f0bbd73ae87961d71d9197284966d I0427 16:15:54.363071 1 recorder.go:75] Recording 
config/pod/openshift-multus/logs/multus-additional-cni-plugins-8sxrc/whereabouts-cni_current.log with fingerprint=aa43825005c205db995f817005443bccd1c7b80125084f4dea6f09085d290761 I0427 16:15:54.363076 1 recorder.go:75] Recording config/pod/openshift-multus/logs/multus-additional-cni-plugins-9gvrd/egress-router-binary-copy_current.log with fingerprint=c874e4734aa1c97fcdfaf5f07d677d27f91a0a55225a9b3f410fdc8c93d0b2b8 I0427 16:15:54.363081 1 recorder.go:75] Recording config/pod/openshift-multus/logs/multus-additional-cni-plugins-9gvrd/cni-plugins_current.log with fingerprint=27183b9c7d8674b614bd99893743af5b3c84f6d228351af9ed03f0c13328f436 I0427 16:15:54.363086 1 recorder.go:75] Recording config/pod/openshift-multus/logs/multus-additional-cni-plugins-9gvrd/bond-cni-plugin_current.log with fingerprint=7d1ddd02d8051277ada4ccb71e2f0639f2012107675ad5098143761edf22c670 I0427 16:15:54.363090 1 recorder.go:75] Recording config/pod/openshift-multus/logs/multus-additional-cni-plugins-9gvrd/routeoverride-cni_current.log with fingerprint=a32658f5acbea03e07c5aaf0dc0e57914d4d4061716e0d616da490b77ca3561e I0427 16:15:54.363095 1 recorder.go:75] Recording config/pod/openshift-multus/logs/multus-additional-cni-plugins-9gvrd/whereabouts-cni-bincopy_current.log with fingerprint=39902b5e5fd9232643de7e7baf7cfbcfbf930293dbfe872721eadcd822510191 I0427 16:15:54.363099 1 recorder.go:75] Recording config/pod/openshift-multus/logs/multus-additional-cni-plugins-9gvrd/whereabouts-cni_current.log with fingerprint=3699aaa116d2734f8408da5120cb8575dd356d783426480f50564682cb672d49 I0427 16:15:54.363194 1 recorder.go:75] Recording config/pod/openshift-multus/multus-gswlm with fingerprint=0a80524693195293a7550786551d98b39c179be912f0939a844066226e18a4fe E0427 16:15:54.363204 1 gather.go:161] error recording gatherer "clusterconfig" function "operators_pods_and_events" result "config/pod/openshift-multus/multus-gswlm.json" because of the error: the record with the same name 
"config/pod/openshift-multus/multus-gswlm.json" was already recorded and had the fingerprint "0a80524693195293a7550786551d98b39c179be912f0939a844066226e18a4fe", overwriting with the record having fingerprint "0a80524693195293a7550786551d98b39c179be912f0939a844066226e18a4fe" W0427 16:15:54.363212 1 gather.go:155] issue recording gatherer "clusterconfig" function "operators_pods_and_events" result "config/pod/openshift-multus/multus-gswlm.json" because of the warning: warning: the record with the same fingerprint "0a80524693195293a7550786551d98b39c179be912f0939a844066226e18a4fe" was already recorded at path "config/pod/openshift-multus/multus-gswlm.json", recording another one with a different path "config/pod/openshift-multus/multus-gswlm.json" I0427 16:15:54.363222 1 recorder.go:75] Recording config/pod/openshift-multus/logs/multus-gswlm/kube-multus_previous.log with fingerprint=d99d2dd083cd4bee384e173771709269a8ed38c60cb1fd7026762214c5927929 I0427 16:15:54.363324 1 recorder.go:75] Recording config/pod/openshift-multus/logs/multus-gswlm/kube-multus_current.log with fingerprint=d655faf1c34922e80455b2cc329e90e500920279895137eca9179725f58babfb I0427 16:15:54.363388 1 recorder.go:75] Recording config/pod/openshift-multus/network-metrics-daemon-mxqv4 with fingerprint=3ea5e03a6d78323dc3b01e0ddbda7cd3c45a56239b2fd6e30b074d65d74fb045 I0427 16:15:54.363446 1 recorder.go:75] Recording config/pod/openshift-multus/network-metrics-daemon-v757n with fingerprint=560e940418b091fd60d9560a7cf4d972e85709abbc0cde02fab1bee17864454a I0427 16:15:54.363502 1 recorder.go:75] Recording config/pod/openshift-multus/network-metrics-daemon-w7pl5 with fingerprint=89c56815e4671edd5e235f7893e144b2cee619a434155152e246a9ee6125a98b I0427 16:15:54.363735 1 recorder.go:75] Recording config/pod/openshift-ovn-kubernetes/ovnkube-node-frxv9 with fingerprint=74288eb29d48685af131edf2e0ec624dfd43aae07f708a22f78a399cdd2d0c12 E0427 16:15:54.363745 1 gather.go:161] error recording gatherer "clusterconfig" 
function "operators_pods_and_events" result "config/pod/openshift-ovn-kubernetes/ovnkube-node-frxv9.json" because of the error: the record with the same name "config/pod/openshift-ovn-kubernetes/ovnkube-node-frxv9.json" was already recorded and had the fingerprint "74288eb29d48685af131edf2e0ec624dfd43aae07f708a22f78a399cdd2d0c12", overwriting with the record having fingerprint "74288eb29d48685af131edf2e0ec624dfd43aae07f708a22f78a399cdd2d0c12" W0427 16:15:54.363753 1 gather.go:155] issue recording gatherer "clusterconfig" function "operators_pods_and_events" result "config/pod/openshift-ovn-kubernetes/ovnkube-node-frxv9.json" because of the warning: warning: the record with the same fingerprint "74288eb29d48685af131edf2e0ec624dfd43aae07f708a22f78a399cdd2d0c12" was already recorded at path "config/pod/openshift-ovn-kubernetes/ovnkube-node-frxv9.json", recording another one with a different path "config/pod/openshift-ovn-kubernetes/ovnkube-node-frxv9.json" I0427 16:15:54.363840 1 recorder.go:75] Recording config/pod/openshift-ovn-kubernetes/logs/ovnkube-node-frxv9/ovnkube-controller_previous.log with fingerprint=faf582711cdf6e5acc8606b36fca89caa89a4298ac1f5d8a9cbf06d177a0f7e8 I0427 16:15:54.363885 1 recorder.go:75] Recording config/pod/openshift-ovn-kubernetes/logs/ovnkube-node-frxv9/ovn-controller_current.log with fingerprint=a51449ce19b72af6657fbdc2cf21e8a54974264acb65507050800191178ad6b8 I0427 16:15:54.363908 1 recorder.go:75] Recording config/pod/openshift-ovn-kubernetes/logs/ovnkube-node-frxv9/ovn-acl-logging_current.log with fingerprint=69a152cb73c78d36d6b1d1447391e985486541d32962bed98e65a90a2f09b157 I0427 16:15:54.363929 1 recorder.go:75] Recording config/pod/openshift-ovn-kubernetes/logs/ovnkube-node-frxv9/kube-rbac-proxy-node_current.log with fingerprint=e2c614f4e50a5ababab8601ef16d8886ee84f6f0f331b30a977ebd13f141e24e I0427 16:15:54.363952 1 recorder.go:75] Recording 
config/pod/openshift-ovn-kubernetes/logs/ovnkube-node-frxv9/kube-rbac-proxy-ovn-metrics_current.log with fingerprint=abf362fa9a7eabad3f87ef0f4dd264423e31a103e3c443e75f919ba2be5b8701 I0427 16:15:54.363985 1 recorder.go:75] Recording config/pod/openshift-ovn-kubernetes/logs/ovnkube-node-frxv9/northd_current.log with fingerprint=92161e80cebb3f558510d09641147304421643b3b3d359c1b94b93c3a1bd8c68 I0427 16:15:54.364000 1 recorder.go:75] Recording config/pod/openshift-ovn-kubernetes/logs/ovnkube-node-frxv9/nbdb_current.log with fingerprint=07dcae7c4367f95331deb99e51c126e687a0a670d43e13bfa089a5f7a191db7f I0427 16:15:54.364010 1 recorder.go:75] Recording config/pod/openshift-ovn-kubernetes/logs/ovnkube-node-frxv9/sbdb_current.log with fingerprint=1a13e6e711577c9677598d1b5fd6bda89b4b3b4cdeed705d5e8016f97c285810 I0427 16:15:54.364068 1 recorder.go:75] Recording config/pod/openshift-ovn-kubernetes/logs/ovnkube-node-frxv9/ovnkube-controller_current.log with fingerprint=2ce3a24ff23dc6345df91c7f42241547e674065ae0fb235e2477002ce5c3e60a I0427 16:15:54.364122 1 recorder.go:75] Recording config/pod/openshift-ovn-kubernetes/logs/ovnkube-node-pdllw/ovn-controller_current.log with fingerprint=097bd47d3f90f3ce6c6efb09a5bd1f6346e694d60f3ffdbda9ce30dfddb280d6 I0427 16:15:54.364143 1 recorder.go:75] Recording config/pod/openshift-ovn-kubernetes/logs/ovnkube-node-pdllw/ovn-acl-logging_current.log with fingerprint=2c3223f4115671ac1eac6f9ff5a6c305613c6dfa3c646f92d1a33d5cfb13a2f9 I0427 16:15:54.364168 1 recorder.go:75] Recording config/pod/openshift-ovn-kubernetes/logs/ovnkube-node-pdllw/kube-rbac-proxy-node_current.log with fingerprint=eac68f1c38d15fa40e5b82996a0afc43591733cc1b8d14259e0fba6d1d6cd51d I0427 16:15:54.364193 1 recorder.go:75] Recording config/pod/openshift-ovn-kubernetes/logs/ovnkube-node-pdllw/kube-rbac-proxy-ovn-metrics_current.log with fingerprint=ece09cd33d3ff7abf099af64f336834e55a598a3ceb83513ca0dc39b497478d9 I0427 16:15:54.364213 1 recorder.go:75] Recording 
config/pod/openshift-ovn-kubernetes/logs/ovnkube-node-pdllw/northd_current.log with fingerprint=183788ffa62eb7bd7688b25e9644a1daa968cac97469c02a0d59078d159845ec I0427 16:15:54.364225 1 recorder.go:75] Recording config/pod/openshift-ovn-kubernetes/logs/ovnkube-node-pdllw/nbdb_current.log with fingerprint=61a8f354881b22d2a45d11f30fdf4e625da7337e1e15f41255a605a1207deac8 I0427 16:15:54.364235 1 recorder.go:75] Recording config/pod/openshift-ovn-kubernetes/logs/ovnkube-node-pdllw/sbdb_current.log with fingerprint=3b9a5b6152122f2605bf82ad4115a204fdf7a6ba36f2cb12df3784a59404e24f I0427 16:15:54.364340 1 recorder.go:75] Recording config/pod/openshift-ovn-kubernetes/logs/ovnkube-node-pdllw/ovnkube-controller_current.log with fingerprint=ece049281f3887c6d0efda8385c6c801da7464a0b11f819d84f8a9d8736019ab I0427 16:15:54.364583 1 recorder.go:75] Recording config/pod/openshift-ovn-kubernetes/ovnkube-node-z88g7 with fingerprint=f2575dd1b857defb5cb14873022fd16d4a6d7b56fd82910d73f3022e021a2771 E0427 16:15:54.364593 1 gather.go:161] error recording gatherer "clusterconfig" function "operators_pods_and_events" result "config/pod/openshift-ovn-kubernetes/ovnkube-node-z88g7.json" because of the error: the record with the same name "config/pod/openshift-ovn-kubernetes/ovnkube-node-z88g7.json" was already recorded and had the fingerprint "f2575dd1b857defb5cb14873022fd16d4a6d7b56fd82910d73f3022e021a2771", overwriting with the record having fingerprint "f2575dd1b857defb5cb14873022fd16d4a6d7b56fd82910d73f3022e021a2771" W0427 16:15:54.364601 1 gather.go:155] issue recording gatherer "clusterconfig" function "operators_pods_and_events" result "config/pod/openshift-ovn-kubernetes/ovnkube-node-z88g7.json" because of the warning: warning: the record with the same fingerprint "f2575dd1b857defb5cb14873022fd16d4a6d7b56fd82910d73f3022e021a2771" was already recorded at path "config/pod/openshift-ovn-kubernetes/ovnkube-node-z88g7.json", recording another one with a different path 
"config/pod/openshift-ovn-kubernetes/ovnkube-node-z88g7.json" I0427 16:15:54.364654 1 recorder.go:75] Recording config/pod/openshift-ovn-kubernetes/logs/ovnkube-node-z88g7/ovnkube-controller_previous.log with fingerprint=9ac33862e5bcf9d0263d99d360f9d3edce1adf7c42ea6225e8a091a9f5785bc2 I0427 16:15:54.364709 1 recorder.go:75] Recording config/pod/openshift-ovn-kubernetes/logs/ovnkube-node-z88g7/ovn-controller_current.log with fingerprint=bdfcff18cc095047314aff8dfa59fa8998d256f2396d8495b339dbaf01aa8eba I0427 16:15:54.364731 1 recorder.go:75] Recording config/pod/openshift-ovn-kubernetes/logs/ovnkube-node-z88g7/ovn-acl-logging_current.log with fingerprint=16dc4b83073b80227d3533f590ba6f2a7a64ce74081f04971c9b7daf8b0f7de0 I0427 16:15:54.364753 1 recorder.go:75] Recording config/pod/openshift-ovn-kubernetes/logs/ovnkube-node-z88g7/kube-rbac-proxy-node_current.log with fingerprint=f01bfb8b08ffb13046c48a888fbc59f798b71e3994ace83d905643d10a8202f4 I0427 16:15:54.364774 1 recorder.go:75] Recording config/pod/openshift-ovn-kubernetes/logs/ovnkube-node-z88g7/kube-rbac-proxy-ovn-metrics_current.log with fingerprint=0c36de78b2d01939f097000653e3a20e7eda481a77042034c0c7810239abf3c6 I0427 16:15:54.364795 1 recorder.go:75] Recording config/pod/openshift-ovn-kubernetes/logs/ovnkube-node-z88g7/northd_current.log with fingerprint=51d3cc9c9b7fd11dec49136fa07b7870672a5434601ed6e180afb791661d9b6d I0427 16:15:54.364808 1 recorder.go:75] Recording config/pod/openshift-ovn-kubernetes/logs/ovnkube-node-z88g7/nbdb_current.log with fingerprint=f6cefb94d9038b596c75f0e1b2ce8b6dcba7eaf95027b0482d1c252047e0b7b8 I0427 16:15:54.364818 1 recorder.go:75] Recording config/pod/openshift-ovn-kubernetes/logs/ovnkube-node-z88g7/sbdb_current.log with fingerprint=78f3e7a8af1f9c419dd346f05cb8d8323de1fc449cab879ad48418edfb0f9ba9 I0427 16:15:54.364879 1 recorder.go:75] Recording config/pod/openshift-ovn-kubernetes/logs/ovnkube-node-z88g7/ovnkube-controller_current.log with 
fingerprint=acc9246df9251f3f489982e340c9c68f52b6746a27750ec31126d2b37d711bad I0427 16:15:54.364897 1 recorder.go:75] Recording config/pod/openshift-network-diagnostics/logs/network-check-source-6b8cd5b79b-mpprf/check-endpoints_current.log with fingerprint=7f0e1bc4a6a5909c4fef27e808ca88d474de008097875f2d3c9940bfefbb2776 I0427 16:15:54.364957 1 recorder.go:75] Recording config/pod/openshift-network-diagnostics/network-check-target-6m6gk with fingerprint=5ece18ed91a6f318653adcec46e055b1e0ceef66f3648a9a5971406db2cde590 I0427 16:15:54.365020 1 recorder.go:75] Recording config/pod/openshift-network-diagnostics/network-check-target-kj26v with fingerprint=cfd2de144d7ce4de6eb0c003df0cd2eacceb4c81a71e62e61cae3aefe6280649 I0427 16:15:54.365092 1 recorder.go:75] Recording config/pod/openshift-network-diagnostics/network-check-target-xlmvm with fingerprint=bc761f82527320700f4e24a728b534f93477cc5dfec974f08606bab26654cdc0 I0427 16:15:54.365159 1 recorder.go:75] Recording config/pod/openshift-network-console/networking-console-plugin-6ddbfdf749-6577r with fingerprint=14d6a979d610166d182cc37d6c02d449c19ff00192c4f53dfb7ec58d891074d6 I0427 16:15:54.365216 1 recorder.go:75] Recording config/pod/openshift-network-console/networking-console-plugin-6ddbfdf749-wcwwv with fingerprint=93f5e688f042e66a5f70a190dba2529badc58a501693d6c1b5650d36283c8061 I0427 16:15:54.365222 1 recorder.go:75] Recording config/pod/openshift-network-operator/logs/iptables-alerter-g55hx/iptables-alerter_current.log with fingerprint=9cc2ab6a3f2d957c8143b1dca114f29e833c928711541bc0d8dc01057891e972 I0427 16:15:54.365227 1 recorder.go:75] Recording config/pod/openshift-network-operator/logs/iptables-alerter-j7s86/iptables-alerter_current.log with fingerprint=af54ea44a78872cc9d4de13ac821b3f6a37c4c6894d42e1bf324980c06a77a64 I0427 16:15:54.365231 1 recorder.go:75] Recording config/pod/openshift-network-operator/logs/iptables-alerter-vmcwr/iptables-alerter_current.log with 
fingerprint=d4c38901ffdf199ed61acb6e4c28ddded4afbd964a189edc5e5bd85d0649d9c1 I0427 16:15:54.365236 1 gather.go:177] gatherer "clusterconfig" function "operators_pods_and_events" took 23.062689807s to process 85 records E0427 16:15:54.365314 1 periodic.go:247] "Unhandled Error" err="clusterconfig failed after 23.07s with: function \"support_secret\" failed with an error, function \"machine_healthchecks\" failed with an error, function \"pod_network_connectivity_checks\" failed with an error, function \"machines\" failed with an error, function \"ingress_certificates\" failed with an error, function \"config_maps\" failed with an error, function \"dvo_metrics\" failed with an error, unable to record function \"operators_pods_and_events\" record \"config/pod/openshift-multus/multus-6gjsz.json\", unable to record function \"operators_pods_and_events\" record \"config/pod/openshift-multus/multus-7x265.json\", unable to record function \"operators_pods_and_events\" record \"config/pod/openshift-multus/multus-gswlm.json\", unable to record function \"operators_pods_and_events\" record \"config/pod/openshift-ovn-kubernetes/ovnkube-node-frxv9.json\", unable to record function \"operators_pods_and_events\" record \"config/pod/openshift-ovn-kubernetes/ovnkube-node-z88g7.json\"" I0427 16:15:54.366422 1 controllerstatus.go:89] name=periodic-clusterconfig healthy=false reason=PeriodicGatherFailed message=Source clusterconfig could not be retrieved: function "support_secret" failed with an error, function "machine_healthchecks" failed with an error, function "pod_network_connectivity_checks" failed with an error, function "machines" failed with an error, function "ingress_certificates" failed with an error, function "config_maps" failed with an error, function "dvo_metrics" failed with an error, unable to record function "operators_pods_and_events" record "config/pod/openshift-multus/multus-6gjsz.json", unable to record function "operators_pods_and_events" record 
"config/pod/openshift-multus/multus-7x265.json", unable to record function "operators_pods_and_events" record "config/pod/openshift-multus/multus-gswlm.json", unable to record function "operators_pods_and_events" record "config/pod/openshift-ovn-kubernetes/ovnkube-node-frxv9.json", unable to record function "operators_pods_and_events" record "config/pod/openshift-ovn-kubernetes/ovnkube-node-z88g7.json" I0427 16:15:54.366435 1 periodic.go:209] Running workloads gatherer I0427 16:15:54.366449 1 tasks_processing.go:45] number of workers: 2 I0427 16:15:54.366456 1 tasks_processing.go:69] worker 1 listening for tasks. I0427 16:15:54.366460 1 tasks_processing.go:71] worker 1 working on workload_info task. I0427 16:15:54.366465 1 tasks_processing.go:69] worker 0 listening for tasks. I0427 16:15:54.366477 1 tasks_processing.go:71] worker 0 working on helmchart_info task. I0427 16:15:54.393499 1 gather_workloads_info.go:278] Loaded pods in 0s, will wait 21s for image data I0427 16:15:54.401821 1 tasks_processing.go:74] worker 0 stopped. 
I0427 16:15:54.401842 1 gather.go:177] gatherer "workloads" function "helmchart_info" took 35.332181ms to process 0 records I0427 16:15:54.402964 1 gather_workloads_info.go:387] No image sha256:2904a78e2eb73fd6a9bb94c105c2a056831fb4113fbb7b0607c50adc9d879c9b (10ms) I0427 16:15:54.412194 1 gather_workloads_info.go:387] No image sha256:765f0d23b637f685f98a31bd47c131b03cf72a40761a3f9a9d6320faa3c33733 (9ms) I0427 16:15:54.421908 1 gather_workloads_info.go:387] No image sha256:80748ba08e1c264a8c105e7f607eff386a66378e024443a844993ee9292858c1 (10ms) I0427 16:15:54.431020 1 gather_workloads_info.go:387] No image sha256:03cf4cd7ef1518610c6c7b3ad27d1622d82e98e3dc6e3f8e5d0fceb5c8d3786e (9ms) I0427 16:15:54.440080 1 gather_workloads_info.go:387] No image sha256:2e564f336c77116053f34d4201d364d8da04e789cfffa0ea422574c95f2d6404 (9ms) I0427 16:15:54.449475 1 gather_workloads_info.go:387] No image sha256:2e57e192c3c1240fd935dcd55c8fde5e70e78bf81d6176c96edf21fafe59f8ba (9ms) I0427 16:15:54.458987 1 gather_workloads_info.go:387] No image sha256:943018739e3db1763c3184b460dbc409e058abbac76d57b9927faad317be85e4 (9ms) I0427 16:15:54.468427 1 gather_workloads_info.go:387] No image sha256:ae7d3453fd734ecc865e5f9bb16f29244ebffe6291b77e1d4e496f71eb053174 (9ms) I0427 16:15:54.477707 1 gather_workloads_info.go:387] No image sha256:289816958633a763a72dbc44e1dad40466223164e7e253039514f0d974ea5d21 (9ms) I0427 16:15:54.486943 1 gather_workloads_info.go:387] No image sha256:a0105d1eb62cf6ac9e5e2ef28d3e89bf6dc514bc594fc7090fe5a5ee18a09c87 (9ms) I0427 16:15:54.509164 1 gather_workloads_info.go:387] No image sha256:36b9e89c3cfcf1ab9ae500486e38afb6862cba48cb0b4d84a09508ab8f3d299f (22ms) I0427 16:15:54.605273 1 gather_workloads_info.go:387] No image sha256:7b31223098f08328f5ddea8e5b871dbbd5f5a61ec550e8956f66793c0c6031a9 (96ms) I0427 16:15:54.704314 1 gather_workloads_info.go:387] No image sha256:653c666f842c13e0baae2e89a9b1efe0e2ef56f621ffb5b32005115d2a26ab8c (99ms) I0427 16:15:54.804079 1 
gather_workloads_info.go:387] No image sha256:5a95c19d82767e0235b4edb4a0536482c816904897aae1dc3eb255cb52b87a9f (100ms)
I0427 16:15:54.903188       1 gather_workloads_info.go:387] No image sha256:2598489729a4b258e4ecda4a06f6875133f2a10ced5c5241f8a57a8a05418e36 (99ms)
I0427 16:15:55.003429       1 gather_workloads_info.go:387] No image sha256:47154813651033d59751fb655a384dbffb64dd26f10bd7f3be0c3128d0486356 (100ms)
I0427 16:15:55.103841       1 gather_workloads_info.go:387] No image sha256:50197f22710766515f67944a779e00dd9ae3d17b18732d7324a970353b11f292 (100ms)
I0427 16:15:55.203632       1 gather_workloads_info.go:387] No image sha256:0a99240166165eb5718e7516a43282fe32df9c7c5e809b31b58abe44e42ff94d (100ms)
I0427 16:15:55.303584       1 gather_workloads_info.go:387] No image sha256:7adc1eab05d6724c76ba751f6df816b08d6e70b78dee9eb94fa6fd9690542c98 (100ms)
I0427 16:15:55.405488       1 gather_workloads_info.go:387] No image sha256:a56211d075aa43cbb491f669a5b2e46ee023dc95b7d51dbac28f463948c5ad61 (102ms)
I0427 16:15:55.503675       1 gather_workloads_info.go:387] No image sha256:c940ea87e7d133d75ba0002ef00c0806825eed3db8094cdb260d1bac18127733 (98ms)
I0427 16:15:55.603121       1 gather_workloads_info.go:387] No image sha256:b3909bf664c77097f75b3768830863d642eed3815dab2bfb4415c771ca2d5007 (99ms)
I0427 16:15:55.703863       1 gather_workloads_info.go:387] No image sha256:56a85660a445eced5c79a595a0eccf590087c5672d50f49d4c25ad52f9a44f04 (101ms)
I0427 16:15:55.804136       1 gather_workloads_info.go:387] No image sha256:521712486e2c6e3c020dad6a1cb340db8e55665b69f7c208fab9cd9e965fd588 (100ms)
I0427 16:15:55.903894       1 gather_workloads_info.go:387] No image sha256:1a2532940843248c57d52141185dd71fbc393ab28b65d48f682038632c1dbbad (100ms)
I0427 16:15:56.003835       1 gather_workloads_info.go:387] No image sha256:5f0b67cfbbc381243fb91ccc17345b56d05f4d717c667e8c644e5bf05633ba71 (100ms)
I0427 16:15:56.103753       1 gather_workloads_info.go:387] No image sha256:695cf2f0cc07683c2a3ce1eaf3e56fe18abc6e2bac716f7d9843f5d173b9df52 (100ms)
I0427 16:15:56.203737       1 gather_workloads_info.go:387] No image sha256:a498046d64605bcccee2440aa4f04a4602baaae263cf01d977ec5208e876b1fd (100ms)
I0427 16:15:56.309178       1 gather_workloads_info.go:387] No image sha256:a258c226562adb14e3a163a1940938526ee6a0928982a7667d85d9a7334ce639 (105ms)
I0427 16:15:56.403689       1 gather_workloads_info.go:387] No image sha256:91828234f107c068c8a4966d08370ae7b73e637651dbc6d92c18c4553402c22c (94ms)
I0427 16:15:56.503874       1 gather_workloads_info.go:387] No image sha256:ca1344cb64140188b7cae7bbc51fb751566c0b0c97d5e39b5850e628032c4a5e (100ms)
I0427 16:15:56.603357       1 gather_workloads_info.go:387] No image sha256:4556896f77307821531ef91b7b7faccb82b824ea695693b2989f597f0deca038 (99ms)
I0427 16:15:56.603391       1 tasks_processing.go:74] worker 1 stopped.
E0427 16:15:56.603401       1 gather.go:140] gatherer "workloads" function "workload_info" failed with the error: no running pods found for the insights-runtime-extractor statefulset
I0427 16:15:56.603688       1 recorder.go:75] Recording config/workload_info with fingerprint=fbdea3e059cac52a4ca2fc8c7d5d5e9ec03ba2e26fdc0f8b9b8ddceec64b548b
I0427 16:15:56.603703       1 gather.go:177] gatherer "workloads" function "workload_info" took 2.236924887s to process 1 records
E0427 16:15:56.603729       1 periodic.go:247] "Unhandled Error" err="workloads failed after 2.237s with: function \"workload_info\" failed with an error"
I0427 16:15:56.604825       1 controllerstatus.go:89] name=periodic-workloads healthy=false reason=PeriodicGatherFailed message=Source workloads could not be retrieved: function "workload_info" failed with an error
I0427 16:15:56.604837       1 periodic.go:209] Running conditional gatherer
I0427 16:15:56.612702       1 requests.go:294] Making HTTP GET request at: https://console.redhat.com/api/gathering/v2/4.20.8/gathering_rules
I0427 16:15:56.618898       1 conditional_gatherer.go:107] Get "https://console.redhat.com/api/gathering/v2/4.20.8/gathering_rules": dial tcp: lookup console.redhat.com on 172.30.0.10:53: read udp 10.130.0.6:51582->172.30.0.10:53: read: connection refused
E0427 16:15:56.619138       1 conditional_gatherer.go:322] unable to update alerts cache: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory
I0427 16:15:56.619196       1 conditional_gatherer.go:384] updating version cache for conditional gatherer
I0427 16:15:56.628044       1 conditional_gatherer.go:392] cluster version is '4.20.8'
E0427 16:15:56.628058       1 conditional_gatherer.go:209] error checking conditions for a gathering rule: alerts cache is missing
E0427 16:15:56.628064       1 conditional_gatherer.go:209] error checking conditions for a gathering rule: alerts cache is missing
E0427 16:15:56.628069       1 conditional_gatherer.go:209] error checking conditions for a gathering rule: alerts cache is missing
E0427 16:15:56.628074       1 conditional_gatherer.go:209] error checking conditions for a gathering rule: alerts cache is missing
E0427 16:15:56.628079       1 conditional_gatherer.go:209] error checking conditions for a gathering rule: alerts cache is missing
E0427 16:15:56.628084       1 conditional_gatherer.go:209] error checking conditions for a gathering rule: alerts cache is missing
E0427 16:15:56.628086       1 conditional_gatherer.go:209] error checking conditions for a gathering rule: alerts cache is missing
E0427 16:15:56.628091       1 conditional_gatherer.go:209] error checking conditions for a gathering rule: alerts cache is missing
E0427 16:15:56.628093       1 conditional_gatherer.go:209] error checking conditions for a gathering rule: alerts cache is missing
I0427 16:15:56.628107       1 tasks_processing.go:45] number of workers: 3
I0427 16:15:56.628117       1 tasks_processing.go:69] worker 2 listening for tasks.
I0427 16:15:56.628122       1 tasks_processing.go:71] worker 2 working on conditional_gatherer_rules task.
I0427 16:15:56.628128       1 tasks_processing.go:69] worker 0 listening for tasks.
I0427 16:15:56.628140       1 tasks_processing.go:71] worker 0 working on remote_configuration task.
I0427 16:15:56.628141       1 tasks_processing.go:69] worker 1 listening for tasks.
I0427 16:15:56.628146       1 tasks_processing.go:71] worker 2 working on rapid_container_logs task.
I0427 16:15:56.628150       1 tasks_processing.go:74] worker 1 stopped.
I0427 16:15:56.628208       1 recorder.go:75] Recording insights-operator/conditional-gatherer-rules with fingerprint=7034af97e7e41c22e4b775abdd4b9066c8ebb19da33eb7f69f39bfd2eb5f6406
I0427 16:15:56.628223       1 gather.go:177] gatherer "conditional" function "conditional_gatherer_rules" took 1.032µs to process 1 records
I0427 16:15:56.628258       1 recorder.go:75] Recording insights-operator/remote-configuration with fingerprint=0394430c431eec4d48bb1811a90918e95161d2282c59af26f2473613cc0959db
I0427 16:15:56.628267       1 gather.go:177] gatherer "conditional" function "remote_configuration" took 1.106µs to process 1 records
I0427 16:15:56.628273       1 tasks_processing.go:74] worker 0 stopped.
I0427 16:15:56.628382       1 tasks_processing.go:74] worker 2 stopped.
I0427 16:15:56.628393       1 gather.go:177] gatherer "conditional" function "rapid_container_logs" took 229.092µs to process 0 records
I0427 16:15:56.628412       1 controllerstatus.go:89] name=periodic-conditional healthy=false reason=NotAvailable message=Get "https://console.redhat.com/api/gathering/v2/4.20.8/gathering_rules": dial tcp: lookup console.redhat.com on 172.30.0.10:53: read udp 10.130.0.6:51582->172.30.0.10:53: read: connection refused
I0427 16:15:56.628429       1 recorder.go:75] Recording insights-operator/remote-configuration.json with fingerprint=359de9c990c741675cec72fda96b5c3682221efdb4799f5eaa6e9805bcd3b5c1
I0427 16:15:56.653774       1 recorder.go:75] Recording insights-operator/gathers with fingerprint=11a6485b5042b8097340d58c852e3b4709f5a7e80d912781abdc72adf04432e9
I0427 16:15:56.653914       1 diskrecorder.go:70] Writing 179 records to /var/lib/insights-operator/insights-2026-04-27-161556.tar.gz
I0427 16:15:56.667753       1 diskrecorder.go:51] Wrote 179 records to disk in 13ms
I0427 16:15:56.667782       1 periodic.go:278] Gathering cluster info every 2h0m0s
I0427 16:15:56.667797       1 periodic.go:279] Configuration is dataReporting: interval: 2h0m0s, uploadEndpoint: https://console.redhat.com/api/ingress/v1/upload, storagePath: /var/lib/insights-operator, downloadEndpoint: https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/reports, conditionalGathererEndpoint: https://console.redhat.com/api/gathering/v2/%s/gathering_rules, obfuscation: [] sca: disabled: false, endpoint: https://api.openshift.com/api/accounts_mgmt/v1/entitlement_certificates, interval: 8h0m0s alerting: disabled: false clusterTransfer: endpoint: https://api.openshift.com/api/accounts_mgmt/v1/cluster_transfers/, interval: 12h0m0s proxy: httpProxy: , httpsProxy: , noProxy:
I0427 16:15:58.541686       1 configmapobserver.go:84] configmaps "insights-config" not found
I0427 16:16:55.587150       1 observer_polling.go:111] Observed file "/var/run/secrets/serving-cert/tls.crt" has been created (hash="8620acdcafc49a806625fae6ce544b3d0e43ff9d14a03d9477ae40dfda10a063")
W0427 16:16:55.587192       1 builder.go:160] Restart triggered because of file /var/run/secrets/serving-cert/tls.crt was created
I0427 16:16:55.587243       1 observer_polling.go:111] Observed file "/var/run/secrets/serving-cert/tls.key" has been created (hash="8946fe7c1891f7065d6b9c3f20dce0e20ea74c2b189fa0a5f25ecc5735c49fad")
I0427 16:16:55.587303       1 genericapiserver.go:693] "[graceful-termination] pre-shutdown hooks completed" name="PreShutdownHooksStopped"
I0427 16:16:55.587321       1 simple_featuregate_reader.go:177] Shutting down feature-gate-detector
I0427 16:16:55.587338       1 periodic.go:170] Shutting down