W0420 22:52:13.848542 1 cmd.go:257] Using insecure, self-signed certificates
I0420 22:52:14.680027 1 start.go:138] Unable to read service ca bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory
I0420 22:52:14.680440 1 observer_polling.go:159] Starting file observer
I0420 22:52:15.312727 1 operator.go:76] Starting insights-operator v0.0.0-master+$Format:%H$
I0420 22:52:15.313014 1 legacy_config.go:327] Current config: {"report":false,"storagePath":"/var/lib/insights-operator","interval":"2h","endpoint":"https://console.redhat.com/api/ingress/v1/upload","conditionalGathererEndpoint":"https://console.redhat.com/api/gathering/v2/%s/gathering_rules","pull_report":{"endpoint":"https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/reports","delay":"60s","timeout":"3000s","min_retry":"30s"},"impersonate":"system:serviceaccount:openshift-insights:gather","enableGlobalObfuscation":false,"ocm":{"scaEndpoint":"https://api.openshift.com/api/accounts_mgmt/v1/entitlement_certificates","scaInterval":"8h","scaDisabled":false,"clusterTransferEndpoint":"https://api.openshift.com/api/accounts_mgmt/v1/cluster_transfers/","clusterTransferInterval":"12h"},"disableInsightsAlerts":false,"processingStatusEndpoint":"https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/request/%s/status","reportEndpointTechPreview":"https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/request/%s/report"}
I0420 22:52:15.313937 1 simple_featuregate_reader.go:171] Starting feature-gate-detector
I0420 22:52:15.313957 1 secure_serving.go:57] Forcing use of http/1.1 only
W0420 22:52:15.313978 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.
W0420 22:52:15.313984 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.
W0420 22:52:15.313990 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.
W0420 22:52:15.313995 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.
W0420 22:52:15.313998 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.
W0420 22:52:15.314002 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.
I0420 22:52:15.317943 1 operator.go:141] FeatureGates initialized: knownFeatureGates=[AdditionalRoutingCapabilities AdminNetworkPolicy AlibabaPlatform AzureWorkloadIdentity BuildCSIVolumes CPMSMachineNamePrefix ConsolePluginContentSecurityPolicy ExternalOIDC ExternalOIDCWithUIDAndExtraClaimMappings GCPClusterHostedDNSInstall GatewayAPI GatewayAPIController HighlyAvailableArbiter HyperShiftOnlyDynamicResourceAllocation ImageStreamImportMode ImageVolume KMSv1 MachineConfigNodes ManagedBootImages ManagedBootImagesAWS ManagedBootImagesAzure ManagedBootImagesvSphere MetricsCollectionProfiles NetworkDiagnosticsConfig NetworkLiveMigration NetworkSegmentation PinnedImages PreconfiguredUDNAddresses ProcMountType RouteAdvertisements RouteExternalCertificate ServiceAccountTokenNodeBinding SigstoreImageVerification SigstoreImageVerificationPKI StoragePerformantSecurityPolicy UpgradeStatus UserNamespacesPodSecurityStandards UserNamespacesSupport VSphereMultiDisk VSphereMultiNetworks VolumeAttributesClass AWSClusterHostedDNS AWSClusterHostedDNSInstall AWSDedicatedHosts AWSDualStackInstall AWSServiceLBNetworkSecurityGroup AutomatedEtcdBackup AzureClusterHostedDNSInstall AzureDedicatedHosts AzureDualStackInstall AzureMultiDisk BootImageSkewEnforcement BootcNodeManagement CBORServingAndStorage CRDCompatibilityRequirementOperator ClientsAllowCBOR ClientsPreferCBOR ClusterAPIInstall ClusterAPIInstallIBMCloud ClusterAPIMachineManagement ClusterAPIMachineManagementVSphere ClusterMonitoringConfig ClusterVersionOperatorConfiguration DNSNameResolver DualReplica DyanmicServiceEndpointIBMCloud EtcdBackendQuota EventTTL EventedPLEG Example Example2 ExternalSnapshotMetadata GCPClusterHostedDNS GCPCustomAPIEndpoints GCPCustomAPIEndpointsInstall GCPDualStackInstall ImageModeStatusReporting IngressControllerDynamicConfigurationManager InsightsConfig InsightsOnDemandDataGather IrreconcilableMachineConfig KMSEncryption KMSEncryptionProvider MachineAPIMigration MachineAPIOperatorDisableMachineHealthCheckController ManagedBootImagesCPMS MaxUnavailableStatefulSet MinimumKubeletVersion MixedCPUsAllocation MultiArchInstallAzure MultiDiskSetup MutableCSINodeAllocatableCount MutatingAdmissionPolicy NewOLM NewOLMBoxCutterRuntime NewOLMCatalogdAPIV1Metas NewOLMOwnSingleNamespace NewOLMPreflightPermissionChecks NewOLMWebhookProviderOpenshiftServiceCA NoRegistryClusterInstall NutanixMultiSubnets OSStreams OVNObservability OnPremDNSRecords OpenShiftPodSecurityAdmission ProvisioningRequestAvailable SELinuxMount ShortCertRotation SignatureStores TranslateStreamCloseWebsocketRequests VSphereConfigurableMaxAllowedBlockVolumesPerNode VSphereHostVMGroupZonal VSphereMixedNodeEnv VolumeGroupSnapshot]
I0420 22:52:15.317960 1 event.go:377] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-insights", Name:"insights-operator", UID:"318f1b87-1680-45c8-b98c-8f0ebc803d32", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'FeatureGatesInitialized' FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdditionalRoutingCapabilities", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BuildCSIVolumes", "CPMSMachineNamePrefix", "ConsolePluginContentSecurityPolicy", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNSInstall", "GatewayAPI", "GatewayAPIController", "HighlyAvailableArbiter", "HyperShiftOnlyDynamicResourceAllocation", "ImageStreamImportMode", "ImageVolume", "KMSv1", "MachineConfigNodes", "ManagedBootImages", "ManagedBootImagesAWS", "ManagedBootImagesAzure", "ManagedBootImagesvSphere", "MetricsCollectionProfiles", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "PinnedImages", "PreconfiguredUDNAddresses", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SigstoreImageVerification", "SigstoreImageVerificationPKI", "StoragePerformantSecurityPolicy", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiDisk", "VSphereMultiNetworks", "VolumeAttributesClass"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AWSClusterHostedDNSInstall", "AWSDedicatedHosts", "AWSDualStackInstall", "AWSServiceLBNetworkSecurityGroup", "AutomatedEtcdBackup", "AzureClusterHostedDNSInstall", "AzureDedicatedHosts", "AzureDualStackInstall", "AzureMultiDisk", "BootImageSkewEnforcement", "BootcNodeManagement", "CBORServingAndStorage", "CRDCompatibilityRequirementOperator", "ClientsAllowCBOR", "ClientsPreferCBOR", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterAPIMachineManagement", "ClusterAPIMachineManagementVSphere", "ClusterMonitoringConfig", "ClusterVersionOperatorConfiguration", "DNSNameResolver", "DualReplica", "DyanmicServiceEndpointIBMCloud", "EtcdBackendQuota", "EventTTL", "EventedPLEG", "Example", "Example2", "ExternalSnapshotMetadata", "GCPClusterHostedDNS", "GCPCustomAPIEndpoints", "GCPCustomAPIEndpointsInstall", "GCPDualStackInstall", "ImageModeStatusReporting", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsOnDemandDataGather", "IrreconcilableMachineConfig", "KMSEncryption", "KMSEncryptionProvider", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "ManagedBootImagesCPMS", "MaxUnavailableStatefulSet", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "MultiDiskSetup", "MutableCSINodeAllocatableCount", "MutatingAdmissionPolicy", "NewOLM", "NewOLMBoxCutterRuntime", "NewOLMCatalogdAPIV1Metas", "NewOLMOwnSingleNamespace", "NewOLMPreflightPermissionChecks", "NewOLMWebhookProviderOpenshiftServiceCA", "NoRegistryClusterInstall", "NutanixMultiSubnets", "OSStreams", "OVNObservability", "OnPremDNSRecords", "OpenShiftPodSecurityAdmission", "ProvisioningRequestAvailable", "SELinuxMount", "ShortCertRotation", "SignatureStores", "TranslateStreamCloseWebsocketRequests", "VSphereConfigurableMaxAllowedBlockVolumesPerNode", "VSphereHostVMGroupZonal", "VSphereMixedNodeEnv", "VolumeGroupSnapshot"}}
I0420 22:52:15.318665 1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
I0420 22:52:15.318688 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController
I0420 22:52:15.318682 1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
I0420 22:52:15.318681 1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I0420 22:52:15.318718 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I0420 22:52:15.318724 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0420 22:52:15.319018 1 dynamic_serving_content.go:135] "Starting controller" name="serving-cert::/tmp/serving-cert-935329134/tls.crt::/tmp/serving-cert-935329134/tls.key"
I0420 22:52:15.319356 1 secure_serving.go:213] Serving securely on [::]:8443
I0420 22:52:15.319384 1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
W0420 22:52:15.323226 1 configmapobserver.go:64] Cannot get the configuration config map: configmaps "insights-config" not found. Default configuration is used.
I0420 22:52:15.323250 1 secretconfigobserver.go:216] Legacy configuration set: enabled=false endpoint=https://console.redhat.com/api/ingress/v1/upload conditional_gatherer_endpoint=https://console.redhat.com/api/gathering/v2/%s/gathering_rules interval=2h0m0s token=false reportEndpoint=https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/reports initialPollingDelay=1m0s minRetryTime=30s pollingTimeout=50m0s processingStatusEndpoint=https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/request/%s/status
I0420 22:52:15.323344 1 base_controller.go:76] Waiting for caches to sync for ConfigController
I0420 22:52:15.328584 1 secretconfigobserver.go:249] Found cloud.openshift.com token
I0420 22:52:15.328601 1 secretconfigobserver.go:204] Legacy configuration updated: enabled=true endpoint=https://console.redhat.com/api/ingress/v1/upload conditional_gatherer_endpoint=https://console.redhat.com/api/gathering/v2/%s/gathering_rules interval=2h0m0s token=true reportEndpoint=https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/reports initialPollingDelay=1m0s minRetryTime=30s pollingTimeout=50m0s processingStatusEndpoint=https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/request/%s/status
I0420 22:52:15.335977 1 secretconfigobserver.go:119] support secret does not exist
I0420 22:52:15.340171 1 secretconfigobserver.go:249] Found cloud.openshift.com token
I0420 22:52:15.343794 1 secretconfigobserver.go:119] support secret does not exist
I0420 22:52:15.347014 1 recorder.go:176] Pruning old reports every 5h41m16s, max age is 288h0m0s
I0420 22:52:15.351727 1 controllerstatus.go:80] name=insightsuploader healthy=true reason= message=
I0420 22:52:15.351747 1 insightsuploader.go:86] Reporting status periodically to https://console.redhat.com/api/ingress/v1/upload every 2h0m0s, starting in 1m30s
I0420 22:52:15.351753 1 controllerstatus.go:80] name=insightsreport healthy=true reason= message=
I0420 22:52:15.351762 1 insightsreport.go:296] Starting report retriever
I0420 22:52:15.351763 1 periodic.go:216] Running clusterconfig gatherer
I0420 22:52:15.351769 1 insightsreport.go:298] Insights analysis reports will be downloaded from the https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/reports endpoint with a delay of 1m0s
I0420 22:52:15.351805 1 tasks_processing.go:45] number of workers: 64
I0420 22:52:15.351834 1 tasks_processing.go:69] worker 2 listening for tasks.
I0420 22:52:15.351847 1 tasks_processing.go:69] worker 0 listening for tasks.
I0420 22:52:15.351852 1 tasks_processing.go:69] worker 13 listening for tasks.
I0420 22:52:15.351860 1 tasks_processing.go:69] worker 24 listening for tasks.
I0420 22:52:15.351866 1 tasks_processing.go:71] worker 13 working on openshift_logging task.
I0420 22:52:15.351871 1 tasks_processing.go:69] worker 33 listening for tasks.
I0420 22:52:15.351874 1 tasks_processing.go:69] worker 44 listening for tasks.
I0420 22:52:15.351873 1 tasks_processing.go:69] worker 19 listening for tasks.
I0420 22:52:15.351870 1 tasks_processing.go:69] worker 18 listening for tasks.
I0420 22:52:15.351879 1 tasks_processing.go:69] worker 20 listening for tasks.
I0420 22:52:15.351887 1 tasks_processing.go:69] worker 17 listening for tasks.
I0420 22:52:15.351894 1 tasks_processing.go:69] worker 45 listening for tasks.
I0420 22:52:15.351896 1 tasks_processing.go:69] worker 22 listening for tasks.
I0420 22:52:15.351900 1 tasks_processing.go:69] worker 34 listening for tasks.
I0420 22:52:15.351895 1 tasks_processing.go:69] worker 15 listening for tasks.
I0420 22:52:15.351908 1 tasks_processing.go:69] worker 23 listening for tasks.
I0420 22:52:15.351912 1 tasks_processing.go:69] worker 30 listening for tasks.
I0420 22:52:15.351898 1 tasks_processing.go:69] worker 27 listening for tasks.
I0420 22:52:15.351917 1 tasks_processing.go:69] worker 36 listening for tasks.
I0420 22:52:15.351920 1 tasks_processing.go:69] worker 57 listening for tasks.
I0420 22:52:15.351881 1 tasks_processing.go:69] worker 16 listening for tasks.
I0420 22:52:15.351922 1 tasks_processing.go:69] worker 29 listening for tasks.
I0420 22:52:15.351927 1 tasks_processing.go:69] worker 37 listening for tasks.
I0420 22:52:15.351922 1 tasks_processing.go:69] worker 32 listening for tasks.
I0420 22:52:15.351932 1 tasks_processing.go:69] worker 40 listening for tasks.
I0420 22:52:15.351883 1 tasks_processing.go:69] worker 25 listening for tasks.
I0420 22:52:15.351937 1 tasks_processing.go:69] worker 42 listening for tasks.
I0420 22:52:15.351939 1 tasks_processing.go:69] worker 5 listening for tasks.
I0420 22:52:15.351937 1 tasks_processing.go:69] worker 8 listening for tasks.
I0420 22:52:15.351945 1 tasks_processing.go:69] worker 3 listening for tasks.
I0420 22:52:15.351948 1 tasks_processing.go:69] worker 54 listening for tasks.
I0420 22:52:15.351947 1 tasks_processing.go:69] worker 50 listening for tasks.
I0420 22:52:15.351948 1 tasks_processing.go:69] worker 4 listening for tasks.
I0420 22:52:15.351949 1 tasks_processing.go:69] worker 39 listening for tasks.
I0420 22:52:15.351958 1 tasks_processing.go:69] worker 55 listening for tasks.
I0420 22:52:15.351957 1 tasks_processing.go:69] worker 7 listening for tasks.
I0420 22:52:15.351961 1 tasks_processing.go:69] worker 10 listening for tasks.
I0420 22:52:15.351966 1 tasks_processing.go:69] worker 12 listening for tasks.
I0420 22:52:15.351962 1 tasks_processing.go:69] worker 60 listening for tasks.
I0420 22:52:15.351972 1 tasks_processing.go:71] worker 18 working on machine_sets task.
I0420 22:52:15.351968 1 tasks_processing.go:71] worker 33 working on image_registries task.
I0420 22:52:15.351976 1 tasks_processing.go:71] worker 19 working on container_runtime_configs task.
I0420 22:52:15.351975 1 tasks_processing.go:71] worker 20 working on certificate_signing_requests task.
I0420 22:52:15.351982 1 tasks_processing.go:71] worker 34 working on jaegers task.
I0420 22:52:15.351966 1 tasks_processing.go:69] worker 51 listening for tasks.
I0420 22:52:15.351981 1 tasks_processing.go:71] worker 24 working on version task.
I0420 22:52:15.351982 1 tasks_processing.go:69] worker 61 listening for tasks.
I0420 22:52:15.351996 1 tasks_processing.go:71] worker 15 working on openstack_dataplanedeployments task.
I0420 22:52:15.351994 1 tasks_processing.go:69] worker 62 listening for tasks.
I0420 22:52:15.352000 1 tasks_processing.go:71] worker 45 working on image task.
I0420 22:52:15.352006 1 tasks_processing.go:71] worker 36 working on ceph_cluster task.
I0420 22:52:15.352009 1 tasks_processing.go:71] worker 8 working on config_maps task.
I0420 22:52:15.352011 1 tasks_processing.go:71] worker 62 working on machine_healthchecks task.
I0420 22:52:15.352007 1 tasks_processing.go:71] worker 22 working on number_of_pods_and_netnamespaces_with_sdn_annotations task.
I0420 22:52:15.351935 1 tasks_processing.go:69] worker 53 listening for tasks.
I0420 22:52:15.351877 1 tasks_processing.go:69] worker 43 listening for tasks.
I0420 22:52:15.351904 1 tasks_processing.go:69] worker 46 listening for tasks.
I0420 22:52:15.351905 1 tasks_processing.go:69] worker 28 listening for tasks.
I0420 22:52:15.351910 1 tasks_processing.go:69] worker 35 listening for tasks.
I0420 22:52:15.351912 1 tasks_processing.go:69] worker 47 listening for tasks.
I0420 22:52:15.351914 1 tasks_processing.go:69] worker 31 listening for tasks.
I0420 22:52:15.351920 1 tasks_processing.go:69] worker 56 listening for tasks.
I0420 22:52:15.351928 1 tasks_processing.go:69] worker 58 listening for tasks.
I0420 22:52:15.351927 1 tasks_processing.go:69] worker 48 listening for tasks.
I0420 22:52:15.351920 1 tasks_processing.go:69] worker 14 listening for tasks.
I0420 22:52:15.351933 1 tasks_processing.go:69] worker 41 listening for tasks.
I0420 22:52:15.352076 1 tasks_processing.go:71] worker 7 working on storage_classes task.
I0420 22:52:15.351934 1 tasks_processing.go:69] worker 52 listening for tasks.
I0420 22:52:15.351935 1 tasks_processing.go:69] worker 59 listening for tasks.
I0420 22:52:15.351889 1 tasks_processing.go:69] worker 21 listening for tasks.
I0420 22:52:15.351940 1 tasks_processing.go:69] worker 49 listening for tasks.
I0420 22:52:15.351941 1 tasks_processing.go:69] worker 38 listening for tasks.
I0420 22:52:15.351942 1 tasks_processing.go:69] worker 63 listening for tasks.
I0420 22:52:15.351947 1 tasks_processing.go:71] worker 2 working on support_secret task.
I0420 22:52:15.352136 1 tasks_processing.go:71] worker 12 working on aggregated_monitoring_cr_names task.
I0420 22:52:15.352186 1 tasks_processing.go:71] worker 54 working on nodenetworkconfigurationpolicies task.
I0420 22:52:15.352207 1 tasks_processing.go:71] worker 60 working on sap_pods task.
I0420 22:52:15.351997 1 tasks_processing.go:71] worker 30 working on metrics task.
I0420 22:52:15.352243 1 tasks_processing.go:71] worker 39 working on openshift_machine_api_events task.
I0420 22:52:15.351986 1 tasks_processing.go:71] worker 23 working on image_pruners task.
I0420 22:52:15.352259 1 tasks_processing.go:71] worker 29 working on validating_webhook_configurations task.
I0420 22:52:15.352265 1 tasks_processing.go:71] worker 28 working on ingress_certificates task.
I0420 22:52:15.352375 1 tasks_processing.go:71] worker 25 working on sap_config task.
I0420 22:52:15.351955 1 tasks_processing.go:69] worker 11 listening for tasks.
I0420 22:52:15.352462 1 tasks_processing.go:71] worker 11 working on proxies task.
I0420 22:52:15.352518 1 tasks_processing.go:71] worker 42 working on clusterroles task.
I0420 22:52:15.351960 1 tasks_processing.go:69] worker 9 listening for tasks.
I0420 22:52:15.351968 1 tasks_processing.go:71] worker 44 working on machine_config_pools task.
I0420 22:52:15.352672 1 tasks_processing.go:71] worker 37 working on pod_network_connectivity_checks task.
I0420 22:52:15.352727 1 tasks_processing.go:71] worker 4 working on cost_management_metrics_configs task.
I0420 22:52:15.351855 1 tasks_processing.go:69] worker 1 listening for tasks.
I0420 22:52:15.352794 1 tasks_processing.go:71] worker 50 working on active_alerts task.
I0420 22:52:15.352812 1 tasks_processing.go:71] worker 40 working on openstack_version task.
I0420 22:52:15.352851 1 tasks_processing.go:71] worker 1 working on container_images task.
W0420 22:52:15.352847 1 gather_active_alerts.go:54] Unable to load alerts client, no alerts will be collected: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory
I0420 22:52:15.352872 1 tasks_processing.go:71] worker 50 working on qemu_kubevirt_launcher_logs task.
I0420 22:52:15.352899 1 tasks_processing.go:71] worker 14 working on crds task.
I0420 22:52:15.352934 1 tasks_processing.go:71] worker 5 working on machine_autoscalers task.
I0420 22:52:15.351890 1 tasks_processing.go:69] worker 26 listening for tasks.
I0420 22:52:15.352002 1 tasks_processing.go:71] worker 27 working on install_plans task.
I0420 22:52:15.353021 1 tasks_processing.go:71] worker 31 working on service_accounts task.
I0420 22:52:15.352071 1 tasks_processing.go:71] worker 55 working on oauths task.
I0420 22:52:15.353244 1 tasks_processing.go:71] worker 9 working on storage_cluster task.
I0420 22:52:15.352001 1 tasks_processing.go:71] worker 61 working on nodenetworkstates task.
I0420 22:52:15.352248 1 tasks_processing.go:71] worker 32 working on authentication task.
I0420 22:52:15.353382 1 tasks_processing.go:71] worker 41 working on nodes task.
I0420 22:52:15.353476 1 tasks_processing.go:71] worker 21 working on openstack_controlplanes task.
I0420 22:52:15.352253 1 tasks_processing.go:71] worker 57 working on operators task.
W0420 22:52:15.352247 1 gather_most_recent_metrics.go:64] Unable to load metrics client, no metrics will be collected: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory
I0420 22:52:15.352256 1 tasks_processing.go:71] worker 16 working on sap_datahubs task.
I0420 22:52:15.352256 1 tasks_processing.go:71] worker 43 working on lokistack task.
I0420 22:52:15.352261 1 tasks_processing.go:71] worker 46 working on pdbs task.
I0420 22:52:15.353485 1 tasks_processing.go:71] worker 49 working on feature_gates task.
I0420 22:52:15.352251 1 tasks_processing.go:71] worker 53 working on dvo_metrics task.
I0420 22:52:15.351992 1 tasks_processing.go:71] worker 10 working on monitoring_persistent_volumes task.
I0420 22:52:15.351866 1 tasks_processing.go:71] worker 0 working on cluster_apiserver task.
I0420 22:52:15.353500 1 tasks_processing.go:71] worker 38 working on olm_operators task.
I0420 22:52:15.353507 1 tasks_processing.go:71] worker 35 working on machine_configs task.
I0420 22:52:15.352245 1 tasks_processing.go:71] worker 63 working on overlapping_namespace_uids task.
I0420 22:52:15.351994 1 tasks_processing.go:71] worker 51 working on mutating_webhook_configurations task.
I0420 22:52:15.353508 1 tasks_processing.go:71] worker 47 working on schedulers task.
I0420 22:52:15.353512 1 tasks_processing.go:71] worker 58 working on tsdb_status task.
I0420 22:52:15.351991 1 tasks_processing.go:71] worker 17 working on infrastructures task.
I0420 22:52:15.354088 1 tasks_processing.go:71] worker 26 working on node_features task.
I0420 22:52:15.353504 1 tasks_processing.go:71] worker 3 working on networks task.
I0420 22:52:15.353515 1 tasks_processing.go:71] worker 56 working on silenced_alerts task.
W0420 22:52:15.354301 1 gather_silenced_alerts.go:38] Unable to load alerts client, no alerts will be collected: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory
I0420 22:52:15.351951 1 tasks_processing.go:69] worker 6 listening for tasks.
I0420 22:52:15.353518 1 tasks_processing.go:71] worker 48 working on node_logs task.
I0420 22:52:15.354413 1 tasks_processing.go:71] worker 6 working on openstack_dataplanenodesets task.
W0420 22:52:15.354416 1 gather_prometheus_tsdb_status.go:38] Unable to load metrics client, tsdb status cannot be collected: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory
I0420 22:52:15.353524 1 tasks_processing.go:71] worker 59 working on machines task.
I0420 22:52:15.353517 1 gather.go:177] gatherer "clusterconfig" function "active_alerts" took 53.537µs to process 0 records
I0420 22:52:15.354805 1 gather.go:177] gatherer "clusterconfig" function "metrics" took 1.347297ms to process 0 records
I0420 22:52:15.354863 1 gather.go:177] gatherer "clusterconfig" function "silenced_alerts" took 38.745µs to process 0 records
I0420 22:52:15.354906 1 gather.go:177] gatherer "clusterconfig" function "tsdb_status" took 415.318µs to process 0 records
I0420 22:52:15.354932 1 tasks_processing.go:71] worker 30 working on operators_pods_and_events task.
I0420 22:52:15.353522 1 tasks_processing.go:71] worker 52 working on ingress task.
I0420 22:52:15.354954 1 tasks_processing.go:74] worker 56 stopped.
I0420 22:52:15.354962 1 tasks_processing.go:74] worker 58 stopped.
I0420 22:52:15.356843 1 tasks_processing.go:74] worker 13 stopped.
I0420 22:52:15.356857 1 gather.go:177] gatherer "clusterconfig" function "openshift_logging" took 4.964555ms to process 0 records
I0420 22:52:15.358032 1 tasks_processing.go:74] worker 15 stopped.
I0420 22:52:15.358050 1 gather.go:177] gatherer "clusterconfig" function "openstack_dataplanedeployments" took 6.02325ms to process 0 records
I0420 22:52:15.358222 1 tasks_processing.go:74] worker 19 stopped.
I0420 22:52:15.358237 1 gather.go:177] gatherer "clusterconfig" function "container_runtime_configs" took 6.236349ms to process 0 records
I0420 22:52:15.358250 1 gather.go:177] gatherer "clusterconfig" function "machine_sets" took 6.245218ms to process 0 records
E0420 22:52:15.358259 1 gather.go:140] gatherer "clusterconfig" function "machine_healthchecks" failed with the error: machinehealthchecks.machine.openshift.io is forbidden: User "system:serviceaccount:openshift-insights:gather" cannot list resource "machinehealthchecks" in API group "machine.openshift.io" at the cluster scope
I0420 22:52:15.358267 1 gather.go:177] gatherer "clusterconfig" function "machine_healthchecks" took 6.218471ms to process 0 records
I0420 22:52:15.358273 1 gather.go:177] gatherer "clusterconfig" function "ceph_cluster" took 6.22413ms to process 0 records
I0420 22:52:15.358272 1 controller.go:129] Initializing last reported time to 0001-01-01T00:00:00Z
I0420 22:52:15.358280 1 tasks_processing.go:74] worker 36 stopped.
I0420 22:52:15.358286 1 tasks_processing.go:74] worker 18 stopped.
I0420 22:52:15.358291 1 controller.go:254] Source periodic-clusterconfig *controllerstatus.Simple is not ready
I0420 22:52:15.358296 1 controller.go:254] Source periodic-conditional *controllerstatus.Simple is not ready
I0420 22:52:15.358295 1 tasks_processing.go:74] worker 62 stopped.
I0420 22:52:15.358299 1 controller.go:254] Source periodic-workloads *controllerstatus.Simple is not ready
I0420 22:52:15.358318 1 controller.go:531] The operator is still being initialized
I0420 22:52:15.358327 1 controller.go:554] The operator is healthy
I0420 22:52:15.358419 1 tasks_processing.go:74] worker 45 stopped.
I0420 22:52:15.358517 1 recorder.go:75] Recording config/image with fingerprint=4af3fccce3c62687da9b6a8822b048e29dc47d99cde4d60ce0eed6f519715b61
I0420 22:52:15.358530 1 gather.go:177] gatherer "clusterconfig" function "image" took 6.401465ms to process 1 records
I0420 22:52:15.359527 1 tasks_processing.go:74] worker 34 stopped.
I0420 22:52:15.359538 1 gather.go:177] gatherer "clusterconfig" function "jaegers" took 7.53359ms to process 0 records
I0420 22:52:15.361647 1 tasks_processing.go:74] worker 37 stopped.
E0420 22:52:15.361660 1 gather.go:140] gatherer "clusterconfig" function "pod_network_connectivity_checks" failed with the error: the server could not find the requested resource (get podnetworkconnectivitychecks.controlplane.operator.openshift.io)
I0420 22:52:15.361667 1 gather.go:177] gatherer "clusterconfig" function "pod_network_connectivity_checks" took 8.952654ms to process 0 records
I0420 22:52:15.366156 1 tasks_processing.go:74] worker 4 stopped.
I0420 22:52:15.366166 1 gather.go:177] gatherer "clusterconfig" function "cost_management_metrics_configs" took 13.409723ms to process 0 records
I0420 22:52:15.368569 1 tasks_processing.go:74] worker 9 stopped.
I0420 22:52:15.368586 1 gather.go:177] gatherer "clusterconfig" function "storage_cluster" took 15.309341ms to process 0 records
I0420 22:52:15.376335 1 tasks_processing.go:74] worker 5 stopped.
I0420 22:52:15.376347 1 gather.go:177] gatherer "clusterconfig" function "machine_autoscalers" took 23.387322ms to process 0 records
I0420 22:52:15.378730 1 tasks_processing.go:74] worker 61 stopped.
I0420 22:52:15.378746 1 gather.go:177] gatherer "clusterconfig" function "nodenetworkstates" took 25.368645ms to process 0 records
I0420 22:52:15.378879 1 tasks_processing.go:74] worker 11 stopped.
I0420 22:52:15.378944 1 recorder.go:75] Recording config/proxy with fingerprint=960161f0a5f0a214bb5d3a235bd3c5c8f51f8c7058071b73c9081d9d8ac68027
I0420 22:52:15.378956 1 gather.go:177] gatherer "clusterconfig" function "proxies" took 26.403967ms to process 1 records
I0420 22:52:15.381503 1 tasks_processing.go:74] worker 25 stopped.
I0420 22:52:15.381512 1 gather.go:177] gatherer "clusterconfig" function "sap_config" took 29.053496ms to process 0 records
I0420 22:52:15.382099 1 tasks_processing.go:74] worker 2 stopped.
E0420 22:52:15.382125 1 gather.go:140] gatherer "clusterconfig" function "support_secret" failed with the error: secrets "support" not found
I0420 22:52:15.382144 1 gather.go:177] gatherer "clusterconfig" function "support_secret" took 29.995495ms to process 0 records
I0420 22:52:15.382303 1 tasks_processing.go:74] worker 54 stopped.
I0420 22:52:15.382320 1 gather.go:177] gatherer "clusterconfig" function "nodenetworkconfigurationpolicies" took 30.093558ms to process 0 records
I0420 22:52:15.382451 1 tasks_processing.go:74] worker 32 stopped.
I0420 22:52:15.382670 1 recorder.go:75] Recording config/authentication with fingerprint=fc9e579d5a4097070f48e74c5f38684804ce82de9ae53d44975b2c362185d959
I0420 22:52:15.382691 1 gather.go:177] gatherer "clusterconfig" function "authentication" took 29.089898ms to process 1 records
I0420 22:52:15.382775 1 tasks_processing.go:74] worker 55 stopped.
I0420 22:52:15.382881 1 recorder.go:75] Recording config/oauth with fingerprint=56c3fd918e540c98ab09293928e9d08756adb3cf27d7685e0a929435f2943161
I0420 22:52:15.382892 1 gather.go:177] gatherer "clusterconfig" function "oauths" took 29.345224ms to process 1 records
I0420 22:52:15.383043 1 tasks_processing.go:74] worker 33 stopped.
I0420 22:52:15.383367 1 recorder.go:75] Recording config/clusteroperator/imageregistry.operator.openshift.io/config/cluster with fingerprint=e71562aca6e075f40bafa193e743c4e121d06cf4869ac4f030f62cf0ed85796d
I0420 22:52:15.383380 1 gather.go:177] gatherer "clusterconfig" function "image_registries" took 30.806032ms to process 1 records
I0420 22:52:15.383388 1 gather.go:177] gatherer "clusterconfig" function "openstack_controlplanes" took 29.391613ms to process 0 records
I0420 22:52:15.383391 1 gather.go:177] gatherer "clusterconfig" function "lokistack" took 29.384328ms to process 0 records
I0420 22:52:15.383396 1 gather.go:177] gatherer "clusterconfig" function "monitoring_persistent_volumes" took 29.399906ms to process 0 records
I0420 22:52:15.383400 1 gather.go:177] gatherer "clusterconfig" function "openshift_machine_api_events" took 30.760661ms to process 0 records
I0420 22:52:15.383405 1 gather.go:177] gatherer "clusterconfig" function "node_logs" took 28.754565ms to process 0 records
I0420 22:52:15.383449 1 recorder.go:75] Recording config/schedulers/cluster with fingerprint=fc5896c803a5ad0502892f7752f10e85838826bcd0c84bfc18657dbb119832c5
I0420 22:52:15.383454 1 tasks_processing.go:74] worker 21 stopped.
I0420 22:52:15.383449 1 tasks_processing.go:74] worker 10 stopped.
I0420 22:52:15.383458 1 gather.go:177] gatherer "clusterconfig" function "schedulers" took 29.359284ms to process 1 records
I0420 22:52:15.383465 1 tasks_processing.go:74] worker 39 stopped.
I0420 22:52:15.383466 1 tasks_processing.go:74] worker 43 stopped.
I0420 22:52:15.383469 1 tasks_processing.go:74] worker 47 stopped.
I0420 22:52:15.383477 1 tasks_processing.go:74] worker 48 stopped.
I0420 22:52:15.383543 1 tasks_processing.go:74] worker 23 stopped.
I0420 22:52:15.383557 1 recorder.go:75] Recording config/clusteroperator/imageregistry.operator.openshift.io/imagepruner/cluster with fingerprint=04d930446ecd579ac86daed1afa0b5b10e5e8715c06e27c2a65da6ba74200c8b
I0420 22:52:15.383571 1 gather.go:177] gatherer "clusterconfig" function "image_pruners" took 31.096266ms to process 1 records
I0420 22:52:15.383628 1 tasks_processing.go:74] worker 17 stopped.
I0420 22:52:15.384192 1 gather_logs.go:145] no pods in namespace were found
I0420 22:52:15.384258 1 recorder.go:75] Recording config/infrastructure with fingerprint=8596f989ad8f7b431ec1266b3c401371fe5178306d9f2678a7e5a91aa394abfe
E0420 22:52:15.384277 1 gather_node_features.go:86] GatherNodeFeatures: NodeFeatures resource not found in openshift-nfd namespace (may not be installed)
I0420 22:52:15.384278 1 gather.go:177] gatherer "clusterconfig" function "infrastructures" took 29.571456ms to process 1 records
I0420 22:52:15.384440 1 tasks_processing.go:74] worker 3 stopped.
I0420 22:52:15.384496 1 recorder.go:75] Recording config/network with fingerprint=fac72aea228cba51cb4231dc9ea816a65e61ed285dbe97bc0fe3f2fbc130a84d
I0420 22:52:15.384531 1 gather.go:177] gatherer "clusterconfig" function "networks" took 29.75145ms to process 1 records
I0420 22:52:15.384615 1 tasks_processing.go:74] worker 7 stopped.
I0420 22:52:15.384624 1 recorder.go:75] Recording config/storage/storageclasses/gp2-csi with fingerprint=3669e3ed3af1f5f6030e33c682a0c49164478b36705e66252d595ba9454cf5cf
I0420 22:52:15.384650 1 recorder.go:75] Recording config/storage/storageclasses/gp3-csi with fingerprint=65d0f0a0f78e22adb0a0f4ee45c8e166e3b10476fbc8e896b9186e131d96c0d2
I0420 22:52:15.384662 1 gather.go:177] gatherer "clusterconfig" function "storage_classes" took 31.957067ms to process 2 records
I0420 22:52:15.384675 1 gather.go:177] gatherer "clusterconfig" function "sap_pods" took 31.861077ms to process 0 records
I0420 22:52:15.384681 1 gather.go:177] gatherer "clusterconfig" function "olm_operators" took 30.402189ms to process 0 records
I0420 22:52:15.384686 1 gather.go:177] gatherer "clusterconfig" function "qemu_kubevirt_launcher_logs" took 31.32544ms to process 0 records
I0420 22:52:15.384688 1 tasks_processing.go:74] worker 60 stopped.
I0420 22:52:15.384692 1 gather.go:177] gatherer "clusterconfig" function "openstack_version" took 31.435579ms to process 0 records
I0420 22:52:15.384697 1 tasks_processing.go:74] worker 50 stopped.
I0420 22:52:15.384699 1 tasks_processing.go:74] worker 38 stopped.
I0420 22:52:15.384710 1 tasks_processing.go:74] worker 40 stopped.
E0420 22:52:15.384699 1 gather.go:140] gatherer "clusterconfig" function "machines" failed with the error: machines.machine.openshift.io is forbidden: User "system:serviceaccount:openshift-insights:gather" cannot list resource "machines" in API group "machine.openshift.io" at the cluster scope
I0420 22:52:15.384726 1 gather.go:177] gatherer "clusterconfig" function "machines" took 29.666189ms to process 0 records
I0420 22:52:15.384769 1 tasks_processing.go:74] worker 59 stopped.
I0420 22:52:15.385018 1 gather.go:177] gatherer "clusterconfig" function "node_features" took 30.185299ms to process 0 records
I0420 22:52:15.385031 1 gather.go:177] gatherer "clusterconfig" function "sap_datahubs" took 30.666115ms to process 0 records
I0420 22:52:15.385038 1 gather.go:177] gatherer "clusterconfig" function "openstack_dataplanenodesets" took 29.931438ms to process 0 records
I0420 22:52:15.385083 1 tasks_processing.go:74] worker 16 stopped.
I0420 22:52:15.385100 1 tasks_processing.go:74] worker 6 stopped.
I0420 22:52:15.385137 1 tasks_processing.go:74] worker 46 stopped.
I0420 22:52:15.385318 1 recorder.go:75] Recording config/pdbs/openshift-image-registry/image-registry with fingerprint=cfb42a9656d9d069aaf19fb64b31fe6948b8e16eaf2d9d823e679cd2ac018c8f
I0420 22:52:15.385365 1 recorder.go:75] Recording config/pdbs/openshift-ingress/router-default with fingerprint=7bd4dc6a4d25ddcf63de6c97612ec120a74dbadf35bba936becbb0138518664f
I0420 22:52:15.385388 1 recorder.go:75] Recording config/pdbs/openshift-operator-lifecycle-manager/packageserver-pdb with fingerprint=a8758e38de0c07651046b9c1d9191e23554076b04b3fe1353177c717e67b384f
I0420 22:52:15.385452 1 gather.go:177] gatherer "clusterconfig" function "pdbs" took 30.747392ms to process 3 records
I0420 22:52:15.385685 1 tasks_processing.go:74] worker 26 stopped.
I0420 22:52:15.385714 1 recorder.go:75] Recording config/ingress with fingerprint=2f6c39895737d578f62c317dc0e8c72392afdad11d68aaea385887dd0329fa76
I0420 22:52:15.385739 1 gather.go:177] gatherer "clusterconfig" function "ingress" took 29.426233ms to process 1 records
I0420 22:52:15.385797 1 tasks_processing.go:74] worker 52 stopped.
I0420 22:52:15.385967 1 recorder.go:75] Recording config/apiserver with fingerprint=bd50c4fb5d7afbeaeadd02a72e5f47c43255b099e18d60be87d0fd7da1d53200
I0420 22:52:15.386011 1 gather.go:177] gatherer "clusterconfig" function "cluster_apiserver" took 30.894856ms to process 1 records
I0420 22:52:15.386043 1 gather.go:177] gatherer "clusterconfig" function "machine_config_pools" took 32.055978ms to process 0 records
I0420 22:52:15.386247 1 recorder.go:75] Recording config/featuregate with fingerprint=c50b8f6ed0b9a730ed14555bdadadc649db26b58dbfc0b8426fced7271cf4c01
I0420 22:52:15.386288 1 gather.go:177] gatherer "clusterconfig" function "feature_gates" took 31.655087ms to process 1 records
I0420 22:52:15.386303 1 tasks_processing.go:74] worker 49 stopped.
I0420 22:52:15.385994 1 tasks_processing.go:74] worker 0 stopped.
I0420 22:52:15.386312 1 tasks_processing.go:74] worker 44 stopped.
I0420 22:52:15.386520 1 tasks_processing.go:74] worker 29 stopped.
I0420 22:52:15.387610 1 recorder.go:75] Recording config/validatingwebhookconfigurations/multus.openshift.io with fingerprint=c849dd3d8d26db9b3bdc64323ba5aa3d664212653f4837c5325e113431696be1
I0420 22:52:15.387715 1 recorder.go:75] Recording config/validatingwebhookconfigurations/network-node-identity.openshift.io with fingerprint=054e878bf625c9b6955191033aaef2d3f431daef55ce8aedb4ea447bb0e10937
I0420 22:52:15.387747 1 recorder.go:75] Recording config/validatingwebhookconfigurations/performance-addon-operator with fingerprint=f3827e79d513297e2c9e6b7b7accfb4a777689e221b2b78ec0eb1dcc53e5f489
I0420 22:52:15.387771 1 recorder.go:75] Recording config/validatingwebhookconfigurations/sre-clusterrolebindings-validation with fingerprint=36518932cdf2ef946aa52e4a0bf31284c309f4eb3366697ffd5d798b5eaf05f7
I0420 22:52:15.387793 1 recorder.go:75] Recording config/validatingwebhookconfigurations/sre-clusterroles-validation with fingerprint=d4c939e21b0d3523ba6ffbfd243bf811f7f0281f10b53177ca5b44cfe9ff6b64
I0420 22:52:15.387851 1 recorder.go:75] Recording config/validatingwebhookconfigurations/sre-ingress-config-validation with fingerprint=8c945b4fa7e8413d8edd1aaaf509a997aab2bb8fc5505b66286314514f5199aa
I0420 22:52:15.387883 1 recorder.go:75] Recording config/validatingwebhookconfigurations/sre-network-operator-validation with fingerprint=7c0640faf6d1af6e60b8fec07d2f2efdb1ae45d773f9824d4cf84a8546df9fa8
I0420 22:52:15.387924 1 recorder.go:75] Recording config/validatingwebhookconfigurations/sre-regular-user-validation with fingerprint=5c24e33556bea56d3432acd6ca7caa7724786ef211eb59dc91e2be0c4aac99a2
I0420 22:52:15.387989 1 recorder.go:75] Recording config/validatingwebhookconfigurations/sre-scc-validation with fingerprint=7a595befb00272b959107fbd869bf5cdbe9bc67f08c7af6997c4cc7307f60790
I0420 22:52:15.388018 1 recorder.go:75] Recording config/validatingwebhookconfigurations/sre-serviceaccount-validation with fingerprint=40ea86c6b2e89075c10ffc90669d570313edd52d1860e963b13c3afe06e116e5
I0420 22:52:15.388042 1 recorder.go:75] Recording config/validatingwebhookconfigurations/sre-techpreviewnoupgrade-validation with fingerprint=eccb80f17bc54e70e3fe1056c1a4f27158c2a371d2696e2b98955cd7c2ae49bd
I0420 22:52:15.388054 1 gather.go:177] gatherer "clusterconfig" function "validating_webhook_configurations" took 34.247598ms to process 11 records
I0420 22:52:15.388131 1 tasks_processing.go:74] worker 41 stopped.
I0420 22:52:15.388410 1 recorder.go:75] Recording config/node/ip-10-0-0-153.ec2.internal with fingerprint=cf7d5664bf11352ceec1a8bf999dd859c5f507d8c1fc10cdf085b9743a9c8659
I0420 22:52:15.388484 1 recorder.go:75] Recording config/node/ip-10-0-1-117.ec2.internal with fingerprint=7c64ff261a2ed4660c4aa058825025c1034d22cb94f6bb725e5b2c4a9ed9edf2
I0420 22:52:15.388536 1 recorder.go:75] Recording config/node/ip-10-0-2-216.ec2.internal with fingerprint=e50fda970bf04c3373050eabda68fc75aab83206a2d93f85acd3ee0d74f8fc7b
I0420 22:52:15.388549 1 gather.go:177] gatherer "clusterconfig" function "nodes" took 34.015463ms to process 3 records
E0420 22:52:15.388558 1 gather.go:140] gatherer "clusterconfig" function "machine_configs" failed with the error: getting MachineConfigPools failed: the server could not find the requested resource (get machineconfigpools.machineconfiguration.openshift.io)
I0420 22:52:15.388594 1 recorder.go:75] Recording aggregated/unused_machine_configs_count with fingerprint=4bfc9fa984e5dfcd45848faaf05269de7619bf42edf9f781751af5ee05c1a499
I0420 22:52:15.388603 1 gather.go:177] gatherer "clusterconfig" function "machine_configs" took 34.511847ms to process 1 records
I0420 22:52:15.388626 1 tasks_processing.go:74] worker 35 stopped.
I0420 22:52:15.388692 1 tasks_processing.go:74] worker 51 stopped.
I0420 22:52:15.388700 1 recorder.go:75] Recording config/mutatingwebhookconfigurations/aws-pod-identity with fingerprint=f4b95e6f6a900ef0cfada3eb76b3a1ddf815fba74fdf1ed88290dbec5f79fd03
I0420 22:52:15.388735 1 recorder.go:75] Recording config/mutatingwebhookconfigurations/sre-podimagespec-mutation with fingerprint=7652a5a0cd330d2f16fd862ac475ab08649966b82e3a1cc5c3674a46328b140f
I0420 22:52:15.388761 1 recorder.go:75] Recording config/mutatingwebhookconfigurations/sre-service-mutation with fingerprint=a684cf9fb97681138244f680f4ce107c2dca15d227d3db73cca7555ed7b512c9
I0420 22:52:15.388772 1 gather.go:177] gatherer "clusterconfig" function "mutating_webhook_configurations" took 34.573315ms to process 3 records
I0420 22:52:15.388777 1 gather.go:177] gatherer "clusterconfig" function "certificate_signing_requests" took 36.565234ms to process 0 records
I0420 22:52:15.388786 1 tasks_processing.go:74] worker 20 stopped.
I0420 22:52:15.389789 1 tasks_processing.go:74] worker 63 stopped.
I0420 22:52:15.389816 1 recorder.go:75] Recording config/namespaces_with_overlapping_uids with fingerprint=4f53cda18c2baa0c0354bb5f9a3ecbe5ed12ab4d8e11ba873c2f11161202b945
I0420 22:52:15.389830 1 gather.go:177] gatherer "clusterconfig" function "overlapping_namespace_uids" took 36.034272ms to process 1 records
W0420 22:52:15.391014 1 gather_dvo_metrics.go:210] Failed to read the DVO metrics. Trying again.
I0420 22:52:15.391724 1 tasks_processing.go:74] worker 42 stopped.
I0420 22:52:15.391978 1 sca.go:136] Pulling SCA certificates from https://api.openshift.com/api/accounts_mgmt/v1/entitlement_certificates. Next check is in 8h0m0s
I0420 22:52:15.392074 1 cluster_transfer.go:83] checking the availability of cluster transfer. Next check is in 12h0m0s
W0420 22:52:15.392217 1 operator.go:328] started
I0420 22:52:15.392255 1 base_controller.go:76] Waiting for caches to sync for LoggingSyncer
I0420 22:52:15.392434 1 recorder.go:75] Recording cluster-scoped-resources/rbac.authorization.k8s.io/clusterroles/admin with fingerprint=44b42557306aa24d3c06b36ec33dfd7029dca9ab2dbf814c2332d68115f6148a
I0420 22:52:15.392706 1 recorder.go:75] Recording cluster-scoped-resources/rbac.authorization.k8s.io/clusterroles/edit with fingerprint=d1389d692439c00187b45f182a3731b962833f5765c5656b80cf67a3072c69cf
I0420 22:52:15.392750 1 gather.go:177] gatherer "clusterconfig" function "clusterroles" took 39.183354ms to process 2 records
I0420 22:52:15.392791 1 gather.go:177] gatherer "clusterconfig" function "aggregated_monitoring_cr_names" took 40.362049ms to process 0 records
I0420 22:52:15.392818 1 tasks_processing.go:74] worker 12 stopped.
I0420 22:52:15.397251 1 tasks_processing.go:74] worker 14 stopped.
I0420 22:52:15.398050 1 recorder.go:75] Recording config/crd/volumesnapshots.snapshot.storage.k8s.io with fingerprint=4d52caa8ac6c1d74f1c9a113e0bdb7c109df70959d3faf33ba81a329169b4271
I0420 22:52:15.398416 1 recorder.go:75] Recording config/crd/volumesnapshotcontents.snapshot.storage.k8s.io with fingerprint=73451a444d67039af886f634fbe012ed10189a980378689d67281df874aac4d8
I0420 22:52:15.398436 1 gather.go:177] gatherer "clusterconfig" function "crds" took 44.335961ms to process 2 records
I0420 22:52:15.399903 1 controller.go:254] Source clusterTransferController *clustertransfer.Controller is not ready
I0420 22:52:15.399972 1 controller.go:254] Source periodic-clusterconfig *controllerstatus.Simple is not ready
I0420 22:52:15.399999 1 controller.go:254] Source periodic-conditional *controllerstatus.Simple is not ready
I0420 22:52:15.400022 1 controller.go:254] Source periodic-workloads *controllerstatus.Simple is not ready
I0420 22:52:15.400046 1 controller.go:254] Source scaController *sca.Controller is not ready
I0420 22:52:15.400137 1 controller.go:531] The operator is still being initialized
I0420 22:52:15.400172 1 controller.go:554] The operator is healthy
I0420 22:52:15.401668 1 tasks_processing.go:74] worker 1 stopped.
I0420 22:52:15.401722 1 recorder.go:75] Recording config/running_containers with fingerprint=e07ffa23f2ebc68468eb6dbf43033a92fb667deb2473d7ce0d2fb06c5b7c906a
I0420 22:52:15.401735 1 gather.go:177] gatherer "clusterconfig" function "container_images" took 48.793217ms to process 1 records
E0420 22:52:15.408570 1 cluster_transfer.go:95] failed to pull cluster transfer: unable to retrieve cluster transfer data from https://api.openshift.com/api/accounts_mgmt/v1/cluster_transfers/: Get "https://api.openshift.com/api/accounts_mgmt/v1/cluster_transfers/?search=cluster_uuid+is+%27781724a4-89b0-47b2-873d-b935037c31d5%27+and+status+is+%27accepted%27": dial tcp: lookup api.openshift.com on 172.30.0.10:53: read udp 10.128.0.10:33657->172.30.0.10:53: read: connection refused
I0420 22:52:15.408584 1 controllerstatus.go:80] name=clusterTransferController healthy=true reason=Disconnected message=failed to pull cluster transfer: unable to retrieve cluster transfer data from https://api.openshift.com/api/accounts_mgmt/v1/cluster_transfers/: Get "https://api.openshift.com/api/accounts_mgmt/v1/cluster_transfers/?search=cluster_uuid+is+%27781724a4-89b0-47b2-873d-b935037c31d5%27+and+status+is+%27accepted%27": dial tcp: lookup api.openshift.com on 172.30.0.10:53: read udp 10.128.0.10:33657->172.30.0.10:53: read: connection refused
I0420 22:52:15.411704 1 tasks_processing.go:74] worker 22 stopped.
I0420 22:52:15.411719 1 gather.go:177] gatherer "clusterconfig" function "number_of_pods_and_netnamespaces_with_sdn_annotations" took 59.677223ms to process 0 records
I0420 22:52:15.418152 1 prometheus_rules.go:88] Prometheus rules successfully created
I0420 22:52:15.418882 1 shared_informer.go:320] Caches are synced for RequestHeaderAuthRequestController
I0420 22:52:15.418884 1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I0420 22:52:15.418901 1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0420 22:52:15.419510 1 tasks_processing.go:74] worker 8 stopped.
E0420 22:52:15.419524 1 gather.go:140] gatherer "clusterconfig" function "config_maps" failed with the error: configmaps "cluster-monitoring-config" not found
E0420 22:52:15.419530 1 gather.go:140] gatherer "clusterconfig" function "config_maps" failed with the error: configmaps "gateway-mode-config" not found
E0420 22:52:15.419534 1 gather.go:140] gatherer "clusterconfig" function "config_maps" failed with the error: configmaps "insights-config" not found
I0420 22:52:15.419542 1 recorder.go:75] Recording config/configmaps/openshift-config/installer-images/images.json with fingerprint=ad286723f58bdcfc37aeba3ec5b4110c08e1af59cd34d14b4bfaab02d18e4856
I0420 22:52:15.419577 1 recorder.go:75] Recording config/configmaps/openshift-config/kube-root-ca.crt/ca.crt with fingerprint=d476c7d3f5b104863f08f481b1264dcc68cc272ecefb0ecb709b18a6afab034d
I0420 22:52:15.419584 1 recorder.go:75] Recording config/configmaps/openshift-config/openshift-install/invoker with fingerprint=76b482f683cd3ef9da02debac5b26080a5aeb06ff768ee5c21117514dff29d8a
I0420 22:52:15.419588 1 recorder.go:75] Recording config/configmaps/openshift-config/openshift-install/version with fingerprint=2ab5ab7b1b10d7fcf1197bb24dea7c90f400e4effc18c7356873209d54fdf84b
I0420 22:52:15.419592 1 recorder.go:75] Recording config/configmaps/openshift-config/openshift-service-ca.crt/service-ca.crt with fingerprint=e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
I0420 22:52:15.419629 1 recorder.go:75] Recording config/configmaps/openshift-config/rosa-brand-logo/rosa-brand-logo.svg with fingerprint=6ed8ca4dd7a8eee7249182bc006e9649ce84d76c551ddfaaa33e55d8c4cc1ed0
I0420 22:52:15.419636 1 recorder.go:75] Recording config/configmaps/kube-system/cluster-config-v1/install-config with fingerprint=ab3811c6b83fd7b8e920094cfa3080d1b4ee3c35ec4c8379437b21d27bd6608d
I0420 22:52:15.419641 1 gather.go:177] gatherer "clusterconfig" function "config_maps" took 67.485587ms to process 7 records
I0420 22:52:15.421291 1 tasks_processing.go:74] worker 28 stopped.
E0420 22:52:15.421304 1 gather.go:140] gatherer "clusterconfig" function "ingress_certificates" failed with the error: failed to get secret 'router-certs-default' in namespace 'openshift-ingress': secrets "router-certs-default" not found
E0420 22:52:15.421310 1 gather.go:140] gatherer "clusterconfig" function "ingress_certificates" failed with the error: failed to get secret '2pq5oo0ggfpe15f5ol7esv3tsdt0rprv-primary-cert-bundle-secret' in namespace 'openshift-ingress-operator': secrets "2pq5oo0ggfpe15f5ol7esv3tsdt0rprv-primary-cert-bundle-secret" not found
I0420 22:52:15.421381 1 recorder.go:75] Recording aggregated/ingress_controllers_certs with fingerprint=69ac5bfc8490b94d470cadb811d755336f799911b33712b4b6605069778226f1
I0420 22:52:15.421397 1 gather.go:177] gatherer "clusterconfig" function "ingress_certificates" took 69.013011ms to process 1 records
I0420 22:52:15.423406 1 base_controller.go:82] Caches are synced for ConfigController
I0420 22:52:15.423419 1 base_controller.go:119] Starting #1 worker of ConfigController controller ...
I0420 22:52:15.427239 1 configmapobserver.go:84] configmaps "insights-config" not found
I0420 22:52:15.458047 1 tasks_processing.go:74] worker 24 stopped.
I0420 22:52:15.458332 1 recorder.go:75] Recording config/version with fingerprint=d03a21142c7c6ed7e9dae8867283a60d78c01d697775dfdf96a1dd99088bcecf
I0420 22:52:15.458349 1 recorder.go:75] Recording config/id with fingerprint=d0df2fddaa8dfa36dd18f862c3d92b64a82e6a43ad481b0352c7b15d32ff3e4c
I0420 22:52:15.458356 1 gather.go:177] gatherer "clusterconfig" function "version" took 106.04463ms to process 2 records
I0420 22:52:15.482782 1 requests.go:205] Asking for SCA certificate with "{"arch": ["x86_64"]}" payload
W0420 22:52:15.486304 1 sca.go:161] Failed to pull SCA certs from https://api.openshift.com/api/accounts_mgmt/v1/entitlement_certificates: unable to retrieve SCA certs data from https://api.openshift.com/api/accounts_mgmt/v1/entitlement_certificates: Post "https://api.openshift.com/api/accounts_mgmt/v1/entitlement_certificates": dial tcp: lookup api.openshift.com on 172.30.0.10:53: read udp 10.128.0.10:46957->172.30.0.10:53: read: connection refused
I0420 22:52:15.486318 1 controllerstatus.go:80] name=scaController healthy=true reason=NonHTTPError message=Failed to pull SCA certs from https://api.openshift.com/api/accounts_mgmt/v1/entitlement_certificates: unable to retrieve SCA certs data from https://api.openshift.com/api/accounts_mgmt/v1/entitlement_certificates: Post "https://api.openshift.com/api/accounts_mgmt/v1/entitlement_certificates": dial tcp: lookup api.openshift.com on 172.30.0.10:53: read udp 10.128.0.10:46957->172.30.0.10:53: read: connection refused
I0420 22:52:15.492589 1 base_controller.go:82] Caches are synced for LoggingSyncer
I0420 22:52:15.492599 1 base_controller.go:119] Starting #1 worker of LoggingSyncer controller ...
W0420 22:52:16.390602 1 gather_dvo_metrics.go:210] Failed to read the DVO metrics. Trying again.
I0420 22:52:16.597394 1 gather_cluster_operator_pods_and_events.go:121] Found 16 pods with 18 containers
I0420 22:52:16.597409 1 gather_cluster_operator_pods_and_events.go:235] Maximum buffer size: 1398101 bytes
I0420 22:52:16.598257 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for dns container dns-default-s7nlh pod in namespace openshift-dns (previous: false).
I0420 22:52:16.819420 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for dns-default-s7nlh pod in namespace openshift-dns for failing operator dns (previous: false): "container \"dns\" in pod \"dns-default-s7nlh\" is waiting to start: ContainerCreating"
I0420 22:52:16.819441 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"dns\" in pod \"dns-default-s7nlh\" is waiting to start: ContainerCreating"
I0420 22:52:16.819451 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for kube-rbac-proxy container dns-default-s7nlh pod in namespace openshift-dns (previous: false).
I0420 22:52:16.821967 1 gather_cluster_operators.go:184] Unable to get configs.samples.operator.openshift.io resource due to: configs.samples.operator.openshift.io "cluster" not found
I0420 22:52:17.001377 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for dns-default-s7nlh pod in namespace openshift-dns for failing operator kube-rbac-proxy (previous: false): "container \"kube-rbac-proxy\" in pod \"dns-default-s7nlh\" is waiting to start: ContainerCreating"
I0420 22:52:17.001399 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"kube-rbac-proxy\" in pod \"dns-default-s7nlh\" is waiting to start: ContainerCreating"
I0420 22:52:17.001413 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for dns-node-resolver container node-resolver-b2zvn pod in namespace openshift-dns (previous: false).
I0420 22:52:17.202621 1 gather_cluster_operator_pods_and_events.go:280] Error: "log buffer is empty"
I0420 22:52:17.202638 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for dns-node-resolver container node-resolver-prmxn pod in namespace openshift-dns (previous: false).
W0420 22:52:17.389769 1 gather_dvo_metrics.go:210] Failed to read the DVO metrics. Trying again.
I0420 22:52:17.424513 1 tasks_processing.go:74] worker 57 stopped.
I0420 22:52:17.424563 1 recorder.go:75] Recording config/clusteroperator/console with fingerprint=7962b3522c1ffca10bc63c0a40c0ba5ee1ee6478725308c6947ee8743003ecd3
I0420 22:52:17.424608 1 recorder.go:75] Recording config/clusteroperator/csi-snapshot-controller with fingerprint=caf8ca97e59d32bf124aefd83da0fd8f2e14fe8187b300d0f63023f80afd23a1
I0420 22:52:17.424636 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/csisnapshotcontroller/cluster with fingerprint=5adc514f4b63e2f1ecc68bf6f9c0af70c5eea04522a49524e102721b1c41f80e
I0420 22:52:17.424661 1 recorder.go:75] Recording config/clusteroperator/dns with fingerprint=7537ead59386255302e50111944499e616566ec36a5bcf93c0aba6219c8602c1
I0420 22:52:17.424677 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/dns/default with fingerprint=9e7b4ce029030d3d8c3b49af92c556acdcc415000b40d3f969dbdc42c432b47f
I0420 22:52:17.424701 1 recorder.go:75] Recording config/clusteroperator/image-registry with fingerprint=b4f9261b6b6ded7d2d7da097c43fac633fa7747ba35f21eccfdf196443ddb60c
I0420 22:52:17.424775 1 recorder.go:75] Recording config/clusteroperator/ingress with fingerprint=9a596120561b706450b27f14e115239f332c8fac91a43d8272d63adac757f5f0
I0420 22:52:17.424796 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for node-resolver-prmxn pod in namespace openshift-dns for failing operator dns-node-resolver (previous: false): "container \"dns-node-resolver\" in pod \"node-resolver-prmxn\" is waiting to start: ContainerCreating"
I0420 22:52:17.424810 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/ingresscontroller/openshift-ingress-operator/default with fingerprint=f7054ec04693969cae95abd189baa5f245b53b05eb45a0442e95cee980066085
I0420 22:52:17.424808 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"dns-node-resolver\" in pod \"node-resolver-prmxn\" is waiting to start: ContainerCreating"
I0420 22:52:17.424822 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for dns-node-resolver container node-resolver-v5x95 pod in namespace openshift-dns (previous: false).
I0420 22:52:17.424832 1 recorder.go:75] Recording config/clusteroperator/insights with fingerprint=c9c4f9328d6f96bd4093066ab8ed0979920047b463040a3c28568be237063a1e
I0420 22:52:17.424861 1 recorder.go:75] Recording config/clusteroperator/kube-apiserver with fingerprint=fc798d61a9ad65470129be36eadcc6391c8d5bb54265bf5bc35015bb69d8303e
I0420 22:52:17.424877 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/kubeapiserver/cluster with fingerprint=51503bf0b784fcf65ea46bcaf1f72ac1a5c4d5dc211934f18f27871efed05762
I0420 22:52:17.424904 1 recorder.go:75] Recording config/clusteroperator/kube-controller-manager with fingerprint=1d61c0540d3a599c002712fca4947b9de9f73489e2c31f71e5cbdb9cfbce384e
I0420 22:52:17.424920 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/kubecontrollermanager/cluster with fingerprint=ce90c0d4f367d7da085074268031798382ae7c54fdcb0a21f15a4818fe308c11
I0420 22:52:17.424946 1 recorder.go:75] Recording config/clusteroperator/kube-scheduler with fingerprint=472d12252fbfe7756ff5f222eea26d06343e9fc7b9cb0757dfc55edff0eb2749
I0420 22:52:17.424961 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/kubescheduler/cluster with fingerprint=f2940fb9fd20c19951dfc295eb363b7fba0c505f5ae61f01967a063099e6b60a
I0420 22:52:17.424992 1 recorder.go:75] Recording config/clusteroperator/kube-storage-version-migrator with fingerprint=711a1270781cb6149d215ab8333a84885177a6ef129b9b1ae25eeb511546465a
I0420 22:52:17.425007 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/kubestorageversionmigrator/cluster with fingerprint=9351181aa7e6ada41ef581ab31e13516c6b934cc95710154bafb2eb222cb58db
I0420 22:52:17.425033 1 recorder.go:75] Recording config/clusteroperator/monitoring with fingerprint=b071b4bdff3db5812a11e1dcb778d3c8aa35af29969acacca48c2cd52dfdd85b
I0420 22:52:17.425221 1 recorder.go:75] Recording config/clusteroperator/network with fingerprint=c6ba336314eace1198c4e8202986ad5a5e371559908be9ddd9133c7cc3e81913
I0420 22:52:17.425238 1 recorder.go:75] Recording config/clusteroperator/network.operator.openshift.io/operatorpki/openshift-ovn-kubernetes/ovn with fingerprint=626a89d20e0deaed5b6dfb533acfe65f4bb1618bd200a703b62e60c5d16d94ab
I0420 22:52:17.425251 1 recorder.go:75] Recording config/clusteroperator/network.operator.openshift.io/operatorpki/openshift-ovn-kubernetes/signer with fingerprint=90410b16914712b85b3c4578716ad8c0ae072e688f4cd1e022bf76f20da3506d
I0420 22:52:17.425289 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/network/cluster with fingerprint=a93d15eaecb455a0e40ecb2826eeecc1533899204ddd3c3921d15ab70af7ae75
I0420 22:52:17.425329 1 recorder.go:75] Recording config/clusteroperator/node-tuning with fingerprint=862476668d6b73538cf46aee5220f6058dcdc305564dcf4c009a8cfc04000003
I0420 22:52:17.425368 1 recorder.go:75] Recording config/clusteroperator/openshift-apiserver with fingerprint=c6e29ad5d95efc9079d2b7ddd817caf02086da18c09390bf7efd090b2e088d74
I0420 22:52:17.425382 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/openshiftapiserver/cluster with fingerprint=e712e6cf27339b441e4ed1f4cde91dbde7e952698ba93407e4457db63a4a4c76
I0420 22:52:17.425410 1 recorder.go:75] Recording config/clusteroperator/openshift-controller-manager with fingerprint=c3940677bb50c069acaaac8ec4be2ce48c4420bf9cae548301e4de4169ac4b37
I0420 22:52:17.425436 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/openshiftcontrollermanager/cluster with fingerprint=d71a0f4672f9b45d9fc8293bf1687afc650fd28d32e2e30de27523fe7b4eadf7
I0420 22:52:17.425460 1 recorder.go:75] Recording config/clusteroperator/openshift-samples with fingerprint=defa41a3bb850f95275fbbba78af4dd1340354f86ca31fd0f2f8f4c58b95f8bd
I0420 22:52:17.425485 1 recorder.go:75] Recording config/clusteroperator/operator-lifecycle-manager with fingerprint=a85e651b48b9b5c080501b1bedb85a1f643b0d6f56e3d336633d2d04c8d59a93
I0420 22:52:17.425511 1 recorder.go:75] Recording config/clusteroperator/operator-lifecycle-manager-catalog with fingerprint=c134d669e158715f780a4944b7424e7869166036b2c38fe1d5f74400cced6898
I0420 22:52:17.425540 1 recorder.go:75] Recording config/clusteroperator/operator-lifecycle-manager-packageserver with fingerprint=a013410486c856cc37e4a8bb65bfa0be1e6e34278e98d05038a4821dab9b48d4
I0420 22:52:17.425574 1 recorder.go:75] Recording config/clusteroperator/service-ca with fingerprint=f7a3619cc164c92aae119cd3dad44a6126137cd9681a7ad43e040d22978f8892
I0420 22:52:17.425588 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/serviceca/cluster with fingerprint=812f7edc2cdb30e61e7f2b29454357a40b1a507a4b0c2b7729193b67f0e3b4aa
I0420 22:52:17.425628 1 recorder.go:75] Recording config/clusteroperator/storage with fingerprint=3272cf90fdf61f0311180673962a2fbe71e84a1eb975698ed68c12b6128447ae
I0420 22:52:17.425655 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/clustercsidriver/ebs.csi.aws.com with fingerprint=510064d6f6bcced87ab5bd2ddaff3d0edd7f93f4a4f7af2641f29fc53ffab21e
I0420 22:52:17.425671 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/storage/cluster with fingerprint=8e480f8c1ce1b39baac42d8ec780c57c2592929ae0c801b61ffad49ba13f33ad
I0420 22:52:17.425682 1 gather.go:177] gatherer "clusterconfig" function "operators" took 2.070815286s to process 36 records
I0420 22:52:17.626200 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for node-resolver-v5x95 pod in namespace openshift-dns for failing operator dns-node-resolver (previous: false): "container \"dns-node-resolver\" in pod \"node-resolver-v5x95\" is waiting to start: ContainerCreating"
I0420 22:52:17.626218 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"dns-node-resolver\" in pod \"node-resolver-v5x95\" is waiting to start: ContainerCreating"
I0420 22:52:17.626230 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for registry container image-registry-5f46bd49f7-j9q98 pod in namespace openshift-image-registry (previous: false).
I0420 22:52:17.801013 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for image-registry-5f46bd49f7-j9q98 pod in namespace openshift-image-registry for failing operator registry (previous: false): "container \"registry\" in pod \"image-registry-5f46bd49f7-j9q98\" is waiting to start: ContainerCreating"
I0420 22:52:17.801030 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"registry\" in pod \"image-registry-5f46bd49f7-j9q98\" is waiting to start: ContainerCreating"
I0420 22:52:17.801043 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for registry container image-registry-5f46bd49f7-r85jt pod in namespace openshift-image-registry (previous: false).
I0420 22:52:17.996159 1 gather_cluster_operator_pods_and_events.go:280] Error: "log buffer is empty"
I0420 22:52:17.996174 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for registry container image-registry-7fbc77f694-4znq5 pod in namespace openshift-image-registry (previous: false).
I0420 22:52:18.196194 1 gather_cluster_operator_pods_and_events.go:280] Error: "log buffer is empty"
I0420 22:52:18.196212 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for node-ca container node-ca-89gvr pod in namespace openshift-image-registry (previous: false).
W0420 22:52:18.389812 1 gather_dvo_metrics.go:210] Failed to read the DVO metrics. Trying again.
I0420 22:52:18.401665 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for node-ca-89gvr pod in namespace openshift-image-registry for failing operator node-ca (previous: false): "container \"node-ca\" in pod \"node-ca-89gvr\" is waiting to start: ContainerCreating"
I0420 22:52:18.401680 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"node-ca\" in pod \"node-ca-89gvr\" is waiting to start: ContainerCreating"
I0420 22:52:18.401691 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for node-ca container node-ca-g89gp pod in namespace openshift-image-registry (previous: false).
I0420 22:52:18.602938 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for node-ca-g89gp pod in namespace openshift-image-registry for failing operator node-ca (previous: false): "container \"node-ca\" in pod \"node-ca-g89gp\" is waiting to start: ContainerCreating"
I0420 22:52:18.602951 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"node-ca\" in pod \"node-ca-g89gp\" is waiting to start: ContainerCreating"
I0420 22:52:18.602960 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for node-ca container node-ca-hl9m2 pod in namespace openshift-image-registry (previous: false).
I0420 22:52:18.800049 1 gather_cluster_operator_pods_and_events.go:280] Error: "log buffer is empty"
I0420 22:52:18.800064 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for router container router-default-5d8c47946f-fsqw6 pod in namespace openshift-ingress (previous: false).
I0420 22:52:19.000969 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for router-default-5d8c47946f-fsqw6 pod in namespace openshift-ingress for failing operator router (previous: false): "container \"router\" in pod \"router-default-5d8c47946f-fsqw6\" is waiting to start: ContainerCreating"
I0420 22:52:19.000982 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"router\" in pod \"router-default-5d8c47946f-fsqw6\" is waiting to start: ContainerCreating"
I0420 22:52:19.000990 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for router container router-default-7946554f8c-ppv4w pod in namespace openshift-ingress (previous: false).
I0420 22:52:19.200210 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for router-default-7946554f8c-ppv4w pod in namespace openshift-ingress for failing operator router (previous: false): "container \"router\" in pod \"router-default-7946554f8c-ppv4w\" is waiting to start: ContainerCreating"
I0420 22:52:19.200222 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"router\" in pod \"router-default-7946554f8c-ppv4w\" is waiting to start: ContainerCreating"
I0420 22:52:19.200231 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for router container router-default-7946554f8c-rl5np pod in namespace openshift-ingress (previous: false).
W0420 22:52:19.390327 1 gather_dvo_metrics.go:210] Failed to read the DVO metrics. Trying again.
I0420 22:52:19.395584 1 gather_cluster_operator_pods_and_events.go:280] Error: "log buffer is empty"
I0420 22:52:19.395597 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for serve-healthcheck-canary container ingress-canary-46twf pod in namespace openshift-ingress-canary (previous: false).
I0420 22:52:19.601310 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for migrator container migrator-846d9b8bdc-qmg5v pod in namespace openshift-kube-storage-version-migrator (previous: false).
I0420 22:52:19.801311 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for graceful-termination container migrator-846d9b8bdc-qmg5v pod in namespace openshift-kube-storage-version-migrator (previous: false).
I0420 22:52:20.001761 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for kube-storage-version-migrator-operator container kube-storage-version-migrator-operator-74478b59c6-7lqzw pod in namespace openshift-kube-storage-version-migrator-operator (previous: false).
I0420 22:52:20.203018 1 tasks_processing.go:74] worker 30 stopped.
I0420 22:52:20.203098 1 recorder.go:75] Recording events/openshift-dns-operator with fingerprint=67666f31eab3675a353152a75ac2ed2a7c006b2ac219bfc7ac16b6b9b0c6d909
I0420 22:52:20.203159 1 recorder.go:75] Recording events/openshift-dns with fingerprint=9a431f542b5686a098a9d775287d6df87600b1e33806c77d6beb63a0acbe440a
I0420 22:52:20.203237 1 recorder.go:75] Recording events/openshift-image-registry with fingerprint=c2affa664639f7f4bdb052b3774337ba639b4744585e28104fe9b2550c5fc4b0
I0420 22:52:20.203265 1 recorder.go:75] Recording events/openshift-ingress-operator with fingerprint=4f8e9f1e042f64a786906b494c5f42bb99f2fdd8382b8ae3c219fecf72c985d3
I0420 22:52:20.203308 1 recorder.go:75] Recording events/openshift-ingress with fingerprint=7649e03271aebe72cee7235b4fcbd9143dcd53c959377ad26327094889fb366d
I0420 22:52:20.203328 1 recorder.go:75] Recording events/openshift-ingress-canary with fingerprint=a878f9899cc1b86de9b294520c25037dc16d04945eda708aa34eae3133d1568e
I0420 22:52:20.203342 1 recorder.go:75] Recording events/openshift-kube-storage-version-migrator with fingerprint=34963447bd128e6d8be2b2fb230afb7d32e6588fbf834cc4b2e512ce189d25fa
I0420 22:52:20.203387 1 recorder.go:75] Recording events/openshift-kube-storage-version-migrator-operator with fingerprint=cb691b05febbf4166ad0fcd1f1e180381a9aeb6fcc17ddb0d8c360c8301947ab
I0420 22:52:20.203394 1 recorder.go:75] Recording config/pod/openshift-ingress-canary/logs/ingress-canary-46twf/serve-healthcheck-canary_current.log with fingerprint=293fb91ee84066980d13c18594e6e2c4afb25901ce4c8290ae0770a29face7db
I0420 22:52:20.203408 1 recorder.go:75] Recording config/pod/openshift-kube-storage-version-migrator/logs/migrator-846d9b8bdc-qmg5v/migrator_current.log with fingerprint=5b26ce094235809218ab3082836434dd4f3db4bba8f2180d3ba629b71b92021a
I0420 22:52:20.203414 1 recorder.go:75] Recording config/pod/openshift-kube-storage-version-migrator/logs/migrator-846d9b8bdc-qmg5v/graceful-termination_current.log with fingerprint=cfabfe6b60923d8b7d331282d7f3791932e4950af369fdcd7cbfccc339949de6
I0420 22:52:20.203486 1 recorder.go:75] Recording config/pod/openshift-kube-storage-version-migrator-operator/logs/kube-storage-version-migrator-operator-74478b59c6-7lqzw/kube-storage-version-migrator-operator_current.log with fingerprint=1eb589e4f02f8fdb4a4a30bfed0cdb5c9fdebb097413b43d82cd85d47f9323db
I0420 22:52:20.203496 1 gather.go:177] gatherer "clusterconfig" function "operators_pods_and_events" took 4.847996866s to process 12 records
W0420 22:52:20.386013 1 gather_dvo_metrics.go:117] Unable to read metrics from endpoint "http://deployment-validation-operator-metrics.openshift-deployment-validation-operator.svc:8383": DVO metrics service was not available within the 5s timeout: context deadline exceeded
I0420 22:52:20.386039 1 tasks_processing.go:74] worker 53 stopped.
E0420 22:52:20.386055 1 gather.go:140] gatherer "clusterconfig" function "dvo_metrics" failed with the error: DVO metrics service was not available within the 5s timeout: context deadline exceeded
I0420 22:52:20.386076 1 recorder.go:75] Recording config/dvo_metrics with fingerprint=e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
W0420 22:52:20.386090 1 gather.go:155] issue recording gatherer "clusterconfig" function "dvo_metrics" result "config/dvo_metrics" because of the warning: warning: the record with the same fingerprint "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855" was already recorded at path "config/configmaps/openshift-config/openshift-service-ca.crt/service-ca.crt", recording another one with a different path "config/dvo_metrics"
I0420 22:52:20.386104 1 gather.go:177] gatherer "clusterconfig" function "dvo_metrics" took 5.032466045s to process 1 records
I0420 22:52:20.982924 1 configmapobserver.go:84] configmaps "insights-config" not found
I0420 22:52:27.993535 1 tasks_processing.go:74] worker 27 stopped.
I0420 22:52:27.993573 1 recorder.go:75] Recording config/installplans with fingerprint=7b887df561a3a9e6ef0dc672845aa5d56e348505006b7496d3a2f83892b0c95b
I0420 22:52:27.993588 1 gather.go:177] gatherer "clusterconfig" function "install_plans" took 12.640568835s to process 1 records
I0420 22:52:28.759406 1 tasks_processing.go:74] worker 31 stopped.
I0420 22:52:28.759684 1 recorder.go:75] Recording config/serviceaccounts with fingerprint=7ddf795073bc5395a95429a28409923bc71b328da94f586bee093d6f578708e6
I0420 22:52:28.759703 1 gather.go:177] gatherer "clusterconfig" function "service_accounts" took 13.406355434s to process 1 records
E0420 22:52:28.759764 1 periodic.go:254] "Unhandled Error" err="clusterconfig failed after 13.407s with: function \"machine_healthchecks\" failed with an error, function \"pod_network_connectivity_checks\" failed with an error, function \"support_secret\" failed with an error, function \"machines\" failed with an error, function \"machine_configs\" failed with an error, function \"config_maps\" failed with an error, function \"ingress_certificates\" failed with an error, function \"dvo_metrics\" failed with an error"
I0420 22:52:28.760870 1 controllerstatus.go:89] name=periodic-clusterconfig healthy=false reason=PeriodicGatherFailed message=Source clusterconfig could not be retrieved: function "machine_healthchecks" failed with an error, function "pod_network_connectivity_checks" failed with an error, function "support_secret" failed with an error, function "machines" failed with an error, function "machine_configs" failed with an error, function "config_maps" failed with an error, function "ingress_certificates" failed with an error, function "dvo_metrics" failed with an error
I0420 22:52:28.760886 1 periodic.go:216] Running workloads gatherer
I0420 22:52:28.760900 1 tasks_processing.go:45] number of workers: 2
I0420 22:52:28.760909 1 tasks_processing.go:69] worker 1 listening for tasks.
I0420 22:52:28.760915 1 tasks_processing.go:71] worker 1 working on workload_info task.
I0420 22:52:28.760916 1 tasks_processing.go:69] worker 0 listening for tasks.
I0420 22:52:28.760991 1 tasks_processing.go:71] worker 0 working on helmchart_info task.
I0420 22:52:28.784355 1 gather_workloads_info.go:278] Loaded pods in 0s, will wait 21s for image data
I0420 22:52:28.786631 1 tasks_processing.go:74] worker 0 stopped.
I0420 22:52:28.786646 1 gather.go:177] gatherer "workloads" function "helmchart_info" took 25.624808ms to process 0 records
I0420 22:52:28.794961 1 gather_workloads_info.go:387] No image sha256:730d1b6988025bef0daa3a9a5d8467ec4a26b0382cc52f91c3375b4590d3518a (11ms)
I0420 22:52:28.802272 1 gather_workloads_info.go:387] No image sha256:a043239802b3eb8b323d285193d2527fad0ecec98ca91d188a3472a2fac8ae04 (7ms)
I0420 22:52:28.809255 1 gather_workloads_info.go:387] No image sha256:084aa9b0f8a6d478549dc384d4e66da13ee9b25cc98531da861cc19dee2a9e8f (7ms)
I0420 22:52:28.817429 1 gather_workloads_info.go:387] No image sha256:0a4dfb8d4c1b3849319d45b4c54dff26a7238a2c08fcaa121f93073e95ab12e8 (8ms)
I0420 22:52:28.824892 1 gather_workloads_info.go:387] No image sha256:48e883a3932aea9457f55cd4628d21397d429d4828f42f95d1c903d9d5395bde (7ms)
I0420 22:52:28.832042 1 gather_workloads_info.go:387] No image sha256:6814f5fced0ef219d06374011c68a11a3da788a764a00a69ade435466d9ee240 (7ms)
I0420 22:52:28.839002 1 gather_workloads_info.go:387] No image sha256:e3732e356ae2324565c74cae57d5d016917314fa293ddaa3a68ee9ae030c6f07 (7ms)
I0420 22:52:28.846035 1 gather_workloads_info.go:387] No image sha256:4733236617781e3469ffffb15e4daaa1f14ea8e1c52b426a3787a4f1f2945424 (7ms)
I0420 22:52:28.853157 1 gather_workloads_info.go:387] No image sha256:e8b96d9318b3b8c9ed0afe4e6381f635c6b0c2f20772044ac68001ed49af2c87 (7ms)
I0420 22:52:28.859978 1 gather_workloads_info.go:387] No image sha256:5aaea0419169e55832cc27acfe0fe3b9513a343d6bdf71d3da1575ed322245d0 (7ms)
I0420 22:52:28.892128 1 gather_workloads_info.go:387] No image sha256:15677f0b70e6aa2dfaf088e45fc1a425c22bf6fda326b8116f87e88b6694dfab (32ms)
I0420 22:52:28.993163 1 gather_workloads_info.go:387] No image sha256:25c148fd380b1a9db3f6039d2e0eabc489a954921452391390ba9192b2325678 (101ms)
I0420 22:52:29.092514 1 gather_workloads_info.go:387] No image sha256:30e597ec5d6bb96ff70a4f8688c748b659cd4fd5d73d222e8701821d236795c5 (99ms)
I0420 22:52:29.192872 1 gather_workloads_info.go:387] No image sha256:90a8ffd9643ebb16a6a8c04bb38cf9ed58903e9d3bf836c68f399193db5edaf6 (100ms)
I0420 22:52:29.292105 1 gather_workloads_info.go:387] No image sha256:08c5a78c8a5af04c549e2273aaf4bb452a75bf038d68aa9d01bb2aff66c30e90 (99ms)
I0420 22:52:29.391896 1 gather_workloads_info.go:387] No image sha256:5ac9c549d65fc1d8bc900773bebee43e9192bcec1bb5fa46afb4597230c16ac7 (100ms)
I0420 22:52:29.493086 1 gather_workloads_info.go:387] No image sha256:d3e4d4d3324f94a97ce5110eb207cf23299ed7c5f1e8d369a6583552efd87f47 (101ms)
I0420 22:52:29.593562 1 gather_workloads_info.go:387] No image sha256:5c6d21c3f97366bc7ab57031cc027b67405a684bd804ce364ed5998b0685eaca (100ms)
I0420 22:52:29.671520 1 configmapobserver.go:84] configmaps "insights-config" not found
I0420 22:52:29.692160 1 gather_workloads_info.go:387] No image sha256:673ebc8cc22c56c8f410e011b2fa950d28cf7b6420e17fdb6580d6cb10523384 (99ms)
I0420 22:52:29.792631 1 gather_workloads_info.go:387] No image sha256:ab60623bb32f7e75fca71ef65137731cae347a21c7a4091dfd583fa00732721c (100ms)
I0420 22:52:29.871771 1 configmapobserver.go:84] configmaps "insights-config" not found
I0420 22:52:29.892035 1 gather_workloads_info.go:387] No image sha256:5ff204630794311b0b37fa7b197b933ad85d76a481bf7cdb3bcbada08f0cdcbf (99ms)
I0420 22:52:29.992555 1 gather_workloads_info.go:387] No image sha256:ce138e8cf5b96557b1864ef6f27c2608bdca59be2611804366cef7169c36291e (101ms)
I0420 22:52:30.092072 1 gather_workloads_info.go:387] No image sha256:d64bea34bf3e1bb0b3a701c3ff14e66665afc1b050f28124ad7e6888eaec3a81 (100ms)
I0420 22:52:30.192426 1 gather_workloads_info.go:387] No image sha256:89277d8d4560d71db88c2dcc67c992a24544ca21810920b609c1d49d53b4a287 (100ms)
I0420 22:52:30.293226 1 gather_workloads_info.go:387] No image sha256:5808401268394502d335281ea1a294b07210461b986b58f91d7d1f29c0029c6d (101ms)
I0420 22:52:30.391853 1 gather_workloads_info.go:387] No image sha256:55b1db6038c5beaed54c626e3343b7a8589cc0be8dc41d1a66b4deab766ff520 (99ms)
I0420 22:52:30.492658 1 gather_workloads_info.go:387] No image sha256:934d8e8c50f3c609b8eea80d1051111fe3d066fe8c65c79572072ae55fcb0a86 (101ms)
I0420 22:52:30.594473 1 gather_workloads_info.go:387] No image sha256:637d41f067a5239096fc22b135181cda5113da833f1370e7a73965e83792e93a (102ms)
I0420 22:52:30.695002 1 gather_workloads_info.go:387] No image sha256:d9ed66918db4dfba8bd354c9ede4d676449fa0eccef649b3d8945cc1da1c60e3 (101ms)
I0420 22:52:30.793257 1 gather_workloads_info.go:387] No image sha256:3bba1358d4a0ae878ff491c0c2cbfffe60649e110b40342b878fe8fa332f8858 (98ms)
I0420 22:52:30.892257 1 gather_workloads_info.go:387] No image sha256:000105ef5150e7079b90a613fb9e6193e2a6ef9b1908d2dce44f2395d4fd070f (99ms)
I0420 22:52:30.993228 1 gather_workloads_info.go:387] No image sha256:c15ca0c0ad60fe8757c2d5d1723fcdd7a1ed6c0251a90d22a7e6cae6811d01aa (101ms)
I0420 22:52:31.093596 1 gather_workloads_info.go:387] No image sha256:4b748615a43be416d52c33df03e2cdec89ededded1d6a2ade167c5a955d56e5f (100ms)
I0420 22:52:31.191592 1 gather_workloads_info.go:387] No image sha256:5b6212b8f539f08e78417d8a4b7485ca0b4e7927cacd7b752742a28841bc8ccd (98ms)
I0420 22:52:31.291926 1 gather_workloads_info.go:387] No image sha256:875c77e5d144f03fb91d8cee0259f6966683ca88d1bf818dbf4652c16b70312c (100ms)
I0420 22:52:31.392492 1 gather_workloads_info.go:387] No image sha256:e1ba458cf6f0b3606c90880da72db8ab99cd11040bae84baebe3ff2e0d1ea075 (101ms)
I0420 22:52:31.493361 1 gather_workloads_info.go:387] No image sha256:d6fbe0075cbb12bfd287c973704fadf97154c7f73e370733d976a40835e9155a (101ms)
I0420 22:52:31.493399 1 tasks_processing.go:74] worker 1 stopped.
E0420 22:52:31.493411 1 gather.go:140] gatherer "workloads" function "workload_info" failed with the error: no running pods found for the insights-runtime-extractor statefulset
I0420 22:52:31.493695 1 recorder.go:75] Recording config/workload_info with fingerprint=886f8d7e50b98e8cfb3caa4e1283894215ca869cd1cedd4ed204589b97851afd
I0420 22:52:31.493712 1 gather.go:177] gatherer "workloads" function "workload_info" took 2.732473342s to process 1 records
E0420 22:52:31.493739 1 periodic.go:254] "Unhandled Error" err="workloads failed after 2.732s with: function \"workload_info\" failed with an error"
I0420 22:52:31.494847 1 controllerstatus.go:89] name=periodic-workloads healthy=false reason=PeriodicGatherFailed message=Source workloads could not be retrieved: function "workload_info" failed with an error
I0420 22:52:31.494861 1 periodic.go:216] Running conditional gatherer
I0420 22:52:31.500256 1 requests.go:294] Making HTTP GET request at: https://console.redhat.com/api/gathering/v2/4.21.9/gathering_rules
I0420 22:52:31.507635 1 conditional_gatherer.go:107] Get "https://console.redhat.com/api/gathering/v2/4.21.9/gathering_rules": dial tcp: lookup console.redhat.com on 172.30.0.10:53: read udp 10.128.0.10:53486->172.30.0.10:53: read: connection refused
E0420 22:52:31.507889 1 conditional_gatherer.go:322] unable to update alerts cache: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory
I0420 22:52:31.507961 1 conditional_gatherer.go:384] updating version cache for conditional gatherer
I0420 22:52:31.512912 1 conditional_gatherer.go:392] cluster version is '4.21.9'
E0420 22:52:31.512925 1 conditional_gatherer.go:209] error checking conditions for a gathering rule: alerts cache is missing
E0420 22:52:31.512930 1 conditional_gatherer.go:209] error checking conditions for a gathering rule: alerts cache is missing
E0420 22:52:31.512934 1 conditional_gatherer.go:209] error checking conditions for a gathering rule: alerts cache is missing
E0420 22:52:31.512937 1 conditional_gatherer.go:209] error checking conditions for a gathering rule: alerts cache is missing
E0420 22:52:31.512940 1 conditional_gatherer.go:209] error checking conditions for a gathering rule: alerts cache is missing
E0420 22:52:31.512943 1 conditional_gatherer.go:209] error checking conditions for a gathering rule: alerts cache is missing
E0420 22:52:31.512946 1 conditional_gatherer.go:209] error checking conditions for a gathering rule: alerts cache is missing
E0420 22:52:31.512948 1 conditional_gatherer.go:209] error checking conditions for a gathering rule: alerts cache is missing
E0420 22:52:31.512951 1 conditional_gatherer.go:209] error checking conditions for a gathering rule: alerts cache is missing
I0420 22:52:31.512966 1 tasks_processing.go:45] number of workers: 3
I0420 22:52:31.512979 1 tasks_processing.go:69] worker 2 listening for tasks.
I0420 22:52:31.512983 1 tasks_processing.go:71] worker 2 working on rapid_container_logs task.
I0420 22:52:31.512991 1 tasks_processing.go:69] worker 0 listening for tasks.
I0420 22:52:31.513003 1 tasks_processing.go:69] worker 1 listening for tasks.
I0420 22:52:31.513006 1 tasks_processing.go:71] worker 0 working on remote_configuration task.
I0420 22:52:31.513018 1 tasks_processing.go:71] worker 1 working on conditional_gatherer_rules task.
I0420 22:52:31.513039 1 tasks_processing.go:74] worker 0 stopped.
I0420 22:52:31.513070 1 recorder.go:75] Recording insights-operator/remote-configuration with fingerprint=0394430c431eec4d48bb1811a90918e95161d2282c59af26f2473613cc0959db
I0420 22:52:31.513083 1 gather.go:177] gatherer "conditional" function "remote_configuration" took 1.538µs to process 1 records
I0420 22:52:31.513130 1 tasks_processing.go:74] worker 1 stopped.
I0420 22:52:31.513164 1 recorder.go:75] Recording insights-operator/conditional-gatherer-rules with fingerprint=7034af97e7e41c22e4b775abdd4b9066c8ebb19da33eb7f69f39bfd2eb5f6406
I0420 22:52:31.513174 1 gather.go:177] gatherer "conditional" function "conditional_gatherer_rules" took 1.16µs to process 1 records
I0420 22:52:31.513242 1 tasks_processing.go:74] worker 2 stopped.
I0420 22:52:31.513254 1 gather.go:177] gatherer "conditional" function "rapid_container_logs" took 250.247µs to process 0 records
I0420 22:52:31.513272 1 controllerstatus.go:89] name=periodic-conditional healthy=false reason=NotAvailable message=Get "https://console.redhat.com/api/gathering/v2/4.21.9/gathering_rules": dial tcp: lookup console.redhat.com on 172.30.0.10:53: read udp 10.128.0.10:53486->172.30.0.10:53: read: connection refused
I0420 22:52:31.513290 1 recorder.go:75] Recording insights-operator/remote-configuration.json with fingerprint=359de9c990c741675cec72fda96b5c3682221efdb4799f5eaa6e9805bcd3b5c1
I0420 22:52:31.533724 1 recorder.go:75] Recording insights-operator/gathers with fingerprint=d2dda7c47f4f526e0ab9f6cbbcbf416381ace58f1eb6c3d1b0131f2de224c4c1
I0420 22:52:31.533846 1 diskrecorder.go:70] Writing 106 records to /var/lib/insights-operator/insights-2026-04-20-225231.tar.gz
I0420 22:52:31.540012 1 diskrecorder.go:51] Wrote 106 records to disk in 6ms
I0420 22:52:31.540042 1 periodic.go:285] Gathering cluster info every 2h0m0s
I0420 22:52:31.540058 1 periodic.go:286] Configuration is dataReporting: interval: 2h0m0s, uploadEndpoint: https://console.redhat.com/api/ingress/v1/upload, storagePath: /var/lib/insights-operator, downloadEndpoint: https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/reports, conditionalGathererEndpoint: https://console.redhat.com/api/gathering/v2/%s/gathering_rules, obfuscation: []
sca: disabled: false, endpoint: https://api.openshift.com/api/accounts_mgmt/v1/entitlement_certificates, interval: 8h0m0s
alerting: disabled: false
clusterTransfer: endpoint: https://api.openshift.com/api/accounts_mgmt/v1/cluster_transfers/, interval: 12h0m0s
proxy: httpProxy: , httpsProxy: , noProxy:
I0420 22:52:43.852555 1 configmapobserver.go:84] configmaps "insights-config" not found
I0420 22:53:34.681473 1 observer_polling.go:111] Observed file "/var/run/secrets/serving-cert/tls.key" has been created (hash="8860afb872ebae2466f9e34a229191eba26724aa3445fa2efcfb0cbf08bf26b2")
W0420 22:53:34.681506 1 builder.go:160] Restart triggered because of file /var/run/secrets/serving-cert/tls.key was created
I0420 22:53:34.681563 1 simple_featuregate_reader.go:177] Shutting down feature-gate-detector
I0420 22:53:34.681592 1 genericapiserver.go:693] "[graceful-termination] pre-shutdown hooks completed" name="PreShutdownHooksStopped"