W0423 22:06:51.289581 1 cmd.go:257] Using insecure, self-signed certificates
I0423 22:06:51.803507 1 start.go:138] Unable to read service ca bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory
I0423 22:06:51.803801 1 observer_polling.go:159] Starting file observer
I0423 22:06:52.574747 1 operator.go:60] Starting insights-operator v0.0.0-master+$Format:%H$
I0423 22:06:52.574975 1 legacy_config.go:327] Current config: {"report":false,"storagePath":"/var/lib/insights-operator","interval":"2h","endpoint":"https://console.redhat.com/api/ingress/v1/upload","conditionalGathererEndpoint":"https://console.redhat.com/api/gathering/v2/%s/gathering_rules","pull_report":{"endpoint":"https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/reports","delay":"60s","timeout":"3000s","min_retry":"30s"},"impersonate":"system:serviceaccount:openshift-insights:gather","enableGlobalObfuscation":false,"ocm":{"scaEndpoint":"https://api.openshift.com/api/accounts_mgmt/v1/entitlement_certificates","scaInterval":"8h","scaDisabled":false,"clusterTransferEndpoint":"https://api.openshift.com/api/accounts_mgmt/v1/cluster_transfers/","clusterTransferInterval":"12h"},"disableInsightsAlerts":false,"processingStatusEndpoint":"https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/request/%s/status","reportEndpointTechPreview":"https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/request/%s/report"}
I0423 22:06:52.575693 1 secure_serving.go:57] Forcing use of http/1.1 only
W0423 22:06:52.575735 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.
W0423 22:06:52.575739 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.
W0423 22:06:52.575746 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.
W0423 22:06:52.575748 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.
W0423 22:06:52.575752 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.
W0423 22:06:52.575756 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.
I0423 22:06:52.575812 1 simple_featuregate_reader.go:171] Starting feature-gate-detector
I0423 22:06:52.583034 1 operator.go:125] FeatureGates initialized: knownFeatureGates=[AdditionalRoutingCapabilities AdminNetworkPolicy AlibabaPlatform AzureWorkloadIdentity BuildCSIVolumes CPMSMachineNamePrefix ConsolePluginContentSecurityPolicy ExternalOIDC ExternalOIDCWithUIDAndExtraClaimMappings GatewayAPI GatewayAPIController HighlyAvailableArbiter ImageVolume IngressControllerLBSubnetsAWS KMSv1 MachineConfigNodes ManagedBootImages ManagedBootImagesAWS MetricsCollectionProfiles NetworkDiagnosticsConfig NetworkLiveMigration NetworkSegmentation PinnedImages ProcMountType RouteAdvertisements RouteExternalCertificate ServiceAccountTokenNodeBinding SetEIPForNLBIngressController SigstoreImageVerification StoragePerformantSecurityPolicy UpgradeStatus UserNamespacesPodSecurityStandards UserNamespacesSupport VSphereMultiDisk VSphereMultiNetworks AWSClusterHostedDNS AWSClusterHostedDNSInstall AWSDedicatedHosts AWSServiceLBNetworkSecurityGroup AutomatedEtcdBackup AzureClusterHostedDNSInstall AzureDedicatedHosts AzureMultiDisk BootImageSkewEnforcement BootcNodeManagement ClusterAPIInstall ClusterAPIInstallIBMCloud ClusterMonitoringConfig ClusterVersionOperatorConfiguration DNSNameResolver DualReplica DyanmicServiceEndpointIBMCloud DynamicResourceAllocation EtcdBackendQuota EventedPLEG Example Example2 ExternalSnapshotMetadata GCPClusterHostedDNS GCPClusterHostedDNSInstall GCPCustomAPIEndpoints GCPCustomAPIEndpointsInstall ImageModeStatusReporting ImageStreamImportMode IngressControllerDynamicConfigurationManager InsightsConfig InsightsConfigAPI InsightsOnDemandDataGather IrreconcilableMachineConfig KMSEncryptionProvider MachineAPIMigration MachineAPIOperatorDisableMachineHealthCheckController ManagedBootImagesAzure ManagedBootImagesvSphere MaxUnavailableStatefulSet MinimumKubeletVersion MixedCPUsAllocation MultiArchInstallAzure MultiDiskSetup MutatingAdmissionPolicy NewOLM NewOLMCatalogdAPIV1Metas NewOLMOwnSingleNamespace NewOLMPreflightPermissionChecks NewOLMWebhookProviderOpenshiftServiceCA NoRegistryClusterOperations NodeSwap NutanixMultiSubnets OVNObservability OpenShiftPodSecurityAdmission PreconfiguredUDNAddresses SELinuxMount ShortCertRotation SignatureStores SigstoreImageVerificationPKI TranslateStreamCloseWebsocketRequests VSphereConfigurableMaxAllowedBlockVolumesPerNode VSphereHostVMGroupZonal VSphereMixedNodeEnv VolumeAttributesClass VolumeGroupSnapshot]
I0423 22:06:52.583072 1 event.go:377] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-insights", Name:"insights-operator", UID:"2aefc244-c55f-4710-856d-e28589cde935", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'FeatureGatesInitialized' FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdditionalRoutingCapabilities", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BuildCSIVolumes", "CPMSMachineNamePrefix", "ConsolePluginContentSecurityPolicy", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GatewayAPI", "GatewayAPIController", "HighlyAvailableArbiter", "ImageVolume", "IngressControllerLBSubnetsAWS", "KMSv1", "MachineConfigNodes", "ManagedBootImages", "ManagedBootImagesAWS", "MetricsCollectionProfiles", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "PinnedImages", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SetEIPForNLBIngressController", "SigstoreImageVerification", "StoragePerformantSecurityPolicy", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiDisk", "VSphereMultiNetworks"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AWSClusterHostedDNSInstall", "AWSDedicatedHosts", "AWSServiceLBNetworkSecurityGroup", "AutomatedEtcdBackup", "AzureClusterHostedDNSInstall", "AzureDedicatedHosts", "AzureMultiDisk", "BootImageSkewEnforcement", "BootcNodeManagement", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ClusterVersionOperatorConfiguration", "DNSNameResolver", "DualReplica", "DyanmicServiceEndpointIBMCloud", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "Example2", "ExternalSnapshotMetadata", "GCPClusterHostedDNS", "GCPClusterHostedDNSInstall", "GCPCustomAPIEndpoints", "GCPCustomAPIEndpointsInstall", "ImageModeStatusReporting", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "IrreconcilableMachineConfig", "KMSEncryptionProvider", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "ManagedBootImagesAzure", "ManagedBootImagesvSphere", "MaxUnavailableStatefulSet", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "MultiDiskSetup", "MutatingAdmissionPolicy", "NewOLM", "NewOLMCatalogdAPIV1Metas", "NewOLMOwnSingleNamespace", "NewOLMPreflightPermissionChecks", "NewOLMWebhookProviderOpenshiftServiceCA", "NoRegistryClusterOperations", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PreconfiguredUDNAddresses", "SELinuxMount", "ShortCertRotation", "SignatureStores", "SigstoreImageVerificationPKI", "TranslateStreamCloseWebsocketRequests", "VSphereConfigurableMaxAllowedBlockVolumesPerNode", "VSphereHostVMGroupZonal", "VSphereMixedNodeEnv", "VolumeAttributesClass", "VolumeGroupSnapshot"}}
I0423 22:06:52.596829 1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
I0423 22:06:52.596842 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController
I0423 22:06:52.596849 1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
I0423 22:06:52.596849 1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I0423 22:06:52.596914 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I0423 22:06:52.596914 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0423 22:06:52.597123 1 dynamic_serving_content.go:135] "Starting controller" name="serving-cert::/tmp/serving-cert-1219527167/tls.crt::/tmp/serving-cert-1219527167/tls.key"
I0423 22:06:52.597405 1 secure_serving.go:213] Serving securely on [::]:8443
I0423 22:06:52.597433 1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
W0423 22:06:52.606880 1 configmapobserver.go:64] Cannot get the configuration config map: configmaps "insights-config" not found. Default configuration is used.
I0423 22:06:52.606910 1 secretconfigobserver.go:216] Legacy configuration set: enabled=false endpoint=https://console.redhat.com/api/ingress/v1/upload conditional_gatherer_endpoint=https://console.redhat.com/api/gathering/v2/%s/gathering_rules interval=2h0m0s token=false reportEndpoint=https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/reports initialPollingDelay=1m0s minRetryTime=30s pollingTimeout=50m0s processingStatusEndpoint=https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/request/%s/status
I0423 22:06:52.607022 1 base_controller.go:76] Waiting for caches to sync for ConfigController
I0423 22:06:52.620493 1 secretconfigobserver.go:249] Found cloud.openshift.com token
I0423 22:06:52.620514 1 secretconfigobserver.go:204] Legacy configuration updated: enabled=true endpoint=https://console.redhat.com/api/ingress/v1/upload conditional_gatherer_endpoint=https://console.redhat.com/api/gathering/v2/%s/gathering_rules interval=2h0m0s token=true reportEndpoint=https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/reports initialPollingDelay=1m0s minRetryTime=30s pollingTimeout=50m0s processingStatusEndpoint=https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/request/%s/status
I0423 22:06:52.630483 1 secretconfigobserver.go:119] support secret does not exist
I0423 22:06:52.644396 1 secretconfigobserver.go:249] Found cloud.openshift.com token
I0423 22:06:52.654569 1 secretconfigobserver.go:119] support secret does not exist
I0423 22:06:52.656524 1 recorder.go:161] Pruning old reports every 7h48m13s, max age is 288h0m0s
I0423 22:06:52.665405 1 controllerstatus.go:80] name=insightsuploader healthy=true reason= message=
I0423 22:06:52.665412 1 controllerstatus.go:80] name=insightsreport healthy=true reason= message=
I0423 22:06:52.665423 1 insightsreport.go:296] Starting report retriever
I0423 22:06:52.665426 1 insightsuploader.go:86] Reporting status periodically to https://console.redhat.com/api/ingress/v1/upload every 2h0m0s, starting in 1m30s
I0423 22:06:52.665430 1 insightsreport.go:298] Insights analysis reports will be downloaded from the https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/reports endpoint with a delay of 1m0s
I0423 22:06:52.665445 1 periodic.go:209] Running clusterconfig gatherer
I0423 22:06:52.665493 1 tasks_processing.go:45] number of workers: 64
I0423 22:06:52.665517 1 tasks_processing.go:69] worker 2 listening for tasks.
I0423 22:06:52.665527 1 tasks_processing.go:69] worker 1 listening for tasks.
I0423 22:06:52.665530 1 tasks_processing.go:69] worker 0 listening for tasks.
I0423 22:06:52.665533 1 tasks_processing.go:69] worker 13 listening for tasks.
I0423 22:06:52.665537 1 tasks_processing.go:69] worker 3 listening for tasks.
I0423 22:06:52.665542 1 tasks_processing.go:71] worker 13 working on jaegers task.
I0423 22:06:52.665543 1 tasks_processing.go:69] worker 14 listening for tasks.
I0423 22:06:52.665543 1 tasks_processing.go:71] worker 0 working on machine_healthchecks task.
I0423 22:06:52.665549 1 tasks_processing.go:69] worker 5 listening for tasks.
I0423 22:06:52.665554 1 tasks_processing.go:69] worker 46 listening for tasks.
I0423 22:06:52.665557 1 tasks_processing.go:69] worker 10 listening for tasks.
I0423 22:06:52.665561 1 tasks_processing.go:69] worker 11 listening for tasks.
I0423 22:06:52.665556 1 tasks_processing.go:69] worker 45 listening for tasks.
I0423 22:06:52.665537 1 tasks_processing.go:69] worker 24 listening for tasks.
I0423 22:06:52.665568 1 tasks_processing.go:69] worker 47 listening for tasks.
I0423 22:06:52.665568 1 tasks_processing.go:69] worker 25 listening for tasks.
I0423 22:06:52.665573 1 tasks_processing.go:69] worker 35 listening for tasks.
I0423 22:06:52.665576 1 tasks_processing.go:69] worker 36 listening for tasks.
I0423 22:06:52.665579 1 tasks_processing.go:69] worker 48 listening for tasks.
I0423 22:06:52.665579 1 tasks_processing.go:69] worker 26 listening for tasks.
I0423 22:06:52.665585 1 tasks_processing.go:69] worker 27 listening for tasks.
I0423 22:06:52.665587 1 tasks_processing.go:69] worker 37 listening for tasks.
I0423 22:06:52.665590 1 tasks_processing.go:69] worker 49 listening for tasks.
I0423 22:06:52.665582 1 tasks_processing.go:69] worker 12 listening for tasks.
I0423 22:06:52.665595 1 tasks_processing.go:69] worker 29 listening for tasks.
I0423 22:06:52.665595 1 tasks_processing.go:69] worker 8 listening for tasks.
I0423 22:06:52.665584 1 tasks_processing.go:69] worker 9 listening for tasks.
I0423 22:06:52.665591 1 tasks_processing.go:69] worker 28 listening for tasks.
I0423 22:06:52.665607 1 tasks_processing.go:69] worker 42 listening for tasks.
I0423 22:06:52.665608 1 tasks_processing.go:69] worker 31 listening for tasks.
I0423 22:06:52.665609 1 tasks_processing.go:69] worker 6 listening for tasks.
I0423 22:06:52.665615 1 tasks_processing.go:69] worker 33 listening for tasks.
I0423 22:06:52.665614 1 tasks_processing.go:69] worker 44 listening for tasks.
I0423 22:06:52.665620 1 tasks_processing.go:69] worker 34 listening for tasks.
I0423 22:06:52.665600 1 tasks_processing.go:69] worker 38 listening for tasks.
I0423 22:06:52.665625 1 tasks_processing.go:69] worker 53 listening for tasks.
I0423 22:06:52.665628 1 tasks_processing.go:71] worker 2 working on sap_datahubs task.
I0423 22:06:52.665630 1 tasks_processing.go:69] worker 60 listening for tasks.
I0423 22:06:52.665632 1 tasks_processing.go:69] worker 54 listening for tasks.
I0423 22:06:52.665632 1 tasks_processing.go:71] worker 1 working on ingress task.
I0423 22:06:52.665638 1 tasks_processing.go:69] worker 61 listening for tasks.
I0423 22:06:52.665638 1 tasks_processing.go:69] worker 40 listening for tasks.
I0423 22:06:52.665643 1 tasks_processing.go:69] worker 21 listening for tasks.
I0423 22:06:52.665641 1 tasks_processing.go:69] worker 55 listening for tasks.
I0423 22:06:52.665646 1 tasks_processing.go:69] worker 15 listening for tasks.
I0423 22:06:52.665648 1 tasks_processing.go:69] worker 20 listening for tasks.
I0423 22:06:52.665651 1 tasks_processing.go:69] worker 23 listening for tasks.
I0423 22:06:52.665648 1 tasks_processing.go:69] worker 58 listening for tasks.
I0423 22:06:52.665650 1 tasks_processing.go:69] worker 39 listening for tasks.
I0423 22:06:52.665656 1 tasks_processing.go:69] worker 18 listening for tasks.
I0423 22:06:52.665621 1 tasks_processing.go:69] worker 62 listening for tasks.
I0423 22:06:52.665639 1 tasks_processing.go:69] worker 16 listening for tasks.
I0423 22:06:52.665600 1 tasks_processing.go:69] worker 50 listening for tasks.
I0423 22:06:52.665665 1 tasks_processing.go:69] worker 22 listening for tasks.
I0423 22:06:52.665596 1 tasks_processing.go:69] worker 7 listening for tasks.
I0423 22:06:52.665609 1 tasks_processing.go:69] worker 51 listening for tasks.
I0423 22:06:52.665606 1 tasks_processing.go:69] worker 43 listening for tasks.
I0423 22:06:52.665618 1 tasks_processing.go:69] worker 59 listening for tasks.
I0423 22:06:52.665620 1 tasks_processing.go:69] worker 52 listening for tasks.
I0423 22:06:52.665601 1 tasks_processing.go:69] worker 30 listening for tasks.
I0423 22:06:52.665628 1 tasks_processing.go:69] worker 63 listening for tasks.
I0423 22:06:52.665623 1 tasks_processing.go:69] worker 41 listening for tasks.
I0423 22:06:52.665626 1 tasks_processing.go:69] worker 32 listening for tasks.
I0423 22:06:52.665636 1 tasks_processing.go:69] worker 56 listening for tasks.
I0423 22:06:52.665641 1 tasks_processing.go:69] worker 57 listening for tasks.
I0423 22:06:52.665636 1 tasks_processing.go:69] worker 19 listening for tasks.
I0423 22:06:52.665650 1 tasks_processing.go:69] worker 17 listening for tasks.
I0423 22:06:52.665545 1 tasks_processing.go:69] worker 4 listening for tasks.
I0423 22:06:52.665730 1 tasks_processing.go:71] worker 26 working on config_maps task.
I0423 22:06:52.665734 1 tasks_processing.go:71] worker 12 working on storage_classes task.
I0423 22:06:52.665737 1 tasks_processing.go:71] worker 29 working on machine_configs task.
I0423 22:06:52.665743 1 tasks_processing.go:71] worker 15 working on sap_config task.
I0423 22:06:52.665748 1 tasks_processing.go:71] worker 31 working on operators_pods_and_events task.
I0423 22:06:52.665751 1 tasks_processing.go:71] worker 37 working on openstack_dataplanedeployments task.
I0423 22:06:52.665757 1 tasks_processing.go:71] worker 8 working on machine_autoscalers task.
I0423 22:06:52.665783 1 tasks_processing.go:71] worker 45 working on storage_cluster task.
I0423 22:06:52.665789 1 tasks_processing.go:71] worker 49 working on clusterroles task.
I0423 22:06:52.665796 1 tasks_processing.go:71] worker 44 working on node_logs task.
I0423 22:06:52.665806 1 tasks_processing.go:71] worker 5 working on overlapping_namespace_uids task.
I0423 22:06:52.665816 1 tasks_processing.go:71] worker 51 working on validating_webhook_configurations task.
I0423 22:06:52.665826 1 tasks_processing.go:71] worker 60 working on pod_network_connectivity_checks task.
I0423 22:06:52.665843 1 tasks_processing.go:71] worker 3 working on qemu_kubevirt_launcher_logs task.
I0423 22:06:52.665738 1 tasks_processing.go:71] worker 4 working on machine_config_pools task.
I0423 22:06:52.665891 1 tasks_processing.go:71] worker 54 working on openshift_logging task.
I0423 22:06:52.665943 1 tasks_processing.go:71] worker 61 working on container_runtime_configs task.
I0423 22:06:52.665952 1 tasks_processing.go:71] worker 24 working on feature_gates task.
I0423 22:06:52.665958 1 tasks_processing.go:71] worker 59 working on oauths task.
I0423 22:06:52.665983 1 tasks_processing.go:71] worker 9 working on silenced_alerts task.
W0423 22:06:52.666009 1 gather_silenced_alerts.go:38] Unable to load alerts client, no alerts will be collected: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory
I0423 22:06:52.666022 1 tasks_processing.go:71] worker 9 working on image_registries task.
I0423 22:06:52.666104 1 tasks_processing.go:71] worker 46 working on version task.
I0423 22:06:52.666111 1 tasks_processing.go:71] worker 30 working on monitoring_persistent_volumes task.
I0423 22:06:52.666122 1 tasks_processing.go:71] worker 47 working on metrics task.
I0423 22:06:52.666144 1 tasks_processing.go:71] worker 21 working on ingress_certificates task.
I0423 22:06:52.666154 1 tasks_processing.go:71] worker 14 working on openshift_machine_api_events task.
I0423 22:06:52.665784 1 tasks_processing.go:71] worker 53 working on mutating_webhook_configurations task.
I0423 22:06:52.666232 1 tasks_processing.go:71] worker 40 working on machine_sets task.
I0423 22:06:52.666257 1 tasks_processing.go:71] worker 27 working on tsdb_status task.
I0423 22:06:52.666283 1 tasks_processing.go:71] worker 20 working on pdbs task.
W0423 22:06:52.666290 1 gather_prometheus_tsdb_status.go:38] Unable to load metrics client, tsdb status cannot be collected: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory
I0423 22:06:52.666299 1 tasks_processing.go:71] worker 34 working on olm_operators task.
I0423 22:06:52.666305 1 tasks_processing.go:71] worker 23 working on crds task.
I0423 22:06:52.666309 1 tasks_processing.go:71] worker 35 working on openstack_controlplanes task.
W0423 22:06:52.666145 1 gather_most_recent_metrics.go:64] Unable to load metrics client, no metrics will be collected: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory
I0423 22:06:52.665793 1 tasks_processing.go:71] worker 33 working on active_alerts task.
W0423 22:06:52.666439 1 gather_active_alerts.go:54] Unable to load alerts client, no alerts will be collected: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory
I0423 22:06:52.665947 1 tasks_processing.go:71] worker 41 working on nodenetworkstates task.
I0423 22:06:52.666451 1 tasks_processing.go:71] worker 18 working on aggregated_monitoring_cr_names task.
I0423 22:06:52.666471 1 tasks_processing.go:71] worker 38 working on operators task.
I0423 22:06:52.666503 1 tasks_processing.go:71] worker 32 working on proxies task.
I0423 22:06:52.666538 1 tasks_processing.go:71] worker 63 working on machines task.
I0423 22:06:52.666551 1 tasks_processing.go:71] worker 28 working on infrastructures task.
I0423 22:06:52.665953 1 tasks_processing.go:71] worker 43 working on container_images task.
I0423 22:06:52.666300 1 tasks_processing.go:71] worker 62 working on nodes task.
I0423 22:06:52.665940 1 tasks_processing.go:71] worker 25 working on cost_management_metrics_configs task.
I0423 22:06:52.667048 1 tasks_processing.go:71] worker 50 working on networks task.
I0423 22:06:52.667060 1 tasks_processing.go:71] worker 22 working on authentication task.
I0423 22:06:52.667069 1 tasks_processing.go:71] worker 58 working on certificate_signing_requests task.
I0423 22:06:52.667101 1 tasks_processing.go:71] worker 16 working on image task.
I0423 22:06:52.667104 1 tasks_processing.go:71] worker 7 working on openstack_version task.
I0423 22:06:52.667302 1 tasks_processing.go:71] worker 36 working on sap_pods task.
I0423 22:06:52.667324 1 gather.go:177] gatherer "clusterconfig" function "silenced_alerts" took 25.947µs to process 0 records
I0423 22:06:52.665789 1 tasks_processing.go:71] worker 6 working on schedulers task.
I0423 22:06:52.667336 1 gather.go:177] gatherer "clusterconfig" function "tsdb_status" took 28.43µs to process 0 records
I0423 22:06:52.667343 1 gather.go:177] gatherer "clusterconfig" function "metrics" took 291.06µs to process 0 records
I0423 22:06:52.667348 1 gather.go:177] gatherer "clusterconfig" function "active_alerts" took 28.178µs to process 0 records
I0423 22:06:52.667357 1 tasks_processing.go:71] worker 17 working on service_accounts task.
I0423 22:06:52.667363 1 tasks_processing.go:74] worker 27 stopped.
I0423 22:06:52.667370 1 tasks_processing.go:74] worker 47 stopped.
I0423 22:06:52.667224 1 tasks_processing.go:71] worker 39 working on number_of_pods_and_netnamespaces_with_sdn_annotations task.
I0423 22:06:52.667379 1 tasks_processing.go:71] worker 56 working on openstack_dataplanenodesets task.
I0423 22:06:52.667379 1 tasks_processing.go:71] worker 48 working on support_secret task.
I0423 22:06:52.665739 1 tasks_processing.go:71] worker 55 working on cluster_apiserver task.
I0423 22:06:52.665731 1 tasks_processing.go:71] worker 42 working on nodenetworkconfigurationpolicies task.
I0423 22:06:52.667438 1 tasks_processing.go:71] worker 10 working on image_pruners task.
I0423 22:06:52.667452 1 tasks_processing.go:71] worker 11 working on install_plans task.
I0423 22:06:52.667535 1 tasks_processing.go:71] worker 19 working on ceph_cluster task.
I0423 22:06:52.667376 1 tasks_processing.go:71] worker 57 working on lokistack task.
I0423 22:06:52.667359 1 tasks_processing.go:74] worker 33 stopped.
I0423 22:06:52.666107 1 tasks_processing.go:71] worker 52 working on dvo_metrics task.
I0423 22:06:52.678913 1 tasks_processing.go:74] worker 2 stopped.
I0423 22:06:52.678933 1 gather.go:177] gatherer "clusterconfig" function "sap_datahubs" took 13.276144ms to process 0 records
I0423 22:06:52.678978 1 tasks_processing.go:74] worker 8 stopped.
I0423 22:06:52.678991 1 gather.go:177] gatherer "clusterconfig" function "machine_autoscalers" took 13.206654ms to process 0 records
I0423 22:06:52.679134 1 tasks_processing.go:74] worker 0 stopped.
E0423 22:06:52.679147 1 gather.go:140] gatherer "clusterconfig" function "machine_healthchecks" failed with the error: machinehealthchecks.machine.openshift.io is forbidden: User "system:serviceaccount:openshift-insights:gather" cannot list resource "machinehealthchecks" in API group "machine.openshift.io" at the cluster scope
I0423 22:06:52.679161 1 gather.go:177] gatherer "clusterconfig" function "machine_healthchecks" took 13.574739ms to process 0 records
I0423 22:06:52.683091 1 tasks_processing.go:74] worker 45 stopped.
I0423 22:06:52.683108 1 gather.go:177] gatherer "clusterconfig" function "storage_cluster" took 17.285875ms to process 0 records
I0423 22:06:52.686438 1 tasks_processing.go:74] worker 13 stopped.
I0423 22:06:52.686451 1 gather.go:177] gatherer "clusterconfig" function "jaegers" took 20.885084ms to process 0 records
I0423 22:06:52.686478 1 tasks_processing.go:74] worker 61 stopped.
I0423 22:06:52.686489 1 gather.go:177] gatherer "clusterconfig" function "container_runtime_configs" took 20.520137ms to process 0 records
I0423 22:06:52.686654 1 controller.go:128] Initializing last reported time to 0001-01-01T00:00:00Z
I0423 22:06:52.686670 1 controller.go:212] Source periodic-clusterconfig *controllerstatus.Simple is not ready
I0423 22:06:52.686674 1 controller.go:212] Source periodic-conditional *controllerstatus.Simple is not ready
I0423 22:06:52.686677 1 controller.go:212] Source periodic-workloads *controllerstatus.Simple is not ready
I0423 22:06:52.686706 1 controller.go:489] The operator is still being initialized
I0423 22:06:52.686717 1 controller.go:512] The operator is healthy
I0423 22:06:52.696991 1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I0423 22:06:52.697009 1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0423 22:06:52.697026 1 shared_informer.go:320] Caches are synced for RequestHeaderAuthRequestController
I0423 22:06:52.697421 1 tasks_processing.go:74] worker 59 stopped.
I0423 22:06:52.698342 1 recorder.go:75] Recording config/oauth with fingerprint=36221b8cd3cbb2d2a755e6d9b67cbde51c3600ee114b7527d70cc4f4ef3a9072
I0423 22:06:52.698370 1 gather.go:177] gatherer "clusterconfig" function "oauths" took 31.450185ms to process 1 records
I0423 22:06:52.705321 1 tasks_processing.go:74] worker 40 stopped.
I0423 22:06:52.705335 1 gather.go:177] gatherer "clusterconfig" function "machine_sets" took 39.07294ms to process 0 records
I0423 22:06:52.705457 1 tasks_processing.go:74] worker 15 stopped.
I0423 22:06:52.705477 1 gather.go:177] gatherer "clusterconfig" function "sap_config" took 39.704244ms to process 0 records
I0423 22:06:52.705656 1 tasks_processing.go:74] worker 1 stopped.
I0423 22:06:52.705790 1 recorder.go:75] Recording config/ingress with fingerprint=e8df5744c344ab3c33052fec4033f99f69c1696d438eaa815e6d420806782f5d
I0423 22:06:52.705801 1 gather.go:177] gatherer "clusterconfig" function "ingress" took 40.007851ms to process 1 records
I0423 22:06:52.707613 1 base_controller.go:82] Caches are synced for ConfigController
I0423 22:06:52.707623 1 base_controller.go:119] Starting #1 worker of ConfigController controller ...
I0423 22:06:52.721751 1 tasks_processing.go:74] worker 37 stopped.
I0423 22:06:52.721770 1 gather.go:177] gatherer "clusterconfig" function "openstack_dataplanedeployments" took 55.988649ms to process 0 records
I0423 22:06:52.721803 1 tasks_processing.go:74] worker 44 stopped.
I0423 22:06:52.721815 1 gather.go:177] gatherer "clusterconfig" function "node_logs" took 55.996491ms to process 0 records
I0423 22:06:52.721829 1 tasks_processing.go:74] worker 4 stopped.
I0423 22:06:52.721838 1 gather.go:177] gatherer "clusterconfig" function "machine_config_pools" took 55.940542ms to process 0 records
E0423 22:06:52.721847 1 gather.go:140] gatherer "clusterconfig" function "machines" failed with the error: machines.machine.openshift.io is forbidden: User "system:serviceaccount:openshift-insights:gather" cannot list resource "machines" in API group "machine.openshift.io" at the cluster scope
I0423 22:06:52.721876 1 tasks_processing.go:74] worker 63 stopped.
I0423 22:06:52.721880 1 gather.go:177] gatherer "clusterconfig" function "machines" took 55.291676ms to process 0 records
I0423 22:06:52.721896 1 gather.go:177] gatherer "clusterconfig" function "openstack_controlplanes" took 55.527442ms to process 0 records
I0423 22:06:52.721902 1 gather_logs.go:145] no pods in namespace were found
I0423 22:06:52.721910 1 gather.go:177] gatherer "clusterconfig" function "openshift_machine_api_events" took 55.684007ms to process 0 records
I0423 22:06:52.721916 1 tasks_processing.go:74] worker 35 stopped.
I0423 22:06:52.721916 1 gather.go:177] gatherer "clusterconfig" function "qemu_kubevirt_launcher_logs" took 56.056622ms to process 0 records
I0423 22:06:52.721922 1 tasks_processing.go:74] worker 3 stopped.
I0423 22:06:52.721932 1 tasks_processing.go:74] worker 14 stopped.
I0423 22:06:52.722095 1 tasks_processing.go:74] worker 42 stopped.
I0423 22:06:52.722110 1 gather.go:177] gatherer "clusterconfig" function "nodenetworkconfigurationpolicies" took 54.664878ms to process 0 records
E0423 22:06:52.722118 1 gather.go:140] gatherer "clusterconfig" function "pod_network_connectivity_checks" failed with the error: the server could not find the requested resource (get podnetworkconnectivitychecks.controlplane.operator.openshift.io)
I0423 22:06:52.722124 1 gather.go:177] gatherer "clusterconfig" function "pod_network_connectivity_checks" took 56.276686ms to process 0 records
I0423 22:06:52.722129 1 tasks_processing.go:74] worker 60 stopped.
I0423 22:06:52.722152 1 tasks_processing.go:74] worker 24 stopped.
I0423 22:06:52.722258 1 recorder.go:75] Recording config/featuregate with fingerprint=4c44d7f95cd4f13bf20f89a0892bc7af3e12144d7b5b79e058d5f7f519d1735c
I0423 22:06:52.722270 1 gather.go:177] gatherer "clusterconfig" function "feature_gates" took 56.188381ms to process 1 records
I0423 22:06:52.723592 1 tasks_processing.go:74] worker 19 stopped.
I0423 22:06:52.723609 1 gather.go:177] gatherer "clusterconfig" function "ceph_cluster" took 55.991307ms to process 0 records
I0423 22:06:52.723618 1 gather.go:177] gatherer "clusterconfig" function "lokistack" took 55.928092ms to process 0 records
I0423 22:06:52.723623 1 tasks_processing.go:74] worker 57 stopped.
I0423 22:06:52.723739 1 tasks_processing.go:74] worker 54 stopped.
I0423 22:06:52.723757 1 gather.go:177] gatherer "clusterconfig" function "openshift_logging" took 57.833729ms to process 0 records
I0423 22:06:52.724734 1 tasks_processing.go:74] worker 36 stopped.
I0423 22:06:52.724750 1 gather.go:177] gatherer "clusterconfig" function "sap_pods" took 57.416023ms to process 0 records
I0423 22:06:52.724958 1 tasks_processing.go:74] worker 56 stopped.
I0423 22:06:52.724972 1 gather.go:177] gatherer "clusterconfig" function "openstack_dataplanenodesets" took 57.570694ms to process 0 records
I0423 22:06:52.724995 1 tasks_processing.go:74] worker 41 stopped.
I0423 22:06:52.725006 1 gather.go:177] gatherer "clusterconfig" function "nodenetworkstates" took 58.541397ms to process 0 records
I0423 22:06:52.725096 1 tasks_processing.go:74] worker 20 stopped.
I0423 22:06:52.725254 1 recorder.go:75] Recording config/pdbs/openshift-image-registry/image-registry with fingerprint=5f8e7146f9ad5b37eb697d667349613890e1e9997ad36b129d2d9d6cc01c89ce
I0423 22:06:52.725299 1 recorder.go:75] Recording config/pdbs/openshift-ingress/router-default with fingerprint=2b6eec0ce594227443e0272af505ce9e7080f0a0288f91f76fbd161553f7fc74
I0423 22:06:52.725332 1 recorder.go:75] Recording config/pdbs/openshift-operator-lifecycle-manager/packageserver-pdb with fingerprint=788619520ada58ddfbbd90592c2de0c734e9bef3490a9ec96d869da8f286228a
I0423 22:06:52.725347 1 gather.go:177] gatherer "clusterconfig" function "pdbs" took 58.794869ms to process 3 records
I0423 22:06:52.725356 1 gather.go:177] gatherer "clusterconfig" function "cost_management_metrics_configs" took 58.312033ms to process 0 records
I0423 22:06:52.725363 1 tasks_processing.go:74] worker 25 stopped.
I0423 22:06:52.725748 1 tasks_processing.go:74] worker 28 stopped.
I0423 22:06:52.726718 1 recorder.go:75] Recording config/infrastructure with fingerprint=86c12a459109292ed83ae8841cac9dfa9911657d7e3c27aedea2e2e7cfb37755
I0423 22:06:52.726736 1 gather.go:177] gatherer "clusterconfig" function "infrastructures" took 59.176984ms to process 1 records
I0423 22:06:52.726750 1 gather.go:177] gatherer "clusterconfig" function "openstack_version" took 58.772037ms to process 0 records
I0423 22:06:52.726759 1 gather.go:177] gatherer "clusterconfig" function "monitoring_persistent_volumes" took 59.777487ms to process 0 records
I0423 22:06:52.726825 1 tasks_processing.go:74] worker 7 stopped.
I0423 22:06:52.726832 1 tasks_processing.go:74] worker 30 stopped.
I0423 22:06:52.726853 1 recorder.go:75] Recording config/image with fingerprint=d2ca3e0723c18a99b4f405057007bfdcb4227730303f2f95d4c1e5ac0f9e2ceb
I0423 22:06:52.726880 1 tasks_processing.go:74] worker 16 stopped.
I0423 22:06:52.726886 1 gather.go:177] gatherer "clusterconfig" function "image" took 59.290234ms to process 1 records
I0423 22:06:52.726900 1 recorder.go:75] Recording config/namespaces_with_overlapping_uids with fingerprint=4f53cda18c2baa0c0354bb5f9a3ecbe5ed12ab4d8e11ba873c2f11161202b945
I0423 22:06:52.726911 1 gather.go:177] gatherer "clusterconfig" function "overlapping_namespace_uids" took 60.601495ms to process 1 records
I0423 22:06:52.726932 1 tasks_processing.go:74] worker 5 stopped.
I0423 22:06:52.726973 1 recorder.go:75] Recording config/proxy with fingerprint=f465616fd6dc33f7e3f6de83f005e161468c58dda87973b179656cb72f902d74
I0423 22:06:52.726983 1 gather.go:177] gatherer "clusterconfig" function "proxies" took 60.090707ms to process 1 records
I0423 22:06:52.726991 1 tasks_processing.go:74] worker 32 stopped.
I0423 22:06:52.727216 1 tasks_processing.go:74] worker 9 stopped.
I0423 22:06:52.727830 1 recorder.go:75] Recording config/clusteroperator/imageregistry.operator.openshift.io/config/cluster with fingerprint=bba15dd9ecc36bfda9088fd316a8f69022a05fa63365252132de48d5ca1cee9a
I0423 22:06:52.727875 1 gather.go:177] gatherer "clusterconfig" function "image_registries" took 61.182363ms to process 1 records
I0423 22:06:52.727961 1 tasks_processing.go:74] worker 50 stopped.
I0423 22:06:52.728628 1 recorder.go:75] Recording config/network with fingerprint=f6d38ff8d412fdd684fbc5d7542e770e4765d0be464fadc1f9218d762b461b5f
I0423 22:06:52.728705 1 gather.go:177] gatherer "clusterconfig" function "networks" took 60.331774ms to process 1 records
I0423 22:06:52.729688 1 tasks_processing.go:74] worker 53 stopped.
I0423 22:06:52.732103 1 configmapobserver.go:84] configmaps "insights-config" not found
I0423 22:06:52.732103 1 recorder.go:75] Recording config/mutatingwebhookconfigurations/aws-pod-identity with fingerprint=e52a7c5cf2d80c2f46cfd7255202a4e5a5e6347428a48fa032ac491673f24ad3
I0423 22:06:52.732185 1 recorder.go:75] Recording config/mutatingwebhookconfigurations/sre-podimagespec-mutation with fingerprint=1134908cf6da6bea95f8ff5cd24d05365d4599dde2de490f09535428c356d7bd
I0423 22:06:52.732228 1 recorder.go:75] Recording config/mutatingwebhookconfigurations/sre-service-mutation with fingerprint=f451b0bfd30efe9db09182d628cbed533b46249a962ead62995a9c02e37a534d
I0423 22:06:52.732244 1 gather.go:177] gatherer "clusterconfig" function "mutating_webhook_configurations" took 63.43947ms to process 3 records
I0423 22:06:52.732308 1 tasks_processing.go:74] worker 6 stopped.
I0423 22:06:52.732330 1 recorder.go:75] Recording config/schedulers/cluster with fingerprint=dee06b7f04ab297f3dacbe252d2fd59205c374729b2a139a0070f55a8582d690
I0423 22:06:52.732338 1 gather.go:177] gatherer "clusterconfig" function "schedulers" took 62.628645ms to process 1 records
I0423 22:06:52.732419 1 tasks_processing.go:74] worker 62 stopped.
I0423 22:06:52.732635 1 recorder.go:75] Recording config/node/ip-10-0-0-43.ec2.internal with fingerprint=51632f7ccfb268f88a9f293796eb7288a2e676875b55749601e1289023d87a6d
I0423 22:06:52.732691 1 recorder.go:75] Recording config/node/ip-10-0-1-250.ec2.internal with fingerprint=6a59be192315619a6df687c24de7e9c30ad39e338600dbfdeee5b6f19d2f0b28
I0423 22:06:52.732742 1 recorder.go:75] Recording config/node/ip-10-0-2-252.ec2.internal with fingerprint=94f5cf292845df4b52f28b3b3eaa0de2caad811430e3f0b8e2541633e0e4948f
I0423 22:06:52.732750 1 gather.go:177] gatherer "clusterconfig" function "nodes" took 65.403866ms to process 3 records
I0423 22:06:52.732825 1 tasks_processing.go:74] worker 51 stopped.
I0423 22:06:52.732837 1 recorder.go:75] Recording config/validatingwebhookconfigurations/multus.openshift.io with fingerprint=1801e8be2b88dcc42e5b1a01d2a60e07b11cf789bce9edc756afd124534ccbe0
I0423 22:06:52.732944 1 recorder.go:75] Recording config/validatingwebhookconfigurations/network-node-identity.openshift.io with fingerprint=6a38e743ae9fdc920c9cab8d665348f04e0f5a1e81fb81381c163aa2d641feba
I0423 22:06:52.732967 1 recorder.go:75] Recording config/validatingwebhookconfigurations/performance-addon-operator with fingerprint=2f60e06dba8b07db58ae9b60491291b0aef0a03ce591eea652ff1e664ba0fdbb
I0423 22:06:52.732990 1 recorder.go:75] Recording config/validatingwebhookconfigurations/sre-clusterrolebindings-validation with fingerprint=e68acdbb4a6c3fd7ea533a10ca42de10968cc49fe9b884c6e3bd108c00769e61
I0423 22:06:52.733014 1 recorder.go:75] Recording config/validatingwebhookconfigurations/sre-clusterroles-validation with fingerprint=a876989faaad90d1cbd999e94f3902119ee4677f15a92e7a579655088989432a
I0423 22:06:52.733051 1 recorder.go:75] Recording config/validatingwebhookconfigurations/sre-ingress-config-validation with fingerprint=9991abc7a2955e82728ee7fdcd8e130933ab4ce73ec841572b3483d8d5bb64d3
I0423 22:06:52.733082 1 recorder.go:75] Recording config/validatingwebhookconfigurations/sre-network-operator-validation with fingerprint=35d475dd0587b44b925ce9d7bdac5af3ce20b376456600435d4f72eb7fab075f
I0423 22:06:52.733115 1 recorder.go:75] Recording config/validatingwebhookconfigurations/sre-regular-user-validation with fingerprint=e211a3f46eac38078fa56b98ae5aa5ceeea329a350fc522d7fe53fc5dac87966
I0423 22:06:52.733138 1 recorder.go:75] Recording config/validatingwebhookconfigurations/sre-scc-validation with fingerprint=6bac769d5743be911efce5cf5823d6fe6ee7cf1adf8002447c221f916350ba24
I0423 22:06:52.733162 1 recorder.go:75] Recording config/validatingwebhookconfigurations/sre-serviceaccount-validation with fingerprint=54e9a02593b78dd947bf51a4f3faa1f8ca0ccd413cd66a508b67cf079ca31a1d
I0423 22:06:52.733190 1 recorder.go:75] Recording config/validatingwebhookconfigurations/sre-techpreviewnoupgrade-validation with fingerprint=e4dbb2ba8c4cb52cbfb84a4c487019d58d22f59efe29914240a9cf9b02c38ea8
I0423 22:06:52.733197 1 gather.go:177] gatherer "clusterconfig" function "validating_webhook_configurations" took 66.371214ms to process 11 records
I0423 22:06:52.733282 1 tasks_processing.go:74] worker 55 stopped.
I0423 22:06:52.733309 1 recorder.go:75] Recording config/apiserver with fingerprint=82e182191a2fda153412ffe4246177fca900a6bc266e58f098e24d93a1a09aa0
I0423 22:06:52.733318 1 gather.go:177] gatherer "clusterconfig" function "cluster_apiserver" took 64.904451ms to process 1 records
I0423 22:06:52.733581 1 tasks_processing.go:74] worker 58 stopped.
I0423 22:06:52.733597 1 gather.go:177] gatherer "clusterconfig" function "certificate_signing_requests" took 66.495441ms to process 0 records
I0423 22:06:52.736932 1 tasks_processing.go:74] worker 10 stopped.
I0423 22:06:52.737030 1 recorder.go:75] Recording config/clusteroperator/imageregistry.operator.openshift.io/imagepruner/cluster with fingerprint=6fcc970a4cba38608a703efb32ade4153275c0df61fd07b6329680736200905f
I0423 22:06:52.737049 1 gather.go:177] gatherer "clusterconfig" function "image_pruners" took 69.464556ms to process 1 records
I0423 22:06:52.737204 1 tasks_processing.go:74] worker 22 stopped.
I0423 22:06:52.737298 1 recorder.go:75] Recording config/authentication with fingerprint=9a7c9cb5bc1cdc414d083a78d5689a4e13c201827769a631c59e996711adcc90
I0423 22:06:52.737314 1 gather.go:177] gatherer "clusterconfig" function "authentication" took 69.885547ms to process 1 records
I0423 22:06:52.737409 1 tasks_processing.go:74] worker 48 stopped.
E0423 22:06:52.737419 1 gather.go:140] gatherer "clusterconfig" function "support_secret" failed with the error: secrets "support" not found
I0423 22:06:52.737424 1 gather.go:177] gatherer "clusterconfig" function "support_secret" took 70.019609ms to process 0 records
I0423 22:06:52.738127 1 tasks_processing.go:74] worker 18 stopped.
I0423 22:06:52.738141 1 gather.go:177] gatherer "clusterconfig" function "aggregated_monitoring_cr_names" took 71.66308ms to process 0 records
I0423 22:06:52.739269 1 tasks_processing.go:74] worker 23 stopped.
I0423 22:06:52.739421 1 sca.go:136] Pulling SCA certificates from https://api.openshift.com/api/accounts_mgmt/v1/entitlement_certificates. Next check is in 8h0m0s
I0423 22:06:52.739494 1 cluster_transfer.go:83] checking the availability of cluster transfer. Next check is in 12h0m0s
W0423 22:06:52.739575 1 operator.go:288] started
I0423 22:06:52.739607 1 base_controller.go:76] Waiting for caches to sync for LoggingSyncer
I0423 22:06:52.739850 1 recorder.go:75] Recording config/crd/volumesnapshots.snapshot.storage.k8s.io with fingerprint=a6355b4a9d7146c579df3e45113ae796040ea62940dd4dfdd01adc8272f83d50
I0423 22:06:52.740091 1 recorder.go:75] Recording config/crd/volumesnapshotcontents.snapshot.storage.k8s.io with fingerprint=96c2d9979db5ff34f6c2f815b03c50c0e6eebcac492910b35a06848904e8ad6f
I0423 22:06:52.740101 1 gather.go:177] gatherer "clusterconfig" function "crds" took 72.951631ms to process 2 records
I0423 22:06:52.748991 1 tasks_processing.go:74] worker 12 stopped.
I0423 22:06:52.749097 1 recorder.go:75] Recording config/storage/storageclasses/gp2-csi with fingerprint=227f900d67ac67bcaee867adb916a356f6b1b450558311b7de8f3b8d5c9d149c
I0423 22:06:52.749116 1 recorder.go:75] Recording config/storage/storageclasses/gp3-csi with fingerprint=1c39337d3ffe13e9c6e573733d736c35cc70736ea841136d874a67702b3235fb
I0423 22:06:52.749122 1 gather.go:177] gatherer "clusterconfig" function "storage_classes" took 83.242793ms to process 2 records
I0423 22:06:52.749209 1 tasks_processing.go:74] worker 49 stopped.
I0423 22:06:52.749271 1 recorder.go:75] Recording cluster-scoped-resources/rbac.authorization.k8s.io/clusterroles/admin with fingerprint=7824931f53c0fdd24d16686996e66302108c27d75714847fae46e90f2f912406
I0423 22:06:52.749354 1 recorder.go:75] Recording cluster-scoped-resources/rbac.authorization.k8s.io/clusterroles/edit with fingerprint=6b6c28c8c9b80747ebe1680398146bb7306e6821ea3693ad3a339b24e66ee014
I0423 22:06:52.749362 1 gather.go:177] gatherer "clusterconfig" function "clusterroles" took 83.283384ms to process 2 records
I0423 22:06:52.749410 1 tasks_processing.go:74] worker 34 stopped.
I0423 22:06:52.749484 1 recorder.go:75] Recording config/olm_operators with fingerprint=597d5d75f2b5f7ba247cd152d36d83cacc00824e1c956197e0f6023ba9efb604
I0423 22:06:52.749496 1 gather.go:177] gatherer "clusterconfig" function "olm_operators" took 83.096208ms to process 1 records
I0423 22:06:52.750552 1 tasks_processing.go:74] worker 43 stopped.
I0423 22:06:52.752699 1 recorder.go:75] Recording config/pod/openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-0-43.ec2.internal with fingerprint=75e4a7b468334561f89a2de0bf6ff77ff59d0315c387cd73680941455ecea7a7
I0423 22:06:52.752921 1 recorder.go:75] Recording config/pod/openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-1-250.ec2.internal with fingerprint=14fa1f26435e451e09a23b62ca692164ab96c3693528b0c44b53bde198f3d436
I0423 22:06:52.753100 1 recorder.go:75] Recording config/pod/openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-2-252.ec2.internal with fingerprint=efec346fb508d32cab8feea553b8f85905d0cc868e2b5fab3912fdc8aa9fe674
I0423 22:06:52.753235 1 recorder.go:75] Recording config/running_containers with fingerprint=3928d5e36cfa75a5a387a8cf283daf4a749349ae87c3dc5afa520d6ed78150f6
I0423 22:06:52.753269 1 gather.go:177] gatherer "clusterconfig" function "container_images" took 83.857106ms to process 4 records
I0423 22:06:52.754388 1 controller.go:212] Source periodic-clusterconfig *controllerstatus.Simple is not ready
I0423 22:06:52.754402 1 controller.go:212] Source periodic-conditional *controllerstatus.Simple is not ready
I0423 22:06:52.754406 1 controller.go:212] Source periodic-workloads *controllerstatus.Simple is not ready
I0423 22:06:52.754410 1 controller.go:212] Source scaController *sca.Controller is not ready
I0423 22:06:52.754413 1 controller.go:212] Source clusterTransferController *clustertransfer.Controller is not ready
I0423 22:06:52.754430 1 controller.go:489] The operator is still being initialized
I0423 22:06:52.754437 1 controller.go:512] The operator is healthy
W0423 22:06:52.754935 1 gather_dvo_metrics.go:210] Failed to read the DVO metrics. Trying again.
I0423 22:06:52.757389 1 tasks_processing.go:74] worker 39 stopped.
I0423 22:06:52.757402 1 gather.go:177] gatherer "clusterconfig" function "number_of_pods_and_netnamespaces_with_sdn_annotations" took 90.001376ms to process 0 records
I0423 22:06:52.759490 1 prometheus_rules.go:88] Prometheus rules successfully created
E0423 22:06:52.761976 1 cluster_transfer.go:95] failed to pull cluster transfer: unable to retrieve cluster transfer data from https://api.openshift.com/api/accounts_mgmt/v1/cluster_transfers/: Get "https://api.openshift.com/api/accounts_mgmt/v1/cluster_transfers/?search=cluster_uuid+is+%27788f6198-240d-4d54-a3dc-6491671ba355%27+and+status+is+%27accepted%27": dial tcp: lookup api.openshift.com on 172.30.0.10:53: read udp 10.128.0.13:35759->172.30.0.10:53: read: connection refused
I0423 22:06:52.761988 1 controllerstatus.go:80] name=clusterTransferController healthy=true reason=Disconnected message=failed to pull cluster transfer: unable to retrieve cluster transfer data from https://api.openshift.com/api/accounts_mgmt/v1/cluster_transfers/: Get "https://api.openshift.com/api/accounts_mgmt/v1/cluster_transfers/?search=cluster_uuid+is+%27788f6198-240d-4d54-a3dc-6491671ba355%27+and+status+is+%27accepted%27": dial tcp: lookup api.openshift.com on 172.30.0.10:53: read udp 10.128.0.13:35759->172.30.0.10:53: read: connection refused
I0423 22:06:52.769258 1 tasks_processing.go:74] worker 26 stopped.
E0423 22:06:52.769272 1 gather.go:140] gatherer "clusterconfig" function "config_maps" failed with the error: configmaps "cluster-monitoring-config" not found
E0423 22:06:52.769278 1 gather.go:140] gatherer "clusterconfig" function "config_maps" failed with the error: configmaps "gateway-mode-config" not found
E0423 22:06:52.769282 1 gather.go:140] gatherer "clusterconfig" function "config_maps" failed with the error: configmaps "insights-config" not found
I0423 22:06:52.769291 1 recorder.go:75] Recording config/configmaps/openshift-config/installer-images/images.json with fingerprint=26b6661162b099a0f5a279859b4f46c867929a79d9a4a41fde4be4e6fe138018
I0423 22:06:52.769317 1 recorder.go:75] Recording config/configmaps/openshift-config/kube-root-ca.crt/ca.crt with fingerprint=d476c7d3f5b104863f08f481b1264dcc68cc272ecefb0ecb709b18a6afab034d
I0423 22:06:52.769322 1 recorder.go:75] Recording config/configmaps/openshift-config/openshift-install/invoker with fingerprint=76b482f683cd3ef9da02debac5b26080a5aeb06ff768ee5c21117514dff29d8a
I0423 22:06:52.769326 1 recorder.go:75] Recording config/configmaps/openshift-config/openshift-install/version with fingerprint=c93090eb0d2a4736885abeb79c91680cfd01fda46464f83456b085d4dc8239f0
I0423 22:06:52.769329 1 recorder.go:75] Recording config/configmaps/openshift-config/openshift-service-ca.crt/service-ca.crt with fingerprint=e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
I0423 22:06:52.769378 1 recorder.go:75] Recording config/configmaps/openshift-config/rosa-brand-logo/rosa-brand-logo.svg with fingerprint=6ed8ca4dd7a8eee7249182bc006e9649ce84d76c551ddfaaa33e55d8c4cc1ed0
I0423 22:06:52.769386 1 recorder.go:75] Recording config/configmaps/kube-system/cluster-config-v1/install-config with fingerprint=ab3811c6b83fd7b8e920094cfa3080d1b4ee3c35ec4c8379437b21d27bd6608d
I0423 22:06:52.769391 1 gather.go:177] gatherer "clusterconfig" function "config_maps" took 103.511414ms to process 7 records
I0423 22:06:52.782149 1 tasks_processing.go:74] worker 21 stopped.
E0423 22:06:52.782162 1 gather.go:140] gatherer "clusterconfig" function "ingress_certificates" failed with the error: failed to get secret 'router-certs-default' in namespace 'openshift-ingress': secrets "router-certs-default" not found
E0423 22:06:52.782167 1 gather.go:140] gatherer "clusterconfig" function "ingress_certificates" failed with the error: failed to get secret '2ps458q9kb0u3rgtnds4j5qiugl2ol78-primary-cert-bundle-secret' in namespace 'openshift-ingress-operator': secrets "2ps458q9kb0u3rgtnds4j5qiugl2ol78-primary-cert-bundle-secret" not found
I0423 22:06:52.782213 1 recorder.go:75] Recording aggregated/ingress_controllers_certs with fingerprint=4c49ce910a8644733bc709abe7d81b2ee7799096cd404d5103ff3b4ec6118eb3
I0423 22:06:52.782225 1 gather.go:177] gatherer "clusterconfig" function "ingress_certificates" took 115.980473ms to process 1 records
I0423 22:06:52.802098 1 tasks_processing.go:74] worker 46 stopped.
I0423 22:06:52.802363 1 recorder.go:75] Recording config/version with fingerprint=79bd39cf88ea9255b3eeaf2e63555f88b64324d13de72b2e401d7525c6887910
I0423 22:06:52.802376 1 recorder.go:75] Recording config/id with fingerprint=49612072e403fce959d9fa4e9aed8636794268f9a2eaa4b60acace06124e12fd
I0423 22:06:52.802383 1 gather.go:177] gatherer "clusterconfig" function "version" took 135.975466ms to process 2 records
I0423 22:06:52.810299 1 tasks_processing.go:74] worker 29 stopped.
I0423 22:06:52.810325 1 recorder.go:75] Recording aggregated/unused_machine_configs_count with fingerprint=4bfc9fa984e5dfcd45848faaf05269de7619bf42edf9f781751af5ee05c1a499
I0423 22:06:52.810333 1 gather.go:177] gatherer "clusterconfig" function "machine_configs" took 144.547666ms to process 1 records
I0423 22:06:52.820413 1 requests.go:205] Asking for SCA certificate with "{"arch": ["x86_64"]}" payload
W0423 22:06:52.825243 1 sca.go:161] Failed to pull SCA certs from https://api.openshift.com/api/accounts_mgmt/v1/entitlement_certificates: unable to retrieve SCA certs data from https://api.openshift.com/api/accounts_mgmt/v1/entitlement_certificates: Post "https://api.openshift.com/api/accounts_mgmt/v1/entitlement_certificates": dial tcp: lookup api.openshift.com on 172.30.0.10:53: read udp 10.128.0.13:42215->172.30.0.10:53: read: connection refused
I0423 22:06:52.825258 1 controllerstatus.go:80] name=scaController healthy=true reason=NonHTTPError message=Failed to pull SCA certs from https://api.openshift.com/api/accounts_mgmt/v1/entitlement_certificates: unable to retrieve SCA certs data from https://api.openshift.com/api/accounts_mgmt/v1/entitlement_certificates: Post "https://api.openshift.com/api/accounts_mgmt/v1/entitlement_certificates": dial tcp: lookup api.openshift.com on 172.30.0.10:53: read udp 10.128.0.13:42215->172.30.0.10:53: read: connection refused
I0423 22:06:52.839701 1 base_controller.go:82] Caches are synced for LoggingSyncer
I0423 22:06:52.839715 1 base_controller.go:119] Starting #1 worker of LoggingSyncer controller ...
I0423 22:06:53.139193 1 gather_cluster_operator_pods_and_events.go:121] Found 18 pods with 21 containers
I0423 22:06:53.139209 1 gather_cluster_operator_pods_and_events.go:235] Maximum buffer size: 1198372 bytes
I0423 22:06:53.139765 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for dns container dns-default-55t8s pod in namespace openshift-dns (previous: false).
I0423 22:06:53.408831 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for dns-default-55t8s pod in namespace openshift-dns for failing operator dns (previous: false): "container \"dns\" in pod \"dns-default-55t8s\" is waiting to start: ContainerCreating"
I0423 22:06:53.408850 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"dns\" in pod \"dns-default-55t8s\" is waiting to start: ContainerCreating"
I0423 22:06:53.408872 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for kube-rbac-proxy container dns-default-55t8s pod in namespace openshift-dns (previous: false).
I0423 22:06:53.544033 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for dns-default-55t8s pod in namespace openshift-dns for failing operator kube-rbac-proxy (previous: false): "container \"kube-rbac-proxy\" in pod \"dns-default-55t8s\" is waiting to start: ContainerCreating"
I0423 22:06:53.544050 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"kube-rbac-proxy\" in pod \"dns-default-55t8s\" is waiting to start: ContainerCreating"
I0423 22:06:53.544080 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for dns container dns-default-7bjnh pod in namespace openshift-dns (previous: false).
W0423 22:06:53.753109 1 gather_dvo_metrics.go:210] Failed to read the DVO metrics. Trying again.
I0423 22:06:53.764205 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for dns-default-7bjnh pod in namespace openshift-dns for failing operator dns (previous: false): "container \"dns\" in pod \"dns-default-7bjnh\" is waiting to start: ContainerCreating"
I0423 22:06:53.764220 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"dns\" in pod \"dns-default-7bjnh\" is waiting to start: ContainerCreating"
I0423 22:06:53.764228 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for kube-rbac-proxy container dns-default-7bjnh pod in namespace openshift-dns (previous: false).
I0423 22:06:53.945275 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for dns-default-7bjnh pod in namespace openshift-dns for failing operator kube-rbac-proxy (previous: false): "container \"kube-rbac-proxy\" in pod \"dns-default-7bjnh\" is waiting to start: ContainerCreating"
I0423 22:06:53.945294 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"kube-rbac-proxy\" in pod \"dns-default-7bjnh\" is waiting to start: ContainerCreating"
I0423 22:06:53.945332 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for dns container dns-default-lvpwr pod in namespace openshift-dns (previous: false).
I0423 22:06:54.160908 1 gather_cluster_operators.go:184] Unable to get configs.samples.operator.openshift.io resource due to: configs.samples.operator.openshift.io "cluster" not found
I0423 22:06:54.168768 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for dns-default-lvpwr pod in namespace openshift-dns for failing operator dns (previous: false): "container \"dns\" in pod \"dns-default-lvpwr\" is waiting to start: ContainerCreating"
I0423 22:06:54.168784 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"dns\" in pod \"dns-default-lvpwr\" is waiting to start: ContainerCreating"
I0423 22:06:54.168796 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for kube-rbac-proxy container dns-default-lvpwr pod in namespace openshift-dns (previous: false).
I0423 22:06:54.369545 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for dns-default-lvpwr pod in namespace openshift-dns for failing operator kube-rbac-proxy (previous: false): "container \"kube-rbac-proxy\" in pod \"dns-default-lvpwr\" is waiting to start: ContainerCreating"
I0423 22:06:54.369563 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"kube-rbac-proxy\" in pod \"dns-default-lvpwr\" is waiting to start: ContainerCreating"
I0423 22:06:54.369577 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for dns-node-resolver container node-resolver-24qqd pod in namespace openshift-dns (previous: false).
I0423 22:06:54.563138 1 gather_cluster_operator_pods_and_events.go:280] Error: "log buffer is empty"
I0423 22:06:54.563156 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for dns-node-resolver container node-resolver-6jtf4 pod in namespace openshift-dns (previous: false).
I0423 22:06:54.746794 1 gather_cluster_operator_pods_and_events.go:280] Error: "log buffer is empty"
I0423 22:06:54.746814 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for dns-node-resolver container node-resolver-sd48r pod in namespace openshift-dns (previous: false).
W0423 22:06:54.753386 1 gather_dvo_metrics.go:210] Failed to read the DVO metrics. Trying again.
I0423 22:06:54.763355 1 tasks_processing.go:74] worker 38 stopped.
I0423 22:06:54.763418 1 recorder.go:75] Recording config/clusteroperator/console with fingerprint=abb84692327afbb5bd486f0584ae822c12edc5f101885453ee1e7bdc5a8026b3
I0423 22:06:54.763455 1 recorder.go:75] Recording config/clusteroperator/csi-snapshot-controller with fingerprint=18e12be9c545b6f16b022747f6d16e806d11cb51342e1d870e79442d8956a126
I0423 22:06:54.763504 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/csisnapshotcontroller/cluster with fingerprint=5adc514f4b63e2f1ecc68bf6f9c0af70c5eea04522a49524e102721b1c41f80e
I0423 22:06:54.763532 1 recorder.go:75] Recording config/clusteroperator/dns with fingerprint=cb01ba18a6fd0899c6aaeab31d367be470098c1329923fa1f5437a13a6b16fce
I0423 22:06:54.763549 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/dns/default with fingerprint=9e7b4ce029030d3d8c3b49af92c556acdcc415000b40d3f969dbdc42c432b47f
I0423 22:06:54.763569 1 recorder.go:75] Recording config/clusteroperator/image-registry with fingerprint=08d6f4ec5c3f547f495e61461c55f72fb267e0d5dc4de57bc802030b69f0bcdb
I0423 22:06:54.763598 1 recorder.go:75] Recording config/clusteroperator/ingress with fingerprint=ed6a043b1c2567f0bb52aaa24b0be5a66035babc70eb0f5569786c303d4b8438
I0423 22:06:54.763622 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/ingresscontroller/openshift-ingress-operator/default with fingerprint=5b1b431850173912964dc799ceb2c0e65cce91477a4aa8138d08aefd5b287719
I0423 22:06:54.763636 1 recorder.go:75] Recording config/clusteroperator/insights with fingerprint=94e547a020d09e21d88e168d70bf28b7b1a4acc19af302442f84271d108594c5
I0423 22:06:54.763653 1 recorder.go:75] Recording config/clusteroperator/kube-apiserver with fingerprint=0e089c90b3f7a91411d58bcae8e99a449a9bad9e2b7d88ee2976e000c2cd0ad3
I0423 22:06:54.763662 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/kubeapiserver/cluster with fingerprint=51503bf0b784fcf65ea46bcaf1f72ac1a5c4d5dc211934f18f27871efed05762
I0423 22:06:54.763677 1 recorder.go:75] Recording config/clusteroperator/kube-controller-manager with fingerprint=96481ae85cfef63861491b3635315f2f9873a4d79f771c10f8e4b8581debcb24
I0423 22:06:54.763688 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/kubecontrollermanager/cluster with fingerprint=ce90c0d4f367d7da085074268031798382ae7c54fdcb0a21f15a4818fe308c11
I0423 22:06:54.763705 1 recorder.go:75] Recording config/clusteroperator/kube-scheduler with fingerprint=dcbed00d3ed3ca83c9f7c3f0f6a50a674c353072dd46b05a98f5e4dfe6adf95c
I0423 22:06:54.763715 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/kubescheduler/cluster with fingerprint=f2940fb9fd20c19951dfc295eb363b7fba0c505f5ae61f01967a063099e6b60a
I0423 22:06:54.763727 1 recorder.go:75] Recording config/clusteroperator/kube-storage-version-migrator with fingerprint=a2662ec2eedce73aee5910d898c22df020e59f8bbccee28ef2d448c7e6de389b
I0423 22:06:54.763735 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/kubestorageversionmigrator/cluster with fingerprint=9351181aa7e6ada41ef581ab31e13516c6b934cc95710154bafb2eb222cb58db
I0423 22:06:54.763750 1 recorder.go:75] Recording config/clusteroperator/monitoring with fingerprint=ab01b246039f5ee6a1a37b840e212d7da3b25a125606dbece396f87d2f7aa39e
I0423 22:06:54.763881 1 recorder.go:75] Recording config/clusteroperator/network with fingerprint=cb94b96b7529d19c5eea8311b817a4411421b668041eda422e5c09d40d71df5d
I0423 22:06:54.763892 1 recorder.go:75] Recording config/clusteroperator/network.operator.openshift.io/operatorpki/openshift-ovn-kubernetes/ovn with fingerprint=626a89d20e0deaed5b6dfb533acfe65f4bb1618bd200a703b62e60c5d16d94ab
I0423 22:06:54.763900 1 recorder.go:75] Recording config/clusteroperator/network.operator.openshift.io/operatorpki/openshift-ovn-kubernetes/signer with fingerprint=90410b16914712b85b3c4578716ad8c0ae072e688f4cd1e022bf76f20da3506d
I0423 22:06:54.763921 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/network/cluster with fingerprint=a93d15eaecb455a0e40ecb2826eeecc1533899204ddd3c3921d15ab70af7ae75
I0423 22:06:54.763943 1 recorder.go:75] Recording config/clusteroperator/node-tuning with fingerprint=c5e06851262186688df9296884f0b70e27c58d4da0b8f4011d66757e3ff45cb8
I0423 22:06:54.763967 1 recorder.go:75] Recording config/clusteroperator/openshift-apiserver with fingerprint=2eaffb80da754a41834f2b025c9ec5159ae162860b99faf9622115e7169e1ea2
I0423 22:06:54.763977 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/openshiftapiserver/cluster with fingerprint=e712e6cf27339b441e4ed1f4cde91dbde7e952698ba93407e4457db63a4a4c76
I0423 22:06:54.763991 1 recorder.go:75] Recording config/clusteroperator/openshift-controller-manager with fingerprint=2ab37da26ab7af42d9cf72c83a93bde0aabaaceea2243f285307f0df29eba554
I0423 22:06:54.764001 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/openshiftcontrollermanager/cluster with fingerprint=d71a0f4672f9b45d9fc8293bf1687afc650fd28d32e2e30de27523fe7b4eadf7
I0423 22:06:54.764012 1 recorder.go:75] Recording config/clusteroperator/openshift-samples with fingerprint=82aca9aaf772903638466ce0f1357be592565a3afcf2a4e242a8b106dc2d3b27
I0423 22:06:54.764026 1 recorder.go:75] Recording config/clusteroperator/operator-lifecycle-manager with fingerprint=dacba8f18b0ed3af7f2e50d0b3a9d1f57888765746ff5857941c2b9aa9fbe1b9
I0423 22:06:54.764043 1 recorder.go:75] Recording config/clusteroperator/operator-lifecycle-manager-catalog with fingerprint=664d909236f18851165ce6c0d1c19e10611af16596faad5a62580a395113b114
I0423 22:06:54.764058 1 recorder.go:75] Recording config/clusteroperator/operator-lifecycle-manager-packageserver with fingerprint=cdc8bf99bac9ebf14da0a11b6121495cc41771fb5368b5c8205a43af092e1d22
I0423 22:06:54.764091 1 recorder.go:75] Recording config/clusteroperator/service-ca with fingerprint=68a97221adde71921af83c405081b2b833192e7d50ccf9313e6765944b2ff57f
I0423 22:06:54.764100 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/serviceca/cluster with fingerprint=812f7edc2cdb30e61e7f2b29454357a40b1a507a4b0c2b7729193b67f0e3b4aa
I0423 22:06:54.764124 1 recorder.go:75] Recording config/clusteroperator/storage with fingerprint=434a54a2aa032de7fc8fe2abc49f99a747f966dfb6c9683897b35fa8efbb0671
I0423 22:06:54.764141 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/clustercsidriver/ebs.csi.aws.com with fingerprint=510064d6f6bcced87ab5bd2ddaff3d0edd7f93f4a4f7af2641f29fc53ffab21e
I0423 22:06:54.764149 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/storage/cluster with fingerprint=8e480f8c1ce1b39baac42d8ec780c57c2592929ae0c801b61ffad49ba13f33ad
I0423 22:06:54.764157 1 gather.go:177] gatherer "clusterconfig" function "operators" took 2.09685894s to process 36 records
I0423 22:06:54.943619 1 gather_cluster_operator_pods_and_events.go:280] Error: "log buffer is empty"
I0423 22:06:54.943678 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for registry container image-registry-75d8fff8f7-8pf6f pod in namespace openshift-image-registry (previous: false).
I0423 22:06:55.144210 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for image-registry-75d8fff8f7-8pf6f pod in namespace openshift-image-registry for failing operator registry (previous: false): "container \"registry\" in pod \"image-registry-75d8fff8f7-8pf6f\" is waiting to start: ContainerCreating"
I0423 22:06:55.144228 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"registry\" in pod \"image-registry-75d8fff8f7-8pf6f\" is waiting to start: ContainerCreating"
I0423 22:06:55.144274 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for registry container image-registry-75d8fff8f7-f5mwb pod in namespace openshift-image-registry (previous: false).
I0423 22:06:55.344116 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for image-registry-75d8fff8f7-f5mwb pod in namespace openshift-image-registry for failing operator registry (previous: false): "container \"registry\" in pod \"image-registry-75d8fff8f7-f5mwb\" is waiting to start: ContainerCreating"
I0423 22:06:55.344133 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"registry\" in pod \"image-registry-75d8fff8f7-f5mwb\" is waiting to start: ContainerCreating"
I0423 22:06:55.344201 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for registry container image-registry-b997f8d7d-9c87k pod in namespace openshift-image-registry (previous: false).
I0423 22:06:55.544748 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for image-registry-b997f8d7d-9c87k pod in namespace openshift-image-registry for failing operator registry (previous: false): "container \"registry\" in pod \"image-registry-b997f8d7d-9c87k\" is waiting to start: ContainerCreating"
I0423 22:06:55.544766 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"registry\" in pod \"image-registry-b997f8d7d-9c87k\" is waiting to start: ContainerCreating"
I0423 22:06:55.544779 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for node-ca container node-ca-74r5q pod in namespace openshift-image-registry (previous: false).
I0423 22:06:55.743709 1 gather_cluster_operator_pods_and_events.go:280] Error: "log buffer is empty"
I0423 22:06:55.743727 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for node-ca container node-ca-v764s pod in namespace openshift-image-registry (previous: false).
W0423 22:06:55.753080 1 gather_dvo_metrics.go:210] Failed to read the DVO metrics. Trying again.
I0423 22:06:55.944020 1 gather_cluster_operator_pods_and_events.go:280] Error: "log buffer is empty"
I0423 22:06:55.944042 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for node-ca container node-ca-wgtrc pod in namespace openshift-image-registry (previous: false).
I0423 22:06:56.145037 1 gather_cluster_operator_pods_and_events.go:280] Error: "log buffer is empty"
I0423 22:06:56.145068 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for router container router-default-57555f54ff-nzn77 pod in namespace openshift-ingress (previous: false).
I0423 22:06:56.343967 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for router-default-57555f54ff-nzn77 pod in namespace openshift-ingress for failing operator router (previous: false): "container \"router\" in pod \"router-default-57555f54ff-nzn77\" is waiting to start: ContainerCreating"
I0423 22:06:56.343984 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"router\" in pod \"router-default-57555f54ff-nzn77\" is waiting to start: ContainerCreating"
I0423 22:06:56.343994 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for router container router-default-848ccd559f-nf8v2 pod in namespace openshift-ingress (previous: false).
I0423 22:06:56.546282 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for router-default-848ccd559f-nf8v2 pod in namespace openshift-ingress for failing operator router (previous: false): "container \"router\" in pod \"router-default-848ccd559f-nf8v2\" is waiting to start: ContainerCreating"
I0423 22:06:56.546301 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"router\" in pod \"router-default-848ccd559f-nf8v2\" is waiting to start: ContainerCreating"
I0423 22:06:56.546314 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for router container router-default-848ccd559f-spqct pod in namespace openshift-ingress (previous: false).
I0423 22:06:56.746454 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for router-default-848ccd559f-spqct pod in namespace openshift-ingress for failing operator router (previous: false): "container \"router\" in pod \"router-default-848ccd559f-spqct\" is waiting to start: ContainerCreating"
I0423 22:06:56.746478 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"router\" in pod \"router-default-848ccd559f-spqct\" is waiting to start: ContainerCreating"
I0423 22:06:56.746506 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for serve-healthcheck-canary container ingress-canary-59ptb pod in namespace openshift-ingress-canary (previous: false).
W0423 22:06:56.752901 1 gather_dvo_metrics.go:210] Failed to read the DVO metrics. Trying again.
I0423 22:06:56.953774 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for ingress-canary-59ptb pod in namespace openshift-ingress-canary for failing operator serve-healthcheck-canary (previous: false): "container \"serve-healthcheck-canary\" in pod \"ingress-canary-59ptb\" is waiting to start: ContainerCreating"
I0423 22:06:56.953794 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"serve-healthcheck-canary\" in pod \"ingress-canary-59ptb\" is waiting to start: ContainerCreating"
I0423 22:06:56.953823 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for serve-healthcheck-canary container ingress-canary-lvkxp pod in namespace openshift-ingress-canary (previous: false).
I0423 22:06:57.147993 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for ingress-canary-lvkxp pod in namespace openshift-ingress-canary for failing operator serve-healthcheck-canary (previous: false): "container \"serve-healthcheck-canary\" in pod \"ingress-canary-lvkxp\" is waiting to start: ContainerCreating"
I0423 22:06:57.148015 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"serve-healthcheck-canary\" in pod \"ingress-canary-lvkxp\" is waiting to start: ContainerCreating"
I0423 22:06:57.148042 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for serve-healthcheck-canary container ingress-canary-njxp6 pod in namespace openshift-ingress-canary (previous: false).
I0423 22:06:57.358918 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for ingress-canary-njxp6 pod in namespace openshift-ingress-canary for failing operator serve-healthcheck-canary (previous: false): "container \"serve-healthcheck-canary\" in pod \"ingress-canary-njxp6\" is waiting to start: ContainerCreating"
I0423 22:06:57.358938 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"serve-healthcheck-canary\" in pod \"ingress-canary-njxp6\" is waiting to start: ContainerCreating"
I0423 22:06:57.358956 1 tasks_processing.go:74] worker 31 stopped.
I0423 22:06:57.359107 1 recorder.go:75] Recording events/openshift-dns with fingerprint=01b0b4af54f2fd83972a3f9d0ada3716258a5985b26bab4507f22d59fb06624b
I0423 22:06:57.359218 1 recorder.go:75] Recording events/openshift-image-registry with fingerprint=6a2e8b015de21f27215843c78f829e288b967cc0628fb7b36e4ffd4b08b2d699
I0423 22:06:57.359246 1 recorder.go:75] Recording events/openshift-ingress-operator with fingerprint=f0d20b61a9419d3cdd22a7d1306f66665148ab6981c8d9ae7cd62e9334f024d5
I0423 22:06:57.359293 1 recorder.go:75] Recording events/openshift-ingress with fingerprint=2ae51748c7e995427f314b67c2b311587a7e64ecb09fda90cf34bd3899d44ad6
I0423 22:06:57.359309 1 recorder.go:75] Recording events/openshift-ingress-canary with fingerprint=1227f2f6047f5297c583466931eeeae3146d74ef51d2a3ab08af0073541daae9
I0423 22:06:57.359441 1 recorder.go:75] Recording config/pod/openshift-dns/dns-default-55t8s with fingerprint=9c604a86cb9ba16ebc64d7d2c7805ce062071d03d9f3bc341ca0f8bc81527e9c
I0423 22:06:57.359525 1 recorder.go:75] Recording config/pod/openshift-dns/dns-default-7bjnh with fingerprint=0c8409bbbcccfa680d045d3abd9ce5fa73d42975f5434b35df95d8946e544607
I0423 22:06:57.359595 1 recorder.go:75] Recording config/pod/openshift-dns/dns-default-lvpwr with fingerprint=0559a8c31590d84fa042357eb0201e1df11ee1e6784a9489a7f6f82064608329
I0423 22:06:57.359699 1 recorder.go:75] Recording config/pod/openshift-image-registry/image-registry-75d8fff8f7-8pf6f with fingerprint=7c33ab4411b5c222a93880fe771fd915bcd05c7688c7fefe8a2c334e3ff5adeb
I0423 22:06:57.359787 1 recorder.go:75] Recording config/pod/openshift-image-registry/image-registry-75d8fff8f7-f5mwb with fingerprint=c1f36f64dbf7122b3d8dcd035de6850635adcf5eff4d6b3d58dd95e8c5136c95
I0423 22:06:57.359891 1 recorder.go:75] Recording config/pod/openshift-image-registry/image-registry-b997f8d7d-9c87k with fingerprint=90bc882297cac4e0bd09e691876cec42e3d73c30e58fd5d4745c58becadc4b0a
I0423 22:06:57.359949 1 recorder.go:75] Recording config/pod/openshift-ingress-canary/ingress-canary-59ptb with fingerprint=fa18c4be47f9a0946e28e6369c25bbee6a9a3ae34a0d9bc833bcd72aa86f6870
I0423 22:06:57.360002 1 recorder.go:75] Recording config/pod/openshift-ingress-canary/ingress-canary-lvkxp with fingerprint=9498a69ff37e0dc103b30ceb08f5a6db95897f3212e4923efc50db5f0af3d519
I0423 22:06:57.360058 1 recorder.go:75] Recording config/pod/openshift-ingress-canary/ingress-canary-njxp6 with fingerprint=bb0efb280f6df30b21b4fb63e6131c404ee9dc3903973ce7a770ef41b88b311d
I0423 22:06:57.360067 1 gather.go:177] gatherer "clusterconfig" function "operators_pods_and_events" took 4.693195762s to process 14 records
W0423 22:06:57.749914 1 gather_dvo_metrics.go:117] Unable to read metrics from endpoint "http://deployment-validation-operator-metrics.openshift-deployment-validation-operator.svc:8383": DVO metrics service was not available within the 5s timeout: context deadline exceeded
I0423 22:06:57.749942 1 tasks_processing.go:74] worker 52 stopped.
E0423 22:06:57.749955 1 gather.go:140] gatherer "clusterconfig" function "dvo_metrics" failed with the error: DVO metrics service was not available within the 5s timeout: context deadline exceeded
I0423 22:06:57.749966 1 recorder.go:75] Recording config/dvo_metrics with fingerprint=e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
W0423 22:06:57.749982 1 gather.go:155] issue recording gatherer "clusterconfig" function "dvo_metrics" result "config/dvo_metrics" because of the warning: warning: the record with the same fingerprint "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855" was already recorded at path "config/configmaps/openshift-config/openshift-service-ca.crt/service-ca.crt", recording another one with a different path "config/dvo_metrics"
I0423 22:06:57.749993 1 gather.go:177] gatherer "clusterconfig" function "dvo_metrics" took 5.082207302s to process 1 records
I0423 22:07:03.669833 1 configmapobserver.go:84] configmaps "insights-config" not found
I0423 22:07:05.411678 1 tasks_processing.go:74] worker 11 stopped.
I0423 22:07:05.411725 1 recorder.go:75] Recording config/installplans with fingerprint=f17dbfacc3bfddf27ca3b213b39495434cd4c4e9e3dbd69566ffb3845bbcf539
I0423 22:07:05.411737 1 gather.go:177] gatherer "clusterconfig" function "install_plans" took 12.744204593s to process 1 records
I0423 22:07:06.073875 1 tasks_processing.go:74] worker 17 stopped.
I0423 22:07:06.074132 1 recorder.go:75] Recording config/serviceaccounts with fingerprint=5bfe903b1da30878a989600cf95950087bbfed0fc0418c904f1a71b56765f5ff
I0423 22:07:06.074149 1 gather.go:177] gatherer "clusterconfig" function "service_accounts" took 13.406482694s to process 1 records
E0423 22:07:06.074213 1 periodic.go:247] "Unhandled Error" err="clusterconfig failed after 13.408s with: function \"machine_healthchecks\" failed with an error, function \"machines\" failed with an error, function \"pod_network_connectivity_checks\" failed with an error, function \"support_secret\" failed with an error, function \"config_maps\" failed with an error, function \"ingress_certificates\" failed with an error, function \"dvo_metrics\" failed with an error"
I0423 22:07:06.075319 1 controllerstatus.go:89] name=periodic-clusterconfig healthy=false reason=PeriodicGatherFailed message=Source clusterconfig could not be retrieved: function "machine_healthchecks" failed with an error, function "machines" failed with an error, function "pod_network_connectivity_checks" failed with an error, function "support_secret" failed with an error, function "config_maps" failed with an error, function "ingress_certificates" failed with an error, function "dvo_metrics" failed with an error
I0423 22:07:06.075333 1 periodic.go:209] Running workloads gatherer
I0423 22:07:06.075348 1 tasks_processing.go:45] number of workers: 2
I0423 22:07:06.075358 1 tasks_processing.go:69] worker 1 listening for tasks.
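A side note on the dvo_metrics warning above: the duplicated fingerprint e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 is the well-known SHA-256 digest of zero-byte input. Assuming the recorder fingerprints each record with a plain SHA-256 over its payload (an assumption; the log does not state the hash construction), this would mean both the config/dvo_metrics record and the earlier service-ca.crt record were empty when hashed, which is consistent with the startup message that the service-ca bundle did not exist yet. A quick check:

```python
import hashlib

# SHA-256 of empty input; compare with the fingerprint that the recorder
# logged for both config/dvo_metrics and the service-ca.crt record.
# (Assumption: the fingerprint is a plain SHA-256 of the record contents.)
digest = hashlib.sha256(b"").hexdigest()
print(digest)
# e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
```

If that assumption holds, the "same fingerprint was already recorded" warning is just two empty records colliding, not two copies of real data.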
I0423 22:07:06.075363 1 tasks_processing.go:71] worker 1 working on helmchart_info task.
I0423 22:07:06.075373 1 tasks_processing.go:69] worker 0 listening for tasks.
I0423 22:07:06.075391 1 tasks_processing.go:71] worker 0 working on workload_info task.
I0423 22:07:06.100101 1 gather_workloads_info.go:278] Loaded pods in 0s, will wait 22s for image data
I0423 22:07:06.102566 1 tasks_processing.go:74] worker 1 stopped.
I0423 22:07:06.102581 1 gather.go:177] gatherer "workloads" function "helmchart_info" took 27.191948ms to process 0 records
I0423 22:07:06.108487 1 gather_workloads_info.go:387] No image sha256:2904a78e2eb73fd6a9bb94c105c2a056831fb4113fbb7b0607c50adc9d879c9b (9ms)
I0423 22:07:06.116845 1 gather_workloads_info.go:387] No image sha256:04c87c054a3f366a7dfbe0a93ebb0c80a098ee16842c5794b67c1202eec61996 (8ms)
I0423 22:07:06.125409 1 gather_workloads_info.go:387] No image sha256:765f0d23b637f685f98a31bd47c131b03cf72a40761a3f9a9d6320faa3c33733 (9ms)
I0423 22:07:06.133786 1 gather_workloads_info.go:387] No image sha256:36b9e89c3cfcf1ab9ae500486e38afb6862cba48cb0b4d84a09508ab8f3d299f (8ms)
I0423 22:07:06.142578 1 gather_workloads_info.go:387] No image sha256:a498046d64605bcccee2440aa4f04a4602baaae263cf01d977ec5208e876b1fd (9ms)
I0423 22:07:06.150904 1 gather_workloads_info.go:387] No image sha256:4556896f77307821531ef91b7b7faccb82b824ea695693b2989f597f0deca038 (8ms)
I0423 22:07:06.159510 1 gather_workloads_info.go:387] No image sha256:47154813651033d59751fb655a384dbffb64dd26f10bd7f3be0c3128d0486356 (9ms)
I0423 22:07:06.167897 1 gather_workloads_info.go:387] No image sha256:ca1344cb64140188b7cae7bbc51fb751566c0b0c97d5e39b5850e628032c4a5e (8ms)
I0423 22:07:06.177179 1 gather_workloads_info.go:387] No image sha256:7b31223098f08328f5ddea8e5b871dbbd5f5a61ec550e8956f66793c0c6031a9 (9ms)
I0423 22:07:06.185578 1 gather_workloads_info.go:387] No image sha256:7adc1eab05d6724c76ba751f6df816b08d6e70b78dee9eb94fa6fd9690542c98 (8ms)
I0423 22:07:06.196669 1 configmapobserver.go:84] configmaps "insights-config" not found
I0423 22:07:06.208389 1 gather_workloads_info.go:387] No image sha256:289816958633a763a72dbc44e1dad40466223164e7e253039514f0d974ea5d21 (23ms)
I0423 22:07:06.309257 1 gather_workloads_info.go:387] No image sha256:a56211d075aa43cbb491f669a5b2e46ee023dc95b7d51dbac28f463948c5ad61 (101ms)
I0423 22:07:06.396659 1 configmapobserver.go:84] configmaps "insights-config" not found
I0423 22:07:06.411505 1 gather_workloads_info.go:387] No image sha256:a0105d1eb62cf6ac9e5e2ef28d3e89bf6dc514bc594fc7090fe5a5ee18a09c87 (102ms)
I0423 22:07:06.512729 1 gather_workloads_info.go:387] No image sha256:b3909bf664c77097f75b3768830863d642eed3815dab2bfb4415c771ca2d5007 (101ms)
I0423 22:07:06.609999 1 gather_workloads_info.go:387] No image sha256:5f0b67cfbbc381243fb91ccc17345b56d05f4d717c667e8c644e5bf05633ba71 (97ms)
I0423 22:07:06.709927 1 gather_workloads_info.go:387] No image sha256:695cf2f0cc07683c2a3ce1eaf3e56fe18abc6e2bac716f7d9843f5d173b9df52 (100ms)
I0423 22:07:06.811219 1 gather_workloads_info.go:387] No image sha256:c940ea87e7d133d75ba0002ef00c0806825eed3db8094cdb260d1bac18127733 (101ms)
I0423 22:07:06.911070 1 gather_workloads_info.go:387] No image sha256:521712486e2c6e3c020dad6a1cb340db8e55665b69f7c208fab9cd9e965fd588 (100ms)
I0423 22:07:07.009566 1 gather_workloads_info.go:387] No image sha256:5a95c19d82767e0235b4edb4a0536482c816904897aae1dc3eb255cb52b87a9f (98ms)
I0423 22:07:07.109724 1 gather_workloads_info.go:387] No image sha256:0a99240166165eb5718e7516a43282fe32df9c7c5e809b31b58abe44e42ff94d (100ms)
I0423 22:07:07.211801 1 gather_workloads_info.go:387] No image sha256:c15ca0c0ad60fe8757c2d5d1723fcdd7a1ed6c0251a90d22a7e6cae6811d01aa (102ms)
I0423 22:07:07.310464 1 gather_workloads_info.go:387] No image sha256:653c666f842c13e0baae2e89a9b1efe0e2ef56f621ffb5b32005115d2a26ab8c (99ms)
I0423 22:07:07.408910 1 gather_workloads_info.go:387] No image sha256:e84cb128d930bd1ab867cc89b7b7bf2b2c0e41105ab93b5381069945b3ee9c57 (98ms)
I0423 22:07:07.511511 1 gather_workloads_info.go:387] No image sha256:2e57e192c3c1240fd935dcd55c8fde5e70e78bf81d6176c96edf21fafe59f8ba (103ms)
I0423 22:07:07.612113 1 gather_workloads_info.go:387] No image sha256:ce98d5d844bfc2ba8de1893866ad38166c95157d54abd8192b181e819bc50bb5 (101ms)
I0423 22:07:07.709726 1 gather_workloads_info.go:387] No image sha256:56a85660a445eced5c79a595a0eccf590087c5672d50f49d4c25ad52f9a44f04 (98ms)
I0423 22:07:07.809453 1 gather_workloads_info.go:387] No image sha256:a258c226562adb14e3a163a1940938526ee6a0928982a7667d85d9a7334ce639 (100ms)
I0423 22:07:07.909525 1 gather_workloads_info.go:387] No image sha256:1a2532940843248c57d52141185dd71fbc393ab28b65d48f682038632c1dbbad (100ms)
I0423 22:07:08.012611 1 gather_workloads_info.go:387] No image sha256:03cf4cd7ef1518610c6c7b3ad27d1622d82e98e3dc6e3f8e5d0fceb5c8d3786e (103ms)
I0423 22:07:08.110242 1 gather_workloads_info.go:387] No image sha256:2e564f336c77116053f34d4201d364d8da04e789cfffa0ea422574c95f2d6404 (98ms)
I0423 22:07:08.209569 1 gather_workloads_info.go:387] No image sha256:943018739e3db1763c3184b460dbc409e058abbac76d57b9927faad317be85e4 (99ms)
I0423 22:07:08.309433 1 gather_workloads_info.go:387] No image sha256:91828234f107c068c8a4966d08370ae7b73e637651dbc6d92c18c4553402c22c (100ms)
I0423 22:07:08.411845 1 gather_workloads_info.go:387] No image sha256:2598489729a4b258e4ecda4a06f6875133f2a10ced5c5241f8a57a8a05418e36 (102ms)
I0423 22:07:08.411907 1 tasks_processing.go:74] worker 0 stopped.
E0423 22:07:08.411919 1 gather.go:140] gatherer "workloads" function "workload_info" failed with the error: no running pods found for the insights-runtime-extractor statefulset
I0423 22:07:08.412264 1 recorder.go:75] Recording config/workload_info with fingerprint=09ea2ea075f9971257760112f45c7985bfd7f4f8b7bded772b47c9cabd7d7c74
I0423 22:07:08.412284 1 gather.go:177] gatherer "workloads" function "workload_info" took 2.336503898s to process 1 records
E0423 22:07:08.412309 1 periodic.go:247] "Unhandled Error" err="workloads failed after 2.336s with: function \"workload_info\" failed with an error"
I0423 22:07:08.413415 1 controllerstatus.go:89] name=periodic-workloads healthy=false reason=PeriodicGatherFailed message=Source workloads could not be retrieved: function "workload_info" failed with an error
I0423 22:07:08.413429 1 periodic.go:209] Running conditional gatherer
I0423 22:07:08.419297 1 requests.go:294] Making HTTP GET request at: https://console.redhat.com/api/gathering/v2/4.20.8/gathering_rules
I0423 22:07:08.425549 1 conditional_gatherer.go:107] Get "https://console.redhat.com/api/gathering/v2/4.20.8/gathering_rules": dial tcp: lookup console.redhat.com on 172.30.0.10:53: read udp 10.128.0.13:56940->172.30.0.10:53: read: connection refused
E0423 22:07:08.425780 1 conditional_gatherer.go:322] unable to update alerts cache: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory
I0423 22:07:08.425837 1 conditional_gatherer.go:384] updating version cache for conditional gatherer
I0423 22:07:08.431817 1 conditional_gatherer.go:392] cluster version is '4.20.8'
E0423 22:07:08.431831 1 conditional_gatherer.go:209] error checking conditions for a gathering rule: alerts cache is missing
E0423 22:07:08.431836 1 conditional_gatherer.go:209] error checking conditions for a gathering rule: alerts cache is missing
E0423 22:07:08.431839 1 conditional_gatherer.go:209] error checking conditions for a gathering rule: alerts cache is missing
E0423 22:07:08.431842 1 conditional_gatherer.go:209] error checking conditions for a gathering rule: alerts cache is missing
E0423 22:07:08.431845 1 conditional_gatherer.go:209] error checking conditions for a gathering rule: alerts cache is missing
E0423 22:07:08.431848 1 conditional_gatherer.go:209] error checking conditions for a gathering rule: alerts cache is missing
E0423 22:07:08.431851 1 conditional_gatherer.go:209] error checking conditions for a gathering rule: alerts cache is missing
E0423 22:07:08.431854 1 conditional_gatherer.go:209] error checking conditions for a gathering rule: alerts cache is missing
E0423 22:07:08.431871 1 conditional_gatherer.go:209] error checking conditions for a gathering rule: alerts cache is missing
I0423 22:07:08.431886 1 tasks_processing.go:45] number of workers: 3
I0423 22:07:08.431898 1 tasks_processing.go:69] worker 2 listening for tasks.
I0423 22:07:08.431902 1 tasks_processing.go:71] worker 2 working on conditional_gatherer_rules task.
I0423 22:07:08.431900 1 tasks_processing.go:69] worker 0 listening for tasks.
I0423 22:07:08.431915 1 tasks_processing.go:69] worker 1 listening for tasks.
I0423 22:07:08.431918 1 tasks_processing.go:71] worker 0 working on remote_configuration task.
I0423 22:07:08.431924 1 tasks_processing.go:74] worker 1 stopped.
I0423 22:07:08.431938 1 tasks_processing.go:71] worker 2 working on rapid_container_logs task.
I0423 22:07:08.431971 1 recorder.go:75] Recording insights-operator/conditional-gatherer-rules with fingerprint=7034af97e7e41c22e4b775abdd4b9066c8ebb19da33eb7f69f39bfd2eb5f6406
I0423 22:07:08.431983 1 gather.go:177] gatherer "conditional" function "conditional_gatherer_rules" took 880ns to process 1 records
I0423 22:07:08.432013 1 recorder.go:75] Recording insights-operator/remote-configuration with fingerprint=0394430c431eec4d48bb1811a90918e95161d2282c59af26f2473613cc0959db
I0423 22:07:08.432022 1 gather.go:177] gatherer "conditional" function "remote_configuration" took 1.433µs to process 1 records
I0423 22:07:08.432027 1 tasks_processing.go:74] worker 0 stopped.
I0423 22:07:08.432200 1 tasks_processing.go:74] worker 2 stopped.
I0423 22:07:08.432212 1 gather.go:177] gatherer "conditional" function "rapid_container_logs" took 246.679µs to process 0 records
I0423 22:07:08.432232 1 controllerstatus.go:89] name=periodic-conditional healthy=false reason=NotAvailable message=Get "https://console.redhat.com/api/gathering/v2/4.20.8/gathering_rules": dial tcp: lookup console.redhat.com on 172.30.0.10:53: read udp 10.128.0.13:56940->172.30.0.10:53: read: connection refused
I0423 22:07:08.432250 1 recorder.go:75] Recording insights-operator/remote-configuration.json with fingerprint=359de9c990c741675cec72fda96b5c3682221efdb4799f5eaa6e9805bcd3b5c1
I0423 22:07:08.455252 1 recorder.go:75] Recording insights-operator/gathers with fingerprint=c0109ca1653824cb53fc4e8a704c32c953649c15fd9dbf1e117da1f82cd7894e
I0423 22:07:08.455419 1 diskrecorder.go:70] Writing 112 records to /var/lib/insights-operator/insights-2026-04-23-220708.tar.gz
I0423 22:07:08.463126 1 diskrecorder.go:51] Wrote 112 records to disk in 7ms
I0423 22:07:08.463158 1 periodic.go:278] Gathering cluster info every 2h0m0s
I0423 22:07:08.463178 1 periodic.go:279] Configuration is dataReporting: interval: 2h0m0s, uploadEndpoint: https://console.redhat.com/api/ingress/v1/upload, storagePath: /var/lib/insights-operator, downloadEndpoint: https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/reports, conditionalGathererEndpoint: https://console.redhat.com/api/gathering/v2/%s/gathering_rules, obfuscation: [] sca: disabled: false, endpoint: https://api.openshift.com/api/accounts_mgmt/v1/entitlement_certificates, interval: 8h0m0s alerting: disabled: false clusterTransfer: endpoint: https://api.openshift.com/api/accounts_mgmt/v1/cluster_transfers/, interval: 12h0m0s proxy: httpProxy: , httpsProxy: , noProxy:
I0423 22:07:16.174513 1 configmapobserver.go:84] configmaps "insights-config" not found
I0423 22:08:21.804706 1 observer_polling.go:111] Observed file "/var/run/configmaps/service-ca-bundle/service-ca.crt" has been created (hash="d5f82af1d65fbdeaf743493208a562c17fd9d1d2aaf8d5a95dcbdafb80736207")
W0423 22:08:21.804739 1 builder.go:160] Restart triggered because of file /var/run/configmaps/service-ca-bundle/service-ca.crt was created
I0423 22:08:21.804798 1 observer_polling.go:111] Observed file "/var/run/secrets/serving-cert/tls.crt" has been created (hash="5697e88b1f6220d794e53e90e37aed516b13e8050372c3069dc930c0b7e2c75f")
I0423 22:08:21.804817 1 periodic.go:170] Shutting down