W0423 21:00:13.152932 1 cmd.go:257] Using insecure, self-signed certificates
I0423 21:00:13.818116 1 start.go:138] Unable to read service ca bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory
I0423 21:00:13.818427 1 observer_polling.go:159] Starting file observer
I0423 21:00:14.817012 1 operator.go:60] Starting insights-operator v0.0.0-master+$Format:%H$
I0423 21:00:14.817321 1 legacy_config.go:327] Current config: {"report":false,"storagePath":"/var/lib/insights-operator","interval":"2h","endpoint":"https://console.redhat.com/api/ingress/v1/upload","conditionalGathererEndpoint":"https://console.redhat.com/api/gathering/v2/%s/gathering_rules","pull_report":{"endpoint":"https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/reports","delay":"60s","timeout":"3000s","min_retry":"30s"},"impersonate":"system:serviceaccount:openshift-insights:gather","enableGlobalObfuscation":false,"ocm":{"scaEndpoint":"https://api.openshift.com/api/accounts_mgmt/v1/entitlement_certificates","scaInterval":"8h","scaDisabled":false,"clusterTransferEndpoint":"https://api.openshift.com/api/accounts_mgmt/v1/cluster_transfers/","clusterTransferInterval":"12h"},"disableInsightsAlerts":false,"processingStatusEndpoint":"https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/request/%s/status","reportEndpointTechPreview":"https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/request/%s/report"}
I0423 21:00:14.817746 1 secure_serving.go:57] Forcing use of http/1.1 only
W0423 21:00:14.817766 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.
W0423 21:00:14.817770 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.
W0423 21:00:14.817774 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.
W0423 21:00:14.817777 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.
W0423 21:00:14.817779 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.
W0423 21:00:14.817781 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.
I0423 21:00:14.818259 1 simple_featuregate_reader.go:171] Starting feature-gate-detector
I0423 21:00:14.822878 1 operator.go:125] FeatureGates initialized: knownFeatureGates=[AdditionalRoutingCapabilities AdminNetworkPolicy AlibabaPlatform AzureWorkloadIdentity BuildCSIVolumes CPMSMachineNamePrefix ConsolePluginContentSecurityPolicy ExternalOIDC ExternalOIDCWithUIDAndExtraClaimMappings GatewayAPI GatewayAPIController HighlyAvailableArbiter ImageVolume IngressControllerLBSubnetsAWS KMSv1 MachineConfigNodes ManagedBootImages ManagedBootImagesAWS MetricsCollectionProfiles NetworkDiagnosticsConfig NetworkLiveMigration NetworkSegmentation PinnedImages ProcMountType RouteAdvertisements RouteExternalCertificate ServiceAccountTokenNodeBinding SetEIPForNLBIngressController SigstoreImageVerification StoragePerformantSecurityPolicy UpgradeStatus UserNamespacesPodSecurityStandards UserNamespacesSupport VSphereMultiDisk VSphereMultiNetworks AWSClusterHostedDNS AWSClusterHostedDNSInstall AWSDedicatedHosts AWSServiceLBNetworkSecurityGroup AutomatedEtcdBackup AzureClusterHostedDNSInstall AzureDedicatedHosts AzureMultiDisk BootImageSkewEnforcement BootcNodeManagement ClusterAPIInstall ClusterAPIInstallIBMCloud ClusterMonitoringConfig ClusterVersionOperatorConfiguration DNSNameResolver DualReplica DyanmicServiceEndpointIBMCloud DynamicResourceAllocation EtcdBackendQuota EventedPLEG Example Example2 ExternalSnapshotMetadata GCPClusterHostedDNS GCPClusterHostedDNSInstall GCPCustomAPIEndpoints GCPCustomAPIEndpointsInstall ImageModeStatusReporting ImageStreamImportMode IngressControllerDynamicConfigurationManager InsightsConfig InsightsConfigAPI InsightsOnDemandDataGather IrreconcilableMachineConfig KMSEncryptionProvider MachineAPIMigration MachineAPIOperatorDisableMachineHealthCheckController ManagedBootImagesAzure ManagedBootImagesvSphere MaxUnavailableStatefulSet MinimumKubeletVersion MixedCPUsAllocation MultiArchInstallAzure MultiDiskSetup MutatingAdmissionPolicy NewOLM NewOLMCatalogdAPIV1Metas NewOLMOwnSingleNamespace NewOLMPreflightPermissionChecks NewOLMWebhookProviderOpenshiftServiceCA NoRegistryClusterOperations NodeSwap NutanixMultiSubnets OVNObservability OpenShiftPodSecurityAdmission PreconfiguredUDNAddresses SELinuxMount ShortCertRotation SignatureStores SigstoreImageVerificationPKI TranslateStreamCloseWebsocketRequests VSphereConfigurableMaxAllowedBlockVolumesPerNode VSphereHostVMGroupZonal VSphereMixedNodeEnv VolumeAttributesClass VolumeGroupSnapshot]
I0423 21:00:14.822918 1 event.go:377] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-insights", Name:"insights-operator", UID:"74754fa1-66c3-4a1c-8d54-6bae8a2b4857", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'FeatureGatesInitialized' FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdditionalRoutingCapabilities", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BuildCSIVolumes", "CPMSMachineNamePrefix", "ConsolePluginContentSecurityPolicy", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GatewayAPI", "GatewayAPIController", "HighlyAvailableArbiter", "ImageVolume", "IngressControllerLBSubnetsAWS", "KMSv1", "MachineConfigNodes", "ManagedBootImages", "ManagedBootImagesAWS", "MetricsCollectionProfiles", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "PinnedImages", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SetEIPForNLBIngressController", "SigstoreImageVerification", "StoragePerformantSecurityPolicy", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiDisk", "VSphereMultiNetworks"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AWSClusterHostedDNSInstall", "AWSDedicatedHosts", "AWSServiceLBNetworkSecurityGroup", "AutomatedEtcdBackup", "AzureClusterHostedDNSInstall", "AzureDedicatedHosts", "AzureMultiDisk", "BootImageSkewEnforcement", "BootcNodeManagement", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ClusterVersionOperatorConfiguration", "DNSNameResolver", "DualReplica", "DyanmicServiceEndpointIBMCloud", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "Example2", "ExternalSnapshotMetadata", "GCPClusterHostedDNS", "GCPClusterHostedDNSInstall", "GCPCustomAPIEndpoints", "GCPCustomAPIEndpointsInstall", "ImageModeStatusReporting", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "IrreconcilableMachineConfig", "KMSEncryptionProvider", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "ManagedBootImagesAzure", "ManagedBootImagesvSphere", "MaxUnavailableStatefulSet", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "MultiDiskSetup", "MutatingAdmissionPolicy", "NewOLM", "NewOLMCatalogdAPIV1Metas", "NewOLMOwnSingleNamespace", "NewOLMPreflightPermissionChecks", "NewOLMWebhookProviderOpenshiftServiceCA", "NoRegistryClusterOperations", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PreconfiguredUDNAddresses", "SELinuxMount", "ShortCertRotation", "SignatureStores", "SigstoreImageVerificationPKI", "TranslateStreamCloseWebsocketRequests", "VSphereConfigurableMaxAllowedBlockVolumesPerNode", "VSphereHostVMGroupZonal", "VSphereMixedNodeEnv", "VolumeAttributesClass", "VolumeGroupSnapshot"}}
I0423 21:00:14.823173 1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
I0423 21:00:14.823187 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController
I0423 21:00:14.823201 1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I0423 21:00:14.823207 1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
I0423 21:00:14.823230 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0423 21:00:14.823238 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I0423 21:00:14.823843 1 dynamic_serving_content.go:135] "Starting controller" name="serving-cert::/tmp/serving-cert-582786636/tls.crt::/tmp/serving-cert-582786636/tls.key"
I0423 21:00:14.824284 1 secure_serving.go:213] Serving securely on [::]:8443
I0423 21:00:14.824363 1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
W0423 21:00:14.829125 1 configmapobserver.go:64] Cannot get the configuration config map: configmaps "insights-config" not found. Default configuration is used.
I0423 21:00:14.829149 1 secretconfigobserver.go:216] Legacy configuration set: enabled=false endpoint=https://console.redhat.com/api/ingress/v1/upload conditional_gatherer_endpoint=https://console.redhat.com/api/gathering/v2/%s/gathering_rules interval=2h0m0s token=false reportEndpoint=https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/reports initialPollingDelay=1m0s minRetryTime=30s pollingTimeout=50m0s processingStatusEndpoint=https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/request/%s/status
I0423 21:00:14.829259 1 base_controller.go:76] Waiting for caches to sync for ConfigController
I0423 21:00:14.837137 1 secretconfigobserver.go:249] Found cloud.openshift.com token
I0423 21:00:14.837154 1 secretconfigobserver.go:204] Legacy configuration updated: enabled=true endpoint=https://console.redhat.com/api/ingress/v1/upload conditional_gatherer_endpoint=https://console.redhat.com/api/gathering/v2/%s/gathering_rules interval=2h0m0s token=true reportEndpoint=https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/reports initialPollingDelay=1m0s minRetryTime=30s pollingTimeout=50m0s processingStatusEndpoint=https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/request/%s/status
I0423 21:00:14.842017 1 secretconfigobserver.go:119] support secret does not exist
I0423 21:00:14.849794 1 secretconfigobserver.go:249] Found cloud.openshift.com token
I0423 21:00:14.854701 1 secretconfigobserver.go:119] support secret does not exist
I0423 21:00:14.859275 1 recorder.go:161] Pruning old reports every 6h55m43s, max age is 288h0m0s
I0423 21:00:14.864702 1 controllerstatus.go:80] name=insightsuploader healthy=true reason= message=
I0423 21:00:14.864706 1 periodic.go:209] Running clusterconfig gatherer
I0423 21:00:14.864720 1 insightsuploader.go:86] Reporting status periodically to https://console.redhat.com/api/ingress/v1/upload every 2h0m0s, starting in 1m30s
I0423 21:00:14.864743 1 controllerstatus.go:80] name=insightsreport healthy=true reason= message=
I0423 21:00:14.864749 1 insightsreport.go:296] Starting report retriever
I0423 21:00:14.864753 1 insightsreport.go:298] Insights analysis reports will be downloaded from the https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/reports endpoint with a delay of 1m0s
I0423 21:00:14.864758 1 tasks_processing.go:45] number of workers: 64
I0423 21:00:14.864790 1 tasks_processing.go:69] worker 7 listening for tasks.
I0423 21:00:14.864795 1 tasks_processing.go:69] worker 3 listening for tasks.
I0423 21:00:14.864801 1 tasks_processing.go:69] worker 0 listening for tasks.
I0423 21:00:14.864810 1 tasks_processing.go:69] worker 1 listening for tasks.
I0423 21:00:14.864812 1 tasks_processing.go:71] worker 3 working on oauths task.
I0423 21:00:14.864814 1 tasks_processing.go:69] worker 29 listening for tasks.
I0423 21:00:14.864819 1 tasks_processing.go:71] worker 1 working on active_alerts task.
I0423 21:00:14.864817 1 tasks_processing.go:69] worker 2 listening for tasks.
I0423 21:00:14.864823 1 tasks_processing.go:71] worker 29 working on sap_datahubs task.
I0423 21:00:14.864818 1 tasks_processing.go:69] worker 18 listening for tasks.
I0423 21:00:14.864830 1 tasks_processing.go:69] worker 8 listening for tasks.
I0423 21:00:14.864832 1 tasks_processing.go:69] worker 5 listening for tasks.
I0423 21:00:14.864834 1 tasks_processing.go:69] worker 6 listening for tasks.
I0423 21:00:14.864840 1 tasks_processing.go:69] worker 9 listening for tasks.
I0423 21:00:14.864845 1 tasks_processing.go:69] worker 4 listening for tasks.
I0423 21:00:14.864847 1 tasks_processing.go:69] worker 14 listening for tasks.
I0423 21:00:14.864850 1 tasks_processing.go:69] worker 16 listening for tasks.
I0423 21:00:14.864851 1 tasks_processing.go:69] worker 15 listening for tasks.
I0423 21:00:14.864849 1 tasks_processing.go:69] worker 23 listening for tasks.
I0423 21:00:14.864856 1 tasks_processing.go:69] worker 17 listening for tasks.
W0423 21:00:14.864852 1 gather_active_alerts.go:54] Unable to load alerts client, no alerts will be collected: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory
I0423 21:00:14.864866 1 tasks_processing.go:69] worker 20 listening for tasks.
I0423 21:00:14.864878 1 tasks_processing.go:69] worker 21 listening for tasks.
I0423 21:00:14.864877 1 gather.go:177] gatherer "clusterconfig" function "active_alerts" took 42.516µs to process 0 records
I0423 21:00:14.864859 1 tasks_processing.go:69] worker 19 listening for tasks.
I0423 21:00:14.864863 1 tasks_processing.go:69] worker 47 listening for tasks.
I0423 21:00:14.864913 1 tasks_processing.go:69] worker 58 listening for tasks.
I0423 21:00:14.864926 1 tasks_processing.go:69] worker 57 listening for tasks.
I0423 21:00:14.864938 1 tasks_processing.go:69] worker 63 listening for tasks.
I0423 21:00:14.864948 1 tasks_processing.go:71] worker 7 working on aggregated_monitoring_cr_names task.
I0423 21:00:14.864953 1 tasks_processing.go:69] worker 13 listening for tasks.
I0423 21:00:14.864959 1 tasks_processing.go:69] worker 34 listening for tasks.
I0423 21:00:14.864970 1 tasks_processing.go:69] worker 12 listening for tasks.
I0423 21:00:14.864975 1 tasks_processing.go:69] worker 31 listening for tasks.
I0423 21:00:14.864976 1 tasks_processing.go:69] worker 10 listening for tasks.
I0423 21:00:14.864986 1 tasks_processing.go:69] worker 32 listening for tasks.
I0423 21:00:14.864987 1 tasks_processing.go:69] worker 33 listening for tasks.
I0423 21:00:14.864998 1 tasks_processing.go:69] worker 28 listening for tasks.
I0423 21:00:14.865009 1 tasks_processing.go:71] worker 28 working on openshift_logging task.
I0423 21:00:14.865076 1 tasks_processing.go:69] worker 25 listening for tasks.
I0423 21:00:14.865072 1 tasks_processing.go:69] worker 11 listening for tasks.
I0423 21:00:14.864812 1 tasks_processing.go:71] worker 0 working on proxies task.
I0423 21:00:14.865097 1 tasks_processing.go:71] worker 11 working on crds task.
I0423 21:00:14.865134 1 tasks_processing.go:69] worker 44 listening for tasks.
I0423 21:00:14.865143 1 tasks_processing.go:71] worker 44 working on infrastructures task.
I0423 21:00:14.865091 1 tasks_processing.go:71] worker 25 working on validating_webhook_configurations task.
I0423 21:00:14.865189 1 tasks_processing.go:69] worker 42 listening for tasks.
I0423 21:00:14.865198 1 tasks_processing.go:69] worker 39 listening for tasks.
I0423 21:00:14.865198 1 tasks_processing.go:71] worker 2 working on image_pruners task.
I0423 21:00:14.865205 1 tasks_processing.go:71] worker 8 working on machine_sets task.
I0423 21:00:14.865214 1 tasks_processing.go:69] worker 38 listening for tasks.
I0423 21:00:14.865223 1 tasks_processing.go:69] worker 61 listening for tasks.
I0423 21:00:14.865235 1 tasks_processing.go:69] worker 35 listening for tasks.
I0423 21:00:14.865238 1 tasks_processing.go:69] worker 45 listening for tasks.
I0423 21:00:14.865239 1 tasks_processing.go:69] worker 62 listening for tasks.
I0423 21:00:14.865247 1 tasks_processing.go:69] worker 51 listening for tasks.
I0423 21:00:14.865246 1 tasks_processing.go:69] worker 36 listening for tasks.
I0423 21:00:14.865257 1 tasks_processing.go:69] worker 27 listening for tasks.
I0423 21:00:14.865260 1 tasks_processing.go:69] worker 49 listening for tasks.
I0423 21:00:14.865268 1 tasks_processing.go:69] worker 43 listening for tasks.
I0423 21:00:14.865259 1 tasks_processing.go:69] worker 37 listening for tasks.
I0423 21:00:14.865253 1 tasks_processing.go:69] worker 48 listening for tasks.
I0423 21:00:14.865268 1 tasks_processing.go:69] worker 50 listening for tasks.
I0423 21:00:14.865272 1 tasks_processing.go:69] worker 46 listening for tasks.
I0423 21:00:14.865279 1 tasks_processing.go:71] worker 48 working on container_images task.
I0423 21:00:14.865282 1 tasks_processing.go:69] worker 54 listening for tasks.
I0423 21:00:14.865275 1 tasks_processing.go:69] worker 53 listening for tasks.
I0423 21:00:14.865287 1 tasks_processing.go:69] worker 55 listening for tasks.
I0423 21:00:14.865287 1 tasks_processing.go:71] worker 38 working on monitoring_persistent_volumes task.
I0423 21:00:14.865286 1 tasks_processing.go:69] worker 60 listening for tasks.
I0423 21:00:14.865295 1 tasks_processing.go:69] worker 40 listening for tasks.
I0423 21:00:14.865296 1 tasks_processing.go:69] worker 59 listening for tasks.
I0423 21:00:14.865294 1 tasks_processing.go:71] worker 35 working on lokistack task.
I0423 21:00:14.865298 1 tasks_processing.go:69] worker 22 listening for tasks.
I0423 21:00:14.865284 1 tasks_processing.go:71] worker 39 working on jaegers task.
I0423 21:00:14.865314 1 tasks_processing.go:71] worker 6 working on tsdb_status task.
I0423 21:00:14.865314 1 tasks_processing.go:71] worker 36 working on openstack_dataplanedeployments task.
I0423 21:00:14.865320 1 tasks_processing.go:71] worker 15 working on schedulers task.
I0423 21:00:14.865281 1 tasks_processing.go:71] worker 42 working on qemu_kubevirt_launcher_logs task.
I0423 21:00:14.865278 1 tasks_processing.go:71] worker 37 working on config_maps task.
I0423 21:00:14.865384 1 tasks_processing.go:71] worker 4 working on ingress task.
I0423 21:00:14.865787 1 tasks_processing.go:71] worker 22 working on container_runtime_configs task.
I0423 21:00:14.865846 1 tasks_processing.go:69] worker 26 listening for tasks.
I0423 21:00:14.865966 1 tasks_processing.go:69] worker 24 listening for tasks.
I0423 21:00:14.866068 1 tasks_processing.go:71] worker 26 working on certificate_signing_requests task.
I0423 21:00:14.866151 1 tasks_processing.go:71] worker 14 working on image_registries task.
I0423 21:00:14.866186 1 tasks_processing.go:71] worker 46 working on nodes task.
I0423 21:00:14.865386 1 tasks_processing.go:71] worker 9 working on image task.
I0423 21:00:14.866411 1 tasks_processing.go:71] worker 23 working on node_logs task.
I0423 21:00:14.866443 1 tasks_processing.go:71] worker 58 working on machine_healthchecks task.
I0423 21:00:14.866520 1 tasks_processing.go:71] worker 53 working on operators task.
I0423 21:00:14.865297 1 tasks_processing.go:71] worker 45 working on ingress_certificates task.
I0423 21:00:14.866769 1 tasks_processing.go:71] worker 31 working on olm_operators task.
I0423 21:00:14.866806 1 tasks_processing.go:71] worker 16 working on machines task.
I0423 21:00:14.867169 1 tasks_processing.go:71] worker 19 working on overlapping_namespace_uids task.
I0423 21:00:14.865301 1 tasks_processing.go:71] worker 62 working on nodenetworkconfigurationpolicies task.
I0423 21:00:14.867420 1 tasks_processing.go:71] worker 40 working on mutating_webhook_configurations task.
I0423 21:00:14.867507 1 tasks_processing.go:71] worker 50 working on storage_cluster task.
I0423 21:00:14.865304 1 tasks_processing.go:71] worker 27 working on dvo_metrics task.
I0423 21:00:14.867749 1 tasks_processing.go:71] worker 32 working on metrics task.
I0423 21:00:14.867796 1 tasks_processing.go:71] worker 34 working on sap_config task.
I0423 21:00:14.867996 1 tasks_processing.go:71] worker 1 working on clusterroles task.
I0423 21:00:14.868012 1 tasks_processing.go:71] worker 17 working on openstack_version task.
I0423 21:00:14.868024 1 tasks_processing.go:71] worker 47 working on service_accounts task.
I0423 21:00:14.867756 1 tasks_processing.go:71] worker 33 working on machine_autoscalers task.
I0423 21:00:14.866415 1 tasks_processing.go:71] worker 54 working on machine_config_pools task.
I0423 21:00:14.865819 1 tasks_processing.go:69] worker 56 listening for tasks.
I0423 21:00:14.866489 1 tasks_processing.go:71] worker 13 working on cluster_apiserver task.
I0423 21:00:14.867797 1 tasks_processing.go:71] worker 57 working on storage_classes task.
I0423 21:00:14.865306 1 tasks_processing.go:71] worker 21 working on authentication task.
I0423 21:00:14.865310 1 tasks_processing.go:71] worker 5 working on openstack_controlplanes task.
I0423 21:00:14.865308 1 tasks_processing.go:71] worker 51 working on support_secret task.
I0423 21:00:14.865308 1 tasks_processing.go:71] worker 43 working on version task.
I0423 21:00:14.865290 1 tasks_processing.go:71] worker 61 working on number_of_pods_and_netnamespaces_with_sdn_annotations task.
I0423 21:00:14.865709 1 tasks_processing.go:69] worker 41 listening for tasks.
I0423 21:00:14.865837 1 tasks_processing.go:69] worker 30 listening for tasks.
I0423 21:00:14.867300 1 tasks_processing.go:71] worker 24 working on cost_management_metrics_configs task.
I0423 21:00:14.865202 1 tasks_processing.go:71] worker 18 working on pod_network_connectivity_checks task.
W0423 21:00:14.867767 1 gather_prometheus_tsdb_status.go:38] Unable to load metrics client, tsdb status cannot be collected: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory
I0423 21:00:14.865305 1 tasks_processing.go:71] worker 49 working on openshift_machine_api_events task.
I0423 21:00:14.865281 1 tasks_processing.go:69] worker 52 listening for tasks.
I0423 21:00:14.867897 1 tasks_processing.go:71] worker 55 working on networks task.
I0423 21:00:14.867903 1 tasks_processing.go:71] worker 60 working on ceph_cluster task.
I0423 21:00:14.867918 1 tasks_processing.go:71] worker 63 working on silenced_alerts task.
W0423 21:00:14.868603 1 gather_silenced_alerts.go:38] Unable to load alerts client, no alerts will be collected: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory
I0423 21:00:14.867926 1 tasks_processing.go:71] worker 10 working on operators_pods_and_events task.
I0423 21:00:14.867928 1 tasks_processing.go:71] worker 12 working on install_plans task.
I0423 21:00:14.867943 1 tasks_processing.go:71] worker 59 working on pdbs task.
W0423 21:00:14.867967 1 gather_most_recent_metrics.go:64] Unable to load metrics client, no metrics will be collected: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory
I0423 21:00:14.868066 1 tasks_processing.go:71] worker 20 working on openstack_dataplanenodesets task.
I0423 21:00:14.868441 1 tasks_processing.go:71] worker 6 working on nodenetworkstates task.
I0423 21:00:14.869255 1 tasks_processing.go:71] worker 56 working on sap_pods task.
I0423 21:00:14.869713 1 tasks_processing.go:71] worker 41 working on feature_gates task.
I0423 21:00:14.869741 1 tasks_processing.go:71] worker 30 working on machine_configs task.
I0423 21:00:14.869828 1 gather.go:177] gatherer "clusterconfig" function "tsdb_status" took 3.115745ms to process 0 records
I0423 21:00:14.869846 1 gather.go:177] gatherer "clusterconfig" function "silenced_alerts" took 28.951µs to process 0 records
I0423 21:00:14.869852 1 gather.go:177] gatherer "clusterconfig" function "metrics" took 1.020271ms to process 0 records
I0423 21:00:14.869861 1 gather.go:177] gatherer "clusterconfig" function "sap_datahubs" took 4.674285ms to process 0 records
I0423 21:00:14.869869 1 tasks_processing.go:74] worker 29 stopped.
I0423 21:00:14.869871 1 tasks_processing.go:74] worker 63 stopped.
I0423 21:00:14.869876 1 tasks_processing.go:74] worker 32 stopped.
I0423 21:00:14.869883 1 tasks_processing.go:74] worker 52 stopped.
I0423 21:00:14.870333 1 tasks_processing.go:74] worker 28 stopped.
I0423 21:00:14.870345 1 gather.go:177] gatherer "clusterconfig" function "openshift_logging" took 5.310601ms to process 0 records
I0423 21:00:14.871482 1 tasks_processing.go:74] worker 8 stopped.
I0423 21:00:14.871494 1 gather.go:177] gatherer "clusterconfig" function "machine_sets" took 6.267624ms to process 0 records
I0423 21:00:14.871549 1 controller.go:128] Initializing last reported time to 0001-01-01T00:00:00Z
I0423 21:00:14.871566 1 controller.go:212] Source periodic-clusterconfig *controllerstatus.Simple is not ready
I0423 21:00:14.871571 1 controller.go:212] Source periodic-conditional *controllerstatus.Simple is not ready
I0423 21:00:14.871574 1 controller.go:212] Source periodic-workloads *controllerstatus.Simple is not ready
I0423 21:00:14.871587 1 controller.go:489] The operator is still being initialized
I0423 21:00:14.871594 1 controller.go:512] The operator is healthy
I0423 21:00:14.871625 1 tasks_processing.go:74] worker 22 stopped.
I0423 21:00:14.871636 1 gather.go:177] gatherer "clusterconfig" function "container_runtime_configs" took 5.769161ms to process 0 records
I0423 21:00:14.871894 1 tasks_processing.go:74] worker 36 stopped.
I0423 21:00:14.871906 1 gather.go:177] gatherer "clusterconfig" function "openstack_dataplanedeployments" took 6.568643ms to process 0 records
I0423 21:00:14.871927 1 tasks_processing.go:74] worker 35 stopped.
I0423 21:00:14.871936 1 gather.go:177] gatherer "clusterconfig" function "lokistack" took 6.616776ms to process 0 records
I0423 21:00:14.885637 1 tasks_processing.go:74] worker 58 stopped.
E0423 21:00:14.885649 1 gather.go:140] gatherer "clusterconfig" function "machine_healthchecks" failed with the error: machinehealthchecks.machine.openshift.io is forbidden: User "system:serviceaccount:openshift-insights:gather" cannot list resource "machinehealthchecks" in API group "machine.openshift.io" at the cluster scope
I0423 21:00:14.885657 1 gather.go:177] gatherer "clusterconfig" function "machine_healthchecks" took 19.133111ms to process 0 records
I0423 21:00:14.885753 1 tasks_processing.go:74] worker 44 stopped.
I0423 21:00:14.886287 1 recorder.go:75] Recording config/infrastructure with fingerprint=261cbe73939524ea6642fe0778eef685515377cab250ee9a407f6f527d619322
I0423 21:00:14.886303 1 gather.go:177] gatherer "clusterconfig" function "infrastructures" took 20.60409ms to process 1 records
I0423 21:00:14.886317 1 tasks_processing.go:74] worker 17 stopped.
I0423 21:00:14.886332 1 gather.go:177] gatherer "clusterconfig" function "openstack_version" took 18.293003ms to process 0 records
I0423 21:00:14.886517 1 tasks_processing.go:74] worker 9 stopped.
I0423 21:00:14.886625 1 recorder.go:75] Recording config/image with fingerprint=f36959bb825c8ea99a38e0c13e20e3636a3a9660dd101577315128bbe49e6674
I0423 21:00:14.886645 1 gather.go:177] gatherer "clusterconfig" function "image" took 20.102286ms to process 1 records
I0423 21:00:14.886739 1 tasks_processing.go:74] worker 2 stopped.
I0423 21:00:14.886874 1 recorder.go:75] Recording config/clusteroperator/imageregistry.operator.openshift.io/imagepruner/cluster with fingerprint=365c41ec86723f23671378c27c2d003b0859e5cfefb3c447d1fe936a0ffed955
I0423 21:00:14.886887 1 gather.go:177] gatherer "clusterconfig" function "image_pruners" took 21.309776ms to process 1 records
I0423 21:00:14.886961 1 tasks_processing.go:74] worker 3 stopped.
I0423 21:00:14.887110 1 recorder.go:75] Recording config/oauth with fingerprint=e01aa4c8cd725b2ddd5e9aeee707da45ee3475b9ac1843e59227233afef84fe4
I0423 21:00:14.887122 1 gather.go:177] gatherer "clusterconfig" function "oauths" took 21.700726ms to process 1 records
I0423 21:00:14.887248 1 tasks_processing.go:74] worker 4 stopped.
I0423 21:00:14.887251 1 recorder.go:75] Recording config/ingress with fingerprint=6d05433f68ff1d6481931509d13e46474b2b3fa557e74bc6eb305ed0756f1c22
I0423 21:00:14.887261 1 gather.go:177] gatherer "clusterconfig" function "ingress" took 20.816639ms to process 1 records
I0423 21:00:14.887266 1 gather.go:177] gatherer "clusterconfig" function "ceph_cluster" took 18.095906ms to process 0 records
I0423 21:00:14.887270 1 gather.go:177] gatherer "clusterconfig" function "cost_management_metrics_configs" took 18.31277ms to process 0 records
I0423 21:00:14.887277 1 gather.go:177] gatherer "clusterconfig" function "storage_cluster" took 19.128584ms to process 0 records
I0423 21:00:14.887280 1 gather.go:177] gatherer "clusterconfig" function "openstack_dataplanenodesets" took 17.827376ms to process 0 records
I0423 21:00:14.887283 1 gather.go:177] gatherer "clusterconfig" function "openstack_controlplanes" took 18.418707ms to process 0 records
I0423 21:00:14.887288 1 gather.go:177] gatherer "clusterconfig" function "openshift_machine_api_events" took 18.301963ms to process 0 records
E0423 21:00:14.887292 1 gather.go:140] gatherer "clusterconfig" function "machines" failed with the error: machines.machine.openshift.io is forbidden: User "system:serviceaccount:openshift-insights:gather" cannot list resource "machines" in API group "machine.openshift.io" at the cluster scope
I0423 21:00:14.887299 1 gather.go:177] gatherer "clusterconfig" function "machines" took 19.938184ms to process 0 records
I0423 21:00:14.887316 1 tasks_processing.go:74] worker 20 stopped.
I0423 21:00:14.887328 1 tasks_processing.go:74] worker 49 stopped.
I0423 21:00:14.887332 1 tasks_processing.go:74] worker 5 stopped.
I0423 21:00:14.887339 1 tasks_processing.go:74] worker 60 stopped.
I0423 21:00:14.887342 1 tasks_processing.go:74] worker 24 stopped.
I0423 21:00:14.887344 1 tasks_processing.go:74] worker 50 stopped.
I0423 21:00:14.887346 1 tasks_processing.go:74] worker 16 stopped.
I0423 21:00:14.887352 1 recorder.go:75] Recording config/proxy with fingerprint=f05c3dbe2d5f96c32f3908814e3fb94c1fb57c2a0889b9029c70710401d06812
I0423 21:00:14.887357 1 tasks_processing.go:74] worker 0 stopped.
I0423 21:00:14.887363 1 gather.go:177] gatherer "clusterconfig" function "proxies" took 21.756843ms to process 1 records
I0423 21:00:14.887375 1 gather.go:177] gatherer "clusterconfig" function "nodenetworkstates" took 18.002935ms to process 0 records
I0423 21:00:14.887378 1 gather.go:177] gatherer "clusterconfig" function "sap_config" took 18.883513ms to process 0 records
I0423 21:00:14.887385 1 gather.go:177] gatherer "clusterconfig" function "nodenetworkconfigurationpolicies" took 19.528525ms to process 0 records
I0423 21:00:14.887407 1 gather.go:177] gatherer "clusterconfig" function "sap_pods" took 17.61397ms to process 0 records
I0423 21:00:14.887410 1 tasks_processing.go:74] worker 34 stopped.
I0423 21:00:14.887412 1 tasks_processing.go:74] worker 6 stopped.
I0423 21:00:14.887412 1 gather.go:177] gatherer "clusterconfig" function "machine_autoscalers" took 18.831134ms to process 0 records
I0423 21:00:14.887421 1 tasks_processing.go:74] worker 56 stopped.
I0423 21:00:14.887416 1 tasks_processing.go:74] worker 33 stopped.
I0423 21:00:14.887420 1 tasks_processing.go:74] worker 62 stopped.
I0423 21:00:14.888413 1 tasks_processing.go:74] worker 25 stopped.
I0423 21:00:14.888697 1 recorder.go:75] Recording config/validatingwebhookconfigurations/multus.openshift.io with fingerprint=1f7c17c58554d8fb9250af1ed443f90d4d347ccc471c8cecb80e41c1f4fc94ef
I0423 21:00:14.888888 1 recorder.go:75] Recording config/validatingwebhookconfigurations/network-node-identity.openshift.io with fingerprint=bf42999d10a12c155635ea11d19da33fdaded814a6d9b6e53ec4acefff1d3ae9
I0423 21:00:14.888899 1 gather_logs.go:145] no pods in namespace were found
I0423 21:00:14.888927 1 recorder.go:75] Recording config/validatingwebhookconfigurations/performance-addon-operator with fingerprint=c4ea76b8a1009395c95da4781590d30b8bf57f9a2052f65d3575973ab3208155
I0423 21:00:14.889227 1 recorder.go:75] Recording config/validatingwebhookconfigurations/sre-clusterrolebindings-validation with fingerprint=fe8224d582b8bb974dbdc21a7b0edbe5c627072100a31c44a58cc7ac8447b13b
I0423 21:00:14.889292 1 recorder.go:75] Recording config/validatingwebhookconfigurations/sre-clusterroles-validation with fingerprint=0b2b019c89b4032ff3e7caee91b6df8e8b7b878bb6177b41429133b1ff4ba966
I0423 21:00:14.889347 1 recorder.go:75] Recording config/validatingwebhookconfigurations/sre-ingress-config-validation with fingerprint=93d093978951276e32838eba32a4ff82eb89acf0e29b85ca7a22ea49f0a7d576
I0423 21:00:14.889476 1 recorder.go:75] Recording config/validatingwebhookconfigurations/sre-network-operator-validation with fingerprint=0f9ffd51d0cd618b08e69056e8e4cf7fb70e5e22b982bb9dbc707c0fdd565fc0
I0423 21:00:14.889552 1 recorder.go:75] Recording config/validatingwebhookconfigurations/sre-regular-user-validation with fingerprint=af238762571d58dbf9649e0116bce4eb9f4d7e4f417a4de73bda357465b91c83
I0423 21:00:14.889919 1 recorder.go:75] Recording config/validatingwebhookconfigurations/sre-scc-validation with fingerprint=603ac6597d619068121e1af47a576120dc3c1a09cbf9897890cf9b8529533768
I0423 21:00:14.889981 1 recorder.go:75] Recording config/validatingwebhookconfigurations/sre-serviceaccount-validation with fingerprint=d3d69a844af234d62d39869c0023b499433900b4e801fa515ba45c8e7aae8e74
I0423 21:00:14.890054 1 recorder.go:75] Recording config/validatingwebhookconfigurations/sre-techpreviewnoupgrade-validation with fingerprint=e09aca962e689867e3e679642707d45c1c76c42ccce2ed0c5926a2b615139d5d
I0423 21:00:14.890084 1 gather.go:177] gatherer "clusterconfig" function "validating_webhook_configurations" took 23.230413ms to process 11 records
E0423 21:00:14.890110 1 gather.go:140] gatherer "clusterconfig" function "pod_network_connectivity_checks" failed with the error: the server could not find the requested resource (get podnetworkconnectivitychecks.controlplane.operator.openshift.io)
I0423 21:00:14.890119 1 gather.go:177] gatherer "clusterconfig" function "pod_network_connectivity_checks" took 20.460815ms to process 0 records
I0423 21:00:14.890128 1 gather.go:177] gatherer "clusterconfig" function "jaegers" took 23.56345ms to process 0 records
I0423 21:00:14.890138 1 gather.go:177] gatherer "clusterconfig" function "qemu_kubevirt_launcher_logs" took 23.550192ms to process 0 records
I0423 21:00:14.890277 1 recorder.go:75] Recording config/storage/storageclasses/gp2-csi with fingerprint=aa12fedfeb4e3c01b1769366d0fce85cb38765ba1d53ba870e1295449518d3e7
I0423 21:00:14.890337 1 recorder.go:75] Recording config/storage/storageclasses/gp3-csi with fingerprint=502d25a4ed416d35119068e131a8b81ec68405f9b07fc2d8e627faf275fb2f93
I0423 21:00:14.890284 1 tasks_processing.go:74] worker 39 stopped.
I0423 21:00:14.890291 1 tasks_processing.go:74] worker 42 stopped.
I0423 21:00:14.890351 1 gather.go:177] gatherer "clusterconfig" function "storage_classes" took 21.72584ms to process 2 records
I0423 21:00:14.890363 1 gather.go:177] gatherer "clusterconfig" function "node_logs" took 23.621242ms to process 0 records
I0423 21:00:14.890295 1 tasks_processing.go:74] worker 18 stopped.
I0423 21:00:14.890303 1 tasks_processing.go:74] worker 57 stopped.
I0423 21:00:14.890414 1 tasks_processing.go:74] worker 23 stopped.
I0423 21:00:14.890478 1 tasks_processing.go:74] worker 59 stopped.
I0423 21:00:14.890613 1 recorder.go:75] Recording config/pdbs/openshift-image-registry/image-registry with fingerprint=6c565ef81127e20068b93a19d62f289ba902cf184b4ab85979cf4eb814f56837
I0423 21:00:14.890686 1 recorder.go:75] Recording config/pdbs/openshift-ingress/router-default with fingerprint=3bb81195860cd29e01481ec19631184d997a9fdc2192eeea83cc3748371a0716
I0423 21:00:14.890738 1 recorder.go:75] Recording config/pdbs/openshift-operator-lifecycle-manager/packageserver-pdb with fingerprint=53f0b59d02002fe522f494b00f328c980a7d9b3b73cc653decaff2fef3e99398
I0423 21:00:14.890750 1 gather.go:177] gatherer "clusterconfig" function "pdbs" took 21.268384ms to process 3 records
I0423 21:00:14.890763 1 gather.go:177] gatherer "clusterconfig" function "machine_config_pools" took 22.142712ms to process 0 records
I0423 21:00:14.890770 1 tasks_processing.go:74] worker 54 stopped.
I0423 21:00:14.891289 1 tasks_processing.go:74] worker 14 stopped.
I0423 21:00:14.891637 1 recorder.go:75] Recording config/clusteroperator/imageregistry.operator.openshift.io/config/cluster with fingerprint=ced11e937fc8c67dd1bcbeb352df26a786dfd22bdc262fb54178d2d20946bb20
I0423 21:00:14.891653 1 gather.go:177] gatherer "clusterconfig" function "image_registries" took 25.117846ms to process 1 records
I0423 21:00:14.891731 1 tasks_processing.go:74] worker 46 stopped.
I0423 21:00:14.891880 1 recorder.go:75] Recording config/node/ip-10-0-0-174.ec2.internal with fingerprint=acadf8f0e27cfb2094567aa6a30b282eced36ed32ef70f7066f28980a3c899bf
I0423 21:00:14.891930 1 recorder.go:75] Recording config/node/ip-10-0-1-80.ec2.internal with fingerprint=289f6601148623e2b3e83adf365427b8de3415bdc70c1f56623a4cd0ca9d8215
I0423 21:00:14.891980 1 recorder.go:75] Recording config/node/ip-10-0-2-179.ec2.internal with fingerprint=204e163b005236d641d8bdd32c5c1f7751b4863c566ee485aba77cf1c6b8d49e
I0423 21:00:14.891986 1 gather.go:177] gatherer "clusterconfig" function "nodes" took 25.438523ms to process 3 records
I0423 21:00:14.892071 1 tasks_processing.go:74] worker 21 stopped.
I0423 21:00:14.892146 1 recorder.go:75] Recording config/authentication with fingerprint=adfc60caa207915e9db4713c5f4c34af3c2c4a121f96276429ed19f07d09b272
I0423 21:00:14.892155 1 gather.go:177] gatherer "clusterconfig" function "authentication" took 23.37834ms to process 1 records
I0423 21:00:14.892160 1 gather.go:177] gatherer "clusterconfig" function "monitoring_persistent_volumes" took 26.621196ms to process 0 records
I0423 21:00:14.892166 1 tasks_processing.go:74] worker 38 stopped.
I0423 21:00:14.892670 1 tasks_processing.go:74] worker 26 stopped.
I0423 21:00:14.892682 1 gather.go:177] gatherer "clusterconfig" function "certificate_signing_requests" took 26.565082ms to process 0 records
I0423 21:00:14.895241 1 sca.go:136] Pulling SCA certificates from https://api.openshift.com/api/accounts_mgmt/v1/entitlement_certificates. Next check is in 8h0m0s
I0423 21:00:14.895284 1 cluster_transfer.go:83] checking the availability of cluster transfer. Next check is in 12h0m0s
I0423 21:00:14.895319 1 tasks_processing.go:74] worker 15 stopped.
W0423 21:00:14.895406 1 operator.go:288] started
I0423 21:00:14.895421 1 recorder.go:75] Recording config/schedulers/cluster with fingerprint=cd864b3b787fa0c263a4fd0b1e8039d5d380c9519154c9b056e2cab8624e69e5
I0423 21:00:14.895421 1 base_controller.go:76] Waiting for caches to sync for LoggingSyncer
I0423 21:00:14.895431 1 gather.go:177] gatherer "clusterconfig" function "schedulers" took 29.989235ms to process 1 records
I0423 21:00:14.895785 1 tasks_processing.go:74] worker 19 stopped.
I0423 21:00:14.895807 1 recorder.go:75] Recording config/namespaces_with_overlapping_uids with fingerprint=4f53cda18c2baa0c0354bb5f9a3ecbe5ed12ab4d8e11ba873c2f11161202b945
I0423 21:00:14.895821 1 gather.go:177] gatherer "clusterconfig" function "overlapping_namespace_uids" took 28.526085ms to process 1 records
I0423 21:00:14.895984 1 tasks_processing.go:74] worker 55 stopped.
I0423 21:00:14.896161 1 recorder.go:75] Recording config/network with fingerprint=887b832d72efabb92c1d5cfb1a058381128eb49c9b4867eb9808e8c77ae66fd0
I0423 21:00:14.896176 1 gather.go:177] gatherer "clusterconfig" function "networks" took 27.507243ms to process 1 records
I0423 21:00:14.902886 1 tasks_processing.go:74] worker 40 stopped.
I0423 21:00:14.903022 1 recorder.go:75] Recording config/mutatingwebhookconfigurations/aws-pod-identity with fingerprint=b8c400fb9b2c280c67bf21ed5632fc1d1565f8f2dbe7070b16a247d4fa9daa73
I0423 21:00:14.903102 1 recorder.go:75] Recording config/mutatingwebhookconfigurations/sre-podimagespec-mutation with fingerprint=0c98a5cb49de95bbaeaa7520647d9a625990a0124f4a5f31b03b08c6b16090f0
I0423 21:00:14.903149 1 recorder.go:75] Recording config/mutatingwebhookconfigurations/sre-service-mutation with fingerprint=dc41191f586985d63660ecd46f413e0107a2eb4f38d433f426cfa64546cf60cb
I0423 21:00:14.903162 1 gather.go:177] gatherer "clusterconfig" function "mutating_webhook_configurations" took 35.444653ms to process 3 records
I0423 21:00:14.903383 1 tasks_processing.go:74] worker 51 stopped.
E0423 21:00:14.903411 1 gather.go:140] gatherer "clusterconfig" function "support_secret" failed with the error: secrets "support" not found
I0423 21:00:14.903423 1 gather.go:177] gatherer "clusterconfig" function "support_secret" took 35.0719ms to process 0 records
I0423 21:00:14.903697 1 tasks_processing.go:74] worker 13 stopped.
I0423 21:00:14.903798 1 recorder.go:75] Recording config/apiserver with fingerprint=1712650b84b3c02d1eb92d9759ef368e8f93229233ed564493670558c17ed79c
I0423 21:00:14.903809 1 gather.go:177] gatherer "clusterconfig" function "cluster_apiserver" took 35.441334ms to process 1 records
I0423 21:00:14.905132 1 tasks_processing.go:74] worker 11 stopped.
I0423 21:00:14.906686 1 recorder.go:75] Recording config/crd/volumesnapshots.snapshot.storage.k8s.io with fingerprint=b5299c946d54f2a2e8eaeca1bc308b13f5c21351fc3fed2bb4b8e47722ae672c
I0423 21:00:14.906887 1 controller.go:212] Source periodic-conditional *controllerstatus.Simple is not ready
I0423 21:00:14.906903 1 controller.go:212] Source periodic-workloads *controllerstatus.Simple is not ready
I0423 21:00:14.906906 1 controller.go:212] Source scaController *sca.Controller is not ready
I0423 21:00:14.906909 1 controller.go:212] Source clusterTransferController *clustertransfer.Controller is not ready
I0423 21:00:14.906912 1 controller.go:212] Source periodic-clusterconfig *controllerstatus.Simple is not ready
I0423 21:00:14.906926 1 controller.go:489] The operator is still being initialized
I0423 21:00:14.906931 1 controller.go:512] The operator is healthy
I0423 21:00:14.906945 1 recorder.go:75] Recording config/crd/volumesnapshotcontents.snapshot.storage.k8s.io with fingerprint=33d7c8760ff7e04b03dbe34b22df9905594076ea731be018085078c27d4390de
I0423 21:00:14.906996 1 gather.go:177] gatherer "clusterconfig" function "crds" took 40.021992ms to process 2 records
I0423 21:00:14.907077 1 recorder.go:75] Recording config/olm_operators with fingerprint=adcd3f65705c29a217d14d01f0eb1e6d9c74cdc9c795e95db503bb5d8c98933d
I0423 21:00:14.907087 1 gather.go:177] gatherer "clusterconfig" function "olm_operators" took 38.955988ms to process 1 records
I0423 21:00:14.907129 1 tasks_processing.go:74] worker 31 stopped.
I0423 21:00:14.907204 1 tasks_processing.go:74] worker 1 stopped.
I0423 21:00:14.907307 1 recorder.go:75] Recording cluster-scoped-resources/rbac.authorization.k8s.io/clusterroles/admin with fingerprint=f3dca3525feba770de07c4dfa6dd695ec17f35a609c601e774923b3b87cc18df
I0423 21:00:14.907477 1 recorder.go:75] Recording cluster-scoped-resources/rbac.authorization.k8s.io/clusterroles/edit with fingerprint=f223415ef35dba6ff19444cc1e2f55c5b236fa8064ca709c226bbf1522f16c58
I0423 21:00:14.907493 1 gather.go:177] gatherer "clusterconfig" function "clusterroles" took 38.734836ms to process 2 records
I0423 21:00:14.907580 1 tasks_processing.go:74] worker 41 stopped.
I0423 21:00:14.907683 1 recorder.go:75] Recording config/featuregate with fingerprint=76d2637cc5c795277554ca141013d68c7738c98df8351d192e7b4dbfa8fee3c7
I0423 21:00:14.907697 1 gather.go:177] gatherer "clusterconfig" function "feature_gates" took 37.403177ms to process 1 records
I0423 21:00:14.907708 1 gather.go:177] gatherer "clusterconfig" function "aggregated_monitoring_cr_names" took 42.552759ms to process 0 records
I0423 21:00:14.907718 1 tasks_processing.go:74] worker 7 stopped.
I0423 21:00:14.910513 1 tasks_processing.go:74] worker 48 stopped.
I0423 21:00:14.910642 1 prometheus_rules.go:88] Prometheus rules successfully created
W0423 21:00:14.911246 1 gather_dvo_metrics.go:210] Failed to read the DVO metrics. Trying again.
I0423 21:00:14.911693 1 recorder.go:75] Recording config/pod/openshift-console-operator/console-operator-575cd97545-94cdq with fingerprint=9eb8936b3c63a4f0bc8f10f774d7819c4b60aa53d1337a8726062242a9ac395f
I0423 21:00:14.911795 1 recorder.go:75] Recording config/pod/openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-0-174.ec2.internal with fingerprint=4085cb974479f0d2492242ea4fc66ab7d99f930c76bff051a9e8924a316e8086
I0423 21:00:14.911860 1 recorder.go:75] Recording config/pod/openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-1-80.ec2.internal with fingerprint=26d7524f3540735dcc78a06275921f01c5256a4586e07bd5db4f9c6cfdf2c5f9
I0423 21:00:14.911926 1 recorder.go:75] Recording config/pod/openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-2-179.ec2.internal with fingerprint=86a80cf45c1bb47fda322b0a87a0fd01b3b79af893da0ea689a488039db89a95
I0423 21:00:14.912159 1 recorder.go:75] Recording config/pod/openshift-multus/multus-vvf7k with fingerprint=9d436e1bdeb9d10809bbf19db04d83f027265d5cd4f54807e858de28777a1442
I0423 21:00:14.912363 1 recorder.go:75] Recording config/running_containers with fingerprint=127e2ea3fa3c1a4519b741c3dbf507ac435af55c58f353c957075f2f1844759a
I0423 21:00:14.912455 1 gather.go:177] gatherer "clusterconfig" function "container_images" took 45.219986ms to process 6 records
E0423 21:00:14.914564 1 cluster_transfer.go:95] failed to pull cluster transfer: unable to retrieve cluster transfer data from https://api.openshift.com/api/accounts_mgmt/v1/cluster_transfers/: Get "https://api.openshift.com/api/accounts_mgmt/v1/cluster_transfers/?search=cluster_uuid+is+%2767cd5ead-9715-4ee2-b684-66811226e9c6%27+and+status+is+%27accepted%27": dial tcp: lookup api.openshift.com on 172.30.0.10:53: read udp 10.128.0.15:57908->172.30.0.10:53: read: connection refused
I0423 21:00:14.914577 1 controllerstatus.go:80] name=clusterTransferController healthy=true reason=Disconnected message=failed to pull cluster transfer: unable to retrieve cluster transfer data from https://api.openshift.com/api/accounts_mgmt/v1/cluster_transfers/: Get "https://api.openshift.com/api/accounts_mgmt/v1/cluster_transfers/?search=cluster_uuid+is+%2767cd5ead-9715-4ee2-b684-66811226e9c6%27+and+status+is+%27accepted%27": dial tcp: lookup api.openshift.com on 172.30.0.10:53: read udp 10.128.0.15:57908->172.30.0.10:53: read: connection refused
I0423 21:00:14.923111 1 tasks_processing.go:74] worker 37 stopped.
E0423 21:00:14.923126 1 gather.go:140] gatherer "clusterconfig" function "config_maps" failed with the error: configmaps "cluster-monitoring-config" not found
E0423 21:00:14.923131 1 gather.go:140] gatherer "clusterconfig" function "config_maps" failed with the error: configmaps "gateway-mode-config" not found
E0423 21:00:14.923134 1 gather.go:140] gatherer "clusterconfig" function "config_maps" failed with the error: configmaps "insights-config" not found
I0423 21:00:14.923142 1 recorder.go:75] Recording config/configmaps/openshift-config/installer-images/images.json with fingerprint=26b6661162b099a0f5a279859b4f46c867929a79d9a4a41fde4be4e6fe138018
I0423 21:00:14.923166 1 recorder.go:75] Recording config/configmaps/openshift-config/kube-root-ca.crt/ca.crt with fingerprint=d476c7d3f5b104863f08f481b1264dcc68cc272ecefb0ecb709b18a6afab034d
I0423 21:00:14.923170 1 recorder.go:75] Recording config/configmaps/openshift-config/openshift-install/invoker with fingerprint=76b482f683cd3ef9da02debac5b26080a5aeb06ff768ee5c21117514dff29d8a
I0423 21:00:14.923174 1 recorder.go:75] Recording config/configmaps/openshift-config/openshift-install/version with fingerprint=c93090eb0d2a4736885abeb79c91680cfd01fda46464f83456b085d4dc8239f0
I0423 21:00:14.923178 1 recorder.go:75] Recording config/configmaps/openshift-config/openshift-service-ca.crt/service-ca.crt with fingerprint=e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
I0423 21:00:14.923214 1 recorder.go:75] Recording config/configmaps/openshift-config/rosa-brand-logo/rosa-brand-logo.svg with fingerprint=6ed8ca4dd7a8eee7249182bc006e9649ce84d76c551ddfaaa33e55d8c4cc1ed0
I0423 21:00:14.923221 1 recorder.go:75] Recording config/configmaps/kube-system/cluster-config-v1/install-config with fingerprint=ab3811c6b83fd7b8e920094cfa3080d1b4ee3c35ec4c8379437b21d27bd6608d
I0423 21:00:14.923226 1 gather.go:177] gatherer "clusterconfig" function "config_maps" took 57.723909ms to process 7 records
I0423 21:00:14.923244 1 shared_informer.go:320] Caches are synced for RequestHeaderAuthRequestController
I0423 21:00:14.923257 1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0423 21:00:14.923294 1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I0423 21:00:14.923367 1 tasks_processing.go:74] worker 43 stopped.
I0423 21:00:14.923720 1 recorder.go:75] Recording config/version with fingerprint=6320ee8a0dfc7fd24670cb65269074666490f848ba5eb696c6216159df3389ce
I0423 21:00:14.923741 1 recorder.go:75] Recording config/id with fingerprint=322a587684322e3367b19ebc11238c39b48ddb93c6b357b68b1c4b4f0ce2aa0e
I0423 21:00:14.923749 1 gather.go:177] gatherer "clusterconfig" function "version" took 55.047819ms to process 2 records
I0423 21:00:14.929551 1 base_controller.go:82] Caches are synced for ConfigController
I0423 21:00:14.929563 1 base_controller.go:119] Starting #1 worker of ConfigController controller ...
I0423 21:00:14.934870 1 configmapobserver.go:84] configmaps "insights-config" not found
I0423 21:00:14.943163 1 tasks_processing.go:74] worker 61 stopped.
I0423 21:00:14.943175 1 gather.go:177] gatherer "clusterconfig" function "number_of_pods_and_netnamespaces_with_sdn_annotations" took 74.838642ms to process 0 records
I0423 21:00:14.994891 1 tasks_processing.go:74] worker 30 stopped.
I0423 21:00:14.994920 1 recorder.go:75] Recording aggregated/unused_machine_configs_count with fingerprint=4bfc9fa984e5dfcd45848faaf05269de7619bf42edf9f781751af5ee05c1a499
I0423 21:00:14.994929 1 gather.go:177] gatherer "clusterconfig" function "machine_configs" took 125.13436ms to process 1 records
I0423 21:00:14.995869 1 base_controller.go:82] Caches are synced for LoggingSyncer
I0423 21:00:14.995881 1 base_controller.go:119] Starting #1 worker of LoggingSyncer controller ...
I0423 21:00:15.000451 1 requests.go:205] Asking for SCA certificate with "{"arch": ["x86_64"]}" payload
W0423 21:00:15.003558 1 sca.go:161] Failed to pull SCA certs from https://api.openshift.com/api/accounts_mgmt/v1/entitlement_certificates: unable to retrieve SCA certs data from https://api.openshift.com/api/accounts_mgmt/v1/entitlement_certificates: Post "https://api.openshift.com/api/accounts_mgmt/v1/entitlement_certificates": dial tcp: lookup api.openshift.com on 172.30.0.10:53: read udp 10.128.0.15:50883->172.30.0.10:53: read: connection refused
I0423 21:00:15.003571 1 controllerstatus.go:80] name=scaController healthy=true reason=NonHTTPError message=Failed to pull SCA certs from https://api.openshift.com/api/accounts_mgmt/v1/entitlement_certificates: unable to retrieve SCA certs data from https://api.openshift.com/api/accounts_mgmt/v1/entitlement_certificates: Post "https://api.openshift.com/api/accounts_mgmt/v1/entitlement_certificates": dial tcp: lookup api.openshift.com on 172.30.0.10:53: read udp 10.128.0.15:50883->172.30.0.10:53: read: connection refused
I0423 21:00:15.011431 1 tasks_processing.go:74] worker 45 stopped.
E0423 21:00:15.011445 1 gather.go:140] gatherer "clusterconfig" function "ingress_certificates" failed with the error: failed to get secret 'router-certs-default' in namespace 'openshift-ingress': secrets "router-certs-default" not found
E0423 21:00:15.011450 1 gather.go:140] gatherer "clusterconfig" function "ingress_certificates" failed with the error: failed to get secret '2ps35nrp0pk3bg30enkfd1917alf83qv-primary-cert-bundle-secret' in namespace 'openshift-ingress-operator': secrets "2ps35nrp0pk3bg30enkfd1917alf83qv-primary-cert-bundle-secret" not found
I0423 21:00:15.011497 1 recorder.go:75] Recording aggregated/ingress_controllers_certs with fingerprint=b5d460412fda13dc6f496730e0da71d609cd2e1bb4dd7fe93674c4bf342d743a
I0423 21:00:15.011521 1 gather.go:177] gatherer "clusterconfig" function "ingress_certificates" took 144.725411ms to process 1 records
W0423 21:00:15.910642 1 gather_dvo_metrics.go:210] Failed to read the DVO metrics. Trying again.
I0423 21:00:16.115450 1 gather_cluster_operator_pods_and_events.go:121] Found 20 pods with 24 containers
I0423 21:00:16.115462 1 gather_cluster_operator_pods_and_events.go:235] Maximum buffer size: 1048576 bytes
I0423 21:00:16.116057 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for dns container dns-default-lnzk6 pod in namespace openshift-dns (previous: false).
I0423 21:00:16.355054 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for dns-default-lnzk6 pod in namespace openshift-dns for failing operator dns (previous: false): "container \"dns\" in pod \"dns-default-lnzk6\" is waiting to start: ContainerCreating"
I0423 21:00:16.355067 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"dns\" in pod \"dns-default-lnzk6\" is waiting to start: ContainerCreating"
I0423 21:00:16.355082 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for kube-rbac-proxy container dns-default-lnzk6 pod in namespace openshift-dns (previous: false).
I0423 21:00:16.519370 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for dns-default-lnzk6 pod in namespace openshift-dns for failing operator kube-rbac-proxy (previous: false): "container \"kube-rbac-proxy\" in pod \"dns-default-lnzk6\" is waiting to start: ContainerCreating"
I0423 21:00:16.519385 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"kube-rbac-proxy\" in pod \"dns-default-lnzk6\" is waiting to start: ContainerCreating"
I0423 21:00:16.519427 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for dns container dns-default-pkpc8 pod in namespace openshift-dns (previous: false).
I0423 21:00:16.543246 1 gather_cluster_operators.go:184] Unable to get configs.samples.operator.openshift.io resource due to: configs.samples.operator.openshift.io "cluster" not found
I0423 21:00:16.749411 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for dns-default-pkpc8 pod in namespace openshift-dns for failing operator dns (previous: false): "container \"dns\" in pod \"dns-default-pkpc8\" is waiting to start: ContainerCreating"
I0423 21:00:16.749424 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"dns\" in pod \"dns-default-pkpc8\" is waiting to start: ContainerCreating"
I0423 21:00:16.749432 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for kube-rbac-proxy container dns-default-pkpc8 pod in namespace openshift-dns (previous: false).
W0423 21:00:16.910294 1 gather_dvo_metrics.go:210] Failed to read the DVO metrics. Trying again.
I0423 21:00:16.919522 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for dns-default-pkpc8 pod in namespace openshift-dns for failing operator kube-rbac-proxy (previous: false): "container \"kube-rbac-proxy\" in pod \"dns-default-pkpc8\" is waiting to start: ContainerCreating"
I0423 21:00:16.919534 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"kube-rbac-proxy\" in pod \"dns-default-pkpc8\" is waiting to start: ContainerCreating"
I0423 21:00:16.919560 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for dns container dns-default-vhx5f pod in namespace openshift-dns (previous: false).
I0423 21:00:17.131064 1 tasks_processing.go:74] worker 53 stopped.
I0423 21:00:17.131108 1 recorder.go:75] Recording config/clusteroperator/console with fingerprint=79c82215145fa9a731f893b29541ed85ae6edbf3df2eeda1d170b22fd7f53c7c
I0423 21:00:17.131141 1 recorder.go:75] Recording config/clusteroperator/csi-snapshot-controller with fingerprint=d29e9bb40154e4650537b168e8295e45d39a9f5683ea36fe747281d579980ee3
I0423 21:00:17.131166 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/csisnapshotcontroller/cluster with fingerprint=5adc514f4b63e2f1ecc68bf6f9c0af70c5eea04522a49524e102721b1c41f80e
I0423 21:00:17.131193 1 recorder.go:75] Recording config/clusteroperator/dns with fingerprint=09a4c4152974d08ae008a132103ea695c393c6eb1e656f95c2ed2ec1ca49a372
I0423 21:00:17.131209 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/dns/default with fingerprint=9e7b4ce029030d3d8c3b49af92c556acdcc415000b40d3f969dbdc42c432b47f
I0423 21:00:17.131229 1 recorder.go:75] Recording config/clusteroperator/image-registry with fingerprint=6d57be92250aa5b3db6cb2677c7f7aff0096bfb8171401db1b5e248b5a1e95a8
I0423 21:00:17.131258 1 recorder.go:75] Recording config/clusteroperator/ingress with fingerprint=d4bb773e3f13f86b0bc760384b473c507a3b5fb6bbf2c317916b0884b53d9970
I0423 21:00:17.131281 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/ingresscontroller/openshift-ingress-operator/default with fingerprint=3c26264bba33910d98f57785a9364cd33ddaa124464f7a9b129b41c1a90f1f94
I0423 21:00:17.131311 1 recorder.go:75] Recording config/clusteroperator/insights with fingerprint=a51fedbdbf116ffc2ce0b01f7534a5bdfde4e2a041c7022c282422efb8169bad
I0423 21:00:17.131320 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/insightsoperator/cluster with fingerprint=e5ff11d57817f84a678f6fa9565af55bd1120227c16a21933637ab62675a6d70
I0423 21:00:17.131337 1 recorder.go:75] Recording config/clusteroperator/kube-apiserver with fingerprint=7cd5bbbbe78c64a15e768b78ec9106f3ea910177c8fe28541e56d5d4f3e77bd3
I0423 21:00:17.131347 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/kubeapiserver/cluster with fingerprint=51503bf0b784fcf65ea46bcaf1f72ac1a5c4d5dc211934f18f27871efed05762
I0423 21:00:17.131366 1 recorder.go:75] Recording config/clusteroperator/kube-controller-manager with fingerprint=317cbd307aed7d20d288f5b63765e5a2a475b314404bdb1699ac14172187df19
I0423 21:00:17.131375 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/kubecontrollermanager/cluster with fingerprint=ce90c0d4f367d7da085074268031798382ae7c54fdcb0a21f15a4818fe308c11
I0423 21:00:17.131406 1 recorder.go:75] Recording config/clusteroperator/kube-scheduler with fingerprint=c0f48cda579761cc0ad4d332da1a3cf1373dfc111b4705f3334b97c68796d817
I0423 21:00:17.131416 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/kubescheduler/cluster with fingerprint=f2940fb9fd20c19951dfc295eb363b7fba0c505f5ae61f01967a063099e6b60a
I0423 21:00:17.131434 1 recorder.go:75] Recording config/clusteroperator/kube-storage-version-migrator with fingerprint=c48132fdd0e3f88185eeedf79ff26cd9aa4732bdca6ec5e708dcc62ad3de9f39
I0423 21:00:17.131444 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/kubestorageversionmigrator/cluster with fingerprint=9351181aa7e6ada41ef581ab31e13516c6b934cc95710154bafb2eb222cb58db
I0423 21:00:17.131458 1 recorder.go:75] Recording config/clusteroperator/monitoring with fingerprint=fe5d4737a4fa09943bf2d8c5b7155dfcc6c1305aebbcb5bc8afbf908df5cbe61
I0423 21:00:17.131580 1 recorder.go:75] Recording config/clusteroperator/network with fingerprint=ef7ab03c707819655b7ded5fa5998426e6f3fc17e0c015173a6875439febd2d0
I0423 21:00:17.131590 1 recorder.go:75] Recording config/clusteroperator/network.operator.openshift.io/operatorpki/openshift-ovn-kubernetes/ovn with fingerprint=626a89d20e0deaed5b6dfb533acfe65f4bb1618bd200a703b62e60c5d16d94ab
I0423 21:00:17.131596 1 recorder.go:75] Recording config/clusteroperator/network.operator.openshift.io/operatorpki/openshift-ovn-kubernetes/signer with fingerprint=90410b16914712b85b3c4578716ad8c0ae072e688f4cd1e022bf76f20da3506d
I0423 21:00:17.131618 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/network/cluster with fingerprint=a93d15eaecb455a0e40ecb2826eeecc1533899204ddd3c3921d15ab70af7ae75
I0423 21:00:17.131637 1 recorder.go:75] Recording config/clusteroperator/node-tuning with fingerprint=36e4e6aed98f5290381a0b493c32fae8be9fb4c3dec46f5f69aea3c02ac7d243
I0423 21:00:17.131659 1 recorder.go:75] Recording config/clusteroperator/openshift-apiserver with fingerprint=b6bbf78d08c719a2572c696ab2abbef9a2f5b0dfe2efa5569646836d0118cb93
I0423 21:00:17.131668 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/openshiftapiserver/cluster with fingerprint=e712e6cf27339b441e4ed1f4cde91dbde7e952698ba93407e4457db63a4a4c76
I0423 21:00:17.131682 1 recorder.go:75] Recording config/clusteroperator/openshift-controller-manager with fingerprint=edc58b7fe169543f15349faf3563a2f45df212fa4c73f4158cfb2f33ffc0931c
I0423 21:00:17.131693 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/openshiftcontrollermanager/cluster with fingerprint=d71a0f4672f9b45d9fc8293bf1687afc650fd28d32e2e30de27523fe7b4eadf7
I0423 21:00:17.131705 1 recorder.go:75] Recording config/clusteroperator/openshift-samples with fingerprint=5159c8642e5bc93184b441fef97910f2cdcc8364c9c3ef125db5242ea4c3c01f
I0423 21:00:17.131719 1 recorder.go:75] Recording config/clusteroperator/operator-lifecycle-manager with fingerprint=c13275093a48e6cb3ec7b107f2ce128fec3eea0ae2e05389ac30039f8b466242
I0423 21:00:17.131735 1 recorder.go:75] Recording config/clusteroperator/operator-lifecycle-manager-catalog with fingerprint=e4a59e4c1d0a82335d1aa482fd263acbef4b257634b80495e9538e6a9055a48a
I0423 21:00:17.131749 1 recorder.go:75] Recording config/clusteroperator/operator-lifecycle-manager-packageserver with fingerprint=408c791db32027c66220cc206df30430dff5742ebc5a6aa448bba8a312c5916a
I0423 21:00:17.131767 1 recorder.go:75] Recording config/clusteroperator/service-ca with fingerprint=ee7a0929804920169b503569310d36463f8159f1b98d263a076f4738d26e8ce2
I0423 21:00:17.131776 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/serviceca/cluster with fingerprint=812f7edc2cdb30e61e7f2b29454357a40b1a507a4b0c2b7729193b67f0e3b4aa
I0423 21:00:17.131801 1 recorder.go:75] Recording config/clusteroperator/storage with fingerprint=aabbe73285f2e2877973a515ceb669de658e5f3c780933276dc62148cf813170
I0423 21:00:17.131818 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/clustercsidriver/ebs.csi.aws.com with fingerprint=510064d6f6bcced87ab5bd2ddaff3d0edd7f93f4a4f7af2641f29fc53ffab21e
I0423 21:00:17.131826 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/storage/cluster with fingerprint=8e480f8c1ce1b39baac42d8ec780c57c2592929ae0c801b61ffad49ba13f33ad
I0423 21:00:17.131832 1 gather.go:177] gatherer "clusterconfig" function "operators" took 2.264483639s to process 37 records
I0423 21:00:17.143661 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for dns-default-vhx5f pod in namespace openshift-dns for failing operator dns (previous: false): "container \"dns\" in pod \"dns-default-vhx5f\" is waiting to start: ContainerCreating"
I0423 21:00:17.143673 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"dns\" in pod \"dns-default-vhx5f\" is waiting to start: ContainerCreating"
I0423 21:00:17.143680 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for kube-rbac-proxy container dns-default-vhx5f pod in namespace openshift-dns (previous: false).
I0423 21:00:17.319799 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for dns-default-vhx5f pod in namespace openshift-dns for failing operator kube-rbac-proxy (previous: false): "container \"kube-rbac-proxy\" in pod \"dns-default-vhx5f\" is waiting to start: ContainerCreating"
I0423 21:00:17.319811 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"kube-rbac-proxy\" in pod \"dns-default-vhx5f\" is waiting to start: ContainerCreating"
I0423 21:00:17.319820 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for dns-node-resolver container node-resolver-9msbw pod in namespace openshift-dns (previous: false).
I0423 21:00:17.519868 1 gather_cluster_operator_pods_and_events.go:280] Error: "log buffer is empty"
I0423 21:00:17.519881 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for dns-node-resolver container node-resolver-k4clz pod in namespace openshift-dns (previous: false).
I0423 21:00:17.719789 1 gather_cluster_operator_pods_and_events.go:280] Error: "log buffer is empty"
I0423 21:00:17.719801 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for dns-node-resolver container node-resolver-xrgvj pod in namespace openshift-dns (previous: false).
W0423 21:00:17.910135 1 gather_dvo_metrics.go:210] Failed to read the DVO metrics. Trying again.
I0423 21:00:17.919970 1 gather_cluster_operator_pods_and_events.go:280] Error: "log buffer is empty"
I0423 21:00:17.920012 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for registry container image-registry-765fb64d45-km6pw pod in namespace openshift-image-registry (previous: false).
I0423 21:00:18.120190 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for image-registry-765fb64d45-km6pw pod in namespace openshift-image-registry for failing operator registry (previous: false): "container \"registry\" in pod \"image-registry-765fb64d45-km6pw\" is waiting to start: ContainerCreating"
I0423 21:00:18.120202 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"registry\" in pod \"image-registry-765fb64d45-km6pw\" is waiting to start: ContainerCreating"
I0423 21:00:18.120230 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for registry container image-registry-765fb64d45-rj9pb pod in namespace openshift-image-registry (previous: false).
I0423 21:00:18.320018 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for image-registry-765fb64d45-rj9pb pod in namespace openshift-image-registry for failing operator registry (previous: false): "container \"registry\" in pod \"image-registry-765fb64d45-rj9pb\" is waiting to start: ContainerCreating"
I0423 21:00:18.320030 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"registry\" in pod \"image-registry-765fb64d45-rj9pb\" is waiting to start: ContainerCreating"
I0423 21:00:18.320064 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for registry container image-registry-7986cd97f8-t2ngh pod in namespace openshift-image-registry (previous: false).
I0423 21:00:18.519489 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for image-registry-7986cd97f8-t2ngh pod in namespace openshift-image-registry for failing operator registry (previous: false): "container \"registry\" in pod \"image-registry-7986cd97f8-t2ngh\" is waiting to start: ContainerCreating"
I0423 21:00:18.519500 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"registry\" in pod \"image-registry-7986cd97f8-t2ngh\" is waiting to start: ContainerCreating"
I0423 21:00:18.519508 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for node-ca container node-ca-jdzfm pod in namespace openshift-image-registry (previous: false).
I0423 21:00:18.719645 1 gather_cluster_operator_pods_and_events.go:280] Error: "log buffer is empty"
I0423 21:00:18.719658 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for node-ca container node-ca-skc98 pod in namespace openshift-image-registry (previous: false).
W0423 21:00:18.910973 1 gather_dvo_metrics.go:210] Failed to read the DVO metrics. Trying again.
I0423 21:00:18.919662 1 gather_cluster_operator_pods_and_events.go:280] Error: "log buffer is empty"
I0423 21:00:18.919675 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for node-ca container node-ca-xt6kr pod in namespace openshift-image-registry (previous: false).
I0423 21:00:19.120257 1 gather_cluster_operator_pods_and_events.go:280] Error: "log buffer is empty"
I0423 21:00:19.120269 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for router container router-default-6bdcb487b7-n5mc2 pod in namespace openshift-ingress (previous: false).
I0423 21:00:19.320809 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for router-default-6bdcb487b7-n5mc2 pod in namespace openshift-ingress for failing operator router (previous: false): "container \"router\" in pod \"router-default-6bdcb487b7-n5mc2\" is waiting to start: ContainerCreating"
I0423 21:00:19.320822 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"router\" in pod \"router-default-6bdcb487b7-n5mc2\" is waiting to start: ContainerCreating"
I0423 21:00:19.320831 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for router container router-default-7567b576b5-cgbx7 pod in namespace openshift-ingress (previous: false).
I0423 21:00:19.520535 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for router-default-7567b576b5-cgbx7 pod in namespace openshift-ingress for failing operator router (previous: false): "container \"router\" in pod \"router-default-7567b576b5-cgbx7\" is waiting to start: ContainerCreating"
I0423 21:00:19.520550 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"router\" in pod \"router-default-7567b576b5-cgbx7\" is waiting to start: ContainerCreating"
I0423 21:00:19.520562 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for router container router-default-7567b576b5-kvzp2 pod in namespace openshift-ingress (previous: false).
I0423 21:00:19.719500 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for router-default-7567b576b5-kvzp2 pod in namespace openshift-ingress for failing operator router (previous: false): "container \"router\" in pod \"router-default-7567b576b5-kvzp2\" is waiting to start: ContainerCreating"
I0423 21:00:19.719512 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"router\" in pod \"router-default-7567b576b5-kvzp2\" is waiting to start: ContainerCreating"
I0423 21:00:19.719536 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for serve-healthcheck-canary container ingress-canary-cj5gl pod in namespace openshift-ingress-canary (previous: false).
W0423 21:00:19.910976 1 gather_dvo_metrics.go:210] Failed to read the DVO metrics. Trying again.
W0423 21:00:19.910997 1 gather_dvo_metrics.go:117] Unable to read metrics from endpoint "http://deployment-validation-operator-metrics.openshift-deployment-validation-operator.svc:8383": DVO metrics service was not available within the 5s timeout: context deadline exceeded
I0423 21:00:19.911011 1 tasks_processing.go:74] worker 27 stopped.
E0423 21:00:19.911020 1 gather.go:140] gatherer "clusterconfig" function "dvo_metrics" failed with the error: DVO metrics service was not available within the 5s timeout: context deadline exceeded
I0423 21:00:19.911030 1 recorder.go:75] Recording config/dvo_metrics with fingerprint=e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
W0423 21:00:19.911045 1 gather.go:155] issue recording gatherer "clusterconfig" function "dvo_metrics" result "config/dvo_metrics" because of the warning: warning: the record with the same fingerprint "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855" was already recorded at path "config/configmaps/openshift-config/openshift-service-ca.crt/service-ca.crt", recording another one with a different path "config/dvo_metrics"
I0423 21:00:19.911056 1 gather.go:177] gatherer "clusterconfig" function "dvo_metrics" took 5.043414165s to process 1 records
I0423 21:00:19.921201 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for ingress-canary-cj5gl pod in namespace openshift-ingress-canary for failing operator serve-healthcheck-canary (previous: false): "container \"serve-healthcheck-canary\" in pod \"ingress-canary-cj5gl\" is waiting to start: ContainerCreating"
I0423 21:00:19.921214 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"serve-healthcheck-canary\" in pod \"ingress-canary-cj5gl\" is waiting to start: ContainerCreating"
I0423 21:00:19.921236 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for serve-healthcheck-canary container ingress-canary-v74lx pod in namespace openshift-ingress-canary (previous: false).
I0423 21:00:20.122528 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for ingress-canary-v74lx pod in namespace openshift-ingress-canary for failing operator serve-healthcheck-canary (previous: false): "container \"serve-healthcheck-canary\" in pod \"ingress-canary-v74lx\" is waiting to start: ContainerCreating"
I0423 21:00:20.122540 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"serve-healthcheck-canary\" in pod \"ingress-canary-v74lx\" is waiting to start: ContainerCreating"
I0423 21:00:20.122560 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for serve-healthcheck-canary container ingress-canary-xpqsg pod in namespace openshift-ingress-canary (previous: false).
I0423 21:00:20.320744 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for ingress-canary-xpqsg pod in namespace openshift-ingress-canary for failing operator serve-healthcheck-canary (previous: false): "container \"serve-healthcheck-canary\" in pod \"ingress-canary-xpqsg\" is waiting to start: ContainerCreating"
I0423 21:00:20.320757 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"serve-healthcheck-canary\" in pod \"ingress-canary-xpqsg\" is waiting to start: ContainerCreating"
I0423 21:00:20.320768 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for migrator container migrator-7d5f866c57-llxmz pod in namespace openshift-kube-storage-version-migrator (previous: false).
I0423 21:00:20.524512 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for graceful-termination container migrator-7d5f866c57-llxmz pod in namespace openshift-kube-storage-version-migrator (previous: false).
I0423 21:00:20.722012 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for kube-storage-version-migrator-operator container kube-storage-version-migrator-operator-74848b4cb9-h6sww pod in namespace openshift-kube-storage-version-migrator-operator (previous: false).
I0423 21:00:20.930185 1 tasks_processing.go:74] worker 10 stopped.
I0423 21:00:20.930279 1 recorder.go:75] Recording events/openshift-dns-operator with fingerprint=ee0477ab03ccaf5ba82bc692ce68ee7c9f148fb6d53bd7f136ec66b523f13b1b
I0423 21:00:20.930336 1 recorder.go:75] Recording events/openshift-dns with fingerprint=96e0768ad3676a33216d10fe8f2009156623515b7eca4cfca1ba1a38c294831e
I0423 21:00:20.930439 1 recorder.go:75] Recording events/openshift-image-registry with fingerprint=c38cd05de6e9c269c4a9a8119e1aed138f79eb023e26f6c053f9df6292298756
I0423 21:00:20.930468 1 recorder.go:75] Recording events/openshift-ingress-operator with fingerprint=e5801fbadb1d638b2844ee94aaeff372b74be905d608f9d5ef09d169149c8680
I0423 21:00:20.930512 1 recorder.go:75] Recording events/openshift-ingress with fingerprint=935d072ae76258135e3eae1080274001f42e32aa94a2c58e9d397deb3d29cb3e
I0423 21:00:20.930529 1 recorder.go:75] Recording events/openshift-ingress-canary with fingerprint=33f08eeb4a18bc7bb357798b66c5af73f3a0c0a9b23504b8760c3b4a70d54213
I0423 21:00:20.930543 1 recorder.go:75] Recording events/openshift-kube-storage-version-migrator with fingerprint=d92c65b48c8387c4323b6b33cde65cc19951c4e6b1ca6bdba79d4c420a9511e9
I0423 21:00:20.930591 1 recorder.go:75] Recording events/openshift-kube-storage-version-migrator-operator with fingerprint=971e12350d4638490c4391a2a350daea21ebe4ee76c5063ee5b151abc944a998
I0423 21:00:20.930717 1 recorder.go:75] Recording config/pod/openshift-dns/dns-default-lnzk6 with fingerprint=49e404696866421a5e5633582aee590e4bced77d190941f0cbc0982f83b56f1b
I0423 21:00:20.930797 1 recorder.go:75] Recording config/pod/openshift-dns/dns-default-pkpc8 with fingerprint=ae329882acd2dd3db2d11ec60edc2418ed7a717bc6934dbb5d9a79603fc06048
I0423 21:00:20.930865 1 recorder.go:75] Recording config/pod/openshift-dns/dns-default-vhx5f with fingerprint=b6266ea34798f7e65cdf2e1f9badaa0cf7e535245c185c0356cf509e7e9f50f7
I0423 21:00:20.930968 1 recorder.go:75] Recording config/pod/openshift-image-registry/image-registry-765fb64d45-km6pw with fingerprint=0929647a90eba91c1c5014731201b6f7399d17fa3c43cf8d875e3a8c6716eca9
I0423 21:00:20.931053 1 recorder.go:75] Recording config/pod/openshift-image-registry/image-registry-765fb64d45-rj9pb with fingerprint=8cc7a086345a77eff94c56f37890e5a2eb1e285dcc2162e7bb27480b8808b1cd
I0423 21:00:20.931135 1 recorder.go:75] Recording config/pod/openshift-image-registry/image-registry-7986cd97f8-t2ngh with fingerprint=325a89f0d7410962033925b602898d4d3bf200ebf71da629a89eb8c6a66b268e
I0423 21:00:20.931189 1 recorder.go:75] Recording config/pod/openshift-ingress-canary/ingress-canary-cj5gl with fingerprint=e71c5dc00252b126e7da5172aa69f2651a9053fe7e13175543ebdb8c11c15f69
I0423 21:00:20.931244 1 recorder.go:75] Recording config/pod/openshift-ingress-canary/ingress-canary-v74lx with fingerprint=1d0cfc62c53be86fdf543d3dc3b1a781e0f7bbf4ae9d831f06042000223343c1
I0423 21:00:20.931298 1 recorder.go:75] Recording config/pod/openshift-ingress-canary/ingress-canary-xpqsg with fingerprint=9ac947bfc791fb4aab0453445cf4754ee1e05b1939152926ccb1a07dd5e70f88
I0423 21:00:20.931313 1 recorder.go:75] Recording config/pod/openshift-kube-storage-version-migrator/logs/migrator-7d5f866c57-llxmz/migrator_current.log with fingerprint=22b51ebf3479880edab657c7f908539604b55b957c06ef8a50824620a576cd63
I0423 21:00:20.931333 1 recorder.go:75] Recording config/pod/openshift-kube-storage-version-migrator/logs/migrator-7d5f866c57-llxmz/graceful-termination_current.log with fingerprint=1fda2c4626529b26e971d7ca9f3241d42a0b1dfa372187d309c2991f686f0ed5
I0423 21:00:20.931437 1 recorder.go:75] Recording config/pod/openshift-kube-storage-version-migrator-operator/logs/kube-storage-version-migrator-operator-74848b4cb9-h6sww/kube-storage-version-migrator-operator_current.log with fingerprint=807525fc3a35cd2ee0b7666f18a4fe0a1f6dfc3e030dc20593c81433ee028486
I0423 21:00:20.931447 1 gather.go:177] gatherer "clusterconfig" function "operators_pods_and_events" took 6.061560517s to process 20 records
I0423 21:00:24.975240 1 configmapobserver.go:84] configmaps "insights-config" not found
I0423 21:00:27.602670 1 tasks_processing.go:74] worker 12 stopped.
I0423 21:00:27.602711 1 recorder.go:75] Recording config/installplans with fingerprint=f17dbfacc3bfddf27ca3b213b39495434cd4c4e9e3dbd69566ffb3845bbcf539
I0423 21:00:27.602723 1 gather.go:177] gatherer "clusterconfig" function "install_plans" took 12.733910971s to process 1 records
I0423 21:00:28.276502 1 tasks_processing.go:74] worker 47 stopped.
I0423 21:00:28.276755 1 recorder.go:75] Recording config/serviceaccounts with fingerprint=0088e0a32371311f7e1870b984c632b0a0bea653dc12cdf0ea7947cd6c4546f3
I0423 21:00:28.276770 1 gather.go:177] gatherer "clusterconfig" function "service_accounts" took 13.408459874s to process 1 records
E0423 21:00:28.276824 1 periodic.go:247] "Unhandled Error" err="clusterconfig failed after 13.412s with: function \"machine_healthchecks\" failed with an error, function \"machines\" failed with an error, function \"pod_network_connectivity_checks\" failed with an error, function \"support_secret\" failed with an error, function \"config_maps\" failed with an error, function \"ingress_certificates\" failed with an error, function \"dvo_metrics\" failed with an error"
I0423 21:00:28.277931 1 controllerstatus.go:89] name=periodic-clusterconfig healthy=false reason=PeriodicGatherFailed message=Source clusterconfig could not be retrieved: function "machine_healthchecks" failed with an error, function "machines" failed with an error, function "pod_network_connectivity_checks" failed with an error, function "support_secret" failed with an error, function "config_maps" failed with an error, function "ingress_certificates" failed with an error, function "dvo_metrics" failed with an error
I0423 21:00:28.277945 1 periodic.go:209] Running workloads gatherer
I0423 21:00:28.277959 1 tasks_processing.go:45] number of workers: 2
I0423 21:00:28.277965 1 tasks_processing.go:69] worker 1 listening for tasks.
I0423 21:00:28.277969 1 tasks_processing.go:71] worker 1 working on workload_info task.
I0423 21:00:28.277981 1 tasks_processing.go:69] worker 0 listening for tasks.
I0423 21:00:28.277995 1 tasks_processing.go:71] worker 0 working on helmchart_info task.
I0423 21:00:28.308536 1 gather_workloads_info.go:278] Loaded pods in 0s, will wait 22s for image data
I0423 21:00:28.317730 1 tasks_processing.go:74] worker 0 stopped.
I0423 21:00:28.317745 1 gather.go:177] gatherer "workloads" function "helmchart_info" took 39.722251ms to process 0 records
I0423 21:00:28.321713 1 gather_workloads_info.go:387] No image sha256:04c87c054a3f366a7dfbe0a93ebb0c80a098ee16842c5794b67c1202eec61996 (14ms)
I0423 21:00:28.338687 1 gather_workloads_info.go:387] No image sha256:2904a78e2eb73fd6a9bb94c105c2a056831fb4113fbb7b0607c50adc9d879c9b (17ms)
I0423 21:00:28.355294 1 gather_workloads_info.go:387] No image sha256:765f0d23b637f685f98a31bd47c131b03cf72a40761a3f9a9d6320faa3c33733 (17ms)
I0423 21:00:28.369745 1 gather_workloads_info.go:387] No image sha256:5f0b67cfbbc381243fb91ccc17345b56d05f4d717c667e8c644e5bf05633ba71 (14ms)
I0423 21:00:28.381281 1 gather_workloads_info.go:387] No image sha256:bffebf689df67af02a06b97d3d9bfcfdccd597350d132171ee68f3d0ed29a3f6 (12ms)
I0423 21:00:28.392070 1 gather_workloads_info.go:387] No image sha256:a0105d1eb62cf6ac9e5e2ef28d3e89bf6dc514bc594fc7090fe5a5ee18a09c87 (11ms)
I0423 21:00:28.402719 1 gather_workloads_info.go:387] No image sha256:a498046d64605bcccee2440aa4f04a4602baaae263cf01d977ec5208e876b1fd (11ms)
I0423 21:00:28.413023 1 gather_workloads_info.go:387] No image sha256:36b9e89c3cfcf1ab9ae500486e38afb6862cba48cb0b4d84a09508ab8f3d299f (10ms)
I0423 21:00:28.422820 1 gather_workloads_info.go:387] No image sha256:7adc1eab05d6724c76ba751f6df816b08d6e70b78dee9eb94fa6fd9690542c98 (10ms)
I0423 21:00:28.431622 1 gather_workloads_info.go:387] No image sha256:4556896f77307821531ef91b7b7faccb82b824ea695693b2989f597f0deca038 (9ms)
I0423 21:00:28.441139 1 gather_workloads_info.go:387] No image sha256:289816958633a763a72dbc44e1dad40466223164e7e253039514f0d974ea5d21 (10ms)
I0423 21:00:28.527746 1 gather_workloads_info.go:387] No image sha256:2e57e192c3c1240fd935dcd55c8fde5e70e78bf81d6176c96edf21fafe59f8ba (87ms)
I0423 21:00:28.618742 1 gather_workloads_info.go:387] No image sha256:ce98d5d844bfc2ba8de1893866ad38166c95157d54abd8192b181e819bc50bb5 (91ms)
I0423 21:00:28.719262 1 gather_workloads_info.go:387] No image sha256:03cf4cd7ef1518610c6c7b3ad27d1622d82e98e3dc6e3f8e5d0fceb5c8d3786e (101ms)
I0423 21:00:28.818934 1 gather_workloads_info.go:387] No image sha256:2e564f336c77116053f34d4201d364d8da04e789cfffa0ea422574c95f2d6404 (100ms)
I0423 21:00:28.918566 1 gather_workloads_info.go:387] No image sha256:943018739e3db1763c3184b460dbc409e058abbac76d57b9927faad317be85e4 (100ms)
I0423 21:00:29.018880 1 gather_workloads_info.go:387] No image sha256:56a85660a445eced5c79a595a0eccf590087c5672d50f49d4c25ad52f9a44f04 (100ms)
I0423 21:00:29.099160 1 configmapobserver.go:84] configmaps "insights-config" not found
I0423 21:00:29.117892 1 gather_workloads_info.go:387] No image sha256:47154813651033d59751fb655a384dbffb64dd26f10bd7f3be0c3128d0486356 (99ms)
I0423 21:00:29.218420 1 gather_workloads_info.go:387] No image sha256:521712486e2c6e3c020dad6a1cb340db8e55665b69f7c208fab9cd9e965fd588 (101ms)
I0423 21:00:29.303527 1 configmapobserver.go:84] configmaps "insights-config" not found
I0423 21:00:29.319755 1 gather_workloads_info.go:387] No image sha256:a56211d075aa43cbb491f669a5b2e46ee023dc95b7d51dbac28f463948c5ad61 (101ms)
I0423 21:00:29.420587 1 gather_workloads_info.go:387] No image sha256:1a2532940843248c57d52141185dd71fbc393ab28b65d48f682038632c1dbbad (101ms)
I0423 21:00:29.518901 1 gather_workloads_info.go:387] No image sha256:a258c226562adb14e3a163a1940938526ee6a0928982a7667d85d9a7334ce639 (98ms)
I0423 21:00:29.618420 1 gather_workloads_info.go:387] No image sha256:695cf2f0cc07683c2a3ce1eaf3e56fe18abc6e2bac716f7d9843f5d173b9df52 (100ms)
I0423 21:00:29.718377 1 gather_workloads_info.go:387] No image sha256:e84cb128d930bd1ab867cc89b7b7bf2b2c0e41105ab93b5381069945b3ee9c57 (100ms)
I0423 21:00:29.818769 1 gather_workloads_info.go:387] No image sha256:5a95c19d82767e0235b4edb4a0536482c816904897aae1dc3eb255cb52b87a9f (100ms)
I0423 21:00:29.921092 1 gather_workloads_info.go:387] No image sha256:ca1344cb64140188b7cae7bbc51fb751566c0b0c97d5e39b5850e628032c4a5e (102ms)
I0423 21:00:30.018100 1 gather_workloads_info.go:387] No image sha256:c15ca0c0ad60fe8757c2d5d1723fcdd7a1ed6c0251a90d22a7e6cae6811d01aa (97ms)
I0423 21:00:30.118108 1 gather_workloads_info.go:387] No image sha256:b3909bf664c77097f75b3768830863d642eed3815dab2bfb4415c771ca2d5007 (100ms)
I0423 21:00:30.218737 1 gather_workloads_info.go:387] No image sha256:91828234f107c068c8a4966d08370ae7b73e637651dbc6d92c18c4553402c22c (101ms)
I0423 21:00:30.317925 1 gather_workloads_info.go:387] No image sha256:0a99240166165eb5718e7516a43282fe32df9c7c5e809b31b58abe44e42ff94d (99ms)
I0423 21:00:30.418666 1 gather_workloads_info.go:387] No image sha256:7b31223098f08328f5ddea8e5b871dbbd5f5a61ec550e8956f66793c0c6031a9 (101ms)
I0423 21:00:30.518871 1 gather_workloads_info.go:387] No image sha256:c940ea87e7d133d75ba0002ef00c0806825eed3db8094cdb260d1bac18127733 (100ms)
I0423 21:00:30.618487 1 gather_workloads_info.go:387] No image sha256:25bda9d34e23ba6bc9df5cf7104bdb237269740a3a76ee025f88b34e68b7e2b5 (100ms)
I0423 21:00:30.720080 1 gather_workloads_info.go:387] No image sha256:653c666f842c13e0baae2e89a9b1efe0e2ef56f621ffb5b32005115d2a26ab8c (102ms)
I0423 21:00:30.819098 1 gather_workloads_info.go:387] No image sha256:2598489729a4b258e4ecda4a06f6875133f2a10ced5c5241f8a57a8a05418e36 (99ms)
I0423 21:00:30.819123 1 tasks_processing.go:74] worker 1 stopped.
E0423 21:00:30.819133 1 gather.go:140] gatherer "workloads" function "workload_info" failed with the error: no running pods found for the insights-runtime-extractor statefulset
I0423 21:00:30.819463 1 recorder.go:75] Recording config/workload_info with fingerprint=984251330e0b8b9a17c7cba0ec1921cb40e6591e17c799de9474533fac7abfd0
I0423 21:00:30.819477 1 gather.go:177] gatherer "workloads" function "workload_info" took 2.541148023s to process 1 records
E0423 21:00:30.819499 1 periodic.go:247] "Unhandled Error" err="workloads failed after 2.541s with: function \"workload_info\" failed with an error"
I0423 21:00:30.820598 1 controllerstatus.go:89] name=periodic-workloads healthy=false reason=PeriodicGatherFailed message=Source workloads could not be retrieved: function "workload_info" failed with an error
I0423 21:00:30.820611 1 periodic.go:209] Running conditional gatherer
I0423 21:00:30.827152 1 requests.go:294] Making HTTP GET request at: https://console.redhat.com/api/gathering/v2/4.20.8/gathering_rules
I0423 21:00:30.833483 1 conditional_gatherer.go:107] Get "https://console.redhat.com/api/gathering/v2/4.20.8/gathering_rules": dial tcp: lookup console.redhat.com on 172.30.0.10:53: read udp 10.128.0.15:57072->172.30.0.10:53: read: connection refused
E0423 21:00:30.833713 1 conditional_gatherer.go:322] unable to update alerts cache: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory
I0423 21:00:30.833767 1 conditional_gatherer.go:384] updating version cache for conditional gatherer
I0423 21:00:30.840657 1 conditional_gatherer.go:392] cluster version is '4.20.8'
E0423 21:00:30.840669 1 conditional_gatherer.go:209] error checking conditions for a gathering rule: alerts cache is missing
E0423 21:00:30.840674 1 conditional_gatherer.go:209] error checking conditions for a gathering rule: alerts cache is missing
E0423 21:00:30.840677 1 conditional_gatherer.go:209] error checking conditions for a gathering rule: alerts cache is missing
E0423 21:00:30.840680 1 conditional_gatherer.go:209] error checking conditions for a gathering rule: alerts cache is missing
E0423 21:00:30.840683 1 conditional_gatherer.go:209] error checking conditions for a gathering rule: alerts cache is missing
E0423 21:00:30.840686 1 conditional_gatherer.go:209] error checking conditions for a gathering rule: alerts cache is missing
E0423 21:00:30.840688 1 conditional_gatherer.go:209] error checking conditions for a gathering rule: alerts cache is missing
E0423 21:00:30.840691 1 conditional_gatherer.go:209] error checking conditions for a gathering rule: alerts cache is missing
E0423 21:00:30.840694 1 conditional_gatherer.go:209] error checking conditions for a gathering rule: alerts cache is missing
I0423 21:00:30.840709 1 tasks_processing.go:45] number of workers: 3
I0423 21:00:30.840720 1 tasks_processing.go:69] worker 2 listening for tasks.
I0423 21:00:30.840723 1 tasks_processing.go:71] worker 2 working on conditional_gatherer_rules task.
I0423 21:00:30.840730 1 tasks_processing.go:69] worker 0 listening for tasks.
I0423 21:00:30.840741 1 tasks_processing.go:71] worker 0 working on remote_configuration task.
I0423 21:00:30.840741 1 tasks_processing.go:69] worker 1 listening for tasks.
I0423 21:00:30.840746 1 tasks_processing.go:71] worker 2 working on rapid_container_logs task.
I0423 21:00:30.840749 1 tasks_processing.go:74] worker 1 stopped.
I0423 21:00:30.840797 1 recorder.go:75] Recording insights-operator/conditional-gatherer-rules with fingerprint=7034af97e7e41c22e4b775abdd4b9066c8ebb19da33eb7f69f39bfd2eb5f6406
I0423 21:00:30.840808 1 gather.go:177] gatherer "conditional" function "conditional_gatherer_rules" took 1.255µs to process 1 records
I0423 21:00:30.840838 1 recorder.go:75] Recording insights-operator/remote-configuration with fingerprint=0394430c431eec4d48bb1811a90918e95161d2282c59af26f2473613cc0959db
I0423 21:00:30.840846 1 gather.go:177] gatherer "conditional" function "remote_configuration" took 1.197µs to process 1 records
I0423 21:00:30.840851 1 tasks_processing.go:74] worker 0 stopped.
I0423 21:00:30.840982 1 tasks_processing.go:74] worker 2 stopped.
I0423 21:00:30.840995 1 gather.go:177] gatherer "conditional" function "rapid_container_logs" took 229.184µs to process 0 records
I0423 21:00:30.841027 1 controllerstatus.go:89] name=periodic-conditional healthy=false reason=NotAvailable message=Get "https://console.redhat.com/api/gathering/v2/4.20.8/gathering_rules": dial tcp: lookup console.redhat.com on 172.30.0.10:53: read udp 10.128.0.15:57072->172.30.0.10:53: read: connection refused
I0423 21:00:30.841044 1 recorder.go:75] Recording insights-operator/remote-configuration.json with fingerprint=359de9c990c741675cec72fda96b5c3682221efdb4799f5eaa6e9805bcd3b5c1
I0423 21:00:30.862814 1 recorder.go:75] Recording insights-operator/gathers with fingerprint=a96f373fa8d10b9dca205b060fb56866ce67eb479e5ca45dcd868ccdddd73aa7
I0423 21:00:30.862929 1 diskrecorder.go:70] Writing 121 records to /var/lib/insights-operator/insights-2026-04-23-210030.tar.gz
I0423 21:00:30.870283 1 diskrecorder.go:51] Wrote 121 records to disk in 7ms
I0423 21:00:30.870312 1 periodic.go:278] Gathering cluster info every 2h0m0s
I0423 21:00:30.870326 1 periodic.go:279] Configuration is dataReporting: interval: 2h0m0s, uploadEndpoint: https://console.redhat.com/api/ingress/v1/upload, storagePath: /var/lib/insights-operator, downloadEndpoint: https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/reports, conditionalGathererEndpoint: https://console.redhat.com/api/gathering/v2/%s/gathering_rules, obfuscation: [] sca: disabled: false, endpoint: https://api.openshift.com/api/accounts_mgmt/v1/entitlement_certificates, interval: 8h0m0s alerting: disabled: false clusterTransfer: endpoint: https://api.openshift.com/api/accounts_mgmt/v1/cluster_transfers/, interval: 12h0m0s proxy: httpProxy: , httpsProxy: , noProxy:
I0423 21:00:45.187989 1 configmapobserver.go:84] configmaps "insights-config" not found
I0423 21:01:33.818747 1 observer_polling.go:111] Observed file "/var/run/secrets/serving-cert/tls.crt" has been created (hash="efb16de14ca3c4639e80529e2597ccd533757f40c053ce7ef1abc6d4121c7da2")
W0423 21:01:33.818785 1 builder.go:160] Restart triggered because of file /var/run/secrets/serving-cert/tls.crt was created
I0423 21:01:33.818837 1 observer_polling.go:111] Observed file "/var/run/secrets/serving-cert/tls.key" has been created (hash="3f9b81db4ab4054edfa3079e3f80ff720a89a0031cfe8e5f31e0b4e91a3df3ef")
I0423 21:01:33.818861 1 genericapiserver.go:548] "[graceful-termination] shutdown event" name="ShutdownInitiated"
I0423 21:01:33.818893 1 genericapiserver.go:693] "[graceful-termination] pre-shutdown hooks completed" name="PreShutdownHooksStopped"
I0423 21:01:33.818921 1 simple_featuregate_reader.go:177] Shutting down feature-gate-detector
I0423 21:01:33.818928 1 base_controller.go:181] Shutting down ConfigController ...
I0423 21:01:33.818945 1 genericapiserver.go:651] "[graceful-termination] not going to wait for active watch request(s) to drain"
I0423 21:01:33.818948 1 observer_polling.go:111] Observed file "/var/run/configmaps/service-ca-bundle/service-ca.crt" has been created (hash="db6f2805d412f29030f79c4e9ef30f96731afb372690113c3131933a43a8867a")
I0423 21:01:33.818959 1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I0423 21:01:33.818953 1 periodic.go:170] Shutting down
I0423 21:01:33.818963 1 base_controller.go:181] Shutting down LoggingSyncer ...
I0423 21:01:33.818965 1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
I0423 21:01:33.818976 1 base_controller.go:123] Shutting down worker of ConfigController controller ...
I0423 21:01:33.818987 1 base_controller.go:113] All ConfigController workers have been terminated
I0423 21:01:33.819000 1 requestheader_controller.go:194] Shutting down RequestHeaderAuthRequestController