W0506 20:12:50.826223       1 cmd.go:257] Using insecure, self-signed certificates
I0506 20:12:51.971561       1 start.go:138] Unable to read service ca bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory
I0506 20:12:51.971872       1 observer_polling.go:159] Starting file observer
I0506 20:12:52.339220       1 operator.go:60] Starting insights-operator v0.0.0-master+$Format:%H$
I0506 20:12:52.339436       1 legacy_config.go:327] Current config: {"report":false,"storagePath":"/var/lib/insights-operator","interval":"2h","endpoint":"https://console.redhat.com/api/ingress/v1/upload","conditionalGathererEndpoint":"https://console.redhat.com/api/gathering/v2/%s/gathering_rules","pull_report":{"endpoint":"https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/reports","delay":"60s","timeout":"3000s","min_retry":"30s"},"impersonate":"system:serviceaccount:openshift-insights:gather","enableGlobalObfuscation":false,"ocm":{"scaEndpoint":"https://api.openshift.com/api/accounts_mgmt/v1/entitlement_certificates","scaInterval":"8h","scaDisabled":false,"clusterTransferEndpoint":"https://api.openshift.com/api/accounts_mgmt/v1/cluster_transfers/","clusterTransferInterval":"12h"},"disableInsightsAlerts":false,"processingStatusEndpoint":"https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/request/%s/status","reportEndpointTechPreview":"https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/request/%s/report"}
I0506 20:12:52.340013       1 secure_serving.go:57] Forcing use of http/1.1 only
W0506 20:12:52.340048       1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.
I0506 20:12:52.340047       1 simple_featuregate_reader.go:171] Starting feature-gate-detector
W0506 20:12:52.340057       1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.
W0506 20:12:52.340090       1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.
W0506 20:12:52.340095       1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.
W0506 20:12:52.340100       1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.
W0506 20:12:52.340103       1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.
I0506 20:12:52.345482       1 operator.go:125] FeatureGates initialized: knownFeatureGates=[AdditionalRoutingCapabilities AdminNetworkPolicy AlibabaPlatform AzureWorkloadIdentity BuildCSIVolumes CPMSMachineNamePrefix ConsolePluginContentSecurityPolicy ExternalOIDC ExternalOIDCWithUIDAndExtraClaimMappings GatewayAPI GatewayAPIController HighlyAvailableArbiter ImageVolume IngressControllerLBSubnetsAWS KMSv1 MachineConfigNodes ManagedBootImages ManagedBootImagesAWS MetricsCollectionProfiles NetworkDiagnosticsConfig NetworkLiveMigration NetworkSegmentation PinnedImages ProcMountType RouteAdvertisements RouteExternalCertificate ServiceAccountTokenNodeBinding SetEIPForNLBIngressController SigstoreImageVerification StoragePerformantSecurityPolicy UpgradeStatus UserNamespacesPodSecurityStandards UserNamespacesSupport VSphereMultiDisk VSphereMultiNetworks AWSClusterHostedDNS AWSClusterHostedDNSInstall AWSDedicatedHosts AWSServiceLBNetworkSecurityGroup AutomatedEtcdBackup AzureClusterHostedDNSInstall AzureDedicatedHosts AzureMultiDisk BootImageSkewEnforcement BootcNodeManagement ClusterAPIInstall ClusterAPIInstallIBMCloud ClusterMonitoringConfig ClusterVersionOperatorConfiguration DNSNameResolver DualReplica DyanmicServiceEndpointIBMCloud DynamicResourceAllocation EtcdBackendQuota EventedPLEG Example Example2 ExternalSnapshotMetadata GCPClusterHostedDNS GCPClusterHostedDNSInstall GCPCustomAPIEndpoints GCPCustomAPIEndpointsInstall ImageModeStatusReporting ImageStreamImportMode IngressControllerDynamicConfigurationManager InsightsConfig InsightsConfigAPI InsightsOnDemandDataGather IrreconcilableMachineConfig KMSEncryptionProvider MachineAPIMigration MachineAPIOperatorDisableMachineHealthCheckController ManagedBootImagesAzure ManagedBootImagesvSphere MaxUnavailableStatefulSet MinimumKubeletVersion MixedCPUsAllocation MultiArchInstallAzure MultiDiskSetup MutatingAdmissionPolicy NewOLM NewOLMCatalogdAPIV1Metas NewOLMOwnSingleNamespace NewOLMPreflightPermissionChecks NewOLMWebhookProviderOpenshiftServiceCA NoRegistryClusterOperations NodeSwap NutanixMultiSubnets OVNObservability OpenShiftPodSecurityAdmission PreconfiguredUDNAddresses SELinuxMount ShortCertRotation SignatureStores SigstoreImageVerificationPKI TranslateStreamCloseWebsocketRequests VSphereConfigurableMaxAllowedBlockVolumesPerNode VSphereHostVMGroupZonal VSphereMixedNodeEnv VolumeAttributesClass VolumeGroupSnapshot]
I0506 20:12:52.345492       1 event.go:377] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-insights", Name:"insights-operator", UID:"46af165a-342d-457b-a262-0dff1c928629", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'FeatureGatesInitialized' FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdditionalRoutingCapabilities", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BuildCSIVolumes", "CPMSMachineNamePrefix", "ConsolePluginContentSecurityPolicy", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GatewayAPI", "GatewayAPIController", "HighlyAvailableArbiter", "ImageVolume", "IngressControllerLBSubnetsAWS", "KMSv1", "MachineConfigNodes", "ManagedBootImages", "ManagedBootImagesAWS", "MetricsCollectionProfiles", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "PinnedImages", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SetEIPForNLBIngressController", "SigstoreImageVerification", "StoragePerformantSecurityPolicy", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiDisk", "VSphereMultiNetworks"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AWSClusterHostedDNSInstall", "AWSDedicatedHosts", "AWSServiceLBNetworkSecurityGroup", "AutomatedEtcdBackup", "AzureClusterHostedDNSInstall", "AzureDedicatedHosts", "AzureMultiDisk", "BootImageSkewEnforcement", "BootcNodeManagement", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ClusterVersionOperatorConfiguration", "DNSNameResolver", "DualReplica", "DyanmicServiceEndpointIBMCloud", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "Example2", "ExternalSnapshotMetadata", "GCPClusterHostedDNS", "GCPClusterHostedDNSInstall", "GCPCustomAPIEndpoints", "GCPCustomAPIEndpointsInstall", "ImageModeStatusReporting", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "IrreconcilableMachineConfig", "KMSEncryptionProvider", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "ManagedBootImagesAzure", "ManagedBootImagesvSphere", "MaxUnavailableStatefulSet", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "MultiDiskSetup", "MutatingAdmissionPolicy", "NewOLM", "NewOLMCatalogdAPIV1Metas", "NewOLMOwnSingleNamespace", "NewOLMPreflightPermissionChecks", "NewOLMWebhookProviderOpenshiftServiceCA", "NoRegistryClusterOperations", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PreconfiguredUDNAddresses", "SELinuxMount", "ShortCertRotation", "SignatureStores", "SigstoreImageVerificationPKI", "TranslateStreamCloseWebsocketRequests", "VSphereConfigurableMaxAllowedBlockVolumesPerNode", "VSphereHostVMGroupZonal", "VSphereMixedNodeEnv", "VolumeAttributesClass", "VolumeGroupSnapshot"}}
I0506 20:12:52.349217       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
I0506 20:12:52.349236       1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController
I0506 20:12:52.349230       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I0506 20:12:52.349244       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
I0506 20:12:52.349268       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0506 20:12:52.349273       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I0506 20:12:52.349561       1 dynamic_serving_content.go:135] "Starting controller" name="serving-cert::/tmp/serving-cert-606839391/tls.crt::/tmp/serving-cert-606839391/tls.key"
I0506 20:12:52.349749       1 secure_serving.go:213] Serving securely on [::]:8443
I0506 20:12:52.349776       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
W0506 20:12:52.358007       1 configmapobserver.go:64] Cannot get the configuration config map: configmaps "insights-config" not found. Default configuration is used.
I0506 20:12:52.358034       1 secretconfigobserver.go:216] Legacy configuration set: enabled=false endpoint=https://console.redhat.com/api/ingress/v1/upload conditional_gatherer_endpoint=https://console.redhat.com/api/gathering/v2/%s/gathering_rules interval=2h0m0s token=false reportEndpoint=https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/reports initialPollingDelay=1m0s minRetryTime=30s pollingTimeout=50m0s processingStatusEndpoint=https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/request/%s/status
I0506 20:12:52.358143       1 base_controller.go:76] Waiting for caches to sync for ConfigController
I0506 20:12:52.365589       1 secretconfigobserver.go:249] Found cloud.openshift.com token
I0506 20:12:52.365617       1 secretconfigobserver.go:204] Legacy configuration updated: enabled=true endpoint=https://console.redhat.com/api/ingress/v1/upload conditional_gatherer_endpoint=https://console.redhat.com/api/gathering/v2/%s/gathering_rules interval=2h0m0s token=true reportEndpoint=https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/reports initialPollingDelay=1m0s minRetryTime=30s pollingTimeout=50m0s processingStatusEndpoint=https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/request/%s/status
I0506 20:12:52.370991       1 secretconfigobserver.go:119] support secret does not exist
I0506 20:12:52.376195       1 secretconfigobserver.go:249] Found cloud.openshift.com token
I0506 20:12:52.380820       1 secretconfigobserver.go:119] support secret does not exist
I0506 20:12:52.384485       1 recorder.go:161] Pruning old reports every 4h11m9s, max age is 288h0m0s
I0506 20:12:52.391085       1 controllerstatus.go:80] name=insightsuploader healthy=true reason= message=
I0506 20:12:52.391098       1 periodic.go:209] Running clusterconfig gatherer
I0506 20:12:52.391105       1 insightsuploader.go:86] Reporting status periodically to https://console.redhat.com/api/ingress/v1/upload every 2h0m0s, starting in 1m30s
I0506 20:12:52.391151       1 controllerstatus.go:80] name=insightsreport healthy=true reason= message=
I0506 20:12:52.391169       1 insightsreport.go:296] Starting report retriever
I0506 20:12:52.391168       1 tasks_processing.go:45] number of workers: 64
I0506 20:12:52.391177       1 insightsreport.go:298] Insights analysis reports will be downloaded from the https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/reports endpoint with a delay of 1m0s
I0506 20:12:52.391191       1 tasks_processing.go:69] worker 2 listening for tasks.
I0506 20:12:52.391200       1 tasks_processing.go:69] worker 1 listening for tasks.
I0506 20:12:52.391199       1 tasks_processing.go:69] worker 8 listening for tasks.
I0506 20:12:52.391199       1 tasks_processing.go:69] worker 0 listening for tasks.
I0506 20:12:52.391210       1 tasks_processing.go:69] worker 23 listening for tasks.
I0506 20:12:52.391206       1 tasks_processing.go:69] worker 3 listening for tasks.
I0506 20:12:52.391216       1 tasks_processing.go:69] worker 14 listening for tasks.
I0506 20:12:52.391211       1 tasks_processing.go:69] worker 4 listening for tasks.
I0506 20:12:52.391211       1 tasks_processing.go:69] worker 13 listening for tasks.
I0506 20:12:52.391217       1 tasks_processing.go:69] worker 5 listening for tasks.
I0506 20:12:52.391225       1 tasks_processing.go:69] worker 6 listening for tasks.
I0506 20:12:52.391222       1 tasks_processing.go:69] worker 15 listening for tasks.
I0506 20:12:52.391219       1 tasks_processing.go:69] worker 9 listening for tasks.
I0506 20:12:52.391232       1 tasks_processing.go:69] worker 7 listening for tasks.
I0506 20:12:52.391232       1 tasks_processing.go:69] worker 31 listening for tasks.
I0506 20:12:52.391241       1 tasks_processing.go:69] worker 11 listening for tasks.
I0506 20:12:52.391238       1 tasks_processing.go:71] worker 7 working on machine_configs task.
I0506 20:12:52.391248       1 tasks_processing.go:69] worker 19 listening for tasks.
I0506 20:12:52.391249       1 tasks_processing.go:69] worker 53 listening for tasks.
I0506 20:12:52.391253       1 tasks_processing.go:69] worker 12 listening for tasks.
I0506 20:12:52.391257       1 tasks_processing.go:69] worker 20 listening for tasks.
I0506 20:12:52.391259       1 tasks_processing.go:69] worker 28 listening for tasks.
I0506 20:12:52.391262       1 tasks_processing.go:71] worker 20 working on networks task.
I0506 20:12:52.391263       1 tasks_processing.go:71] worker 12 working on sap_pods task.
I0506 20:12:52.391266       1 tasks_processing.go:69] worker 54 listening for tasks.
I0506 20:12:52.391267       1 tasks_processing.go:69] worker 26 listening for tasks.
I0506 20:12:52.391268       1 tasks_processing.go:69] worker 29 listening for tasks.
I0506 20:12:52.391273       1 tasks_processing.go:69] worker 30 listening for tasks.
I0506 20:12:52.391268       1 tasks_processing.go:69] worker 52 listening for tasks.
I0506 20:12:52.391281       1 tasks_processing.go:69] worker 60 listening for tasks.
I0506 20:12:52.391274       1 tasks_processing.go:69] worker 21 listening for tasks.
I0506 20:12:52.391286       1 tasks_processing.go:69] worker 61 listening for tasks.
I0506 20:12:52.391258       1 tasks_processing.go:71] worker 53 working on container_runtime_configs task.
I0506 20:12:52.391295       1 tasks_processing.go:71] worker 0 working on operators_pods_and_events task.
I0506 20:12:52.391297       1 tasks_processing.go:69] worker 62 listening for tasks.
I0506 20:12:52.391299       1 tasks_processing.go:69] worker 58 listening for tasks.
I0506 20:12:52.391303       1 tasks_processing.go:69] worker 63 listening for tasks.
I0506 20:12:52.391299       1 tasks_processing.go:69] worker 22 listening for tasks.
I0506 20:12:52.391308       1 tasks_processing.go:69] worker 59 listening for tasks.
I0506 20:12:52.391312       1 tasks_processing.go:71] worker 2 working on openstack_dataplanedeployments task.
I0506 20:12:52.391312       1 tasks_processing.go:69] worker 34 listening for tasks.
I0506 20:12:52.391302       1 tasks_processing.go:69] worker 33 listening for tasks.
I0506 20:12:52.391316       1 tasks_processing.go:71] worker 15 working on proxies task.
I0506 20:12:52.391316       1 tasks_processing.go:71] worker 6 working on openshift_logging task.
I0506 20:12:52.391328       1 tasks_processing.go:69] worker 43 listening for tasks.
I0506 20:12:52.391326       1 tasks_processing.go:71] worker 13 working on oauths task.
I0506 20:12:52.391334       1 tasks_processing.go:69] worker 44 listening for tasks.
I0506 20:12:52.391336       1 tasks_processing.go:69] worker 36 listening for tasks.
I0506 20:12:52.391326       1 tasks_processing.go:69] worker 35 listening for tasks.
I0506 20:12:52.391240       1 tasks_processing.go:71] worker 9 working on operators task.
I0506 20:12:52.391353       1 tasks_processing.go:71] worker 23 working on container_images task.
I0506 20:12:52.391231       1 tasks_processing.go:69] worker 17 listening for tasks.
I0506 20:12:52.391245       1 tasks_processing.go:69] worker 27 listening for tasks.
I0506 20:12:52.391246       1 tasks_processing.go:71] worker 31 working on jaegers task.
I0506 20:12:52.391356       1 tasks_processing.go:71] worker 14 working on ceph_cluster task.
I0506 20:12:52.391377       1 tasks_processing.go:69] worker 41 listening for tasks.
I0506 20:12:52.391383       1 tasks_processing.go:71] worker 41 working on lokistack task.
I0506 20:12:52.391384       1 tasks_processing.go:71] worker 27 working on mutating_webhook_configurations task.
I0506 20:12:52.391259       1 tasks_processing.go:71] worker 1 working on sap_datahubs task.
I0506 20:12:52.391404       1 tasks_processing.go:69] worker 42 listening for tasks.
I0506 20:12:52.391415       1 tasks_processing.go:69] worker 48 listening for tasks.
I0506 20:12:52.391423       1 tasks_processing.go:69] worker 45 listening for tasks.
I0506 20:12:52.391252       1 tasks_processing.go:71] worker 19 working on support_secret task.
I0506 20:12:52.391438       1 tasks_processing.go:69] worker 47 listening for tasks.
I0506 20:12:52.391446       1 tasks_processing.go:71] worker 5 working on ingress task.
I0506 20:12:52.391251       1 tasks_processing.go:69] worker 24 listening for tasks.
I0506 20:12:52.391260       1 tasks_processing.go:69] worker 25 listening for tasks.
I0506 20:12:52.391274       1 tasks_processing.go:69] worker 55 listening for tasks.
I0506 20:12:52.391279       1 tasks_processing.go:69] worker 56 listening for tasks.
I0506 20:12:52.391287       1 tasks_processing.go:71] worker 4 working on nodenetworkstates task.
I0506 20:12:52.391226       1 tasks_processing.go:69] worker 16 listening for tasks.
I0506 20:12:52.391579       1 tasks_processing.go:71] worker 55 working on machines task.
I0506 20:12:52.391291       1 tasks_processing.go:69] worker 57 listening for tasks.
I0506 20:12:52.391728       1 tasks_processing.go:71] worker 57 working on nodenetworkconfigurationpolicies task.
I0506 20:12:52.391290       1 tasks_processing.go:71] worker 8 working on dvo_metrics task.
I0506 20:12:52.391355       1 tasks_processing.go:69] worker 38 listening for tasks.
I0506 20:12:52.391227       1 tasks_processing.go:69] worker 10 listening for tasks.
I0506 20:12:52.391339       1 tasks_processing.go:71] worker 3 working on machine_autoscalers task.
I0506 20:12:52.391865       1 tasks_processing.go:71] worker 10 working on pod_network_connectivity_checks task.
I0506 20:12:52.391291       1 tasks_processing.go:69] worker 32 listening for tasks.
I0506 20:12:52.391881       1 tasks_processing.go:71] worker 32 working on image_registries task.
I0506 20:12:52.391346       1 tasks_processing.go:69] worker 37 listening for tasks.
I0506 20:12:52.391370       1 tasks_processing.go:69] worker 40 listening for tasks.
I0506 20:12:52.391432       1 tasks_processing.go:69] worker 46 listening for tasks.
I0506 20:12:52.391535       1 tasks_processing.go:71] worker 58 working on openstack_dataplanenodesets task.
I0506 20:12:52.391994       1 tasks_processing.go:71] worker 38 working on cost_management_metrics_configs task.
I0506 20:12:52.391362       1 tasks_processing.go:69] worker 39 listening for tasks.
I0506 20:12:52.391545       1 tasks_processing.go:71] worker 54 working on cluster_apiserver task.
I0506 20:12:52.391549       1 tasks_processing.go:71] worker 26 working on schedulers task.
I0506 20:12:52.392263       1 tasks_processing.go:71] worker 39 working on service_accounts task.
I0506 20:12:52.391252       1 tasks_processing.go:71] worker 11 working on feature_gates task.
I0506 20:12:52.392291       1 tasks_processing.go:71] worker 37 working on storage_classes task.
I0506 20:12:52.392295       1 tasks_processing.go:71] worker 40 working on openstack_version task.
I0506 20:12:52.392336       1 tasks_processing.go:71] worker 46 working on machine_sets task.
I0506 20:12:52.391552       1 tasks_processing.go:71] worker 29 working on sap_config task.
I0506 20:12:52.391540       1 tasks_processing.go:71] worker 28 working on active_alerts task.
W0506 20:12:52.392781       1 gather_active_alerts.go:54] Unable to load alerts client, no alerts will be collected: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory
I0506 20:12:52.392805       1 tasks_processing.go:71] worker 28 working on version task.
I0506 20:12:52.391557       1 tasks_processing.go:71] worker 48 working on crds task.
I0506 20:12:52.391555       1 tasks_processing.go:71] worker 30 working on clusterroles task.
I0506 20:12:52.392886       1 gather.go:177] gatherer "clusterconfig" function "active_alerts" took 44.21µs to process 0 records
I0506 20:12:52.391559       1 tasks_processing.go:71] worker 52 working on config_maps task.
I0506 20:12:52.391557       1 tasks_processing.go:71] worker 56 working on image_pruners task.
I0506 20:12:52.391561       1 tasks_processing.go:69] worker 50 listening for tasks.
I0506 20:12:52.391561       1 tasks_processing.go:71] worker 45 working on authentication task.
I0506 20:12:52.393266       1 tasks_processing.go:71] worker 50 working on install_plans task.
I0506 20:12:52.391564       1 tasks_processing.go:71] worker 60 working on nodes task.
I0506 20:12:52.391566       1 tasks_processing.go:71] worker 47 working on machine_config_pools task.
I0506 20:12:52.391568       1 tasks_processing.go:71] worker 21 working on certificate_signing_requests task.
I0506 20:12:52.391562       1 tasks_processing.go:71] worker 42 working on machine_healthchecks task.
I0506 20:12:52.391571       1 tasks_processing.go:71] worker 24 working on tsdb_status task.
I0506 20:12:52.391573       1 tasks_processing.go:71] worker 61 working on infrastructures task.
W0506 20:12:52.393495       1 gather_prometheus_tsdb_status.go:38] Unable to load metrics client, tsdb status cannot be collected: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory
I0506 20:12:52.393544       1 tasks_processing.go:71] worker 24 working on pdbs task.
I0506 20:12:52.391573       1 tasks_processing.go:71] worker 33 working on node_logs task.
I0506 20:12:52.393593       1 gather.go:177] gatherer "clusterconfig" function "tsdb_status" took 64.417µs to process 0 records
I0506 20:12:52.391575       1 tasks_processing.go:71] worker 25 working on aggregated_monitoring_cr_names task.
I0506 20:12:52.391577       1 tasks_processing.go:71] worker 63 working on silenced_alerts task.
W0506 20:12:52.393693       1 gather_silenced_alerts.go:38] Unable to load alerts client, no alerts will be collected: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory
I0506 20:12:52.393708       1 tasks_processing.go:71] worker 63 working on monitoring_persistent_volumes task.
I0506 20:12:52.391569       1 tasks_processing.go:69] worker 49 listening for tasks.
I0506 20:12:52.391582       1 tasks_processing.go:71] worker 36 working on metrics task.
I0506 20:12:52.393774       1 gather.go:177] gatherer "clusterconfig" function "silenced_alerts" took 29.181µs to process 0 records
W0506 20:12:52.393790       1 gather_most_recent_metrics.go:64] Unable to load metrics client, no metrics will be collected: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory
I0506 20:12:52.391577       1 tasks_processing.go:71] worker 62 working on openstack_controlplanes task.
I0506 20:12:52.393802       1 gather.go:177] gatherer "clusterconfig" function "metrics" took 23.344µs to process 0 records
I0506 20:12:52.391587       1 tasks_processing.go:71] worker 16 working on olm_operators task.
I0506 20:12:52.393884       1 tasks_processing.go:71] worker 49 working on overlapping_namespace_uids task.
I0506 20:12:52.391587       1 tasks_processing.go:71] worker 43 working on validating_webhook_configurations task.
I0506 20:12:52.391586       1 tasks_processing.go:71] worker 59 working on qemu_kubevirt_launcher_logs task.
I0506 20:12:52.391588       1 tasks_processing.go:71] worker 34 working on openshift_machine_api_events task.
I0506 20:12:52.391243       1 tasks_processing.go:69] worker 18 listening for tasks.
I0506 20:12:52.395695       1 tasks_processing.go:74] worker 18 stopped.
I0506 20:12:52.391593       1 tasks_processing.go:71] worker 44 working on storage_cluster task.
I0506 20:12:52.391593       1 tasks_processing.go:71] worker 35 working on number_of_pods_and_netnamespaces_with_sdn_annotations task.
I0506 20:12:52.391599       1 tasks_processing.go:71] worker 17 working on ingress_certificates task.
I0506 20:12:52.391601       1 tasks_processing.go:69] worker 51 listening for tasks.
I0506 20:12:52.391582       1 tasks_processing.go:71] worker 22 working on image task.
I0506 20:12:52.393879       1 tasks_processing.go:74] worker 36 stopped.
I0506 20:12:52.396449       1 tasks_processing.go:74] worker 51 stopped.
I0506 20:12:52.397147       1 tasks_processing.go:74] worker 53 stopped.
I0506 20:12:52.397161       1 gather.go:177] gatherer "clusterconfig" function "container_runtime_configs" took 5.845942ms to process 0 records
I0506 20:12:52.397452       1 tasks_processing.go:74] worker 2 stopped.
I0506 20:12:52.397464       1 gather.go:177] gatherer "clusterconfig" function "openstack_dataplanedeployments" took 6.128011ms to process 0 records
I0506 20:12:52.397472       1 gather.go:177] gatherer "clusterconfig" function "openshift_logging" took 6.115523ms to process 0 records
I0506 20:12:52.397477       1 tasks_processing.go:74] worker 6 stopped.
I0506 20:12:52.399187       1 tasks_processing.go:74] worker 31 stopped.
I0506 20:12:52.399197       1 gather.go:177] gatherer "clusterconfig" function "jaegers" took 7.811737ms to process 0 records
I0506 20:12:52.399203       1 gather.go:177] gatherer "clusterconfig" function "cost_management_metrics_configs" took 7.187037ms to process 0 records
I0506 20:12:52.399208       1 gather.go:177] gatherer "clusterconfig" function "machine_autoscalers" took 7.42699ms to process 0 records
I0506 20:12:52.399213       1 gather.go:177] gatherer "clusterconfig" function "sap_datahubs" took 7.79627ms to process 0 records
I0506 20:12:52.399214       1 tasks_processing.go:74] worker 38 stopped.
I0506 20:12:52.399220       1 tasks_processing.go:74] worker 58 stopped.
I0506 20:12:52.399217       1 gather.go:177] gatherer "clusterconfig" function "openstack_dataplanenodesets" took 7.28355ms to process 0 records
I0506 20:12:52.399224       1 tasks_processing.go:74] worker 1 stopped.
I0506 20:12:52.399224       1 tasks_processing.go:74] worker 3 stopped.
I0506 20:12:52.399268       1 tasks_processing.go:74] worker 4 stopped.
I0506 20:12:52.399285       1 gather.go:177] gatherer "clusterconfig" function "nodenetworkstates" took 7.768692ms to process 0 records
E0506 20:12:52.399298       1 gather.go:140] gatherer "clusterconfig" function "pod_network_connectivity_checks" failed with the error: the server could not find the requested resource (get podnetworkconnectivitychecks.controlplane.operator.openshift.io)
I0506 20:12:52.399308       1 gather.go:177] gatherer "clusterconfig" function "pod_network_connectivity_checks" took 7.419826ms to process 0 records
I0506 20:12:52.399316       1 gather.go:177] gatherer "clusterconfig" function "lokistack" took 7.911287ms to process 0 records
I0506 20:12:52.399323       1 tasks_processing.go:74] worker 41 stopped.
I0506 20:12:52.399329       1 tasks_processing.go:74] worker 10 stopped.
I0506 20:12:52.399439       1 tasks_processing.go:74] worker 15 stopped.
I0506 20:12:52.399471       1 controller.go:128] Initializing last reported time to 0001-01-01T00:00:00Z
I0506 20:12:52.399490       1 controller.go:212] Source periodic-clusterconfig *controllerstatus.Simple is not ready
I0506 20:12:52.399500       1 controller.go:212] Source periodic-conditional *controllerstatus.Simple is not ready
I0506 20:12:52.399504       1 controller.go:212] Source periodic-workloads *controllerstatus.Simple is not ready
I0506 20:12:52.399523       1 controller.go:489] The operator is still being initialized
I0506 20:12:52.399534       1 controller.go:512] The operator is healthy
I0506 20:12:52.399563       1 recorder.go:75] Recording config/proxy with fingerprint=230c3bcc7f48338e94eed95dee231adae41027de6313100ee12dd2cf36f12fd4
I0506 20:12:52.399580       1 gather.go:177] gatherer "clusterconfig" function "proxies" took 8.099943ms to process 1 records
I0506 20:12:52.399670       1 tasks_processing.go:74] worker 20 stopped.
I0506 20:12:52.399850       1 recorder.go:75] Recording config/network with fingerprint=3334018581cc9a8fdedfb1ba1a54dab9406a9aa19d782215e29a8ca006150513
I0506 20:12:52.399868       1 gather.go:177] gatherer "clusterconfig" function "networks" took 8.264598ms to process 1 records
I0506 20:12:52.404893       1 tasks_processing.go:74] worker 62 stopped.
I0506 20:12:52.404907       1 gather.go:177] gatherer "clusterconfig" function "openstack_controlplanes" took 11.07868ms to process 0 records
I0506 20:12:52.408324       1 tasks_processing.go:74] worker 5 stopped.
I0506 20:12:52.408462       1 recorder.go:75] Recording config/ingress with fingerprint=8ecc88d06be2cd0a25dec215da1c5bd592571c04978a1302f3e85df2ec5052a7
I0506 20:12:52.408475       1 gather.go:177] gatherer "clusterconfig" function "ingress" took 16.865926ms to process 1 records
I0506 20:12:52.411931       1 tasks_processing.go:74] worker 46 stopped.
I0506 20:12:52.411947       1 gather.go:177] gatherer "clusterconfig" function "machine_sets" took 19.577943ms to process 0 records
I0506 20:12:52.411965       1 gather.go:177] gatherer "clusterconfig" function "ceph_cluster" took 20.559096ms to process 0 records
I0506 20:12:52.411973       1 gather.go:177] gatherer "clusterconfig" function "openstack_version" took 19.648356ms to process 0 records
I0506 20:12:52.411980       1 tasks_processing.go:74] worker 14 stopped.
I0506 20:12:52.411983       1 tasks_processing.go:74] worker 40 stopped.
I0506 20:12:52.412051       1 tasks_processing.go:74] worker 55 stopped.
E0506 20:12:52.412061       1 gather.go:140] gatherer "clusterconfig" function "machines" failed with the error: machines.machine.openshift.io is forbidden: User "system:serviceaccount:openshift-insights:gather" cannot list resource "machines" in API group "machine.openshift.io" at the cluster scope
I0506 20:12:52.412069       1 gather.go:177] gatherer "clusterconfig" function "machines" took 20.423859ms to process 0 records
I0506 20:12:52.425352       1 tasks_processing.go:74] worker 44 stopped.
I0506 20:12:52.425388       1 gather.go:177] gatherer "clusterconfig" function "storage_cluster" took 29.633391ms to process 0 records
I0506 20:12:52.425445       1 tasks_processing.go:74] worker 19 stopped.
E0506 20:12:52.425458       1 gather.go:140] gatherer "clusterconfig" function "support_secret" failed with the error: secrets "support" not found
I0506 20:12:52.425466       1 gather.go:177] gatherer "clusterconfig" function "support_secret" took 34.003978ms to process 0 records
I0506 20:12:52.425776       1 tasks_processing.go:74] worker 11 stopped.
I0506 20:12:52.425885       1 recorder.go:75] Recording config/featuregate with fingerprint=c29be0977c6a7b7a4909c97ebc96ed8abc472c9bb8dda739c4e5c162d2c86997
I0506 20:12:52.425897       1 gather.go:177] gatherer "clusterconfig" function "feature_gates" took 33.476986ms to process 1 records
I0506 20:12:52.425982       1 tasks_processing.go:74] worker 13 stopped.
I0506 20:12:52.426078       1 recorder.go:75] Recording config/oauth with fingerprint=2c600416a7c79bc511a77376aa320f58afa892d42d414db7537d60a58ab23c66
I0506 20:12:52.426087       1 gather.go:177] gatherer "clusterconfig" function "oauths" took 34.448713ms to process 1 records
I0506 20:12:52.426723       1 tasks_processing.go:74] worker 57 stopped.
I0506 20:12:52.426731       1 gather.go:177] gatherer "clusterconfig" function "nodenetworkconfigurationpolicies" took 34.983404ms to process 0 records
I0506 20:12:52.426755       1 tasks_processing.go:74] worker 29 stopped.
I0506 20:12:52.426774       1 gather.go:177] gatherer "clusterconfig" function "sap_config" took 34.317282ms to process 0 records
I0506 20:12:52.426831       1 tasks_processing.go:74] worker 24 stopped.
I0506 20:12:52.426939       1 recorder.go:75] Recording config/pdbs/openshift-image-registry/image-registry with fingerprint=e5c3f294920d31d8b0f77d20b2bc9ccb3a4fd325f34507038edc966c36181a7f
I0506 20:12:52.426967       1 recorder.go:75] Recording config/pdbs/openshift-ingress/router-default with fingerprint=3aaf76ab0d893852681353acf327516574003a4a5245715601e15f9aff60ff8e
I0506 20:12:52.426985       1 recorder.go:75] Recording config/pdbs/openshift-operator-lifecycle-manager/packageserver-pdb with fingerprint=c50b26ef304257993a44174e2f0bc697efb2fdb2b7dbe3d9d94f444e71005f6b
I0506 20:12:52.426993       1 gather.go:177] gatherer "clusterconfig" function "pdbs" took 33.279771ms to process 3 records
E0506 20:12:52.427000       1 gather.go:140] gatherer "clusterconfig" function "machine_healthchecks" failed with the error: machinehealthchecks.machine.openshift.io is forbidden: User "system:serviceaccount:openshift-insights:gather" cannot list resource "machinehealthchecks" in API group "machine.openshift.io" at the cluster scope
I0506 20:12:52.427006       1 gather.go:177] gatherer "clusterconfig" function "machine_healthchecks" took 33.409297ms to process 0 records
I0506 20:12:52.427016       1 tasks_processing.go:74] worker 42 stopped.
I0506 20:12:52.427047 1 tasks_processing.go:74] worker 34 stopped.
I0506 20:12:52.427067 1 gather.go:177] gatherer "clusterconfig" function "openshift_machine_api_events" took 32.258613ms to process 0 records
I0506 20:12:52.427292 1 tasks_processing.go:74] worker 33 stopped.
I0506 20:12:52.427306 1 gather.go:177] gatherer "clusterconfig" function "node_logs" took 33.714011ms to process 0 records
I0506 20:12:52.427397 1 tasks_processing.go:74] worker 37 stopped.
I0506 20:12:52.427442 1 recorder.go:75] Recording config/storage/storageclasses/gp2-csi with fingerprint=e2a8d6a3c9e201f107b33463bcbb1a110501dd7e0b38ac7b44b00e0372f77978
I0506 20:12:52.427472 1 recorder.go:75] Recording config/storage/storageclasses/gp3-csi with fingerprint=dc130afe75bdfb038321882d4a223e3bde3b6b38bf09d574385cf0d5ae831a1a
I0506 20:12:52.427484 1 gather.go:177] gatherer "clusterconfig" function "storage_classes" took 35.01077ms to process 2 records
I0506 20:12:52.427549 1 gather_logs.go:145] no pods in namespace were found
I0506 20:12:52.427632 1 tasks_processing.go:74] worker 54 stopped.
I0506 20:12:52.427643 1 recorder.go:75] Recording config/apiserver with fingerprint=36b1a5145b72914881de81dcce5a3e16640725b4fccadbb6f59086530ad9fd08
I0506 20:12:52.427658 1 gather.go:177] gatherer "clusterconfig" function "cluster_apiserver" took 35.276927ms to process 1 records
I0506 20:12:52.427743 1 tasks_processing.go:74] worker 45 stopped.
I0506 20:12:52.427827 1 recorder.go:75] Recording config/authentication with fingerprint=0785ce8e7eb1c6304ba9161bac415e5df4178405a4bd7bd9274955e7d13d4428
I0506 20:12:52.427836 1 gather.go:177] gatherer "clusterconfig" function "authentication" took 34.12008ms to process 1 records
I0506 20:12:52.427914 1 tasks_processing.go:74] worker 61 stopped.
I0506 20:12:52.428302 1 recorder.go:75] Recording config/infrastructure with fingerprint=bba2c1ecbfd13c7b049ac1bf7ac808ca14da24b3c1154592aeca2e6e7aad80f6
I0506 20:12:52.428312 1 gather.go:177] gatherer "clusterconfig" function "infrastructures" took 33.980809ms to process 1 records
I0506 20:12:52.428318 1 gather.go:177] gatherer "clusterconfig" function "monitoring_persistent_volumes" took 33.805819ms to process 0 records
I0506 20:12:52.428322 1 gather.go:177] gatherer "clusterconfig" function "qemu_kubevirt_launcher_logs" took 32.880816ms to process 0 records
I0506 20:12:52.428390 1 tasks_processing.go:74] worker 63 stopped.
I0506 20:12:52.428401 1 tasks_processing.go:74] worker 59 stopped.
I0506 20:12:52.428406 1 recorder.go:75] Recording config/image with fingerprint=52ffab0712c64f24179dadc758cc358a542271e45aafa09e39fb22e3fd89a147
I0506 20:12:52.428414 1 gather.go:177] gatherer "clusterconfig" function "image" took 31.596793ms to process 1 records
I0506 20:12:52.428441 1 tasks_processing.go:74] worker 22 stopped.
I0506 20:12:52.428455 1 recorder.go:75] Recording config/schedulers/cluster with fingerprint=08a7bb097cc41a41316de6f941624ace467c054d075bbca58a079d325930d948
I0506 20:12:52.428463 1 gather.go:177] gatherer "clusterconfig" function "schedulers" took 35.578575ms to process 1 records
I0506 20:12:52.428468 1 gather.go:177] gatherer "clusterconfig" function "olm_operators" took 33.971061ms to process 0 records
I0506 20:12:52.428472 1 gather.go:177] gatherer "clusterconfig" function "sap_pods" took 37.196442ms to process 0 records
I0506 20:12:52.428477 1 tasks_processing.go:74] worker 26 stopped.
I0506 20:12:52.428480 1 tasks_processing.go:74] worker 12 stopped.
I0506 20:12:52.428484 1 tasks_processing.go:74] worker 16 stopped.
I0506 20:12:52.428546 1 tasks_processing.go:74] worker 27 stopped.
I0506 20:12:52.428705 1 recorder.go:75] Recording config/mutatingwebhookconfigurations/aws-pod-identity with fingerprint=480cb96ef99eab5c06b0b5d8224a1908d170db5e7b825742a416ea57f9672fcd
I0506 20:12:52.428742 1 recorder.go:75] Recording config/mutatingwebhookconfigurations/sre-podimagespec-mutation with fingerprint=89a3a82451d435c2ef26ece8054f7663460a0f61ad3da73551b75e7837555723
I0506 20:12:52.428767 1 recorder.go:75] Recording config/mutatingwebhookconfigurations/sre-service-mutation with fingerprint=7785ee1cf77007513b732c8ddf6c0f7da0e423a8dd695a4daa65cde0244f9742
I0506 20:12:52.428777 1 gather.go:177] gatherer "clusterconfig" function "mutating_webhook_configurations" took 37.140677ms to process 3 records
I0506 20:12:52.429784 1 tasks_processing.go:74] worker 32 stopped.
I0506 20:12:52.430275 1 recorder.go:75] Recording config/clusteroperator/imageregistry.operator.openshift.io/config/cluster with fingerprint=f7a3429f8bdfd3b2ab10b0acba9a7d840686f17a51a255e3ad7f257152fab65b
I0506 20:12:52.430291 1 gather.go:177] gatherer "clusterconfig" function "image_registries" took 37.895262ms to process 1 records
W0506 20:12:52.432181 1 gather_dvo_metrics.go:210] Failed to read the DVO metrics. Trying again.
I0506 20:12:52.434716 1 tasks_processing.go:74] worker 43 stopped.
I0506 20:12:52.434966 1 recorder.go:75] Recording config/validatingwebhookconfigurations/multus.openshift.io with fingerprint=c48d7c93a0647888dabc6e0dc5a4e52750815d6ea44c9257973c14770c17b5f0
I0506 20:12:52.435133 1 recorder.go:75] Recording config/validatingwebhookconfigurations/network-node-identity.openshift.io with fingerprint=eed08a700ec4bb0518ce6824bb34bd7126c3f7d37d5409e413e3c6830e2b5110
I0506 20:12:52.435179 1 recorder.go:75] Recording config/validatingwebhookconfigurations/performance-addon-operator with fingerprint=a6276ff236f86af458c42fb88420b16fd9fb203a04ea16cacbc69b5df82283ab
I0506 20:12:52.435223 1 recorder.go:75] Recording config/validatingwebhookconfigurations/sre-clusterrolebindings-validation with fingerprint=800355d71fa98ef0e4452e776c1e6be74a2da9f1f1df05c26cca5a6f6ee6c7ee
I0506 20:12:52.435268 1 recorder.go:75] Recording config/validatingwebhookconfigurations/sre-clusterroles-validation with fingerprint=1787b1525d8056d4bd26e9fc0e4ac02596c6b7aa9e1cda8a30de8c02589e2a4b
I0506 20:12:52.435312 1 recorder.go:75] Recording config/validatingwebhookconfigurations/sre-ingress-config-validation with fingerprint=66ccee2ade9dc2a66b7eee4ee132ec899b58cdcf04e59b833e73b3976e166f1f
I0506 20:12:52.435358 1 recorder.go:75] Recording config/validatingwebhookconfigurations/sre-network-operator-validation with fingerprint=5f9740ad1ff6b44a1b5f82afd3d52a5991d56ac5b23b82e00d18095c479c51e2
I0506 20:12:52.435425 1 recorder.go:75] Recording config/validatingwebhookconfigurations/sre-regular-user-validation with fingerprint=6676fd913eab330d5d79570f9d694212ddbfdba07da5472705961b7bc15a84cb
I0506 20:12:52.435474 1 recorder.go:75] Recording config/validatingwebhookconfigurations/sre-scc-validation with fingerprint=79a3eb78c6f85354a9cedbbbdfaa0113d02fa5d72ea303029544a0df62c65be9
I0506 20:12:52.435517 1 recorder.go:75] Recording config/validatingwebhookconfigurations/sre-serviceaccount-validation with fingerprint=cb48691f5826e991afb688f1b56b065ded182dc1801a67c90690d771a0f5ac29
I0506 20:12:52.435578 1 recorder.go:75] Recording config/validatingwebhookconfigurations/sre-techpreviewnoupgrade-validation with fingerprint=c6e5f520d2cf28b5d3c5c24d0d8038fed1ad7a1b8e00bc2fe1a4ca88538e0e96
I0506 20:12:52.435594 1 gather.go:177] gatherer "clusterconfig" function "validating_webhook_configurations" took 40.203224ms to process 11 records
I0506 20:12:52.435694 1 tasks_processing.go:74] worker 60 stopped.
I0506 20:12:52.436170 1 recorder.go:75] Recording config/node/ip-10-0-0-242.ec2.internal with fingerprint=268e36cbadc6a591931f2bb717508b20e100b19e64e55128ba524f16185c6ae6
I0506 20:12:52.436268 1 recorder.go:75] Recording config/node/ip-10-0-1-5.ec2.internal with fingerprint=4c700bc6c090a5c5a3d8c1fa425740d5fa0c25f8a58183b5ba3f7674fcdf6686
I0506 20:12:52.436377 1 recorder.go:75] Recording config/node/ip-10-0-2-156.ec2.internal with fingerprint=8e08a86b69a81143434cf61d0a303300d5530118735868d0aea6e2db5e479d8f
I0506 20:12:52.436410 1 gather.go:177] gatherer "clusterconfig" function "nodes" took 41.509397ms to process 3 records
I0506 20:12:52.436439 1 recorder.go:75] Recording config/namespaces_with_overlapping_uids with fingerprint=4f53cda18c2baa0c0354bb5f9a3ecbe5ed12ab4d8e11ba873c2f11161202b945
I0506 20:12:52.436450 1 gather.go:177] gatherer "clusterconfig" function "overlapping_namespace_uids" took 41.100953ms to process 1 records
I0506 20:12:52.436459 1 tasks_processing.go:74] worker 49 stopped.
I0506 20:12:52.437297 1 tasks_processing.go:74] worker 21 stopped.
I0506 20:12:52.437317 1 gather.go:177] gatherer "clusterconfig" function "certificate_signing_requests" took 43.898595ms to process 0 records
I0506 20:12:52.446732 1 tasks_processing.go:74] worker 30 stopped.
I0506 20:12:52.446948 1 recorder.go:75] Recording cluster-scoped-resources/rbac.authorization.k8s.io/clusterroles/admin with fingerprint=cd0395ec2ebac8e5e39ee091ee3dc1565da06606f9439f4bd9983b486d737ed4
I0506 20:12:52.447033 1 recorder.go:75] Recording cluster-scoped-resources/rbac.authorization.k8s.io/clusterroles/edit with fingerprint=6af41940726f270021fdaad1a22cd1341418de09b90f62e3e6dd799bd3a6c452
I0506 20:12:52.447042 1 gather.go:177] gatherer "clusterconfig" function "clusterroles" took 53.846811ms to process 2 records
I0506 20:12:52.447815 1 tasks_processing.go:74] worker 28 stopped.
I0506 20:12:52.448496 1 recorder.go:75] Recording config/version with fingerprint=889a85ab9a9205087c6c251358fee6600eb683ae6ab8951c4b54581304345cfa
I0506 20:12:52.448530 1 recorder.go:75] Recording config/id with fingerprint=3642dad83afa831032d3d1b2901c658b1fa3489ee0df5affa44f51253f591cd6
I0506 20:12:52.448542 1 gather.go:177] gatherer "clusterconfig" function "version" took 54.995282ms to process 2 records
I0506 20:12:52.448553 1 gather.go:177] gatherer "clusterconfig" function "number_of_pods_and_netnamespaces_with_sdn_annotations" took 52.026882ms to process 0 records
I0506 20:12:52.448577 1 tasks_processing.go:74] worker 35 stopped.
I0506 20:12:52.449274 1 shared_informer.go:320] Caches are synced for RequestHeaderAuthRequestController
I0506 20:12:52.449363 1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I0506 20:12:52.449369 1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0506 20:12:52.450385 1 tasks_processing.go:74] worker 48 stopped.
I0506 20:12:52.450881 1 recorder.go:75] Recording config/crd/volumesnapshots.snapshot.storage.k8s.io with fingerprint=ea8bbe4a5a6f63b988b58358118437b3ce881607e79890d3c304affd383db0fa
I0506 20:12:52.451102 1 recorder.go:75] Recording config/crd/volumesnapshotcontents.snapshot.storage.k8s.io with fingerprint=4bd68c8528b9823c27666791425f8970c8c2854dcd683c741cc8c5a899f56567
I0506 20:12:52.451131 1 gather.go:177] gatherer "clusterconfig" function "crds" took 57.525002ms to process 2 records
I0506 20:12:52.451750 1 tasks_processing.go:74] worker 23 stopped.
I0506 20:12:52.451823 1 recorder.go:75] Recording config/running_containers with fingerprint=7a90a9a9e6409159a621774254cac1597eb201f15c65d5043c761f195fa598d0
I0506 20:12:52.451835 1 gather.go:177] gatherer "clusterconfig" function "container_images" took 60.379411ms to process 1 records
I0506 20:12:52.455598 1 sca.go:136] Pulling SCA certificates from https://api.openshift.com/api/accounts_mgmt/v1/entitlement_certificates. Next check is in 8h0m0s
W0506 20:12:52.455788 1 operator.go:288] started
I0506 20:12:52.455615 1 cluster_transfer.go:83] checking the availability of cluster transfer. Next check is in 12h0m0s
I0506 20:12:52.456023 1 base_controller.go:76] Waiting for caches to sync for LoggingSyncer
I0506 20:12:52.458401 1 base_controller.go:82] Caches are synced for ConfigController
I0506 20:12:52.458418 1 base_controller.go:119] Starting #1 worker of ConfigController controller ...
I0506 20:12:52.467689 1 controller.go:212] Source clusterTransferController *clustertransfer.Controller is not ready
I0506 20:12:52.467703 1 controller.go:212] Source periodic-clusterconfig *controllerstatus.Simple is not ready
I0506 20:12:52.467707 1 controller.go:212] Source periodic-conditional *controllerstatus.Simple is not ready
I0506 20:12:52.467710 1 controller.go:212] Source periodic-workloads *controllerstatus.Simple is not ready
I0506 20:12:52.467714 1 controller.go:212] Source scaController *sca.Controller is not ready
I0506 20:12:52.467731 1 controller.go:489] The operator is still being initialized
I0506 20:12:52.467736 1 controller.go:512] The operator is healthy
I0506 20:12:52.474391 1 prometheus_rules.go:88] Prometheus rules successfully created
I0506 20:12:52.475555 1 configmapobserver.go:84] configmaps "insights-config" not found
I0506 20:12:52.485078 1 tasks_processing.go:74] worker 52 stopped.
E0506 20:12:52.485107 1 gather.go:140] gatherer "clusterconfig" function "config_maps" failed with the error: configmaps "cluster-monitoring-config" not found
E0506 20:12:52.485136 1 gather.go:140] gatherer "clusterconfig" function "config_maps" failed with the error: configmaps "gateway-mode-config" not found
E0506 20:12:52.485145 1 gather.go:140] gatherer "clusterconfig" function "config_maps" failed with the error: configmaps "insights-config" not found
I0506 20:12:52.485165 1 recorder.go:75] Recording config/configmaps/openshift-config/installer-images/images.json with fingerprint=26b6661162b099a0f5a279859b4f46c867929a79d9a4a41fde4be4e6fe138018
I0506 20:12:52.485221 1 recorder.go:75] Recording config/configmaps/openshift-config/kube-root-ca.crt/ca.crt with fingerprint=d476c7d3f5b104863f08f481b1264dcc68cc272ecefb0ecb709b18a6afab034d
I0506 20:12:52.485240 1 recorder.go:75] Recording config/configmaps/openshift-config/openshift-install/invoker with fingerprint=76b482f683cd3ef9da02debac5b26080a5aeb06ff768ee5c21117514dff29d8a
I0506 20:12:52.485251 1 recorder.go:75] Recording config/configmaps/openshift-config/openshift-install/version with fingerprint=c93090eb0d2a4736885abeb79c91680cfd01fda46464f83456b085d4dc8239f0
I0506 20:12:52.485261 1 recorder.go:75] Recording config/configmaps/openshift-config/openshift-service-ca.crt/service-ca.crt with fingerprint=e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
I0506 20:12:52.485313 1 recorder.go:75] Recording config/configmaps/openshift-config/rosa-brand-logo/rosa-brand-logo.svg with fingerprint=6ed8ca4dd7a8eee7249182bc006e9649ce84d76c551ddfaaa33e55d8c4cc1ed0
I0506 20:12:52.485329 1 recorder.go:75] Recording config/configmaps/kube-system/cluster-config-v1/install-config with fingerprint=ab3811c6b83fd7b8e920094cfa3080d1b4ee3c35ec4c8379437b21d27bd6608d
I0506 20:12:52.485338 1 gather.go:177] gatherer "clusterconfig" function "config_maps" took 92.076854ms to process 7 records
E0506 20:12:52.488397 1 cluster_transfer.go:95] failed to pull cluster transfer: unable to retrieve cluster transfer data from https://api.openshift.com/api/accounts_mgmt/v1/cluster_transfers/: Get "https://api.openshift.com/api/accounts_mgmt/v1/cluster_transfers/?search=cluster_uuid+is+%27e6cdc002-ef62-46a5-aff0-d37155e541b4%27+and+status+is+%27accepted%27": dial tcp: lookup api.openshift.com on 172.30.0.10:53: read udp 10.128.0.13:36225->172.30.0.10:53: read: connection refused
I0506 20:12:52.488416 1 controllerstatus.go:80] name=clusterTransferController healthy=true reason=Disconnected message=failed to pull cluster transfer: unable to retrieve cluster transfer data from https://api.openshift.com/api/accounts_mgmt/v1/cluster_transfers/: Get "https://api.openshift.com/api/accounts_mgmt/v1/cluster_transfers/?search=cluster_uuid+is+%27e6cdc002-ef62-46a5-aff0-d37155e541b4%27+and+status+is+%27accepted%27": dial tcp: lookup api.openshift.com on 172.30.0.10:53: read udp 10.128.0.13:36225->172.30.0.10:53: read: connection refused
I0506 20:12:52.503927 1 tasks_processing.go:74] worker 47 stopped.
I0506 20:12:52.503987 1 gather.go:177] gatherer "clusterconfig" function "machine_config_pools" took 110.56213ms to process 0 records
I0506 20:12:52.525448 1 tasks_processing.go:74] worker 56 stopped.
I0506 20:12:52.525555 1 recorder.go:75] Recording config/clusteroperator/imageregistry.operator.openshift.io/imagepruner/cluster with fingerprint=ab5b009b573bbf22a0da7522a960c6ed40a2a75e940b8f572a291796acd59229
I0506 20:12:52.525570 1 gather.go:177] gatherer "clusterconfig" function "image_pruners" took 132.347627ms to process 1 records
I0506 20:12:52.529601 1 tasks_processing.go:74] worker 7 stopped.
I0506 20:12:52.529635 1 recorder.go:75] Recording aggregated/unused_machine_configs_count with fingerprint=4bfc9fa984e5dfcd45848faaf05269de7619bf42edf9f781751af5ee05c1a499
I0506 20:12:52.529645 1 gather.go:177] gatherer "clusterconfig" function "machine_configs" took 138.340686ms to process 1 records
I0506 20:12:52.542045 1 requests.go:205] Asking for SCA certificate with "{"arch": ["x86_64"]}" payload
W0506 20:12:52.545598 1 sca.go:161] Failed to pull SCA certs from https://api.openshift.com/api/accounts_mgmt/v1/entitlement_certificates: unable to retrieve SCA certs data from https://api.openshift.com/api/accounts_mgmt/v1/entitlement_certificates: Post "https://api.openshift.com/api/accounts_mgmt/v1/entitlement_certificates": dial tcp: lookup api.openshift.com on 172.30.0.10:53: read udp 10.128.0.13:37815->172.30.0.10:53: read: connection refused
I0506 20:12:52.545614 1 controllerstatus.go:80] name=scaController healthy=true reason=NonHTTPError message=Failed to pull SCA certs from https://api.openshift.com/api/accounts_mgmt/v1/entitlement_certificates: unable to retrieve SCA certs data from https://api.openshift.com/api/accounts_mgmt/v1/entitlement_certificates: Post "https://api.openshift.com/api/accounts_mgmt/v1/entitlement_certificates": dial tcp: lookup api.openshift.com on 172.30.0.10:53: read udp 10.128.0.13:37815->172.30.0.10:53: read: connection refused
I0506 20:12:52.556455 1 base_controller.go:82] Caches are synced for LoggingSyncer
I0506 20:12:52.556473 1 base_controller.go:119] Starting #1 worker of LoggingSyncer controller ...
I0506 20:12:52.563110 1 tasks_processing.go:74] worker 17 stopped.
E0506 20:12:52.563153 1 gather.go:140] gatherer "clusterconfig" function "ingress_certificates" failed with the error: failed to get secret 'router-certs-default' in namespace 'openshift-ingress': secrets "router-certs-default" not found
E0506 20:12:52.563172 1 gather.go:140] gatherer "clusterconfig" function "ingress_certificates" failed with the error: failed to get secret '2q4kp3bqsgojc2e8hh9s8tcb7dvfarpo-primary-cert-bundle-secret' in namespace 'openshift-ingress-operator': secrets "2q4kp3bqsgojc2e8hh9s8tcb7dvfarpo-primary-cert-bundle-secret" not found
I0506 20:12:52.563254 1 recorder.go:75] Recording aggregated/ingress_controllers_certs with fingerprint=f33d641650ef4acff76c4cbeab8bac3c8dcf6a8d94227545f69d6a1d65b4da1f
I0506 20:12:52.563272 1 gather.go:177] gatherer "clusterconfig" function "ingress_certificates" took 167.187729ms to process 1 records
I0506 20:12:52.572537 1 gather_cluster_operators.go:184] Unable to get dnsrecords.ingress.operator.openshift.io resource due to: dnsrecords.ingress.operator.openshift.io "default" not found
I0506 20:12:52.578652 1 gather_cluster_operators.go:184] Unable to get dnsrecords.ingress.operator.openshift.io resource due to: dnsrecords.ingress.operator.openshift.io "default" not found
W0506 20:12:53.430942 1 gather_dvo_metrics.go:210] Failed to read the DVO metrics. Trying again.
I0506 20:12:53.646397 1 gather_cluster_operator_pods_and_events.go:121] Found 20 pods with 24 containers
I0506 20:12:53.646415 1 gather_cluster_operator_pods_and_events.go:235] Maximum buffer size: 1048576 bytes
I0506 20:12:53.646733 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for dns container dns-default-6s27d pod in namespace openshift-dns (previous: false).
I0506 20:12:53.737011 1 tasks_processing.go:74] worker 25 stopped.
I0506 20:12:53.737036 1 gather.go:177] gatherer "clusterconfig" function "aggregated_monitoring_cr_names" took 1.343342355s to process 0 records
I0506 20:12:53.878069 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for dns-default-6s27d pod in namespace openshift-dns for failing operator dns (previous: false): "container \"dns\" in pod \"dns-default-6s27d\" is waiting to start: ContainerCreating"
I0506 20:12:53.878090 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"dns\" in pod \"dns-default-6s27d\" is waiting to start: ContainerCreating"
I0506 20:12:53.878099 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for kube-rbac-proxy container dns-default-6s27d pod in namespace openshift-dns (previous: false).
I0506 20:12:53.882310 1 gather_cluster_operators.go:184] Unable to get configs.samples.operator.openshift.io resource due to: configs.samples.operator.openshift.io "cluster" not found
I0506 20:12:54.050772 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for dns-default-6s27d pod in namespace openshift-dns for failing operator kube-rbac-proxy (previous: false): "container \"kube-rbac-proxy\" in pod \"dns-default-6s27d\" is waiting to start: ContainerCreating"
I0506 20:12:54.050791 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"kube-rbac-proxy\" in pod \"dns-default-6s27d\" is waiting to start: ContainerCreating"
I0506 20:12:54.050805 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for dns container dns-default-gdsvv pod in namespace openshift-dns (previous: false).
I0506 20:12:54.269266 1 tasks_processing.go:74] worker 9 stopped.
I0506 20:12:54.269323 1 recorder.go:75] Recording config/clusteroperator/console with fingerprint=0c067e1a8d91fcbb645f087f1bfbe3909c47fd1a9fe03fb1a88ae00b047ac60c
I0506 20:12:54.269365 1 recorder.go:75] Recording config/clusteroperator/csi-snapshot-controller with fingerprint=be3d1c1af66b4d028f615e21f95710d6f37e2199ef3a75cff14a8b387409aafe
I0506 20:12:54.269396 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/csisnapshotcontroller/cluster with fingerprint=5adc514f4b63e2f1ecc68bf6f9c0af70c5eea04522a49524e102721b1c41f80e
I0506 20:12:54.269423 1 recorder.go:75] Recording config/clusteroperator/dns with fingerprint=5e1082589d8ff6b742e7d29c51e85629352d6b67e93309fc427841c31637179d
I0506 20:12:54.269442 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/dns/default with fingerprint=9e7b4ce029030d3d8c3b49af92c556acdcc415000b40d3f969dbdc42c432b47f
I0506 20:12:54.269465 1 recorder.go:75] Recording config/clusteroperator/image-registry with fingerprint=a2805216d589957b3354000e994d91c5aca8f4395e8f98ebd39539aca2a0e912
I0506 20:12:54.269513 1 recorder.go:75] Recording config/clusteroperator/ingress with fingerprint=2fd498f8437a71379764b34017b0ef4a86faf043677df1073d0cdb78071afd57
I0506 20:12:54.269537 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/ingresscontroller/openshift-ingress-operator/default with fingerprint=fff3ada6e4babc0d031417c69c53cf7f6b94d1d29fa0e95c442a563f1dbecffd
I0506 20:12:54.269551 1 recorder.go:75] Recording config/clusteroperator/insights with fingerprint=25f410983e025f0177276ffb2c1eb8636df6563f518ed4474d36d1367c52bd0c
I0506 20:12:54.269570 1 recorder.go:75] Recording config/clusteroperator/kube-apiserver with fingerprint=c1904bdeb403d7653e032cfb7e6ec52313f793ff4bb0280420e330d6556557ec
I0506 20:12:54.269580 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/kubeapiserver/cluster with fingerprint=51503bf0b784fcf65ea46bcaf1f72ac1a5c4d5dc211934f18f27871efed05762
I0506 20:12:54.269596 1 recorder.go:75] Recording config/clusteroperator/kube-controller-manager with fingerprint=7132608118e5f80116b4dde61cc6675004f865602392ae35038b99c67a02dc5b
I0506 20:12:54.269606 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/kubecontrollermanager/cluster with fingerprint=ce90c0d4f367d7da085074268031798382ae7c54fdcb0a21f15a4818fe308c11
I0506 20:12:54.269621 1 recorder.go:75] Recording config/clusteroperator/kube-scheduler with fingerprint=05b8505a1d74ce56a656a5ca2388b343e2a29dcb561e2cfe2d9dfaa5cd509796
I0506 20:12:54.269631 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/kubescheduler/cluster with fingerprint=f2940fb9fd20c19951dfc295eb363b7fba0c505f5ae61f01967a063099e6b60a
I0506 20:12:54.269675 1 recorder.go:75] Recording config/clusteroperator/kube-storage-version-migrator with fingerprint=bfd4760e757dcf20b2e7c1130951cd0b71583fa3dc731fe9cb2b83d59792c97d
I0506 20:12:54.269687 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/kubestorageversionmigrator/cluster with fingerprint=9351181aa7e6ada41ef581ab31e13516c6b934cc95710154bafb2eb222cb58db
I0506 20:12:54.269702 1 recorder.go:75] Recording config/clusteroperator/monitoring with fingerprint=002108060c10dae41e281f12d07446d82c1547ad575ca64911280ba9942efc1b
I0506 20:12:54.269850 1 recorder.go:75] Recording config/clusteroperator/network with fingerprint=32607ab238bb15c34394d15cd549e9ca04d62b06a243a3412213ff610c02b233
I0506 20:12:54.269861 1 recorder.go:75] Recording config/clusteroperator/network.operator.openshift.io/operatorpki/openshift-ovn-kubernetes/ovn with fingerprint=626a89d20e0deaed5b6dfb533acfe65f4bb1618bd200a703b62e60c5d16d94ab
I0506 20:12:54.269868 1 recorder.go:75] Recording config/clusteroperator/network.operator.openshift.io/operatorpki/openshift-ovn-kubernetes/signer with fingerprint=90410b16914712b85b3c4578716ad8c0ae072e688f4cd1e022bf76f20da3506d
I0506 20:12:54.269890 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/network/cluster with fingerprint=a93d15eaecb455a0e40ecb2826eeecc1533899204ddd3c3921d15ab70af7ae75
I0506 20:12:54.269912 1 recorder.go:75] Recording config/clusteroperator/node-tuning with fingerprint=d7a956a27be9120369145d09b881dbfaf39ffb21430d7dc2aa4683b701c70dd0
I0506 20:12:54.269935 1 recorder.go:75] Recording config/clusteroperator/openshift-apiserver with fingerprint=d95df89fb9362769d294c04d56e37cc22db27d95d6713a3d0226188f27a7333c
I0506 20:12:54.269947 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/openshiftapiserver/cluster with fingerprint=e712e6cf27339b441e4ed1f4cde91dbde7e952698ba93407e4457db63a4a4c76
I0506 20:12:54.269964 1 recorder.go:75] Recording config/clusteroperator/openshift-controller-manager with fingerprint=b5d31961f0f7c16fb135a0abd09af1b2326e5c65ceea1201eae6072794343a65
I0506 20:12:54.269973 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/openshiftcontrollermanager/cluster with fingerprint=d71a0f4672f9b45d9fc8293bf1687afc650fd28d32e2e30de27523fe7b4eadf7
I0506 20:12:54.269986 1 recorder.go:75] Recording config/clusteroperator/openshift-samples with fingerprint=ab067325e3a4aa515bffb352d92af871652c330457f7a1ce8369b568d73bb06f
I0506 20:12:54.270001 1 recorder.go:75] Recording config/clusteroperator/operator-lifecycle-manager with fingerprint=b2d2a6ffe0b18d5c50bae268767b37c7eecdd80e608c7fcff2ab58b510a1229b
I0506 20:12:54.270015 1 recorder.go:75] Recording config/clusteroperator/operator-lifecycle-manager-catalog with fingerprint=e4b303ecf51d1dd29caa2f5b390f54c31f0000e4d7d8aed7372cb3930d3d824a
I0506 20:12:54.270030 1 recorder.go:75] Recording config/clusteroperator/operator-lifecycle-manager-packageserver with fingerprint=3801f63d1da4a46e4fc57c8fe69fafb3fff560f595b2e0983c3b4f869673c11e
I0506 20:12:54.270045 1 recorder.go:75] Recording config/clusteroperator/service-ca with fingerprint=af7a998fbc859a084369d8755eac81a16429ad01f34f0869cd85fc0916f2f3af
I0506 20:12:54.270069 1 recorder.go:75] Recording config/clusteroperator/storage with fingerprint=43a491efbafd9dd9ddaf2efa43604cd4bc89e44ea503a2a2c99c9e86808a5083
I0506 20:12:54.270086 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/clustercsidriver/ebs.csi.aws.com with fingerprint=510064d6f6bcced87ab5bd2ddaff3d0edd7f93f4a4f7af2641f29fc53ffab21e
I0506 20:12:54.270094 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/storage/cluster with fingerprint=8e480f8c1ce1b39baac42d8ec780c57c2592929ae0c801b61ffad49ba13f33ad
I0506 20:12:54.270101 1 gather.go:177] gatherer "clusterconfig" function "operators" took 1.877895689s to process 35 records
I0506 20:12:54.272623 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for dns-default-gdsvv pod in namespace openshift-dns for failing operator dns (previous: false): "container \"dns\" in pod \"dns-default-gdsvv\" is waiting to start: ContainerCreating"
I0506 20:12:54.272639 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"dns\" in pod \"dns-default-gdsvv\" is waiting to start: ContainerCreating"
I0506 20:12:54.272647 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for kube-rbac-proxy container dns-default-gdsvv pod in namespace openshift-dns (previous: false).
W0506 20:12:54.430964 1 gather_dvo_metrics.go:210] Failed to read the DVO metrics. Trying again.
I0506 20:12:54.447760 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for dns-default-gdsvv pod in namespace openshift-dns for failing operator kube-rbac-proxy (previous: false): "container \"kube-rbac-proxy\" in pod \"dns-default-gdsvv\" is waiting to start: ContainerCreating"
I0506 20:12:54.447778 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"kube-rbac-proxy\" in pod \"dns-default-gdsvv\" is waiting to start: ContainerCreating"
I0506 20:12:54.447790 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for dns container dns-default-shtlg pod in namespace openshift-dns (previous: false).
I0506 20:12:54.671340 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for dns-default-shtlg pod in namespace openshift-dns for failing operator dns (previous: false): "container \"dns\" in pod \"dns-default-shtlg\" is waiting to start: ContainerCreating"
I0506 20:12:54.671360 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"dns\" in pod \"dns-default-shtlg\" is waiting to start: ContainerCreating"
I0506 20:12:54.671368 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for kube-rbac-proxy container dns-default-shtlg pod in namespace openshift-dns (previous: false).
I0506 20:12:54.853383 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for dns-default-shtlg pod in namespace openshift-dns for failing operator kube-rbac-proxy (previous: false): "container \"kube-rbac-proxy\" in pod \"dns-default-shtlg\" is waiting to start: ContainerCreating"
I0506 20:12:54.853402 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"kube-rbac-proxy\" in pod \"dns-default-shtlg\" is waiting to start: ContainerCreating"
I0506 20:12:54.853415 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for dns-node-resolver container node-resolver-6s2qm pod in namespace openshift-dns (previous: false).
I0506 20:12:55.049099 1 gather_cluster_operator_pods_and_events.go:280] Error: "log buffer is empty"
I0506 20:12:55.049154 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for dns-node-resolver container node-resolver-kwwq5 pod in namespace openshift-dns (previous: false).
I0506 20:12:55.252092 1 gather_cluster_operator_pods_and_events.go:280] Error: "log buffer is empty"
I0506 20:12:55.252133 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for dns-node-resolver container node-resolver-lzg56 pod in namespace openshift-dns (previous: false).
W0506 20:12:55.430836 1 gather_dvo_metrics.go:210] Failed to read the DVO metrics. Trying again.
I0506 20:12:55.449382 1 gather_cluster_operator_pods_and_events.go:280] Error: "log buffer is empty"
I0506 20:12:55.449407 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for registry container image-registry-774ff9574d-4tw67 pod in namespace openshift-image-registry (previous: false).
I0506 20:12:55.648759 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for image-registry-774ff9574d-4tw67 pod in namespace openshift-image-registry for failing operator registry (previous: false): "container \"registry\" in pod \"image-registry-774ff9574d-4tw67\" is waiting to start: ContainerCreating"
I0506 20:12:55.648779 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"registry\" in pod \"image-registry-774ff9574d-4tw67\" is waiting to start: ContainerCreating"
I0506 20:12:55.648790 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for registry container image-registry-774ff9574d-dtvjq pod in namespace openshift-image-registry (previous: false).
I0506 20:12:55.849414 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for image-registry-774ff9574d-dtvjq pod in namespace openshift-image-registry for failing operator registry (previous: false): "container \"registry\" in pod \"image-registry-774ff9574d-dtvjq\" is waiting to start: ContainerCreating"
I0506 20:12:55.849437 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"registry\" in pod \"image-registry-774ff9574d-dtvjq\" is waiting to start: ContainerCreating"
I0506 20:12:55.849449 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for registry container image-registry-78bf9c6b75-4pfft pod in namespace openshift-image-registry (previous: false).
I0506 20:12:56.065994 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for image-registry-78bf9c6b75-4pfft pod in namespace openshift-image-registry for failing operator registry (previous: false): "container \"registry\" in pod \"image-registry-78bf9c6b75-4pfft\" is waiting to start: ContainerCreating"
I0506 20:12:56.066017 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"registry\" in pod \"image-registry-78bf9c6b75-4pfft\" is waiting to start: ContainerCreating"
I0506 20:12:56.066030 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for node-ca container node-ca-bbhz4 pod in namespace openshift-image-registry (previous: false).
I0506 20:12:56.250317 1 gather_cluster_operator_pods_and_events.go:280] Error: "log buffer is empty"
I0506 20:12:56.250337 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for node-ca container node-ca-f9gzs pod in namespace openshift-image-registry (previous: false).
W0506 20:12:56.431314 1 gather_dvo_metrics.go:210] Failed to read the DVO metrics. Trying again.
I0506 20:12:56.467105 1 gather_cluster_operator_pods_and_events.go:280] Error: "log buffer is empty"
I0506 20:12:56.467140 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for node-ca container node-ca-r68cj pod in namespace openshift-image-registry (previous: false).
I0506 20:12:56.667373 1 gather_cluster_operator_pods_and_events.go:280] Error: "log buffer is empty"
I0506 20:12:56.667396 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for router container router-default-6bbc46d58b-xwktx pod in namespace openshift-ingress (previous: false).
I0506 20:12:56.860309 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for router-default-6bbc46d58b-xwktx pod in namespace openshift-ingress for failing operator router (previous: false): "container \"router\" in pod \"router-default-6bbc46d58b-xwktx\" is waiting to start: ContainerCreating"
I0506 20:12:56.860329 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"router\" in pod \"router-default-6bbc46d58b-xwktx\" is waiting to start: ContainerCreating"
I0506 20:12:56.860341 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for router container router-default-6bbc46d58b-zzv82 pod in namespace openshift-ingress (previous: false).
I0506 20:12:57.050624 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for router-default-6bbc46d58b-zzv82 pod in namespace openshift-ingress for failing operator router (previous: false): "container \"router\" in pod \"router-default-6bbc46d58b-zzv82\" is waiting to start: ContainerCreating"
I0506 20:12:57.050644 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"router\" in pod \"router-default-6bbc46d58b-zzv82\" is waiting to start: ContainerCreating"
I0506 20:12:57.050656 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for router container router-default-b46c46fff-m9b7n pod in namespace openshift-ingress (previous: false).
I0506 20:12:57.249785 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for router-default-b46c46fff-m9b7n pod in namespace openshift-ingress for failing operator router (previous: false): "container \"router\" in pod \"router-default-b46c46fff-m9b7n\" is waiting to start: ContainerCreating"
I0506 20:12:57.249806 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"router\" in pod \"router-default-b46c46fff-m9b7n\" is waiting to start: ContainerCreating"
I0506 20:12:57.249817 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for serve-healthcheck-canary container ingress-canary-948pm pod in namespace openshift-ingress-canary (previous: false).
W0506 20:12:57.427331 1 gather_dvo_metrics.go:117] Unable to read metrics from endpoint "http://deployment-validation-operator-metrics.openshift-deployment-validation-operator.svc:8383": DVO metrics service was not available within the 5s timeout: context deadline exceeded
I0506 20:12:57.427361 1 tasks_processing.go:74] worker 8 stopped.
E0506 20:12:57.427374 1 gather.go:140] gatherer "clusterconfig" function "dvo_metrics" failed with the error: DVO metrics service was not available within the 5s timeout: context deadline exceeded
I0506 20:12:57.427385 1 recorder.go:75] Recording config/dvo_metrics with fingerprint=e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
W0506 20:12:57.427400 1 gather.go:155] issue recording gatherer "clusterconfig" function "dvo_metrics" result "config/dvo_metrics" because of the warning: warning: the record with the same fingerprint "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855" was already recorded at path "config/configmaps/openshift-config/openshift-service-ca.crt/service-ca.crt", recording another one with a different path "config/dvo_metrics"
I0506 20:12:57.427409 1 gather.go:177] gatherer "clusterconfig" function "dvo_metrics" took 5.035602142s to process 1 records
I0506 20:12:57.449323 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for ingress-canary-948pm pod in namespace openshift-ingress-canary for failing operator serve-healthcheck-canary (previous: false): "container \"serve-healthcheck-canary\" in pod \"ingress-canary-948pm\" is waiting to start: ContainerCreating"
I0506 20:12:57.449341 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"serve-healthcheck-canary\" in pod \"ingress-canary-948pm\" is waiting to start: ContainerCreating"
I0506 20:12:57.449352 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for serve-healthcheck-canary container ingress-canary-ck5k6 pod in namespace openshift-ingress-canary (previous: false).
I0506 20:12:57.651223 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for ingress-canary-ck5k6 pod in namespace openshift-ingress-canary for failing operator serve-healthcheck-canary (previous: false): "container \"serve-healthcheck-canary\" in pod \"ingress-canary-ck5k6\" is waiting to start: ContainerCreating"
I0506 20:12:57.651243 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"serve-healthcheck-canary\" in pod \"ingress-canary-ck5k6\" is waiting to start: ContainerCreating"
I0506 20:12:57.651258 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for serve-healthcheck-canary container ingress-canary-z7wcf pod in namespace openshift-ingress-canary (previous: false).
I0506 20:12:57.857073 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for ingress-canary-z7wcf pod in namespace openshift-ingress-canary for failing operator serve-healthcheck-canary (previous: false): "container \"serve-healthcheck-canary\" in pod \"ingress-canary-z7wcf\" is waiting to start: ContainerCreating"
I0506 20:12:57.857096 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"serve-healthcheck-canary\" in pod \"ingress-canary-z7wcf\" is waiting to start: ContainerCreating"
I0506 20:12:57.857109 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for migrator container migrator-7d5f866c57-94jr4 pod in namespace openshift-kube-storage-version-migrator (previous: false).
I0506 20:12:58.049699 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for graceful-termination container migrator-7d5f866c57-94jr4 pod in namespace openshift-kube-storage-version-migrator (previous: false).
I0506 20:12:58.249177 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for kube-storage-version-migrator-operator container kube-storage-version-migrator-operator-74848b4cb9-lmmft pod in namespace openshift-kube-storage-version-migrator-operator (previous: false).
I0506 20:12:58.452693 1 tasks_processing.go:74] worker 0 stopped.
I0506 20:12:58.452810 1 recorder.go:75] Recording events/openshift-dns with fingerprint=f573114315302a27269f4f946041af53a0493792b163b1f58dc6104a05a49e01
I0506 20:12:58.452908 1 recorder.go:75] Recording events/openshift-image-registry with fingerprint=95778b2c432a0c6b052be2df53eab9a26686d566f1cdac132a2c1840183574ba
I0506 20:12:58.452938 1 recorder.go:75] Recording events/openshift-ingress-operator with fingerprint=3ca30f3722392e11ad50c11115d920e1e131783c165ef27f89628843c5311540
I0506 20:12:58.452990 1 recorder.go:75] Recording events/openshift-ingress with fingerprint=6f918c4945f60d61786bd545a2aa647f842e3f2bd7dad2a77a817e02452125d4
I0506 20:12:58.453007 1 recorder.go:75] Recording events/openshift-ingress-canary with fingerprint=9331b56c9d1d1b77e73af0c0806f8a77ce53a83b8f6eac60d5108ad7faafb281
I0506 20:12:58.453020 1 recorder.go:75] Recording events/openshift-kube-storage-version-migrator with fingerprint=c9ea61e1cc6ebf6291221530ae064766cff02ee5ff8ad35591f5f007d1a34ddb
I0506 20:12:58.453082 1 recorder.go:75] Recording events/openshift-kube-storage-version-migrator-operator with fingerprint=477a9cef21c353c186acd44ddf92268b31c23b8f337fcae1734318d5cca81768
I0506 20:12:58.453098 1 recorder.go:75] Recording config/pod/openshift-kube-storage-version-migrator/logs/migrator-7d5f866c57-94jr4/migrator_current.log with fingerprint=936c5862265e7693fbac308bad004718b8b4f760d64aec4963cd3ad6afcf1ebe
I0506 20:12:58.453103 1 recorder.go:75] Recording config/pod/openshift-kube-storage-version-migrator/logs/migrator-7d5f866c57-94jr4/graceful-termination_current.log with fingerprint=85e7231955c33fec523931033565464e4c251896c09a25e5bebc6ac2d366cf9e
I0506 20:12:58.453203 1 recorder.go:75] Recording config/pod/openshift-kube-storage-version-migrator-operator/logs/kube-storage-version-migrator-operator-74848b4cb9-lmmft/kube-storage-version-migrator-operator_current.log with fingerprint=9e81869e113a34de2bfdc28ae5ed5b7f42c7c81b506de0fa9840f8620c585039
I0506 20:12:58.453213 1 gather.go:177] gatherer "clusterconfig" function "operators_pods_and_events" took 6.061379138s to process 10 records
I0506 20:13:05.042586 1 tasks_processing.go:74] worker 50 stopped.
I0506 20:13:05.042632 1 recorder.go:75] Recording config/installplans with fingerprint=7b887df561a3a9e6ef0dc672845aa5d56e348505006b7496d3a2f83892b0c95b
I0506 20:13:05.042651 1 gather.go:177] gatherer "clusterconfig" function "install_plans" took 12.649296045s to process 1 records
I0506 20:13:05.150849 1 configmapobserver.go:84] configmaps "insights-config" not found
I0506 20:13:05.800364 1 tasks_processing.go:74] worker 39 stopped.
I0506 20:13:05.800640 1 recorder.go:75] Recording config/serviceaccounts with fingerprint=8134d5e41c8b88cd9e4cb52a44a4aaa7dc9c85df1ecff9db5f654f5b009336ee
I0506 20:13:05.800658 1 gather.go:177] gatherer "clusterconfig" function "service_accounts" took 13.40807742s to process 1 records
E0506 20:13:05.800717 1 periodic.go:247] "Unhandled Error" err="clusterconfig failed after 13.409s with: function \"pod_network_connectivity_checks\" failed with an error, function \"machines\" failed with an error, function \"support_secret\" failed with an error, function \"machine_healthchecks\" failed with an error, function \"config_maps\" failed with an error, function \"ingress_certificates\" failed with an error, function \"dvo_metrics\" failed with an error"
I0506 20:13:05.801839 1 controllerstatus.go:89] name=periodic-clusterconfig healthy=false reason=PeriodicGatherFailed message=Source clusterconfig could not be retrieved: function "pod_network_connectivity_checks" failed with an error, function "machines" failed with an error, function "support_secret" failed with an error, function "machine_healthchecks" failed with an error, function "config_maps" failed with an error, function "ingress_certificates" failed with an error, function "dvo_metrics" failed with an error
I0506 20:13:05.801858 1 periodic.go:209] Running workloads gatherer
I0506 20:13:05.801877 1 tasks_processing.go:45] number of workers: 2
I0506 20:13:05.801891 1 tasks_processing.go:69] worker 1 listening for tasks.
I0506 20:13:05.801897 1 tasks_processing.go:71] worker 1 working on workload_info task.
I0506 20:13:05.801903 1 tasks_processing.go:69] worker 0 listening for tasks.
I0506 20:13:05.801914 1 tasks_processing.go:71] worker 0 working on helmchart_info task.
I0506 20:13:05.829987 1 gather_workloads_info.go:278] Loaded pods in 0s, will wait 22s for image data
I0506 20:13:05.848182 1 gather_workloads_info.go:387] No image sha256:80748ba08e1c264a8c105e7f607eff386a66378e024443a844993ee9292858c1 (19ms)
I0506 20:13:05.855810 1 tasks_processing.go:74] worker 0 stopped.
I0506 20:13:05.855833 1 gather.go:177] gatherer "workloads" function "helmchart_info" took 53.875313ms to process 0 records
I0506 20:13:05.858680 1 gather_workloads_info.go:387] No image sha256:04c87c054a3f366a7dfbe0a93ebb0c80a098ee16842c5794b67c1202eec61996 (10ms)
I0506 20:13:05.869789 1 gather_workloads_info.go:387] No image sha256:2904a78e2eb73fd6a9bb94c105c2a056831fb4113fbb7b0607c50adc9d879c9b (11ms)
I0506 20:13:05.879685 1 gather_workloads_info.go:387] No image sha256:5a95c19d82767e0235b4edb4a0536482c816904897aae1dc3eb255cb52b87a9f (10ms)
I0506 20:13:05.891733 1 gather_workloads_info.go:387] No image sha256:653c666f842c13e0baae2e89a9b1efe0e2ef56f621ffb5b32005115d2a26ab8c (12ms)
I0506 20:13:05.902607 1 gather_workloads_info.go:387] No image sha256:a498046d64605bcccee2440aa4f04a4602baaae263cf01d977ec5208e876b1fd (11ms)
I0506 20:13:05.914098 1 gather_workloads_info.go:387] No image sha256:695cf2f0cc07683c2a3ce1eaf3e56fe18abc6e2bac716f7d9843f5d173b9df52 (11ms)
I0506 20:13:05.931975 1 gather_workloads_info.go:387] No image sha256:91828234f107c068c8a4966d08370ae7b73e637651dbc6d92c18c4553402c22c (18ms)
I0506 20:13:05.943806 1 gather_workloads_info.go:387] No image sha256:2e57e192c3c1240fd935dcd55c8fde5e70e78bf81d6176c96edf21fafe59f8ba (12ms)
I0506 20:13:05.955378 1 gather_workloads_info.go:387] No image sha256:0a99240166165eb5718e7516a43282fe32df9c7c5e809b31b58abe44e42ff94d (12ms)
I0506 20:13:05.966665 1 gather_workloads_info.go:387] No image sha256:943018739e3db1763c3184b460dbc409e058abbac76d57b9927faad317be85e4 (11ms)
I0506 20:13:06.041734 1 gather_workloads_info.go:387] No image sha256:a0105d1eb62cf6ac9e5e2ef28d3e89bf6dc514bc594fc7090fe5a5ee18a09c87 (75ms)
I0506 20:13:06.141974 1 gather_workloads_info.go:387] No image sha256:521712486e2c6e3c020dad6a1cb340db8e55665b69f7c208fab9cd9e965fd588 (100ms)
I0506 20:13:06.240676 1 gather_workloads_info.go:387] No image sha256:289816958633a763a72dbc44e1dad40466223164e7e253039514f0d974ea5d21 (99ms)
I0506 20:13:06.342719 1 gather_workloads_info.go:387] No image sha256:50197f22710766515f67944a779e00dd9ae3d17b18732d7324a970353b11f292 (102ms)
I0506 20:13:06.440585 1 gather_workloads_info.go:387] No image sha256:ae7d3453fd734ecc865e5f9bb16f29244ebffe6291b77e1d4e496f71eb053174 (98ms)
I0506 20:13:06.544503 1 gather_workloads_info.go:387] No image sha256:7adc1eab05d6724c76ba751f6df816b08d6e70b78dee9eb94fa6fd9690542c98 (104ms)
I0506 20:13:06.646226 1 gather_workloads_info.go:387] No image sha256:47154813651033d59751fb655a384dbffb64dd26f10bd7f3be0c3128d0486356 (102ms)
I0506 20:13:06.743817 1 gather_workloads_info.go:387] No image sha256:1a2532940843248c57d52141185dd71fbc393ab28b65d48f682038632c1dbbad (98ms)
I0506 20:13:06.840911 1 gather_workloads_info.go:387] No image sha256:2e564f336c77116053f34d4201d364d8da04e789cfffa0ea422574c95f2d6404 (97ms)
I0506 20:13:06.941659 1 gather_workloads_info.go:387] No image sha256:765f0d23b637f685f98a31bd47c131b03cf72a40761a3f9a9d6320faa3c33733 (101ms)
I0506 20:13:07.041375 1 gather_workloads_info.go:387] No image sha256:7b31223098f08328f5ddea8e5b871dbbd5f5a61ec550e8956f66793c0c6031a9 (100ms)
I0506 20:13:07.142873 1 gather_workloads_info.go:387] No image sha256:2598489729a4b258e4ecda4a06f6875133f2a10ced5c5241f8a57a8a05418e36 (101ms)
I0506 20:13:07.241225 1 gather_workloads_info.go:387] No image sha256:4556896f77307821531ef91b7b7faccb82b824ea695693b2989f597f0deca038 (98ms)
I0506 20:13:07.347893 1 gather_workloads_info.go:387] No image sha256:56a85660a445eced5c79a595a0eccf590087c5672d50f49d4c25ad52f9a44f04 (107ms)
I0506 20:13:07.442170 1 gather_workloads_info.go:387] No image sha256:e84cb128d930bd1ab867cc89b7b7bf2b2c0e41105ab93b5381069945b3ee9c57 (94ms)
I0506 20:13:07.543868 1 gather_workloads_info.go:387] No image sha256:a56211d075aa43cbb491f669a5b2e46ee023dc95b7d51dbac28f463948c5ad61 (102ms)
I0506 20:13:07.643101 1 gather_workloads_info.go:387] No image sha256:ca1344cb64140188b7cae7bbc51fb751566c0b0c97d5e39b5850e628032c4a5e (99ms)
I0506 20:13:07.746033 1 gather_workloads_info.go:387] No image sha256:b3909bf664c77097f75b3768830863d642eed3815dab2bfb4415c771ca2d5007 (103ms)
I0506 20:13:07.841904 1 gather_workloads_info.go:387] No image sha256:c940ea87e7d133d75ba0002ef00c0806825eed3db8094cdb260d1bac18127733 (96ms)
I0506 20:13:07.941927 1 gather_workloads_info.go:387] No image sha256:a258c226562adb14e3a163a1940938526ee6a0928982a7667d85d9a7334ce639 (100ms)
I0506 20:13:08.041183 1 gather_workloads_info.go:387] No image sha256:5f0b67cfbbc381243fb91ccc17345b56d05f4d717c667e8c644e5bf05633ba71 (99ms)
I0506 20:13:08.140559 1 gather_workloads_info.go:387] No image sha256:03cf4cd7ef1518610c6c7b3ad27d1622d82e98e3dc6e3f8e5d0fceb5c8d3786e (99ms)
I0506 20:13:08.241905 1 gather_workloads_info.go:387] No image sha256:36b9e89c3cfcf1ab9ae500486e38afb6862cba48cb0b4d84a09508ab8f3d299f (101ms)
I0506 20:13:08.241938 1 tasks_processing.go:74] worker 1 stopped.
E0506 20:13:08.241948 1 gather.go:140] gatherer "workloads" function "workload_info" failed with the error: no running pods found for the insights-runtime-extractor statefulset
I0506 20:13:08.242210 1 recorder.go:75] Recording config/workload_info with fingerprint=5dab5740b0c9cfc2a23d5129d54da01be5398922f8c6fa8144d2b38d5fcec5b4
I0506 20:13:08.242224 1 gather.go:177] gatherer "workloads" function "workload_info" took 2.440032316s to process 1 records
E0506 20:13:08.242248 1 periodic.go:247] "Unhandled Error" err="workloads failed after 2.44s with: function \"workload_info\" failed with an error"
I0506 20:13:08.243350 1 controllerstatus.go:89] name=periodic-workloads healthy=false reason=PeriodicGatherFailed message=Source workloads could not be retrieved: function "workload_info" failed with an error
I0506 20:13:08.243365 1 periodic.go:209] Running conditional gatherer
I0506 20:13:08.251614 1 requests.go:294] Making HTTP GET request at: https://console.redhat.com/api/gathering/v2/4.20.8/gathering_rules
I0506 20:13:08.258245 1 conditional_gatherer.go:107] Get "https://console.redhat.com/api/gathering/v2/4.20.8/gathering_rules": dial tcp: lookup console.redhat.com on 172.30.0.10:53: read udp 10.128.0.13:52639->172.30.0.10:53: read: connection refused
E0506 20:13:08.258516 1 conditional_gatherer.go:322] unable to update alerts cache: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory
I0506 20:13:08.258583 1 conditional_gatherer.go:384] updating version cache for conditional gatherer
I0506 20:13:08.266262 1 conditional_gatherer.go:392] cluster version is '4.20.8'
E0506 20:13:08.266283 1 conditional_gatherer.go:209] error checking conditions for a gathering rule: alerts cache is missing
E0506 20:13:08.266290 1 conditional_gatherer.go:209] error checking conditions for a gathering rule: alerts cache is missing
E0506 20:13:08.266293 1 conditional_gatherer.go:209] error checking conditions for a gathering rule: alerts cache is missing
E0506 20:13:08.266296 1 conditional_gatherer.go:209] error checking conditions for a gathering rule: alerts cache is missing
E0506 20:13:08.266299 1 conditional_gatherer.go:209] error checking conditions for a gathering rule: alerts cache is missing
E0506 20:13:08.266302 1 conditional_gatherer.go:209] error checking conditions for a gathering rule: alerts cache is missing
E0506 20:13:08.266305 1 conditional_gatherer.go:209] error checking conditions for a gathering rule: alerts cache is missing
E0506 20:13:08.266308 1 conditional_gatherer.go:209] error checking conditions for a gathering rule: alerts cache is missing
E0506 20:13:08.266311 1 conditional_gatherer.go:209] error checking conditions for a gathering rule: alerts cache is missing
I0506 20:13:08.266330 1 tasks_processing.go:45] number of workers: 3
I0506 20:13:08.266341 1 tasks_processing.go:69] worker 2 listening for tasks.
I0506 20:13:08.266345 1 tasks_processing.go:71] worker 2 working on conditional_gatherer_rules task.
I0506 20:13:08.266353 1 tasks_processing.go:69] worker 0 listening for tasks.
I0506 20:13:08.266364 1 tasks_processing.go:71] worker 0 working on remote_configuration task.
I0506 20:13:08.266371 1 tasks_processing.go:71] worker 2 working on rapid_container_logs task.
I0506 20:13:08.266371 1 tasks_processing.go:69] worker 1 listening for tasks.
I0506 20:13:08.266381 1 tasks_processing.go:74] worker 1 stopped.
I0506 20:13:08.266451 1 recorder.go:75] Recording insights-operator/conditional-gatherer-rules with fingerprint=7034af97e7e41c22e4b775abdd4b9066c8ebb19da33eb7f69f39bfd2eb5f6406
I0506 20:13:08.266464 1 gather.go:177] gatherer "conditional" function "conditional_gatherer_rules" took 1.067µs to process 1 records
I0506 20:13:08.266496 1 recorder.go:75] Recording insights-operator/remote-configuration with fingerprint=0394430c431eec4d48bb1811a90918e95161d2282c59af26f2473613cc0959db
I0506 20:13:08.266503 1 gather.go:177] gatherer "conditional" function "remote_configuration" took 1.358µs to process 1 records
I0506 20:13:08.266509 1 tasks_processing.go:74] worker 0 stopped.
I0506 20:13:08.266634 1 tasks_processing.go:74] worker 2 stopped.
I0506 20:13:08.266646 1 gather.go:177] gatherer "conditional" function "rapid_container_logs" took 257.046µs to process 0 records
I0506 20:13:08.266674 1 controllerstatus.go:89] name=periodic-conditional healthy=false reason=NotAvailable message=Get "https://console.redhat.com/api/gathering/v2/4.20.8/gathering_rules": dial tcp: lookup console.redhat.com on 172.30.0.10:53: read udp 10.128.0.13:52639->172.30.0.10:53: read: connection refused
I0506 20:13:08.266692 1 recorder.go:75] Recording insights-operator/remote-configuration.json with fingerprint=359de9c990c741675cec72fda96b5c3682221efdb4799f5eaa6e9805bcd3b5c1
I0506 20:13:08.291809 1 recorder.go:75] Recording insights-operator/gathers with fingerprint=a22c01b2811b7f2916285c39ce0b8c6af19647d411258bef8fb625eac24138c8
I0506 20:13:08.291970 1 diskrecorder.go:70] Writing 103 records to /var/lib/insights-operator/insights-2026-05-06-201308.tar.gz
I0506 20:13:08.298497 1 diskrecorder.go:51] Wrote 103 records to disk in 6ms
I0506 20:13:08.298533 1 periodic.go:278] Gathering cluster info every 2h0m0s
I0506 20:13:08.298547 1 periodic.go:279] Configuration is dataReporting: interval: 2h0m0s, uploadEndpoint: https://console.redhat.com/api/ingress/v1/upload, storagePath: /var/lib/insights-operator, downloadEndpoint: https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/reports, conditionalGathererEndpoint: https://console.redhat.com/api/gathering/v2/%s/gathering_rules, obfuscation: [] sca: disabled: false, endpoint: https://api.openshift.com/api/accounts_mgmt/v1/entitlement_certificates, interval: 8h0m0s alerting: disabled: false clusterTransfer: endpoint: https://api.openshift.com/api/accounts_mgmt/v1/cluster_transfers/, interval: 12h0m0s proxy: httpProxy: , httpsProxy: , noProxy:
I0506 20:13:08.329009 1 configmapobserver.go:84] configmaps "insights-config" not found
I0506 20:13:08.526277 1 configmapobserver.go:84] configmaps "insights-config" not found
I0506 20:13:17.863385 1 configmapobserver.go:84] configmaps "insights-config" not found
I0506 20:14:21.973087 1 observer_polling.go:111] Observed file "/var/run/secrets/serving-cert/tls.crt" has been created (hash="c7e10ac18a7e20dcb2c763f00a9c9b5a82d20bf7ed388c9cffccf67834d1ce70")
W0506 20:14:21.973151 1 builder.go:160] Restart triggered because of file /var/run/secrets/serving-cert/tls.crt was created
I0506 20:14:21.973200 1 simple_featuregate_reader.go:177] Shutting down feature-gate-detector
I0506 20:14:21.973205 1 observer_polling.go:111] Observed file "/var/run/secrets/serving-cert/tls.key" has been created (hash="1907ee85293ecdb60da664fdea339f80c76df3a9a0226f0b2882e007d3ed75ce")
I0506 20:14:21.973235 1 genericapiserver.go:693] "[graceful-termination] pre-shutdown hooks completed" name="PreShutdownHooksStopped"
I0506 20:14:21.973277 1 genericapiserver.go:548] "[graceful-termination] shutdown event" name="ShutdownInitiated"
I0506 20:14:21.973285 1 base_controller.go:181] Shutting down LoggingSyncer ...
I0506 20:14:21.973296 1 observer_polling.go:111] Observed file "/var/run/configmaps/service-ca-bundle/service-ca.crt" has been created (hash="8aafa119520e121980eeb331b4abf7d46635d389767d795eac791a2ce365c5c8")
I0506 20:14:21.973307 1 base_controller.go:123] Shutting down worker of LoggingSyncer controller ...
I0506 20:14:21.973307 1 genericapiserver.go:651] "[graceful-termination] not going to wait for active watch request(s) to drain"
I0506 20:14:21.973316 1 base_controller.go:113] All LoggingSyncer workers have been terminated
I0506 20:14:21.973322 1 periodic.go:170] Shutting down
I0506 20:14:21.973330 1 base_controller.go:181] Shutting down ConfigController ...
I0506 20:14:21.973328 1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I0506 20:14:21.973342 1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
I0506 20:14:21.973346 1 object_count_tracker.go:151] "StorageObjectCountTracker pruner is exiting"
I0506 20:14:21.973348 1 base_controller.go:123] Shutting down worker of ConfigController controller ...
I0506 20:14:21.973346 1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
I0506 20:14:21.973359 1 base_controller.go:113] All ConfigController workers have been terminated
I0506 20:14:21.973369 1 requestheader_controller.go:194] Shutting down RequestHeaderAuthRequestController
I0506 20:14:21.973392 1 dynamic_serving_content.go:149] "Shutting down controller" name="serving-cert::/tmp/serving-cert-606839391/tls.crt::/tmp/serving-cert-606839391/tls.key"