W0216 11:14:26.545534 1 cmd.go:245] Using insecure, self-signed certificates
I0216 11:14:26.964756 1 start.go:223] Unable to read service ca bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory
I0216 11:14:26.966189 1 observer_polling.go:159] Starting file observer
I0216 11:14:27.470052 1 operator.go:59] Starting insights-operator v0.0.0-master+$Format:%H$
I0216 11:14:27.470250 1 legacy_config.go:327] Current config: {"report":false,"storagePath":"/var/lib/insights-operator","interval":"2h","endpoint":"https://console.redhat.com/api/ingress/v1/upload","conditionalGathererEndpoint":"https://console.redhat.com/api/gathering/v2/%s/gathering_rules","pull_report":{"endpoint":"https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/reports","delay":"60s","timeout":"3000s","min_retry":"30s"},"impersonate":"system:serviceaccount:openshift-insights:gather","enableGlobalObfuscation":false,"ocm":{"scaEndpoint":"https://api.openshift.com/api/accounts_mgmt/v1/certificates","scaInterval":"8h","scaDisabled":false,"clusterTransferEndpoint":"https://api.openshift.com/api/accounts_mgmt/v1/cluster_transfers/","clusterTransferInterval":"12h"},"disableInsightsAlerts":false,"processingStatusEndpoint":"https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/request/%s/status","reportEndpointTechPreview":"https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/request/%s/report"}
I0216 11:14:27.470540 1 secure_serving.go:57] Forcing use of http/1.1 only
W0216 11:14:27.470562 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.
W0216 11:14:27.470571 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.
W0216 11:14:27.470575 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.
W0216 11:14:27.470579 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.
W0216 11:14:27.470582 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.
W0216 11:14:27.470585 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.
I0216 11:14:27.470581 1 simple_featuregate_reader.go:171] Starting feature-gate-detector
I0216 11:14:27.473747 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-insights", Name:"insights-operator", UID:"c0bbf315-d849-4f72-86c6-bf775987c592", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'FeatureGatesInitialized' FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallPowerVS", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "ExternalOIDC", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "MetricsServer", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NodeDisruptionPolicy", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "GCPClusterHostedDNS", "GatewayAPI", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImagesAWS", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "MultiArchInstallAzure", "NetworkSegmentation", "NewOLM", "NodeSwap", "OVNObservability", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PersistentIPsForVirtualization", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesSupport", "VSphereMultiVCenters", "VolumeGroupSnapshot"}}
I0216 11:14:27.473763 1 operator.go:124] FeatureGates initialized: knownFeatureGates=[AWSEFSDriverVolumeMetrics AdditionalRoutingCapabilities AdminNetworkPolicy AlibabaPlatform AutomatedEtcdBackup AzureWorkloadIdentity BareMetalLoadBalancer BootcNodeManagement BuildCSIVolumes CSIDriverSharedResource ChunkSizeMiB CloudDualStackNodeIPs ClusterAPIInstall ClusterAPIInstallAWS ClusterAPIInstallAzure ClusterAPIInstallGCP ClusterAPIInstallIBMCloud ClusterAPIInstallNutanix ClusterAPIInstallOpenStack ClusterAPIInstallPowerVS ClusterAPIInstallVSphere ClusterMonitoringConfig DNSNameResolver DisableKubeletCloudCredentialProviders DynamicResourceAllocation EtcdBackendQuota EventedPLEG Example ExternalCloudProvider ExternalCloudProviderAzure ExternalCloudProviderExternal ExternalCloudProviderGCP ExternalOIDC GCPClusterHostedDNS GCPLabelsTags GatewayAPI HardwareSpeed IngressControllerLBSubnetsAWS InsightsConfig InsightsConfigAPI InsightsOnDemandDataGather InstallAlternateInfrastructureAWS KMSv1 MachineAPIMigration MachineAPIOperatorDisableMachineHealthCheckController MachineAPIProviderOpenStack MachineConfigNodes ManagedBootImages ManagedBootImagesAWS MaxUnavailableStatefulSet MetricsCollectionProfiles MetricsServer MixedCPUsAllocation MultiArchInstallAWS MultiArchInstallAzure MultiArchInstallGCP NetworkDiagnosticsConfig NetworkLiveMigration NetworkSegmentation NewOLM NodeDisruptionPolicy NodeSwap OVNObservability OnClusterBuild OpenShiftPodSecurityAdmission PersistentIPsForVirtualization PinnedImages PlatformOperators PrivateHostedZoneAWS ProcMountType RouteAdvertisements RouteExternalCertificate ServiceAccountTokenNodeBinding SetEIPForNLBIngressController SignatureStores SigstoreImageVerification StreamingCollectionEncodingToJSON StreamingCollectionEncodingToProtobuf TranslateStreamCloseWebsocketRequests UpgradeStatus UserNamespacesSupport VSphereControlPlaneMachineSet VSphereDriverConfiguration VSphereMultiVCenters VSphereStaticIPs ValidatingAdmissionPolicy VolumeGroupSnapshot]
I0216 11:14:27.476166 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
I0216 11:14:27.476174 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I0216 11:14:27.476186 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
I0216 11:14:27.476196 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I0216 11:14:27.476189 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0216 11:14:27.476180 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
I0216 11:14:27.476412 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/tmp/serving-cert-1982355568/tls.crt::/tmp/serving-cert-1982355568/tls.key"
I0216 11:14:27.476478 1 secure_serving.go:213] Serving securely on [::]:8443
I0216 11:14:27.476492 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
W0216 11:14:27.479096 1 configmapobserver.go:64] Cannot get the configuration config map: configmaps "insights-config" not found. Default configuration is used.
I0216 11:14:27.479122 1 secretconfigobserver.go:216] Legacy configuration set: enabled=false endpoint=https://console.redhat.com/api/ingress/v1/upload conditional_gatherer_endpoint=https://console.redhat.com/api/gathering/v2/%s/gathering_rules interval=2h0m0s token=false reportEndpoint=https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/reports initialPollingDelay=1m0s minRetryTime=30s pollingTimeout=50m0s processingStatusEndpoint=https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/request/%s/status
I0216 11:14:27.479151 1 base_controller.go:67] Waiting for caches to sync for ConfigController
I0216 11:14:27.484504 1 secretconfigobserver.go:249] Found cloud.openshift.com token
I0216 11:14:27.484525 1 secretconfigobserver.go:204] Legacy configuration updated: enabled=true endpoint=https://console.redhat.com/api/ingress/v1/upload conditional_gatherer_endpoint=https://console.redhat.com/api/gathering/v2/%s/gathering_rules interval=2h0m0s token=true reportEndpoint=https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/reports initialPollingDelay=1m0s minRetryTime=30s pollingTimeout=50m0s processingStatusEndpoint=https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/request/%s/status
I0216 11:14:27.489243 1 secretconfigobserver.go:119] support secret does not exist
I0216 11:14:27.493997 1 secretconfigobserver.go:249] Found cloud.openshift.com token
I0216 11:14:27.498598 1 secretconfigobserver.go:119] support secret does not exist
I0216 11:14:27.502367 1 recorder.go:161] Pruning old reports every 4h24m25s, max age is 288h0m0s
I0216 11:14:27.507777 1 periodic.go:214] Running clusterconfig gatherer
I0216 11:14:27.507777 1 controllerstatus.go:80] name=insightsuploader healthy=true reason= message=
I0216 11:14:27.507806 1 insightsuploader.go:86] Reporting status periodically to https://console.redhat.com/api/ingress/v1/upload every 2h0m0s, starting in 1m30s
I0216 11:14:27.507816 1 tasks_processing.go:45] number of workers: 64
I0216 11:14:27.507839 1 controllerstatus.go:80] name=insightsreport healthy=true reason= message=
I0216 11:14:27.507842 1 tasks_processing.go:69] worker 7 listening for tasks.
I0216 11:14:27.507845 1 insightsreport.go:296] Starting report retriever
I0216 11:14:27.507850 1 tasks_processing.go:69] worker 4 listening for tasks.
I0216 11:14:27.507851 1 insightsreport.go:298] Insights analysis reports will be downloaded from the https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/reports endpoint with a delay of 1m0s
I0216 11:14:27.507855 1 tasks_processing.go:69] worker 5 listening for tasks.
I0216 11:14:27.507859 1 tasks_processing.go:69] worker 6 listening for tasks.
I0216 11:14:27.507859 1 tasks_processing.go:69] worker 3 listening for tasks.
I0216 11:14:27.507863 1 tasks_processing.go:71] worker 6 working on overlapping_namespace_uids task.
I0216 11:14:27.507861 1 tasks_processing.go:69] worker 1 listening for tasks.
I0216 11:14:27.507867 1 tasks_processing.go:71] worker 3 working on openstack_dataplanedeployments task.
I0216 11:14:27.507863 1 tasks_processing.go:69] worker 36 listening for tasks.
I0216 11:14:27.507868 1 tasks_processing.go:69] worker 8 listening for tasks.
I0216 11:14:27.507875 1 tasks_processing.go:69] worker 9 listening for tasks.
I0216 11:14:27.507874 1 tasks_processing.go:69] worker 0 listening for tasks.
I0216 11:14:27.507879 1 tasks_processing.go:69] worker 51 listening for tasks.
I0216 11:14:27.507885 1 tasks_processing.go:69] worker 44 listening for tasks.
I0216 11:14:27.507879 1 tasks_processing.go:69] worker 10 listening for tasks.
I0216 11:14:27.507887 1 tasks_processing.go:69] worker 12 listening for tasks.
I0216 11:14:27.507883 1 tasks_processing.go:69] worker 11 listening for tasks.
I0216 11:14:27.507892 1 tasks_processing.go:69] worker 48 listening for tasks.
I0216 11:14:27.507895 1 tasks_processing.go:69] worker 2 listening for tasks.
I0216 11:14:27.507895 1 tasks_processing.go:69] worker 13 listening for tasks.
I0216 11:14:27.507899 1 tasks_processing.go:69] worker 14 listening for tasks.
I0216 11:14:27.507899 1 tasks_processing.go:69] worker 49 listening for tasks.
I0216 11:14:27.507905 1 tasks_processing.go:69] worker 26 listening for tasks.
I0216 11:14:27.507905 1 tasks_processing.go:69] worker 25 listening for tasks.
I0216 11:14:27.507906 1 tasks_processing.go:69] worker 50 listening for tasks.
I0216 11:14:27.507909 1 tasks_processing.go:69] worker 27 listening for tasks.
I0216 11:14:27.507906 1 tasks_processing.go:69] worker 47 listening for tasks.
I0216 11:14:27.507913 1 tasks_processing.go:69] worker 15 listening for tasks.
I0216 11:14:27.507915 1 tasks_processing.go:69] worker 32 listening for tasks.
I0216 11:14:27.507903 1 tasks_processing.go:69] worker 43 listening for tasks.
I0216 11:14:27.507915 1 tasks_processing.go:69] worker 31 listening for tasks.
I0216 11:14:27.507918 1 tasks_processing.go:69] worker 37 listening for tasks.
I0216 11:14:27.507921 1 tasks_processing.go:69] worker 28 listening for tasks.
I0216 11:14:27.507927 1 tasks_processing.go:69] worker 29 listening for tasks.
I0216 11:14:27.507929 1 tasks_processing.go:69] worker 45 listening for tasks.
I0216 11:14:27.507932 1 tasks_processing.go:69] worker 35 listening for tasks.
I0216 11:14:27.507926 1 tasks_processing.go:69] worker 59 listening for tasks.
I0216 11:14:27.507937 1 tasks_processing.go:69] worker 46 listening for tasks.
I0216 11:14:27.507939 1 tasks_processing.go:69] worker 30 listening for tasks.
I0216 11:14:27.507940 1 tasks_processing.go:69] worker 38 listening for tasks.
I0216 11:14:27.507922 1 tasks_processing.go:69] worker 33 listening for tasks.
I0216 11:14:27.507937 1 tasks_processing.go:69] worker 41 listening for tasks.
I0216 11:14:27.507938 1 tasks_processing.go:69] worker 40 listening for tasks.
I0216 11:14:27.507944 1 tasks_processing.go:69] worker 42 listening for tasks.
I0216 11:14:27.507949 1 tasks_processing.go:69] worker 55 listening for tasks.
I0216 11:14:27.507937 1 tasks_processing.go:69] worker 17 listening for tasks.
I0216 11:14:27.507952 1 tasks_processing.go:69] worker 22 listening for tasks.
I0216 11:14:27.507951 1 tasks_processing.go:69] worker 19 listening for tasks.
I0216 11:14:27.507954 1 tasks_processing.go:69] worker 53 listening for tasks.
I0216 11:14:27.507954 1 tasks_processing.go:69] worker 56 listening for tasks.
I0216 11:14:27.507956 1 tasks_processing.go:69] worker 54 listening for tasks.
I0216 11:14:27.507959 1 tasks_processing.go:69] worker 20 listening for tasks.
I0216 11:14:27.507961 1 tasks_processing.go:69] worker 58 listening for tasks.
I0216 11:14:27.507928 1 tasks_processing.go:69] worker 34 listening for tasks.
I0216 11:14:27.507965 1 tasks_processing.go:69] worker 62 listening for tasks.
I0216 11:14:27.507964 1 tasks_processing.go:69] worker 61 listening for tasks.
I0216 11:14:27.507961 1 tasks_processing.go:69] worker 39 listening for tasks.
I0216 11:14:27.507969 1 tasks_processing.go:69] worker 60 listening for tasks.
I0216 11:14:27.507969 1 tasks_processing.go:69] worker 57 listening for tasks.
I0216 11:14:27.507929 1 tasks_processing.go:69] worker 16 listening for tasks.
I0216 11:14:27.507946 1 tasks_processing.go:69] worker 21 listening for tasks.
I0216 11:14:27.507946 1 tasks_processing.go:69] worker 18 listening for tasks.
I0216 11:14:27.507947 1 tasks_processing.go:69] worker 52 listening for tasks.
I0216 11:14:27.507948 1 tasks_processing.go:69] worker 23 listening for tasks.
I0216 11:14:27.507954 1 tasks_processing.go:69] worker 24 listening for tasks.
I0216 11:14:27.507960 1 tasks_processing.go:69] worker 63 listening for tasks.
I0216 11:14:27.507963 1 tasks_processing.go:71] worker 7 working on node_logs task.
I0216 11:14:27.508033 1 tasks_processing.go:71] worker 0 working on machine_healthchecks task.
I0216 11:14:27.508039 1 tasks_processing.go:71] worker 1 working on ingress_certificates task.
I0216 11:14:27.508042 1 tasks_processing.go:71] worker 44 working on pod_network_connectivity_checks task.
I0216 11:14:27.508046 1 tasks_processing.go:71] worker 30 working on tsdb_status task.
I0216 11:14:27.508043 1 tasks_processing.go:71] worker 59 working on dvo_metrics task.
W0216 11:14:27.508068 1 gather_prometheus_tsdb_status.go:38] Unable to load metrics client, tsdb status cannot be collected: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory
I0216 11:14:27.508076 1 tasks_processing.go:71] worker 10 working on config_maps task.
I0216 11:14:27.508081 1 tasks_processing.go:71] worker 38 working on operators_pods_and_events task.
I0216 11:14:27.507962 1 tasks_processing.go:71] worker 4 working on openshift_apiserver_operator_logs task.
I0216 11:14:27.508118 1 tasks_processing.go:71] worker 32 working on sap_datahubs task.
I0216 11:14:27.508127 1 tasks_processing.go:71] worker 33 working on cluster_apiserver task.
I0216 11:14:27.508131 1 tasks_processing.go:71] worker 2 working on operators task.
I0216 11:14:27.508148 1 tasks_processing.go:71] worker 12 working on machine_autoscalers task.
I0216 11:14:27.508163 1 tasks_processing.go:71] worker 8 working on active_alerts task.
I0216 11:14:27.508230 1 tasks_processing.go:71] worker 49 working on storage_classes task.
I0216 11:14:27.508255 1 tasks_processing.go:71] worker 57 working on image_pruners task.
I0216 11:14:27.508072 1 tasks_processing.go:71] worker 11 working on kube_controller_manager_logs task.
I0216 11:14:27.508035 1 tasks_processing.go:71] worker 36 working on scheduler_logs task.
I0216 11:14:27.508079 1 tasks_processing.go:71] worker 30 working on storage_cluster task.
I0216 11:14:27.508251 1 tasks_processing.go:71] worker 15 working on pdbs task.
I0216 11:14:27.508039 1 tasks_processing.go:71] worker 51 working on proxies task.
W0216 11:14:27.508612 1 gather_active_alerts.go:54] Unable to load alerts client, no alerts will be collected: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory
I0216 11:14:27.508042 1 tasks_processing.go:71] worker 46 working on metrics task.
W0216 11:14:27.508678 1 gather_most_recent_metrics.go:64] Unable to load metrics client, no metrics will be collected: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory
I0216 11:14:27.508176 1 tasks_processing.go:71] worker 28 working on container_runtime_configs task.
I0216 11:14:27.508035 1 tasks_processing.go:71] worker 14 working on olm_operators task.
I0216 11:14:27.508039 1 tasks_processing.go:71] worker 63 working on sap_license_management_logs task.
I0216 11:14:27.508188 1 tasks_processing.go:71] worker 37 working on sap_pods task.
I0216 11:14:27.508195 1 tasks_processing.go:71] worker 13 working on container_images task.
I0216 11:14:27.508201 1 tasks_processing.go:71] worker 34 working on version task.
I0216 11:14:27.508191 1 tasks_processing.go:71] worker 31 working on nodenetworkconfigurationpolicies task.
I0216 11:14:27.508205 1 tasks_processing.go:71] worker 41 working on openshift_authentication_logs task.
I0216 11:14:27.508209 1 tasks_processing.go:71] worker 40 working on machine_sets task.
I0216 11:14:27.508209 1 tasks_processing.go:71] worker 45 working on machines task.
I0216 11:14:27.508212 1 tasks_processing.go:71] worker 42 working on support_secret task.
I0216 11:14:27.508205 1 tasks_processing.go:71] worker 43 working on schedulers task.
I0216 11:14:27.508213 1 tasks_processing.go:71] worker 29 working on openshift_logging task.
I0216 11:14:27.508216 1 tasks_processing.go:71] worker 55 working on networks task.
I0216 11:14:27.508219 1 tasks_processing.go:71] worker 53 working on qemu_kubevirt_launcher_logs task.
I0216 11:14:27.508219 1 tasks_processing.go:71] worker 56 working on nodenetworkstates task.
I0216 11:14:27.508219 1 tasks_processing.go:71] worker 48 working on authentication task.
I0216 11:14:27.508223 1 tasks_processing.go:71] worker 17 working on certificate_signing_requests task.
I0216 11:14:27.508223 1 tasks_processing.go:71] worker 54 working on image_registries task.
I0216 11:14:27.508226 1 tasks_processing.go:71] worker 22 working on machine_config_pools task.
I0216 11:14:27.508226 1 tasks_processing.go:71] worker 26 working on lokistack task.
I0216 11:14:27.508227 1 tasks_processing.go:71] worker 20 working on oauths task.
I0216 11:14:27.508229 1 tasks_processing.go:71] worker 19 working on install_plans task.
I0216 11:14:27.508231 1 tasks_processing.go:71] worker 58 working on silenced_alerts task.
I0216 11:14:27.508235 1 tasks_processing.go:71] worker 47 working on cost_management_metrics_configs task.
W0216 11:14:27.510538 1 gather_silenced_alerts.go:38] Unable to load alerts client, no alerts will be collected: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory
I0216 11:14:27.508236 1 tasks_processing.go:71] worker 62 working on openshift_machine_api_events task.
I0216 11:14:27.508237 1 tasks_processing.go:71] worker 35 working on nodes task.
I0216 11:14:27.508239 1 tasks_processing.go:71] worker 27 working on aggregated_monitoring_cr_names task.
I0216 11:14:27.508240 1 tasks_processing.go:71] worker 61 working on service_accounts task.
I0216 11:14:27.508243 1 tasks_processing.go:71] worker 18 working on openstack_version task.
I0216 11:14:27.508244 1 tasks_processing.go:71] worker 52 working on openstack_controlplanes task.
I0216 11:14:27.508245 1 tasks_processing.go:71] worker 25 working on machine_configs task.
I0216 11:14:27.508247 1 tasks_processing.go:71] worker 39 working on crds task.
I0216 11:14:27.508248 1 tasks_processing.go:71] worker 23 working on number_of_pods_and_netnamespaces_with_sdn_annotations task.
I0216 11:14:27.508251 1 tasks_processing.go:71] worker 60 working on feature_gates task.
I0216 11:14:27.508253 1 tasks_processing.go:71] worker 24 working on sap_config task.
I0216 11:14:27.508213 1 tasks_processing.go:71] worker 50 working on jaegers task.
I0216 11:14:27.508261 1 gather.go:180] gatherer "clusterconfig" function "tsdb_status" took 28.489µs to process 0 records
I0216 11:14:27.511489 1 gather.go:180] gatherer "clusterconfig" function "active_alerts" took 390.045µs to process 0 records
I0216 11:14:27.511498 1 gather.go:180] gatherer "clusterconfig" function "metrics" took 18.579µs to process 0 records
I0216 11:14:27.511503 1 gather.go:180] gatherer "clusterconfig" function "silenced_alerts" took 28.014µs to process 0 records
E0216 11:14:27.511507 1 gather.go:143] gatherer "clusterconfig" function "pod_network_connectivity_checks" failed with the error: the server could not find the requested resource (get podnetworkconnectivitychecks.controlplane.operator.openshift.io)
I0216 11:14:27.511511 1 gather.go:180] gatherer "clusterconfig" function "pod_network_connectivity_checks" took 3.068803ms to process 0 records
I0216 11:14:27.511516 1 gather.go:180] gatherer "clusterconfig" function "sap_datahubs" took 3.005703ms to process 0 records
I0216 11:14:27.511519 1 gather.go:180] gatherer "clusterconfig" function "machine_autoscalers" took 3.228412ms to process 0 records
I0216 11:14:27.508262 1 tasks_processing.go:71] worker 9 working on clusterroles task.
I0216 11:14:27.511526 1 tasks_processing.go:71] worker 46 working on validating_webhook_configurations task.
I0216 11:14:27.511541 1 tasks_processing.go:71] worker 8 working on image task.
I0216 11:14:27.511571 1 gather.go:180] gatherer "clusterconfig" function "openstack_dataplanedeployments" took 3.695008ms to process 0 records
I0216 11:14:27.511613 1 tasks_processing.go:74] worker 32 stopped.
E0216 11:14:27.511612 1 gather.go:143] gatherer "clusterconfig" function "machine_healthchecks" failed with the error: machinehealthchecks.machine.openshift.io is forbidden: User "system:serviceaccount:openshift-insights:gather" cannot list resource "machinehealthchecks" in API group "machine.openshift.io" at the cluster scope
I0216 11:14:27.511620 1 tasks_processing.go:71] worker 44 working on ingress task.
I0216 11:14:27.511626 1 gather.go:180] gatherer "clusterconfig" function "machine_healthchecks" took 3.567516ms to process 0 records
I0216 11:14:27.508269 1 tasks_processing.go:71] worker 21 working on infrastructures task.
I0216 11:14:27.511642 1 tasks_processing.go:74] worker 0 stopped.
I0216 11:14:27.508264 1 tasks_processing.go:71] worker 16 working on monitoring_persistent_volumes task.
I0216 11:14:27.511653 1 tasks_processing.go:74] worker 3 stopped.
I0216 11:14:27.511523 1 tasks_processing.go:71] worker 12 working on mutating_webhook_configurations task.
I0216 11:14:27.507962 1 tasks_processing.go:71] worker 5 working on ceph_cluster task.
I0216 11:14:27.511648 1 tasks_processing.go:71] worker 58 working on openstack_dataplanenodesets task.
I0216 11:14:27.516148 1 tasks_processing.go:74] worker 30 stopped.
I0216 11:14:27.516159 1 gather.go:180] gatherer "clusterconfig" function "storage_cluster" took 7.686567ms to process 0 records
I0216 11:14:27.516169 1 tasks_processing.go:74] worker 31 stopped.
I0216 11:14:27.516175 1 gather_sap_vsystem_iptables_logs.go:60] SAP resources weren't found
I0216 11:14:27.516185 1 gather.go:180] gatherer "clusterconfig" function "nodenetworkconfigurationpolicies" took 7.067686ms to process 0 records
I0216 11:14:27.516202 1 gather.go:180] gatherer "clusterconfig" function "sap_license_management_logs" took 7.348273ms to process 0 records
I0216 11:14:27.516208 1 tasks_processing.go:74] worker 63 stopped.
I0216 11:14:27.516210 1 gather.go:180] gatherer "clusterconfig" function "container_runtime_configs" took 7.4937ms to process 0 records
I0216 11:14:27.516216 1 tasks_processing.go:74] worker 28 stopped.
E0216 11:14:27.516219 1 gather.go:143] gatherer "clusterconfig" function "machines" failed with the error: machines.machine.openshift.io is forbidden: User "system:serviceaccount:openshift-insights:gather" cannot list resource "machines" in API group "machine.openshift.io" at the cluster scope
I0216 11:14:27.516226 1 gather.go:180] gatherer "clusterconfig" function "machines" took 7.002245ms to process 0 records
I0216 11:14:27.516230 1 tasks_processing.go:74] worker 45 stopped.
I0216 11:14:27.520594 1 tasks_processing.go:74] worker 40 stopped.
I0216 11:14:27.520605 1 gather.go:180] gatherer "clusterconfig" function "machine_sets" took 11.399403ms to process 0 records
I0216 11:14:27.520777 1 tasks_processing.go:74] worker 29 stopped.
I0216 11:14:27.520786 1 gather.go:180] gatherer "clusterconfig" function "openshift_logging" took 11.185324ms to process 0 records
I0216 11:14:27.520823 1 tasks_processing.go:74] worker 56 stopped.
I0216 11:14:27.520832 1 gather.go:180] gatherer "clusterconfig" function "nodenetworkstates" took 11.048334ms to process 0 records
I0216 11:14:27.520839 1 gather.go:180] gatherer "clusterconfig" function "cost_management_metrics_configs" took 10.291959ms to process 0 records
I0216 11:14:27.520842 1 gather.go:180] gatherer "clusterconfig" function "node_logs" took 12.83652ms to process 0 records
I0216 11:14:27.520847 1 tasks_processing.go:74] worker 7 stopped.
I0216 11:14:27.520850 1 tasks_processing.go:74] worker 47 stopped.
I0216 11:14:27.520861 1 tasks_processing.go:74] worker 26 stopped.
I0216 11:14:27.520864 1 gather.go:180] gatherer "clusterconfig" function "lokistack" took 10.635046ms to process 0 records
I0216 11:14:27.520872 1 tasks_processing.go:74] worker 37 stopped.
I0216 11:14:27.520875 1 gather.go:180] gatherer "clusterconfig" function "sap_pods" took 11.959626ms to process 0 records
I0216 11:14:27.521009 1 tasks_processing.go:74] worker 33 stopped.
I0216 11:14:27.521177 1 recorder.go:75] Recording config/apiserver with fingerprint=c78fc5f1397a0ada76bb1a9c24b86241b91e122e4ca2eaf1819f875e48162e4d
I0216 11:14:27.521194 1 gather.go:180] gatherer "clusterconfig" function "cluster_apiserver" took 12.873185ms to process 1 records
I0216 11:14:27.522427 1 tasks_processing.go:74] worker 57 stopped.
I0216 11:14:27.522634 1 gather_logs.go:145] no pods in openshift-apiserver-operator namespace were found
I0216 11:14:27.522640 1 gather_logs.go:145] no pods in openshift-kube-scheduler namespace were found
I0216 11:14:27.522658 1 gather_logs.go:145] no pods in openshift-authentication namespace were found
I0216 11:14:27.522660 1 recorder.go:75] Recording config/clusteroperator/imageregistry.operator.openshift.io/imagepruner/cluster with fingerprint=64a98d26de28288c57ddc30d3d4955818c80252734eb12f4da86fb4dacda97de
I0216 11:14:27.522668 1 gather.go:180] gatherer "clusterconfig" function "image_pruners" took 14.123595ms to process 1 records
I0216 11:14:27.522675 1 gather.go:180] gatherer "clusterconfig" function "openshift_apiserver_operator_logs" took 14.539328ms to process 0 records
I0216 11:14:27.522681 1 gather.go:180] gatherer "clusterconfig" function "scheduler_logs" took 14.227817ms to process 0 records
I0216 11:14:27.522686 1 gather.go:180] gatherer "clusterconfig" function "openshift_authentication_logs" took 13.48295ms to process 0 records
I0216 11:14:27.522690 1 tasks_processing.go:74] worker 41 stopped.
I0216 11:14:27.522693 1 tasks_processing.go:74] worker 4 stopped.
I0216 11:14:27.522696 1 tasks_processing.go:74] worker 36 stopped.
I0216 11:14:27.522825 1 tasks_processing.go:74] worker 51 stopped.
I0216 11:14:27.522883 1 recorder.go:75] Recording config/proxy with fingerprint=a2ff68d944a0a962fd7dfe69ff2531f257831bd6c731b427825bdbaff7df45a0
I0216 11:14:27.522893 1 gather.go:180] gatherer "clusterconfig" function "proxies" took 14.241022ms to process 1 records
I0216 11:14:27.523034 1 tasks_processing.go:74] worker 6 stopped.
I0216 11:14:27.523058 1 recorder.go:75] Recording config/namespaces_with_overlapping_uids with fingerprint=4f53cda18c2baa0c0354bb5f9a3ecbe5ed12ab4d8e11ba873c2f11161202b945
I0216 11:14:27.523071 1 gather.go:180] gatherer "clusterconfig" function "overlapping_namespace_uids" took 15.158847ms to process 1 records
I0216 11:14:27.527876 1 tasks_processing.go:74] worker 14 stopped.
I0216 11:14:27.527888 1 gather.go:180] gatherer "clusterconfig" function "olm_operators" took 19.135204ms to process 0 records
I0216 11:14:27.527906 1 tasks_processing.go:74] worker 48 stopped.
I0216 11:14:27.528077 1 recorder.go:75] Recording config/authentication with fingerprint=87b797ed07ecc7a27f7fa882b9afd655b7fc0d932d677b61dd0a66ebd52f814d
I0216 11:14:27.528089 1 gather.go:180] gatherer "clusterconfig" function "authentication" took 18.026802ms to process 1 records
I0216 11:14:27.528147 1 recorder.go:75] Recording config/pdbs/openshift-image-registry/image-registry with fingerprint=bbdc2ce317439154e526d2a8c8405647e94a3baaa3128f1bd5873fce52b6afaa
I0216 11:14:27.528169 1 recorder.go:75] Recording config/pdbs/openshift-ingress/router-default with fingerprint=0a5caa4c76cb64f4fe3352414df396cebf8620d5127cc40395e6099518aabb5f
I0216 11:14:27.528170 1 tasks_processing.go:74] worker 15 stopped.
I0216 11:14:27.528182 1 recorder.go:75] Recording config/pdbs/openshift-operator-lifecycle-manager/packageserver-pdb with fingerprint=e07672a7e6a0cb8c6eabbd7218cfcc5b6c5264fdbae59e15402ad438a39a148e
I0216 11:14:27.528188 1 gather.go:180] gatherer "clusterconfig" function "pdbs" took 19.44697ms to process 3 records
I0216 11:14:27.534061 1 tasks_processing.go:74] worker 22 stopped.
I0216 11:14:27.534074 1 gather.go:180] gatherer "clusterconfig" function "machine_config_pools" took 23.943848ms to process 0 records
I0216 11:14:27.534082 1 gather.go:180] gatherer "clusterconfig" function "openstack_version" took 23.255805ms to process 0 records
I0216 11:14:27.534087 1 tasks_processing.go:74] worker 18 stopped.
I0216 11:14:27.534092 1 tasks_processing.go:74] worker 5 stopped.
I0216 11:14:27.534099 1 gather.go:180] gatherer "clusterconfig" function "ceph_cluster" took 22.355556ms to process 0 records
I0216 11:14:27.534104 1 gather.go:180] gatherer "clusterconfig" function "jaegers" took 22.702716ms to process 0 records
I0216 11:14:27.534108 1 tasks_processing.go:74] worker 50 stopped.
I0216 11:14:27.534111 1 tasks_processing.go:74] worker 58 stopped.
I0216 11:14:27.534119 1 gather.go:180] gatherer "clusterconfig" function "openstack_dataplanenodesets" took 22.315245ms to process 0 records
I0216 11:14:27.534127 1 gather.go:180] gatherer "clusterconfig" function "sap_config" took 22.743059ms to process 0 records
I0216 11:14:27.534132 1 tasks_processing.go:74] worker 24 stopped.
I0216 11:14:27.534416 1 tasks_processing.go:74] worker 20 stopped.
I0216 11:14:27.534608 1 controller.go:119] Initializing last reported time to 0001-01-01T00:00:00Z
I0216 11:14:27.534629 1 controller.go:203] Source periodic-clusterconfig *controllerstatus.Simple is not ready
I0216 11:14:27.534638 1 controller.go:203] Source periodic-conditional *controllerstatus.Simple is not ready
I0216 11:14:27.534642 1 controller.go:203] Source periodic-workloads *controllerstatus.Simple is not ready
I0216 11:14:27.534645 1 recorder.go:75] Recording config/oauth with fingerprint=f528bf01acf8b8c1fe17ff85ba98fbb5390133c828c1fa5e8c20e76267b7d2b8
I0216 11:14:27.534657 1 controller.go:457] The operator is still being initialized
I0216 11:14:27.534660 1 gather.go:180] gatherer "clusterconfig" function "oauths" took 24.046126ms to process 1 records
I0216 11:14:27.534664 1 controller.go:482] The operator is healthy
I0216 11:14:27.534731 1 tasks_processing.go:74] worker 49 stopped.
I0216 11:14:27.534760 1 recorder.go:75] Recording config/storage/storageclasses/gp2-csi with fingerprint=996e8e9ea62d9ab3f8804b5070aa318e383409fcc3436e34c7206a1d85fc5a68
I0216 11:14:27.534779 1 recorder.go:75] Recording config/storage/storageclasses/gp3-csi with fingerprint=f8c551bb0b4c45fdfd054a8496dc48bc7221d1bc6bd6db656b3351dd8271df9f
I0216 11:14:27.534789 1 gather.go:180] gatherer "clusterconfig" function "storage_classes" took 26.285936ms to process 2 records
I0216 11:14:27.534796 1 gather.go:180] gatherer "clusterconfig" function "machine_configs" took 23.587789ms to process 0 records
I0216 11:14:27.534801 1 gather.go:180] gatherer "clusterconfig" function "openstack_controlplanes" took 23.668442ms to process 0 records
I0216 11:14:27.534815 1 tasks_processing.go:74] worker 25 stopped.
I0216 11:14:27.534818 1 tasks_processing.go:74] worker 52 stopped.
I0216 11:14:27.534882 1 tasks_processing.go:74] worker 55 stopped.
I0216 11:14:27.534901 1 recorder.go:75] Recording config/network with fingerprint=6194ccbdbbe28ddb11ed084275cb39c51ff4e0956629f2911d204027cf467ce5
I0216 11:14:27.534912 1 gather.go:180] gatherer "clusterconfig" function "networks" took 24.965406ms to process 1 records
I0216 11:14:27.534986 1 tasks_processing.go:74] worker 54 stopped.
I0216 11:14:27.535216 1 recorder.go:75] Recording config/clusteroperator/imageregistry.operator.openshift.io/config/cluster with fingerprint=85af7c4926e9f06f9d8e3180ff25b68fb00f24bf3e272b418c3192f3f4ed0062
I0216 11:14:27.535228 1 gather.go:180] gatherer "clusterconfig" function "image_registries" took 24.743081ms to process 1 records
I0216 11:14:27.535312 1 tasks_processing.go:74] worker 35 stopped.
I0216 11:14:27.536077 1 recorder.go:75] Recording config/node/ip-10-0-136-116.ec2.internal with fingerprint=93027a971551aa32a1c38485b440ea89be6cff157afa3c9e0a45eb56f920841d
I0216 11:14:27.536256 1 recorder.go:75] Recording config/node/ip-10-0-158-1.ec2.internal with fingerprint=6781567da886cdaf1fc3a68bdadaf8a90f75322dfddadc184b81ee6f9874ae16
I0216 11:14:27.536361 1 recorder.go:75] Recording config/node/ip-10-0-170-188.ec2.internal with fingerprint=5a698761eb5d21a6ba7b1fa1cdb4da7c8f340f127dabbf814ae383929938a4ce
I0216 11:14:27.536380 1 gather.go:180] gatherer "clusterconfig" function "nodes" took 24.18229ms to process 3 records
I0216 11:14:27.537015 1 gather_logs.go:145] no pods in openshift-kube-controller-manager namespace were found
I0216 11:14:27.537027 1 tasks_processing.go:74] worker 11 stopped.
I0216 11:14:27.537033 1 gather.go:180] gatherer "clusterconfig" function "kube_controller_manager_logs" took 28.656693ms to process 0 records
I0216 11:14:27.537317 1 tasks_processing.go:74] worker 60 stopped.
I0216 11:14:27.537433 1 recorder.go:75] Recording config/featuregate with fingerprint=21f31d451fd03517736bd8cb67c746acfb38ebe4248afbcd9c10eb0ffbe3e9cb
I0216 11:14:27.537443 1 gather.go:180] gatherer "clusterconfig" function "feature_gates" took 26.179262ms to process 1 records
W0216 11:14:27.538581 1 gather_dvo_metrics.go:210] Failed to read the DVO metrics. Trying again.
I0216 11:14:27.545133 1 tasks_processing.go:74] worker 62 stopped.
I0216 11:14:27.545156 1 gather.go:180] gatherer "clusterconfig" function "openshift_machine_api_events" took 34.570776ms to process 0 records
I0216 11:14:27.545341 1 tasks_processing.go:74] worker 42 stopped.
E0216 11:14:27.545385 1 gather.go:143] gatherer "clusterconfig" function "support_secret" failed with the error: secrets "support" not found
I0216 11:14:27.545401 1 gather.go:180] gatherer "clusterconfig" function "support_secret" took 36.118788ms to process 0 records
I0216 11:14:27.545463 1 recorder.go:75] Recording config/schedulers/cluster with fingerprint=1eadc3cc8d9b1e14413d980108ca43f6667e09402fda0083d7eb211d45f760eb
I0216 11:14:27.545476 1 gather.go:180] gatherer "clusterconfig" function "schedulers" took 35.931105ms to process 1 records
I0216 11:14:27.545488 1 tasks_processing.go:74] worker 43 stopped.
I0216 11:14:27.546647 1 tasks_processing.go:74] worker 8 stopped.
I0216 11:14:27.546731 1 recorder.go:75] Recording config/image with fingerprint=1edeb8176e3f493f76384cd49bdd6197a0a0497fff9beb1a0c808f6e2421bc08
I0216 11:14:27.546742 1 gather.go:180] gatherer "clusterconfig" function "image" took 35.091405ms to process 1 records
I0216 11:14:27.546748 1 gather.go:180] gatherer "clusterconfig" function "monitoring_persistent_volumes" took 35.093582ms to process 0 records
I0216 11:14:27.546762 1 tasks_processing.go:74] worker 16 stopped.
I0216 11:14:27.546806 1 gather_logs.go:145] no pods in namespace were found
I0216 11:14:27.546836 1 tasks_processing.go:74] worker 34 stopped.
I0216 11:14:27.546900 1 recorder.go:75] Recording config/version with fingerprint=93547b9059ecfe10cbe7e4b7e5f7ead544c5cdcfc34f544dc2d8b307d62c6045
I0216 11:14:27.546908 1 recorder.go:75] Recording config/id with fingerprint=0e3aed1a993efbd09aa6f3300cac555755c7958a4703736f81c3bdc2df77c9da
I0216 11:14:27.546912 1 gather.go:180] gatherer "clusterconfig" function "version" took 37.698296ms to process 2 records
I0216 11:14:27.546916 1 gather.go:180] gatherer "clusterconfig" function "qemu_kubevirt_launcher_logs" took 37.08915ms to process 0 records
I0216 11:14:27.546920 1 tasks_processing.go:74] worker 53 stopped.
I0216 11:14:27.546993 1 tasks_processing.go:74] worker 21 stopped.
I0216 11:14:27.547660 1 recorder.go:75] Recording config/infrastructure with fingerprint=a2cd75494f90b7d995c7720c08b0af14f833319efefaf1fbe390c7eb20c11493
I0216 11:14:27.547675 1 gather.go:180] gatherer "clusterconfig" function "infrastructures" took 35.355367ms to process 1 records
I0216 11:14:27.547731 1 tasks_processing.go:74] worker 46 stopped.
I0216 11:14:27.547809 1 recorder.go:75] Recording config/validatingwebhookconfigurations/multus.openshift.io with fingerprint=558f3c8f88d57bc74dadd10746c99cc59bc340f08f13cb131d8b272366552fce
I0216 11:14:27.547848 1 recorder.go:75] Recording config/validatingwebhookconfigurations/network-node-identity.openshift.io with fingerprint=5cf51e6990b00a942ac7b297de9d16910030ec6f4319f4b31ee8fc333e38545f
I0216 11:14:27.547863 1 recorder.go:75] Recording config/validatingwebhookconfigurations/performance-addon-operator with fingerprint=14627309481135c034feb1d8963a9e07bb22370ebfdf44254b7f27cb6bffd6eb
I0216 11:14:27.547876 1 recorder.go:75] Recording config/validatingwebhookconfigurations/snapshot.storage.k8s.io with fingerprint=bf5f161235f2c8bfc80969d0c013e2c0ce600bbac7b5a65147d5649fc0a1908c
I0216 11:14:27.547898 1 recorder.go:75] Recording config/validatingwebhookconfigurations/sre-clusterrolebindings-validation with fingerprint=574fd25954bf61bb5f6d0468cd9e978064f0126bfd34cb89a4e42d48818796d1
I0216 11:14:27.547920 1 recorder.go:75] Recording config/validatingwebhookconfigurations/sre-clusterroles-validation with fingerprint=cb4e2541003b2161dc956fe15525c65a314f6a789b8bf4dd0a4f92d28dd1dc53
I0216 11:14:27.547938 1 recorder.go:75] Recording config/validatingwebhookconfigurations/sre-ingress-config-validation with fingerprint=046cc62d774402048db73555d0a21ff5d32a0d75853368d6d6c78ce396552ebd
I0216 11:14:27.547961 1 recorder.go:75] Recording config/validatingwebhookconfigurations/sre-regular-user-validation with fingerprint=513cb88a5f427fa8b83ff11903a176e23eebb9923d73119b2c5dd07923df98a5
I0216 11:14:27.547982 1 recorder.go:75] Recording config/validatingwebhookconfigurations/sre-scc-validation with fingerprint=53744c6835d4794ffa8bee9901e3ab41473838aee8f9bc6f9c79fb29ad38780f
I0216 11:14:27.548013 1 recorder.go:75] Recording config/validatingwebhookconfigurations/sre-serviceaccount-validation with fingerprint=46f0e12a72ad7792ed8ebaaf1c881a6f4b85c9e94b8d0463bd06ca463b1b99bf
I0216 11:14:27.548030 1 recorder.go:75] Recording config/validatingwebhookconfigurations/sre-techpreviewnoupgrade-validation with fingerprint=2508c03a8530455561ab4269cda8c1d4e4026870b52442e7ea50ff071f69381e
I0216 11:14:27.548037 1 gather.go:180] gatherer "clusterconfig" function "validating_webhook_configurations" took 35.48149ms to process 11 records
I0216 11:14:27.548131 1 tasks_processing.go:74] worker 12 stopped.
I0216 11:14:27.548133 1 recorder.go:75] Recording config/mutatingwebhookconfigurations/aws-pod-identity with fingerprint=0925707cc572b9cb461829bfe7120944058be808d25b1461deac2491bf6237c3
I0216 11:14:27.548168 1 recorder.go:75] Recording config/mutatingwebhookconfigurations/sre-podimagespec-mutation with fingerprint=6cdd31609088ea44df23a48cfd3abd57cd7104b2e67fe194f6955b8bb7499a20
I0216 11:14:27.548186 1 recorder.go:75] Recording config/mutatingwebhookconfigurations/sre-service-mutation with fingerprint=93072dcd33beefaf4b9b6e52ad4bdc8ce2a0cde15cf1d8fa858a8ed822c8ffcc
I0216 11:14:27.548194 1 gather.go:180] gatherer "clusterconfig" function "mutating_webhook_configurations" took 35.479333ms to process 3 records
I0216 11:14:27.548276 1 tasks_processing.go:74] worker 44 stopped.
I0216 11:14:27.548293 1 recorder.go:75] Recording config/ingress with fingerprint=d0e376bbd3b64f68a18ffb143d5732c222de40280473668d49e2a85ec05623bb
I0216 11:14:27.548298 1 gather.go:180] gatherer "clusterconfig" function "ingress" took 35.579881ms to process 1 records
I0216 11:14:27.548306 1 gather.go:180] gatherer "clusterconfig" function "aggregated_monitoring_cr_names" took 36.561332ms to process 0 records
I0216 11:14:27.548310 1 tasks_processing.go:74] worker 27 stopped.
I0216 11:14:27.548641 1 tasks_processing.go:74] worker 17 stopped.
I0216 11:14:27.548672 1 gather.go:180] gatherer "clusterconfig" function "certificate_signing_requests" took 38.688527ms to process 0 records
I0216 11:14:27.552323 1 tasks_processing.go:74] worker 13 stopped.
I0216 11:14:27.553481 1 recorder.go:75] Recording config/pod/openshift-ovn-kubernetes/ovnkube-node-glqs4 with fingerprint=ad5d731b4a49b9ba86f67de999502235fd68160a372e1fa855818333a3e32a8e
I0216 11:14:27.553527 1 recorder.go:75] Recording config/running_containers with fingerprint=abab7253fda84fe3a410390d9fa04c20900862dd1ca986e23dc17f07afbb8535
I0216 11:14:27.553536 1 gather.go:180] gatherer "clusterconfig" function "container_images" took 43.349225ms to process 2 records
I0216 11:14:27.554039 1 sca.go:98] Pulling SCA certificates from https://api.openshift.com/api/accounts_mgmt/v1/certificates. Next check is in 8h0m0s
I0216 11:14:27.554041 1 cluster_transfer.go:78] checking the availability of cluster transfer. Next check is in 12h0m0s
W0216 11:14:27.554171 1 operator.go:286] started
I0216 11:14:27.554185 1 base_controller.go:67] Waiting for caches to sync for LoggingSyncer
I0216 11:14:27.558749 1 tasks_processing.go:74] worker 9 stopped.
I0216 11:14:27.558952 1 recorder.go:75] Recording cluster-scoped-resources/rbac.authorization.k8s.io/clusterroles/admin with fingerprint=399a83ba0d2698693725eb15693edbde6f34fbb0a5d310efae33257afa87784e
I0216 11:14:27.559028 1 recorder.go:75] Recording cluster-scoped-resources/rbac.authorization.k8s.io/clusterroles/edit with fingerprint=93c1b9715a221ef4412783c81b5cd0c91af3ff94651c11395ef0facb8d5a89c7
I0216 11:14:27.559256 1 gather.go:180] gatherer "clusterconfig" function "clusterroles" took 47.209045ms to process 2 records
I0216 11:14:27.559362 1 tasks_processing.go:74] worker 39 stopped.
I0216 11:14:27.560215 1 recorder.go:75] Recording config/crd/volumesnapshots.snapshot.storage.k8s.io with fingerprint=03ef2a6dd87769af041815d915f98984815cbde5844f0e6cd66f1e4a4a32e65e
I0216 11:14:27.560629 1 recorder.go:75] Recording config/crd/volumesnapshotcontents.snapshot.storage.k8s.io with fingerprint=06aedeb1c5dec76e36f8da5e9185ec39e2d907039113d93896998b5efa12d04d
I0216 11:14:27.560900 1 gather.go:180] gatherer "clusterconfig" function "crds" took 48.234052ms to process 2 records
I0216 11:14:27.563914 1 tasks_processing.go:74] worker 23 stopped.
I0216 11:14:27.563926 1 gather.go:180] gatherer "clusterconfig" function "number_of_pods_and_netnamespaces_with_sdn_annotations" took 52.835386ms to process 0 records
I0216 11:14:27.568550 1 controller.go:203] Source scaController *sca.Controller is not ready
I0216 11:14:27.568563 1 controller.go:203] Source clusterTransferController *clustertransfer.Controller is not ready
I0216 11:14:27.568568 1 controller.go:203] Source periodic-clusterconfig *controllerstatus.Simple is not ready
I0216 11:14:27.568571 1 controller.go:203] Source periodic-conditional *controllerstatus.Simple is not ready
I0216 11:14:27.568576 1 controller.go:203] Source periodic-workloads *controllerstatus.Simple is not ready
I0216 11:14:27.568596 1 controller.go:457] The operator is still being initialized
I0216 11:14:27.568601 1 controller.go:482] The operator is healthy
I0216 11:14:27.570565 1 requests.go:204] Asking for SCA certificate for x86_64 architecture
I0216 11:14:27.571066 1 prometheus_rules.go:88] Prometheus rules successfully created
E0216 11:14:27.573400 1 cluster_transfer.go:90] failed to pull cluster transfer: unable to retrieve cluster transfer data from https://api.openshift.com/api/accounts_mgmt/v1/cluster_transfers/: Get "https://api.openshift.com/api/accounts_mgmt/v1/cluster_transfers/?search=cluster_uuid+is+%!b(MISSING)c256-c115-4836-b825-e79c3c016999%!+(MISSING)and+status+is+%!a(MISSING)ccepted%!"(MISSING): dial tcp: lookup api.openshift.com on 172.30.0.10:53: read udp 10.130.0.12:52811->172.30.0.10:53: read: connection refused
I0216 11:14:27.573416 1 controllerstatus.go:80] name=clusterTransferController healthy=true reason=Disconnected message=failed to pull cluster transfer: unable to retrieve cluster transfer data from https://api.openshift.com/api/accounts_mgmt/v1/cluster_transfers/: Get "https://api.openshift.com/api/accounts_mgmt/v1/cluster_transfers/?search=cluster_uuid+is+%27374bc256-c115-4836-b825-e79c3c016999%27+and+status+is+%27accepted%27": dial tcp: lookup api.openshift.com on 172.30.0.10:53: read udp 10.130.0.12:52811->172.30.0.10:53: read: connection refused
W0216 11:14:27.573402 1 sca.go:117] Failed to pull SCA certs from https://api.openshift.com/api/accounts_mgmt/v1/certificates: unable to retrieve SCA certs data from https://api.openshift.com/api/accounts_mgmt/v1/certificates: Post "https://api.openshift.com/api/accounts_mgmt/v1/certificates": dial tcp: lookup api.openshift.com on 172.30.0.10:53: read udp 10.130.0.12:52811->172.30.0.10:53: read: connection refused
I0216 11:14:27.573433 1 controllerstatus.go:80] name=scaController healthy=true reason=NonHTTPError message=Failed to pull SCA certs from https://api.openshift.com/api/accounts_mgmt/v1/certificates: unable to retrieve SCA certs data from https://api.openshift.com/api/accounts_mgmt/v1/certificates: Post "https://api.openshift.com/api/accounts_mgmt/v1/certificates": dial tcp: lookup api.openshift.com on 172.30.0.10:53: read udp 10.130.0.12:52811->172.30.0.10:53: read: connection refused
I0216 11:14:27.576254 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I0216 11:14:27.576255 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0216 11:14:27.576328 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
I0216 11:14:27.579681 1 base_controller.go:73] Caches are synced for ConfigController
I0216 11:14:27.579695 1 base_controller.go:110] Starting #1 worker of ConfigController controller ...
I0216 11:14:27.585124 1 tasks_processing.go:74] worker 10 stopped.
E0216 11:14:27.585143 1 gather.go:143] gatherer "clusterconfig" function "config_maps" failed with the error: configmaps "cluster-monitoring-config" not found
E0216 11:14:27.585149 1 gather.go:143] gatherer "clusterconfig" function "config_maps" failed with the error: configmaps "gateway-mode-config" not found
E0216 11:14:27.585152 1 gather.go:143] gatherer "clusterconfig" function "config_maps" failed with the error: configmaps "insights-config" not found
I0216 11:14:27.585177 1 recorder.go:75] Recording config/configmaps/openshift-config/kube-root-ca.crt/ca.crt with fingerprint=d476c7d3f5b104863f08f481b1264dcc68cc272ecefb0ecb709b18a6afab034d
I0216 11:14:27.585191 1 recorder.go:75] Recording config/configmaps/openshift-config/openshift-install/invoker with fingerprint=76b482f683cd3ef9da02debac5b26080a5aeb06ff768ee5c21117514dff29d8a
I0216 11:14:27.585196 1 recorder.go:75] Recording config/configmaps/openshift-config/openshift-install/version with fingerprint=0bddb88b072029f25dde6f44cb877a44fb2f65ed4864939fbf7a3e42c0a485f6
I0216 11:14:27.585201 1 recorder.go:75] Recording config/configmaps/openshift-config/openshift-service-ca.crt/service-ca.crt with fingerprint=e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
I0216 11:14:27.585219 1 recorder.go:75] Recording config/configmaps/openshift-config/rosa-brand-logo/rosa-brand-logo.svg with fingerprint=6ed8ca4dd7a8eee7249182bc006e9649ce84d76c551ddfaaa33e55d8c4cc1ed0
I0216 11:14:27.585228 1 recorder.go:75] Recording config/configmaps/kube-system/cluster-config-v1/install-config with fingerprint=ab3811c6b83fd7b8e920094cfa3080d1b4ee3c35ec4c8379437b21d27bd6608d
I0216 11:14:27.585232 1 gather.go:180] gatherer "clusterconfig" function "config_maps" took 77.030497ms to process 6 records
I0216 11:14:27.586279 1 tasks_processing.go:74] worker 1 stopped.
I0216 11:14:27.586292 1 configmapobserver.go:84] configmaps "insights-config" not found
E0216 11:14:27.586292 1 gather.go:143] gatherer "clusterconfig" function "ingress_certificates" failed with the error: failed to get secret 'router-certs-default' in namespace 'openshift-ingress': secrets "router-certs-default" not found
E0216 11:14:27.586302 1 gather.go:143] gatherer "clusterconfig" function "ingress_certificates" failed with the error: failed to get secret '2ogak4isp3kfeculplc268j75l7kt9p5-primary-cert-bundle-secret' in namespace 'openshift-ingress-operator': secrets "2ogak4isp3kfeculplc268j75l7kt9p5-primary-cert-bundle-secret" not found
I0216 11:14:27.586357 1 recorder.go:75] Recording aggregated/ingress_controllers_certs with fingerprint=276770a040d9ffb5f89939f4e1e1b2e342b3b816443c29eaff49d2bc4ee1d9c2
I0216 11:14:27.586368 1 gather.go:180] gatherer "clusterconfig" function "ingress_certificates" took 78.227662ms to process 1 records
I0216 11:14:27.654225 1 base_controller.go:73] Caches are synced for LoggingSyncer
I0216 11:14:27.654254 1 base_controller.go:110] Starting #1 worker of LoggingSyncer controller ...
I0216 11:14:27.968212 1 gather_cluster_operator_pods_and_events.go:119] Found 18 pods with 21 containers
I0216 11:14:27.968226 1 gather_cluster_operator_pods_and_events.go:233] Maximum buffer size: 1198372 bytes
I0216 11:14:27.968750 1 gather_cluster_operator_pods_and_events.go:363] Fetching logs for dns container dns-default-9n6n4 pod in namespace openshift-dns (previous: false).
I0216 11:14:28.212314 1 gather_cluster_operator_pods_and_events.go:406] Failed to fetch log for dns-default-9n6n4 pod in namespace openshift-dns for failing operator dns (previous: false): "container \"dns\" in pod \"dns-default-9n6n4\" is waiting to start: ContainerCreating"
I0216 11:14:28.212333 1 gather_cluster_operator_pods_and_events.go:278] Error: "container \"dns\" in pod \"dns-default-9n6n4\" is waiting to start: ContainerCreating"
I0216 11:14:28.212340 1 gather_cluster_operator_pods_and_events.go:363] Fetching logs for kube-rbac-proxy container dns-default-9n6n4 pod in namespace openshift-dns (previous: false).
I0216 11:14:28.376438 1 gather_cluster_operator_pods_and_events.go:406] Failed to fetch log for dns-default-9n6n4 pod in namespace openshift-dns for failing operator kube-rbac-proxy (previous: false): "container \"kube-rbac-proxy\" in pod \"dns-default-9n6n4\" is waiting to start: ContainerCreating"
I0216 11:14:28.376459 1 gather_cluster_operator_pods_and_events.go:278] Error: "container \"kube-rbac-proxy\" in pod \"dns-default-9n6n4\" is waiting to start: ContainerCreating"
I0216 11:14:28.376469 1 gather_cluster_operator_pods_and_events.go:363] Fetching logs for dns container dns-default-9tgks pod in namespace openshift-dns (previous: false).
W0216 11:14:28.537426 1 gather_dvo_metrics.go:210] Failed to read the DVO metrics. Trying again.
I0216 11:14:28.600779 1 gather_cluster_operator_pods_and_events.go:406] Failed to fetch log for dns-default-9tgks pod in namespace openshift-dns for failing operator dns (previous: false): "container \"dns\" in pod \"dns-default-9tgks\" is waiting to start: ContainerCreating"
I0216 11:14:28.600796 1 gather_cluster_operator_pods_and_events.go:278] Error: "container \"dns\" in pod \"dns-default-9tgks\" is waiting to start: ContainerCreating"
I0216 11:14:28.600804 1 gather_cluster_operator_pods_and_events.go:363] Fetching logs for kube-rbac-proxy container dns-default-9tgks pod in namespace openshift-dns (previous: false).
I0216 11:14:28.773539 1 gather_cluster_operators.go:184] Unable to get configs.samples.operator.openshift.io resource due to: configs.samples.operator.openshift.io "cluster" not found
I0216 11:14:28.780183 1 gather_cluster_operator_pods_and_events.go:406] Failed to fetch log for dns-default-9tgks pod in namespace openshift-dns for failing operator kube-rbac-proxy (previous: false): "container \"kube-rbac-proxy\" in pod \"dns-default-9tgks\" is waiting to start: ContainerCreating"
I0216 11:14:28.780196 1 gather_cluster_operator_pods_and_events.go:278] Error: "container \"kube-rbac-proxy\" in pod \"dns-default-9tgks\" is waiting to start: ContainerCreating"
I0216 11:14:28.780206 1 gather_cluster_operator_pods_and_events.go:363] Fetching logs for dns container dns-default-tdvs7 pod in namespace openshift-dns (previous: false).
I0216 11:14:28.998001 1 gather_cluster_operator_pods_and_events.go:406] Failed to fetch log for dns-default-tdvs7 pod in namespace openshift-dns for failing operator dns (previous: false): "container \"dns\" in pod \"dns-default-tdvs7\" is waiting to start: ContainerCreating"
I0216 11:14:28.998020 1 gather_cluster_operator_pods_and_events.go:278] Error: "container \"dns\" in pod \"dns-default-tdvs7\" is waiting to start: ContainerCreating"
I0216 11:14:28.998030 1 gather_cluster_operator_pods_and_events.go:363] Fetching logs for kube-rbac-proxy container dns-default-tdvs7 pod in namespace openshift-dns (previous: false).
I0216 11:14:29.170474 1 tasks_processing.go:74] worker 2 stopped.
I0216 11:14:29.170530 1 recorder.go:75] Recording config/clusteroperator/console with fingerprint=bd61a15c70981c7f586451421c197973f3fb8e15ca05dd03d78eeddc3016948c
I0216 11:14:29.170558 1 recorder.go:75] Recording config/clusteroperator/csi-snapshot-controller with fingerprint=29657ddcf188525aca2f4dcbd2b6e356133eb0324c8561ea49bf5fd150615eaf
I0216 11:14:29.170619 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/csisnapshotcontroller/cluster with fingerprint=5adc514f4b63e2f1ecc68bf6f9c0af70c5eea04522a49524e102721b1c41f80e
I0216 11:14:29.170645 1 recorder.go:75] Recording config/clusteroperator/dns with fingerprint=f817f87519f2778d7fe178e18edbafebbaa852b841f8b83b5eccc14776065c23
I0216 11:14:29.170664 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/dns/default with fingerprint=9e7b4ce029030d3d8c3b49af92c556acdcc415000b40d3f969dbdc42c432b47f
I0216 11:14:29.170682 1 recorder.go:75] Recording config/clusteroperator/image-registry with fingerprint=4185f1ccd80fdf8e40358673b8e56d26a6443f518fbfc494dc9b6ad5681259fe
I0216 11:14:29.170701 1 recorder.go:75] Recording config/clusteroperator/ingress with fingerprint=8da77063eba2a458b2fa0a4df445f6dabde721adc8d4f9e8d2f570f44dc0665a
I0216 11:14:29.170734 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/ingresscontroller/openshift-ingress-operator/default with fingerprint=bd9f75811c5ba20f267ae5c4669440bd46a38b904ae83a7b33f16ddfff44dbe2
I0216 11:14:29.170747 1 recorder.go:75] Recording config/clusteroperator/insights with fingerprint=96de24bf22d3c1dcc1fed6d1d55dd01582573aac1b68d69e4eefd205250bc902
I0216 11:14:29.170761 1 recorder.go:75] Recording config/clusteroperator/kube-apiserver with fingerprint=02e989eef1b4fe60e6adb6422fc68b3171ba88fe4503b72d96f9bee44ac080c4
I0216 11:14:29.170769 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/kubeapiserver/cluster with fingerprint=51503bf0b784fcf65ea46bcaf1f72ac1a5c4d5dc211934f18f27871efed05762
I0216 11:14:29.170779 1 recorder.go:75] Recording config/clusteroperator/kube-controller-manager with fingerprint=eb6d93615b6a03f9dc1a6f150945952eed0556c97dbdfb140a1e170904d443c7
I0216 11:14:29.170789 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/kubecontrollermanager/cluster with fingerprint=ce90c0d4f367d7da085074268031798382ae7c54fdcb0a21f15a4818fe308c11
I0216 11:14:29.170798 1 recorder.go:75] Recording config/clusteroperator/kube-scheduler with fingerprint=59717afed44a7eb087d9310d7d6384b1a68df89bed239061efae8e79f5be3507
I0216 11:14:29.170807 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/kubescheduler/cluster with fingerprint=f2940fb9fd20c19951dfc295eb363b7fba0c505f5ae61f01967a063099e6b60a
I0216 11:14:29.170817 1 recorder.go:75] Recording config/clusteroperator/kube-storage-version-migrator with fingerprint=fbdfd576187cdc26acd7af8f1c3ad77e414ebe7eec90e37ea4fa0993c4612e3f
I0216 11:14:29.170829 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/kubestorageversionmigrator/cluster with fingerprint=9351181aa7e6ada41ef581ab31e13516c6b934cc95710154bafb2eb222cb58db
I0216 11:14:29.170844 1 recorder.go:75] Recording config/clusteroperator/monitoring with fingerprint=878e6d1e79b06df3d50f042c0dc9a345c839eab8a9ce834458d0b4d672c4d34b
I0216 11:14:29.170904 1 recorder.go:75] Recording config/clusteroperator/network with fingerprint=0ff09c9f41d179198cf223d301295d2f3082932ea228eba6c463de5a901fd24c
I0216 11:14:29.170911 1 recorder.go:75] Recording config/clusteroperator/network.operator.openshift.io/operatorpki/openshift-ovn-kubernetes/ovn with fingerprint=626a89d20e0deaed5b6dfb533acfe65f4bb1618bd200a703b62e60c5d16d94ab
I0216 11:14:29.170917 1 recorder.go:75] Recording config/clusteroperator/network.operator.openshift.io/operatorpki/openshift-ovn-kubernetes/signer with fingerprint=90410b16914712b85b3c4578716ad8c0ae072e688f4cd1e022bf76f20da3506d
I0216 11:14:29.170933 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/network/cluster with fingerprint=a93d15eaecb455a0e40ecb2826eeecc1533899204ddd3c3921d15ab70af7ae75
I0216 11:14:29.170961 1 recorder.go:75] Recording config/clusteroperator/node-tuning with fingerprint=7d8574a749b49cdd9e49bbf1b80559802513748f1b13dc596f17298977686620
I0216 11:14:29.170976 1 recorder.go:75] Recording config/clusteroperator/openshift-apiserver with fingerprint=0605926098b56866ce049d886e68099ee784cc2810c2583e4d83f9974a0a543f
I0216 11:14:29.170985 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/openshiftapiserver/cluster with fingerprint=e712e6cf27339b441e4ed1f4cde91dbde7e952698ba93407e4457db63a4a4c76
I0216 11:14:29.170995 1 recorder.go:75] Recording config/clusteroperator/openshift-controller-manager with fingerprint=53803b37b9a92dad5fb1e64c532b44d2130946e457239b1040b02d85aeca5bf5
I0216 11:14:29.171003 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/openshiftcontrollermanager/cluster with fingerprint=d71a0f4672f9b45d9fc8293bf1687afc650fd28d32e2e30de27523fe7b4eadf7
I0216 11:14:29.171011 1 recorder.go:75] Recording config/clusteroperator/openshift-samples with fingerprint=5ea691d0360cb39abfc39b6583f45d044af93690df6886c0ca821860516b9e16
I0216 11:14:29.171021 1 recorder.go:75] Recording config/clusteroperator/operator-lifecycle-manager with fingerprint=71af02942debdf8416d8aa628662281fc2a31c29cca62b007cf1ca44b4679544
I0216 11:14:29.171029 1 recorder.go:75] Recording config/clusteroperator/operator-lifecycle-manager-catalog with fingerprint=cc4ae6e6842e97b06b5d6c9e290859bf740049a8d88d3e81d1329196f08aa04a
I0216 11:14:29.171040 1 recorder.go:75] Recording config/clusteroperator/operator-lifecycle-manager-packageserver with fingerprint=eccea9fd11a36db0ae524f3ee526844299d38d986faccdb0dbd50879e22ddfae
I0216 11:14:29.171051 1 recorder.go:75] Recording config/clusteroperator/service-ca with fingerprint=9eed620656ff44cde0fd05d800aaed41baa17f22b68df45611beff6f247b4a1b
I0216 11:14:29.171066 1 recorder.go:75] Recording config/clusteroperator/storage with fingerprint=22f9f7fbcf824e6e62ad28462770953143152b56679eeeb80abc5009bf88bf35
I0216 11:14:29.171079 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/clustercsidriver/ebs.csi.aws.com with fingerprint=7e1ab8f8cfcd9d249b5b213939fe5144bb83db3725475461728bea44a002c3be
I0216 11:14:29.171086 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/storage/cluster with fingerprint=8e480f8c1ce1b39baac42d8ec780c57c2592929ae0c801b61ffad49ba13f33ad
I0216 11:14:29.171091 1 gather.go:180] gatherer "clusterconfig" function "operators" took 1.662327213s to process 35 records
I0216 11:14:29.174456 1 gather_cluster_operator_pods_and_events.go:406] Failed to fetch log for dns-default-tdvs7 pod in namespace openshift-dns for failing operator kube-rbac-proxy (previous: false): "container \"kube-rbac-proxy\" in pod \"dns-default-tdvs7\" is waiting to start: ContainerCreating"
I0216 11:14:29.174468 1 gather_cluster_operator_pods_and_events.go:278] Error: "container \"kube-rbac-proxy\" in pod \"dns-default-tdvs7\" is waiting to start: ContainerCreating"
I0216 11:14:29.174477 1 gather_cluster_operator_pods_and_events.go:363] Fetching logs for dns-node-resolver container node-resolver-dx7vz pod in namespace openshift-dns (previous: false).
I0216 11:14:29.373484 1 gather_cluster_operator_pods_and_events.go:278] Error: "log buffer is empty"
I0216 11:14:29.373503 1 gather_cluster_operator_pods_and_events.go:363] Fetching logs for dns-node-resolver container node-resolver-sffnm pod in namespace openshift-dns (previous: false).
W0216 11:14:29.537605 1 gather_dvo_metrics.go:210] Failed to read the DVO metrics. Trying again.
I0216 11:14:29.575803 1 gather_cluster_operator_pods_and_events.go:278] Error: "log buffer is empty"
I0216 11:14:29.575822 1 gather_cluster_operator_pods_and_events.go:363] Fetching logs for dns-node-resolver container node-resolver-ttszj pod in namespace openshift-dns (previous: false).
I0216 11:14:29.772792 1 gather_cluster_operator_pods_and_events.go:278] Error: "log buffer is empty"
I0216 11:14:29.772809 1 gather_cluster_operator_pods_and_events.go:363] Fetching logs for registry container image-registry-6fd859f845-gbpcp pod in namespace openshift-image-registry (previous: false).
I0216 11:14:29.974197 1 gather_cluster_operator_pods_and_events.go:406] Failed to fetch log for image-registry-6fd859f845-gbpcp pod in namespace openshift-image-registry for failing operator registry (previous: false): "container \"registry\" in pod \"image-registry-6fd859f845-gbpcp\" is waiting to start: ContainerCreating"
I0216 11:14:29.974214 1 gather_cluster_operator_pods_and_events.go:278] Error: "container \"registry\" in pod \"image-registry-6fd859f845-gbpcp\" is waiting to start: ContainerCreating"
I0216 11:14:29.974223 1 gather_cluster_operator_pods_and_events.go:363] Fetching logs for registry container image-registry-6fd859f845-l7kh9 pod in namespace openshift-image-registry (previous: false).
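Every recorder.go:75 line above pairs a record path with a 64-hex fingerprint. Those digests are consistent with SHA-256 over the record payload; the empty config/dvo_metrics record later in the log carries e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855, the well-known SHA-256 digest of empty input, and a duplicate fingerprint triggers a warning rather than a second write. A sketch of fingerprinting plus duplicate detection under that assumption; it does not reproduce the operator's actual recorder.

// Sketch: hex SHA-256 fingerprint per record, with first-path dedup.
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// fingerprint returns the hex SHA-256 digest of a record's bytes.
func fingerprint(data []byte) string {
	sum := sha256.Sum256(data)
	return hex.EncodeToString(sum[:])
}

func main() {
	seen := map[string]string{} // fingerprint -> first path recorded

	record := func(path string, data []byte) {
		fp := fingerprint(data)
		if prev, ok := seen[fp]; ok {
			fmt.Printf("warning: the record with the same fingerprint %q was already recorded at path %q\n", fp, prev)
			return
		}
		seen[fp] = path
		fmt.Printf("Recording %s with fingerprint=%s\n", path, fp)
	}

	record("config/clusteroperator/dns", []byte(`{"kind":"ClusterOperator"}`))
	record("config/dvo_metrics", nil) // empty payload hashes to e3b0c442…
}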
I0216 11:14:30.175667 1 gather_cluster_operator_pods_and_events.go:406] Failed to fetch log for image-registry-6fd859f845-l7kh9 pod in namespace openshift-image-registry for failing operator registry (previous: false): "container \"registry\" in pod \"image-registry-6fd859f845-l7kh9\" is waiting to start: ContainerCreating"
I0216 11:14:30.175684 1 gather_cluster_operator_pods_and_events.go:278] Error: "container \"registry\" in pod \"image-registry-6fd859f845-l7kh9\" is waiting to start: ContainerCreating"
I0216 11:14:30.175694 1 gather_cluster_operator_pods_and_events.go:363] Fetching logs for registry container image-registry-77554b6c66-7dn6n pod in namespace openshift-image-registry (previous: false).
I0216 11:14:30.385867 1 gather_cluster_operator_pods_and_events.go:406] Failed to fetch log for image-registry-77554b6c66-7dn6n pod in namespace openshift-image-registry for failing operator registry (previous: false): "container \"registry\" in pod \"image-registry-77554b6c66-7dn6n\" is waiting to start: ContainerCreating"
I0216 11:14:30.385884 1 gather_cluster_operator_pods_and_events.go:278] Error: "container \"registry\" in pod \"image-registry-77554b6c66-7dn6n\" is waiting to start: ContainerCreating"
I0216 11:14:30.385895 1 gather_cluster_operator_pods_and_events.go:363] Fetching logs for node-ca container node-ca-5zd5n pod in namespace openshift-image-registry (previous: false).
W0216 11:14:30.537915 1 gather_dvo_metrics.go:210] Failed to read the DVO metrics. Trying again.
I0216 11:14:30.574841 1 gather_cluster_operator_pods_and_events.go:278] Error: "log buffer is empty"
I0216 11:14:30.574859 1 gather_cluster_operator_pods_and_events.go:363] Fetching logs for node-ca container node-ca-s56wt pod in namespace openshift-image-registry (previous: false).
I0216 11:14:30.773573 1 gather_cluster_operator_pods_and_events.go:278] Error: "log buffer is empty"
I0216 11:14:30.773593 1 gather_cluster_operator_pods_and_events.go:363] Fetching logs for node-ca container node-ca-zqdlt pod in namespace openshift-image-registry (previous: false).
I0216 11:14:30.975156 1 gather_cluster_operator_pods_and_events.go:278] Error: "log buffer is empty"
I0216 11:14:30.975175 1 gather_cluster_operator_pods_and_events.go:363] Fetching logs for router container router-default-5878cf9d4c-d9h57 pod in namespace openshift-ingress (previous: false).
I0216 11:14:31.176641 1 gather_cluster_operator_pods_and_events.go:406] Failed to fetch log for router-default-5878cf9d4c-d9h57 pod in namespace openshift-ingress for failing operator router (previous: false): "container \"router\" in pod \"router-default-5878cf9d4c-d9h57\" is waiting to start: ContainerCreating"
I0216 11:14:31.176657 1 gather_cluster_operator_pods_and_events.go:278] Error: "container \"router\" in pod \"router-default-5878cf9d4c-d9h57\" is waiting to start: ContainerCreating"
I0216 11:14:31.176666 1 gather_cluster_operator_pods_and_events.go:363] Fetching logs for router container router-default-5878cf9d4c-w9djj pod in namespace openshift-ingress (previous: false).
I0216 11:14:31.374197 1 gather_cluster_operator_pods_and_events.go:406] Failed to fetch log for router-default-5878cf9d4c-w9djj pod in namespace openshift-ingress for failing operator router (previous: false): "container \"router\" in pod \"router-default-5878cf9d4c-w9djj\" is waiting to start: ContainerCreating"
I0216 11:14:31.374215 1 gather_cluster_operator_pods_and_events.go:278] Error: "container \"router\" in pod \"router-default-5878cf9d4c-w9djj\" is waiting to start: ContainerCreating"
I0216 11:14:31.374227 1 gather_cluster_operator_pods_and_events.go:363] Fetching logs for router container router-default-7d6b8b7d65-nw7q5 pod in namespace openshift-ingress (previous: false).
W0216 11:14:31.537303 1 gather_dvo_metrics.go:210] Failed to read the DVO metrics. Trying again.
I0216 11:14:31.573692 1 gather_cluster_operator_pods_and_events.go:406] Failed to fetch log for router-default-7d6b8b7d65-nw7q5 pod in namespace openshift-ingress for failing operator router (previous: false): "container \"router\" in pod \"router-default-7d6b8b7d65-nw7q5\" is waiting to start: ContainerCreating"
I0216 11:14:31.573727 1 gather_cluster_operator_pods_and_events.go:278] Error: "container \"router\" in pod \"router-default-7d6b8b7d65-nw7q5\" is waiting to start: ContainerCreating"
I0216 11:14:31.573739 1 gather_cluster_operator_pods_and_events.go:363] Fetching logs for serve-healthcheck-canary container ingress-canary-6gv9v pod in namespace openshift-ingress-canary (previous: false).
I0216 11:14:31.773869 1 gather_cluster_operator_pods_and_events.go:406] Failed to fetch log for ingress-canary-6gv9v pod in namespace openshift-ingress-canary for failing operator serve-healthcheck-canary (previous: false): "container \"serve-healthcheck-canary\" in pod \"ingress-canary-6gv9v\" is waiting to start: ContainerCreating"
I0216 11:14:31.773886 1 gather_cluster_operator_pods_and_events.go:278] Error: "container \"serve-healthcheck-canary\" in pod \"ingress-canary-6gv9v\" is waiting to start: ContainerCreating"
I0216 11:14:31.773895 1 gather_cluster_operator_pods_and_events.go:363] Fetching logs for serve-healthcheck-canary container ingress-canary-c5wc5 pod in namespace openshift-ingress-canary (previous: false).
I0216 11:14:31.973973 1 gather_cluster_operator_pods_and_events.go:406] Failed to fetch log for ingress-canary-c5wc5 pod in namespace openshift-ingress-canary for failing operator serve-healthcheck-canary (previous: false): "container \"serve-healthcheck-canary\" in pod \"ingress-canary-c5wc5\" is waiting to start: ContainerCreating"
I0216 11:14:31.973991 1 gather_cluster_operator_pods_and_events.go:278] Error: "container \"serve-healthcheck-canary\" in pod \"ingress-canary-c5wc5\" is waiting to start: ContainerCreating"
I0216 11:14:31.974000 1 gather_cluster_operator_pods_and_events.go:363] Fetching logs for serve-healthcheck-canary container ingress-canary-st6gx pod in namespace openshift-ingress-canary (previous: false).
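The interleaved "Fetching logs for ..." and "Failed to fetch log ...: ContainerCreating" pairs are per-container log requests that fail while the target container is still being created. A hedged client-go sketch of such a request, reusing names from the log; the quoted error text is produced by the kubelet, not by this code.

// Sketch: fetch one container's logs; while the container is in
// ContainerCreating the request fails with "is waiting to start".
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	previous := false
	fmt.Printf("Fetching logs for router container router-default-5878cf9d4c-d9h57 pod in namespace openshift-ingress (previous: %v).\n", previous)

	req := client.CoreV1().Pods("openshift-ingress").GetLogs(
		"router-default-5878cf9d4c-d9h57",
		&corev1.PodLogOptions{Container: "router", Previous: previous},
	)
	data, err := req.DoRaw(context.TODO())
	if err != nil {
		// For a creating container the error reads roughly:
		// container "router" in pod "…" is waiting to start: ContainerCreating
		fmt.Printf("Failed to fetch log: %v\n", err)
		return
	}
	fmt.Printf("fetched %d bytes of logs\n", len(data))
}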
I0216 11:14:32.175562 1 gather_cluster_operator_pods_and_events.go:406] Failed to fetch log for ingress-canary-st6gx pod in namespace openshift-ingress-canary for failing operator serve-healthcheck-canary (previous: false): "container \"serve-healthcheck-canary\" in pod \"ingress-canary-st6gx\" is waiting to start: ContainerCreating"
I0216 11:14:32.175579 1 gather_cluster_operator_pods_and_events.go:278] Error: "container \"serve-healthcheck-canary\" in pod \"ingress-canary-st6gx\" is waiting to start: ContainerCreating"
I0216 11:14:32.175597 1 tasks_processing.go:74] worker 38 stopped.
I0216 11:14:32.175664 1 recorder.go:75] Recording events/openshift-dns-operator with fingerprint=d6004223ca9400e52e05cebabff5d28cf6f618fa5cc33b2452431e43b4c0e677
I0216 11:14:32.175699 1 recorder.go:75] Recording events/openshift-dns with fingerprint=c561614b78b6f2222c1268423593429e8379c2ae5b754af0cb6a22ce87b83cfb
I0216 11:14:32.175765 1 recorder.go:75] Recording events/openshift-image-registry with fingerprint=e4d57f6e50d4751974f8b1d3fff3fce5d1cdac5de5b35cd69c95ac95cb8d6e56
I0216 11:14:32.175782 1 recorder.go:75] Recording events/openshift-ingress-operator with fingerprint=af2dc4042f3138ac20076da23fef7fec444eb146d9ab442f0e95ac2e1f3ad491
I0216 11:14:32.175812 1 recorder.go:75] Recording events/openshift-ingress with fingerprint=758ad695a9d178afc72a38733f83255d3410b42b4f45994f186c8c9d2ca7d36e
I0216 11:14:32.175823 1 recorder.go:75] Recording events/openshift-ingress-canary with fingerprint=7673e29b4ce8e51ca042744d40673c8ad125796296c4a0000bb733dad3db58b8
I0216 11:14:32.175829 1 gather.go:180] gatherer "clusterconfig" function "operators_pods_and_events" took 4.667503657s to process 6 records
W0216 11:14:32.537987 1 gather_dvo_metrics.go:210] Failed to read the DVO metrics. Trying again.
W0216 11:14:32.538010 1 gather_dvo_metrics.go:117] Unable to read metrics from endpoint "http://deployment-validation-operator-metrics.openshift-deployment-validation-operator.svc:8383": DVO metrics service was not available within the 5s timeout: context deadline exceeded
I0216 11:14:32.538024 1 tasks_processing.go:74] worker 59 stopped.
E0216 11:14:32.538041 1 gather.go:143] gatherer "clusterconfig" function "dvo_metrics" failed with the error: DVO metrics service was not available within the 5s timeout: context deadline exceeded
I0216 11:14:32.538057 1 recorder.go:75] Recording config/dvo_metrics with fingerprint=e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
W0216 11:14:32.538070 1 gather.go:158] issue recording gatherer "clusterconfig" function "dvo_metrics" result "config/dvo_metrics" because of the warning: warning: the record with the same fingerprint "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855" was already recorded at path "config/configmaps/openshift-config/openshift-service-ca.crt/service-ca.crt", recording another one with a different path "config/dvo_metrics"
I0216 11:14:32.538091 1 gather.go:180] gatherer "clusterconfig" function "dvo_metrics" took 5.029965765s to process 1 records
I0216 11:14:39.741043 1 tasks_processing.go:74] worker 19 stopped.
I0216 11:14:39.741073 1 recorder.go:75] Recording config/installplans with fingerprint=7b887df561a3a9e6ef0dc672845aa5d56e348505006b7496d3a2f83892b0c95b
I0216 11:14:39.741085 1 gather.go:180] gatherer "clusterconfig" function "install_plans" took 12.230576988s to process 1 records
I0216 11:14:40.517295 1 tasks_processing.go:74] worker 61 stopped.
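The dvo_metrics failure above shows a retry-until-deadline pattern: the gatherer logs "Failed to read the DVO metrics. Trying again." once per second (the warning timestamps are exactly 1s apart) until a 5s budget expires with context deadline exceeded. A sketch of that pattern follows; the endpoint and the 5s deadline come from the log, while the plain HTTP GET and the ticker details are assumptions, not the gatherer's actual code.

// Sketch: poll an HTTP endpoint once per second until a 5s context deadline.
package main

import (
	"context"
	"fmt"
	"log"
	"net/http"
	"time"
)

func readDVOMetrics(ctx context.Context, endpoint string) error {
	ctx, cancel := context.WithTimeout(ctx, 5*time.Second)
	defer cancel()

	ticker := time.NewTicker(1 * time.Second)
	defer ticker.Stop()

	for {
		req, err := http.NewRequestWithContext(ctx, http.MethodGet, endpoint, nil)
		if err != nil {
			return err
		}
		resp, err := http.DefaultClient.Do(req)
		if err == nil {
			resp.Body.Close()
			return nil // the metrics endpoint answered
		}
		log.Println("Failed to read the DVO metrics. Trying again.")

		select {
		case <-ticker.C:
			// retry on the next tick
		case <-ctx.Done():
			return fmt.Errorf("DVO metrics service was not available within the 5s timeout: %w", ctx.Err())
		}
	}
}

func main() {
	err := readDVOMetrics(context.Background(),
		"http://deployment-validation-operator-metrics.openshift-deployment-validation-operator.svc:8383")
	if err != nil {
		log.Fatal(err)
	}
}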
I0216 11:14:40.517492 1 recorder.go:75] Recording config/serviceaccounts with fingerprint=8a644e309da8e0a3d935874cc9ac83377c54a88d9584a242d829186956c0aaa7
I0216 11:14:40.517506 1 gather.go:180] gatherer "clusterconfig" function "service_accounts" took 13.006548023s to process 1 records
E0216 11:14:40.517543 1 periodic.go:252] clusterconfig failed after 13.009s with: function "pod_network_connectivity_checks" failed with an error, function "machine_healthchecks" failed with an error, function "machines" failed with an error, function "support_secret" failed with an error, function "config_maps" failed with an error, function "ingress_certificates" failed with an error, function "dvo_metrics" failed with an error
I0216 11:14:40.517556 1 controllerstatus.go:89] name=periodic-clusterconfig healthy=false reason=PeriodicGatherFailed message=Source clusterconfig could not be retrieved: function "pod_network_connectivity_checks" failed with an error, function "machine_healthchecks" failed with an error, function "machines" failed with an error, function "support_secret" failed with an error, function "config_maps" failed with an error, function "ingress_certificates" failed with an error, function "dvo_metrics" failed with an error
I0216 11:14:40.517562 1 periodic.go:214] Running workloads gatherer
I0216 11:14:40.517573 1 tasks_processing.go:45] number of workers: 2
I0216 11:14:40.517578 1 tasks_processing.go:69] worker 1 listening for tasks.
I0216 11:14:40.517581 1 tasks_processing.go:71] worker 1 working on workload_info task.
I0216 11:14:40.517593 1 tasks_processing.go:69] worker 0 listening for tasks.
I0216 11:14:40.517672 1 tasks_processing.go:71] worker 0 working on helmchart_info task.
I0216 11:14:40.543560 1 gather_workloads_info.go:257] Loaded pods in 0s, will wait 21s for image data
I0216 11:14:40.550356 1 tasks_processing.go:74] worker 0 stopped.
I0216 11:14:40.550379 1 gather.go:180] gatherer "workloads" function "helmchart_info" took 32.658396ms to process 0 records
I0216 11:14:40.553877 1 gather_workloads_info.go:366] No image sha256:b34e84d56775e42b7d832d14c4f9dc302fee37cd81ba221397cd8acba2089d20 (11ms)
I0216 11:14:40.564365 1 gather_workloads_info.go:366] No image sha256:0f31e990f9ca9d15dcb1b25325c8265515fcc06381909349bb021103827585c6 (10ms)
I0216 11:14:40.579139 1 gather_workloads_info.go:366] No image sha256:0d1d37dbdb726e924b519ef27e52e9719601fab838ae75f72c8aca11e8c3b4cc (15ms)
I0216 11:14:40.589679 1 gather_workloads_info.go:366] No image sha256:2bf8536171476b2d616cf62b4d94d2f1dae34aca6ea6bfdb65e764a8d9675891 (11ms)
I0216 11:14:40.599873 1 gather_workloads_info.go:366] No image sha256:79449e16b1207223f1209d19888b879eb56a8202c53df4800e09b231392cf219 (10ms)
I0216 11:14:40.610050 1 gather_workloads_info.go:366] No image sha256:59f553035bc347fc7205f1c071897bc2606b98525d6b9a3aca62fc9cd7078a57 (10ms)
I0216 11:14:40.620384 1 gather_workloads_info.go:366] No image sha256:33d7e5c63340e93b5a063de538017ac693f154e3c27ee2ef8a8a53bb45583552 (10ms)
I0216 11:14:40.630908 1 gather_workloads_info.go:366] No image sha256:64ef34275f7ea992f5a4739cf7a724e55806bfab0c752fc0eccc2f70dfecbaf4 (11ms)
I0216 11:14:40.643217 1 gather_workloads_info.go:366] No image sha256:7f55b7dbfb15fe36d83d64027eacee22fb00688ccbc03550cc2dbedfa633f288 (12ms)
I0216 11:14:40.652949 1 gather_workloads_info.go:366] No image sha256:036e6f9a4609a7499f200032dac2294e4a2d98764464ed17453ef725f2f0264d (10ms)
I0216 11:14:40.664705 1 gather_workloads_info.go:366] No image sha256:712ad2760c350db1e23b9393bdda83149452931dc7b5a5038a3bcdb4663917c0 (12ms)
I0216 11:14:40.754733 1 gather_workloads_info.go:366] No image sha256:745f2186738a57bb1b484f68431e77aa2f68a1b8dcb434b1f7a4b429eafdf091 (90ms)
I0216 11:14:40.854970 1 gather_workloads_info.go:366] No image sha256:27e725f1250f6a17da5eba7ada315a244592b5b822d61e95722bb7e2f884b00f (100ms)
I0216 11:14:40.954225 1 gather_workloads_info.go:366] No image sha256:f82357030795138d2081ecc5172092222b0f4faea27e9a7a0474fbeae29111ad (99ms)
I0216 11:14:41.055025 1 gather_workloads_info.go:366] No image sha256:357821852af925e0c8a19df2f9fceec8d2e49f9d13575b86ecd3fbedce488afa (101ms)
I0216 11:14:41.154429 1 gather_workloads_info.go:366] No image sha256:88e6cc2192e682bb9c4ac5aec8e41254696d909c5dc337e720b9ec165a728064 (99ms)
I0216 11:14:41.255580 1 gather_workloads_info.go:366] No image sha256:91d9cb208e6d0c39a87dfe8276d162c75ff3fcd3b005b3e7b537f65c53475a42 (101ms)
I0216 11:14:41.354348 1 gather_workloads_info.go:366] No image sha256:185305b7da4ef5b90a90046f145e8c66bab3a16b12771d2e98bf78104d6a60f2 (99ms)
I0216 11:14:41.455282 1 gather_workloads_info.go:366] No image sha256:c822bd444a7bc53b21afb9372ff0a24961b2687073f3563c127cce5803801b04 (101ms)
I0216 11:14:41.554595 1 gather_workloads_info.go:366] No image sha256:29e41a505a942a77c0d5f954eb302c01921cb0c0d176066fe63f82f3e96e3923 (99ms)
I0216 11:14:41.654824 1 gather_workloads_info.go:366] No image sha256:f550296753e9898c67d563b7deb16ba540ca1367944c905415f35537b6b949d4 (100ms)
I0216 11:14:41.758144 1 gather_workloads_info.go:366] No image sha256:822db36f8e1353ac24785b88d1fb2150d3ef34a5e739c1f67b61079336e9798b (103ms)
I0216 11:14:41.854195 1 gather_workloads_info.go:366] No image sha256:29d1672ef44c59d065737eca330075dd2f6da4ba743153973a739aa9e9d73ad3 (96ms)
I0216 11:14:41.954998 1 gather_workloads_info.go:366] No image sha256:2193d7361704b0ae4bca052e9158761e06ecbac9ca3f0a9c8f0f101127e8f370 (101ms)
I0216 11:14:42.056089 1 gather_workloads_info.go:366] No image sha256:43e426ac9df633be58006907aede6f9b6322c6cc7985cd43141ad7518847c637 (101ms)
I0216 11:14:42.153948 1 gather_workloads_info.go:366] No image sha256:deffb0293fd11f5b40609aa9e80b16b0f90a9480013b2b7f61bd350bbd9b6f07 (98ms)
I0216 11:14:42.179611 1 configmapobserver.go:84] configmaps "insights-config" not found
I0216 11:14:42.254609 1 gather_workloads_info.go:366] No image sha256:586e9c2756f50e562a6123f47fe38dba5496b63413c3dd18e0b85d6167094f0c (101ms)
I0216 11:14:42.354504 1 gather_workloads_info.go:366] No image sha256:9cc55a501aaad1adbefdd573e57c2f756a3a6a8723c43052995be6389edf1fa8 (100ms)
I0216 11:14:42.378209 1 configmapobserver.go:84] configmaps "insights-config" not found
I0216 11:14:42.453838 1 gather_workloads_info.go:366] No image sha256:457372d9f22e1c726ea1a6fcc54ddca8335bd607d2c357bcd7b63a7017aa5c2b (99ms)
I0216 11:14:42.555662 1 gather_workloads_info.go:366] No image sha256:5335f64616c3a6c55a9a6dc4bc084b46a4957fb4fc250afc5343e4547ebb3598 (102ms)
I0216 11:14:42.654840 1 gather_workloads_info.go:366] No image sha256:2121717e0222b9e8892a44907b461a4f62b3f1e5429a0e2eee802d48d04fff30 (99ms)
I0216 11:14:42.654869 1 tasks_processing.go:74] worker 1 stopped.
I0216 11:14:42.655060 1 recorder.go:75] Recording config/workload_info with fingerprint=65c27621a7f1d73bba8c178fb8503232f3f2d68f8fa499d6bd2d978451a92a82
I0216 11:14:42.655075 1 gather.go:180] gatherer "workloads" function "workload_info" took 2.137281178s to process 1 records
I0216 11:14:42.655087 1 periodic.go:261] Periodic gather workloads completed in 2.137s
I0216 11:14:42.655096 1 controllerstatus.go:80] name=periodic-workloads healthy=true reason= message=
I0216 11:14:42.655100 1 periodic.go:214] Running conditional gatherer
I0216 11:14:42.660908 1 requests.go:282] Making HTTP GET request at: https://console.redhat.com/api/gathering/v2/4.17.48/gathering_rules
I0216 11:14:42.665431 1 conditional_gatherer.go:107] Get "https://console.redhat.com/api/gathering/v2/4.17.48/gathering_rules": dial tcp: lookup console.redhat.com on 172.30.0.10:53: read udp 10.130.0.12:34495->172.30.0.10:53: read: connection refused
E0216 11:14:42.665652 1 conditional_gatherer.go:324] unable to update alerts cache: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory
I0216 11:14:42.665698 1 conditional_gatherer.go:386] updating version cache for conditional gatherer
I0216 11:14:42.671894 1 conditional_gatherer.go:394] cluster version is '4.17.48'
E0216 11:14:42.671908 1 conditional_gatherer.go:211] error checking conditions for a gathering rule: alerts cache is missing
E0216 11:14:42.671912 1 conditional_gatherer.go:211] error checking conditions for a gathering rule: alerts cache is missing
E0216 11:14:42.671915 1 conditional_gatherer.go:211] error checking conditions for a gathering rule: alerts cache is missing
E0216 11:14:42.671917 1 conditional_gatherer.go:211] error checking conditions for a gathering rule: alerts cache is missing
E0216 11:14:42.671919 1 conditional_gatherer.go:211] error checking conditions for a gathering rule: alerts cache is missing
E0216 11:14:42.671922 1 conditional_gatherer.go:211] error checking conditions for a gathering rule: alerts cache is missing
E0216 11:14:42.671924 1 conditional_gatherer.go:211] error checking conditions for a gathering rule: alerts cache is missing
E0216 11:14:42.671926 1 conditional_gatherer.go:211] error checking conditions for a gathering rule: alerts cache is missing
E0216 11:14:42.671928 1 conditional_gatherer.go:211] error checking conditions for a gathering rule: alerts cache is missing
I0216 11:14:42.671943 1 tasks_processing.go:45] number of workers: 3
I0216 11:14:42.671951 1 tasks_processing.go:69] worker 2 listening for tasks.
I0216 11:14:42.671955 1 tasks_processing.go:71] worker 2 working on rapid_container_logs task.
I0216 11:14:42.671967 1 tasks_processing.go:69] worker 0 listening for tasks.
I0216 11:14:42.671978 1 tasks_processing.go:69] worker 1 listening for tasks.
I0216 11:14:42.671989 1 tasks_processing.go:71] worker 1 working on conditional_gatherer_rules task.
I0216 11:14:42.672000 1 tasks_processing.go:74] worker 1 stopped.
I0216 11:14:42.672013 1 tasks_processing.go:71] worker 0 working on remote_configuration task.
I0216 11:14:42.672085 1 recorder.go:75] Recording insights-operator/conditional-gatherer-rules with fingerprint=7034af97e7e41c22e4b775abdd4b9066c8ebb19da33eb7f69f39bfd2eb5f6406
I0216 11:14:42.672099 1 gather.go:180] gatherer "conditional" function "conditional_gatherer_rules" took 1.034µs to process 1 records
I0216 11:14:42.672126 1 recorder.go:75] Recording insights-operator/remote-configuration with fingerprint=0394430c431eec4d48bb1811a90918e95161d2282c59af26f2473613cc0959db
I0216 11:14:42.672133 1 gather.go:180] gatherer "conditional" function "remote_configuration" took 1.438µs to process 1 records
I0216 11:14:42.672137 1 gather.go:180] gatherer "conditional" function "rapid_container_logs" took 135.87µs to process 0 records
I0216 11:14:42.672145 1 tasks_processing.go:74] worker 2 stopped.
I0216 11:14:42.672147 1 tasks_processing.go:74] worker 0 stopped.
I0216 11:14:42.672173 1 controllerstatus.go:89] name=periodic-conditional healthy=false reason=NotAvailable message=Get "https://console.redhat.com/api/gathering/v2/4.17.48/gathering_rules": dial tcp: lookup console.redhat.com on 172.30.0.10:53: read udp 10.130.0.12:34495->172.30.0.10:53: read: connection refused
I0216 11:14:42.672183 1 recorder.go:75] Recording insights-operator/remote-configuration.json with fingerprint=359de9c990c741675cec72fda96b5c3682221efdb4799f5eaa6e9805bcd3b5c1
W0216 11:14:42.695209 1 gather.go:212] can't read cgroups memory usage data: open /sys/fs/cgroup/memory/memory.usage_in_bytes: no such file or directory
I0216 11:14:42.695342 1 recorder.go:75] Recording insights-operator/gathers with fingerprint=abb0530944f57303c4308da3b7d8e524d828824a95ee7b7b0c10ec8a45502bdb
I0216 11:14:42.695448 1 diskrecorder.go:70] Writing 98 records to /var/lib/insights-operator/insights-2026-02-16-111442.tar.gz
I0216 11:14:42.700153 1 diskrecorder.go:51] Wrote 98 records to disk in 4ms
I0216 11:14:42.700182 1 periodic.go:283] Gathering cluster info every 2h0m0s
I0216 11:14:42.700198 1 periodic.go:284] Configuration is dataReporting: interval: 2h0m0s, uploadEndpoint: https://console.redhat.com/api/ingress/v1/upload, storagePath: /var/lib/insights-operator, downloadEndpoint: https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/reports, conditionalGathererEndpoint: https://console.redhat.com/api/gathering/v2/%s/gathering_rules, obfuscation: [] sca: disabled: false, endpoint: https://api.openshift.com/api/accounts_mgmt/v1/certificates, interval: 8h0m0s alerting: disabled: false clusterTransfer: endpoint: https://api.openshift.com/api/accounts_mgmt/v1/cluster_transfers/, interval: 12h0m0s proxy: httpProxy: , httpsProxy: , noProxy:
I0216 11:14:51.654375 1 configmapobserver.go:84] configmaps "insights-config" not found
I0216 11:15:31.967547 1 observer_polling.go:111] Observed file "/var/run/configmaps/service-ca-bundle/service-ca.crt" has been created (hash="b2ed1b821a9429e97b26895fdcb411a4185f8fdf1b32bc98f74245cfec6898ce")
W0216 11:15:31.967585 1 builder.go:155] Restart triggered because of file /var/run/configmaps/service-ca-bundle/service-ca.crt was created
I0216 11:15:31.967625 1 observer_polling.go:111] Observed file "/var/run/secrets/serving-cert/tls.crt" has been created (hash="6bd9b6673cfe01adfe9dd7d6bb1e829a072aa56efa71d4f387b8e13e03298fba")
I0216 11:15:31.967660 1 observer_polling.go:111] Observed file "/var/run/secrets/serving-cert/tls.key" has been created (hash="3055767091ca91d5bff88c8319196178ceaf35f653a931d20048eff89564ea9f")
I0216 11:15:31.967728 1 genericapiserver.go:536] "[graceful-termination] shutdown event" name="ShutdownInitiated"
I0216 11:15:31.967907 1 periodic.go:175] Shutting down
I0216 11:15:31.967949 1 base_controller.go:172] Shutting down LoggingSyncer ...
I0216 11:15:31.967974 1 base_controller.go:172] Shutting down ConfigController ...
E0216 11:15:31.968077 1 controller.go:290] Unable to write cluster operator status: client rate limiter Wait returned an error: context canceled
E0216 11:15:31.968114 1 controller.go:290] Unable to write cluster operator status: client rate limiter Wait returned an error: context canceled
E0216 11:15:31.968144 1 controller.go:290] Unable to write cluster operator status: client rate limiter Wait returned an error: context canceled
E0216 11:15:31.968175 1 controller.go:290] Unable to write cluster operator status: client rate limiter Wait returned an error: context canceled
E0216 11:15:31.968204 1 controller.go:290] Unable to write cluster operator status: client rate limiter Wait returned an error: context canceled
E0216 11:15:31.968236 1 controller.go:290] Unable to write cluster operator status: client rate limiter Wait returned an error: context canceled
E0216 11:15:31.968269 1 controller.go:290] Unable to write cluster operator status: client rate limiter Wait returned an error: context canceled
E0216 11:15:31.968301 1 controller.go:290] Unable to write cluster operator status: client rate limiter Wait returned an error: context canceled
E0216 11:15:31.968330 1 controller.go:290] Unable to write cluster operator status: client rate limiter Wait returned an error: context canceled
E0216 11:15:31.968360 1 controller.go:290] Unable to write cluster operator status: client rate limiter Wait returned an error: context canceled
E0216 11:15:31.968393 1 controller.go:290] Unable to write cluster operator status: client rate limiter Wait returned an error: context canceled
E0216 11:15:31.968428 1 controller.go:290] Unable to write cluster operator status: client rate limiter Wait returned an error: context canceled
E0216 11:15:31.968485 1 controller.go:290] Unable to write cluster operator status: client rate limiter Wait returned an error: context canceled
E0216 11:15:31.968523 1 controller.go:290] Unable to write cluster operator status: client rate limiter Wait returned an error: context canceled
E0216 11:15:31.968554 1 controller.go:290] Unable to write cluster operator status: client rate limiter Wait returned an error: context canceled
E0216 11:15:31.968585 1 controller.go:290] Unable to write cluster operator status: client rate limiter Wait returned an error: context canceled
E0216 11:15:31.968616 1 controller.go:290] Unable to write cluster operator status: client rate limiter Wait returned an error: context canceled
E0216 11:15:31.968645 1 controller.go:290] Unable to write cluster operator status: client rate limiter Wait returned an error: context canceled
E0216 11:15:31.968675 1 controller.go:290] Unable to write cluster operator status: client rate limiter Wait returned an error: context canceled
E0216 11:15:31.968696 1 controller.go:290] Unable to write cluster operator status: client rate limiter Wait returned an error: context canceled
E0216 11:15:31.968733 1 controller.go:290] Unable to write cluster operator status: client rate limiter Wait returned an error: context canceled
E0216 11:15:31.968750 1 controller.go:290] Unable to write cluster operator status: client rate limiter Wait returned an error: context canceled
E0216 11:15:31.968787 1 controller.go:290] Unable to write cluster operator status: client rate limiter Wait returned an error: context canceled
E0216 11:15:31.968810 1 controller.go:290] Unable to write cluster operator status: client rate limiter Wait returned an error: context canceled
E0216 11:15:31.968830 1 controller.go:290] Unable to write cluster operator status: client rate limiter Wait returned an error: context canceled
E0216 11:15:31.968846 1 controller.go:290] Unable to write cluster operator status: client rate limiter Wait returned an error: context canceled
E0216 11:15:31.968880 1 controller.go:290] Unable to write cluster operator status: client rate limiter Wait returned an error: context canceled
E0216 11:15:31.968900 1 controller.go:290] Unable to write cluster operator status: client rate limiter Wait returned an error: context canceled
E0216 11:15:31.968917 1 controller.go:290] Unable to write cluster operator status: client rate limiter Wait returned an error: context canceled
E0216 11:15:31.968950 1 controller.go:290] Unable to write cluster operator status: client rate limiter Wait returned an error: context canceled
E0216 11:15:31.968971 1 controller.go:290] Unable to write cluster operator status: client rate limiter Wait returned an error: context canceled
E0216 11:15:31.968992 1 controller.go:290] Unable to write cluster operator status: client rate limiter Wait returned an error: context canceled
E0216 11:15:31.969018 1 controller.go:290] Unable to write cluster operator status: client rate limiter Wait returned an error: context canceled
E0216 11:15:31.969054 1 controller.go:290] Unable to write cluster operator status: client rate limiter Wait returned an error: context canceled
E0216 11:15:31.969090 1 controller.go:290] Unable to write cluster operator status: client rate limiter Wait returned an error: context canceled
E0216 11:15:31.969122 1 controller.go:290] Unable to write cluster operator status: client rate limiter Wait returned an error: context canceled
E0216 11:15:31.969154 1 controller.go:290] Unable to write cluster operator status: client rate limiter Wait returned an error: context canceled
E0216 11:15:31.969185 1 controller.go:290] Unable to write cluster operator status: client rate limiter Wait returned an error: context canceled
E0216 11:15:31.969217 1 controller.go:290] Unable to write cluster operator status: client rate limiter Wait returned an error: context canceled
E0216 11:15:31.969248 1 controller.go:290] Unable to write cluster operator status: client rate limiter Wait returned an error: context canceled
E0216 11:15:31.969279 1 controller.go:290] Unable to write cluster operator status: client rate limiter Wait returned an error: context canceled
E0216 11:15:31.969311 1 controller.go:290] Unable to write cluster operator status: client rate limiter Wait returned an error: context canceled
E0216 11:15:31.969342 1 controller.go:290] Unable to write cluster operator status: client rate limiter Wait returned an error: context canceled
E0216 11:15:31.969374 1 controller.go:290] Unable to write cluster operator status: client rate limiter Wait returned an error: context canceled
E0216 11:15:31.969405 1 controller.go:290] Unable to write cluster operator status: client rate limiter Wait returned an error: context canceled
E0216 11:15:31.969437 1 controller.go:290] Unable to write cluster operator status: client rate limiter Wait returned an error: context canceled
E0216 11:15:31.969471 1 controller.go:290] Unable to write cluster operator status: client rate limiter Wait returned an error: context canceled
E0216 11:15:31.969505 1 controller.go:290] Unable to write cluster operator status: client rate limiter Wait returned an error: context canceled
E0216 11:15:31.969535 1 controller.go:290] Unable to write cluster operator status: client rate limiter Wait returned an error: context canceled
E0216 11:15:31.969565 1 controller.go:290] Unable to write cluster operator status: client rate limiter Wait returned an error: context canceled
E0216 11:15:31.969597 1 controller.go:290] Unable to write cluster operator status: client rate limiter Wait returned an error: context canceled
E0216 11:15:31.969628 1 controller.go:290] Unable to write cluster operator status: client rate limiter Wait returned an error: context canceled
E0216 11:15:31.969658 1 controller.go:290] Unable to write cluster operator status: client rate limiter Wait returned an error: context canceled
E0216 11:15:31.969691 1 controller.go:290] Unable to write cluster operator status: client rate limiter Wait returned an error: context canceled
E0216 11:15:31.969745 1 controller.go:290] Unable to write cluster operator status: client rate limiter Wait returned an error: context canceled
E0216 11:15:31.969786 1 controller.go:290] Unable to write cluster operator status: client rate limiter Wait returned an error: context canceled
E0216 11:15:31.969815 1 controller.go:290] Unable to write cluster operator status: client rate limiter Wait returned an error: context canceled
E0216 11:15:31.969849 1 controller.go:290] Unable to write cluster operator status: client rate limiter Wait returned an error: context canceled
E0216 11:15:31.969888 1 controller.go:290] Unable to write cluster operator status: client rate limiter Wait returned an error: context canceled
E0216 11:15:31.969919 1 controller.go:290] Unable to write cluster operator status: client rate limiter Wait returned an error: context canceled
E0216 11:15:31.969951 1 controller.go:290] Unable to write cluster operator status: client rate limiter Wait returned an error: context canceled
E0216 11:15:31.969982 1 controller.go:290] Unable to write cluster operator status: client rate limiter Wait returned an error: context canceled
E0216 11:15:31.970013 1 controller.go:290] Unable to write cluster operator status: client rate limiter Wait returned an error: context canceled
E0216 11:15:31.970046 1 controller.go:290] Unable to write cluster operator status: client rate limiter Wait returned an error: context canceled
E0216 11:15:31.970076 1 controller.go:290] Unable to write cluster operator status: client rate limiter Wait returned an error: context canceled
E0216 11:15:31.970108 1 controller.go:290] Unable to write cluster operator status: client rate limiter Wait returned an error: context canceled
E0216 11:15:31.970139 1 controller.go:290] Unable to write cluster operator status: client rate limiter Wait returned an error: context canceled
E0216 11:15:31.970169 1 controller.go:290] Unable to write cluster operator status: client rate limiter Wait returned an error: context canceled
E0216 11:15:31.970198 1 controller.go:290] Unable to write cluster operator status: client rate limiter Wait returned an error: context canceled
E0216 11:15:31.970231 1 controller.go:290] Unable to write cluster operator status: client rate limiter Wait returned an error: context canceled
E0216 11:15:31.970261 1 controller.go:290] Unable to write cluster operator status: client rate limiter Wait returned an error: context canceled
E0216 11:15:31.970291 1 controller.go:290] Unable to write cluster operator status: client rate limiter Wait returned an error: context canceled
E0216 11:15:31.970321 1 controller.go:290] Unable to write cluster operator status: client rate limiter Wait returned an error: context canceled
E0216 11:15:31.970353 1 controller.go:290] Unable to write cluster operator status: client rate limiter Wait returned an error: context canceled
E0216 11:15:31.970386 1 controller.go:290] Unable to write cluster operator status: client rate limiter Wait returned an error: context canceled
E0216 11:15:31.970416 1 controller.go:290] Unable to write cluster operator status: client rate limiter Wait returned an error: context canceled
E0216 11:15:31.970453 1 controller.go:290] Unable to write cluster operator status: client rate limiter Wait returned an error: context canceled
E0216 11:15:31.970484 1 controller.go:290] Unable to write cluster operator status: client rate limiter Wait returned an error: context canceled
E0216 11:15:31.970517 1 controller.go:290] Unable to write cluster operator status: client rate limiter Wait returned an error: context canceled
E0216 11:15:31.970547 1 controller.go:290] Unable to write cluster operator status: client rate limiter Wait returned an error: context canceled
E0216 11:15:31.970578 1 controller.go:290] Unable to write cluster operator status: client rate limiter Wait returned an error: context canceled
E0216 11:15:31.970609 1 controller.go:290] Unable to write cluster operator status: client rate limiter Wait returned an error: context canceled
E0216 11:15:31.970641 1 controller.go:290] Unable to write cluster operator status: client rate limiter Wait returned an error: context canceled
E0216 11:15:31.970683 1 controller.go:290] Unable to write cluster operator status: client rate limiter Wait returned an error: context canceled
E0216 11:15:31.970731 1 controller.go:290] Unable to write cluster operator status: client rate limiter Wait returned an error: context canceled
I0216 11:15:31.970759 1 genericapiserver.go:679] "[graceful-termination] pre-shutdown hooks completed" name="PreShutdownHooksStopped"
I0216 11:15:31.971240 1 genericapiserver.go:637] "[graceful-termination] not going to wait for active watch request(s) to drain"
I0216 11:15:31.971290 1 object_count_tracker.go:151] "StorageObjectCountTracker pruner is exiting"
I0216 11:15:31.971315 1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
I0216 11:15:31.971369 1 secure_serving.go:258] Stopped listening on [::]:8443
I0216 11:15:31.971392 1 genericapiserver.go:586] "[graceful-termination] shutdown event" name="HTTPServerStoppedListening"
I0216 11:15:31.971505 1 genericapiserver.go:699] [graceful-termination] apiserver is exiting
I0216 11:15:31.971525 1 builder.go:330] server exited
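The restart at the end is driven by the file observer: once the service-ca bundle and the serving-cert key pair appear on disk, the operator logs their hashes and initiates a graceful shutdown so it can come back up with the new certificates. A sketch of hash-based file polling in that style; the watched paths and the hash-on-create log lines come from the log, while the 5s poll interval and the rest of the structure are assumptions, not the operator's observer_polling code.

// Sketch: poll watched paths, report a hash when a file appears, then stop.
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"os"
	"time"
)

// hashFile returns the hex SHA-256 of a file, or ok=false if unreadable.
func hashFile(path string) (string, bool) {
	data, err := os.ReadFile(path)
	if err != nil {
		return "", false // treat an unreadable file as absent
	}
	sum := sha256.Sum256(data)
	return hex.EncodeToString(sum[:]), true
}

func main() {
	watched := []string{
		"/var/run/configmaps/service-ca-bundle/service-ca.crt",
		"/var/run/secrets/serving-cert/tls.crt",
		"/var/run/secrets/serving-cert/tls.key",
	}
	last := map[string]string{}

	for range time.Tick(5 * time.Second) { // assumed poll interval
		for _, p := range watched {
			h, ok := hashFile(p)
			if !ok {
				continue
			}
			if prev, seen := last[p]; !seen || prev != h {
				fmt.Printf("Observed file %q has been created (hash=%q)\n", p, h)
				fmt.Printf("Restart triggered because of file %s was created\n", p)
				return // in the operator this initiates graceful shutdown
			}
		}
	}
}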