W0216 05:42:51.457024 1 cmd.go:245] Using insecure, self-signed certificates
I0216 05:42:51.696690 1 start.go:223] Unable to read service ca bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory
I0216 05:42:51.696893 1 observer_polling.go:159] Starting file observer
I0216 05:42:52.241623 1 operator.go:59] Starting insights-operator v0.0.0-master+$Format:%H$
I0216 05:42:52.241806 1 legacy_config.go:327] Current config: {"report":false,"storagePath":"/var/lib/insights-operator","interval":"2h","endpoint":"https://console.redhat.com/api/ingress/v1/upload","conditionalGathererEndpoint":"https://console.redhat.com/api/gathering/v2/%s/gathering_rules","pull_report":{"endpoint":"https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/reports","delay":"60s","timeout":"3000s","min_retry":"30s"},"impersonate":"system:serviceaccount:openshift-insights:gather","enableGlobalObfuscation":false,"ocm":{"scaEndpoint":"https://api.openshift.com/api/accounts_mgmt/v1/certificates","scaInterval":"8h","scaDisabled":false,"clusterTransferEndpoint":"https://api.openshift.com/api/accounts_mgmt/v1/cluster_transfers/","clusterTransferInterval":"12h"},"disableInsightsAlerts":false,"processingStatusEndpoint":"https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/request/%s/status","reportEndpointTechPreview":"https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/request/%s/report"}
I0216 05:42:52.242213 1 simple_featuregate_reader.go:171] Starting feature-gate-detector
I0216 05:42:52.242234 1 secure_serving.go:57] Forcing use of http/1.1 only
W0216 05:42:52.242252 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.
W0216 05:42:52.242257 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.
W0216 05:42:52.242261 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.
W0216 05:42:52.242264 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.
W0216 05:42:52.242267 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.
W0216 05:42:52.242270 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.
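The records above (and everything that follows) use the standard klog header: a severity letter (I/W/E), an MMDD date, a wall-clock timestamp, a thread id, the source file and line, then the message. A minimal Go sketch for splitting one line of this dump into those parts, assuming every record keeps that layout:

    package main

    import (
        "fmt"
        "regexp"
    )

    // klogLine matches the header seen above:
    // <severity>MMDD HH:MM:SS.micros <thread id> <file>:<line>] <message>
    var klogLine = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d+)\s+(\d+)\s+([^ ]+):(\d+)\] (.*)$`)

    func parse(line string) (severity, date, clock, src, msg string, ok bool) {
        m := klogLine.FindStringSubmatch(line)
        if m == nil {
            return "", "", "", "", "", false
        }
        // m[4] is the thread id, which is always 1 in this dump.
        return m[1], m[2], m[3], m[5] + ":" + m[6], m[7], true
    }

    func main() {
        sev, date, clock, src, msg, ok := parse("W0216 05:42:51.457024 1 cmd.go:245] Using insecure, self-signed certificates")
        if ok {
            fmt.Println(sev, date, clock, src, msg)
        }
    }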
I0216 05:42:52.247547 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-insights", Name:"insights-operator", UID:"9be8b845-5a22-4047-a534-4c81fddf7865", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'FeatureGatesInitialized' FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallPowerVS", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "ExternalOIDC", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "MetricsServer", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NodeDisruptionPolicy", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "GCPClusterHostedDNS", "GatewayAPI", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImagesAWS", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "MultiArchInstallAzure", "NetworkSegmentation", "NewOLM", "NodeSwap", "OVNObservability", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PersistentIPsForVirtualization", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesSupport", "VSphereMultiVCenters", "VolumeGroupSnapshot"}} I0216 05:42:52.247570 1 operator.go:124] FeatureGates initialized: knownFeatureGates=[AWSEFSDriverVolumeMetrics AdditionalRoutingCapabilities AdminNetworkPolicy AlibabaPlatform AutomatedEtcdBackup AzureWorkloadIdentity BareMetalLoadBalancer BootcNodeManagement BuildCSIVolumes CSIDriverSharedResource ChunkSizeMiB CloudDualStackNodeIPs ClusterAPIInstall ClusterAPIInstallAWS ClusterAPIInstallAzure ClusterAPIInstallGCP ClusterAPIInstallIBMCloud ClusterAPIInstallNutanix ClusterAPIInstallOpenStack ClusterAPIInstallPowerVS ClusterAPIInstallVSphere ClusterMonitoringConfig DNSNameResolver DisableKubeletCloudCredentialProviders DynamicResourceAllocation EtcdBackendQuota EventedPLEG Example ExternalCloudProvider ExternalCloudProviderAzure ExternalCloudProviderExternal ExternalCloudProviderGCP ExternalOIDC GCPClusterHostedDNS GCPLabelsTags GatewayAPI HardwareSpeed IngressControllerLBSubnetsAWS InsightsConfig InsightsConfigAPI InsightsOnDemandDataGather 
InstallAlternateInfrastructureAWS KMSv1 MachineAPIMigration MachineAPIOperatorDisableMachineHealthCheckController MachineAPIProviderOpenStack MachineConfigNodes ManagedBootImages ManagedBootImagesAWS MaxUnavailableStatefulSet MetricsCollectionProfiles MetricsServer MixedCPUsAllocation MultiArchInstallAWS MultiArchInstallAzure MultiArchInstallGCP NetworkDiagnosticsConfig NetworkLiveMigration NetworkSegmentation NewOLM NodeDisruptionPolicy NodeSwap OVNObservability OnClusterBuild OpenShiftPodSecurityAdmission PersistentIPsForVirtualization PinnedImages PlatformOperators PrivateHostedZoneAWS ProcMountType RouteAdvertisements RouteExternalCertificate ServiceAccountTokenNodeBinding SetEIPForNLBIngressController SignatureStores SigstoreImageVerification StreamingCollectionEncodingToJSON StreamingCollectionEncodingToProtobuf TranslateStreamCloseWebsocketRequests UpgradeStatus UserNamespacesSupport VSphereControlPlaneMachineSet VSphereDriverConfiguration VSphereMultiVCenters VSphereStaticIPs ValidatingAdmissionPolicy VolumeGroupSnapshot] I0216 05:42:52.252215 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController I0216 05:42:52.252228 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController I0216 05:42:52.252231 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" I0216 05:42:52.252241 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" I0216 05:42:52.252271 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file I0216 05:42:52.252271 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file I0216 05:42:52.252470 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/tmp/serving-cert-3889205171/tls.crt::/tmp/serving-cert-3889205171/tls.key" I0216 05:42:52.252541 1 secure_serving.go:213] Serving securely on [::]:8443 I0216 05:42:52.252563 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" W0216 05:42:52.263524 1 configmapobserver.go:64] Cannot get the configuration config map: configmaps "insights-config" not found. Default configuration is used. 
I0216 05:42:52.263551 1 secretconfigobserver.go:216] Legacy configuration set: enabled=false endpoint=https://console.redhat.com/api/ingress/v1/upload conditional_gatherer_endpoint=https://console.redhat.com/api/gathering/v2/%s/gathering_rules interval=2h0m0s token=false reportEndpoint=https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/reports initialPollingDelay=1m0s minRetryTime=30s pollingTimeout=50m0s processingStatusEndpoint=https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/request/%s/status I0216 05:42:52.263648 1 base_controller.go:67] Waiting for caches to sync for ConfigController I0216 05:42:52.275276 1 secretconfigobserver.go:249] Found cloud.openshift.com token I0216 05:42:52.275291 1 secretconfigobserver.go:204] Legacy configuration updated: enabled=true endpoint=https://console.redhat.com/api/ingress/v1/upload conditional_gatherer_endpoint=https://console.redhat.com/api/gathering/v2/%s/gathering_rules interval=2h0m0s token=true reportEndpoint=https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/reports initialPollingDelay=1m0s minRetryTime=30s pollingTimeout=50m0s processingStatusEndpoint=https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/request/%s/status I0216 05:42:52.285243 1 secretconfigobserver.go:119] support secret does not exist I0216 05:42:52.295244 1 secretconfigobserver.go:249] Found cloud.openshift.com token I0216 05:42:52.300963 1 secretconfigobserver.go:119] support secret does not exist I0216 05:42:52.311555 1 recorder.go:161] Pruning old reports every 7h10m0s, max age is 288h0m0s I0216 05:42:52.322342 1 periodic.go:214] Running clusterconfig gatherer I0216 05:42:52.322358 1 controllerstatus.go:80] name=insightsuploader healthy=true reason= message= I0216 05:42:52.322371 1 insightsuploader.go:86] Reporting status periodically to https://console.redhat.com/api/ingress/v1/upload every 2h0m0s, starting in 1m30s I0216 05:42:52.322379 1 tasks_processing.go:45] number of workers: 64 I0216 05:42:52.322392 1 tasks_processing.go:69] worker 2 listening for tasks. I0216 05:42:52.322394 1 controllerstatus.go:80] name=insightsreport healthy=true reason= message= I0216 05:42:52.322401 1 tasks_processing.go:69] worker 0 listening for tasks. I0216 05:42:52.322401 1 insightsreport.go:296] Starting report retriever I0216 05:42:52.322405 1 insightsreport.go:298] Insights analysis reports will be downloaded from the https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/reports endpoint with a delay of 1m0s I0216 05:42:52.322407 1 tasks_processing.go:69] worker 1 listening for tasks. I0216 05:42:52.322414 1 tasks_processing.go:69] worker 22 listening for tasks. I0216 05:42:52.322416 1 tasks_processing.go:69] worker 12 listening for tasks. I0216 05:42:52.322421 1 tasks_processing.go:69] worker 13 listening for tasks. I0216 05:42:52.322422 1 tasks_processing.go:69] worker 3 listening for tasks. I0216 05:42:52.322425 1 tasks_processing.go:71] worker 13 working on container_runtime_configs task. I0216 05:42:52.322425 1 tasks_processing.go:71] worker 12 working on image_pruners task. I0216 05:42:52.322427 1 tasks_processing.go:69] worker 4 listening for tasks. I0216 05:42:52.322426 1 tasks_processing.go:69] worker 33 listening for tasks. I0216 05:42:52.322433 1 tasks_processing.go:69] worker 5 listening for tasks. I0216 05:42:52.322437 1 tasks_processing.go:69] worker 6 listening for tasks. I0216 05:42:52.322443 1 tasks_processing.go:69] worker 7 listening for tasks. 
I0216 05:42:52.322444 1 tasks_processing.go:69] worker 23 listening for tasks. I0216 05:42:52.322447 1 tasks_processing.go:69] worker 8 listening for tasks. I0216 05:42:52.322444 1 tasks_processing.go:69] worker 50 listening for tasks. I0216 05:42:52.322451 1 tasks_processing.go:69] worker 9 listening for tasks. I0216 05:42:52.322450 1 tasks_processing.go:69] worker 29 listening for tasks. I0216 05:42:52.322456 1 tasks_processing.go:69] worker 30 listening for tasks. I0216 05:42:52.322458 1 tasks_processing.go:69] worker 10 listening for tasks. I0216 05:42:52.322458 1 tasks_processing.go:69] worker 34 listening for tasks. I0216 05:42:52.322460 1 tasks_processing.go:69] worker 31 listening for tasks. I0216 05:42:52.322462 1 tasks_processing.go:69] worker 35 listening for tasks. I0216 05:42:52.322465 1 tasks_processing.go:69] worker 11 listening for tasks. I0216 05:42:52.322465 1 tasks_processing.go:69] worker 32 listening for tasks. I0216 05:42:52.322468 1 tasks_processing.go:69] worker 36 listening for tasks. I0216 05:42:52.322469 1 tasks_processing.go:69] worker 43 listening for tasks. I0216 05:42:52.322464 1 tasks_processing.go:69] worker 28 listening for tasks. I0216 05:42:52.322475 1 tasks_processing.go:69] worker 37 listening for tasks. I0216 05:42:52.322476 1 tasks_processing.go:69] worker 51 listening for tasks. I0216 05:42:52.322470 1 tasks_processing.go:69] worker 59 listening for tasks. I0216 05:42:52.322481 1 tasks_processing.go:69] worker 38 listening for tasks. I0216 05:42:52.322482 1 tasks_processing.go:69] worker 52 listening for tasks. I0216 05:42:52.322474 1 tasks_processing.go:69] worker 44 listening for tasks. I0216 05:42:52.322484 1 tasks_processing.go:69] worker 48 listening for tasks. I0216 05:42:52.322477 1 tasks_processing.go:69] worker 45 listening for tasks. I0216 05:42:52.322487 1 tasks_processing.go:69] worker 53 listening for tasks. I0216 05:42:52.322490 1 tasks_processing.go:69] worker 24 listening for tasks. I0216 05:42:52.322489 1 tasks_processing.go:69] worker 39 listening for tasks. I0216 05:42:52.322494 1 tasks_processing.go:69] worker 49 listening for tasks. I0216 05:42:52.322495 1 tasks_processing.go:69] worker 40 listening for tasks. I0216 05:42:52.322491 1 tasks_processing.go:69] worker 63 listening for tasks. I0216 05:42:52.322499 1 tasks_processing.go:69] worker 55 listening for tasks. I0216 05:42:52.322502 1 tasks_processing.go:71] worker 2 working on sap_license_management_logs task. I0216 05:42:52.322492 1 tasks_processing.go:69] worker 54 listening for tasks. I0216 05:42:52.322500 1 tasks_processing.go:69] worker 47 listening for tasks. I0216 05:42:52.322508 1 tasks_processing.go:69] worker 27 listening for tasks. I0216 05:42:52.322501 1 tasks_processing.go:69] worker 25 listening for tasks. I0216 05:42:52.322525 1 tasks_processing.go:69] worker 46 listening for tasks. I0216 05:42:52.322505 1 tasks_processing.go:69] worker 41 listening for tasks. I0216 05:42:52.322533 1 tasks_processing.go:69] worker 19 listening for tasks. I0216 05:42:52.322511 1 tasks_processing.go:69] worker 60 listening for tasks. I0216 05:42:52.322534 1 tasks_processing.go:71] worker 22 working on sap_config task. I0216 05:42:52.322537 1 tasks_processing.go:69] worker 18 listening for tasks. I0216 05:42:52.322541 1 tasks_processing.go:71] worker 3 working on openstack_dataplanedeployments task. I0216 05:42:52.322544 1 tasks_processing.go:71] worker 4 working on scheduler_logs task. I0216 05:42:52.322546 1 tasks_processing.go:71] worker 33 working on silenced_alerts task. 
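The tasks_processing lines above show the gather fan-out: 64 workers are started, each gather function is queued as a named task, and every worker logs when it starts listening, when it picks a task up, and when it stops. A rough Go sketch of that worker-pool pattern (illustrative only, not the operator's actual tasks_processing code; the task names are taken from the log):

    package main

    import (
        "fmt"
        "sync"
    )

    // task is a named gather function, mirroring records like
    // "worker 13 working on container_runtime_configs task."
    type task struct {
        name string
        run  func()
    }

    func runTasks(tasks []task, workers int) {
        queue := make(chan task)
        var wg sync.WaitGroup
        for i := 0; i < workers; i++ {
            wg.Add(1)
            go func(id int) {
                defer wg.Done()
                fmt.Printf("worker %d listening for tasks.\n", id)
                for t := range queue {
                    fmt.Printf("worker %d working on %s task.\n", id, t.name)
                    t.run()
                }
                fmt.Printf("worker %d stopped.\n", id)
            }(i)
        }
        for _, t := range tasks {
            queue <- t
        }
        close(queue)
        wg.Wait()
    }

    func main() {
        runTasks([]task{
            {name: "container_runtime_configs", run: func() {}},
            {name: "image_pruners", run: func() {}},
        }, 4)
    }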
I0216 05:42:52.322548 1 tasks_processing.go:71] worker 18 working on storage_classes task. I0216 05:42:52.322552 1 tasks_processing.go:71] worker 6 working on openshift_authentication_logs task. I0216 05:42:52.322557 1 tasks_processing.go:69] worker 21 listening for tasks. I0216 05:42:52.322562 1 tasks_processing.go:71] worker 53 working on sap_datahubs task. I0216 05:42:52.322571 1 tasks_processing.go:71] worker 21 working on nodes task. I0216 05:42:52.322574 1 tasks_processing.go:71] worker 35 working on version task. I0216 05:42:52.322580 1 tasks_processing.go:71] worker 54 working on dvo_metrics task. I0216 05:42:52.322510 1 tasks_processing.go:69] worker 42 listening for tasks. W0216 05:42:52.322573 1 gather_silenced_alerts.go:38] Unable to load alerts client, no alerts will be collected: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory I0216 05:42:52.322593 1 gather.go:180] gatherer "clusterconfig" function "silenced_alerts" took 37.104µs to process 0 records I0216 05:42:52.322518 1 tasks_processing.go:69] worker 15 listening for tasks. I0216 05:42:52.322517 1 tasks_processing.go:69] worker 58 listening for tasks. I0216 05:42:52.322518 1 tasks_processing.go:69] worker 61 listening for tasks. I0216 05:42:52.322524 1 tasks_processing.go:69] worker 16 listening for tasks. I0216 05:42:52.322525 1 tasks_processing.go:69] worker 14 listening for tasks. I0216 05:42:52.322519 1 tasks_processing.go:69] worker 26 listening for tasks. I0216 05:42:52.322610 1 tasks_processing.go:71] worker 26 working on oauths task. I0216 05:42:52.322610 1 tasks_processing.go:71] worker 11 working on qemu_kubevirt_launcher_logs task. I0216 05:42:52.322612 1 tasks_processing.go:71] worker 14 working on pdbs task. I0216 05:42:52.322616 1 tasks_processing.go:71] worker 23 working on openstack_controlplanes task. I0216 05:42:52.322499 1 tasks_processing.go:71] worker 0 working on container_images task. I0216 05:42:52.322620 1 tasks_processing.go:71] worker 24 working on aggregated_monitoring_cr_names task. I0216 05:42:52.322627 1 tasks_processing.go:71] worker 15 working on machine_autoscalers task. I0216 05:42:52.322674 1 tasks_processing.go:71] worker 61 working on infrastructures task. I0216 05:42:52.322692 1 tasks_processing.go:71] worker 58 working on node_logs task. I0216 05:42:52.322548 1 tasks_processing.go:71] worker 31 working on olm_operators task. I0216 05:42:52.322709 1 tasks_processing.go:71] worker 16 working on storage_cluster task. I0216 05:42:52.322721 1 tasks_processing.go:71] worker 42 working on ingress task. I0216 05:42:52.322511 1 tasks_processing.go:69] worker 56 listening for tasks. I0216 05:42:52.322535 1 tasks_processing.go:69] worker 20 listening for tasks. I0216 05:42:52.322531 1 tasks_processing.go:69] worker 57 listening for tasks. I0216 05:42:52.322541 1 tasks_processing.go:71] worker 5 working on ingress_certificates task. I0216 05:42:52.322743 1 tasks_processing.go:71] worker 8 working on number_of_pods_and_netnamespaces_with_sdn_annotations task. I0216 05:42:52.322753 1 tasks_processing.go:71] worker 39 working on install_plans task. I0216 05:42:52.322776 1 tasks_processing.go:71] worker 33 working on openshift_machine_api_events task. I0216 05:42:52.322786 1 tasks_processing.go:71] worker 49 working on machine_configs task. I0216 05:42:52.322815 1 tasks_processing.go:71] worker 57 working on config_maps task. I0216 05:42:52.322835 1 tasks_processing.go:71] worker 32 working on machine_healthchecks task. 
I0216 05:42:52.322841 1 tasks_processing.go:71] worker 56 working on openstack_dataplanenodesets task. I0216 05:42:52.322571 1 tasks_processing.go:71] worker 37 working on support_secret task. I0216 05:42:52.322527 1 tasks_processing.go:69] worker 62 listening for tasks. I0216 05:42:52.323282 1 tasks_processing.go:71] worker 62 working on machine_sets task. I0216 05:42:52.322611 1 tasks_processing.go:71] worker 7 working on sap_pods task. I0216 05:42:52.322884 1 tasks_processing.go:71] worker 63 working on authentication task. I0216 05:42:52.322887 1 tasks_processing.go:71] worker 55 working on clusterroles task. I0216 05:42:52.322892 1 tasks_processing.go:71] worker 20 working on openshift_apiserver_operator_logs task. I0216 05:42:52.322897 1 tasks_processing.go:71] worker 48 working on validating_webhook_configurations task. I0216 05:42:52.322902 1 tasks_processing.go:71] worker 44 working on feature_gates task. I0216 05:42:52.322958 1 tasks_processing.go:71] worker 45 working on certificate_signing_requests task. I0216 05:42:52.322964 1 tasks_processing.go:71] worker 46 working on proxies task. I0216 05:42:52.322968 1 tasks_processing.go:71] worker 47 working on cluster_apiserver task. I0216 05:42:52.322972 1 tasks_processing.go:71] worker 27 working on machine_config_pools task. I0216 05:42:52.322976 1 tasks_processing.go:71] worker 25 working on jaegers task. I0216 05:42:52.322982 1 tasks_processing.go:71] worker 59 working on schedulers task. I0216 05:42:52.322985 1 tasks_processing.go:71] worker 51 working on kube_controller_manager_logs task. I0216 05:42:52.322532 1 tasks_processing.go:71] worker 1 working on cost_management_metrics_configs task. I0216 05:42:52.323029 1 tasks_processing.go:71] worker 19 working on mutating_webhook_configurations task. I0216 05:42:52.323034 1 tasks_processing.go:71] worker 41 working on crds task. I0216 05:42:52.323040 1 tasks_processing.go:71] worker 60 working on networks task. I0216 05:42:52.323099 1 tasks_processing.go:71] worker 43 working on image_registries task. I0216 05:42:52.323105 1 tasks_processing.go:71] worker 36 working on monitoring_persistent_volumes task. I0216 05:42:52.323112 1 tasks_processing.go:71] worker 30 working on openshift_logging task. I0216 05:42:52.323115 1 tasks_processing.go:71] worker 10 working on image task. I0216 05:42:52.323118 1 tasks_processing.go:71] worker 50 working on ceph_cluster task. I0216 05:42:52.323120 1 tasks_processing.go:71] worker 34 working on metrics task. I0216 05:42:52.323123 1 tasks_processing.go:71] worker 9 working on nodenetworkstates task. W0216 05:42:52.324432 1 gather_most_recent_metrics.go:64] Unable to load metrics client, no metrics will be collected: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory I0216 05:42:52.324440 1 tasks_processing.go:71] worker 34 working on openstack_version task. I0216 05:42:52.324464 1 gather.go:180] gatherer "clusterconfig" function "metrics" took 21.898µs to process 0 records I0216 05:42:52.323127 1 tasks_processing.go:71] worker 29 working on active_alerts task. I0216 05:42:52.322880 1 tasks_processing.go:71] worker 40 working on operators task. W0216 05:42:52.324548 1 gather_active_alerts.go:54] Unable to load alerts client, no alerts will be collected: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory I0216 05:42:52.324555 1 tasks_processing.go:71] worker 29 working on overlapping_namespace_uids task. 
I0216 05:42:52.324596 1 gather.go:180] gatherer "clusterconfig" function "active_alerts" took 15.304µs to process 0 records I0216 05:42:52.323135 1 tasks_processing.go:71] worker 38 working on tsdb_status task. W0216 05:42:52.324618 1 gather_prometheus_tsdb_status.go:38] Unable to load metrics client, tsdb status cannot be collected: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory I0216 05:42:52.324625 1 gather.go:180] gatherer "clusterconfig" function "tsdb_status" took 14.629µs to process 0 records I0216 05:42:52.323137 1 tasks_processing.go:71] worker 28 working on pod_network_connectivity_checks task. I0216 05:42:52.324667 1 tasks_processing.go:71] worker 38 working on nodenetworkconfigurationpolicies task. I0216 05:42:52.322518 1 tasks_processing.go:69] worker 17 listening for tasks. I0216 05:42:52.323223 1 tasks_processing.go:71] worker 52 working on machines task. I0216 05:42:52.324724 1 tasks_processing.go:71] worker 17 working on service_accounts task. I0216 05:42:52.326444 1 tasks_processing.go:71] worker 13 working on lokistack task. I0216 05:42:52.326454 1 gather.go:180] gatherer "clusterconfig" function "container_runtime_configs" took 4.010746ms to process 0 records I0216 05:42:52.328867 1 tasks_processing.go:71] worker 53 working on operators_pods_and_events task. I0216 05:42:52.328876 1 gather.go:180] gatherer "clusterconfig" function "sap_datahubs" took 6.289823ms to process 0 records I0216 05:42:52.328888 1 tasks_processing.go:74] worker 22 stopped. I0216 05:42:52.328895 1 gather.go:180] gatherer "clusterconfig" function "sap_config" took 6.34199ms to process 0 records I0216 05:42:52.330809 1 tasks_processing.go:74] worker 15 stopped. I0216 05:42:52.330817 1 gather.go:180] gatherer "clusterconfig" function "machine_autoscalers" took 8.166285ms to process 0 records I0216 05:42:52.331447 1 tasks_processing.go:74] worker 16 stopped. I0216 05:42:52.331455 1 gather.go:180] gatherer "clusterconfig" function "storage_cluster" took 8.722233ms to process 0 records I0216 05:42:52.331931 1 tasks_processing.go:74] worker 56 stopped. I0216 05:42:52.331938 1 gather.go:180] gatherer "clusterconfig" function "openstack_dataplanenodesets" took 9.082124ms to process 0 records I0216 05:42:52.332931 1 controller.go:119] Initializing last reported time to 0001-01-01T00:00:00Z I0216 05:42:52.332960 1 controller.go:203] Source periodic-conditional *controllerstatus.Simple is not ready I0216 05:42:52.332965 1 controller.go:203] Source periodic-workloads *controllerstatus.Simple is not ready I0216 05:42:52.332969 1 controller.go:203] Source periodic-clusterconfig *controllerstatus.Simple is not ready I0216 05:42:52.332992 1 controller.go:457] The operator is still being initialized I0216 05:42:52.333002 1 controller.go:482] The operator is healthy I0216 05:42:52.333070 1 tasks_processing.go:74] worker 12 stopped. I0216 05:42:52.333609 1 recorder.go:75] Recording config/clusteroperator/imageregistry.operator.openshift.io/imagepruner/cluster with fingerprint=6a1966803189b44e24d573c3f06fd4161aedee8afbd58fdd185554b19db3ce5f I0216 05:42:52.333660 1 gather.go:180] gatherer "clusterconfig" function "image_pruners" took 10.636712ms to process 1 records I0216 05:42:52.333689 1 tasks_processing.go:74] worker 14 stopped. 
I0216 05:42:52.334089 1 recorder.go:75] Recording config/pdbs/openshift-image-registry/image-registry with fingerprint=7a0e064f053a78340bc524c87181c7e3056e0b59581efa343baef4a1ce6720e4
I0216 05:42:52.334107 1 recorder.go:75] Recording config/pdbs/openshift-ingress/router-default with fingerprint=156e04e5f2662e6e2ebc3ee38a06852ce417f627b28905f5900666bc8d9ac5cb
I0216 05:42:52.334118 1 recorder.go:75] Recording config/pdbs/openshift-operator-lifecycle-manager/packageserver-pdb with fingerprint=3b77ed0d1d43e9f78189931938527b9fc3f3dd8a7d96797963567c51928abfe4
I0216 05:42:52.334122 1 gather.go:180] gatherer "clusterconfig" function "pdbs" took 10.819979ms to process 3 records
I0216 05:42:52.334188 1 tasks_processing.go:74] worker 26 stopped.
I0216 05:42:52.334288 1 recorder.go:75] Recording config/oauth with fingerprint=857cec896d5f95efba5da482e0dcfb44177a064248a8c226e1adea9c515e72ef
I0216 05:42:52.334296 1 gather.go:180] gatherer "clusterconfig" function "oauths" took 10.957146ms to process 1 records
I0216 05:42:52.336918 1 gather_sap_vsystem_iptables_logs.go:60] SAP resources weren't found
I0216 05:42:52.336930 1 tasks_processing.go:74] worker 2 stopped.
I0216 05:42:52.336935 1 gather.go:180] gatherer "clusterconfig" function "sap_license_management_logs" took 14.419124ms to process 0 records
I0216 05:42:52.338278 1 tasks_processing.go:74] worker 23 stopped.
I0216 05:42:52.338285 1 gather.go:180] gatherer "clusterconfig" function "openstack_controlplanes" took 15.654077ms to process 0 records
I0216 05:42:52.339711 1 tasks_processing.go:74] worker 32 stopped.
E0216 05:42:52.339728 1 gather.go:143] gatherer "clusterconfig" function "machine_healthchecks" failed with the error: machinehealthchecks.machine.openshift.io is forbidden: User "system:serviceaccount:openshift-insights:gather" cannot list resource "machinehealthchecks" in API group "machine.openshift.io" at the cluster scope
I0216 05:42:52.339742 1 gather.go:180] gatherer "clusterconfig" function "machine_healthchecks" took 16.864873ms to process 0 records
I0216 05:42:52.340079 1 tasks_processing.go:74] worker 7 stopped.
I0216 05:42:52.340092 1 gather.go:180] gatherer "clusterconfig" function "sap_pods" took 16.730885ms to process 0 records
I0216 05:42:52.342695 1 tasks_processing.go:74] worker 49 stopped.
I0216 05:42:52.342705 1 gather.go:180] gatherer "clusterconfig" function "machine_configs" took 19.904786ms to process 0 records
I0216 05:42:52.342712 1 gather.go:180] gatherer "clusterconfig" function "machine_config_pools" took 18.983054ms to process 0 records
E0216 05:42:52.342717 1 gather.go:143] gatherer "clusterconfig" function "support_secret" failed with the error: secrets "support" not found
I0216 05:42:52.342718 1 tasks_processing.go:74] worker 27 stopped.
I0216 05:42:52.342721 1 gather.go:180] gatherer "clusterconfig" function "support_secret" took 19.797866ms to process 0 records
I0216 05:42:52.342725 1 gather.go:180] gatherer "clusterconfig" function "jaegers" took 18.935281ms to process 0 records
I0216 05:42:52.342728 1 gather.go:180] gatherer "clusterconfig" function "openstack_dataplanedeployments" took 20.170403ms to process 0 records
I0216 05:42:52.342732 1 tasks_processing.go:74] worker 3 stopped.
I0216 05:42:52.342734 1 tasks_processing.go:74] worker 37 stopped.
I0216 05:42:52.342737 1 tasks_processing.go:74] worker 25 stopped.
I0216 05:42:52.342740 1 tasks_processing.go:74] worker 50 stopped.
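The machine_healthchecks failure above is a plain RBAC denial: the gather service account may not list machine.openshift.io resources at cluster scope, so that gatherer returns zero records. A small client-go sketch that asks the API server the same question via a SubjectAccessReview (a diagnostic sketch, assuming in-cluster credentials; the user, group, and resource are copied from the error message):

    package main

    import (
        "context"
        "fmt"

        authv1 "k8s.io/api/authorization/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    func main() {
        cfg, err := rest.InClusterConfig()
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        // Ask whether the gather service account may list machinehealthchecks,
        // mirroring the denial reported by the machine_healthchecks gatherer.
        sar := &authv1.SubjectAccessReview{
            Spec: authv1.SubjectAccessReviewSpec{
                User: "system:serviceaccount:openshift-insights:gather",
                ResourceAttributes: &authv1.ResourceAttributes{
                    Verb:     "list",
                    Group:    "machine.openshift.io",
                    Resource: "machinehealthchecks",
                },
            },
        }
        res, err := client.AuthorizationV1().SubjectAccessReviews().Create(context.TODO(), sar, metav1.CreateOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Printf("allowed=%v reason=%q\n", res.Status.Allowed, res.Status.Reason)
    }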
I0216 05:42:52.342744 1 gather.go:180] gatherer "clusterconfig" function "ceph_cluster" took 18.379648ms to process 0 records I0216 05:42:52.342778 1 tasks_processing.go:74] worker 62 stopped. I0216 05:42:52.342786 1 gather.go:180] gatherer "clusterconfig" function "machine_sets" took 19.484036ms to process 0 records I0216 05:42:52.342811 1 tasks_processing.go:74] worker 33 stopped. I0216 05:42:52.342822 1 gather.go:180] gatherer "clusterconfig" function "openshift_machine_api_events" took 20.029138ms to process 0 records I0216 05:42:52.342827 1 gather.go:180] gatherer "clusterconfig" function "openshift_logging" took 18.501577ms to process 0 records I0216 05:42:52.342832 1 tasks_processing.go:74] worker 30 stopped. I0216 05:42:52.342834 1 gather.go:180] gatherer "clusterconfig" function "cost_management_metrics_configs" took 18.86315ms to process 0 records I0216 05:42:52.342839 1 tasks_processing.go:74] worker 1 stopped. I0216 05:42:52.342866 1 gather_logs.go:145] no pods in openshift-kube-scheduler namespace were found I0216 05:42:52.342875 1 tasks_processing.go:74] worker 4 stopped. I0216 05:42:52.342879 1 gather.go:180] gatherer "clusterconfig" function "scheduler_logs" took 20.324092ms to process 0 records I0216 05:42:52.342937 1 gather_logs.go:145] no pods in openshift-authentication namespace were found I0216 05:42:52.342946 1 tasks_processing.go:74] worker 6 stopped. I0216 05:42:52.342949 1 gather.go:180] gatherer "clusterconfig" function "openshift_authentication_logs" took 20.384947ms to process 0 records I0216 05:42:52.343004 1 tasks_processing.go:74] worker 18 stopped. I0216 05:42:52.343053 1 recorder.go:75] Recording config/storage/storageclasses/gp2-csi with fingerprint=60e848b8ededd06f795933cadbc721be5abe69372fa8ffb07185871839a8ecf7 I0216 05:42:52.343065 1 recorder.go:75] Recording config/storage/storageclasses/gp3-csi with fingerprint=89e4d839238216c7adc455a2e519f1e0cf759227a07f3c7ada5db437e5e50972 I0216 05:42:52.343070 1 gather.go:180] gatherer "clusterconfig" function "storage_classes" took 20.449931ms to process 2 records W0216 05:42:52.345945 1 gather_dvo_metrics.go:210] Failed to read the DVO metrics. Trying again. I0216 05:42:52.352246 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController I0216 05:42:52.352297 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file I0216 05:42:52.352386 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file I0216 05:42:52.352556 1 tasks_processing.go:74] worker 13 stopped. I0216 05:42:52.352567 1 gather.go:180] gatherer "clusterconfig" function "lokistack" took 26.099457ms to process 0 records I0216 05:42:52.352699 1 tasks_processing.go:74] worker 21 stopped. 
I0216 05:42:52.352895 1 recorder.go:75] Recording config/node/ip-10-0-139-20.ec2.internal with fingerprint=813602728140aa8c9ad077f9e7eaea635fcf6b338a7c0be0619381642d8a15b8 I0216 05:42:52.352934 1 recorder.go:75] Recording config/node/ip-10-0-156-252.ec2.internal with fingerprint=9a4773e480da8b588681794b38b4c89a7304bbd986e6578625adc3e8890686ea I0216 05:42:52.352968 1 recorder.go:75] Recording config/node/ip-10-0-173-185.ec2.internal with fingerprint=75987fb324cccea44f859cc66d24890d7bc36db166466066cabecddb59a8359d I0216 05:42:52.352977 1 gather.go:180] gatherer "clusterconfig" function "nodes" took 30.121484ms to process 3 records I0216 05:42:52.353638 1 gather_logs.go:145] no pods in openshift-apiserver-operator namespace were found I0216 05:42:52.353652 1 tasks_processing.go:74] worker 20 stopped. I0216 05:42:52.353659 1 gather.go:180] gatherer "clusterconfig" function "openshift_apiserver_operator_logs" took 30.183501ms to process 0 records E0216 05:42:52.353666 1 gather.go:143] gatherer "clusterconfig" function "machines" failed with the error: machines.machine.openshift.io is forbidden: User "system:serviceaccount:openshift-insights:gather" cannot list resource "machines" in API group "machine.openshift.io" at the cluster scope I0216 05:42:52.353707 1 tasks_processing.go:74] worker 52 stopped. I0216 05:42:52.353707 1 gather.go:180] gatherer "clusterconfig" function "machines" took 28.927814ms to process 0 records I0216 05:42:52.353715 1 gather.go:180] gatherer "clusterconfig" function "openstack_version" took 29.236631ms to process 0 records I0216 05:42:52.353757 1 tasks_processing.go:74] worker 34 stopped. I0216 05:42:52.353779 1 recorder.go:75] Recording config/proxy with fingerprint=7cdde73e99f87a4c56ff45b6954a32f5e424a51b3b22886fbfd567d3e7fbb38e I0216 05:42:52.353791 1 gather.go:180] gatherer "clusterconfig" function "proxies" took 30.063575ms to process 1 records I0216 05:42:52.353796 1 gather.go:180] gatherer "clusterconfig" function "nodenetworkconfigurationpolicies" took 29.025582ms to process 0 records I0216 05:42:52.353815 1 tasks_processing.go:74] worker 46 stopped. I0216 05:42:52.353822 1 tasks_processing.go:74] worker 38 stopped. I0216 05:42:52.353894 1 tasks_processing.go:74] worker 44 stopped. I0216 05:42:52.353909 1 recorder.go:75] Recording config/featuregate with fingerprint=88329efc34c8a533e2f90213fc096b98a8f01825ac385ed091c2b9bdc20c46ff I0216 05:42:52.353920 1 gather.go:180] gatherer "clusterconfig" function "feature_gates" took 30.263686ms to process 1 records I0216 05:42:52.353970 1 recorder.go:75] Recording config/image with fingerprint=b1a38ba50ec8574dec20ebe3ac3b1dbe9cd8f51169e7dd4c59b97a97ec20f7a7 I0216 05:42:52.353976 1 gather.go:180] gatherer "clusterconfig" function "image" took 29.447063ms to process 1 records I0216 05:42:52.353978 1 tasks_processing.go:74] worker 10 stopped. I0216 05:42:52.354007 1 recorder.go:75] Recording config/schedulers/cluster with fingerprint=2f747c4620cc1d5aa4bbc0d73f7f768063e0526669af3ff13e6e98a8fb394639 I0216 05:42:52.354016 1 gather.go:180] gatherer "clusterconfig" function "schedulers" took 29.979324ms to process 1 records I0216 05:42:52.354025 1 tasks_processing.go:74] worker 59 stopped. I0216 05:42:52.354092 1 tasks_processing.go:74] worker 47 stopped. 
I0216 05:42:52.354097 1 recorder.go:75] Recording config/apiserver with fingerprint=630f9975bdeaedf72f274431e113c28460d6eb8d1d89e930659eeb96bfecb3ac I0216 05:42:52.354101 1 gather.go:180] gatherer "clusterconfig" function "cluster_apiserver" took 30.174994ms to process 1 records I0216 05:42:52.354192 1 tasks_processing.go:74] worker 42 stopped. I0216 05:42:52.354199 1 recorder.go:75] Recording config/ingress with fingerprint=d39c9179d02c588726920fd0a0950ead61f43e19c2d3bdef7bd4369845d4e3ad I0216 05:42:52.354213 1 gather.go:180] gatherer "clusterconfig" function "ingress" took 31.192748ms to process 1 records I0216 05:42:52.354292 1 tasks_processing.go:74] worker 63 stopped. I0216 05:42:52.354325 1 recorder.go:75] Recording config/authentication with fingerprint=a0fae14e3f9efabfbeab5f68febdc606f54130cabe800499d95232df6be770dc I0216 05:42:52.354331 1 gather.go:180] gatherer "clusterconfig" function "authentication" took 30.611006ms to process 1 records I0216 05:42:52.354551 1 tasks_processing.go:74] worker 28 stopped. E0216 05:42:52.354557 1 gather.go:143] gatherer "clusterconfig" function "pod_network_connectivity_checks" failed with the error: the server could not find the requested resource (get podnetworkconnectivitychecks.controlplane.operator.openshift.io) I0216 05:42:52.354562 1 gather.go:180] gatherer "clusterconfig" function "pod_network_connectivity_checks" took 29.919515ms to process 0 records I0216 05:42:52.354570 1 tasks_processing.go:74] worker 9 stopped. I0216 05:42:52.354579 1 gather.go:180] gatherer "clusterconfig" function "nodenetworkstates" took 30.140012ms to process 0 records I0216 05:42:52.354606 1 tasks_processing.go:74] worker 58 stopped. I0216 05:42:52.354622 1 gather.go:180] gatherer "clusterconfig" function "node_logs" took 31.900786ms to process 0 records I0216 05:42:52.354726 1 tasks_processing.go:74] worker 61 stopped. I0216 05:42:52.355106 1 recorder.go:75] Recording config/infrastructure with fingerprint=c7310c25be1dacd255b080cb1dc135871dc6ec662437d5aefbcb414e511ded30 I0216 05:42:52.355117 1 gather.go:180] gatherer "clusterconfig" function "infrastructures" took 32.037871ms to process 1 records I0216 05:42:52.355190 1 tasks_processing.go:74] worker 48 stopped. 
I0216 05:42:52.355239 1 recorder.go:75] Recording config/validatingwebhookconfigurations/multus.openshift.io with fingerprint=4977292f35a9262bfe630e5b61affddcaef4e1bcd5b774f50e3b720dfe0b0c09 I0216 05:42:52.355276 1 recorder.go:75] Recording config/validatingwebhookconfigurations/network-node-identity.openshift.io with fingerprint=ab3616b6acb1cb2a4dc6bd6ea230a7412bef8c4d28f151665f14973bed7c0a77 I0216 05:42:52.355289 1 recorder.go:75] Recording config/validatingwebhookconfigurations/performance-addon-operator with fingerprint=3b307b85528d0ff1397df879bbd3df9aa1bd465925e0cc86642f3b1116e15ef9 I0216 05:42:52.355303 1 recorder.go:75] Recording config/validatingwebhookconfigurations/snapshot.storage.k8s.io with fingerprint=3fa1bf070332574d7419ec62f07eb7549861ba617a012a054babc03800f83a51 I0216 05:42:52.355324 1 recorder.go:75] Recording config/validatingwebhookconfigurations/sre-clusterrolebindings-validation with fingerprint=b982e2665d439348c0db3f9ebf7374e5b2d100e4b284c1de801dba23b0483e0a I0216 05:42:52.355341 1 recorder.go:75] Recording config/validatingwebhookconfigurations/sre-clusterroles-validation with fingerprint=aa3eb328c48efb9c10b97535bb9edab145a3ca57d4f27da8ef1496615e8dc576 I0216 05:42:52.355357 1 recorder.go:75] Recording config/validatingwebhookconfigurations/sre-ingress-config-validation with fingerprint=d4a96edae2e8a6806712a239c8e84090adf536959120748706658aa1c8e68a86 I0216 05:42:52.355378 1 recorder.go:75] Recording config/validatingwebhookconfigurations/sre-regular-user-validation with fingerprint=e6c1f3b4c6ebd402f4eb52104fe4a1c9800941013ec6e26e895789d08c19010f I0216 05:42:52.355394 1 recorder.go:75] Recording config/validatingwebhookconfigurations/sre-scc-validation with fingerprint=7d9d5c50e8277c0f8b3d3d09f02f41b9b6de51085e0b0525d47183ce5f6e8e1b I0216 05:42:52.355413 1 recorder.go:75] Recording config/validatingwebhookconfigurations/sre-serviceaccount-validation with fingerprint=94f1c3dff38d40d08ef487eafe38002ddfbae60def83dd227ff12bd203230793 I0216 05:42:52.355430 1 recorder.go:75] Recording config/validatingwebhookconfigurations/sre-techpreviewnoupgrade-validation with fingerprint=3a3f095ec7edeeb677e0554c29b10e886cd77a5f6de480c3436efe656f0bc0b4 I0216 05:42:52.355437 1 gather.go:180] gatherer "clusterconfig" function "validating_webhook_configurations" took 31.25118ms to process 11 records I0216 05:42:52.355512 1 recorder.go:75] Recording config/network with fingerprint=1f20cbc2208d7ea1ca2f5f4e04135ffe695f2c974d19f857c626fb2cbd93e577 I0216 05:42:52.355513 1 tasks_processing.go:74] worker 60 stopped. I0216 05:42:52.355519 1 gather.go:180] gatherer "clusterconfig" function "networks" took 30.669507ms to process 1 records I0216 05:42:52.355807 1 gather_logs.go:145] no pods in namespace were found I0216 05:42:52.355822 1 tasks_processing.go:74] worker 11 stopped. I0216 05:42:52.355829 1 gather.go:180] gatherer "clusterconfig" function "qemu_kubevirt_launcher_logs" took 33.201667ms to process 0 records I0216 05:42:52.356053 1 tasks_processing.go:74] worker 45 stopped. I0216 05:42:52.356064 1 gather.go:180] gatherer "clusterconfig" function "certificate_signing_requests" took 32.45584ms to process 0 records I0216 05:42:52.363841 1 base_controller.go:73] Caches are synced for ConfigController I0216 05:42:52.363850 1 base_controller.go:110] Starting #1 worker of ConfigController controller ... I0216 05:42:52.365707 1 tasks_processing.go:74] worker 36 stopped. 
I0216 05:42:52.365720 1 gather.go:180] gatherer "clusterconfig" function "monitoring_persistent_volumes" took 41.487168ms to process 0 records I0216 05:42:52.365969 1 tasks_processing.go:74] worker 35 stopped. I0216 05:42:52.366243 1 gather_logs.go:145] no pods in openshift-kube-controller-manager namespace were found I0216 05:42:52.366602 1 recorder.go:75] Recording config/version with fingerprint=f27bb451f3fdc64684ba152c3a41c4af7efa27cf2a2affeff6c2459a92e293eb I0216 05:42:52.366625 1 recorder.go:75] Recording config/id with fingerprint=dfbb0b4ec684ace2a013333810ee8a2dcae148b71eaaae8ef1b6106f6a8e1a05 I0216 05:42:52.366640 1 gather.go:180] gatherer "clusterconfig" function "version" took 43.385111ms to process 2 records I0216 05:42:52.366650 1 gather.go:180] gatherer "clusterconfig" function "kube_controller_manager_logs" took 42.359926ms to process 0 records I0216 05:42:52.366666 1 tasks_processing.go:74] worker 51 stopped. I0216 05:42:52.367638 1 tasks_processing.go:74] worker 19 stopped. I0216 05:42:52.367742 1 recorder.go:75] Recording config/mutatingwebhookconfigurations/aws-pod-identity with fingerprint=34a92812fcc8d6ae1b3c4561d036768f2eadc03ce1722a71b7ee75ef88bd6b12 I0216 05:42:52.367783 1 sca.go:98] Pulling SCA certificates from https://api.openshift.com/api/accounts_mgmt/v1/certificates. Next check is in 8h0m0s I0216 05:42:52.367798 1 recorder.go:75] Recording config/mutatingwebhookconfigurations/sre-podimagespec-mutation with fingerprint=bd161221a758a8a5ce286c3b4a7c2cc5dfa6609436c35ec082b16ca81aa23aa0 I0216 05:42:52.367821 1 recorder.go:75] Recording config/mutatingwebhookconfigurations/sre-service-mutation with fingerprint=4d3af077118970557b73778a7af5ce710da3e4d3614c6efe6d076e9ecb8ccbdf I0216 05:42:52.367832 1 gather.go:180] gatherer "clusterconfig" function "mutating_webhook_configurations" took 43.608503ms to process 3 records I0216 05:42:52.367846 1 cluster_transfer.go:78] checking the availability of cluster transfer. Next check is in 12h0m0s W0216 05:42:52.367859 1 operator.go:286] started I0216 05:42:52.367874 1 base_controller.go:67] Waiting for caches to sync for LoggingSyncer I0216 05:42:52.367935 1 tasks_processing.go:74] worker 31 stopped. I0216 05:42:52.367977 1 recorder.go:75] Recording config/olm_operators with fingerprint=86d23f43e5560ad7f1b489d145839e358d867a4cf12b0c134394eb2d9f5a86fc I0216 05:42:52.367987 1 gather.go:180] gatherer "clusterconfig" function "olm_operators" took 45.211407ms to process 1 records I0216 05:42:52.368098 1 tasks_processing.go:74] worker 55 stopped. I0216 05:42:52.368238 1 recorder.go:75] Recording cluster-scoped-resources/rbac.authorization.k8s.io/clusterroles/admin with fingerprint=a3e50962965254ad1615eae051a1d3ab9a24b86c7b0eb95cb4305adf5ee4ec4e I0216 05:42:52.368288 1 recorder.go:75] Recording cluster-scoped-resources/rbac.authorization.k8s.io/clusterroles/edit with fingerprint=4027194eb4a7d8c6608a756090104ec920e4962833fedc0755e31ba78a553b5f I0216 05:42:52.368296 1 gather.go:180] gatherer "clusterconfig" function "clusterroles" took 44.703288ms to process 2 records I0216 05:42:52.368723 1 tasks_processing.go:74] worker 43 stopped. 
I0216 05:42:52.368983 1 recorder.go:75] Recording config/clusteroperator/imageregistry.operator.openshift.io/config/cluster with fingerprint=b96cc2f2abb2975cab5768b03b58812b0a3f761fa5d00fa12d6d066b1c29064d I0216 05:42:52.368993 1 gather.go:180] gatherer "clusterconfig" function "image_registries" took 44.524359ms to process 1 records I0216 05:42:52.369003 1 recorder.go:75] Recording config/namespaces_with_overlapping_uids with fingerprint=4f53cda18c2baa0c0354bb5f9a3ecbe5ed12ab4d8e11ba873c2f11161202b945 I0216 05:42:52.369008 1 gather.go:180] gatherer "clusterconfig" function "overlapping_namespace_uids" took 44.439491ms to process 1 records I0216 05:42:52.369012 1 tasks_processing.go:74] worker 29 stopped. I0216 05:42:52.375244 1 configmapobserver.go:84] configmaps "insights-config" not found I0216 05:42:52.376538 1 tasks_processing.go:74] worker 24 stopped. I0216 05:42:52.376561 1 gather.go:180] gatherer "clusterconfig" function "aggregated_monitoring_cr_names" took 53.910763ms to process 0 records I0216 05:42:52.377778 1 tasks_processing.go:74] worker 41 stopped. I0216 05:42:52.378637 1 recorder.go:75] Recording config/crd/volumesnapshots.snapshot.storage.k8s.io with fingerprint=77edfc99c350979be5ddbbb0bdf3459a2d38942146abb9d3f709ff3f80ab33ea I0216 05:42:52.378759 1 recorder.go:75] Recording config/crd/volumesnapshotcontents.snapshot.storage.k8s.io with fingerprint=eb420e00bb06af4f704406c25ec7aee85ec89ed40c1e1555e2460102e6998178 I0216 05:42:52.378768 1 gather.go:180] gatherer "clusterconfig" function "crds" took 53.673905ms to process 2 records I0216 05:42:52.382120 1 controller.go:203] Source periodic-clusterconfig *controllerstatus.Simple is not ready I0216 05:42:52.382130 1 controller.go:203] Source periodic-conditional *controllerstatus.Simple is not ready I0216 05:42:52.382133 1 controller.go:203] Source periodic-workloads *controllerstatus.Simple is not ready I0216 05:42:52.382136 1 controller.go:203] Source scaController *sca.Controller is not ready I0216 05:42:52.382139 1 controller.go:203] Source clusterTransferController *clustertransfer.Controller is not ready I0216 05:42:52.382152 1 controller.go:457] The operator is still being initialized I0216 05:42:52.382158 1 controller.go:482] The operator is healthy I0216 05:42:52.383126 1 tasks_processing.go:74] worker 0 stopped. I0216 05:42:52.384195 1 recorder.go:75] Recording config/pod/openshift-ovn-kubernetes/ovnkube-node-8srt9 with fingerprint=bc7f8cba8270fa74c564d8fed4988f3c54ccaff9d9b9390a84ccef61487c2807 I0216 05:42:52.384246 1 recorder.go:75] Recording config/running_containers with fingerprint=8d1462a36155941ace2e59c5ddb8370dd8ba75a20af7a7bd7fcd2fc9db3cb5d6 I0216 05:42:52.384256 1 gather.go:180] gatherer "clusterconfig" function "container_images" took 60.495783ms to process 2 records I0216 05:42:52.387686 1 tasks_processing.go:74] worker 8 stopped. 
I0216 05:42:52.387697 1 gather.go:180] gatherer "clusterconfig" function "number_of_pods_and_netnamespaces_with_sdn_annotations" took 64.935989ms to process 0 records
I0216 05:42:52.387824 1 requests.go:204] Asking for SCA certificate for x86_64 architecture
W0216 05:42:52.389607 1 sca.go:117] Failed to pull SCA certs from https://api.openshift.com/api/accounts_mgmt/v1/certificates: unable to retrieve SCA certs data from https://api.openshift.com/api/accounts_mgmt/v1/certificates: Post "https://api.openshift.com/api/accounts_mgmt/v1/certificates": dial tcp: lookup api.openshift.com on 172.30.0.10:53: read udp 10.128.0.13:59929->172.30.0.10:53: read: connection refused
E0216 05:42:52.389614 1 cluster_transfer.go:90] failed to pull cluster transfer: unable to retrieve cluster transfer data from https://api.openshift.com/api/accounts_mgmt/v1/cluster_transfers/: Get "https://api.openshift.com/api/accounts_mgmt/v1/cluster_transfers/?search=cluster_uuid+is+%!c(MISSING)3e63ac-46ed-4adf-a781-bbb947c16fb2%!+(MISSING)and+status+is+%!a(MISSING)ccepted%!"(MISSING): dial tcp: lookup api.openshift.com on 172.30.0.10:53: read udp 10.128.0.13:59929->172.30.0.10:53: read: connection refused
I0216 05:42:52.389619 1 controllerstatus.go:80] name=scaController healthy=true reason=NonHTTPError message=Failed to pull SCA certs from https://api.openshift.com/api/accounts_mgmt/v1/certificates: unable to retrieve SCA certs data from https://api.openshift.com/api/accounts_mgmt/v1/certificates: Post "https://api.openshift.com/api/accounts_mgmt/v1/certificates": dial tcp: lookup api.openshift.com on 172.30.0.10:53: read udp 10.128.0.13:59929->172.30.0.10:53: read: connection refused
I0216 05:42:52.389622 1 controllerstatus.go:80] name=clusterTransferController healthy=true reason=Disconnected message=failed to pull cluster transfer: unable to retrieve cluster transfer data from https://api.openshift.com/api/accounts_mgmt/v1/cluster_transfers/: Get "https://api.openshift.com/api/accounts_mgmt/v1/cluster_transfers/?search=cluster_uuid+is+%276c3e63ac-46ed-4adf-a781-bbb947c16fb2%27+and+status+is+%27accepted%27": dial tcp: lookup api.openshift.com on 172.30.0.10:53: read udp 10.128.0.13:59929->172.30.0.10:53: read: connection refused
I0216 05:42:52.390900 1 prometheus_rules.go:88] Prometheus rules successfully created
I0216 05:42:52.403963 1 tasks_processing.go:74] worker 57 stopped.
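The SCA and cluster transfer failures above share one root cause: the pod cannot resolve api.openshift.com because the cluster DNS service at 172.30.0.10:53 refuses the connection (the dns-default pods are still in ContainerCreating further down in this log). A short Go sketch that reproduces just that lookup against the same resolver address (diagnostic only; the hostname and IP are taken from the error text):

    package main

    import (
        "context"
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Point the resolver at the cluster DNS service seen in the errors above.
        r := &net.Resolver{
            PreferGo: true,
            Dial: func(ctx context.Context, network, address string) (net.Conn, error) {
                d := net.Dialer{Timeout: 2 * time.Second}
                return d.DialContext(ctx, network, "172.30.0.10:53")
            },
        }

        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
        defer cancel()

        addrs, err := r.LookupHost(ctx, "api.openshift.com")
        if err != nil {
            // Expect the same "connection refused" as sca.go / cluster_transfer.go
            // report while the DNS pods are not running yet.
            fmt.Println("lookup failed:", err)
            return
        }
        fmt.Println("resolved:", addrs)
    }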
E0216 05:42:52.403978 1 gather.go:143] gatherer "clusterconfig" function "config_maps" failed with the error: configmaps "cluster-monitoring-config" not found
E0216 05:42:52.403985 1 gather.go:143] gatherer "clusterconfig" function "config_maps" failed with the error: configmaps "gateway-mode-config" not found
E0216 05:42:52.403987 1 gather.go:143] gatherer "clusterconfig" function "config_maps" failed with the error: configmaps "insights-config" not found
I0216 05:42:52.404011 1 recorder.go:75] Recording config/configmaps/openshift-config/kube-root-ca.crt/ca.crt with fingerprint=d476c7d3f5b104863f08f481b1264dcc68cc272ecefb0ecb709b18a6afab034d
I0216 05:42:52.404019 1 recorder.go:75] Recording config/configmaps/openshift-config/openshift-install/version with fingerprint=0bddb88b072029f25dde6f44cb877a44fb2f65ed4864939fbf7a3e42c0a485f6
I0216 05:42:52.404023 1 recorder.go:75] Recording config/configmaps/openshift-config/openshift-install/invoker with fingerprint=76b482f683cd3ef9da02debac5b26080a5aeb06ff768ee5c21117514dff29d8a
I0216 05:42:52.404026 1 recorder.go:75] Recording config/configmaps/openshift-config/openshift-service-ca.crt/service-ca.crt with fingerprint=e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
I0216 05:42:52.404044 1 recorder.go:75] Recording config/configmaps/openshift-config/rosa-brand-logo/rosa-brand-logo.svg with fingerprint=6ed8ca4dd7a8eee7249182bc006e9649ce84d76c551ddfaaa33e55d8c4cc1ed0
I0216 05:42:52.404050 1 recorder.go:75] Recording config/configmaps/kube-system/cluster-config-v1/install-config with fingerprint=ab3811c6b83fd7b8e920094cfa3080d1b4ee3c35ec4c8379437b21d27bd6608d
I0216 05:42:52.404054 1 gather.go:180] gatherer "clusterconfig" function "config_maps" took 81.131549ms to process 6 records
I0216 05:42:52.404227 1 tasks_processing.go:74] worker 5 stopped.
E0216 05:42:52.404238 1 gather.go:143] gatherer "clusterconfig" function "ingress_certificates" failed with the error: failed to get secret 'router-certs-default' in namespace 'openshift-ingress': secrets "router-certs-default" not found
E0216 05:42:52.404242 1 gather.go:143] gatherer "clusterconfig" function "ingress_certificates" failed with the error: failed to get secret '2og5o3o4rb85d5hrbbgpli04lg1033cq-primary-cert-bundle-secret' in namespace 'openshift-ingress-operator': secrets "2og5o3o4rb85d5hrbbgpli04lg1033cq-primary-cert-bundle-secret" not found
I0216 05:42:52.404284 1 recorder.go:75] Recording aggregated/ingress_controllers_certs with fingerprint=61c9c8fbf6829c915e1204434c5f0c64e65707868798340a5c25db8caab4bc41
I0216 05:42:52.404293 1 gather.go:180] gatherer "clusterconfig" function "ingress_certificates" took 81.482294ms to process 1 records
I0216 05:42:52.468756 1 base_controller.go:73] Caches are synced for LoggingSyncer
I0216 05:42:52.468768 1 base_controller.go:110] Starting #1 worker of LoggingSyncer controller ...
W0216 05:42:53.345978 1 gather_dvo_metrics.go:210] Failed to read the DVO metrics. Trying again.
I0216 05:42:53.592622 1 gather_cluster_operator_pods_and_events.go:119] Found 20 pods with 24 containers
I0216 05:42:53.592706 1 gather_cluster_operator_pods_and_events.go:233] Maximum buffer size: 1048576 bytes
I0216 05:42:53.592835 1 gather_cluster_operator_pods_and_events.go:363] Fetching logs for dns container dns-default-6vhgk pod in namespace openshift-dns (previous: false).
I0216 05:42:53.598820 1 gather_cluster_operators.go:184] Unable to get configs.samples.operator.openshift.io resource due to: configs.samples.operator.openshift.io "cluster" not found
I0216 05:42:53.862095 1 gather_cluster_operator_pods_and_events.go:406] Failed to fetch log for dns-default-6vhgk pod in namespace openshift-dns for failing operator dns (previous: false): "container \"dns\" in pod \"dns-default-6vhgk\" is waiting to start: ContainerCreating"
I0216 05:42:53.862121 1 gather_cluster_operator_pods_and_events.go:278] Error: "container \"dns\" in pod \"dns-default-6vhgk\" is waiting to start: ContainerCreating"
I0216 05:42:53.862131 1 gather_cluster_operator_pods_and_events.go:363] Fetching logs for kube-rbac-proxy container dns-default-6vhgk pod in namespace openshift-dns (previous: false).
I0216 05:42:54.008794 1 gather_cluster_operator_pods_and_events.go:406] Failed to fetch log for dns-default-6vhgk pod in namespace openshift-dns for failing operator kube-rbac-proxy (previous: false): "container \"kube-rbac-proxy\" in pod \"dns-default-6vhgk\" is waiting to start: ContainerCreating"
I0216 05:42:54.008816 1 gather_cluster_operator_pods_and_events.go:278] Error: "container \"kube-rbac-proxy\" in pod \"dns-default-6vhgk\" is waiting to start: ContainerCreating"
I0216 05:42:54.008829 1 gather_cluster_operator_pods_and_events.go:363] Fetching logs for dns container dns-default-h227c pod in namespace openshift-dns (previous: false).
I0216 05:42:54.200495 1 tasks_processing.go:74] worker 40 stopped.
I0216 05:42:54.200550 1 recorder.go:75] Recording config/clusteroperator/console with fingerprint=5903b3e8735115318e205163ca7c7d765fddeaade86c1c0f5d2c655e903c6201
I0216 05:42:54.200581 1 recorder.go:75] Recording config/clusteroperator/csi-snapshot-controller with fingerprint=ce37cc4659d950a941aa3bda11223a3e85a84226abb506b3fbe49999a7365fef
I0216 05:42:54.200621 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/csisnapshotcontroller/cluster with fingerprint=5adc514f4b63e2f1ecc68bf6f9c0af70c5eea04522a49524e102721b1c41f80e
I0216 05:42:54.200643 1 recorder.go:75] Recording config/clusteroperator/dns with fingerprint=81d43adf14d7d24ccc797ff7dd830880b20159f24f62f9968114057ad6c20ea5
I0216 05:42:54.200660 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/dns/default with fingerprint=9e7b4ce029030d3d8c3b49af92c556acdcc415000b40d3f969dbdc42c432b47f
I0216 05:42:54.200676 1 recorder.go:75] Recording config/clusteroperator/image-registry with fingerprint=ca0bd1fa660ac902792bbe3b611e6106ede9acbfcd7540b2e83b2c5dc6879d4c
I0216 05:42:54.200701 1 recorder.go:75] Recording config/clusteroperator/ingress with fingerprint=7bbf90e57da764f9d64b1c1565fcaa7c01e4b91985969511e088973c7b74a14f
I0216 05:42:54.200721 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/ingresscontroller/openshift-ingress-operator/default with fingerprint=cfc2d33437f2887ba6d9588ec730ec9cccf0069747cc42b8bd090427bd0a7e65
I0216 05:42:54.200733 1 recorder.go:75] Recording config/clusteroperator/insights with fingerprint=ecf75ca283721ff3650b7fe9f6bd0776d95d48ce09721302324d4fcdd03ff214
I0216 05:42:54.200747 1 recorder.go:75] Recording config/clusteroperator/kube-apiserver with fingerprint=7d431f73fc5ad36c0462fba55af8d947c5531c62b374cbb185733655d70d8069
I0216 05:42:54.200755 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/kubeapiserver/cluster with fingerprint=51503bf0b784fcf65ea46bcaf1f72ac1a5c4d5dc211934f18f27871efed05762
I0216 05:42:54.200765 1 recorder.go:75] Recording config/clusteroperator/kube-controller-manager with fingerprint=a19f50d05b52d5c3cfddbbddf00ab947e35d408937a9d24759d8e665482a590e
I0216 05:42:54.200775 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/kubecontrollermanager/cluster with fingerprint=ce90c0d4f367d7da085074268031798382ae7c54fdcb0a21f15a4818fe308c11
I0216 05:42:54.200785 1 recorder.go:75] Recording config/clusteroperator/kube-scheduler with fingerprint=60800005e2c5f261de27fc442d3d669fe6c39e418926176aac1f6c2f8c2d5ecc
I0216 05:42:54.200792 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/kubescheduler/cluster with fingerprint=f2940fb9fd20c19951dfc295eb363b7fba0c505f5ae61f01967a063099e6b60a
I0216 05:42:54.200805 1 recorder.go:75] Recording config/clusteroperator/kube-storage-version-migrator with fingerprint=a8548c6897a51447bbf581c0fa099fafe5ef9b7bef33beac2b76c47885546d91
I0216 05:42:54.200814 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/kubestorageversionmigrator/cluster with fingerprint=9351181aa7e6ada41ef581ab31e13516c6b934cc95710154bafb2eb222cb58db
I0216 05:42:54.200824 1 recorder.go:75] Recording config/clusteroperator/monitoring with fingerprint=942d9a6a204a666551920fe60bd2e85d329f402340176dc9fe80202d1341efd7
I0216 05:42:54.200893 1 recorder.go:75] Recording config/clusteroperator/network with fingerprint=9973b6022ac57bb1f91df19b0662db12f00555d695861c3b1b8bd9cb6e8231ca
I0216 05:42:54.200901 1 recorder.go:75] Recording config/clusteroperator/network.operator.openshift.io/operatorpki/openshift-ovn-kubernetes/ovn with fingerprint=626a89d20e0deaed5b6dfb533acfe65f4bb1618bd200a703b62e60c5d16d94ab
I0216 05:42:54.200907 1 recorder.go:75] Recording config/clusteroperator/network.operator.openshift.io/operatorpki/openshift-ovn-kubernetes/signer with fingerprint=90410b16914712b85b3c4578716ad8c0ae072e688f4cd1e022bf76f20da3506d
I0216 05:42:54.200922 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/network/cluster with fingerprint=a93d15eaecb455a0e40ecb2826eeecc1533899204ddd3c3921d15ab70af7ae75
I0216 05:42:54.200938 1 recorder.go:75] Recording config/clusteroperator/node-tuning with fingerprint=833fac2427185da8abfedc8220df60bee159ceee6d464fb52c431f10ca6c5be2
I0216 05:42:54.200953 1 recorder.go:75] Recording config/clusteroperator/openshift-apiserver with fingerprint=df839ca62a6ff2cc7c274a11df6c8b438d8a0be68f635561362c25b5a77c6d4f
I0216 05:42:54.200961 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/openshiftapiserver/cluster with fingerprint=e712e6cf27339b441e4ed1f4cde91dbde7e952698ba93407e4457db63a4a4c76
I0216 05:42:54.200971 1 recorder.go:75] Recording config/clusteroperator/openshift-controller-manager with fingerprint=dcafe14179bf9cdf34d474ea5da6aef879f2bb4f9bf4376552331f128c825bd0
I0216 05:42:54.200978 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/openshiftcontrollermanager/cluster with fingerprint=d71a0f4672f9b45d9fc8293bf1687afc650fd28d32e2e30de27523fe7b4eadf7
I0216 05:42:54.200989 1 recorder.go:75] Recording config/clusteroperator/openshift-samples with fingerprint=7abaedf427cffbc202e90e496260213e759cef2aa8c65019fd284742ce4a5b85
I0216 05:42:54.201000 1 recorder.go:75] Recording config/clusteroperator/operator-lifecycle-manager with fingerprint=89c82e11924ae48fc7097e0c394e3fc97affe6ec5ffff3e397c89c72b90763a6
I0216 05:42:54.201012 1 recorder.go:75] Recording config/clusteroperator/operator-lifecycle-manager-catalog with fingerprint=a94b1a71ef603e8df45f0298c68c4a90c2d2c42774fead2b5365dd1b321e10f6
I0216 05:42:54.201021 1 recorder.go:75] Recording config/clusteroperator/operator-lifecycle-manager-packageserver with fingerprint=d5980bcfb0d6175192aa7e20a1c4f31dcf298bbb8e2385783e968538e342303d
I0216 05:42:54.201035 1 recorder.go:75] Recording config/clusteroperator/service-ca with fingerprint=5760e94104d0b6cabfb59f3441462d09fab83e5f83c3a1c2a76088a9f131bfe1
I0216 05:42:54.201043 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/serviceca/cluster with fingerprint=812f7edc2cdb30e61e7f2b29454357a40b1a507a4b0c2b7729193b67f0e3b4aa
I0216 05:42:54.201059 1 recorder.go:75] Recording config/clusteroperator/storage with fingerprint=92911e6e5b14ef8eb30ae9286cba1af6cdfce9167216f59e04410df60e525d3e
I0216 05:42:54.201071 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/clustercsidriver/ebs.csi.aws.com with fingerprint=7e1ab8f8cfcd9d249b5b213939fe5144bb83db3725475461728bea44a002c3be
I0216 05:42:54.201079 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/storage/cluster with fingerprint=8e480f8c1ce1b39baac42d8ec780c57c2592929ae0c801b61ffad49ba13f33ad
I0216 05:42:54.201093 1 gather.go:180] gatherer "clusterconfig" function "operators" took 1.875938695s to process 36 records
I0216 05:42:54.215840 1 gather_cluster_operator_pods_and_events.go:406] Failed to fetch log for dns-default-h227c pod in namespace openshift-dns for failing operator dns (previous: false): "container \"dns\" in pod \"dns-default-h227c\" is waiting to start: ContainerCreating"
I0216 05:42:54.215854 1 gather_cluster_operator_pods_and_events.go:278] Error: "container \"dns\" in pod \"dns-default-h227c\" is waiting to start: ContainerCreating"
I0216 05:42:54.215862 1 gather_cluster_operator_pods_and_events.go:363] Fetching logs for kube-rbac-proxy container dns-default-h227c pod in namespace openshift-dns (previous: false).
W0216 05:42:54.345870 1 gather_dvo_metrics.go:210] Failed to read the DVO metrics. Trying again.
I0216 05:42:54.396998 1 gather_cluster_operator_pods_and_events.go:406] Failed to fetch log for dns-default-h227c pod in namespace openshift-dns for failing operator kube-rbac-proxy (previous: false): "container \"kube-rbac-proxy\" in pod \"dns-default-h227c\" is waiting to start: ContainerCreating"
I0216 05:42:54.397011 1 gather_cluster_operator_pods_and_events.go:278] Error: "container \"kube-rbac-proxy\" in pod \"dns-default-h227c\" is waiting to start: ContainerCreating"
I0216 05:42:54.397036 1 gather_cluster_operator_pods_and_events.go:363] Fetching logs for dns container dns-default-xxlpm pod in namespace openshift-dns (previous: false).
I0216 05:42:54.619963 1 gather_cluster_operator_pods_and_events.go:406] Failed to fetch log for dns-default-xxlpm pod in namespace openshift-dns for failing operator dns (previous: false): "container \"dns\" in pod \"dns-default-xxlpm\" is waiting to start: ContainerCreating"
I0216 05:42:54.619979 1 gather_cluster_operator_pods_and_events.go:278] Error: "container \"dns\" in pod \"dns-default-xxlpm\" is waiting to start: ContainerCreating"
I0216 05:42:54.619986 1 gather_cluster_operator_pods_and_events.go:363] Fetching logs for kube-rbac-proxy container dns-default-xxlpm pod in namespace openshift-dns (previous: false).
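Note: the repeated "is waiting to start: ContainerCreating" errors above come from per-container log requests against pods whose containers have not started yet. A minimal client-go sketch of such a request follows; the in-cluster client construction and the namespace/pod/container names are illustrative assumptions, not the operator's actual code.

package main

import (
	"context"
	"fmt"
	"io"
	"os"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// Assumes the sketch runs inside a cluster with suitable RBAC.
	cfg, err := rest.InClusterConfig()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Request logs for one container of one pod; names are placeholders.
	req := client.CoreV1().Pods("openshift-dns").GetLogs("dns-default-6vhgk",
		&corev1.PodLogOptions{Container: "dns", Previous: false})

	stream, err := req.Stream(context.TODO())
	if err != nil {
		// While the container is still ContainerCreating, the API server
		// returns an error like the ones recorded in the log above.
		fmt.Fprintln(os.Stderr, "log fetch failed:", err)
		return
	}
	defer stream.Close()
	io.Copy(os.Stdout, stream)
}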
I0216 05:42:54.797450 1 gather_cluster_operator_pods_and_events.go:406] Failed to fetch log for dns-default-xxlpm pod in namespace openshift-dns for failing operator kube-rbac-proxy (previous: false): "container \"kube-rbac-proxy\" in pod \"dns-default-xxlpm\" is waiting to start: ContainerCreating"
I0216 05:42:54.797467 1 gather_cluster_operator_pods_and_events.go:278] Error: "container \"kube-rbac-proxy\" in pod \"dns-default-xxlpm\" is waiting to start: ContainerCreating"
I0216 05:42:54.797475 1 gather_cluster_operator_pods_and_events.go:363] Fetching logs for dns-node-resolver container node-resolver-4ntkt pod in namespace openshift-dns (previous: false).
I0216 05:42:54.997350 1 gather_cluster_operator_pods_and_events.go:278] Error: "log buffer is empty"
I0216 05:42:54.997365 1 gather_cluster_operator_pods_and_events.go:363] Fetching logs for dns-node-resolver container node-resolver-hln65 pod in namespace openshift-dns (previous: false).
I0216 05:42:55.197332 1 gather_cluster_operator_pods_and_events.go:278] Error: "log buffer is empty"
I0216 05:42:55.197349 1 gather_cluster_operator_pods_and_events.go:363] Fetching logs for dns-node-resolver container node-resolver-tp8rj pod in namespace openshift-dns (previous: false).
W0216 05:42:55.345742 1 gather_dvo_metrics.go:210] Failed to read the DVO metrics. Trying again.
I0216 05:42:55.398539 1 gather_cluster_operator_pods_and_events.go:278] Error: "log buffer is empty"
I0216 05:42:55.398587 1 gather_cluster_operator_pods_and_events.go:363] Fetching logs for registry container image-registry-579c47995f-hn4xg pod in namespace openshift-image-registry (previous: false).
I0216 05:42:55.596737 1 gather_cluster_operator_pods_and_events.go:406] Failed to fetch log for image-registry-579c47995f-hn4xg pod in namespace openshift-image-registry for failing operator registry (previous: false): "container \"registry\" in pod \"image-registry-579c47995f-hn4xg\" is waiting to start: ContainerCreating"
I0216 05:42:55.596755 1 gather_cluster_operator_pods_and_events.go:278] Error: "container \"registry\" in pod \"image-registry-579c47995f-hn4xg\" is waiting to start: ContainerCreating"
I0216 05:42:55.596795 1 gather_cluster_operator_pods_and_events.go:363] Fetching logs for registry container image-registry-579c47995f-pcjtk pod in namespace openshift-image-registry (previous: false).
I0216 05:42:55.796574 1 gather_cluster_operator_pods_and_events.go:406] Failed to fetch log for image-registry-579c47995f-pcjtk pod in namespace openshift-image-registry for failing operator registry (previous: false): "container \"registry\" in pod \"image-registry-579c47995f-pcjtk\" is waiting to start: ContainerCreating"
I0216 05:42:55.796589 1 gather_cluster_operator_pods_and_events.go:278] Error: "container \"registry\" in pod \"image-registry-579c47995f-pcjtk\" is waiting to start: ContainerCreating"
I0216 05:42:55.796626 1 gather_cluster_operator_pods_and_events.go:363] Fetching logs for registry container image-registry-5b786844cb-s9xt6 pod in namespace openshift-image-registry (previous: false).
I0216 05:42:55.996437 1 gather_cluster_operator_pods_and_events.go:406] Failed to fetch log for image-registry-5b786844cb-s9xt6 pod in namespace openshift-image-registry for failing operator registry (previous: false): "container \"registry\" in pod \"image-registry-5b786844cb-s9xt6\" is waiting to start: ContainerCreating"
I0216 05:42:55.996453 1 gather_cluster_operator_pods_and_events.go:278] Error: "container \"registry\" in pod \"image-registry-5b786844cb-s9xt6\" is waiting to start: ContainerCreating"
I0216 05:42:55.996461 1 gather_cluster_operator_pods_and_events.go:363] Fetching logs for node-ca container node-ca-5gczw pod in namespace openshift-image-registry (previous: false).
I0216 05:42:56.197358 1 gather_cluster_operator_pods_and_events.go:278] Error: "log buffer is empty"
I0216 05:42:56.197376 1 gather_cluster_operator_pods_and_events.go:363] Fetching logs for node-ca container node-ca-b6n9h pod in namespace openshift-image-registry (previous: false).
W0216 05:42:56.345506 1 gather_dvo_metrics.go:210] Failed to read the DVO metrics. Trying again.
I0216 05:42:56.396027 1 gather_cluster_operator_pods_and_events.go:278] Error: "log buffer is empty"
I0216 05:42:56.396041 1 gather_cluster_operator_pods_and_events.go:363] Fetching logs for node-ca container node-ca-qx49h pod in namespace openshift-image-registry (previous: false).
I0216 05:42:56.598214 1 gather_cluster_operator_pods_and_events.go:278] Error: "log buffer is empty"
I0216 05:42:56.598230 1 gather_cluster_operator_pods_and_events.go:363] Fetching logs for router container router-default-5dc988d5d9-nfg6v pod in namespace openshift-ingress (previous: false).
I0216 05:42:56.797158 1 gather_cluster_operator_pods_and_events.go:406] Failed to fetch log for router-default-5dc988d5d9-nfg6v pod in namespace openshift-ingress for failing operator router (previous: false): "container \"router\" in pod \"router-default-5dc988d5d9-nfg6v\" is waiting to start: ContainerCreating"
I0216 05:42:56.797196 1 gather_cluster_operator_pods_and_events.go:278] Error: "container \"router\" in pod \"router-default-5dc988d5d9-nfg6v\" is waiting to start: ContainerCreating"
I0216 05:42:56.797206 1 gather_cluster_operator_pods_and_events.go:363] Fetching logs for router container router-default-5dc988d5d9-nwqsk pod in namespace openshift-ingress (previous: false).
I0216 05:42:56.997230 1 gather_cluster_operator_pods_and_events.go:406] Failed to fetch log for router-default-5dc988d5d9-nwqsk pod in namespace openshift-ingress for failing operator router (previous: false): "container \"router\" in pod \"router-default-5dc988d5d9-nwqsk\" is waiting to start: ContainerCreating"
I0216 05:42:56.997247 1 gather_cluster_operator_pods_and_events.go:278] Error: "container \"router\" in pod \"router-default-5dc988d5d9-nwqsk\" is waiting to start: ContainerCreating"
I0216 05:42:56.997256 1 gather_cluster_operator_pods_and_events.go:363] Fetching logs for router container router-default-87c55d88f-2bvbr pod in namespace openshift-ingress (previous: false).
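Note: the "Failed to read the DVO metrics. Trying again." warnings interleaved above are retries against the deployment-validation-operator metrics service, which finally times out in the next block. A standalone probe of that service with the same 5-second budget could look like the sketch below; only the host and port come from the log, the /metrics path is an assumption.

package main

import (
	"context"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Endpoint host/port taken from the log; the path is an assumption.
	const url = "http://deployment-validation-operator-metrics.openshift-deployment-validation-operator.svc:8383/metrics"

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	req, _ := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		// With the service unreachable this fails with "context deadline exceeded",
		// matching the gatherer error recorded below.
		fmt.Println("DVO metrics not reachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("got %d bytes of metrics (status %s)\n", len(body), resp.Status)
}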
I0216 05:42:57.202973 1 gather_cluster_operator_pods_and_events.go:406] Failed to fetch log for router-default-87c55d88f-2bvbr pod in namespace openshift-ingress for failing operator router (previous: false): "container \"router\" in pod \"router-default-87c55d88f-2bvbr\" is waiting to start: ContainerCreating"
I0216 05:42:57.202995 1 gather_cluster_operator_pods_and_events.go:278] Error: "container \"router\" in pod \"router-default-87c55d88f-2bvbr\" is waiting to start: ContainerCreating"
I0216 05:42:57.203033 1 gather_cluster_operator_pods_and_events.go:363] Fetching logs for serve-healthcheck-canary container ingress-canary-6qrbn pod in namespace openshift-ingress-canary (previous: false).
W0216 05:42:57.346266 1 gather_dvo_metrics.go:210] Failed to read the DVO metrics. Trying again.
W0216 05:42:57.346291 1 gather_dvo_metrics.go:117] Unable to read metrics from endpoint "http://deployment-validation-operator-metrics.openshift-deployment-validation-operator.svc:8383": DVO metrics service was not available within the 5s timeout: context deadline exceeded
I0216 05:42:57.346303 1 tasks_processing.go:74] worker 54 stopped.
E0216 05:42:57.346312 1 gather.go:143] gatherer "clusterconfig" function "dvo_metrics" failed with the error: DVO metrics service was not available within the 5s timeout: context deadline exceeded
I0216 05:42:57.346322 1 recorder.go:75] Recording config/dvo_metrics with fingerprint=e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
W0216 05:42:57.346336 1 gather.go:158] issue recording gatherer "clusterconfig" function "dvo_metrics" result "config/dvo_metrics" because of the warning: warning: the record with the same fingerprint "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855" was already recorded at path "config/configmaps/openshift-config/openshift-service-ca.crt/service-ca.crt", recording another one with a different path "config/dvo_metrics"
I0216 05:42:57.346346 1 gather.go:180] gatherer "clusterconfig" function "dvo_metrics" took 5.023715134s to process 1 records
I0216 05:42:57.397787 1 gather_cluster_operator_pods_and_events.go:406] Failed to fetch log for ingress-canary-6qrbn pod in namespace openshift-ingress-canary for failing operator serve-healthcheck-canary (previous: false): "container \"serve-healthcheck-canary\" in pod \"ingress-canary-6qrbn\" is waiting to start: ContainerCreating"
I0216 05:42:57.397802 1 gather_cluster_operator_pods_and_events.go:278] Error: "container \"serve-healthcheck-canary\" in pod \"ingress-canary-6qrbn\" is waiting to start: ContainerCreating"
I0216 05:42:57.397815 1 gather_cluster_operator_pods_and_events.go:363] Fetching logs for serve-healthcheck-canary container ingress-canary-8rw46 pod in namespace openshift-ingress-canary (previous: false).
I0216 05:42:57.596942 1 gather_cluster_operator_pods_and_events.go:406] Failed to fetch log for ingress-canary-8rw46 pod in namespace openshift-ingress-canary for failing operator serve-healthcheck-canary (previous: false): "container \"serve-healthcheck-canary\" in pod \"ingress-canary-8rw46\" is waiting to start: ContainerCreating"
I0216 05:42:57.596958 1 gather_cluster_operator_pods_and_events.go:278] Error: "container \"serve-healthcheck-canary\" in pod \"ingress-canary-8rw46\" is waiting to start: ContainerCreating"
I0216 05:42:57.596967 1 gather_cluster_operator_pods_and_events.go:363] Fetching logs for serve-healthcheck-canary container ingress-canary-q6hpf pod in namespace openshift-ingress-canary (previous: false).
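Note: the duplicate-fingerprint warning above is expected once you notice that e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 is the SHA-256 digest of empty input: no DVO metrics were read, and the service-ca.crt record at the other path was evidently empty as well, so both records hash to the same fingerprint. A two-line check:

package main

import (
	"crypto/sha256"
	"fmt"
)

func main() {
	// SHA-256 of zero bytes prints
	// e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
	fmt.Printf("%x\n", sha256.Sum256(nil))
}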
I0216 05:42:57.797230 1 gather_cluster_operator_pods_and_events.go:406] Failed to fetch log for ingress-canary-q6hpf pod in namespace openshift-ingress-canary for failing operator serve-healthcheck-canary (previous: false): "container \"serve-healthcheck-canary\" in pod \"ingress-canary-q6hpf\" is waiting to start: ContainerCreating"
I0216 05:42:57.797246 1 gather_cluster_operator_pods_and_events.go:278] Error: "container \"serve-healthcheck-canary\" in pod \"ingress-canary-q6hpf\" is waiting to start: ContainerCreating"
I0216 05:42:57.797256 1 gather_cluster_operator_pods_and_events.go:363] Fetching logs for migrator container migrator-6f6b87f846-tkkjc pod in namespace openshift-kube-storage-version-migrator (previous: false).
I0216 05:42:57.997910 1 gather_cluster_operator_pods_and_events.go:363] Fetching logs for graceful-termination container migrator-6f6b87f846-tkkjc pod in namespace openshift-kube-storage-version-migrator (previous: false).
I0216 05:42:58.198005 1 gather_cluster_operator_pods_and_events.go:363] Fetching logs for kube-storage-version-migrator-operator container kube-storage-version-migrator-operator-85dccb8957-pspsw pod in namespace openshift-kube-storage-version-migrator-operator (previous: false).
I0216 05:42:58.398616 1 tasks_processing.go:74] worker 53 stopped.
I0216 05:42:58.398709 1 recorder.go:75] Recording events/openshift-dns-operator with fingerprint=80febd7158d54170a122fc2b95ff5226917dbc54d5b59ada17d96b2d7c575bc8
I0216 05:42:58.398753 1 recorder.go:75] Recording events/openshift-dns with fingerprint=a452d675c5094814abbb8897f209c0a79e0df502270a89c6930cde2b7e8f3505
I0216 05:42:58.398809 1 recorder.go:75] Recording events/openshift-image-registry with fingerprint=6c85979492e11d268125425eea8af027ca9f1ff29c0a770e6c9e96b232732778
I0216 05:42:58.398831 1 recorder.go:75] Recording events/openshift-ingress-operator with fingerprint=1bdeb3804114bea165980b87877d282e50c71b6fdc07f73c6a323731a739d17f
I0216 05:42:58.398867 1 recorder.go:75] Recording events/openshift-ingress with fingerprint=2a9f1300e6bbe990d655868ae0206823b1975d9f3aa78563b0e894c818939261
I0216 05:42:58.398885 1 recorder.go:75] Recording events/openshift-ingress-canary with fingerprint=e9fb41b6caf9cf1c0c831426924c062fefeefc902ed4aaa979bfe3f534aff736
I0216 05:42:58.398901 1 recorder.go:75] Recording events/openshift-kube-storage-version-migrator with fingerprint=8b83b4ca86279116e81ef68435fcda18d5db2a67732bcf8b77ee5f92ad1ffb6a
I0216 05:42:58.398935 1 recorder.go:75] Recording events/openshift-kube-storage-version-migrator-operator with fingerprint=fe280b91eb860c48183d7e3cf55a546d9faca8b97dfa16da85335b16dd8bdf39
I0216 05:42:58.399028 1 recorder.go:75] Recording config/pod/openshift-dns/dns-default-xxlpm with fingerprint=26d8c4efbb548affe84fa7b32087a9984a608e39396dd11019bcdc55fdbb4836
I0216 05:42:58.399108 1 recorder.go:75] Recording config/pod/openshift-image-registry/image-registry-579c47995f-hn4xg with fingerprint=4cffbc6829b83f3e41edbf1be88726255468168e9eaa3a54c862a9853d0d2dbe
I0216 05:42:58.399166 1 recorder.go:75] Recording config/pod/openshift-image-registry/image-registry-579c47995f-pcjtk with fingerprint=3292b5b38006c4b7878a5e367afbc2a90cbb45e9a8bdcc1565c43796fc4b2a9c
I0216 05:42:58.399251 1 recorder.go:75] Recording config/pod/openshift-image-registry/image-registry-5b786844cb-s9xt6 with fingerprint=0eab27a5b237a3800b7ddbc0715966b3ebcbc46ec82158a66e7bd72c57488f70
I0216 05:42:58.399296 1 recorder.go:75] Recording config/pod/openshift-ingress-canary/ingress-canary-6qrbn with fingerprint=f90819a5d62094d323c7b1b9cd1ec0c09b25f47faef79add0a0fa6f9e691c788
I0216 05:42:58.399308 1 recorder.go:75] Recording config/pod/openshift-kube-storage-version-migrator/logs/migrator-6f6b87f846-tkkjc/migrator_current.log with fingerprint=80b164186ab49ef3e39b6037daf0501613be15564b60dfb4084a4255588755c2
I0216 05:42:58.399313 1 recorder.go:75] Recording config/pod/openshift-kube-storage-version-migrator/logs/migrator-6f6b87f846-tkkjc/graceful-termination_current.log with fingerprint=71218e4a8b8383d89326a2880c8bd4766ce829db6f3a467622e63c76610a9733
I0216 05:42:58.399336 1 recorder.go:75] Recording config/pod/openshift-kube-storage-version-migrator-operator/logs/kube-storage-version-migrator-operator-85dccb8957-pspsw/kube-storage-version-migrator-operator_current.log with fingerprint=6a8d1dae632de82d6ea289ad4e7867949f210a11cf5af1c67ee016c2e06bda56
I0216 05:42:58.399347 1 gather.go:180] gatherer "clusterconfig" function "operators_pods_and_events" took 6.069729719s to process 16 records
I0216 05:43:04.961218 1 tasks_processing.go:74] worker 39 stopped.
I0216 05:43:04.961260 1 recorder.go:75] Recording config/installplans with fingerprint=b6ae0e2549358513c087729c711e8e1ad6f2144adc0ffa716b1a475ed1e6ddde
I0216 05:43:04.961273 1 gather.go:180] gatherer "clusterconfig" function "install_plans" took 12.638450119s to process 1 records
I0216 05:43:05.539828 1 configmapobserver.go:84] configmaps "insights-config" not found
I0216 05:43:05.731233 1 tasks_processing.go:74] worker 17 stopped.
I0216 05:43:05.731443 1 recorder.go:75] Recording config/serviceaccounts with fingerprint=31d61b00db1b6c6e4f94fabe25f220a01ff609d3c737340c15110c154325b279
I0216 05:43:05.731476 1 gather.go:180] gatherer "clusterconfig" function "service_accounts" took 13.406484103s to process 1 records
E0216 05:43:05.731520 1 periodic.go:252] clusterconfig failed after 13.409s with: function "machine_healthchecks" failed with an error, function "support_secret" failed with an error, function "machines" failed with an error, function "pod_network_connectivity_checks" failed with an error, function "config_maps" failed with an error, function "ingress_certificates" failed with an error, function "dvo_metrics" failed with an error
I0216 05:43:05.731531 1 controllerstatus.go:89] name=periodic-clusterconfig healthy=false reason=PeriodicGatherFailed message=Source clusterconfig could not be retrieved: function "machine_healthchecks" failed with an error, function "support_secret" failed with an error, function "machines" failed with an error, function "pod_network_connectivity_checks" failed with an error, function "config_maps" failed with an error, function "ingress_certificates" failed with an error, function "dvo_metrics" failed with an error
I0216 05:43:05.731537 1 periodic.go:214] Running workloads gatherer
I0216 05:43:05.731548 1 tasks_processing.go:45] number of workers: 2
I0216 05:43:05.731553 1 tasks_processing.go:69] worker 1 listening for tasks.
I0216 05:43:05.731557 1 tasks_processing.go:71] worker 1 working on helmchart_info task.
I0216 05:43:05.731564 1 tasks_processing.go:69] worker 0 listening for tasks.
I0216 05:43:05.731583 1 tasks_processing.go:71] worker 0 working on workload_info task.
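Note: the tasks_processing entries ("number of workers: N", "worker X listening for tasks.", "worker X working on ... task.", "worker X stopped.") reflect a channel-fed worker pool running the gather functions concurrently. The following is only a minimal sketch of that pattern, reusing the task names from the log, not the operator's actual implementation.

package main

import (
	"fmt"
	"sync"
)

func main() {
	// Task names taken from the log; the pool itself is illustrative.
	tasks := []string{"workload_info", "helmchart_info"}
	const workers = 2

	ch := make(chan string)
	var wg sync.WaitGroup

	for i := 0; i < workers; i++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			fmt.Printf("worker %d listening for tasks.\n", id)
			for task := range ch {
				fmt.Printf("worker %d working on %s task.\n", id, task)
				// ... run the gather function for this task ...
			}
			fmt.Printf("worker %d stopped.\n", id)
		}(i)
	}

	for _, t := range tasks {
		ch <- t
	}
	close(ch)
	wg.Wait()
}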
I0216 05:43:05.741883 1 configmapobserver.go:84] configmaps "insights-config" not found
I0216 05:43:05.753879 1 gather_workloads_info.go:257] Loaded pods in 0s, will wait 22s for image data
I0216 05:43:05.764634 1 gather_workloads_info.go:366] No image sha256:79449e16b1207223f1209d19888b879eb56a8202c53df4800e09b231392cf219 (11ms)
I0216 05:43:05.767006 1 tasks_processing.go:74] worker 1 stopped.
I0216 05:43:05.767021 1 gather.go:180] gatherer "workloads" function "helmchart_info" took 35.439822ms to process 0 records
I0216 05:43:05.774665 1 gather_workloads_info.go:366] No image sha256:0f31e990f9ca9d15dcb1b25325c8265515fcc06381909349bb021103827585c6 (10ms)
I0216 05:43:05.784513 1 gather_workloads_info.go:366] No image sha256:0d1d37dbdb726e924b519ef27e52e9719601fab838ae75f72c8aca11e8c3b4cc (10ms)
I0216 05:43:05.794354 1 gather_workloads_info.go:366] No image sha256:b34e84d56775e42b7d832d14c4f9dc302fee37cd81ba221397cd8acba2089d20 (10ms)
I0216 05:43:05.804027 1 gather_workloads_info.go:366] No image sha256:2bf8536171476b2d616cf62b4d94d2f1dae34aca6ea6bfdb65e764a8d9675891 (10ms)
I0216 05:43:05.813503 1 gather_workloads_info.go:366] No image sha256:43e426ac9df633be58006907aede6f9b6322c6cc7985cd43141ad7518847c637 (9ms)
I0216 05:43:05.823280 1 gather_workloads_info.go:366] No image sha256:357821852af925e0c8a19df2f9fceec8d2e49f9d13575b86ecd3fbedce488afa (10ms)
I0216 05:43:05.833387 1 gather_workloads_info.go:366] No image sha256:3958f525bae8ad011915244c9c8c1c2c750b761094046b2719fae36f6ac8903c (10ms)
I0216 05:43:05.843203 1 gather_workloads_info.go:366] No image sha256:f82357030795138d2081ecc5172092222b0f4faea27e9a7a0474fbeae29111ad (10ms)
I0216 05:43:05.853005 1 gather_workloads_info.go:366] No image sha256:f550296753e9898c67d563b7deb16ba540ca1367944c905415f35537b6b949d4 (10ms)
I0216 05:43:05.864997 1 gather_workloads_info.go:366] No image sha256:586e9c2756f50e562a6123f47fe38dba5496b63413c3dd18e0b85d6167094f0c (12ms)
I0216 05:43:05.964811 1 gather_workloads_info.go:366] No image sha256:185305b7da4ef5b90a90046f145e8c66bab3a16b12771d2e98bf78104d6a60f2 (100ms)
I0216 05:43:06.065153 1 gather_workloads_info.go:366] No image sha256:2121717e0222b9e8892a44907b461a4f62b3f1e5429a0e2eee802d48d04fff30 (100ms)
I0216 05:43:06.165369 1 gather_workloads_info.go:366] No image sha256:88e6cc2192e682bb9c4ac5aec8e41254696d909c5dc337e720b9ec165a728064 (100ms)
I0216 05:43:06.264809 1 gather_workloads_info.go:366] No image sha256:29e41a505a942a77c0d5f954eb302c01921cb0c0d176066fe63f82f3e96e3923 (99ms)
I0216 05:43:06.365240 1 gather_workloads_info.go:366] No image sha256:712ad2760c350db1e23b9393bdda83149452931dc7b5a5038a3bcdb4663917c0 (100ms)
I0216 05:43:06.464093 1 gather_workloads_info.go:366] No image sha256:822db36f8e1353ac24785b88d1fb2150d3ef34a5e739c1f67b61079336e9798b (99ms)
I0216 05:43:06.565338 1 gather_workloads_info.go:366] No image sha256:33d7e5c63340e93b5a063de538017ac693f154e3c27ee2ef8a8a53bb45583552 (101ms)
I0216 05:43:06.664672 1 gather_workloads_info.go:366] No image sha256:64ef34275f7ea992f5a4739cf7a724e55806bfab0c752fc0eccc2f70dfecbaf4 (99ms)
I0216 05:43:06.774213 1 gather_workloads_info.go:366] No image sha256:036e6f9a4609a7499f200032dac2294e4a2d98764464ed17453ef725f2f0264d (110ms)
I0216 05:43:06.864855 1 gather_workloads_info.go:366] No image sha256:59f553035bc347fc7205f1c071897bc2606b98525d6b9a3aca62fc9cd7078a57 (91ms)
I0216 05:43:06.967191 1 gather_workloads_info.go:366] No image sha256:c822bd444a7bc53b21afb9372ff0a24961b2687073f3563c127cce5803801b04 (102ms)
I0216 05:43:07.065966 1 gather_workloads_info.go:366] No image sha256:2193d7361704b0ae4bca052e9158761e06ecbac9ca3f0a9c8f0f101127e8f370 (99ms)
I0216 05:43:07.166785 1 gather_workloads_info.go:366] No image sha256:457372d9f22e1c726ea1a6fcc54ddca8335bd607d2c357bcd7b63a7017aa5c2b (101ms)
I0216 05:43:07.267716 1 gather_workloads_info.go:366] No image sha256:5335f64616c3a6c55a9a6dc4bc084b46a4957fb4fc250afc5343e4547ebb3598 (101ms)
I0216 05:43:07.365030 1 gather_workloads_info.go:366] No image sha256:27e725f1250f6a17da5eba7ada315a244592b5b822d61e95722bb7e2f884b00f (97ms)
I0216 05:43:07.465597 1 gather_workloads_info.go:366] No image sha256:29d1672ef44c59d065737eca330075dd2f6da4ba743153973a739aa9e9d73ad3 (101ms)
I0216 05:43:07.564945 1 gather_workloads_info.go:366] No image sha256:deffb0293fd11f5b40609aa9e80b16b0f90a9480013b2b7f61bd350bbd9b6f07 (99ms)
I0216 05:43:07.665548 1 gather_workloads_info.go:366] No image sha256:91d9cb208e6d0c39a87dfe8276d162c75ff3fcd3b005b3e7b537f65c53475a42 (101ms)
I0216 05:43:07.765417 1 gather_workloads_info.go:366] No image sha256:9cc55a501aaad1adbefdd573e57c2f756a3a6a8723c43052995be6389edf1fa8 (100ms)
I0216 05:43:07.865234 1 gather_workloads_info.go:366] No image sha256:745f2186738a57bb1b484f68431e77aa2f68a1b8dcb434b1f7a4b429eafdf091 (100ms)
I0216 05:43:07.964506 1 gather_workloads_info.go:366] No image sha256:7f55b7dbfb15fe36d83d64027eacee22fb00688ccbc03550cc2dbedfa633f288 (99ms)
I0216 05:43:07.964535 1 tasks_processing.go:74] worker 0 stopped.
I0216 05:43:07.964748 1 recorder.go:75] Recording config/workload_info with fingerprint=e061525da61d3b65ad3dafbd604e9685e2396e5e8f331cb08d8525fa5bc05097
I0216 05:43:07.964768 1 gather.go:180] gatherer "workloads" function "workload_info" took 2.232938564s to process 1 records
I0216 05:43:07.964782 1 periodic.go:261] Periodic gather workloads completed in 2.233s
I0216 05:43:07.964788 1 controllerstatus.go:80] name=periodic-workloads healthy=true reason= message=
I0216 05:43:07.964793 1 periodic.go:214] Running conditional gatherer
I0216 05:43:07.971188 1 requests.go:282] Making HTTP GET request at: https://console.redhat.com/api/gathering/v2/4.17.48/gathering_rules
I0216 05:43:07.975903 1 conditional_gatherer.go:107] Get "https://console.redhat.com/api/gathering/v2/4.17.48/gathering_rules": dial tcp: lookup console.redhat.com on 172.30.0.10:53: read udp 10.128.0.13:37637->172.30.0.10:53: read: connection refused
E0216 05:43:07.976124 1 conditional_gatherer.go:324] unable to update alerts cache: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory
I0216 05:43:07.976193 1 conditional_gatherer.go:386] updating version cache for conditional gatherer
I0216 05:43:07.984911 1 conditional_gatherer.go:394] cluster version is '4.17.48'
E0216 05:43:07.984922 1 conditional_gatherer.go:211] error checking conditions for a gathering rule: alerts cache is missing
E0216 05:43:07.984927 1 conditional_gatherer.go:211] error checking conditions for a gathering rule: alerts cache is missing
E0216 05:43:07.984929 1 conditional_gatherer.go:211] error checking conditions for a gathering rule: alerts cache is missing
E0216 05:43:07.984931 1 conditional_gatherer.go:211] error checking conditions for a gathering rule: alerts cache is missing
E0216 05:43:07.984934 1 conditional_gatherer.go:211] error checking conditions for a gathering rule: alerts cache is missing
E0216 05:43:07.984936 1 conditional_gatherer.go:211] error checking conditions for a gathering rule: alerts cache is missing
E0216 05:43:07.984938 1 conditional_gatherer.go:211] error checking conditions for a gathering rule: alerts cache is missing
E0216 05:43:07.984940 1 conditional_gatherer.go:211] error checking conditions for a gathering rule: alerts cache is missing
E0216 05:43:07.984942 1 conditional_gatherer.go:211] error checking conditions for a gathering rule: alerts cache is missing
I0216 05:43:07.984953 1 tasks_processing.go:45] number of workers: 3
I0216 05:43:07.984965 1 tasks_processing.go:69] worker 2 listening for tasks.
I0216 05:43:07.984968 1 tasks_processing.go:71] worker 2 working on remote_configuration task.
I0216 05:43:07.984976 1 tasks_processing.go:69] worker 0 listening for tasks.
I0216 05:43:07.984986 1 tasks_processing.go:69] worker 1 listening for tasks.
I0216 05:43:07.984990 1 tasks_processing.go:71] worker 0 working on rapid_container_logs task.
I0216 05:43:07.984992 1 tasks_processing.go:74] worker 1 stopped.
I0216 05:43:07.984996 1 tasks_processing.go:71] worker 2 working on conditional_gatherer_rules task.
I0216 05:43:07.985008 1 recorder.go:75] Recording insights-operator/remote-configuration with fingerprint=0394430c431eec4d48bb1811a90918e95161d2282c59af26f2473613cc0959db
I0216 05:43:07.985022 1 gather.go:180] gatherer "conditional" function "remote_configuration" took 590ns to process 1 records
I0216 05:43:07.985071 1 recorder.go:75] Recording insights-operator/conditional-gatherer-rules with fingerprint=7034af97e7e41c22e4b775abdd4b9066c8ebb19da33eb7f69f39bfd2eb5f6406
I0216 05:43:07.985092 1 gather.go:180] gatherer "conditional" function "conditional_gatherer_rules" took 885ns to process 1 records
I0216 05:43:07.985097 1 tasks_processing.go:74] worker 2 stopped.
I0216 05:43:07.985161 1 tasks_processing.go:74] worker 0 stopped.
I0216 05:43:07.985189 1 gather.go:180] gatherer "conditional" function "rapid_container_logs" took 166.635µs to process 0 records
I0216 05:43:07.985212 1 controllerstatus.go:89] name=periodic-conditional healthy=false reason=NotAvailable message=Get "https://console.redhat.com/api/gathering/v2/4.17.48/gathering_rules": dial tcp: lookup console.redhat.com on 172.30.0.10:53: read udp 10.128.0.13:37637->172.30.0.10:53: read: connection refused
I0216 05:43:07.985223 1 recorder.go:75] Recording insights-operator/remote-configuration.json with fingerprint=359de9c990c741675cec72fda96b5c3682221efdb4799f5eaa6e9805bcd3b5c1
W0216 05:43:08.010269 1 gather.go:212] can't read cgroups memory usage data: open /sys/fs/cgroup/memory/memory.usage_in_bytes: no such file or directory
I0216 05:43:08.010375 1 recorder.go:75] Recording insights-operator/gathers with fingerprint=63a57d21efbb18af21c4c7fa22947c0741af12d4a58e4fbd558865beaf0182eb
I0216 05:43:08.010484 1 diskrecorder.go:70] Writing 110 records to /var/lib/insights-operator/insights-2026-02-16-054308.tar.gz
I0216 05:43:08.017383 1 diskrecorder.go:51] Wrote 110 records to disk in 6ms
I0216 05:43:08.017412 1 periodic.go:283] Gathering cluster info every 2h0m0s
I0216 05:43:08.017425 1 periodic.go:284] Configuration is dataReporting: interval: 2h0m0s, uploadEndpoint: https://console.redhat.com/api/ingress/v1/upload, storagePath: /var/lib/insights-operator, downloadEndpoint: https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/reports, conditionalGathererEndpoint: https://console.redhat.com/api/gathering/v2/%s/gathering_rules, obfuscation: [] sca: disabled: false, endpoint: https://api.openshift.com/api/accounts_mgmt/v1/certificates, interval: 8h0m0s alerting: disabled: false clusterTransfer: endpoint: https://api.openshift.com/api/accounts_mgmt/v1/cluster_transfers/, interval: 12h0m0s proxy: httpProxy: , httpsProxy: , noProxy:
I0216 05:43:15.650492 1 configmapobserver.go:84] configmaps "insights-config" not found
I0216 05:44:16.697779 1 observer_polling.go:111] Observed file "/var/run/secrets/serving-cert/tls.crt" has been created (hash="cf527c917ae94f2024944f3bc8032030bf6332f55a9cbeb9b572ce7173e03ced")
W0216 05:44:16.697810 1 builder.go:155] Restart triggered because of file /var/run/secrets/serving-cert/tls.crt was created
I0216 05:44:16.697837 1 observer_polling.go:111] Observed file "/var/run/secrets/serving-cert/tls.key" has been created (hash="2004df35a43834f3eb70f58b03b4e8223724828fdfaf75e6ba8d235c8e131b96")
I0216 05:44:16.697886 1 observer_polling.go:111] Observed file "/var/run/configmaps/service-ca-bundle/service-ca.crt" has been created (hash="e26c5e41f822a07fe364f98c79cabdbb03e5beee14a61f762ee3ef0973d9552e")
I0216 05:44:16.697924 1 simple_featuregate_reader.go:177] Shutting down feature-gate-detector
I0216 05:44:16.697934 1 genericapiserver.go:679] "[graceful-termination] pre-shutdown hooks completed" name="PreShutdownHooksStopped"
I0216 05:44:16.697978 1 genericapiserver.go:536] "[graceful-termination] shutdown event" name="ShutdownInitiated"
I0216 05:44:16.698006 1 periodic.go:175] Shutting down
I0216 05:44:16.698014 1 object_count_tracker.go:151] "StorageObjectCountTracker pruner is exiting"
I0216 05:44:16.698014 1 genericapiserver.go:637] "[graceful-termination] not going to wait for active watch request(s) to drain"
I0216 05:44:16.698025 1 requestheader_controller.go:183] Shutting down RequestHeaderAuthRequestController
I0216 05:44:16.698034 1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I0216 05:44:16.698045 1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
I0216 05:44:16.698054 1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
I0216 05:44:16.698054 1 base_controller.go:172] Shutting down LoggingSyncer ...
I0216 05:44:16.698068 1 base_controller.go:114] Shutting down worker of LoggingSyncer controller ...
E0216 05:44:16.698073 1 controller.go:290] Unable to write cluster operator status: client rate limiter Wait returned an error: context canceled
I0216 05:44:16.698078 1 base_controller.go:104] All LoggingSyncer workers have been terminated
I0216 05:44:16.698082 1 dynamic_serving_content.go:146] "Shutting down controller" name="serving-cert::/tmp/serving-cert-3889205171/tls.crt::/tmp/serving-cert-3889205171/tls.key"
I0216 05:44:16.698089 1 secure_serving.go:258] Stopped listening on [::]:8443
I0216 05:44:16.698088 1 base_controller.go:172] Shutting down ConfigController ...
I0216 05:44:16.698102 1 genericapiserver.go:586] "[graceful-termination] shutdown event" name="HTTPServerStoppedListening"
I0216 05:44:16.698112 1 genericapiserver.go:699] [graceful-termination] apiserver is exiting
I0216 05:44:16.698119 1 builder.go:330] server exited
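Note: the conditional gatherer ends up marked NotAvailable because console.redhat.com could not be resolved through the cluster DNS service at 172.30.0.10:53 (connection refused), which is consistent with the dns-default pods still being in ContainerCreating earlier in the log. A small resolver probe pinned to that DNS server, purely as a diagnostic sketch and not part of the operator:

package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	// Force lookups through the cluster DNS service address seen in the log.
	r := &net.Resolver{
		PreferGo: true,
		Dial: func(ctx context.Context, network, address string) (net.Conn, error) {
			d := net.Dialer{Timeout: 2 * time.Second}
			return d.DialContext(ctx, network, "172.30.0.10:53")
		},
	}

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	addrs, err := r.LookupHost(ctx, "console.redhat.com")
	if err != nil {
		// With the DNS pods not yet serving, this reproduces the
		// "read: connection refused" resolution error from the log.
		fmt.Println("lookup failed:", err)
		return
	}
	fmt.Println("resolved:", addrs)
}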