W0216 13:35:54.168907 1 cmd.go:245] Using insecure, self-signed certificates
I0216 13:35:54.337718 1 start.go:223] Unable to read service ca bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory
I0216 13:35:54.337951 1 observer_polling.go:159] Starting file observer
I0216 13:35:54.612323 1 operator.go:59] Starting insights-operator v0.0.0-master+$Format:%H$
I0216 13:35:54.612502 1 legacy_config.go:327] Current config: {"report":false,"storagePath":"/var/lib/insights-operator","interval":"2h","endpoint":"https://console.redhat.com/api/ingress/v1/upload","conditionalGathererEndpoint":"https://console.redhat.com/api/gathering/v2/%s/gathering_rules","pull_report":{"endpoint":"https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/reports","delay":"60s","timeout":"3000s","min_retry":"30s"},"impersonate":"system:serviceaccount:openshift-insights:gather","enableGlobalObfuscation":false,"ocm":{"scaEndpoint":"https://api.openshift.com/api/accounts_mgmt/v1/certificates","scaInterval":"8h","scaDisabled":false,"clusterTransferEndpoint":"https://api.openshift.com/api/accounts_mgmt/v1/cluster_transfers/","clusterTransferInterval":"12h"},"disableInsightsAlerts":false,"processingStatusEndpoint":"https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/request/%s/status","reportEndpointTechPreview":"https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/request/%s/report"}
I0216 13:35:54.612811 1 simple_featuregate_reader.go:171] Starting feature-gate-detector
I0216 13:35:54.613914 1 secure_serving.go:57] Forcing use of http/1.1 only
W0216 13:35:54.613952 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.
W0216 13:35:54.614046 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.
W0216 13:35:54.614056 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.
W0216 13:35:54.614061 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.
W0216 13:35:54.614194 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.
W0216 13:35:54.614198 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.
I0216 13:35:54.617585 1 operator.go:124] FeatureGates initialized: knownFeatureGates=[AWSEFSDriverVolumeMetrics AdditionalRoutingCapabilities AdminNetworkPolicy AlibabaPlatform AutomatedEtcdBackup AzureWorkloadIdentity BareMetalLoadBalancer BootcNodeManagement BuildCSIVolumes CSIDriverSharedResource ChunkSizeMiB CloudDualStackNodeIPs ClusterAPIInstall ClusterAPIInstallAWS ClusterAPIInstallAzure ClusterAPIInstallGCP ClusterAPIInstallIBMCloud ClusterAPIInstallNutanix ClusterAPIInstallOpenStack ClusterAPIInstallPowerVS ClusterAPIInstallVSphere ClusterMonitoringConfig DNSNameResolver DisableKubeletCloudCredentialProviders DynamicResourceAllocation EtcdBackendQuota EventedPLEG Example ExternalCloudProvider ExternalCloudProviderAzure ExternalCloudProviderExternal ExternalCloudProviderGCP ExternalOIDC GCPClusterHostedDNS GCPLabelsTags GatewayAPI HardwareSpeed IngressControllerLBSubnetsAWS InsightsConfig InsightsConfigAPI InsightsOnDemandDataGather InstallAlternateInfrastructureAWS KMSv1 MachineAPIMigration MachineAPIOperatorDisableMachineHealthCheckController MachineAPIProviderOpenStack MachineConfigNodes ManagedBootImages ManagedBootImagesAWS MaxUnavailableStatefulSet MetricsCollectionProfiles MetricsServer MixedCPUsAllocation MultiArchInstallAWS MultiArchInstallAzure MultiArchInstallGCP NetworkDiagnosticsConfig NetworkLiveMigration NetworkSegmentation NewOLM NodeDisruptionPolicy NodeSwap OVNObservability OnClusterBuild OpenShiftPodSecurityAdmission PersistentIPsForVirtualization PinnedImages PlatformOperators PrivateHostedZoneAWS ProcMountType RouteAdvertisements RouteExternalCertificate ServiceAccountTokenNodeBinding SetEIPForNLBIngressController SignatureStores SigstoreImageVerification StreamingCollectionEncodingToJSON StreamingCollectionEncodingToProtobuf TranslateStreamCloseWebsocketRequests UpgradeStatus UserNamespacesSupport VSphereControlPlaneMachineSet VSphereDriverConfiguration VSphereMultiVCenters VSphereStaticIPs ValidatingAdmissionPolicy VolumeGroupSnapshot]
I0216 13:35:54.617604 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-insights", Name:"insights-operator", UID:"78eb3982-abbb-45a4-8131-79b4a1a56fb6", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'FeatureGatesInitialized' FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallPowerVS", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "ExternalOIDC", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "MetricsServer", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NodeDisruptionPolicy", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "GCPClusterHostedDNS", "GatewayAPI", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImagesAWS", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "MultiArchInstallAzure", "NetworkSegmentation", "NewOLM", "NodeSwap", "OVNObservability", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PersistentIPsForVirtualization", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesSupport", "VSphereMultiVCenters", "VolumeGroupSnapshot"}}
I0216 13:35:54.619376 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
I0216 13:35:54.619386 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
I0216 13:35:54.619385 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I0216 13:35:54.619395 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0216 13:35:54.619396 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
I0216 13:35:54.619406 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I0216 13:35:54.619673 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/tmp/serving-cert-2685713810/tls.crt::/tmp/serving-cert-2685713810/tls.key"
I0216 13:35:54.619778 1 secure_serving.go:213] Serving securely on [::]:8443
I0216 13:35:54.619806 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
W0216 13:35:54.622531 1 configmapobserver.go:64] Cannot get the configuration config map: configmaps "insights-config" not found. Default configuration is used.
I0216 13:35:54.622551 1 secretconfigobserver.go:216] Legacy configuration set: enabled=false endpoint=https://console.redhat.com/api/ingress/v1/upload conditional_gatherer_endpoint=https://console.redhat.com/api/gathering/v2/%s/gathering_rules interval=2h0m0s token=false reportEndpoint=https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/reports initialPollingDelay=1m0s minRetryTime=30s pollingTimeout=50m0s processingStatusEndpoint=https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/request/%s/status
I0216 13:35:54.622603 1 base_controller.go:67] Waiting for caches to sync for ConfigController
I0216 13:35:54.627305 1 secretconfigobserver.go:249] Found cloud.openshift.com token
I0216 13:35:54.627329 1 secretconfigobserver.go:204] Legacy configuration updated: enabled=true endpoint=https://console.redhat.com/api/ingress/v1/upload conditional_gatherer_endpoint=https://console.redhat.com/api/gathering/v2/%s/gathering_rules interval=2h0m0s token=true reportEndpoint=https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/reports initialPollingDelay=1m0s minRetryTime=30s pollingTimeout=50m0s processingStatusEndpoint=https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/request/%s/status
I0216 13:35:54.631509 1 secretconfigobserver.go:119] support secret does not exist
I0216 13:35:54.636091 1 secretconfigobserver.go:249] Found cloud.openshift.com token
I0216 13:35:54.640179 1 secretconfigobserver.go:119] support secret does not exist
I0216 13:35:54.643458 1 recorder.go:161] Pruning old reports every 6h10m44s, max age is 288h0m0s
I0216 13:35:54.648563 1 periodic.go:214] Running clusterconfig gatherer
I0216 13:35:54.648566 1 controllerstatus.go:80] name=insightsuploader healthy=true reason= message=
I0216 13:35:54.648579 1 insightsuploader.go:86] Reporting status periodically to https://console.redhat.com/api/ingress/v1/upload every 2h0m0s, starting in 1m30s
I0216 13:35:54.648594 1 controllerstatus.go:80] name=insightsreport healthy=true reason= message=
I0216 13:35:54.648601 1 insightsreport.go:296] Starting report retriever
I0216 13:35:54.648604 1 insightsreport.go:298] Insights analysis reports will be downloaded from the https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/reports endpoint with a delay of 1m0s
I0216 13:35:54.648595 1 tasks_processing.go:45] number of workers: 64
I0216 13:35:54.648632 1 tasks_processing.go:69] worker 11 listening for tasks.
I0216 13:35:54.648644 1 tasks_processing.go:69] worker 63 listening for tasks.
I0216 13:35:54.648644 1 tasks_processing.go:71] worker 11 working on openstack_controlplanes task.
I0216 13:35:54.648649 1 tasks_processing.go:69] worker 12 listening for tasks.
I0216 13:35:54.648655 1 tasks_processing.go:69] worker 13 listening for tasks.
I0216 13:35:54.648658 1 tasks_processing.go:69] worker 14 listening for tasks.
I0216 13:35:54.648662 1 tasks_processing.go:69] worker 39 listening for tasks.
I0216 13:35:54.648666 1 tasks_processing.go:69] worker 40 listening for tasks.
I0216 13:35:54.648667 1 tasks_processing.go:69] worker 38 listening for tasks.
I0216 13:35:54.648671 1 tasks_processing.go:69] worker 41 listening for tasks.
I0216 13:35:54.648673 1 tasks_processing.go:69] worker 15 listening for tasks.
I0216 13:35:54.648675 1 tasks_processing.go:69] worker 53 listening for tasks.
I0216 13:35:54.648678 1 tasks_processing.go:69] worker 16 listening for tasks.
I0216 13:35:54.648679 1 tasks_processing.go:69] worker 54 listening for tasks.
I0216 13:35:54.648678 1 tasks_processing.go:69] worker 52 listening for tasks.
I0216 13:35:54.648696 1 tasks_processing.go:69] worker 17 listening for tasks.
I0216 13:35:54.648699 1 tasks_processing.go:69] worker 55 listening for tasks.
I0216 13:35:54.648700 1 tasks_processing.go:69] worker 42 listening for tasks.
I0216 13:35:54.648703 1 tasks_processing.go:69] worker 18 listening for tasks.
I0216 13:35:54.648705 1 tasks_processing.go:69] worker 43 listening for tasks.
I0216 13:35:54.648705 1 tasks_processing.go:69] worker 60 listening for tasks.
I0216 13:35:54.648708 1 tasks_processing.go:69] worker 19 listening for tasks.
I0216 13:35:54.648709 1 tasks_processing.go:69] worker 61 listening for tasks.
I0216 13:35:54.648711 1 tasks_processing.go:69] worker 20 listening for tasks.
I0216 13:35:54.648712 1 tasks_processing.go:69] worker 62 listening for tasks.
I0216 13:35:54.648710 1 tasks_processing.go:69] worker 59 listening for tasks.
I0216 13:35:54.648715 1 tasks_processing.go:69] worker 44 listening for tasks.
I0216 13:35:54.648718 1 tasks_processing.go:69] worker 29 listening for tasks.
I0216 13:35:54.648719 1 tasks_processing.go:69] worker 30 listening for tasks.
I0216 13:35:54.648719 1 tasks_processing.go:69] worker 56 listening for tasks.
I0216 13:35:54.648724 1 tasks_processing.go:69] worker 21 listening for tasks.
I0216 13:35:54.648724 1 tasks_processing.go:69] worker 46 listening for tasks.
I0216 13:35:54.648726 1 tasks_processing.go:69] worker 31 listening for tasks.
I0216 13:35:54.648720 1 tasks_processing.go:69] worker 45 listening for tasks.
I0216 13:35:54.648724 1 tasks_processing.go:69] worker 5 listening for tasks.
I0216 13:35:54.648732 1 tasks_processing.go:69] worker 27 listening for tasks.
I0216 13:35:54.648729 1 tasks_processing.go:69] worker 57 listening for tasks.
I0216 13:35:54.648737 1 tasks_processing.go:69] worker 28 listening for tasks.
I0216 13:35:54.648731 1 tasks_processing.go:69] worker 32 listening for tasks.
I0216 13:35:54.648736 1 tasks_processing.go:69] worker 58 listening for tasks.
I0216 13:35:54.648740 1 tasks_processing.go:69] worker 34 listening for tasks.
I0216 13:35:54.648735 1 tasks_processing.go:69] worker 47 listening for tasks.
I0216 13:35:54.648734 1 tasks_processing.go:69] worker 33 listening for tasks.
I0216 13:35:54.648745 1 tasks_processing.go:69] worker 1 listening for tasks.
I0216 13:35:54.648747 1 tasks_processing.go:69] worker 22 listening for tasks.
I0216 13:35:54.648749 1 tasks_processing.go:69] worker 37 listening for tasks.
I0216 13:35:54.648740 1 tasks_processing.go:69] worker 48 listening for tasks.
I0216 13:35:54.648752 1 tasks_processing.go:69] worker 2 listening for tasks.
I0216 13:35:54.648738 1 tasks_processing.go:69] worker 25 listening for tasks.
I0216 13:35:54.648754 1 tasks_processing.go:69] worker 35 listening for tasks.
I0216 13:35:54.648728 1 tasks_processing.go:69] worker 26 listening for tasks.
I0216 13:35:54.648753 1 tasks_processing.go:69] worker 50 listening for tasks.
I0216 13:35:54.648756 1 tasks_processing.go:69] worker 4 listening for tasks.
I0216 13:35:54.648757 1 tasks_processing.go:69] worker 51 listening for tasks.
I0216 13:35:54.648761 1 tasks_processing.go:69] worker 6 listening for tasks.
I0216 13:35:54.648758 1 tasks_processing.go:69] worker 36 listening for tasks.
I0216 13:35:54.648763 1 tasks_processing.go:69] worker 9 listening for tasks.
I0216 13:35:54.648764 1 tasks_processing.go:69] worker 10 listening for tasks.
I0216 13:35:54.648744 1 tasks_processing.go:69] worker 23 listening for tasks.
I0216 13:35:54.648740 1 tasks_processing.go:69] worker 0 listening for tasks.
I0216 13:35:54.648747 1 tasks_processing.go:69] worker 49 listening for tasks.
I0216 13:35:54.648747 1 tasks_processing.go:69] worker 24 listening for tasks.
I0216 13:35:54.648757 1 tasks_processing.go:69] worker 8 listening for tasks.
I0216 13:35:54.648759 1 tasks_processing.go:69] worker 7 listening for tasks.
I0216 13:35:54.648783 1 tasks_processing.go:69] worker 3 listening for tasks.
I0216 13:35:54.648827 1 tasks_processing.go:71] worker 17 working on schedulers task.
I0216 13:35:54.648831 1 tasks_processing.go:71] worker 42 working on ingress task.
I0216 13:35:54.648832 1 tasks_processing.go:71] worker 53 working on image task.
I0216 13:35:54.648835 1 tasks_processing.go:71] worker 3 working on support_secret task.
I0216 13:35:54.648834 1 tasks_processing.go:71] worker 38 working on machine_autoscalers task.
I0216 13:35:54.648846 1 tasks_processing.go:71] worker 18 working on storage_cluster task.
I0216 13:35:54.648848 1 tasks_processing.go:71] worker 60 working on dvo_metrics task.
I0216 13:35:54.648861 1 tasks_processing.go:71] worker 13 working on openshift_logging task.
I0216 13:35:54.648868 1 tasks_processing.go:71] worker 43 working on machine_sets task.
I0216 13:35:54.648827 1 tasks_processing.go:71] worker 20 working on ingress_certificates task.
I0216 13:35:54.648829 1 tasks_processing.go:71] worker 55 working on container_images task.
I0216 13:35:54.648881 1 tasks_processing.go:71] worker 39 working on machine_config_pools task.
I0216 13:35:54.648894 1 tasks_processing.go:71] worker 63 working on kube_controller_manager_logs task.
I0216 13:35:54.648910 1 tasks_processing.go:71] worker 1 working on openshift_apiserver_operator_logs task.
I0216 13:35:54.648928 1 tasks_processing.go:71] worker 41 working on number_of_pods_and_netnamespaces_with_sdn_annotations task.
I0216 13:35:54.648939 1 tasks_processing.go:71] worker 51 working on qemu_kubevirt_launcher_logs task.
I0216 13:35:54.648950 1 tasks_processing.go:71] worker 40 working on sap_pods task.
I0216 13:35:54.648977 1 tasks_processing.go:71] worker 12 working on nodenetworkstates task.
I0216 13:35:54.648991 1 tasks_processing.go:71] worker 61 working on mutating_webhook_configurations task.
I0216 13:35:54.648997 1 tasks_processing.go:71] worker 15 working on nodenetworkconfigurationpolicies task.
I0216 13:35:54.649004 1 tasks_processing.go:71] worker 14 working on aggregated_monitoring_cr_names task.
I0216 13:35:54.649022 1 tasks_processing.go:71] worker 19 working on install_plans task.
I0216 13:35:54.649068 1 tasks_processing.go:71] worker 6 working on olm_operators task.
I0216 13:35:54.649111 1 tasks_processing.go:71] worker 45 working on clusterroles task.
I0216 13:35:54.649158 1 tasks_processing.go:71] worker 16 working on networks task.
I0216 13:35:54.649168 1 tasks_processing.go:71] worker 25 working on container_runtime_configs task.
I0216 13:35:54.649172 1 tasks_processing.go:71] worker 54 working on storage_classes task.
I0216 13:35:54.649177 1 tasks_processing.go:71] worker 36 working on jaegers task.
I0216 13:35:54.649179 1 tasks_processing.go:71] worker 32 working on pod_network_connectivity_checks task.
I0216 13:35:54.649186 1 tasks_processing.go:71] worker 0 working on monitoring_persistent_volumes task.
I0216 13:35:54.649191 1 tasks_processing.go:71] worker 35 working on image_registries task.
I0216 13:35:54.649195 1 tasks_processing.go:71] worker 58 working on cost_management_metrics_configs task.
I0216 13:35:54.649210 1 tasks_processing.go:71] worker 52 working on sap_datahubs task.
I0216 13:35:54.649219 1 tasks_processing.go:71] worker 26 working on pdbs task.
I0216 13:35:54.649250 1 tasks_processing.go:71] worker 37 working on config_maps task.
I0216 13:35:54.649289 1 tasks_processing.go:71] worker 29 working on operators task.
I0216 13:35:54.649261 1 tasks_processing.go:71] worker 48 working on ceph_cluster task.
I0216 13:35:54.649264 1 tasks_processing.go:71] worker 2 working on authentication task.
I0216 13:35:54.649270 1 tasks_processing.go:71] worker 10 working on cluster_apiserver task.
I0216 13:35:54.649274 1 tasks_processing.go:71] worker 5 working on lokistack task.
I0216 13:35:54.649277 1 tasks_processing.go:71] worker 49 working on silenced_alerts task.
W0216 13:35:54.649594 1 gather_silenced_alerts.go:38] Unable to load alerts client, no alerts will be collected: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory
I0216 13:35:54.649603 1 tasks_processing.go:71] worker 49 working on sap_config task.
I0216 13:35:54.649276 1 tasks_processing.go:71] worker 9 working on tsdb_status task.
W0216 13:35:54.649620 1 gather_prometheus_tsdb_status.go:38] Unable to load metrics client, tsdb status cannot be collected: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory
I0216 13:35:54.649278 1 tasks_processing.go:71] worker 27 working on infrastructures task.
I0216 13:35:54.649632 1 gather.go:180] gatherer "clusterconfig" function "silenced_alerts" took 16.106µs to process 0 records
I0216 13:35:54.649641 1 gather.go:180] gatherer "clusterconfig" function "tsdb_status" took 17.771µs to process 0 records
I0216 13:35:54.649276 1 tasks_processing.go:71] worker 4 working on openstack_dataplanenodesets task.
I0216 13:35:54.649702 1 tasks_processing.go:71] worker 9 working on oauths task.
I0216 13:35:54.649281 1 tasks_processing.go:71] worker 30 working on metrics task.
W0216 13:35:54.649731 1 gather_most_recent_metrics.go:64] Unable to load metrics client, no metrics will be collected: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory
I0216 13:35:54.649735 1 tasks_processing.go:71] worker 30 working on feature_gates task.
I0216 13:35:54.649281 1 tasks_processing.go:71] worker 24 working on openshift_machine_api_events task.
I0216 13:35:54.649778 1 gather.go:180] gatherer "clusterconfig" function "metrics" took 9.965µs to process 0 records
I0216 13:35:54.649282 1 tasks_processing.go:71] worker 57 working on openshift_authentication_logs task.
I0216 13:35:54.649284 1 tasks_processing.go:71] worker 59 working on machines task.
I0216 13:35:54.649285 1 tasks_processing.go:71] worker 44 working on node_logs task.
I0216 13:35:54.649286 1 tasks_processing.go:71] worker 8 working on image_pruners task.
I0216 13:35:54.649286 1 tasks_processing.go:71] worker 28 working on active_alerts task.
W0216 13:35:54.650140 1 gather_active_alerts.go:54] Unable to load alerts client, no alerts will be collected: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory
I0216 13:35:54.650148 1 tasks_processing.go:71] worker 28 working on sap_license_management_logs task.
I0216 13:35:54.649288 1 tasks_processing.go:71] worker 62 working on version task.
I0216 13:35:54.650202 1 gather.go:180] gatherer "clusterconfig" function "active_alerts" took 18.011µs to process 0 records
I0216 13:35:54.649289 1 tasks_processing.go:71] worker 7 working on machine_healthchecks task.
I0216 13:35:54.649294 1 tasks_processing.go:71] worker 34 working on crds task.
I0216 13:35:54.649292 1 tasks_processing.go:71] worker 21 working on nodes task.
I0216 13:35:54.649293 1 tasks_processing.go:71] worker 56 working on service_accounts task.
I0216 13:35:54.649295 1 tasks_processing.go:71] worker 31 working on operators_pods_and_events task.
I0216 13:35:54.649298 1 tasks_processing.go:71] worker 33 working on scheduler_logs task.
I0216 13:35:54.649299 1 tasks_processing.go:71] worker 46 working on validating_webhook_configurations task.
I0216 13:35:54.649298 1 tasks_processing.go:71] worker 47 working on proxies task.
I0216 13:35:54.649300 1 tasks_processing.go:71] worker 23 working on openstack_version task.
I0216 13:35:54.649214 1 tasks_processing.go:71] worker 50 working on openstack_dataplanedeployments task.
I0216 13:35:54.649304 1 tasks_processing.go:71] worker 22 working on machine_configs task.
I0216 13:35:54.651947 1 tasks_processing.go:71] worker 11 working on certificate_signing_requests task.
I0216 13:35:54.651956 1 gather.go:180] gatherer "clusterconfig" function "openstack_controlplanes" took 3.293064ms to process 0 records
I0216 13:35:54.657717 1 tasks_processing.go:71] worker 13 working on overlapping_namespace_uids task.
I0216 13:35:54.657731 1 gather.go:180] gatherer "clusterconfig" function "openshift_logging" took 8.844153ms to process 0 records
I0216 13:35:54.657737 1 gather.go:180] gatherer "clusterconfig" function "machine_autoscalers" took 8.884243ms to process 0 records
I0216 13:35:54.657742 1 tasks_processing.go:74] worker 38 stopped.
I0216 13:35:54.657753 1 tasks_processing.go:74] worker 39 stopped.
I0216 13:35:54.657763 1 gather.go:180] gatherer "clusterconfig" function "machine_config_pools" took 8.864706ms to process 0 records
I0216 13:35:54.657768 1 gather.go:180] gatherer "clusterconfig" function "storage_cluster" took 8.900575ms to process 0 records
I0216 13:35:54.657771 1 tasks_processing.go:74] worker 18 stopped.
I0216 13:35:54.664503 1 tasks_processing.go:74] worker 12 stopped.
I0216 13:35:54.664514 1 gather.go:180] gatherer "clusterconfig" function "nodenetworkstates" took 15.516507ms to process 0 records
I0216 13:35:54.664522 1 gather.go:180] gatherer "clusterconfig" function "nodenetworkconfigurationpolicies" took 15.51542ms to process 0 records
I0216 13:35:54.664527 1 tasks_processing.go:74] worker 15 stopped.
I0216 13:35:54.664534 1 tasks_processing.go:74] worker 43 stopped.
I0216 13:35:54.664547 1 gather.go:180] gatherer "clusterconfig" function "machine_sets" took 15.65711ms to process 0 records
I0216 13:35:54.664850 1 controller.go:119] Initializing last reported time to 0001-01-01T00:00:00Z
I0216 13:35:54.664882 1 controller.go:203] Source periodic-clusterconfig *controllerstatus.Simple is not ready
I0216 13:35:54.664887 1 controller.go:203] Source periodic-conditional *controllerstatus.Simple is not ready
I0216 13:35:54.664891 1 controller.go:203] Source periodic-workloads *controllerstatus.Simple is not ready
I0216 13:35:54.664916 1 controller.go:457] The operator is still being initialized
I0216 13:35:54.664957 1 controller.go:482] The operator is healthy
I0216 13:35:54.664955 1 tasks_processing.go:74] worker 42 stopped.
I0216 13:35:54.665310 1 recorder.go:75] Recording config/ingress with fingerprint=f9c81d57c4a143f501a97c45f91b9c47e3a470eafbf5df7996af467564119c5c
I0216 13:35:54.665359 1 gather.go:180] gatherer "clusterconfig" function "ingress" took 16.1168ms to process 1 records
I0216 13:35:54.680043 1 tasks_processing.go:74] worker 40 stopped.
I0216 13:35:54.680046 1 gather_logs.go:145] no pods in openshift-kube-controller-manager namespace were found
I0216 13:35:54.680055 1 gather.go:180] gatherer "clusterconfig" function "sap_pods" took 31.085888ms to process 0 records
I0216 13:35:54.680060 1 gather.go:180] gatherer "clusterconfig" function "kube_controller_manager_logs" took 31.149575ms to process 0 records
I0216 13:35:54.680065 1 tasks_processing.go:74] worker 63 stopped.
I0216 13:35:54.680141 1 tasks_processing.go:74] worker 17 stopped.
I0216 13:35:54.680203 1 recorder.go:75] Recording config/schedulers/cluster with fingerprint=787de2abf32705ec41c9d60f0a8a4a09034979698109d6d273338acfc1b16104
I0216 13:35:54.680213 1 gather.go:180] gatherer "clusterconfig" function "schedulers" took 31.306033ms to process 1 records
I0216 13:35:54.680217 1 gather.go:180] gatherer "clusterconfig" function "container_runtime_configs" took 30.980384ms to process 0 records
E0216 13:35:54.680221 1 gather.go:143] gatherer "clusterconfig" function "pod_network_connectivity_checks" failed with the error: the server could not find the requested resource (get podnetworkconnectivitychecks.controlplane.operator.openshift.io)
I0216 13:35:54.680226 1 gather.go:180] gatherer "clusterconfig" function "pod_network_connectivity_checks" took 30.973417ms to process 0 records
E0216 13:35:54.680229 1 gather.go:143] gatherer "clusterconfig" function "support_secret" failed with the error: secrets "support" not found
I0216 13:35:54.680234 1 gather.go:180] gatherer "clusterconfig" function "support_secret" took 31.340762ms to process 0 records
I0216 13:35:54.680235 1 tasks_processing.go:74] worker 25 stopped.
I0216 13:35:54.680242 1 tasks_processing.go:74] worker 3 stopped.
I0216 13:35:54.680247 1 tasks_processing.go:74] worker 32 stopped.
I0216 13:35:54.680313 1 tasks_processing.go:74] worker 6 stopped.
I0216 13:35:54.680328 1 gather.go:180] gatherer "clusterconfig" function "olm_operators" took 31.231423ms to process 0 records
I0216 13:35:54.680440 1 tasks_processing.go:74] worker 58 stopped.
I0216 13:35:54.680451 1 gather.go:180] gatherer "clusterconfig" function "cost_management_metrics_configs" took 31.237741ms to process 0 records
I0216 13:35:54.680462 1 gather.go:180] gatherer "clusterconfig" function "jaegers" took 31.277941ms to process 0 records
I0216 13:35:54.680467 1 tasks_processing.go:74] worker 36 stopped.
I0216 13:35:54.680575 1 tasks_processing.go:74] worker 48 stopped.
I0216 13:35:54.680583 1 gather.go:180] gatherer "clusterconfig" function "ceph_cluster" took 31.170925ms to process 0 records
I0216 13:35:54.680736 1 tasks_processing.go:74] worker 5 stopped.
I0216 13:35:54.680743 1 gather.go:180] gatherer "clusterconfig" function "lokistack" took 31.155017ms to process 0 records
I0216 13:35:54.680747 1 gather.go:180] gatherer "clusterconfig" function "sap_config" took 31.132796ms to process 0 records
I0216 13:35:54.680750 1 gather.go:180] gatherer "clusterconfig" function "sap_datahubs" took 31.520828ms to process 0 records
I0216 13:35:54.680753 1 tasks_processing.go:74] worker 52 stopped.
I0216 13:35:54.680755 1 tasks_processing.go:74] worker 49 stopped.
I0216 13:35:54.680965 1 tasks_processing.go:74] worker 53 stopped.
I0216 13:35:54.681035 1 recorder.go:75] Recording config/image with fingerprint=26174f61b22c386d7cb99e9c5fa43313add41412a155548c2561370012e20f01
I0216 13:35:54.681046 1 gather.go:180] gatherer "clusterconfig" function "image" took 32.12021ms to process 1 records
I0216 13:35:54.681089 1 recorder.go:75] Recording config/storage/storageclasses/gp2-csi with fingerprint=dde4638a75801094e424c13aa4f09611faa11788b9d6a1d7e3e80f985d087830
I0216 13:35:54.681094 1 gather_logs.go:145] no pods in openshift-apiserver-operator namespace were found
I0216 13:35:54.681099 1 recorder.go:75] Recording config/storage/storageclasses/gp3-csi with fingerprint=c9c66bf73c907d7345758c2cbe28ae373f59582b3400174f3d645b68dacc9b75
I0216 13:35:54.681103 1 gather.go:180] gatherer "clusterconfig" function "storage_classes" took 31.791568ms to process 2 records
I0216 13:35:54.681154 1 tasks_processing.go:74] worker 54 stopped.
I0216 13:35:54.681173 1 recorder.go:75] Recording config/network with fingerprint=9537aa4d63c257b267295c91a5e1d84e463753205aef495e7ddc65911b76919a
I0216 13:35:54.681179 1 gather.go:180] gatherer "clusterconfig" function "networks" took 31.898388ms to process 1 records
I0216 13:35:54.681182 1 gather.go:180] gatherer "clusterconfig" function "openshift_apiserver_operator_logs" took 32.175608ms to process 0 records
I0216 13:35:54.681186 1 tasks_processing.go:74] worker 1 stopped.
I0216 13:35:54.681189 1 tasks_processing.go:74] worker 16 stopped.
I0216 13:35:54.681229 1 tasks_processing.go:74] worker 27 stopped.
I0216 13:35:54.681628 1 recorder.go:75] Recording config/infrastructure with fingerprint=66c582850fe9cd8155fb36ee2bf61bea92be62afba689258b54b9df74f00826f
I0216 13:35:54.681643 1 gather.go:180] gatherer "clusterconfig" function "infrastructures" took 31.597163ms to process 1 records
W0216 13:35:54.683451 1 gather_dvo_metrics.go:210] Failed to read the DVO metrics. Trying again.
I0216 13:35:54.687196 1 tasks_processing.go:74] worker 23 stopped.
I0216 13:35:54.687206 1 gather.go:180] gatherer "clusterconfig" function "openstack_version" took 36.28852ms to process 0 records
I0216 13:35:54.687217 1 tasks_processing.go:74] worker 50 stopped.
I0216 13:35:54.687227 1 gather.go:180] gatherer "clusterconfig" function "openstack_dataplanedeployments" took 36.240293ms to process 0 records
I0216 13:35:54.687240 1 gather.go:180] gatherer "clusterconfig" function "machine_configs" took 36.222449ms to process 0 records
I0216 13:35:54.687245 1 tasks_processing.go:74] worker 22 stopped.
I0216 13:35:54.687230 1 gather_logs.go:145] no pods in namespace were found
I0216 13:35:54.687251 1 tasks_processing.go:74] worker 51 stopped.
I0216 13:35:54.687256 1 gather.go:180] gatherer "clusterconfig" function "qemu_kubevirt_launcher_logs" took 38.301559ms to process 0 records
I0216 13:35:54.687421 1 tasks_processing.go:74] worker 26 stopped.
I0216 13:35:54.687509 1 recorder.go:75] Recording config/pdbs/openshift-image-registry/image-registry with fingerprint=a2f3bcfcd0f48d71a9ea86ef3a0ac82c8168d52f5eee5b7a01c4138d87e69cf2
I0216 13:35:54.687530 1 recorder.go:75] Recording config/pdbs/openshift-ingress/router-default with fingerprint=1c679d76b77ada6913b07502d8f31107e7751f22a12496fe0958e5434e708ac3
I0216 13:35:54.687553 1 recorder.go:75] Recording config/pdbs/openshift-operator-lifecycle-manager/packageserver-pdb with fingerprint=3768cf6f0ffc18bcbbb2276d1cc9982a091b444d68a8a4d4e50d6795fad80fa1
I0216 13:35:54.687563 1 gather.go:180] gatherer "clusterconfig" function "pdbs" took 38.190207ms to process 3 records
I0216 13:35:54.687569 1 gather.go:180] gatherer "clusterconfig" function "openstack_dataplanenodesets" took 37.839099ms to process 0 records
I0216 13:35:54.687574 1 gather.go:180] gatherer "clusterconfig" function "openshift_machine_api_events" took 37.768452ms to process 0 records
I0216 13:35:54.687580 1 tasks_processing.go:74] worker 24 stopped.
I0216 13:35:54.687584 1 tasks_processing.go:74] worker 4 stopped.
I0216 13:35:54.687605 1 gather_logs.go:145] no pods in openshift-authentication namespace were found
I0216 13:35:54.687616 1 tasks_processing.go:74] worker 57 stopped.
I0216 13:35:54.687623 1 gather.go:180] gatherer "clusterconfig" function "openshift_authentication_logs" took 37.764319ms to process 0 records
I0216 13:35:54.687635 1 tasks_processing.go:74] worker 61 stopped.
I0216 13:35:54.687798 1 recorder.go:75] Recording config/mutatingwebhookconfigurations/aws-pod-identity with fingerprint=a57da89e551d8a7aa40968237fd27cb0e95d9eb7897d01e2e217dfac154114dd
I0216 13:35:54.687808 1 gather_sap_vsystem_iptables_logs.go:60] SAP resources weren't found
I0216 13:35:54.687820 1 recorder.go:75] Recording config/mutatingwebhookconfigurations/sre-podimagespec-mutation with fingerprint=46d3f8980a6d1e190be2cb0f6cb6f2fb94b601fd5fe14befd1db10faa02e1738
I0216 13:35:54.687832 1 recorder.go:75] Recording config/mutatingwebhookconfigurations/sre-service-mutation with fingerprint=ddf0a9949ac24726a05b65987590e563692bfc15986a573a5e249d46165e4525
I0216 13:35:54.687840 1 gather.go:180] gatherer "clusterconfig" function "mutating_webhook_configurations" took 38.638499ms to process 3 records
E0216 13:35:54.687850 1 gather.go:143] gatherer "clusterconfig" function "machine_healthchecks" failed with the error: machinehealthchecks.machine.openshift.io is forbidden: User "system:serviceaccount:openshift-insights:gather" cannot list resource "machinehealthchecks" in API group "machine.openshift.io" at the cluster scope
I0216 13:35:54.687859 1 tasks_processing.go:74] worker 7 stopped.
I0216 13:35:54.687860 1 gather.go:180] gatherer "clusterconfig" function "machine_healthchecks" took 37.387058ms to process 0 records
E0216 13:35:54.687867 1 gather.go:143] gatherer "clusterconfig" function "machines" failed with the error: machines.machine.openshift.io is forbidden: User "system:serviceaccount:openshift-insights:gather" cannot list resource "machines" in API group "machine.openshift.io" at the cluster scope
I0216 13:35:54.687871 1 gather.go:180] gatherer "clusterconfig" function "machines" took 37.830429ms to process 0 records
I0216 13:35:54.687874 1 gather.go:180] gatherer "clusterconfig" function "monitoring_persistent_volumes" took 38.59611ms to process 0 records
I0216 13:35:54.687896 1 tasks_processing.go:74] worker 59 stopped.
I0216 13:35:54.687904 1 tasks_processing.go:74] worker 0 stopped.
I0216 13:35:54.687972 1 tasks_processing.go:74] worker 9 stopped.
I0216 13:35:54.688099 1 recorder.go:75] Recording config/oauth with fingerprint=ae3768bd04da52efcf5245406024e7f965a23a810275728625e800de5d89168c
I0216 13:35:54.688111 1 gather.go:180] gatherer "clusterconfig" function "oauths" took 38.083053ms to process 1 records
I0216 13:35:54.688120 1 gather.go:180] gatherer "clusterconfig" function "sap_license_management_logs" took 37.661941ms to process 0 records
I0216 13:35:54.688132 1 tasks_processing.go:74] worker 28 stopped.
I0216 13:35:54.688203 1 tasks_processing.go:74] worker 8 stopped.
I0216 13:35:54.688288 1 recorder.go:75] Recording config/clusteroperator/imageregistry.operator.openshift.io/imagepruner/cluster with fingerprint=c85991185df4d8d1021dcbe8f5fc3fcc552ad599fdc6d37842ec3cdef9cf8d57
I0216 13:35:54.688296 1 gather.go:180] gatherer "clusterconfig" function "image_pruners" took 37.852113ms to process 1 records
I0216 13:35:54.688375 1 tasks_processing.go:74] worker 2 stopped.
I0216 13:35:54.688428 1 recorder.go:75] Recording config/authentication with fingerprint=53322a220353acf6276839edbe1f05f9c8d635be4676931da0324be2a7347eec
I0216 13:35:54.688437 1 gather.go:180] gatherer "clusterconfig" function "authentication" took 38.576231ms to process 1 records
I0216 13:35:54.688471 1 recorder.go:75] Recording config/proxy with fingerprint=5e7e0518a0f6e6487db8586957bdf051d0545821234b519d85189866a9ab83d3
I0216 13:35:54.688478 1 gather.go:180] gatherer "clusterconfig" function "proxies" took 37.288456ms to process 1 records
I0216 13:35:54.688514 1 tasks_processing.go:74] worker 47 stopped.
I0216 13:35:54.688583 1 tasks_processing.go:74] worker 10 stopped.
I0216 13:35:54.688704 1 recorder.go:75] Recording config/apiserver with fingerprint=9cc8aefb7948459292bb76267e242985a4e541e077f50fe81ee11931ccff4b6a
I0216 13:35:54.688715 1 gather.go:180] gatherer "clusterconfig" function "cluster_apiserver" took 38.823928ms to process 1 records
I0216 13:35:54.690040 1 gather_logs.go:145] no pods in openshift-kube-scheduler namespace were found
I0216 13:35:54.690055 1 tasks_processing.go:74] worker 33 stopped.
I0216 13:35:54.690065 1 gather.go:180] gatherer "clusterconfig" function "scheduler_logs" took 39.345591ms to process 0 records
I0216 13:35:54.690173 1 tasks_processing.go:74] worker 44 stopped.
I0216 13:35:54.690188 1 gather.go:180] gatherer "clusterconfig" function "node_logs" took 40.187329ms to process 0 records
I0216 13:35:54.690311 1 tasks_processing.go:74] worker 46 stopped.
I0216 13:35:54.690433 1 recorder.go:75] Recording config/validatingwebhookconfigurations/multus.openshift.io with fingerprint=ab3fa8edc3c84cbeaf3b3ceb5819aee7f82718e4efb720750a63a4815f1681de
I0216 13:35:54.690493 1 recorder.go:75] Recording config/validatingwebhookconfigurations/network-node-identity.openshift.io with fingerprint=799343318696513636100f3df574c688f545ff08503c0847d0a84619401a22c0
I0216 13:35:54.690520 1 recorder.go:75] Recording config/validatingwebhookconfigurations/performance-addon-operator with fingerprint=899ac3609eef6823f0610c6379aff4c02766f92cace4721c7748a137f2fd2a58
I0216 13:35:54.690544 1 recorder.go:75] Recording config/validatingwebhookconfigurations/snapshot.storage.k8s.io with fingerprint=8cb713736086a0643272aad0c17c5fdde3da7ad207703ec6dcfd357a439abbaf
I0216 13:35:54.690607 1 recorder.go:75] Recording config/validatingwebhookconfigurations/sre-clusterrolebindings-validation with fingerprint=1a63b5b637baab5b27c5cc989c86f016ad32de82516cf752b363318c62d3d985
I0216 13:35:54.690648 1 recorder.go:75] Recording config/validatingwebhookconfigurations/sre-clusterroles-validation with fingerprint=e835ba11b9f10f88259a0c73f55849b2c62ef65724990003761169c83623757e
I0216 13:35:54.690674 1 recorder.go:75] Recording config/validatingwebhookconfigurations/sre-ingress-config-validation with fingerprint=762bfefd8c0bffab9d0c71e7f1c8e9879c6983951160830ac545c13527261330
I0216 13:35:54.690718 1 recorder.go:75] Recording config/validatingwebhookconfigurations/sre-network-operator-validation with fingerprint=3e5c01da7c4e3af36a308f40441d1c83fbdd64dddaa13d38c6bfbfe9cde16a79
I0216 13:35:54.690751 1 recorder.go:75] Recording config/validatingwebhookconfigurations/sre-regular-user-validation with fingerprint=626014a32b12cfce91d4adc7e13f08322ccb3d0bd2769124b66224f9043f2dfb
I0216 13:35:54.690786 1 recorder.go:75] Recording config/validatingwebhookconfigurations/sre-scc-validation with fingerprint=fa4d35e9bcaf96fbaa6a3a92ea4344810bead733e3de15ed7e1d35bb11686dcf
I0216 13:35:54.690825 1 recorder.go:75] Recording config/validatingwebhookconfigurations/sre-serviceaccount-validation with fingerprint=58063f4a48f097378d2eac67b1372430ef0d556b1f5f93dfffde119f781edab2
I0216 13:35:54.690869 1 recorder.go:75] Recording config/validatingwebhookconfigurations/sre-techpreviewnoupgrade-validation with fingerprint=2bc8f1c79bbcbb39b5e0296ff46e02ee20105709399ef05597dacbe4a14e9062
I0216 13:35:54.690890 1 gather.go:180] gatherer "clusterconfig" function "validating_webhook_configurations" took 39.538097ms to process 12 records
I0216 13:35:54.691068 1 tasks_processing.go:74] worker 30 stopped.
I0216 13:35:54.691397 1 recorder.go:75] Recording config/featuregate with fingerprint=e2d51257f7ef2ec59afd0a5b81bf70290cea5ae1340fc9be79fbf0223ab89f87
I0216 13:35:54.691413 1 gather.go:180] gatherer "clusterconfig" function "feature_gates" took 40.576243ms to process 1 records
I0216 13:35:54.691457 1 tasks_processing.go:74] worker 35 stopped.
I0216 13:35:54.692612 1 recorder.go:75] Recording config/clusteroperator/imageregistry.operator.openshift.io/config/cluster with fingerprint=ec1d86ce5ac8984ac69ce4839dfdb9b36cbd2b168d8d418338713f9b0fa58229
I0216 13:35:54.692634 1 gather.go:180] gatherer "clusterconfig" function "image_registries" took 42.050382ms to process 1 records
I0216 13:35:54.692644 1 gather.go:180] gatherer "clusterconfig" function "aggregated_monitoring_cr_names" took 43.528697ms to process 0 records
I0216 13:35:54.692652 1 tasks_processing.go:74] worker 14 stopped.
I0216 13:35:54.692740 1 tasks_processing.go:74] worker 21 stopped.
I0216 13:35:54.693047 1 recorder.go:75] Recording config/node/ip-10-0-132-39.ec2.internal with fingerprint=b5c6c79b359d49b095ac0ef44f6915681e9388174c5fed2c8623f6d6fc58b332
I0216 13:35:54.693091 1 recorder.go:75] Recording config/node/ip-10-0-149-57.ec2.internal with fingerprint=2834afcff15e7f8c26b3600e026ed5fe9e1d53f9587f4cf2df966b2b1fb15c3a
I0216 13:35:54.693126 1 recorder.go:75] Recording config/node/ip-10-0-163-180.ec2.internal with fingerprint=d0df7ce8d549923391db0f66ad59b1bc1039779a0fadb58882a18afe1a364490
I0216 13:35:54.693133 1 gather.go:180] gatherer "clusterconfig" function "nodes" took 42.300614ms to process 3 records
I0216 13:35:54.693182 1 tasks_processing.go:74] worker 13 stopped.
I0216 13:35:54.693203 1 recorder.go:75] Recording config/namespaces_with_overlapping_uids with fingerprint=4f53cda18c2baa0c0354bb5f9a3ecbe5ed12ab4d8e11ba873c2f11161202b945
I0216 13:35:54.693212 1 gather.go:180] gatherer "clusterconfig" function "overlapping_namespace_uids" took 35.44919ms to process 1 records
I0216 13:35:54.693479 1 tasks_processing.go:74] worker 11 stopped.
I0216 13:35:54.693498 1 gather.go:180] gatherer "clusterconfig" function "certificate_signing_requests" took 41.514865ms to process 0 records
I0216 13:35:54.701202 1 tasks_processing.go:74] worker 55 stopped.
I0216 13:35:54.702292 1 sca.go:98] Pulling SCA certificates from https://api.openshift.com/api/accounts_mgmt/v1/certificates. Next check is in 8h0m0s
I0216 13:35:54.702372 1 cluster_transfer.go:78] checking the availability of cluster transfer. Next check is in 12h0m0s
I0216 13:35:54.702397 1 base_controller.go:67] Waiting for caches to sync for LoggingSyncer
W0216 13:35:54.702378 1 operator.go:286] started
I0216 13:35:54.702448 1 recorder.go:75] Recording config/pod/openshift-ovn-kubernetes/ovnkube-node-g6qzc with fingerprint=a1f7dd4027208c98569d6c1d268644b0303aea576bdeb1178c56590d827542a8
I0216 13:35:54.702562 1 recorder.go:75] Recording config/pod/openshift-ovn-kubernetes/ovnkube-node-m6bt6 with fingerprint=c74ab2fc86774c606fdf8f674348fe68fd55ab49a978041318294570d0b9b39d
I0216 13:35:54.702614 1 recorder.go:75] Recording config/running_containers with fingerprint=2fb69109c3302cbc18426c17d7f981991790800e3290aac20d0b47dd12d111e5
I0216 13:35:54.702622 1 gather.go:180] gatherer "clusterconfig" function "container_images" took 52.313038ms to process 3 records
I0216 13:35:54.702992 1 tasks_processing.go:74] worker 41 stopped.
I0216 13:35:54.703005 1 gather.go:180] gatherer "clusterconfig" function "number_of_pods_and_netnamespaces_with_sdn_annotations" took 54.045301ms to process 0 records
I0216 13:35:54.703801 1 tasks_processing.go:74] worker 45 stopped.
I0216 13:35:54.704003 1 recorder.go:75] Recording cluster-scoped-resources/rbac.authorization.k8s.io/clusterroles/admin with fingerprint=a04591a700c85c45ccaf7ec9c5cdd3a693f9983af598de0ed53f202daa4e45bb
I0216 13:35:54.704088 1 recorder.go:75] Recording cluster-scoped-resources/rbac.authorization.k8s.io/clusterroles/edit with fingerprint=0fdbc806eaa1c6ed45ada9200b26c7f531ebd152c50d6515ec3c786a4af1ffbc
I0216 13:35:54.704099 1 gather.go:180] gatherer "clusterconfig" function "clusterroles" took 54.67458ms to process 2 records
I0216 13:35:54.711312 1 tasks_processing.go:74] worker 62 stopped.
I0216 13:35:54.711399 1 controller.go:203] Source scaController *sca.Controller is not ready
I0216 13:35:54.711408 1 controller.go:203] Source clusterTransferController *clustertransfer.Controller is not ready
I0216 13:35:54.711411 1 controller.go:203] Source periodic-clusterconfig *controllerstatus.Simple is not ready
I0216 13:35:54.711414 1 controller.go:203] Source periodic-conditional *controllerstatus.Simple is not ready
I0216 13:35:54.711416 1 controller.go:203] Source periodic-workloads *controllerstatus.Simple is not ready
I0216 13:35:54.711429 1 controller.go:457] The operator is still being initialized
I0216 13:35:54.711435 1 controller.go:482] The operator is healthy
I0216 13:35:54.711466 1 recorder.go:75] Recording config/version with fingerprint=7afd514c66bfbb6bec552ce9e4be62b5981cc164d5c51e60b313b608d9ed3fb4
I0216 13:35:54.711476 1 recorder.go:75] Recording config/id with fingerprint=87ae7d18bbcf413768f8444f05600560f255fe2768f627bee0f8a5fe9b8b58d1
I0216 13:35:54.711481 1 gather.go:180] gatherer "clusterconfig" function "version" took 61.102767ms to process 2 records
I0216 13:35:54.712079 1 tasks_processing.go:74] worker 34 stopped.
I0216 13:35:54.712452 1 recorder.go:75] Recording config/crd/volumesnapshots.snapshot.storage.k8s.io with fingerprint=65f301761445431f73cca4715db5899dc898efe74b0d9ffe4efcaa1a5a0fadd8
I0216 13:35:54.712607 1 recorder.go:75] Recording config/crd/volumesnapshotcontents.snapshot.storage.k8s.io with fingerprint=f3ec3f77931123d7d02cd6c260dee22a3df5cdce253779765802e3d425abbc26
I0216 13:35:54.712616 1 gather.go:180] gatherer "clusterconfig" function "crds" took 61.668259ms to process 2 records
E0216 13:35:54.717591 1 cluster_transfer.go:90] failed to pull cluster transfer: unable to retrieve cluster transfer data from https://api.openshift.com/api/accounts_mgmt/v1/cluster_transfers/: Get "https://api.openshift.com/api/accounts_mgmt/v1/cluster_transfers/?search=cluster_uuid+is+%!(NOVERB)
I0216 13:35:54.717603 1 controllerstatus.go:80] name=clusterTransferController healthy=true reason=Disconnected message=failed to pull cluster transfer: unable to retrieve cluster transfer data from https://api.openshift.com/api/accounts_mgmt/v1/cluster_transfers/: Get "https://api.openshift.com/api/accounts_mgmt/v1/cluster_transfers/?search=cluster_uuid+is+%2721881147-5d06-4a47-ac86-3c8a1874470b%27+and+status+is+%27accepted%27": dial tcp: lookup api.openshift.com on 172.30.0.10:53: read udp 10.129.0.9:40895->172.30.0.10:53: read: connection refused
I0216 13:35:54.717915 1 requests.go:204] Asking for SCA certificate for x86_64 architecture
I0216 13:35:54.719417 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0216 13:35:54.719419 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
I0216 13:35:54.719439 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
W0216 13:35:54.720465 1 sca.go:117] Failed to pull SCA certs from https://api.openshift.com/api/accounts_mgmt/v1/certificates: unable to retrieve SCA certs data from https://api.openshift.com/api/accounts_mgmt/v1/certificates: Post "https://api.openshift.com/api/accounts_mgmt/v1/certificates": dial tcp: lookup api.openshift.com on 172.30.0.10:53: read udp 10.129.0.9:55255->172.30.0.10:53: read: connection refused
I0216 13:35:54.720480 1 controllerstatus.go:80] name=scaController healthy=true reason=NonHTTPError message=Failed to pull SCA certs from https://api.openshift.com/api/accounts_mgmt/v1/certificates: unable to retrieve SCA certs data from https://api.openshift.com/api/accounts_mgmt/v1/certificates: Post "https://api.openshift.com/api/accounts_mgmt/v1/certificates": dial tcp: lookup api.openshift.com on 172.30.0.10:53: read udp 10.129.0.9:55255->172.30.0.10:53: read: connection refused
I0216 13:35:54.720832 1 prometheus_rules.go:88] Prometheus rules successfully created
I0216 13:35:54.722914 1 base_controller.go:73] Caches are synced for ConfigController
I0216 13:35:54.722926 1 base_controller.go:110] Starting #1 worker of ConfigController controller ...
I0216 13:35:54.727794 1 tasks_processing.go:74] worker 20 stopped.
E0216 13:35:54.727805 1 gather.go:143] gatherer "clusterconfig" function "ingress_certificates" failed with the error: failed to get secret 'router-certs-default' in namespace 'openshift-ingress': secrets "router-certs-default" not found
E0216 13:35:54.727809 1 gather.go:143] gatherer "clusterconfig" function "ingress_certificates" failed with the error: failed to get secret '2ogcmf76erq2mct18ibrqt3r3gt3kn7c-primary-cert-bundle-secret' in namespace 'openshift-ingress-operator': secrets "2ogcmf76erq2mct18ibrqt3r3gt3kn7c-primary-cert-bundle-secret" not found
I0216 13:35:54.727864 1 recorder.go:75] Recording aggregated/ingress_controllers_certs with fingerprint=5cbf0c233f8c0ac68abce9fb1d79ea0f38aa40f1fe709475c58100c93afd4b62
I0216 13:35:54.727874 1 gather.go:180] gatherer "clusterconfig" function "ingress_certificates" took 78.903318ms to process 1 records
I0216 13:35:54.731697 1 configmapobserver.go:84] configmaps "insights-config" not found
I0216 13:35:54.732871 1 tasks_processing.go:74] worker 37 stopped.
E0216 13:35:54.732885 1 gather.go:143] gatherer "clusterconfig" function "config_maps" failed with the error: configmaps "cluster-monitoring-config" not found
E0216 13:35:54.732891 1 gather.go:143] gatherer "clusterconfig" function "config_maps" failed with the error: configmaps "gateway-mode-config" not found
E0216 13:35:54.732896 1 gather.go:143] gatherer "clusterconfig" function "config_maps" failed with the error: configmaps "insights-config" not found
I0216 13:35:54.732914 1 recorder.go:75] Recording config/configmaps/openshift-config/kube-root-ca.crt/ca.crt with fingerprint=d476c7d3f5b104863f08f481b1264dcc68cc272ecefb0ecb709b18a6afab034d
I0216 13:35:54.732924 1 recorder.go:75] Recording config/configmaps/openshift-config/openshift-install/invoker with fingerprint=76b482f683cd3ef9da02debac5b26080a5aeb06ff768ee5c21117514dff29d8a
I0216 13:35:54.732927 1 recorder.go:75] Recording config/configmaps/openshift-config/openshift-install/version with fingerprint=0bddb88b072029f25dde6f44cb877a44fb2f65ed4864939fbf7a3e42c0a485f6
I0216 13:35:54.732930 1 recorder.go:75] Recording config/configmaps/openshift-config/openshift-service-ca.crt/service-ca.crt with fingerprint=e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
I0216 13:35:54.732946 1 recorder.go:75] Recording config/configmaps/openshift-config/rosa-brand-logo/rosa-brand-logo.svg with fingerprint=6ed8ca4dd7a8eee7249182bc006e9649ce84d76c551ddfaaa33e55d8c4cc1ed0
I0216 13:35:54.732956 1 recorder.go:75] Recording config/configmaps/kube-system/cluster-config-v1/install-config with fingerprint=ab3811c6b83fd7b8e920094cfa3080d1b4ee3c35ec4c8379437b21d27bd6608d
I0216 13:35:54.732961 1 gather.go:180] gatherer "clusterconfig" function "config_maps" took 83.567879ms to process 6 records
I0216 13:35:54.802717 1 base_controller.go:73] Caches are synced for LoggingSyncer
I0216 13:35:54.802731 1 base_controller.go:110] Starting #1 worker of LoggingSyncer controller ...
I0216 13:35:55.108591 1 gather_cluster_operator_pods_and_events.go:119] Found 18 pods with 21 containers
I0216 13:35:55.108604 1 gather_cluster_operator_pods_and_events.go:233] Maximum buffer size: 1198372 bytes
I0216 13:35:55.109046 1 gather_cluster_operator_pods_and_events.go:363] Fetching logs for dns container dns-default-6g8qc pod in namespace openshift-dns (previous: false).
I0216 13:35:55.331131 1 gather_cluster_operator_pods_and_events.go:406] Failed to fetch log for dns-default-6g8qc pod in namespace openshift-dns for failing operator dns (previous: false): "container \"dns\" in pod \"dns-default-6g8qc\" is waiting to start: ContainerCreating"
I0216 13:35:55.331149 1 gather_cluster_operator_pods_and_events.go:278] Error: "container \"dns\" in pod \"dns-default-6g8qc\" is waiting to start: ContainerCreating"
I0216 13:35:55.331156 1 gather_cluster_operator_pods_and_events.go:363] Fetching logs for kube-rbac-proxy container dns-default-6g8qc pod in namespace openshift-dns (previous: false).
I0216 13:35:55.512993 1 gather_cluster_operator_pods_and_events.go:406] Failed to fetch log for dns-default-6g8qc pod in namespace openshift-dns for failing operator kube-rbac-proxy (previous: false): "container \"kube-rbac-proxy\" in pod \"dns-default-6g8qc\" is waiting to start: ContainerCreating"
I0216 13:35:55.513010 1 gather_cluster_operator_pods_and_events.go:278] Error: "container \"kube-rbac-proxy\" in pod \"dns-default-6g8qc\" is waiting to start: ContainerCreating"
I0216 13:35:55.513020 1 gather_cluster_operator_pods_and_events.go:363] Fetching logs for dns container dns-default-qlw56 pod in namespace openshift-dns (previous: false).
W0216 13:35:55.683013 1 gather_dvo_metrics.go:210] Failed to read the DVO metrics. Trying again.
I0216 13:35:55.739311 1 gather_cluster_operator_pods_and_events.go:406] Failed to fetch log for dns-default-qlw56 pod in namespace openshift-dns for failing operator dns (previous: false): "container \"dns\" in pod \"dns-default-qlw56\" is waiting to start: ContainerCreating"
I0216 13:35:55.739328 1 gather_cluster_operator_pods_and_events.go:278] Error: "container \"dns\" in pod \"dns-default-qlw56\" is waiting to start: ContainerCreating"
I0216 13:35:55.739334 1 gather_cluster_operator_pods_and_events.go:363] Fetching logs for kube-rbac-proxy container dns-default-qlw56 pod in namespace openshift-dns (previous: false).
I0216 13:35:55.916357 1 gather_cluster_operator_pods_and_events.go:406] Failed to fetch log for dns-default-qlw56 pod in namespace openshift-dns for failing operator kube-rbac-proxy (previous: false): "container \"kube-rbac-proxy\" in pod \"dns-default-qlw56\" is waiting to start: ContainerCreating"
I0216 13:35:55.916373 1 gather_cluster_operator_pods_and_events.go:278] Error: "container \"kube-rbac-proxy\" in pod \"dns-default-qlw56\" is waiting to start: ContainerCreating"
I0216 13:35:55.916394 1 gather_cluster_operator_pods_and_events.go:363] Fetching logs for dns container dns-default-s8m7m pod in namespace openshift-dns (previous: false).
I0216 13:35:55.921465 1 gather_cluster_operators.go:184] Unable to get configs.samples.operator.openshift.io resource due to: configs.samples.operator.openshift.io "cluster" not found
I0216 13:35:56.132078 1 gather_cluster_operator_pods_and_events.go:406] Failed to fetch log for dns-default-s8m7m pod in namespace openshift-dns for failing operator dns (previous: false): "container \"dns\" in pod \"dns-default-s8m7m\" is waiting to start: ContainerCreating"
I0216 13:35:56.132093 1 gather_cluster_operator_pods_and_events.go:278] Error: "container \"dns\" in pod \"dns-default-s8m7m\" is waiting to start: ContainerCreating"
I0216 13:35:56.132100 1 gather_cluster_operator_pods_and_events.go:363] Fetching logs for kube-rbac-proxy container dns-default-s8m7m pod in namespace openshift-dns (previous: false).
I0216 13:35:56.313296 1 gather_cluster_operator_pods_and_events.go:406] Failed to fetch log for dns-default-s8m7m pod in namespace openshift-dns for failing operator kube-rbac-proxy (previous: false): "container \"kube-rbac-proxy\" in pod \"dns-default-s8m7m\" is waiting to start: ContainerCreating"
I0216 13:35:56.313312 1 gather_cluster_operator_pods_and_events.go:278] Error: "container \"kube-rbac-proxy\" in pod \"dns-default-s8m7m\" is waiting to start: ContainerCreating"
I0216 13:35:56.313320 1 gather_cluster_operator_pods_and_events.go:363] Fetching logs for dns-node-resolver container node-resolver-5m624 pod in namespace openshift-dns (previous: false).
I0216 13:35:56.320930 1 tasks_processing.go:74] worker 29 stopped.
I0216 13:35:56.320977 1 recorder.go:75] Recording config/clusteroperator/console with fingerprint=e13cb0ffcb88106169d958b85a41a47fd5d9e171f9b834f964d5687276c52ba1
I0216 13:35:56.321011 1 recorder.go:75] Recording config/clusteroperator/csi-snapshot-controller with fingerprint=673d6c6809d4938e50116b4fb1c29c7c0aaf6e67d872039f566bbe065d51c382
I0216 13:35:56.321048 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/csisnapshotcontroller/cluster with fingerprint=5adc514f4b63e2f1ecc68bf6f9c0af70c5eea04522a49524e102721b1c41f80e
I0216 13:35:56.321071 1 recorder.go:75] Recording config/clusteroperator/dns with fingerprint=fd1f97fd06c450cc1f882121650ee94da0527232707b6fba83b2ec6dcf53fa20
I0216 13:35:56.321089 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/dns/default with fingerprint=9e7b4ce029030d3d8c3b49af92c556acdcc415000b40d3f969dbdc42c432b47f
I0216 13:35:56.321106 1 recorder.go:75] Recording config/clusteroperator/image-registry with fingerprint=ba2ce0dc91680083b1999a96e85a2e59540eef39d578a5d66fefb3a319778d0e
I0216 13:35:56.321126 1 recorder.go:75] Recording config/clusteroperator/ingress with fingerprint=d8a0b5fe8b0b1f0086b9974290ac79b15d04d07613781f5483ec7c70d151d813
I0216 13:35:56.321144 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/ingresscontroller/openshift-ingress-operator/default with fingerprint=52490e684bba0bb93be1aae136f726018c48bbda1a3e0e9319f378cf9e097a24
I0216 13:35:56.321156 1 recorder.go:75] Recording config/clusteroperator/insights with fingerprint=6eca55b4e7a7fe670b6fcdd1af0992aa316a521911dc990ee1621019b76f1b4e
I0216 13:35:56.321171 1 recorder.go:75] Recording config/clusteroperator/kube-apiserver with fingerprint=6752b15e0be1e75cd7d2bff059f998bb054b8a56308f142011212047588becc2
I0216 13:35:56.321179 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/kubeapiserver/cluster with fingerprint=51503bf0b784fcf65ea46bcaf1f72ac1a5c4d5dc211934f18f27871efed05762
I0216 13:35:56.321189 1 recorder.go:75] Recording config/clusteroperator/kube-controller-manager with fingerprint=8404a4252572ad0e51cc8de7f61f68f53eef236b462414d9780c9fab3742193c
I0216 13:35:56.321198 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/kubecontrollermanager/cluster with fingerprint=ce90c0d4f367d7da085074268031798382ae7c54fdcb0a21f15a4818fe308c11
I0216 13:35:56.321209 1 recorder.go:75] Recording config/clusteroperator/kube-scheduler with fingerprint=70d654b0045d6b82849eaf8c2ab1823a16185865b4def1363d88a8eaafd9b861
I0216 13:35:56.321216 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/kubescheduler/cluster with fingerprint=f2940fb9fd20c19951dfc295eb363b7fba0c505f5ae61f01967a063099e6b60a
I0216 13:35:56.321228 1 recorder.go:75] Recording config/clusteroperator/kube-storage-version-migrator with fingerprint=b2d3bf06d29cb949c30ca6709e36096f4fb18a2e185d7750ddc0b12833478564
I0216 13:35:56.321243 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/kubestorageversionmigrator/cluster with fingerprint=9351181aa7e6ada41ef581ab31e13516c6b934cc95710154bafb2eb222cb58db
I0216 13:35:56.321258 1 recorder.go:75] Recording config/clusteroperator/monitoring with fingerprint=4aee9dd890c54865045fe1090aaa55c5740a635ec41a41f8f163154f3105080a
I0216 13:35:56.321323 1 recorder.go:75] Recording config/clusteroperator/network with fingerprint=06b4871ed99f5012e3e632c9e3800468e510d0279f14a518744a70d5469ff69b
I0216 13:35:56.321333 1 recorder.go:75] Recording config/clusteroperator/network.operator.openshift.io/operatorpki/openshift-ovn-kubernetes/ovn with fingerprint=626a89d20e0deaed5b6dfb533acfe65f4bb1618bd200a703b62e60c5d16d94ab
I0216 13:35:56.321339 1 recorder.go:75] Recording config/clusteroperator/network.operator.openshift.io/operatorpki/openshift-ovn-kubernetes/signer with fingerprint=90410b16914712b85b3c4578716ad8c0ae072e688f4cd1e022bf76f20da3506d
I0216 13:35:56.321358 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/network/cluster with fingerprint=a93d15eaecb455a0e40ecb2826eeecc1533899204ddd3c3921d15ab70af7ae75
I0216 13:35:56.321373 1 recorder.go:75] Recording config/clusteroperator/node-tuning with fingerprint=356fcbc46954c44ca039d7f1cb3b6b765300336fcbc7ab01b27e41e2d53c380b
I0216 13:35:56.321390 1 recorder.go:75] Recording config/clusteroperator/openshift-apiserver with fingerprint=45558f8b89911254aece577dc23913300128e4e9e7f39ea5ec89815f92ff2175
I0216 13:35:56.321398 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/openshiftapiserver/cluster with fingerprint=e712e6cf27339b441e4ed1f4cde91dbde7e952698ba93407e4457db63a4a4c76
I0216 13:35:56.321409 1 recorder.go:75] Recording config/clusteroperator/openshift-controller-manager with fingerprint=9b2f37dc72912670b27bf807feabadd421e2dcd1f349a3e340933d0a191361d8
I0216 13:35:56.321417 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/openshiftcontrollermanager/cluster with fingerprint=d71a0f4672f9b45d9fc8293bf1687afc650fd28d32e2e30de27523fe7b4eadf7
I0216 13:35:56.321427 1 recorder.go:75] Recording config/clusteroperator/openshift-samples with fingerprint=4ebe8ef4a4f87b8b1d7f95ca96b9952ceaac64aff19a7a6f672ac23a866fd44b
I0216 13:35:56.321438 1 recorder.go:75] Recording config/clusteroperator/operator-lifecycle-manager with fingerprint=71bf3c7b551218c69fb7b8e9ab9ccb48f9f44086371d6e06238a875e1ad7e5ba
I0216 13:35:56.321446 1 recorder.go:75] Recording config/clusteroperator/operator-lifecycle-manager-catalog with fingerprint=a517e85bd2ccc94d27b0a652217ff49a8e3437b6b3876eda9020d6f734e9f9cb
I0216 13:35:56.321458 1 recorder.go:75] Recording config/clusteroperator/operator-lifecycle-manager-packageserver with fingerprint=e1d6ba5e5f624e26d0899fa4af7514726744a3209ef88fb917bf76969dfa66bc
I0216 13:35:56.321469 1 recorder.go:75] Recording config/clusteroperator/service-ca with fingerprint=9ed70521435753cfce20f7a00bde747757eabfd38eae936ff8fc76894eb73959
I0216 13:35:56.321485 1 recorder.go:75] Recording config/clusteroperator/storage with fingerprint=6615f753a35d549102da6973460f901c31b910ce9b2166e736ce10cee4aa1e42
I0216 13:35:56.321499 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/clustercsidriver/ebs.csi.aws.com with fingerprint=7e1ab8f8cfcd9d249b5b213939fe5144bb83db3725475461728bea44a002c3be
I0216 13:35:56.321506 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/storage/cluster with fingerprint=8e480f8c1ce1b39baac42d8ec780c57c2592929ae0c801b61ffad49ba13f33ad
I0216 13:35:56.321512 1 gather.go:180] gatherer "clusterconfig" function "operators" took 1.671592541s to process 35 records
I0216 13:35:56.513803 1 gather_cluster_operator_pods_and_events.go:278] Error: "log buffer is empty"
I0216 13:35:56.513822 1 gather_cluster_operator_pods_and_events.go:363] Fetching logs for dns-node-resolver container node-resolver-f92sp pod in namespace openshift-dns (previous: false).
W0216 13:35:56.683045 1 gather_dvo_metrics.go:210] Failed to read the DVO metrics. Trying again.
I0216 13:35:56.714045 1 gather_cluster_operator_pods_and_events.go:278] Error: "log buffer is empty"
I0216 13:35:56.714062 1 gather_cluster_operator_pods_and_events.go:363] Fetching logs for dns-node-resolver container node-resolver-pg582 pod in namespace openshift-dns (previous: false).
I0216 13:35:56.912639 1 gather_cluster_operator_pods_and_events.go:278] Error: "log buffer is empty"
I0216 13:35:56.912705 1 gather_cluster_operator_pods_and_events.go:363] Fetching logs for registry container image-registry-6c784586d7-hmcck pod in namespace openshift-image-registry (previous: false).
I0216 13:35:57.113795 1 gather_cluster_operator_pods_and_events.go:406] Failed to fetch log for image-registry-6c784586d7-hmcck pod in namespace openshift-image-registry for failing operator registry (previous: false): "container \"registry\" in pod \"image-registry-6c784586d7-hmcck\" is waiting to start: ContainerCreating"
I0216 13:35:57.113813 1 gather_cluster_operator_pods_and_events.go:278] Error: "container \"registry\" in pod \"image-registry-6c784586d7-hmcck\" is waiting to start: ContainerCreating"
I0216 13:35:57.113853 1 gather_cluster_operator_pods_and_events.go:363] Fetching logs for registry container image-registry-6dccd56644-5shk4 pod in namespace openshift-image-registry (previous: false).
I0216 13:35:57.312359 1 gather_cluster_operator_pods_and_events.go:406] Failed to fetch log for image-registry-6dccd56644-5shk4 pod in namespace openshift-image-registry for failing operator registry (previous: false): "container \"registry\" in pod \"image-registry-6dccd56644-5shk4\" is waiting to start: ContainerCreating"
I0216 13:35:57.312374 1 gather_cluster_operator_pods_and_events.go:278] Error: "container \"registry\" in pod \"image-registry-6dccd56644-5shk4\" is waiting to start: ContainerCreating"
I0216 13:35:57.312412 1 gather_cluster_operator_pods_and_events.go:363] Fetching logs for registry container image-registry-6dccd56644-nlbjr pod in namespace openshift-image-registry (previous: false).
I0216 13:35:57.515381 1 gather_cluster_operator_pods_and_events.go:406] Failed to fetch log for image-registry-6dccd56644-nlbjr pod in namespace openshift-image-registry for failing operator registry (previous: false): "container \"registry\" in pod \"image-registry-6dccd56644-nlbjr\" is waiting to start: ContainerCreating"
I0216 13:35:57.515398 1 gather_cluster_operator_pods_and_events.go:278] Error: "container \"registry\" in pod \"image-registry-6dccd56644-nlbjr\" is waiting to start: ContainerCreating"
I0216 13:35:57.515408 1 gather_cluster_operator_pods_and_events.go:363] Fetching logs for node-ca container node-ca-cttdc pod in namespace openshift-image-registry (previous: false).
W0216 13:35:57.683651 1 gather_dvo_metrics.go:210] Failed to read the DVO metrics. Trying again.
I0216 13:35:57.712342 1 gather_cluster_operator_pods_and_events.go:278] Error: "log buffer is empty"
I0216 13:35:57.712360 1 gather_cluster_operator_pods_and_events.go:363] Fetching logs for node-ca container node-ca-lhdkv pod in namespace openshift-image-registry (previous: false).
I0216 13:35:57.914578 1 gather_cluster_operator_pods_and_events.go:278] Error: "log buffer is empty"
I0216 13:35:57.914593 1 gather_cluster_operator_pods_and_events.go:363] Fetching logs for node-ca container node-ca-nj76c pod in namespace openshift-image-registry (previous: false).
I0216 13:35:58.113405 1 gather_cluster_operator_pods_and_events.go:278] Error: "log buffer is empty"
I0216 13:35:58.113426 1 gather_cluster_operator_pods_and_events.go:363] Fetching logs for router container router-default-69584f64bb-mqppj pod in namespace openshift-ingress (previous: false).
I0216 13:35:58.313342 1 gather_cluster_operator_pods_and_events.go:406] Failed to fetch log for router-default-69584f64bb-mqppj pod in namespace openshift-ingress for failing operator router (previous: false): "container \"router\" in pod \"router-default-69584f64bb-mqppj\" is waiting to start: ContainerCreating"
I0216 13:35:58.313360 1 gather_cluster_operator_pods_and_events.go:278] Error: "container \"router\" in pod \"router-default-69584f64bb-mqppj\" is waiting to start: ContainerCreating"
I0216 13:35:58.313371 1 gather_cluster_operator_pods_and_events.go:363] Fetching logs for router container router-default-69584f64bb-xsx74 pod in namespace openshift-ingress (previous: false).
I0216 13:35:58.514740 1 gather_cluster_operator_pods_and_events.go:406] Failed to fetch log for router-default-69584f64bb-xsx74 pod in namespace openshift-ingress for failing operator router (previous: false): "container \"router\" in pod \"router-default-69584f64bb-xsx74\" is waiting to start: ContainerCreating"
I0216 13:35:58.514756 1 gather_cluster_operator_pods_and_events.go:278] Error: "container \"router\" in pod \"router-default-69584f64bb-xsx74\" is waiting to start: ContainerCreating"
I0216 13:35:58.514764 1 gather_cluster_operator_pods_and_events.go:363] Fetching logs for router container router-default-6d865b5587-q8kh9 pod in namespace openshift-ingress (previous: false).
W0216 13:35:58.683875 1 gather_dvo_metrics.go:210] Failed to read the DVO metrics. Trying again.
I0216 13:35:58.715864 1 gather_cluster_operator_pods_and_events.go:406] Failed to fetch log for router-default-6d865b5587-q8kh9 pod in namespace openshift-ingress for failing operator router (previous: false): "container \"router\" in pod \"router-default-6d865b5587-q8kh9\" is waiting to start: ContainerCreating"
I0216 13:35:58.715876 1 gather_cluster_operator_pods_and_events.go:278] Error: "container \"router\" in pod \"router-default-6d865b5587-q8kh9\" is waiting to start: ContainerCreating"
I0216 13:35:58.715907 1 gather_cluster_operator_pods_and_events.go:363] Fetching logs for serve-healthcheck-canary container ingress-canary-h7dxq pod in namespace openshift-ingress-canary (previous: false).
I0216 13:35:58.913765 1 gather_cluster_operator_pods_and_events.go:406] Failed to fetch log for ingress-canary-h7dxq pod in namespace openshift-ingress-canary for failing operator serve-healthcheck-canary (previous: false): "container \"serve-healthcheck-canary\" in pod \"ingress-canary-h7dxq\" is waiting to start: ContainerCreating"
I0216 13:35:58.913784 1 gather_cluster_operator_pods_and_events.go:278] Error: "container \"serve-healthcheck-canary\" in pod \"ingress-canary-h7dxq\" is waiting to start: ContainerCreating"
I0216 13:35:58.913812 1 gather_cluster_operator_pods_and_events.go:363] Fetching logs for serve-healthcheck-canary container ingress-canary-m8g7c pod in namespace openshift-ingress-canary (previous: false).
I0216 13:35:59.113262 1 gather_cluster_operator_pods_and_events.go:406] Failed to fetch log for ingress-canary-m8g7c pod in namespace openshift-ingress-canary for failing operator serve-healthcheck-canary (previous: false): "container \"serve-healthcheck-canary\" in pod \"ingress-canary-m8g7c\" is waiting to start: ContainerCreating"
I0216 13:35:59.113278 1 gather_cluster_operator_pods_and_events.go:278] Error: "container \"serve-healthcheck-canary\" in pod \"ingress-canary-m8g7c\" is waiting to start: ContainerCreating"
I0216 13:35:59.113307 1 gather_cluster_operator_pods_and_events.go:363] Fetching logs for serve-healthcheck-canary container ingress-canary-vf8b4 pod in namespace openshift-ingress-canary (previous: false).
I0216 13:35:59.314577 1 gather_cluster_operator_pods_and_events.go:406] Failed to fetch log for ingress-canary-vf8b4 pod in namespace openshift-ingress-canary for failing operator serve-healthcheck-canary (previous: false): "container \"serve-healthcheck-canary\" in pod \"ingress-canary-vf8b4\" is waiting to start: ContainerCreating"
I0216 13:35:59.314596 1 gather_cluster_operator_pods_and_events.go:278] Error: "container \"serve-healthcheck-canary\" in pod \"ingress-canary-vf8b4\" is waiting to start: ContainerCreating"
I0216 13:35:59.314615 1 tasks_processing.go:74] worker 31 stopped.
I0216 13:35:59.314721 1 recorder.go:75] Recording events/openshift-dns-operator with fingerprint=a199164bb466a724e3282142da277e33f2bf8391c5b290b59d828161cd424e66
I0216 13:35:59.314768 1 recorder.go:75] Recording events/openshift-dns with fingerprint=c7cb19f88a5a9503e52ba4a07e954ee46e09295c23c1fd96892342b5166f3b40
I0216 13:35:59.314836 1 recorder.go:75] Recording events/openshift-image-registry with fingerprint=72828cd185abf151882de7602f1d2eb2f7164e2852df7a087b616c5600c24ea4
I0216 13:35:59.314863 1 recorder.go:75] Recording events/openshift-ingress-operator with fingerprint=887d00065e270630780b4bb2befd6f3de2aa1bf7274df6041a334c13186356ed
I0216 13:35:59.314904 1 recorder.go:75] Recording events/openshift-ingress with fingerprint=e7e5fd3e9d83de6ba2db07b92d5d8800f0a47611f9b5a965a4b3ba76e5b02913
I0216 13:35:59.314925 1 recorder.go:75] Recording events/openshift-ingress-canary with fingerprint=a648b25cd3fd3dd5e0b9dfe1971047a38da38dcdcc791f2d9b3f91a680ae89ef
I0216 13:35:59.315038 1 recorder.go:75] Recording config/pod/openshift-image-registry/image-registry-6c784586d7-hmcck with fingerprint=17344aacc35c0a0fa636027b90a8983cddd04f8d3c6070ade86de548047ab5d3
I0216 13:35:59.315110 1 recorder.go:75] Recording config/pod/openshift-image-registry/image-registry-6dccd56644-5shk4 with fingerprint=94e0d756ae9516bf33ad5b43aa7ae41791eaa3b29bd85df7c5d768b3a3605a8b
I0216 13:35:59.315168 1 recorder.go:75] Recording config/pod/openshift-image-registry/image-registry-6dccd56644-nlbjr with fingerprint=f2412911ce2464bf20a20bb588528429f13fef884cdbf68921eca578e5d53bff
I0216 13:35:59.315211 1 recorder.go:75] Recording config/pod/openshift-ingress-canary/ingress-canary-h7dxq with fingerprint=14619d1909340eb525cafedf3a098cce39e7c64e8f5515e5b814590ebc28d59e
I0216 13:35:59.315252 1 recorder.go:75] Recording config/pod/openshift-ingress-canary/ingress-canary-m8g7c with fingerprint=eed7ae15bfb28ca86be781a529bf6c6a62a4dcee523320587f28a5fcdf60293f
I0216 13:35:59.315291 1 recorder.go:75] Recording config/pod/openshift-ingress-canary/ingress-canary-vf8b4 with fingerprint=19748e6633914396eda380c7411d036cec6bb7267c64766b84ec21b8f5c84ba6
I0216 13:35:59.315305 1 gather.go:180] gatherer "clusterconfig" function "operators_pods_and_events" took 4.663938324s to process 12 records
W0216 13:35:59.680246 1 gather_dvo_metrics.go:117] Unable to read metrics from endpoint "http://deployment-validation-operator-metrics.openshift-deployment-validation-operator.svc:8383": DVO metrics service was not available within the 5s timeout: context deadline exceeded
I0216 13:35:59.680274 1 tasks_processing.go:74] worker 60 stopped.
E0216 13:35:59.680283 1 gather.go:143] gatherer "clusterconfig" function "dvo_metrics" failed with the error: DVO metrics service was not available within the 5s timeout: context deadline exceeded
I0216 13:35:59.680291 1 recorder.go:75] Recording config/dvo_metrics with fingerprint=e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
W0216 13:35:59.680304 1 gather.go:158] issue recording gatherer "clusterconfig" function "dvo_metrics" result "config/dvo_metrics" because of the warning: warning: the record with the same fingerprint "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855" was already recorded at path "config/configmaps/openshift-config/openshift-service-ca.crt/service-ca.crt", recording another one with a different path "config/dvo_metrics"
I0216 13:35:59.680312 1 gather.go:180] gatherer "clusterconfig" function "dvo_metrics" took 5.031412473s to process 1 records
I0216 13:36:07.093176 1 tasks_processing.go:74] worker 19 stopped.
I0216 13:36:07.093203 1 recorder.go:75] Recording config/installplans with fingerprint=7b887df561a3a9e6ef0dc672845aa5d56e348505006b7496d3a2f83892b0c95b
I0216 13:36:07.093213 1 gather.go:180] gatherer "clusterconfig" function "install_plans" took 12.444135885s to process 1 records
I0216 13:36:07.856792 1 tasks_processing.go:74] worker 56 stopped.
I0216 13:36:07.856990 1 recorder.go:75] Recording config/serviceaccounts with fingerprint=a2f40d32b1fce5616a6158667c9ac0692bcb5b15e963bc145f33dd04ddc2a588
I0216 13:36:07.857004 1 gather.go:180] gatherer "clusterconfig" function "service_accounts" took 13.206238062s to process 1 records
E0216 13:36:07.857058 1 periodic.go:252] clusterconfig failed after 13.208s with: function "pod_network_connectivity_checks" failed with an error, function "support_secret" failed with an error, function "machine_healthchecks" failed with an error, function "machines" failed with an error, function "ingress_certificates" failed with an error, function "config_maps" failed with an error, function "dvo_metrics" failed with an error
I0216 13:36:07.857070 1 controllerstatus.go:89] name=periodic-clusterconfig healthy=false reason=PeriodicGatherFailed message=Source clusterconfig could not be retrieved: function "pod_network_connectivity_checks" failed with an error, function "support_secret" failed with an error, function "machine_healthchecks" failed with an error, function "machines" failed with an error, function "ingress_certificates" failed with an error, function "config_maps" failed with an error, function "dvo_metrics" failed with an error
I0216 13:36:07.857075 1 periodic.go:214] Running workloads gatherer
I0216 13:36:07.857086 1 tasks_processing.go:45] number of workers: 2
I0216 13:36:07.857095 1 tasks_processing.go:69] worker 1 listening for tasks.
I0216 13:36:07.857098 1 tasks_processing.go:71] worker 1 working on workload_info task.
I0216 13:36:07.857110 1 tasks_processing.go:69] worker 0 listening for tasks.
I0216 13:36:07.857183 1 tasks_processing.go:71] worker 0 working on helmchart_info task.
I0216 13:36:07.876695 1 gather_workloads_info.go:257] Loaded pods in 0s, will wait 22s for image data
I0216 13:36:07.888801 1 gather_workloads_info.go:366] No image sha256:b34e84d56775e42b7d832d14c4f9dc302fee37cd81ba221397cd8acba2089d20 (13ms)
I0216 13:36:07.889869 1 tasks_processing.go:74] worker 0 stopped.
I0216 13:36:07.889883 1 gather.go:180] gatherer "workloads" function "helmchart_info" took 32.670028ms to process 0 records
I0216 13:36:07.897442 1 gather_workloads_info.go:366] No image sha256:2bf8536171476b2d616cf62b4d94d2f1dae34aca6ea6bfdb65e764a8d9675891 (9ms)
I0216 13:36:07.905836 1 gather_workloads_info.go:366] No image sha256:79449e16b1207223f1209d19888b879eb56a8202c53df4800e09b231392cf219 (8ms)
I0216 13:36:07.914230 1 gather_workloads_info.go:366] No image sha256:0d1d37dbdb726e924b519ef27e52e9719601fab838ae75f72c8aca11e8c3b4cc (8ms)
I0216 13:36:07.922634 1 gather_workloads_info.go:366] No image sha256:0f31e990f9ca9d15dcb1b25325c8265515fcc06381909349bb021103827585c6 (8ms)
I0216 13:36:07.931236 1 gather_workloads_info.go:366] No image sha256:822db36f8e1353ac24785b88d1fb2150d3ef34a5e739c1f67b61079336e9798b (9ms)
I0216 13:36:07.940156 1 gather_workloads_info.go:366] No image sha256:29d1672ef44c59d065737eca330075dd2f6da4ba743153973a739aa9e9d73ad3 (9ms)
I0216 13:36:07.949175 1 gather_workloads_info.go:366] No image sha256:745f2186738a57bb1b484f68431e77aa2f68a1b8dcb434b1f7a4b429eafdf091 (9ms)
I0216 13:36:07.959352 1 gather_workloads_info.go:366] No image sha256:5335f64616c3a6c55a9a6dc4bc084b46a4957fb4fc250afc5343e4547ebb3598 (10ms)
I0216 13:36:07.967720 1 gather_workloads_info.go:366] No image sha256:c822bd444a7bc53b21afb9372ff0a24961b2687073f3563c127cce5803801b04 (8ms)
I0216 13:36:07.985801 1 gather_workloads_info.go:366] No image sha256:712ad2760c350db1e23b9393bdda83149452931dc7b5a5038a3bcdb4663917c0 (18ms)
I0216 13:36:08.086728 1 gather_workloads_info.go:366] No image sha256:2193d7361704b0ae4bca052e9158761e06ecbac9ca3f0a9c8f0f101127e8f370 (101ms)
I0216 13:36:08.186112 1 gather_workloads_info.go:366] No image sha256:64ef34275f7ea992f5a4739cf7a724e55806bfab0c752fc0eccc2f70dfecbaf4 (99ms)
I0216 13:36:08.285965 1 gather_workloads_info.go:366] No image sha256:357821852af925e0c8a19df2f9fceec8d2e49f9d13575b86ecd3fbedce488afa (100ms)
I0216 13:36:08.389163 1 gather_workloads_info.go:366] No image sha256:2121717e0222b9e8892a44907b461a4f62b3f1e5429a0e2eee802d48d04fff30 (103ms)
I0216 13:36:08.486212 1 gather_workloads_info.go:366] No image sha256:88e6cc2192e682bb9c4ac5aec8e41254696d909c5dc337e720b9ec165a728064 (97ms)
I0216 13:36:08.586167 1 gather_workloads_info.go:366] No image sha256:43e426ac9df633be58006907aede6f9b6322c6cc7985cd43141ad7518847c637 (100ms)
I0216 13:36:08.686983 1 gather_workloads_info.go:366] No image sha256:586e9c2756f50e562a6123f47fe38dba5496b63413c3dd18e0b85d6167094f0c (101ms)
I0216 13:36:08.787215 1 gather_workloads_info.go:366] No image sha256:33d7e5c63340e93b5a063de538017ac693f154e3c27ee2ef8a8a53bb45583552 (100ms)
I0216 13:36:08.886234 1 gather_workloads_info.go:366] No image sha256:f550296753e9898c67d563b7deb16ba540ca1367944c905415f35537b6b949d4 (99ms)
I0216 13:36:08.986087 1 gather_workloads_info.go:366] No image sha256:036e6f9a4609a7499f200032dac2294e4a2d98764464ed17453ef725f2f0264d (100ms)
I0216 13:36:09.013356 1 configmapobserver.go:84] configmaps "insights-config" not found
I0216 13:36:09.085756 1 gather_workloads_info.go:366] No image sha256:9cc55a501aaad1adbefdd573e57c2f756a3a6a8723c43052995be6389edf1fa8 (100ms)
I0216 13:36:09.185921 1 gather_workloads_info.go:366] No image sha256:457372d9f22e1c726ea1a6fcc54ddca8335bd607d2c357bcd7b63a7017aa5c2b (100ms)
I0216 13:36:09.215627 1 configmapobserver.go:84] configmaps "insights-config" not found
I0216 13:36:09.286461 1 gather_workloads_info.go:366] No image sha256:185305b7da4ef5b90a90046f145e8c66bab3a16b12771d2e98bf78104d6a60f2 (101ms)
I0216 13:36:09.387080 1 gather_workloads_info.go:366] No image sha256:7f55b7dbfb15fe36d83d64027eacee22fb00688ccbc03550cc2dbedfa633f288 (101ms)
I0216 13:36:09.486194 1 gather_workloads_info.go:366] No image sha256:91d9cb208e6d0c39a87dfe8276d162c75ff3fcd3b005b3e7b537f65c53475a42 (99ms)
I0216 13:36:09.585853 1 gather_workloads_info.go:366] No image sha256:27e725f1250f6a17da5eba7ada315a244592b5b822d61e95722bb7e2f884b00f (100ms)
I0216 13:36:09.687756 1 gather_workloads_info.go:366] No image sha256:f82357030795138d2081ecc5172092222b0f4faea27e9a7a0474fbeae29111ad (102ms)
I0216 13:36:09.787724 1 gather_workloads_info.go:366] No image sha256:deffb0293fd11f5b40609aa9e80b16b0f90a9480013b2b7f61bd350bbd9b6f07 (100ms)
I0216 13:36:09.887170 1 gather_workloads_info.go:366] No image sha256:59f553035bc347fc7205f1c071897bc2606b98525d6b9a3aca62fc9cd7078a57 (99ms)
I0216 13:36:09.985408 1 gather_workloads_info.go:366] No image sha256:29e41a505a942a77c0d5f954eb302c01921cb0c0d176066fe63f82f3e96e3923 (98ms)
I0216 13:36:09.985431 1 tasks_processing.go:74] worker 1 stopped.
I0216 13:36:09.985645 1 recorder.go:75] Recording config/workload_info with fingerprint=6b1a46ab6fb64fdba0966287934850ff865c2b0c50784c2f541b06a422f3ece5
I0216 13:36:09.985665 1 gather.go:180] gatherer "workloads" function "workload_info" took 2.128326808s to process 1 records
I0216 13:36:09.985703 1 periodic.go:261] Periodic gather workloads completed in 2.128s
I0216 13:36:09.985715 1 controllerstatus.go:80] name=periodic-workloads healthy=true reason= message=
I0216 13:36:09.985720 1 periodic.go:214] Running conditional gatherer
I0216 13:36:09.990654 1 requests.go:282] Making HTTP GET request at: https://console.redhat.com/api/gathering/v2/4.17.48/gathering_rules
I0216 13:36:09.995265 1 conditional_gatherer.go:107] Get "https://console.redhat.com/api/gathering/v2/4.17.48/gathering_rules": dial tcp: lookup console.redhat.com on 172.30.0.10:53: read udp 10.129.0.9:47029->172.30.0.10:53: read: connection refused
E0216 13:36:09.995486 1 conditional_gatherer.go:324] unable to update alerts cache: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory
I0216 13:36:09.995536 1 conditional_gatherer.go:386] updating version cache for conditional gatherer
I0216 13:36:10.001131 1 conditional_gatherer.go:394] cluster version is '4.17.48'
E0216 13:36:10.001144 1 conditional_gatherer.go:211] error checking conditions for a gathering rule: alerts cache is missing
E0216 13:36:10.001149 1 conditional_gatherer.go:211] error checking conditions for a gathering rule: alerts cache is missing
E0216 13:36:10.001153 1 conditional_gatherer.go:211] error checking conditions for a gathering rule: alerts cache is missing
E0216 13:36:10.001157 1 conditional_gatherer.go:211] error checking conditions for a gathering rule: alerts cache is missing
E0216 13:36:10.001160 1 conditional_gatherer.go:211] error checking conditions for a gathering rule: alerts cache is missing
E0216 13:36:10.001165 1 conditional_gatherer.go:211] error checking conditions for a gathering rule: alerts cache is missing
E0216 13:36:10.001169 1 conditional_gatherer.go:211] error checking conditions for a gathering rule: alerts cache is missing
E0216 13:36:10.001174 1 conditional_gatherer.go:211] error checking conditions for a gathering rule: alerts cache is missing
E0216 13:36:10.001177 1 conditional_gatherer.go:211] error checking conditions for a gathering rule: alerts cache is missing
I0216 13:36:10.001193 1 tasks_processing.go:45] number of workers: 3
I0216 13:36:10.001212 1 tasks_processing.go:69] worker 2 listening for tasks.
I0216 13:36:10.001219 1 tasks_processing.go:69] worker 1 listening for tasks.
I0216 13:36:10.001211 1 tasks_processing.go:69] worker 0 listening for tasks.
I0216 13:36:10.001221 1 tasks_processing.go:71] worker 2 working on conditional_gatherer_rules task.
I0216 13:36:10.001230 1 tasks_processing.go:71] worker 0 working on remote_configuration task.
I0216 13:36:10.001226 1 tasks_processing.go:71] worker 1 working on rapid_container_logs task.
I0216 13:36:10.001295 1 recorder.go:75] Recording insights-operator/conditional-gatherer-rules with fingerprint=7034af97e7e41c22e4b775abdd4b9066c8ebb19da33eb7f69f39bfd2eb5f6406
I0216 13:36:10.001310 1 gather.go:180] gatherer "conditional" function "conditional_gatherer_rules" took 1.139µs to process 1 records
I0216 13:36:10.001335 1 recorder.go:75] Recording insights-operator/remote-configuration with fingerprint=0394430c431eec4d48bb1811a90918e95161d2282c59af26f2473613cc0959db
I0216 13:36:10.001351 1 gather.go:180] gatherer "conditional" function "remote_configuration" took 1.366µs to process 1 records
I0216 13:36:10.001233 1 tasks_processing.go:74] worker 2 stopped.
I0216 13:36:10.001357 1 tasks_processing.go:74] worker 0 stopped.
I0216 13:36:10.001383 1 tasks_processing.go:74] worker 1 stopped.
I0216 13:36:10.001392 1 gather.go:180] gatherer "conditional" function "rapid_container_logs" took 144.997µs to process 0 records
I0216 13:36:10.001412 1 controllerstatus.go:89] name=periodic-conditional healthy=false reason=NotAvailable message=Get "https://console.redhat.com/api/gathering/v2/4.17.48/gathering_rules": dial tcp: lookup console.redhat.com on 172.30.0.10:53: read udp 10.129.0.9:47029->172.30.0.10:53: read: connection refused
I0216 13:36:10.001425 1 recorder.go:75] Recording insights-operator/remote-configuration.json with fingerprint=359de9c990c741675cec72fda96b5c3682221efdb4799f5eaa6e9805bcd3b5c1
W0216 13:36:10.031448 1 gather.go:212] can't read cgroups memory usage data: open /sys/fs/cgroup/memory/memory.usage_in_bytes: no such file or directory
I0216 13:36:10.031540 1 recorder.go:75] Recording insights-operator/gathers with fingerprint=7d08a21f4e37fff485622452ae4d066b3ff827d189b33dc3c7679144d0ff4255
I0216 13:36:10.031666 1 diskrecorder.go:70] Writing 106 records to /var/lib/insights-operator/insights-2026-02-16-133610.tar.gz
I0216 13:36:10.037174 1 diskrecorder.go:51] Wrote 106 records to disk in 5ms
I0216 13:36:10.037199 1 periodic.go:283] Gathering cluster info every 2h0m0s
I0216 13:36:10.037216 1 periodic.go:284] Configuration is dataReporting: interval: 2h0m0s, uploadEndpoint: https://console.redhat.com/api/ingress/v1/upload, storagePath: /var/lib/insights-operator, downloadEndpoint: https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/reports, conditionalGathererEndpoint: https://console.redhat.com/api/gathering/v2/%s/gathering_rules, obfuscation: [] sca: disabled: false, endpoint: https://api.openshift.com/api/accounts_mgmt/v1/certificates, interval: 8h0m0s alerting: disabled: false clusterTransfer: endpoint: https://api.openshift.com/api/accounts_mgmt/v1/cluster_transfers/, interval: 12h0m0s proxy: httpProxy: , httpsProxy: , noProxy:
I0216 13:36:19.052068 1 configmapobserver.go:84] configmaps "insights-config" not found
I0216 13:37:14.339166 1 observer_polling.go:111] Observed file "/var/run/secrets/serving-cert/tls.crt" has been created (hash="4b0d09b928f26ea874d9570463786f73814f155ffd2ad8ee838354fac1d49b1a")
W0216 13:37:14.339203 1 builder.go:155] Restart triggered because of file /var/run/secrets/serving-cert/tls.crt was created
I0216 13:37:14.339230 1 observer_polling.go:111] Observed file "/var/run/secrets/serving-cert/tls.key" has been created (hash="ce3852b93633ff802bb37b275551edb94c307e34974141f819f487fc1245da6d")
I0216 13:37:14.339299 1 observer_polling.go:111] Observed file "/var/run/configmaps/service-ca-bundle/service-ca.crt" has been created (hash="f23654f350496e1f086aa5c01e61a99e21a1dd277f27a6d8e52d0a91745c6642")
I0216 13:37:14.339305 1 simple_featuregate_reader.go:177] Shutting down feature-gate-detector
I0216 13:37:14.339328 1 genericapiserver.go:679] "[graceful-termination] pre-shutdown hooks completed" name="PreShutdownHooksStopped"
I0216 13:37:14.339353 1 genericapiserver.go:536] "[graceful-termination] shutdown event" name="ShutdownInitiated"
I0216 13:37:14.339358 1 base_controller.go:172] Shutting down ConfigController ...
I0216 13:37:14.339378 1 periodic.go:175] Shutting down
I0216 13:37:14.339403 1 genericapiserver.go:637] "[graceful-termination] not going to wait for active watch request(s) to drain"
E0216 13:37:14.339416 1 controller.go:290] Unable to write cluster operator status: client rate limiter Wait returned an error: context canceled