W0429 08:47:57.453921 1 cmd.go:257] Using insecure, self-signed certificates
I0429 08:47:57.855357 1 start.go:138] Unable to read service ca bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory
I0429 08:47:57.855741 1 observer_polling.go:159] Starting file observer
I0429 08:47:58.102304 1 operator.go:60] Starting insights-operator v0.0.0-master+$Format:%H$
I0429 08:47:58.102543 1 legacy_config.go:327] Current config: {"report":false,"storagePath":"/var/lib/insights-operator","interval":"2h","endpoint":"https://console.redhat.com/api/ingress/v1/upload","conditionalGathererEndpoint":"https://console.redhat.com/api/gathering/v2/%s/gathering_rules","pull_report":{"endpoint":"https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/reports","delay":"60s","timeout":"3000s","min_retry":"30s"},"impersonate":"system:serviceaccount:openshift-insights:gather","enableGlobalObfuscation":false,"ocm":{"scaEndpoint":"https://api.openshift.com/api/accounts_mgmt/v1/entitlement_certificates","scaInterval":"8h","scaDisabled":false,"clusterTransferEndpoint":"https://api.openshift.com/api/accounts_mgmt/v1/cluster_transfers/","clusterTransferInterval":"12h"},"disableInsightsAlerts":false,"processingStatusEndpoint":"https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/request/%s/status","reportEndpointTechPreview":"https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/request/%s/report"}
I0429 08:47:58.103072 1 secure_serving.go:57] Forcing use of http/1.1 only
I0429 08:47:58.103080 1 simple_featuregate_reader.go:171] Starting feature-gate-detector
W0429 08:47:58.103094 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.
W0429 08:47:58.103099 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.
W0429 08:47:58.103103 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.
W0429 08:47:58.103106 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.
W0429 08:47:58.103109 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.
W0429 08:47:58.103113 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.
I0429 08:47:58.106697 1 operator.go:125] FeatureGates initialized: knownFeatureGates=[AdditionalRoutingCapabilities AdminNetworkPolicy AlibabaPlatform AzureWorkloadIdentity BuildCSIVolumes CPMSMachineNamePrefix ConsolePluginContentSecurityPolicy ExternalOIDC ExternalOIDCWithUIDAndExtraClaimMappings GatewayAPI GatewayAPIController HighlyAvailableArbiter ImageVolume IngressControllerLBSubnetsAWS KMSv1 MachineConfigNodes ManagedBootImages ManagedBootImagesAWS MetricsCollectionProfiles NetworkDiagnosticsConfig NetworkLiveMigration NetworkSegmentation PinnedImages ProcMountType RouteAdvertisements RouteExternalCertificate ServiceAccountTokenNodeBinding SetEIPForNLBIngressController SigstoreImageVerification StoragePerformantSecurityPolicy UpgradeStatus UserNamespacesPodSecurityStandards UserNamespacesSupport VSphereMultiDisk VSphereMultiNetworks AWSClusterHostedDNS AWSClusterHostedDNSInstall AWSDedicatedHosts AWSServiceLBNetworkSecurityGroup AutomatedEtcdBackup AzureClusterHostedDNSInstall AzureDedicatedHosts AzureMultiDisk BootImageSkewEnforcement BootcNodeManagement ClusterAPIInstall ClusterAPIInstallIBMCloud ClusterMonitoringConfig ClusterVersionOperatorConfiguration DNSNameResolver DualReplica DyanmicServiceEndpointIBMCloud DynamicResourceAllocation EtcdBackendQuota EventedPLEG Example Example2 ExternalSnapshotMetadata GCPClusterHostedDNS GCPClusterHostedDNSInstall GCPCustomAPIEndpoints GCPCustomAPIEndpointsInstall ImageModeStatusReporting ImageStreamImportMode IngressControllerDynamicConfigurationManager InsightsConfig InsightsConfigAPI InsightsOnDemandDataGather IrreconcilableMachineConfig KMSEncryptionProvider MachineAPIMigration MachineAPIOperatorDisableMachineHealthCheckController ManagedBootImagesAzure ManagedBootImagesvSphere MaxUnavailableStatefulSet MinimumKubeletVersion MixedCPUsAllocation MultiArchInstallAzure MultiDiskSetup MutatingAdmissionPolicy NewOLM NewOLMCatalogdAPIV1Metas NewOLMOwnSingleNamespace NewOLMPreflightPermissionChecks NewOLMWebhookProviderOpenshiftServiceCA NoRegistryClusterOperations NodeSwap NutanixMultiSubnets OVNObservability OpenShiftPodSecurityAdmission PreconfiguredUDNAddresses SELinuxMount ShortCertRotation SignatureStores SigstoreImageVerificationPKI TranslateStreamCloseWebsocketRequests VSphereConfigurableMaxAllowedBlockVolumesPerNode VSphereHostVMGroupZonal VSphereMixedNodeEnv VolumeAttributesClass VolumeGroupSnapshot]
I0429 08:47:58.106702 1 event.go:377] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-insights", Name:"insights-operator", UID:"f4fda090-58fe-40c2-b261-2f1eb8bc3d17", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'FeatureGatesInitialized' FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdditionalRoutingCapabilities", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BuildCSIVolumes", "CPMSMachineNamePrefix", "ConsolePluginContentSecurityPolicy", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GatewayAPI", "GatewayAPIController", "HighlyAvailableArbiter", "ImageVolume", "IngressControllerLBSubnetsAWS", "KMSv1", "MachineConfigNodes", "ManagedBootImages", "ManagedBootImagesAWS", "MetricsCollectionProfiles", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "PinnedImages", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SetEIPForNLBIngressController", "SigstoreImageVerification", "StoragePerformantSecurityPolicy", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiDisk", "VSphereMultiNetworks"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AWSClusterHostedDNSInstall", "AWSDedicatedHosts", "AWSServiceLBNetworkSecurityGroup", "AutomatedEtcdBackup", "AzureClusterHostedDNSInstall", "AzureDedicatedHosts", "AzureMultiDisk", "BootImageSkewEnforcement", "BootcNodeManagement", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ClusterVersionOperatorConfiguration", "DNSNameResolver", "DualReplica", "DyanmicServiceEndpointIBMCloud", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "Example2", "ExternalSnapshotMetadata", "GCPClusterHostedDNS", "GCPClusterHostedDNSInstall", "GCPCustomAPIEndpoints", "GCPCustomAPIEndpointsInstall", "ImageModeStatusReporting", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "IrreconcilableMachineConfig", "KMSEncryptionProvider", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "ManagedBootImagesAzure", "ManagedBootImagesvSphere", "MaxUnavailableStatefulSet", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "MultiDiskSetup", "MutatingAdmissionPolicy", "NewOLM", "NewOLMCatalogdAPIV1Metas", "NewOLMOwnSingleNamespace", "NewOLMPreflightPermissionChecks", "NewOLMWebhookProviderOpenshiftServiceCA", "NoRegistryClusterOperations", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PreconfiguredUDNAddresses", "SELinuxMount", "ShortCertRotation", "SignatureStores", "SigstoreImageVerificationPKI", "TranslateStreamCloseWebsocketRequests", "VSphereConfigurableMaxAllowedBlockVolumesPerNode", "VSphereHostVMGroupZonal", "VSphereMixedNodeEnv", "VolumeAttributesClass", "VolumeGroupSnapshot"}}
I0429 08:47:58.106885 1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
I0429 08:47:58.106899 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController
I0429 08:47:58.106920 1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I0429 08:47:58.106930 1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
I0429 08:47:58.106947 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0429 08:47:58.106956 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I0429 08:47:58.107198 1 dynamic_serving_content.go:135] "Starting controller" name="serving-cert::/tmp/serving-cert-3342022625/tls.crt::/tmp/serving-cert-3342022625/tls.key"
I0429 08:47:58.107637 1 secure_serving.go:213] Serving securely on [::]:8443
I0429 08:47:58.107655 1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
W0429 08:47:58.110909 1 configmapobserver.go:64] Cannot get the configuration config map: configmaps "insights-config" not found. Default configuration is used.
I0429 08:47:58.110942 1 secretconfigobserver.go:216] Legacy configuration set: enabled=false endpoint=https://console.redhat.com/api/ingress/v1/upload conditional_gatherer_endpoint=https://console.redhat.com/api/gathering/v2/%s/gathering_rules interval=2h0m0s token=false reportEndpoint=https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/reports initialPollingDelay=1m0s minRetryTime=30s pollingTimeout=50m0s processingStatusEndpoint=https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/request/%s/status
I0429 08:47:58.110979 1 base_controller.go:76] Waiting for caches to sync for ConfigController
I0429 08:47:58.116414 1 secretconfigobserver.go:249] Found cloud.openshift.com token
I0429 08:47:58.116432 1 secretconfigobserver.go:204] Legacy configuration updated: enabled=true endpoint=https://console.redhat.com/api/ingress/v1/upload conditional_gatherer_endpoint=https://console.redhat.com/api/gathering/v2/%s/gathering_rules interval=2h0m0s token=true reportEndpoint=https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/reports initialPollingDelay=1m0s minRetryTime=30s pollingTimeout=50m0s processingStatusEndpoint=https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/request/%s/status
I0429 08:47:58.119749 1 secretconfigobserver.go:119] support secret does not exist
I0429 08:47:58.123001 1 secretconfigobserver.go:249] Found cloud.openshift.com token
I0429 08:47:58.125991 1 secretconfigobserver.go:119] support secret does not exist
I0429 08:47:58.128089 1 recorder.go:161] Pruning old reports every 7h33m24s, max age is 288h0m0s
I0429 08:47:58.131735 1 controllerstatus.go:80] name=insightsuploader healthy=true reason= message=
I0429 08:47:58.131748 1 insightsuploader.go:86] Reporting status periodically to https://console.redhat.com/api/ingress/v1/upload every 2h0m0s, starting in 1m30s
I0429 08:47:58.131767 1 controllerstatus.go:80] name=insightsreport healthy=true reason= message=
I0429 08:47:58.131783 1 insightsreport.go:296] Starting report retriever
I0429 08:47:58.131792 1 periodic.go:209] Running clusterconfig gatherer
I0429 08:47:58.131792 1 insightsreport.go:298] Insights analysis reports will be downloaded from the https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/reports endpoint with a delay of 1m0s
I0429 08:47:58.131834 1 tasks_processing.go:45] number of workers: 64
I0429 08:47:58.131865 1 tasks_processing.go:69] worker 5 listening for tasks.
I0429 08:47:58.131875 1 tasks_processing.go:69] worker 2 listening for tasks.
I0429 08:47:58.131878 1 tasks_processing.go:69] worker 3 listening for tasks.
I0429 08:47:58.131883 1 tasks_processing.go:69] worker 1 listening for tasks.
I0429 08:47:58.131886 1 tasks_processing.go:69] worker 4 listening for tasks.
I0429 08:47:58.131882 1 tasks_processing.go:69] worker 0 listening for tasks.
I0429 08:47:58.131891 1 tasks_processing.go:69] worker 44 listening for tasks.
I0429 08:47:58.131894 1 tasks_processing.go:69] worker 31 listening for tasks.
I0429 08:47:58.131901 1 tasks_processing.go:69] worker 34 listening for tasks.
I0429 08:47:58.131897 1 tasks_processing.go:69] worker 37 listening for tasks.
I0429 08:47:58.131897 1 tasks_processing.go:69] worker 38 listening for tasks.
I0429 08:47:58.131904 1 tasks_processing.go:69] worker 42 listening for tasks.
I0429 08:47:58.131910 1 tasks_processing.go:69] worker 43 listening for tasks.
I0429 08:47:58.131911 1 tasks_processing.go:69] worker 36 listening for tasks.
I0429 08:47:58.131913 1 tasks_processing.go:69] worker 33 listening for tasks.
I0429 08:47:58.131919 1 tasks_processing.go:69] worker 58 listening for tasks.
I0429 08:47:58.131904 1 tasks_processing.go:69] worker 35 listening for tasks.
I0429 08:47:58.131921 1 tasks_processing.go:71] worker 36 working on ceph_cluster task.
I0429 08:47:58.131928 1 tasks_processing.go:69] worker 19 listening for tasks.
I0429 08:47:58.131929 1 tasks_processing.go:69] worker 53 listening for tasks.
I0429 08:47:58.131928 1 tasks_processing.go:69] worker 59 listening for tasks.
I0429 08:47:58.131936 1 tasks_processing.go:69] worker 60 listening for tasks.
I0429 08:47:58.131921 1 tasks_processing.go:71] worker 43 working on dvo_metrics task.
I0429 08:47:58.131940 1 tasks_processing.go:69] worker 39 listening for tasks.
I0429 08:47:58.131945 1 tasks_processing.go:69] worker 57 listening for tasks.
I0429 08:47:58.131908 1 tasks_processing.go:69] worker 32 listening for tasks.
I0429 08:47:58.131950 1 tasks_processing.go:69] worker 11 listening for tasks.
I0429 08:47:58.131924 1 tasks_processing.go:69] worker 52 listening for tasks.
I0429 08:47:58.131955 1 tasks_processing.go:69] worker 54 listening for tasks.
I0429 08:47:58.131960 1 tasks_processing.go:69] worker 12 listening for tasks.
I0429 08:47:58.131962 1 tasks_processing.go:69] worker 25 listening for tasks.
I0429 08:47:58.131963 1 tasks_processing.go:69] worker 47 listening for tasks.
I0429 08:47:58.131968 1 tasks_processing.go:71] worker 58 working on image_registries task.
I0429 08:47:58.131971 1 tasks_processing.go:71] worker 25 working on olm_operators task.
I0429 08:47:58.131974 1 tasks_processing.go:69] worker 17 listening for tasks.
I0429 08:47:58.131964 1 tasks_processing.go:69] worker 55 listening for tasks.
I0429 08:47:58.131981 1 tasks_processing.go:69] worker 18 listening for tasks.
I0429 08:47:58.131981 1 tasks_processing.go:69] worker 30 listening for tasks.
I0429 08:47:58.131974 1 tasks_processing.go:71] worker 47 working on pod_network_connectivity_checks task.
I0429 08:47:58.131988 1 tasks_processing.go:69] worker 28 listening for tasks.
I0429 08:47:58.131991 1 tasks_processing.go:69] worker 50 listening for tasks.
I0429 08:47:58.131996 1 tasks_processing.go:71] worker 60 working on ingress_certificates task.
I0429 08:47:58.131962 1 tasks_processing.go:71] worker 52 working on metrics task.
I0429 08:47:58.131993 1 tasks_processing.go:69] worker 29 listening for tasks.
I0429 08:47:58.132005 1 tasks_processing.go:71] worker 18 working on operators task.
I0429 08:47:58.132004 1 tasks_processing.go:71] worker 50 working on config_maps task.
I0429 08:47:58.131937 1 tasks_processing.go:69] worker 9 listening for tasks.
I0429 08:47:58.132017 1 tasks_processing.go:71] worker 32 working on nodenetworkconfigurationpolicies task.
I0429 08:47:58.131935 1 tasks_processing.go:69] worker 20 listening for tasks.
I0429 08:47:58.132015 1 tasks_processing.go:69] worker 49 listening for tasks.
I0429 08:47:58.132029 1 tasks_processing.go:69] worker 14 listening for tasks.
I0429 08:47:58.131941 1 tasks_processing.go:69] worker 56 listening for tasks.
I0429 08:47:58.132030 1 tasks_processing.go:71] worker 11 working on openstack_version task.
W0429 08:47:58.132031 1 gather_most_recent_metrics.go:64] Unable to load metrics client, no metrics will be collected: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory
I0429 08:47:58.132039 1 tasks_processing.go:71] worker 39 working on crds task.
I0429 08:47:58.132049 1 tasks_processing.go:71] worker 56 working on storage_cluster task.
I0429 08:47:58.132053 1 tasks_processing.go:69] worker 61 listening for tasks.
I0429 08:47:58.132059 1 tasks_processing.go:71] worker 61 working on openshift_machine_api_events task.
I0429 08:47:58.132062 1 gather.go:177] gatherer "clusterconfig" function "metrics" took 43.115µs to process 0 records
I0429 08:47:58.132025 1 tasks_processing.go:71] worker 57 working on image_pruners task.
I0429 08:47:58.131930 1 tasks_processing.go:69] worker 8 listening for tasks.
I0429 08:47:58.131940 1 tasks_processing.go:69] worker 51 listening for tasks.
I0429 08:47:58.131946 1 tasks_processing.go:69] worker 22 listening for tasks.
I0429 08:47:58.131952 1 tasks_processing.go:69] worker 23 listening for tasks.
I0429 08:47:58.131952 1 tasks_processing.go:69] worker 45 listening for tasks.
I0429 08:47:58.131956 1 tasks_processing.go:69] worker 24 listening for tasks.
I0429 08:47:58.131958 1 tasks_processing.go:69] worker 46 listening for tasks.
I0429 08:47:58.131964 1 tasks_processing.go:71] worker 54 working on infrastructures task.
I0429 08:47:58.132161 1 tasks_processing.go:71] worker 35 working on schedulers task.
I0429 08:47:58.132198 1 tasks_processing.go:71] worker 12 working on qemu_kubevirt_launcher_logs task.
I0429 08:47:58.131916 1 tasks_processing.go:69] worker 6 listening for tasks.
I0429 08:47:58.131968 1 tasks_processing.go:69] worker 16 listening for tasks.
I0429 08:47:58.131969 1 tasks_processing.go:69] worker 26 listening for tasks.
I0429 08:47:58.131974 1 tasks_processing.go:69] worker 27 listening for tasks.
I0429 08:47:58.131968 1 tasks_processing.go:69] worker 40 listening for tasks.
I0429 08:47:58.131980 1 tasks_processing.go:71] worker 17 working on monitoring_persistent_volumes task.
I0429 08:47:58.132019 1 tasks_processing.go:71] worker 9 working on cluster_apiserver task.
I0429 08:47:58.132010 1 tasks_processing.go:71] worker 29 working on machine_config_pools task.
I0429 08:47:58.132533 1 tasks_processing.go:71] worker 19 working on machine_healthchecks task.
I0429 08:47:58.131988 1 tasks_processing.go:71] worker 55 working on machine_autoscalers task.
I0429 08:47:58.132850 1 tasks_processing.go:71] worker 53 working on sap_datahubs task.
I0429 08:47:58.132889 1 tasks_processing.go:71] worker 3 working on silenced_alerts task.
I0429 08:47:58.132917 1 tasks_processing.go:71] worker 5 working on lokistack task.
I0429 08:47:58.132051 1 tasks_processing.go:71] worker 52 working on certificate_signing_requests task.
I0429 08:47:58.131965 1 tasks_processing.go:71] worker 33 working on aggregated_monitoring_cr_names task.
I0429 08:47:58.131983 1 tasks_processing.go:69] worker 15 listening for tasks.
I0429 08:47:58.131925 1 tasks_processing.go:69] worker 41 listening for tasks.
I0429 08:47:58.132026 1 tasks_processing.go:71] worker 20 working on machines task.
I0429 08:47:58.131998 1 tasks_processing.go:71] worker 28 working on openstack_dataplanenodesets task.
I0429 08:47:58.131941 1 tasks_processing.go:69] worker 21 listening for tasks.
I0429 08:47:58.132045 1 tasks_processing.go:71] worker 4 working on openstack_dataplanedeployments task.
I0429 08:47:58.132004 1 tasks_processing.go:69] worker 13 listening for tasks.
I0429 08:47:58.133191 1 tasks_processing.go:71] worker 31 working on authentication task.
I0429 08:47:58.131943 1 tasks_processing.go:69] worker 10 listening for tasks.
I0429 08:47:58.133211 1 tasks_processing.go:71] worker 38 working on ingress task.
I0429 08:47:58.133262 1 tasks_processing.go:71] worker 59 working on oauths task.
I0429 08:47:58.133294 1 tasks_processing.go:71] worker 37 working on proxies task.
I0429 08:47:58.133347 1 tasks_processing.go:71] worker 42 working on overlapping_namespace_uids task.
I0429 08:47:58.133400 1 tasks_processing.go:71] worker 44 working on feature_gates task.
I0429 08:47:58.132033 1 tasks_processing.go:71] worker 49 working on nodes task.
W0429 08:47:58.132925 1 gather_silenced_alerts.go:38] Unable to load alerts client, no alerts will be collected: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory
I0429 08:47:58.133476 1 gather.go:177] gatherer "clusterconfig" function "silenced_alerts" took 564.739µs to process 0 records
I0429 08:47:58.133488 1 tasks_processing.go:71] worker 2 working on sap_pods task.
I0429 08:47:58.133532 1 tasks_processing.go:71] worker 0 working on sap_config task.
I0429 08:47:58.133775 1 tasks_processing.go:69] worker 62 listening for tasks.
I0429 08:47:58.131998 1 tasks_processing.go:71] worker 30 working on service_accounts task.
I0429 08:47:58.133807 1 tasks_processing.go:69] worker 63 listening for tasks.
I0429 08:47:58.131921 1 tasks_processing.go:69] worker 7 listening for tasks.
I0429 08:47:58.133820 1 tasks_processing.go:71] worker 1 working on cost_management_metrics_configs task.
I0429 08:47:58.133932 1 tasks_processing.go:71] worker 8 working on machine_configs task.
I0429 08:47:58.133938 1 tasks_processing.go:71] worker 7 working on clusterroles task.
I0429 08:47:58.133961 1 tasks_processing.go:71] worker 27 working on operators_pods_and_events task.
I0429 08:47:58.133969 1 tasks_processing.go:71] worker 24 working on networks task.
I0429 08:47:58.133972 1 tasks_processing.go:71] worker 6 working on openshift_logging task.
I0429 08:47:58.133826 1 tasks_processing.go:71] worker 34 working on jaegers task.
I0429 08:47:58.134041 1 tasks_processing.go:71] worker 3 working on support_secret task.
I0429 08:47:58.134072 1 tasks_processing.go:71] worker 26 working on container_images task.
I0429 08:47:58.134007 1 tasks_processing.go:71] worker 46 working on install_plans task.
I0429 08:47:58.134021 1 tasks_processing.go:71] worker 13 working on pdbs task.
I0429 08:47:58.134027 1 tasks_processing.go:71] worker 40 working on image task.
I0429 08:47:58.134031 1 tasks_processing.go:71] worker 15 working on node_logs task.
I0429 08:47:58.134035 1 tasks_processing.go:71] worker 41 working on number_of_pods_and_netnamespaces_with_sdn_annotations task.
I0429 08:47:58.134028 1 tasks_processing.go:71] worker 16 working on machine_sets task.
I0429 08:47:58.134039 1 tasks_processing.go:71] worker 21 working on container_runtime_configs task.
I0429 08:47:58.134050 1 tasks_processing.go:71] worker 22 working on validating_webhook_configurations task.
I0429 08:47:58.134053 1 tasks_processing.go:71] worker 51 working on mutating_webhook_configurations task.
I0429 08:47:58.134063 1 tasks_processing.go:71] worker 10 working on storage_classes task.
I0429 08:47:58.134071 1 tasks_processing.go:71] worker 23 working on active_alerts task.
I0429 08:47:58.134084 1 tasks_processing.go:71] worker 45 working on nodenetworkstates task.
I0429 08:47:58.134094 1 tasks_processing.go:71] worker 63 working on version task.
I0429 08:47:58.134093 1 tasks_processing.go:71] worker 62 working on tsdb_status task.
W0429 08:47:58.135900 1 gather_active_alerts.go:54] Unable to load alerts client, no alerts will be collected: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory
I0429 08:47:58.132043 1 tasks_processing.go:71] worker 14 working on openstack_controlplanes task.
I0429 08:47:58.135936 1 tasks_processing.go:74] worker 23 stopped.
I0429 08:47:58.131989 1 tasks_processing.go:69] worker 48 listening for tasks.
W0429 08:47:58.136553 1 gather_prometheus_tsdb_status.go:38] Unable to load metrics client, tsdb status cannot be collected: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory
I0429 08:47:58.136590 1 gather.go:177] gatherer "clusterconfig" function "active_alerts" took 108.63µs to process 0 records
I0429 08:47:58.136608 1 gather.go:177] gatherer "clusterconfig" function "tsdb_status" took 661.792µs to process 0 records
I0429 08:47:58.136617 1 tasks_processing.go:74] worker 62 stopped.
I0429 08:47:58.136173 1 tasks_processing.go:74] worker 48 stopped.
I0429 08:47:58.136702 1 tasks_processing.go:74] worker 47 stopped.
E0429 08:47:58.136731 1 gather.go:140] gatherer "clusterconfig" function "pod_network_connectivity_checks" failed with the error: the server could not find the requested resource (get podnetworkconnectivitychecks.controlplane.operator.openshift.io)
I0429 08:47:58.136750 1 gather.go:177] gatherer "clusterconfig" function "pod_network_connectivity_checks" took 4.704948ms to process 0 records
I0429 08:47:58.136865 1 tasks_processing.go:74] worker 32 stopped.
I0429 08:47:58.136879 1 gather.go:177] gatherer "clusterconfig" function "nodenetworkconfigurationpolicies" took 4.837125ms to process 0 records
I0429 08:47:58.136902 1 gather.go:177] gatherer "clusterconfig" function "openstack_version" took 4.817261ms to process 0 records
I0429 08:47:58.136913 1 gather.go:177] gatherer "clusterconfig" function "ceph_cluster" took 4.945928ms to process 0 records
I0429 08:47:58.136915 1 tasks_processing.go:74] worker 11 stopped.
I0429 08:47:58.136921 1 tasks_processing.go:74] worker 36 stopped.
I0429 08:47:58.137009 1 controller.go:128] Initializing last reported time to 0001-01-01T00:00:00Z
I0429 08:47:58.137025 1 controller.go:212] Source periodic-workloads *controllerstatus.Simple is not ready
I0429 08:47:58.137031 1 controller.go:212] Source periodic-clusterconfig *controllerstatus.Simple is not ready
I0429 08:47:58.137034 1 controller.go:212] Source periodic-conditional *controllerstatus.Simple is not ready
I0429 08:47:58.137047 1 controller.go:489] The operator is still being initialized
I0429 08:47:58.137052 1 controller.go:512] The operator is healthy
I0429 08:47:58.137402 1 tasks_processing.go:74] worker 56 stopped.
I0429 08:47:58.137413 1 gather.go:177] gatherer "clusterconfig" function "storage_cluster" took 5.342709ms to process 0 records
I0429 08:47:58.137439 1 tasks_processing.go:74] worker 55 stopped.
I0429 08:47:58.137472 1 gather.go:177] gatherer "clusterconfig" function "machine_autoscalers" took 4.644295ms to process 0 records
I0429 08:47:58.138399 1 tasks_processing.go:74] worker 9 stopped.
I0429 08:47:58.138578 1 recorder.go:75] Recording config/apiserver with fingerprint=803104141cee9a1509ab72cc1921a1378cd672e42e8f4d9775cccf04cd524976
I0429 08:47:58.138591 1 gather.go:177] gatherer "clusterconfig" function "cluster_apiserver" took 6.084919ms to process 1 records
I0429 08:47:58.138598 1 gather.go:177] gatherer "clusterconfig" function "olm_operators" took 6.592156ms to process 0 records
I0429 08:47:58.138603 1 tasks_processing.go:74] worker 25 stopped.
I0429 08:47:58.145528 1 tasks_processing.go:74] worker 0 stopped.
I0429 08:47:58.145546 1 gather.go:177] gatherer "clusterconfig" function "sap_config" took 11.976103ms to process 0 records
I0429 08:47:58.145556 1 gather.go:177] gatherer "clusterconfig" function "sap_datahubs" took 12.662808ms to process 0 records
I0429 08:47:58.145566 1 tasks_processing.go:74] worker 53 stopped.
I0429 08:47:58.145617 1 tasks_processing.go:74] worker 4 stopped.
I0429 08:47:58.145631 1 gather.go:177] gatherer "clusterconfig" function "openstack_dataplanedeployments" took 12.461441ms to process 0 records
I0429 08:47:58.145896 1 tasks_processing.go:74] worker 35 stopped.
I0429 08:47:58.145988 1 recorder.go:75] Recording config/schedulers/cluster with fingerprint=e977a2b4d89a0b0b7756f73d02462145a1021544a981c77bb5cdd193d8190c1d
I0429 08:47:58.146023 1 gather.go:177] gatherer "clusterconfig" function "schedulers" took 13.718023ms to process 1 records
I0429 08:47:58.146117 1 tasks_processing.go:74] worker 57 stopped.
I0429 08:47:58.146336 1 recorder.go:75] Recording config/clusteroperator/imageregistry.operator.openshift.io/imagepruner/cluster with fingerprint=e09b05f2f754381c71de315de6d7982054109a4e8834a8d93dfeb1b3066962d2
I0429 08:47:58.146352 1 gather.go:177] gatherer "clusterconfig" function "image_pruners" took 13.923392ms to process 1 records
I0429 08:47:58.146444 1 tasks_processing.go:74] worker 38 stopped.
I0429 08:47:58.146569 1 recorder.go:75] Recording config/ingress with fingerprint=0dd7b8f636d19c10e4eaceea142971edce70018dfabd8eb7c7599812be168537
I0429 08:47:58.146577 1 gather.go:177] gatherer "clusterconfig" function "ingress" took 12.827238ms to process 1 records
E0429 08:47:58.146587 1 gather.go:140] gatherer "clusterconfig" function "machines" failed with the error: machines.machine.openshift.io is forbidden: User "system:serviceaccount:openshift-insights:gather" cannot list resource "machines" in API group "machine.openshift.io" at the cluster scope
I0429 08:47:58.146596 1 gather.go:177] gatherer "clusterconfig" function "machines" took 13.174985ms to process 0 records
I0429 08:47:58.146669 1 tasks_processing.go:74] worker 20 stopped.
I0429 08:47:58.146679 1 recorder.go:75] Recording config/featuregate with fingerprint=1ab94a9e4e384027b20610f79bb3208a339c48e83cdb4735c33433ab847018fd
I0429 08:47:58.146685 1 tasks_processing.go:74] worker 44 stopped.
I0429 08:47:58.146685 1 gather.go:177] gatherer "clusterconfig" function "feature_gates" took 12.963862ms to process 1 records
I0429 08:47:58.146743 1 recorder.go:75] Recording config/proxy with fingerprint=80780f05afeaabad0fb7bf4d49e4ea1f37c50d530d3f96f0fc8ca165629be328
I0429 08:47:58.146752 1 gather.go:177] gatherer "clusterconfig" function "proxies" took 13.265582ms to process 1 records
I0429 08:47:58.146755 1 tasks_processing.go:74] worker 37 stopped.
I0429 08:47:58.148731 1 tasks_processing.go:74] worker 52 stopped.
I0429 08:47:58.148749 1 gather.go:177] gatherer "clusterconfig" function "certificate_signing_requests" took 15.757237ms to process 0 records
W0429 08:47:58.149951 1 gather_dvo_metrics.go:210] Failed to read the DVO metrics. Trying again.
I0429 08:47:58.152646 1 tasks_processing.go:74] worker 28 stopped.
I0429 08:47:58.152658 1 gather.go:177] gatherer "clusterconfig" function "openstack_dataplanenodesets" took 19.539813ms to process 0 records
I0429 08:47:58.152881 1 tasks_processing.go:74] worker 31 stopped.
I0429 08:47:58.153018 1 recorder.go:75] Recording config/authentication with fingerprint=81c2a38847989a0621d47c9d77c80bcabf455978693497f463378da124329a19
I0429 08:47:58.153029 1 gather.go:177] gatherer "clusterconfig" function "authentication" took 19.675604ms to process 1 records
I0429 08:47:58.160360 1 tasks_processing.go:74] worker 6 stopped.
I0429 08:47:58.160372 1 gather.go:177] gatherer "clusterconfig" function "openshift_logging" took 26.376177ms to process 0 records
I0429 08:47:58.160380 1 gather.go:177] gatherer "clusterconfig" function "cost_management_metrics_configs" took 26.453477ms to process 0 records
I0429 08:47:58.160384 1 gather.go:177] gatherer "clusterconfig" function "sap_pods" took 26.882677ms to process 0 records
I0429 08:47:58.160388 1 gather.go:177] gatherer "clusterconfig" function "lokistack" took 27.443075ms to process 0 records
I0429 08:47:58.160391 1 gather.go:177] gatherer "clusterconfig" function "jaegers" took 26.3375ms to process 0 records
I0429 08:47:58.160396 1 tasks_processing.go:74] worker 34 stopped.
I0429 08:47:58.160397 1 tasks_processing.go:74] worker 1 stopped.
I0429 08:47:58.160399 1 tasks_processing.go:74] worker 5 stopped.
I0429 08:47:58.160403 1 tasks_processing.go:74] worker 2 stopped.
I0429 08:47:58.160478 1 tasks_processing.go:74] worker 21 stopped.
I0429 08:47:58.160491 1 gather.go:177] gatherer "clusterconfig" function "container_runtime_configs" took 24.837706ms to process 0 records
I0429 08:47:58.160499 1 gather.go:177] gatherer "clusterconfig" function "machine_sets" took 25.65113ms to process 0 records
E0429 08:47:58.160507 1 gather.go:140] gatherer "clusterconfig" function "machine_healthchecks" failed with the error: machinehealthchecks.machine.openshift.io is forbidden: User "system:serviceaccount:openshift-insights:gather" cannot list resource "machinehealthchecks" in API group "machine.openshift.io" at the cluster scope
I0429 08:47:58.160510 1 tasks_processing.go:74] worker 16 stopped.
I0429 08:47:58.160514 1 gather.go:177] gatherer "clusterconfig" function "machine_healthchecks" took 27.86487ms to process 0 records
I0429 08:47:58.160521 1 tasks_processing.go:74] worker 19 stopped.
I0429 08:47:58.160531 1 gather_logs.go:145] no pods in namespace were found
I0429 08:47:58.160539 1 tasks_processing.go:74] worker 12 stopped.
I0429 08:47:58.160545 1 gather.go:177] gatherer "clusterconfig" function "qemu_kubevirt_launcher_logs" took 28.329002ms to process 0 records
I0429 08:47:58.160634 1 tasks_processing.go:74] worker 54 stopped.
I0429 08:47:58.161064 1 recorder.go:75] Recording config/infrastructure with fingerprint=e2f4175e3336d8f9e2074ac9c6c4b4fd0e31b104616b2ede9f1dd649f6a69593
I0429 08:47:58.161076 1 gather.go:177] gatherer "clusterconfig" function "infrastructures" took 28.473829ms to process 1 records
I0429 08:47:58.173549 1 tasks_processing.go:74] worker 14 stopped.
I0429 08:47:58.173567 1 gather.go:177] gatherer "clusterconfig" function "openstack_controlplanes" took 37.634631ms to process 0 records
I0429 08:47:58.173575 1 gather.go:177] gatherer "clusterconfig" function "nodenetworkstates" took 37.654564ms to process 0 records
I0429 08:47:58.173582 1 tasks_processing.go:74] worker 45 stopped.
I0429 08:47:58.173954 1 tasks_processing.go:74] worker 17 stopped.
I0429 08:47:58.173976 1 gather.go:177] gatherer "clusterconfig" function "monitoring_persistent_volumes" took 41.655371ms to process 0 records
I0429 08:47:58.174345 1 tasks_processing.go:74] worker 3 stopped.
E0429 08:47:58.174423 1 gather.go:140] gatherer "clusterconfig" function "support_secret" failed with the error: secrets "support" not found
I0429 08:47:58.174440 1 gather.go:177] gatherer "clusterconfig" function "support_secret" took 40.256941ms to process 0 records
I0429 08:47:58.174510 1 tasks_processing.go:74] worker 40 stopped.
I0429 08:47:58.174669 1 recorder.go:75] Recording config/image with fingerprint=9c66af9db9f7bc9bc04bc3666d8fc5ad1b622ddfda134247031ad4a45512f1af
I0429 08:47:58.174691 1 gather.go:177] gatherer "clusterconfig" function "image" took 39.858801ms to process 1 records
I0429 08:47:58.174706 1 gather.go:177] gatherer "clusterconfig" function "node_logs" took 39.837504ms to process 0 records
I0429 08:47:58.174742 1 tasks_processing.go:74] worker 15 stopped.
I0429 08:47:58.174813 1 tasks_processing.go:74] worker 24 stopped.
I0429 08:47:58.174964 1 recorder.go:75] Recording config/network with fingerprint=f1011358a9c5089c7de0926a223e452c12225406a265275b729e606840f35d72
I0429 08:47:58.175018 1 gather.go:177] gatherer "clusterconfig" function "networks" took 40.589122ms to process 1 records
I0429 08:47:58.175178 1 recorder.go:75] Recording config/pdbs/openshift-image-registry/image-registry with fingerprint=2a7e3fd0c2cc4a12d32eeb897b4aaedaf3f2f96854a13ba61679e7b06d31e452
I0429 08:47:58.175202 1 tasks_processing.go:74] worker 13 stopped.
I0429 08:47:58.175225 1 recorder.go:75] Recording config/pdbs/openshift-ingress/router-default with fingerprint=592313cbc800771afd7ea74e5536bfa5b90744a7fc6fa485a476a1dfbcbef0a9
I0429 08:47:58.175325 1 recorder.go:75] Recording config/pdbs/openshift-operator-lifecycle-manager/packageserver-pdb with fingerprint=6e7d7cf4a8d66828aa3faee14d062aa5fa92bea3479463fd8b0a3c4efe0fd351
I0429 08:47:58.175350 1 gather.go:177] gatherer "clusterconfig" function "pdbs" took 40.143746ms to process 3 records
I0429 08:47:58.175429 1 tasks_processing.go:74] worker 59 stopped.
I0429 08:47:58.176153 1 recorder.go:75] Recording config/oauth with fingerprint=137df655c0ee09e4b75aefd8479c2b51080b0ee228d5df673b3e98eccc4dea83
I0429 08:47:58.176174 1 gather.go:177] gatherer "clusterconfig" function "oauths" took 41.487291ms to process 1 records
I0429 08:47:58.176269 1 tasks_processing.go:74] worker 39 stopped.
I0429 08:47:58.176656 1 recorder.go:75] Recording config/crd/volumesnapshots.snapshot.storage.k8s.io with fingerprint=5212b6d4b3c8e2915c9681963c7d28a26983d150c8a5ab192830b85b538c910e
I0429 08:47:58.176863 1 recorder.go:75] Recording config/crd/volumesnapshotcontents.snapshot.storage.k8s.io with fingerprint=b344f212b80c5619b7146b41c43743a9e9c161c2789ac59999a80364ae695eca
I0429 08:47:58.176872 1 gather.go:177] gatherer "clusterconfig" function "crds" took 43.533493ms to process 2 records
I0429 08:47:58.176956 1 tasks_processing.go:74] worker 58 stopped.
I0429 08:47:58.177151 1 recorder.go:75] Recording config/clusteroperator/imageregistry.operator.openshift.io/config/cluster with fingerprint=a76bd4d92db8905d211e33724b81e70aa2fb3ded34746daa8a3831f941cf4af8
I0429 08:47:58.177160 1 gather.go:177] gatherer "clusterconfig" function "image_registries" took 44.547258ms to process 1 records
I0429 08:47:58.177174 1 recorder.go:75] Recording config/namespaces_with_overlapping_uids with fingerprint=4f53cda18c2baa0c0354bb5f9a3ecbe5ed12ab4d8e11ba873c2f11161202b945
I0429 08:47:58.177182 1 gather.go:177] gatherer "clusterconfig" function "overlapping_namespace_uids" took 43.214635ms to process 1 records
I0429 08:47:58.177189 1 tasks_processing.go:74] worker 42 stopped.
I0429 08:47:58.177527 1 tasks_processing.go:74] worker 10 stopped.
I0429 08:47:58.177614 1 recorder.go:75] Recording config/storage/storageclasses/gp2-csi with fingerprint=41f136997af7ad03483ef725af954681ec7111752a48b9fcb571062727e678f3
I0429 08:47:58.177632 1 recorder.go:75] Recording config/storage/storageclasses/gp3-csi with fingerprint=bd1e56d28fa682554cd5155816a64e00f04218985b0cc4cf1486a277d28b8f7a
I0429 08:47:58.177639 1 gather.go:177] gatherer "clusterconfig" function "storage_classes" took 41.807152ms to process 2 records
I0429 08:47:58.177953 1 tasks_processing.go:74] worker 61 stopped.
I0429 08:47:58.177970 1 gather.go:177] gatherer "clusterconfig" function "openshift_machine_api_events" took 45.884363ms to process 0 records
I0429 08:47:58.178121 1 tasks_processing.go:74] worker 51 stopped.
I0429 08:47:58.178179 1 recorder.go:75] Recording config/mutatingwebhookconfigurations/aws-pod-identity with fingerprint=43fe10528e6d35027532249a177c3b03131ee576f38a5908c11bf1f06e82eda4
I0429 08:47:58.178228 1 recorder.go:75] Recording config/mutatingwebhookconfigurations/sre-podimagespec-mutation with fingerprint=d16c73fa470efffd3258c4990ddc2b60b60b8862d2835039b528dbaf23f5a4d5
I0429 08:47:58.178260 1 recorder.go:75] Recording config/mutatingwebhookconfigurations/sre-service-mutation with fingerprint=2fb3a00a5b8a525f8694a5372d16d04d0c7ffdbd54a194afce1259155076434f
I0429 08:47:58.178271 1 gather.go:177] gatherer "clusterconfig" function "mutating_webhook_configurations" took 42.261437ms to process 3 records
I0429 08:47:58.178591 1 tasks_processing.go:74] worker 22 stopped.
I0429 08:47:58.178758 1 recorder.go:75] Recording config/validatingwebhookconfigurations/multus.openshift.io with fingerprint=26d7a115ff1aeaf257d03275184c31aad60ddd0d9abe439f40a9287ca4c9fa5d
I0429 08:47:58.178891 1 recorder.go:75] Recording config/validatingwebhookconfigurations/network-node-identity.openshift.io with fingerprint=d2d31da554fc2f4176cedeb161826e017c45ec3d8bc3d30839042990200bec08
I0429 08:47:58.178919 1 recorder.go:75] Recording config/validatingwebhookconfigurations/performance-addon-operator with fingerprint=cf863e31f43cc8edf3c6b1562a165636d5634ec2d0f938578fa0c2add5c9643a
I0429 08:47:58.178958 1 recorder.go:75] Recording config/validatingwebhookconfigurations/sre-clusterrolebindings-validation with fingerprint=53121559989f33de3936cc6a4ed8acb5b152559ff5fbe611b04e2688ad871d0c
I0429 08:47:58.178995 1 recorder.go:75] Recording config/validatingwebhookconfigurations/sre-clusterroles-validation with fingerprint=dff54154cc78430dd24ac71e471d0cffbce82d04ad55fd503e753c00bb7f9b23
I0429 08:47:58.179034 1 recorder.go:75] Recording config/validatingwebhookconfigurations/sre-ingress-config-validation with fingerprint=81ec7102be6ccaec727b6d54c45ee443d0b06066a91e0b952a513f475ad15c88
I0429 08:47:58.179080 1 recorder.go:75] Recording config/validatingwebhookconfigurations/sre-network-operator-validation with fingerprint=435aed67622e39cc973e1141aa56d2733f075515e6902cc0fd55aba856f63b4a
I0429 08:47:58.179133 1 recorder.go:75] Recording config/validatingwebhookconfigurations/sre-regular-user-validation with fingerprint=b1de90035528390a91745d630b5c2913bb1eacce2308f9e94c94833c1f6789a1
I0429 08:47:58.179171 1 recorder.go:75] Recording config/validatingwebhookconfigurations/sre-scc-validation with fingerprint=6d8fe9b7564135de881e59c05ba1ef20adb702fbae8ef3b18135f2829b0149ec
I0429 08:47:58.179209 1 recorder.go:75] Recording config/validatingwebhookconfigurations/sre-serviceaccount-validation with fingerprint=3a54665f17c4134636281efe56e4e9554ac49996087d78d9b9a375c4b016ba1a
I0429 08:47:58.179249 1 recorder.go:75] Recording config/validatingwebhookconfigurations/sre-techpreviewnoupgrade-validation with fingerprint=096d608b8375dbd4f72c51800511f32415561c159d228ff0ae75046170d5b10c
I0429 08:47:58.179261 1 gather.go:177] gatherer "clusterconfig" function "validating_webhook_configurations" took 42.907698ms to process 11 records
I0429 08:47:58.181209 1 tasks_processing.go:74] worker 49 stopped.
I0429 08:47:58.181674 1 recorder.go:75] Recording config/node/ip-10-0-0-139.ec2.internal with fingerprint=822785479ce22cbb3b7604d2d87aabcd947dcfa72a62cf3403be0363bd074d0f
I0429 08:47:58.181947 1 recorder.go:75] Recording config/node/ip-10-0-1-234.ec2.internal with fingerprint=a250f040481531c45567c6f7dd1e34ba963120480156b1707ab74e764b80c429
I0429 08:47:58.182090 1 recorder.go:75] Recording config/node/ip-10-0-2-209.ec2.internal with fingerprint=51fb7e9a26079183b91f9d0b41444b860bd8f7eae6223cd03bd35457c7a549a4
I0429 08:47:58.182105 1 gather.go:177] gatherer "clusterconfig" function "nodes" took 47.783252ms to process 3 records
I0429 08:47:58.186785 1 sca.go:136] Pulling SCA certificates from https://api.openshift.com/api/accounts_mgmt/v1/entitlement_certificates. Next check is in 8h0m0s
I0429 08:47:58.186787 1 cluster_transfer.go:83] checking the availability of cluster transfer. Next check is in 12h0m0s
W0429 08:47:58.186979 1 operator.go:288] started
I0429 08:47:58.187012 1 base_controller.go:76] Waiting for caches to sync for LoggingSyncer
I0429 08:47:58.189888 1 tasks_processing.go:74] worker 33 stopped.
I0429 08:47:58.189907 1 gather.go:177] gatherer "clusterconfig" function "aggregated_monitoring_cr_names" took 56.875063ms to process 0 records
I0429 08:47:58.196015 1 tasks_processing.go:74] worker 7 stopped.
I0429 08:47:58.196277 1 recorder.go:75] Recording cluster-scoped-resources/rbac.authorization.k8s.io/clusterroles/admin with fingerprint=3b75a1ebb6ae27d0a66a375cba5835941cd1ce6c42941742d22723805178041a
I0429 08:47:58.196427 1 recorder.go:75] Recording cluster-scoped-resources/rbac.authorization.k8s.io/clusterroles/edit with fingerprint=ee130616b6b98333966c30dda30d5ceb816c53922685bbb06fafb05c6901b27a
I0429 08:47:58.196442 1 gather.go:177] gatherer "clusterconfig" function "clusterroles" took 62.058181ms to process 2 records
I0429 08:47:58.196479 1 gather.go:177] gatherer "clusterconfig" function "number_of_pods_and_netnamespaces_with_sdn_annotations" took 61.335644ms to process 0 records
I0429 08:47:58.196497 1 tasks_processing.go:74] worker 41 stopped.
I0429 08:47:58.197637 1 tasks_processing.go:74] worker 26 stopped.
I0429 08:47:58.198939 1 recorder.go:75] Recording config/pod/openshift-multus/multus-7xbbs with fingerprint=4971c48fc43882f868d5fa43e2e6069ad884ad22b50bd5a82f9888e98a1feb40
I0429 08:47:58.199041 1 recorder.go:75] Recording config/pod/openshift-multus/multus-pj9tl with fingerprint=8e9b0796383d97fc4a3636ce0153f7dc99451ef8f712974993c640a2a9797f10
I0429 08:47:58.199136 1 recorder.go:75] Recording config/pod/openshift-multus/multus-x8xdx with fingerprint=efbfede8fbf74846237605989750ff483c19142cbb35f9aa5971acf184cae0d7
I0429 08:47:58.199370 1 recorder.go:75] Recording config/pod/openshift-ovn-kubernetes/ovnkube-node-dmqw5 with fingerprint=391c09fe411b28661c55f50202f83e261f8cbaaa4f7e43af345191b9279ca175
I0429 08:47:58.199413 1 recorder.go:75] Recording config/running_containers with fingerprint=539991b77eebf556ed4345db17d38c291c39907ecfeb2bfaf3cde32daa00732f
I0429 08:47:58.199422 1 gather.go:177] gatherer "clusterconfig" function "container_images" took 63.491878ms to process 5 records
I0429 08:47:58.199434 1 gather.go:177] gatherer "clusterconfig" function "machine_config_pools" took 65.853472ms to process 0 records
I0429 08:47:58.199440 1 tasks_processing.go:74] worker 29 stopped.
I0429 08:47:58.206097 1 controller.go:212] Source scaController *sca.Controller is not ready
I0429 08:47:58.206112 1 controller.go:212] Source clusterTransferController *clustertransfer.Controller is not ready
I0429 08:47:58.206117 1 controller.go:212] Source periodic-clusterconfig *controllerstatus.Simple is not ready
I0429 08:47:58.206122 1 controller.go:212] Source periodic-conditional *controllerstatus.Simple is not ready
I0429 08:47:58.206125 1 controller.go:212] Source periodic-workloads *controllerstatus.Simple is not ready
I0429 08:47:58.206146 1 controller.go:489] The operator is still being initialized
I0429 08:47:58.206153 1 controller.go:512] The operator is healthy
I0429 08:47:58.207845 1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0429 08:47:58.207862 1 shared_informer.go:320] Caches are synced for RequestHeaderAuthRequestController
I0429 08:47:58.207883 1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I0429 08:47:58.211025 1 base_controller.go:82] Caches are synced for ConfigController
I0429 08:47:58.211042 1 base_controller.go:119] Starting #1 worker of ConfigController controller ...
E0429 08:47:58.212319 1 cluster_transfer.go:95] failed to pull cluster transfer: unable to retrieve cluster transfer data from https://api.openshift.com/api/accounts_mgmt/v1/cluster_transfers/: Get "https://api.openshift.com/api/accounts_mgmt/v1/cluster_transfers/?search=cluster_uuid+is+%27130d821b-04ca-4408-bb13-d6a33fecb1a9%27+and+status+is+%27accepted%27": dial tcp: lookup api.openshift.com on 172.30.0.10:53: read udp 10.129.0.11:32773->172.30.0.10:53: read: connection refused
I0429 08:47:58.212331 1 controllerstatus.go:80] name=clusterTransferController healthy=true reason=Disconnected message=failed to pull cluster transfer: unable to retrieve cluster transfer data from https://api.openshift.com/api/accounts_mgmt/v1/cluster_transfers/: Get "https://api.openshift.com/api/accounts_mgmt/v1/cluster_transfers/?search=cluster_uuid+is+%27130d821b-04ca-4408-bb13-d6a33fecb1a9%27+and+status+is+%27accepted%27": dial tcp: lookup api.openshift.com on 172.30.0.10:53: read udp 10.129.0.11:32773->172.30.0.10:53: read: connection refused
I0429 08:47:58.216124 1 prometheus_rules.go:88] Prometheus rules successfully created
I0429 08:47:58.227482 1 configmapobserver.go:84] configmaps "insights-config" not found
I0429 08:47:58.227999 1 tasks_processing.go:74] worker 50 stopped.
E0429 08:47:58.228017 1 gather.go:140] gatherer "clusterconfig" function "config_maps" failed with the error: configmaps "cluster-monitoring-config" not found
E0429 08:47:58.228030 1 gather.go:140] gatherer "clusterconfig" function "config_maps" failed with the error: configmaps "gateway-mode-config" not found
E0429 08:47:58.228036 1 gather.go:140] gatherer "clusterconfig" function "config_maps" failed with the error: configmaps "insights-config" not found
I0429 08:47:58.228048 1 recorder.go:75] Recording config/configmaps/openshift-config/installer-images/images.json with fingerprint=26b6661162b099a0f5a279859b4f46c867929a79d9a4a41fde4be4e6fe138018
I0429 08:47:58.228087 1 recorder.go:75] Recording config/configmaps/openshift-config/kube-root-ca.crt/ca.crt with fingerprint=d476c7d3f5b104863f08f481b1264dcc68cc272ecefb0ecb709b18a6afab034d
I0429 08:47:58.228097 1 recorder.go:75] Recording config/configmaps/openshift-config/openshift-install/invoker with fingerprint=76b482f683cd3ef9da02debac5b26080a5aeb06ff768ee5c21117514dff29d8a
I0429 08:47:58.228103 1 recorder.go:75] Recording config/configmaps/openshift-config/openshift-install/version with fingerprint=c93090eb0d2a4736885abeb79c91680cfd01fda46464f83456b085d4dc8239f0
I0429 08:47:58.228109 1 recorder.go:75] Recording config/configmaps/openshift-config/openshift-service-ca.crt/service-ca.crt with fingerprint=e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
I0429 08:47:58.228170 1 recorder.go:75] Recording config/configmaps/openshift-config/rosa-brand-logo/rosa-brand-logo.svg with fingerprint=6ed8ca4dd7a8eee7249182bc006e9649ce84d76c551ddfaaa33e55d8c4cc1ed0
I0429 08:47:58.228180 1 recorder.go:75] Recording config/configmaps/kube-system/cluster-config-v1/install-config with fingerprint=ab3811c6b83fd7b8e920094cfa3080d1b4ee3c35ec4c8379437b21d27bd6608d
I0429 08:47:58.228188 1 gather.go:177] gatherer "clusterconfig" function "config_maps" took 95.97206ms to process 7 records
I0429 08:47:58.253679 1 tasks_processing.go:74] worker 63 stopped.
I0429 08:47:58.253929 1 recorder.go:75] Recording config/version with fingerprint=d34611fafb43285bf4b56abf83a6e59e8dffcbc3515a01e6b3cad4111ce76bda
I0429 08:47:58.253941 1 recorder.go:75] Recording config/id with fingerprint=8ec22cb1e96d397cc26c23a1ccbd2603e874c21235528f85daa0ca11018dec28
I0429 08:47:58.253947 1 gather.go:177] gatherer "clusterconfig" function "version" took 117.768914ms to process 2 records
I0429 08:47:58.263219 1 tasks_processing.go:74] worker 8 stopped.
I0429 08:47:58.263244 1 recorder.go:75] Recording aggregated/unused_machine_configs_count with fingerprint=4bfc9fa984e5dfcd45848faaf05269de7619bf42edf9f781751af5ee05c1a499
I0429 08:47:58.263252 1 gather.go:177] gatherer "clusterconfig" function "machine_configs" took 129.272127ms to process 1 records
I0429 08:47:58.271437 1 requests.go:205] Asking for SCA certificate with "{"arch": ["x86_64"]}" payload
W0429 08:47:58.274713 1 sca.go:161] Failed to pull SCA certs from https://api.openshift.com/api/accounts_mgmt/v1/entitlement_certificates: unable to retrieve SCA certs data from https://api.openshift.com/api/accounts_mgmt/v1/entitlement_certificates: Post "https://api.openshift.com/api/accounts_mgmt/v1/entitlement_certificates": dial tcp: lookup api.openshift.com on 172.30.0.10:53: read udp 10.129.0.11:42638->172.30.0.10:53: read: connection refused
I0429 08:47:58.274725 1 controllerstatus.go:80] name=scaController healthy=true reason=NonHTTPError message=Failed to pull SCA certs from https://api.openshift.com/api/accounts_mgmt/v1/entitlement_certificates: unable to retrieve SCA certs data from https://api.openshift.com/api/accounts_mgmt/v1/entitlement_certificates: Post "https://api.openshift.com/api/accounts_mgmt/v1/entitlement_certificates": dial tcp: lookup api.openshift.com on 172.30.0.10:53: read udp 10.129.0.11:42638->172.30.0.10:53: read: connection refused
I0429 08:47:58.287180 1 base_controller.go:82] Caches are synced for LoggingSyncer
I0429 08:47:58.287191 1 base_controller.go:119] Starting #1 worker of LoggingSyncer controller ...
I0429 08:47:58.293118 1 tasks_processing.go:74] worker 60 stopped.
E0429 08:47:58.293134 1 gather.go:140] gatherer "clusterconfig" function "ingress_certificates" failed with the error: failed to get secret 'router-certs-default' in namespace 'openshift-ingress': secrets "router-certs-default" not found
E0429 08:47:58.293140 1 gather.go:140] gatherer "clusterconfig" function "ingress_certificates" failed with the error: failed to get secret '2pvn45h9mslgf17m9qq3vq0k3fiar789-primary-cert-bundle-secret' in namespace 'openshift-ingress-operator': secrets "2pvn45h9mslgf17m9qq3vq0k3fiar789-primary-cert-bundle-secret" not found
I0429 08:47:58.293184 1 recorder.go:75] Recording aggregated/ingress_controllers_certs with fingerprint=5aed60b4fbe2e6551c364808273e6ab92ed49e02c1729e2124998367b665dd98
I0429 08:47:58.293197 1 gather.go:177] gatherer "clusterconfig" function "ingress_certificates" took 161.105918ms to process 1 records
W0429 08:47:59.149304 1 gather_dvo_metrics.go:210] Failed to read the DVO metrics. Trying again.
I0429 08:47:59.602741 1 gather_cluster_operators.go:184] Unable to get configs.samples.operator.openshift.io resource due to: configs.samples.operator.openshift.io "cluster" not found
W0429 08:48:00.150384 1 gather_dvo_metrics.go:210] Failed to read the DVO metrics. Trying again.
I0429 08:48:00.206322 1 tasks_processing.go:74] worker 18 stopped.
I0429 08:48:00.206372 1 recorder.go:75] Recording config/clusteroperator/console with fingerprint=05665cb000b13dd51caf9a173e6e48ed0560271d087e40f678ac598e59e000a9
I0429 08:48:00.206405 1 recorder.go:75] Recording config/clusteroperator/csi-snapshot-controller with fingerprint=c941f5862357f64fc1d8797356944575c66e17a3babbef0cdf4907970f6aacc5
I0429 08:48:00.206474 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/csisnapshotcontroller/cluster with fingerprint=5adc514f4b63e2f1ecc68bf6f9c0af70c5eea04522a49524e102721b1c41f80e
I0429 08:48:00.206503 1 recorder.go:75] Recording config/clusteroperator/dns with fingerprint=44210e415ffddbf0b89aa1bbbb26567a3435b5dbae236d44cfec248161520732
I0429 08:48:00.206521 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/dns/default with fingerprint=9e7b4ce029030d3d8c3b49af92c556acdcc415000b40d3f969dbdc42c432b47f
I0429 08:48:00.206543 1 recorder.go:75] Recording config/clusteroperator/image-registry with fingerprint=5376f73e685695d518d6c7a782f2e1b54d1aefc08abaebdb89e1a11d4cecbe8b
I0429 08:48:00.206576 1 recorder.go:75] Recording config/clusteroperator/ingress with fingerprint=3c79a3b791ee2211ccede830b6e088f045acd90d71fd7e32409063589a5a3878
I0429 08:48:00.206599 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/ingresscontroller/openshift-ingress-operator/default with fingerprint=181c83aeb66924a8a1239d1f8a1f28954995cd8f64483b4762724e811b095a83
I0429 08:48:00.206614 1 recorder.go:75] Recording config/clusteroperator/insights with fingerprint=e1527fb29fbed1f3d5acfb3db817650c8f80a36360b3ebb277317c14a3c2d55f
I0429 08:48:00.206631 1 recorder.go:75] Recording config/clusteroperator/kube-apiserver with fingerprint=61724f82c0a12d95457b43fbfdd0a00fabd0cd0663421701404b7acf820cf6eb
I0429 08:48:00.206640 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/kubeapiserver/cluster with fingerprint=51503bf0b784fcf65ea46bcaf1f72ac1a5c4d5dc211934f18f27871efed05762
I0429 08:48:00.206657 1 recorder.go:75] Recording config/clusteroperator/kube-controller-manager with fingerprint=a496376ec44d5e2716a9ef5d1b4cee2e529dd738249723cc97a6130d3b1ea1f5
I0429 08:48:00.206668 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/kubecontrollermanager/cluster with fingerprint=ce90c0d4f367d7da085074268031798382ae7c54fdcb0a21f15a4818fe308c11
I0429 08:48:00.206683 1 recorder.go:75] Recording config/clusteroperator/kube-scheduler with fingerprint=f5e8de0df225690962b784829c41248fe1dc2389da2864c6921f10f5b46c9807
I0429 08:48:00.206693 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/kubescheduler/cluster with fingerprint=f2940fb9fd20c19951dfc295eb363b7fba0c505f5ae61f01967a063099e6b60a
I0429 08:48:00.206706 1 recorder.go:75] Recording config/clusteroperator/kube-storage-version-migrator with fingerprint=e16303401cee04718e5863b804ec442d17d266b37ea03b2e4cbf941023fe01e6
I0429 08:48:00.206715 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/kubestorageversionmigrator/cluster with fingerprint=9351181aa7e6ada41ef581ab31e13516c6b934cc95710154bafb2eb222cb58db
I0429 08:48:00.206730 1 recorder.go:75] Recording config/clusteroperator/monitoring with fingerprint=65a9e23ac53010a70b96360664354e5ecfbf27e144956e04eaaa546c18aea400
I0429 08:48:00.206869 1 recorder.go:75] Recording config/clusteroperator/network with fingerprint=a4de104f02154e6f987ae62f2330ee5ff9428dd721f93749454a5a81d5396cd4
I0429 08:48:00.206881 1 recorder.go:75] Recording config/clusteroperator/network.operator.openshift.io/operatorpki/openshift-ovn-kubernetes/ovn with fingerprint=626a89d20e0deaed5b6dfb533acfe65f4bb1618bd200a703b62e60c5d16d94ab
I0429 08:48:00.206888 1 recorder.go:75] Recording config/clusteroperator/network.operator.openshift.io/operatorpki/openshift-ovn-kubernetes/signer with fingerprint=90410b16914712b85b3c4578716ad8c0ae072e688f4cd1e022bf76f20da3506d
I0429 08:48:00.206908 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/network/cluster with fingerprint=a93d15eaecb455a0e40ecb2826eeecc1533899204ddd3c3921d15ab70af7ae75
I0429 08:48:00.206930 1 recorder.go:75] Recording config/clusteroperator/node-tuning with fingerprint=c9275664f1199408a59fd16454f5e70ca206ca581fabb23d19cf97cdebeb0ee0
I0429 08:48:00.206962 1 recorder.go:75] Recording config/clusteroperator/openshift-apiserver with fingerprint=625988d92e67e83fc9b5a8ce60c9ce32d04c53f646c0951adf4b242255641d0c
I0429 08:48:00.206973 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/openshiftapiserver/cluster with fingerprint=e712e6cf27339b441e4ed1f4cde91dbde7e952698ba93407e4457db63a4a4c76
I0429 08:48:00.206989 1 recorder.go:75] Recording config/clusteroperator/openshift-controller-manager with fingerprint=fffac7f8d2a8f9f3dc63725ee4498ac2ecb3f32c0323537a9277b60dfd385f2a
I0429 08:48:00.206998 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/openshiftcontrollermanager/cluster with fingerprint=d71a0f4672f9b45d9fc8293bf1687afc650fd28d32e2e30de27523fe7b4eadf7
I0429 08:48:00.207011 1 recorder.go:75] Recording config/clusteroperator/openshift-samples with fingerprint=8dbfc2626c8df983f76b48e7367597104da3bdbc3834e094638a4f9b49ba9e10
I0429 08:48:00.207027 1 recorder.go:75] Recording config/clusteroperator/operator-lifecycle-manager with fingerprint=0233a8f2808013764c8d67585d1585704296fe4bb5896794851454738aae6708
I0429 08:48:00.207042 1 recorder.go:75] Recording config/clusteroperator/operator-lifecycle-manager-catalog with fingerprint=be13dcb85b3ab86fbb3912eb0d7f2377ef52736dcdc8be46e964f617c7a03114
I0429 08:48:00.207055 1 recorder.go:75] Recording config/clusteroperator/operator-lifecycle-manager-packageserver with fingerprint=d386c2799a5e94862f7b4db31f736c119d316247019bf7798d9e9a7948e0e25e
I0429 08:48:00.207077 1 recorder.go:75] Recording config/clusteroperator/service-ca with fingerprint=8bf411503dcc217cb1cd019a5cb20ef3a1f95ae63f7e4f1277059cd41a50dd1b
I0429 08:48:00.207087 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/serviceca/cluster with fingerprint=812f7edc2cdb30e61e7f2b29454357a40b1a507a4b0c2b7729193b67f0e3b4aa
I0429 08:48:00.207115 1 recorder.go:75] Recording config/clusteroperator/storage with fingerprint=b76b98ba700c72368335daea9cee260429d7179d8d76602eeb2906fb730cdf46
I0429 08:48:00.207132 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/clustercsidriver/ebs.csi.aws.com with fingerprint=510064d6f6bcced87ab5bd2ddaff3d0edd7f93f4a4f7af2641f29fc53ffab21e
I0429 08:48:00.207141 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/storage/cluster with fingerprint=8e480f8c1ce1b39baac42d8ec780c57c2592929ae0c801b61ffad49ba13f33ad
I0429 08:48:00.207148 1 gather.go:177] gatherer "clusterconfig" function "operators" took 2.074296159s to process 36 records
W0429 08:48:01.150704 1 gather_dvo_metrics.go:210] Failed to read the DVO metrics. Trying again.
I0429 08:48:01.786527 1 gather_cluster_operator_pods_and_events.go:121] Found 37 pods with 81 containers I0429 08:48:01.786545 1 gather_cluster_operator_pods_and_events.go:235] Maximum buffer size: 310689 bytes I0429 08:48:01.786770 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for dns container dns-default-m669d pod in namespace openshift-dns (previous: false). I0429 08:48:02.018972 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for dns-default-m669d pod in namespace openshift-dns for failing operator dns (previous: false): "container \"dns\" in pod \"dns-default-m669d\" is waiting to start: ContainerCreating" I0429 08:48:02.018989 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"dns\" in pod \"dns-default-m669d\" is waiting to start: ContainerCreating" I0429 08:48:02.018997 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for kube-rbac-proxy container dns-default-m669d pod in namespace openshift-dns (previous: false). W0429 08:48:02.149732 1 gather_dvo_metrics.go:210] Failed to read the DVO metrics. Trying again. I0429 08:48:02.191047 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for dns-default-m669d pod in namespace openshift-dns for failing operator kube-rbac-proxy (previous: false): "container \"kube-rbac-proxy\" in pod \"dns-default-m669d\" is waiting to start: ContainerCreating" I0429 08:48:02.191064 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"kube-rbac-proxy\" in pod \"dns-default-m669d\" is waiting to start: ContainerCreating" I0429 08:48:02.191076 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for dns container dns-default-sw2m8 pod in namespace openshift-dns (previous: false). I0429 08:48:02.409331 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for dns-default-sw2m8 pod in namespace openshift-dns for failing operator dns (previous: false): "container \"dns\" in pod \"dns-default-sw2m8\" is waiting to start: ContainerCreating" I0429 08:48:02.409348 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"dns\" in pod \"dns-default-sw2m8\" is waiting to start: ContainerCreating" I0429 08:48:02.409358 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for kube-rbac-proxy container dns-default-sw2m8 pod in namespace openshift-dns (previous: false). I0429 08:48:02.589567 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for dns-default-sw2m8 pod in namespace openshift-dns for failing operator kube-rbac-proxy (previous: false): "container \"kube-rbac-proxy\" in pod \"dns-default-sw2m8\" is waiting to start: ContainerCreating" I0429 08:48:02.589584 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"kube-rbac-proxy\" in pod \"dns-default-sw2m8\" is waiting to start: ContainerCreating" I0429 08:48:02.589607 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for dns-node-resolver container node-resolver-4bw4z pod in namespace openshift-dns (previous: false). I0429 08:48:02.815250 1 gather_cluster_operator_pods_and_events.go:280] Error: "log buffer is empty" I0429 08:48:02.815270 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for dns-node-resolver container node-resolver-9scvq pod in namespace openshift-dns (previous: false). 
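The pair of entries "Found 37 pods with 81 containers" and "Maximum buffer size: 310689 bytes" suggests the log collector splits a fixed overall budget across the containers it found: 24 MiB divided by 81 containers is exactly 310689 bytes. That reading is an inference from the numbers, not something the log states, but the arithmetic is simple to sketch; the budget constant is an assumption.

package main

import "fmt"

func main() {
	// Assumed overall budget: 24 MiB / 81 containers reproduces the
	// 310689-byte limit logged above. The real constant may differ.
	const totalBudget = 24 * 1024 * 1024
	containers := 81

	perContainer := totalBudget / containers
	fmt.Printf("Maximum buffer size: %d bytes\n", perContainer) // prints 310689
}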
I0429 08:48:02.991042 1 gather_cluster_operator_pods_and_events.go:280] Error: "log buffer is empty" I0429 08:48:02.991062 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for dns-node-resolver container node-resolver-hsjlk pod in namespace openshift-dns (previous: false). W0429 08:48:03.150233 1 gather_dvo_metrics.go:210] Failed to read the DVO metrics. Trying again. W0429 08:48:03.150260 1 gather_dvo_metrics.go:117] Unable to read metrics from endpoint "http://deployment-validation-operator-metrics.openshift-deployment-validation-operator.svc:8383": DVO metrics service was not available within the 5s timeout: context deadline exceeded I0429 08:48:03.150276 1 tasks_processing.go:74] worker 43 stopped. E0429 08:48:03.150290 1 gather.go:140] gatherer "clusterconfig" function "dvo_metrics" failed with the error: DVO metrics service was not available within the 5s timeout: context deadline exceeded I0429 08:48:03.150304 1 recorder.go:75] Recording config/dvo_metrics with fingerprint=e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 W0429 08:48:03.150326 1 gather.go:155] issue recording gatherer "clusterconfig" function "dvo_metrics" result "config/dvo_metrics" because of the warning: warning: the record with the same fingerprint "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855" was already recorded at path "config/configmaps/openshift-config/openshift-service-ca.crt/service-ca.crt", recording another one with a different path "config/dvo_metrics" I0429 08:48:03.150344 1 gather.go:177] gatherer "clusterconfig" function "dvo_metrics" took 5.018325685s to process 1 records I0429 08:48:03.189981 1 gather_cluster_operator_pods_and_events.go:280] Error: "log buffer is empty" I0429 08:48:03.190034 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for registry container image-registry-5c956cbd7b-2fdmt pod in namespace openshift-image-registry (previous: false). I0429 08:48:03.388985 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for image-registry-5c956cbd7b-2fdmt pod in namespace openshift-image-registry for failing operator registry (previous: false): "container \"registry\" in pod \"image-registry-5c956cbd7b-2fdmt\" is waiting to start: ContainerCreating" I0429 08:48:03.389000 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"registry\" in pod \"image-registry-5c956cbd7b-2fdmt\" is waiting to start: ContainerCreating" I0429 08:48:03.389044 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for registry container image-registry-5c956cbd7b-jbjrd pod in namespace openshift-image-registry (previous: false). I0429 08:48:03.589905 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for image-registry-5c956cbd7b-jbjrd pod in namespace openshift-image-registry for failing operator registry (previous: false): "container \"registry\" in pod \"image-registry-5c956cbd7b-jbjrd\" is waiting to start: ContainerCreating" I0429 08:48:03.589924 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"registry\" in pod \"image-registry-5c956cbd7b-jbjrd\" is waiting to start: ContainerCreating" I0429 08:48:03.589969 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for registry container image-registry-646d459c6c-7hwz4 pod in namespace openshift-image-registry (previous: false). 
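The dvo_metrics gather above retries for a few seconds and then gives up against the endpoint quoted in the warning, surfacing the failure as "context deadline exceeded". A rough sketch of that kind of deadline-bounded fetch; the URL and the 5s timeout are copied from the log, while the function name, the retry-free shape, and the error wording are illustrative rather than the operator's actual code.

package main

import (
	"context"
	"fmt"
	"io"
	"net/http"
	"time"
)

// fetchDVOMetrics reads the metrics exposed by the deployment-validation-operator
// service, giving up after the supplied timeout. Illustrative sketch only; the
// real gatherer may append a path and retry before failing.
func fetchDVOMetrics(ctx context.Context, endpoint string, timeout time.Duration) ([]byte, error) {
	ctx, cancel := context.WithTimeout(ctx, timeout)
	defer cancel()

	req, err := http.NewRequestWithContext(ctx, http.MethodGet, endpoint, nil)
	if err != nil {
		return nil, err
	}
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		// A slow or absent service surfaces here as "context deadline exceeded".
		return nil, fmt.Errorf("DVO metrics service was not available within the %s timeout: %w", timeout, err)
	}
	defer resp.Body.Close()
	return io.ReadAll(resp.Body)
}

func main() {
	endpoint := "http://deployment-validation-operator-metrics.openshift-deployment-validation-operator.svc:8383"
	body, err := fetchDVOMetrics(context.Background(), endpoint, 5*time.Second)
	if err != nil {
		fmt.Println("gather failed:", err)
		return
	}
	fmt.Printf("read %d bytes of metrics\n", len(body))
}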
I0429 08:48:03.785665 1 gather_cluster_operator_pods_and_events.go:280] Error: "log buffer is empty" I0429 08:48:03.785686 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for node-ca container node-ca-95d9f pod in namespace openshift-image-registry (previous: false). I0429 08:48:03.989829 1 gather_cluster_operator_pods_and_events.go:280] Error: "log buffer is empty" I0429 08:48:03.989845 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for node-ca container node-ca-lcxhk pod in namespace openshift-image-registry (previous: false). I0429 08:48:04.190956 1 gather_cluster_operator_pods_and_events.go:280] Error: "log buffer is empty" I0429 08:48:04.190972 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for node-ca container node-ca-sk6nh pod in namespace openshift-image-registry (previous: false). I0429 08:48:04.390582 1 gather_cluster_operator_pods_and_events.go:280] Error: "log buffer is empty" I0429 08:48:04.390633 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for router container router-default-7d554d5549-fz8r8 pod in namespace openshift-ingress (previous: false). I0429 08:48:04.594217 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for router-default-7d554d5549-fz8r8 pod in namespace openshift-ingress for failing operator router (previous: false): "container \"router\" in pod \"router-default-7d554d5549-fz8r8\" is waiting to start: ContainerCreating" I0429 08:48:04.594231 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"router\" in pod \"router-default-7d554d5549-fz8r8\" is waiting to start: ContainerCreating" I0429 08:48:04.594263 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for router container router-default-bcd4b6f46-gthmx pod in namespace openshift-ingress (previous: false). I0429 08:48:04.800667 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for router-default-bcd4b6f46-gthmx pod in namespace openshift-ingress for failing operator router (previous: false): "container \"router\" in pod \"router-default-bcd4b6f46-gthmx\" is waiting to start: ContainerCreating" I0429 08:48:04.800683 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"router\" in pod \"router-default-bcd4b6f46-gthmx\" is waiting to start: ContainerCreating" I0429 08:48:04.800710 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for router container router-default-bcd4b6f46-htg5w pod in namespace openshift-ingress (previous: false). I0429 08:48:04.989728 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for router-default-bcd4b6f46-htg5w pod in namespace openshift-ingress for failing operator router (previous: false): "container \"router\" in pod \"router-default-bcd4b6f46-htg5w\" is waiting to start: ContainerCreating" I0429 08:48:04.989743 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"router\" in pod \"router-default-bcd4b6f46-htg5w\" is waiting to start: ContainerCreating" I0429 08:48:04.989756 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for serve-healthcheck-canary container ingress-canary-9pdnn pod in namespace openshift-ingress-canary (previous: false). 
I0429 08:48:05.190532 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for ingress-canary-9pdnn pod in namespace openshift-ingress-canary for failing operator serve-healthcheck-canary (previous: false): "container \"serve-healthcheck-canary\" in pod \"ingress-canary-9pdnn\" is waiting to start: ContainerCreating" I0429 08:48:05.190547 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"serve-healthcheck-canary\" in pod \"ingress-canary-9pdnn\" is waiting to start: ContainerCreating" I0429 08:48:05.190558 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for serve-healthcheck-canary container ingress-canary-mqspr pod in namespace openshift-ingress-canary (previous: false). I0429 08:48:05.390676 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for ingress-canary-mqspr pod in namespace openshift-ingress-canary for failing operator serve-healthcheck-canary (previous: false): "container \"serve-healthcheck-canary\" in pod \"ingress-canary-mqspr\" is waiting to start: ContainerCreating" I0429 08:48:05.390691 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"serve-healthcheck-canary\" in pod \"ingress-canary-mqspr\" is waiting to start: ContainerCreating" I0429 08:48:05.390728 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for kube-multus container multus-7xbbs pod in namespace openshift-multus (previous: true). I0429 08:48:05.588788 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for kube-multus container multus-7xbbs pod in namespace openshift-multus (previous: false). I0429 08:48:05.788933 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for egress-router-binary-copy container multus-additional-cni-plugins-fsz2m pod in namespace openshift-multus (previous: false). I0429 08:48:05.989994 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for cni-plugins container multus-additional-cni-plugins-fsz2m pod in namespace openshift-multus (previous: false). I0429 08:48:06.207312 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for bond-cni-plugin container multus-additional-cni-plugins-fsz2m pod in namespace openshift-multus (previous: false). I0429 08:48:06.389466 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for routeoverride-cni container multus-additional-cni-plugins-fsz2m pod in namespace openshift-multus (previous: false). I0429 08:48:06.589998 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for whereabouts-cni-bincopy container multus-additional-cni-plugins-fsz2m pod in namespace openshift-multus (previous: false). I0429 08:48:06.792979 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for whereabouts-cni container multus-additional-cni-plugins-fsz2m pod in namespace openshift-multus (previous: false). I0429 08:48:06.990251 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for kube-multus-additional-cni-plugins container multus-additional-cni-plugins-fsz2m pod in namespace openshift-multus (previous: false). I0429 08:48:07.190247 1 gather_cluster_operator_pods_and_events.go:280] Error: "log buffer is empty" I0429 08:48:07.190268 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for egress-router-binary-copy container multus-additional-cni-plugins-sqb82 pod in namespace openshift-multus (previous: false). 
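The long run of "Fetching logs for <container> container <pod> pod ... (previous: true/false)" entries corresponds to one log request per container, and a container still in ContainerCreating fails with the quoted "is waiting to start" error. A hedged client-go sketch of a single such request, assuming a standard in-cluster clientset; treating the 310689-byte figure as a per-request LimitBytes is an assumption, and the helper name is invented.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// fetchContainerLog pulls at most limitBytes of the current or previous log of
// one container, mirroring the requests logged above. Illustrative sketch only.
func fetchContainerLog(ctx context.Context, cs kubernetes.Interface, ns, pod, container string, previous bool, limitBytes int64) ([]byte, error) {
	opts := &corev1.PodLogOptions{
		Container:  container,
		Previous:   previous,
		LimitBytes: &limitBytes,
	}
	// A container that is still waiting (e.g. ContainerCreating) makes this
	// call fail with the "is waiting to start" error quoted in the log.
	return cs.CoreV1().Pods(ns).GetLogs(pod, opts).DoRaw(ctx)
}

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	data, err := fetchContainerLog(context.Background(), cs, "openshift-dns", "dns-default-m669d", "dns", false, 310689)
	if err != nil {
		fmt.Println("failed to fetch log:", err)
		return
	}
	fmt.Printf("fetched %d bytes\n", len(data))
}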
I0429 08:48:07.393112 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for cni-plugins container multus-additional-cni-plugins-sqb82 pod in namespace openshift-multus (previous: false). I0429 08:48:07.591080 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for bond-cni-plugin container multus-additional-cni-plugins-sqb82 pod in namespace openshift-multus (previous: false). I0429 08:48:07.790466 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for routeoverride-cni container multus-additional-cni-plugins-sqb82 pod in namespace openshift-multus (previous: false). I0429 08:48:07.998028 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for whereabouts-cni-bincopy container multus-additional-cni-plugins-sqb82 pod in namespace openshift-multus (previous: false). I0429 08:48:08.194675 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for whereabouts-cni container multus-additional-cni-plugins-sqb82 pod in namespace openshift-multus (previous: false). I0429 08:48:08.391732 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for kube-multus-additional-cni-plugins container multus-additional-cni-plugins-sqb82 pod in namespace openshift-multus (previous: false). I0429 08:48:08.590247 1 gather_cluster_operator_pods_and_events.go:280] Error: "log buffer is empty" I0429 08:48:08.590267 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for egress-router-binary-copy container multus-additional-cni-plugins-v26wf pod in namespace openshift-multus (previous: false). I0429 08:48:08.788780 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for cni-plugins container multus-additional-cni-plugins-v26wf pod in namespace openshift-multus (previous: false). I0429 08:48:08.989679 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for bond-cni-plugin container multus-additional-cni-plugins-v26wf pod in namespace openshift-multus (previous: false). I0429 08:48:09.189566 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for routeoverride-cni container multus-additional-cni-plugins-v26wf pod in namespace openshift-multus (previous: false). I0429 08:48:09.388824 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for whereabouts-cni-bincopy container multus-additional-cni-plugins-v26wf pod in namespace openshift-multus (previous: false). I0429 08:48:09.589226 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for whereabouts-cni container multus-additional-cni-plugins-v26wf pod in namespace openshift-multus (previous: false). I0429 08:48:09.789307 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for kube-multus-additional-cni-plugins container multus-additional-cni-plugins-v26wf pod in namespace openshift-multus (previous: false). I0429 08:48:09.988798 1 gather_cluster_operator_pods_and_events.go:280] Error: "log buffer is empty" I0429 08:48:09.988834 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for kube-multus container multus-pj9tl pod in namespace openshift-multus (previous: true). I0429 08:48:10.191403 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for kube-multus container multus-pj9tl pod in namespace openshift-multus (previous: false). I0429 08:48:10.396406 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for kube-multus container multus-x8xdx pod in namespace openshift-multus (previous: true). I0429 08:48:10.584051 1 tasks_processing.go:74] worker 46 stopped. 
I0429 08:48:10.584089 1 recorder.go:75] Recording config/installplans with fingerprint=7b887df561a3a9e6ef0dc672845aa5d56e348505006b7496d3a2f83892b0c95b I0429 08:48:10.584101 1 gather.go:177] gatherer "clusterconfig" function "install_plans" took 12.44988066s to process 1 records I0429 08:48:10.590072 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for kube-multus container multus-x8xdx pod in namespace openshift-multus (previous: false). I0429 08:48:10.791838 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for network-metrics-daemon container network-metrics-daemon-2jttl pod in namespace openshift-multus (previous: false). I0429 08:48:10.988870 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for network-metrics-daemon-2jttl pod in namespace openshift-multus for failing operator network-metrics-daemon (previous: false): "container \"network-metrics-daemon\" in pod \"network-metrics-daemon-2jttl\" is waiting to start: ContainerCreating" I0429 08:48:10.988892 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"network-metrics-daemon\" in pod \"network-metrics-daemon-2jttl\" is waiting to start: ContainerCreating" I0429 08:48:10.988904 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for kube-rbac-proxy container network-metrics-daemon-2jttl pod in namespace openshift-multus (previous: false). I0429 08:48:11.188252 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for network-metrics-daemon-2jttl pod in namespace openshift-multus for failing operator kube-rbac-proxy (previous: false): "container \"kube-rbac-proxy\" in pod \"network-metrics-daemon-2jttl\" is waiting to start: ContainerCreating" I0429 08:48:11.188270 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"kube-rbac-proxy\" in pod \"network-metrics-daemon-2jttl\" is waiting to start: ContainerCreating" I0429 08:48:11.188299 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for network-metrics-daemon container network-metrics-daemon-ddsxt pod in namespace openshift-multus (previous: false). I0429 08:48:11.340738 1 tasks_processing.go:74] worker 30 stopped. I0429 08:48:11.341006 1 recorder.go:75] Recording config/serviceaccounts with fingerprint=6f02bbed9776fbc53f719d04062abe458034f9ac95b2963e439b1381b6c8e3c4 I0429 08:48:11.341022 1 gather.go:177] gatherer "clusterconfig" function "service_accounts" took 13.206860307s to process 1 records I0429 08:48:11.349326 1 configmapobserver.go:84] configmaps "insights-config" not found I0429 08:48:11.405171 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for network-metrics-daemon-ddsxt pod in namespace openshift-multus for failing operator network-metrics-daemon (previous: false): "container \"network-metrics-daemon\" in pod \"network-metrics-daemon-ddsxt\" is waiting to start: ContainerCreating" I0429 08:48:11.405184 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"network-metrics-daemon\" in pod \"network-metrics-daemon-ddsxt\" is waiting to start: ContainerCreating" I0429 08:48:11.405192 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for kube-rbac-proxy container network-metrics-daemon-ddsxt pod in namespace openshift-multus (previous: false). 
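Entries such as gatherer "clusterconfig" function "install_plans" took 12.44988066s to process 1 records show each gather function being timed and its record count reported when its worker finishes. A trivial sketch of that wrapper pattern; the types and names are invented for illustration and say nothing about the operator's real gather interface.

package main

import (
	"fmt"
	"time"
)

// record stands in for one archived item; gatherFunc for one gather function.
// Both are hypothetical names used only in this sketch.
type record struct{ Name string }
type gatherFunc func() ([]record, error)

// timeGather runs fn and reports how long it took and how many records it
// produced, in the spirit of the "took ... to process N records" entries above.
func timeGather(gatherer, name string, fn gatherFunc) ([]record, error) {
	start := time.Now()
	recs, err := fn()
	fmt.Printf("gatherer %q function %q took %s to process %d records\n",
		gatherer, name, time.Since(start), len(recs))
	return recs, err
}

func main() {
	_, _ = timeGather("clusterconfig", "install_plans", func() ([]record, error) {
		time.Sleep(10 * time.Millisecond) // stand-in for real work
		return []record{{Name: "config/installplans"}}, nil
	})
}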
I0429 08:48:11.590592 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for network-metrics-daemon-ddsxt pod in namespace openshift-multus for failing operator kube-rbac-proxy (previous: false): "container \"kube-rbac-proxy\" in pod \"network-metrics-daemon-ddsxt\" is waiting to start: ContainerCreating" I0429 08:48:11.590613 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"kube-rbac-proxy\" in pod \"network-metrics-daemon-ddsxt\" is waiting to start: ContainerCreating" I0429 08:48:11.590655 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for network-metrics-daemon container network-metrics-daemon-lh9ht pod in namespace openshift-multus (previous: false). I0429 08:48:11.789161 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for network-metrics-daemon-lh9ht pod in namespace openshift-multus for failing operator network-metrics-daemon (previous: false): "container \"network-metrics-daemon\" in pod \"network-metrics-daemon-lh9ht\" is waiting to start: ContainerCreating" I0429 08:48:11.789177 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"network-metrics-daemon\" in pod \"network-metrics-daemon-lh9ht\" is waiting to start: ContainerCreating" I0429 08:48:11.789187 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for kube-rbac-proxy container network-metrics-daemon-lh9ht pod in namespace openshift-multus (previous: false). I0429 08:48:11.989400 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for network-metrics-daemon-lh9ht pod in namespace openshift-multus for failing operator kube-rbac-proxy (previous: false): "container \"kube-rbac-proxy\" in pod \"network-metrics-daemon-lh9ht\" is waiting to start: ContainerCreating" I0429 08:48:11.989416 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"kube-rbac-proxy\" in pod \"network-metrics-daemon-lh9ht\" is waiting to start: ContainerCreating" I0429 08:48:11.989428 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for ovn-controller container ovnkube-node-4669z pod in namespace openshift-ovn-kubernetes (previous: false). I0429 08:48:12.193018 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for ovn-acl-logging container ovnkube-node-4669z pod in namespace openshift-ovn-kubernetes (previous: false). I0429 08:48:12.392763 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for kube-rbac-proxy-node container ovnkube-node-4669z pod in namespace openshift-ovn-kubernetes (previous: false). I0429 08:48:12.593315 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for kube-rbac-proxy-ovn-metrics container ovnkube-node-4669z pod in namespace openshift-ovn-kubernetes (previous: false). I0429 08:48:12.792733 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for northd container ovnkube-node-4669z pod in namespace openshift-ovn-kubernetes (previous: false). I0429 08:48:12.994174 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for nbdb container ovnkube-node-4669z pod in namespace openshift-ovn-kubernetes (previous: false). I0429 08:48:13.191370 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for sbdb container ovnkube-node-4669z pod in namespace openshift-ovn-kubernetes (previous: false). 
I0429 08:48:13.249337 1 configmapobserver.go:84] configmaps "insights-config" not found I0429 08:48:13.390528 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for ovnkube-controller container ovnkube-node-4669z pod in namespace openshift-ovn-kubernetes (previous: false). I0429 08:48:13.446510 1 configmapobserver.go:84] configmaps "insights-config" not found I0429 08:48:13.593688 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for ovn-controller container ovnkube-node-dmqw5 pod in namespace openshift-ovn-kubernetes (previous: true). I0429 08:48:13.789654 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for ovnkube-node-dmqw5 pod in namespace openshift-ovn-kubernetes for failing operator ovn-controller (previous: true): "previous terminated container \"ovn-controller\" in pod \"ovnkube-node-dmqw5\" not found" I0429 08:48:13.789672 1 gather_cluster_operator_pods_and_events.go:280] Error: "previous terminated container \"ovn-controller\" in pod \"ovnkube-node-dmqw5\" not found" I0429 08:48:13.789681 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for ovn-acl-logging container ovnkube-node-dmqw5 pod in namespace openshift-ovn-kubernetes (previous: true). I0429 08:48:13.989148 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for ovnkube-node-dmqw5 pod in namespace openshift-ovn-kubernetes for failing operator ovn-acl-logging (previous: true): "previous terminated container \"ovn-acl-logging\" in pod \"ovnkube-node-dmqw5\" not found" I0429 08:48:13.989164 1 gather_cluster_operator_pods_and_events.go:280] Error: "previous terminated container \"ovn-acl-logging\" in pod \"ovnkube-node-dmqw5\" not found" I0429 08:48:13.989172 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for kube-rbac-proxy-node container ovnkube-node-dmqw5 pod in namespace openshift-ovn-kubernetes (previous: true). I0429 08:48:14.189029 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for ovnkube-node-dmqw5 pod in namespace openshift-ovn-kubernetes for failing operator kube-rbac-proxy-node (previous: true): "previous terminated container \"kube-rbac-proxy-node\" in pod \"ovnkube-node-dmqw5\" not found" I0429 08:48:14.189044 1 gather_cluster_operator_pods_and_events.go:280] Error: "previous terminated container \"kube-rbac-proxy-node\" in pod \"ovnkube-node-dmqw5\" not found" I0429 08:48:14.189053 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for kube-rbac-proxy-ovn-metrics container ovnkube-node-dmqw5 pod in namespace openshift-ovn-kubernetes (previous: true). I0429 08:48:14.388964 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for ovnkube-node-dmqw5 pod in namespace openshift-ovn-kubernetes for failing operator kube-rbac-proxy-ovn-metrics (previous: true): "previous terminated container \"kube-rbac-proxy-ovn-metrics\" in pod \"ovnkube-node-dmqw5\" not found" I0429 08:48:14.388982 1 gather_cluster_operator_pods_and_events.go:280] Error: "previous terminated container \"kube-rbac-proxy-ovn-metrics\" in pod \"ovnkube-node-dmqw5\" not found" I0429 08:48:14.388992 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for northd container ovnkube-node-dmqw5 pod in namespace openshift-ovn-kubernetes (previous: true). 
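The repeated configmaps "insights-config" not found lines are informational: the operator keeps checking for an optional configuration ConfigMap and simply falls back to its defaults while it is absent. A hedged client-go sketch of reading such an optional ConfigMap and treating NotFound as "no override"; only the ConfigMap name comes from the log, while the openshift-insights namespace and the overall shape are assumptions.

package main

import (
	"context"
	"fmt"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// loadOptionalConfig returns the data of the insights-config ConfigMap, or nil
// when the ConfigMap does not exist (the benign case logged above).
func loadOptionalConfig(ctx context.Context, cs kubernetes.Interface) (map[string]string, error) {
	cm, err := cs.CoreV1().ConfigMaps("openshift-insights").Get(ctx, "insights-config", metav1.GetOptions{})
	if apierrors.IsNotFound(err) {
		return nil, nil // not an error: keep using the built-in defaults
	}
	if err != nil {
		return nil, err
	}
	return cm.Data, nil
}

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	data, err := loadOptionalConfig(context.Background(), kubernetes.NewForConfigOrDie(cfg))
	if err != nil {
		panic(err)
	}
	fmt.Println("insights-config present:", data != nil)
}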
I0429 08:48:14.588770 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for ovnkube-node-dmqw5 pod in namespace openshift-ovn-kubernetes for failing operator northd (previous: true): "previous terminated container \"northd\" in pod \"ovnkube-node-dmqw5\" not found" I0429 08:48:14.588786 1 gather_cluster_operator_pods_and_events.go:280] Error: "previous terminated container \"northd\" in pod \"ovnkube-node-dmqw5\" not found" I0429 08:48:14.588795 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for nbdb container ovnkube-node-dmqw5 pod in namespace openshift-ovn-kubernetes (previous: true). I0429 08:48:14.788702 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for ovnkube-node-dmqw5 pod in namespace openshift-ovn-kubernetes for failing operator nbdb (previous: true): "previous terminated container \"nbdb\" in pod \"ovnkube-node-dmqw5\" not found" I0429 08:48:14.788719 1 gather_cluster_operator_pods_and_events.go:280] Error: "previous terminated container \"nbdb\" in pod \"ovnkube-node-dmqw5\" not found" I0429 08:48:14.788728 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for sbdb container ovnkube-node-dmqw5 pod in namespace openshift-ovn-kubernetes (previous: true). I0429 08:48:14.988898 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for ovnkube-node-dmqw5 pod in namespace openshift-ovn-kubernetes for failing operator sbdb (previous: true): "previous terminated container \"sbdb\" in pod \"ovnkube-node-dmqw5\" not found" I0429 08:48:14.988913 1 gather_cluster_operator_pods_and_events.go:280] Error: "previous terminated container \"sbdb\" in pod \"ovnkube-node-dmqw5\" not found" I0429 08:48:14.988923 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for ovnkube-controller container ovnkube-node-dmqw5 pod in namespace openshift-ovn-kubernetes (previous: true). I0429 08:48:15.190898 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for ovn-controller container ovnkube-node-dmqw5 pod in namespace openshift-ovn-kubernetes (previous: false). I0429 08:48:15.390525 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for ovn-acl-logging container ovnkube-node-dmqw5 pod in namespace openshift-ovn-kubernetes (previous: false). I0429 08:48:15.590222 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for kube-rbac-proxy-node container ovnkube-node-dmqw5 pod in namespace openshift-ovn-kubernetes (previous: false). I0429 08:48:15.790827 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for kube-rbac-proxy-ovn-metrics container ovnkube-node-dmqw5 pod in namespace openshift-ovn-kubernetes (previous: false). I0429 08:48:15.990969 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for northd container ovnkube-node-dmqw5 pod in namespace openshift-ovn-kubernetes (previous: false). I0429 08:48:16.190103 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for nbdb container ovnkube-node-dmqw5 pod in namespace openshift-ovn-kubernetes (previous: false). I0429 08:48:16.389394 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for sbdb container ovnkube-node-dmqw5 pod in namespace openshift-ovn-kubernetes (previous: false). I0429 08:48:16.589437 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for ovnkube-controller container ovnkube-node-dmqw5 pod in namespace openshift-ovn-kubernetes (previous: false). 
I0429 08:48:16.789653 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for ovn-controller container ovnkube-node-tqqdh pod in namespace openshift-ovn-kubernetes (previous: false). I0429 08:48:16.993408 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for ovn-acl-logging container ovnkube-node-tqqdh pod in namespace openshift-ovn-kubernetes (previous: false). I0429 08:48:17.192103 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for kube-rbac-proxy-node container ovnkube-node-tqqdh pod in namespace openshift-ovn-kubernetes (previous: false). I0429 08:48:17.391949 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for kube-rbac-proxy-ovn-metrics container ovnkube-node-tqqdh pod in namespace openshift-ovn-kubernetes (previous: false). I0429 08:48:17.591785 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for northd container ovnkube-node-tqqdh pod in namespace openshift-ovn-kubernetes (previous: false). I0429 08:48:17.799520 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for nbdb container ovnkube-node-tqqdh pod in namespace openshift-ovn-kubernetes (previous: false). I0429 08:48:17.992493 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for sbdb container ovnkube-node-tqqdh pod in namespace openshift-ovn-kubernetes (previous: false). I0429 08:48:18.191141 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for ovnkube-controller container ovnkube-node-tqqdh pod in namespace openshift-ovn-kubernetes (previous: false). I0429 08:48:18.392253 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for check-endpoints container network-check-source-6b8cd5b79b-j4s7t pod in namespace openshift-network-diagnostics (previous: false). I0429 08:48:18.591163 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for network-check-target-container container network-check-target-6h257 pod in namespace openshift-network-diagnostics (previous: false). I0429 08:48:18.790482 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for network-check-target-6h257 pod in namespace openshift-network-diagnostics for failing operator network-check-target-container (previous: false): "container \"network-check-target-container\" in pod \"network-check-target-6h257\" is waiting to start: ContainerCreating" I0429 08:48:18.790500 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"network-check-target-container\" in pod \"network-check-target-6h257\" is waiting to start: ContainerCreating" I0429 08:48:18.790530 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for network-check-target-container container network-check-target-8ghsw pod in namespace openshift-network-diagnostics (previous: false). 
I0429 08:48:18.989933 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for network-check-target-8ghsw pod in namespace openshift-network-diagnostics for failing operator network-check-target-container (previous: false): "container \"network-check-target-container\" in pod \"network-check-target-8ghsw\" is waiting to start: ContainerCreating" I0429 08:48:18.989951 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"network-check-target-container\" in pod \"network-check-target-8ghsw\" is waiting to start: ContainerCreating" I0429 08:48:18.990009 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for network-check-target-container container network-check-target-r8v5g pod in namespace openshift-network-diagnostics (previous: false). I0429 08:48:19.188502 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for network-check-target-r8v5g pod in namespace openshift-network-diagnostics for failing operator network-check-target-container (previous: false): "container \"network-check-target-container\" in pod \"network-check-target-r8v5g\" is waiting to start: ContainerCreating" I0429 08:48:19.188519 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"network-check-target-container\" in pod \"network-check-target-r8v5g\" is waiting to start: ContainerCreating" I0429 08:48:19.188545 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for networking-console-plugin container networking-console-plugin-6ddbfdf749-bvqxv pod in namespace openshift-network-console (previous: false). I0429 08:48:19.390470 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for networking-console-plugin-6ddbfdf749-bvqxv pod in namespace openshift-network-console for failing operator networking-console-plugin (previous: false): "container \"networking-console-plugin\" in pod \"networking-console-plugin-6ddbfdf749-bvqxv\" is waiting to start: ContainerCreating" I0429 08:48:19.390487 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"networking-console-plugin\" in pod \"networking-console-plugin-6ddbfdf749-bvqxv\" is waiting to start: ContainerCreating" I0429 08:48:19.390513 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for networking-console-plugin container networking-console-plugin-6ddbfdf749-ls7vd pod in namespace openshift-network-console (previous: false). I0429 08:48:19.590642 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for networking-console-plugin-6ddbfdf749-ls7vd pod in namespace openshift-network-console for failing operator networking-console-plugin (previous: false): "container \"networking-console-plugin\" in pod \"networking-console-plugin-6ddbfdf749-ls7vd\" is waiting to start: ContainerCreating" I0429 08:48:19.590657 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"networking-console-plugin\" in pod \"networking-console-plugin-6ddbfdf749-ls7vd\" is waiting to start: ContainerCreating" I0429 08:48:19.590671 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for iptables-alerter container iptables-alerter-jwb98 pod in namespace openshift-network-operator (previous: false). I0429 08:48:19.790179 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for iptables-alerter container iptables-alerter-nxl62 pod in namespace openshift-network-operator (previous: false). 
I0429 08:48:19.988501 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for iptables-alerter container iptables-alerter-rkjlz pod in namespace openshift-network-operator (previous: false). I0429 08:48:20.189999 1 tasks_processing.go:74] worker 27 stopped. I0429 08:48:20.190104 1 recorder.go:75] Recording events/openshift-dns-operator with fingerprint=ae8a0d709b0714c215395fe9ca27f67aa86a9c1a3bcd18a0ea10957c6908fdad I0429 08:48:20.190150 1 recorder.go:75] Recording events/openshift-dns with fingerprint=73c758e75b407a74a49c829c83bb1626fe584aa7354a96ac19d30f48814146ac I0429 08:48:20.190230 1 recorder.go:75] Recording events/openshift-image-registry with fingerprint=9acbe08471ee4fe7ea649612846ed659a2bd3ec5b0b1ee4fc2e46e510931853b I0429 08:48:20.190256 1 recorder.go:75] Recording events/openshift-ingress-operator with fingerprint=d152a2721386f6741982e551885a03bde31f5ebfddf642a2a368fd1329d0325f I0429 08:48:20.190302 1 recorder.go:75] Recording events/openshift-ingress with fingerprint=f5066f30e5475a64d0b733fcbb9eb01cd16021f846997e89c932a302959f7d61 I0429 08:48:20.190315 1 recorder.go:75] Recording events/openshift-ingress-canary with fingerprint=56a45591544286483a98e16e5f1959f8e19261968094b7182198ac683753cdf4 I0429 08:48:20.190501 1 recorder.go:75] Recording events/openshift-multus with fingerprint=aa7c17dfed03744347926d30a7af96100e84e87c477f88643fa75688ec4fd4d9 I0429 08:48:20.190627 1 recorder.go:75] Recording events/openshift-ovn-kubernetes with fingerprint=268a673ef08c1f03a1bc8c0b1a01cce5a7ce43ba0595b573bf4eaa9ed258bb3e I0429 08:48:20.190667 1 recorder.go:75] Recording events/openshift-network-diagnostics with fingerprint=665bb663809c04124978f4211cfabb49c9321a1af8944bc75fc47882ec4b43fc I0429 08:48:20.190676 1 recorder.go:75] Recording events/openshift-network-node-identity with fingerprint=74bc241636e75ece285278b922c5d5a3244ee56564625612124d77b12a6d59e1 I0429 08:48:20.190693 1 recorder.go:75] Recording events/openshift-network-console with fingerprint=8be49c018ea034254ec1f9abf2f023aed6977541b70cec50b0074b726805e2c7 I0429 08:48:20.190757 1 recorder.go:75] Recording events/openshift-network-operator with fingerprint=c5e602ca503fd2ad1c9d33956c46b33d131761067ea612c9919ae5cad87fde25 I0429 08:48:20.190922 1 recorder.go:75] Recording config/pod/openshift-image-registry/image-registry-5c956cbd7b-2fdmt with fingerprint=a0d97944834eec6890df04ecd285b8558e96444898a0a095ceff51b230628817 I0429 08:48:20.191046 1 recorder.go:75] Recording config/pod/openshift-image-registry/image-registry-5c956cbd7b-jbjrd with fingerprint=1d4a9f80040499ee46a0322f2c5a786cc08f801b3edf1c913a1aea1d186c9cea I0429 08:48:20.191125 1 recorder.go:75] Recording config/pod/openshift-image-registry/image-registry-646d459c6c-7hwz4 with fingerprint=adf328c6b71e344f2ea3f31f5c48850e9fae90575184e45fb60d978393c18bf5 I0429 08:48:20.191218 1 recorder.go:75] Recording config/pod/openshift-ingress/router-default-7d554d5549-fz8r8 with fingerprint=cf78b2652c6def47bfb83528be5408bf1d5fca6ca7027aa7ff3ac19cc7f8a156 I0429 08:48:20.191300 1 recorder.go:75] Recording config/pod/openshift-ingress/router-default-bcd4b6f46-gthmx with fingerprint=2cbe69b6a34146d6ff9517f34441593c69c51549cb17dc7442e1d8659db28108 I0429 08:48:20.191396 1 recorder.go:75] Recording config/pod/openshift-ingress/router-default-bcd4b6f46-htg5w with fingerprint=c1c91ff4ea1d0deebe1c455ff26cac925e7128bfac1cca8fea4539436ee523b2 I0429 08:48:20.191511 1 recorder.go:75] Recording config/pod/openshift-multus/multus-7xbbs with 
fingerprint=4971c48fc43882f868d5fa43e2e6069ad884ad22b50bd5a82f9888e98a1feb40 E0429 08:48:20.191527 1 gather.go:161] error recording gatherer "clusterconfig" function "operators_pods_and_events" result "config/pod/openshift-multus/multus-7xbbs.json" because of the error: the record with the same name "config/pod/openshift-multus/multus-7xbbs.json" was already recorded and had the fingerprint "4971c48fc43882f868d5fa43e2e6069ad884ad22b50bd5a82f9888e98a1feb40", overwriting with the record having fingerprint "4971c48fc43882f868d5fa43e2e6069ad884ad22b50bd5a82f9888e98a1feb40" W0429 08:48:20.191540 1 gather.go:155] issue recording gatherer "clusterconfig" function "operators_pods_and_events" result "config/pod/openshift-multus/multus-7xbbs.json" because of the warning: warning: the record with the same fingerprint "4971c48fc43882f868d5fa43e2e6069ad884ad22b50bd5a82f9888e98a1feb40" was already recorded at path "config/pod/openshift-multus/multus-7xbbs.json", recording another one with a different path "config/pod/openshift-multus/multus-7xbbs.json" I0429 08:48:20.191558 1 recorder.go:75] Recording config/pod/openshift-multus/logs/multus-7xbbs/kube-multus_previous.log with fingerprint=cb037a9247bacb04b51fd0eca201fc93c0ae2cea5411fed302dacdd2cdb6a1b2 I0429 08:48:20.191580 1 recorder.go:75] Recording config/pod/openshift-multus/logs/multus-7xbbs/kube-multus_current.log with fingerprint=cb037a9247bacb04b51fd0eca201fc93c0ae2cea5411fed302dacdd2cdb6a1b2 W0429 08:48:20.191591 1 gather.go:155] issue recording gatherer "clusterconfig" function "operators_pods_and_events" result "config/pod/openshift-multus/logs/multus-7xbbs/kube-multus_current.log" because of the warning: warning: the record with the same fingerprint "cb037a9247bacb04b51fd0eca201fc93c0ae2cea5411fed302dacdd2cdb6a1b2" was already recorded at path "config/pod/openshift-multus/logs/multus-7xbbs/kube-multus_previous.log", recording another one with a different path "config/pod/openshift-multus/logs/multus-7xbbs/kube-multus_current.log" I0429 08:48:20.191606 1 recorder.go:75] Recording config/pod/openshift-multus/logs/multus-additional-cni-plugins-fsz2m/egress-router-binary-copy_current.log with fingerprint=950ddddf0ba6e809df62c0cb5e6a1a53fcbffeca9efecbe0d796741ee177887e I0429 08:48:20.191629 1 recorder.go:75] Recording config/pod/openshift-multus/logs/multus-additional-cni-plugins-fsz2m/cni-plugins_current.log with fingerprint=227921ce34251e5cfc39596b2f4362b87b38586e758c415d8a891ead7b09f6c8 I0429 08:48:20.191637 1 recorder.go:75] Recording config/pod/openshift-multus/logs/multus-additional-cni-plugins-fsz2m/bond-cni-plugin_current.log with fingerprint=a742741265996db3af779a6195865f71a72300772861e0892de72beb44ad3a58 I0429 08:48:20.191642 1 recorder.go:75] Recording config/pod/openshift-multus/logs/multus-additional-cni-plugins-fsz2m/routeoverride-cni_current.log with fingerprint=0706dfcc301c73f275699dd36a1708da045c440af3277fa6093bc9fe336c5916 I0429 08:48:20.191648 1 recorder.go:75] Recording config/pod/openshift-multus/logs/multus-additional-cni-plugins-fsz2m/whereabouts-cni-bincopy_current.log with fingerprint=6c791c28870d9a28ac3233b52dbd716ddbf6a60c594b9a2a18f59edd461d7837 I0429 08:48:20.191652 1 recorder.go:75] Recording config/pod/openshift-multus/logs/multus-additional-cni-plugins-fsz2m/whereabouts-cni_current.log with fingerprint=e79213c3d5f4de25210fae533a2565bc392d5e9cf8cd33786d4dec7751c048dc I0429 08:48:20.191657 1 recorder.go:75] Recording 
config/pod/openshift-multus/logs/multus-additional-cni-plugins-sqb82/egress-router-binary-copy_current.log with fingerprint=b018f6a8d05918138970dc77517c0b3614b5ede231e1d3ad640d4051d93d6509 I0429 08:48:20.191661 1 recorder.go:75] Recording config/pod/openshift-multus/logs/multus-additional-cni-plugins-sqb82/cni-plugins_current.log with fingerprint=8cd042dbefd1091fcbed3d4f478f563bacec56f9451f9d22ce3887f5d18c5da0 I0429 08:48:20.191666 1 recorder.go:75] Recording config/pod/openshift-multus/logs/multus-additional-cni-plugins-sqb82/bond-cni-plugin_current.log with fingerprint=c5bb1cd337f60dc0a361870b3eda6dbfe33d6cc68e361cc24b9fdcea92494606 I0429 08:48:20.191672 1 recorder.go:75] Recording config/pod/openshift-multus/logs/multus-additional-cni-plugins-sqb82/routeoverride-cni_current.log with fingerprint=a4f793ea0b4c00ad6a26bfb553c030c982c2277bf0d2c08c57e136d5a4dbc9b4 I0429 08:48:20.191677 1 recorder.go:75] Recording config/pod/openshift-multus/logs/multus-additional-cni-plugins-sqb82/whereabouts-cni-bincopy_current.log with fingerprint=c662c35ae4f647aaefb6a2f28decce0e8f21cce3e563e8310df93d9fd232bd14 I0429 08:48:20.191681 1 recorder.go:75] Recording config/pod/openshift-multus/logs/multus-additional-cni-plugins-sqb82/whereabouts-cni_current.log with fingerprint=43f328bc9eefe01a1e748de8d850ae3ad7261e50e44746569dd1b1b24a1c118d I0429 08:48:20.191689 1 recorder.go:75] Recording config/pod/openshift-multus/logs/multus-additional-cni-plugins-v26wf/egress-router-binary-copy_current.log with fingerprint=519d2a757204d7edfb5757999bcc3512b752c94370390778d6ef875cd82bbf0c I0429 08:48:20.191694 1 recorder.go:75] Recording config/pod/openshift-multus/logs/multus-additional-cni-plugins-v26wf/cni-plugins_current.log with fingerprint=cda3e46c9195a691a85dee32e616b4310f1678cfab748a7b654c78a7152d34b0 I0429 08:48:20.191705 1 recorder.go:75] Recording config/pod/openshift-multus/logs/multus-additional-cni-plugins-v26wf/bond-cni-plugin_current.log with fingerprint=f5a90ce00ac73970926545bad2f5333fc420195b8ac3d7940f299dddaa915205 I0429 08:48:20.191710 1 recorder.go:75] Recording config/pod/openshift-multus/logs/multus-additional-cni-plugins-v26wf/routeoverride-cni_current.log with fingerprint=d8162fae00b730a4945fb16926e6d07c16bf28e07e78d84729a188ffb8fc2a47 I0429 08:48:20.191715 1 recorder.go:75] Recording config/pod/openshift-multus/logs/multus-additional-cni-plugins-v26wf/whereabouts-cni-bincopy_current.log with fingerprint=8270735171058699426d6bb6f7bbed136a30dbd4dc485cae27ae32fbc9c088af I0429 08:48:20.191718 1 recorder.go:75] Recording config/pod/openshift-multus/logs/multus-additional-cni-plugins-v26wf/whereabouts-cni_current.log with fingerprint=6fd09a6fae5ff78935e8f06ea6917c8ee9e08fa751b41ef1714cfcf1e5edcea4 I0429 08:48:20.191813 1 recorder.go:75] Recording config/pod/openshift-multus/multus-pj9tl with fingerprint=8e9b0796383d97fc4a3636ce0153f7dc99451ef8f712974993c640a2a9797f10 E0429 08:48:20.191824 1 gather.go:161] error recording gatherer "clusterconfig" function "operators_pods_and_events" result "config/pod/openshift-multus/multus-pj9tl.json" because of the error: the record with the same name "config/pod/openshift-multus/multus-pj9tl.json" was already recorded and had the fingerprint "8e9b0796383d97fc4a3636ce0153f7dc99451ef8f712974993c640a2a9797f10", overwriting with the record having fingerprint "8e9b0796383d97fc4a3636ce0153f7dc99451ef8f712974993c640a2a9797f10" W0429 08:48:20.191832 1 gather.go:155] issue recording gatherer "clusterconfig" function "operators_pods_and_events" result 
"config/pod/openshift-multus/multus-pj9tl.json" because of the warning: warning: the record with the same fingerprint "8e9b0796383d97fc4a3636ce0153f7dc99451ef8f712974993c640a2a9797f10" was already recorded at path "config/pod/openshift-multus/multus-pj9tl.json", recording another one with a different path "config/pod/openshift-multus/multus-pj9tl.json" I0429 08:48:20.191843 1 recorder.go:75] Recording config/pod/openshift-multus/logs/multus-pj9tl/kube-multus_previous.log with fingerprint=0f0d21429323f08131fda5df532cd282105700c76e34e73df31d7ebf4082ae2c I0429 08:48:20.191945 1 recorder.go:75] Recording config/pod/openshift-multus/logs/multus-pj9tl/kube-multus_current.log with fingerprint=54b5ca99eb8e265ff97e2f80e9fd26a603006c81a06a3d0782eae078481026e8 I0429 08:48:20.192043 1 recorder.go:75] Recording config/pod/openshift-multus/multus-x8xdx with fingerprint=efbfede8fbf74846237605989750ff483c19142cbb35f9aa5971acf184cae0d7 E0429 08:48:20.192051 1 gather.go:161] error recording gatherer "clusterconfig" function "operators_pods_and_events" result "config/pod/openshift-multus/multus-x8xdx.json" because of the error: the record with the same name "config/pod/openshift-multus/multus-x8xdx.json" was already recorded and had the fingerprint "efbfede8fbf74846237605989750ff483c19142cbb35f9aa5971acf184cae0d7", overwriting with the record having fingerprint "efbfede8fbf74846237605989750ff483c19142cbb35f9aa5971acf184cae0d7" W0429 08:48:20.192059 1 gather.go:155] issue recording gatherer "clusterconfig" function "operators_pods_and_events" result "config/pod/openshift-multus/multus-x8xdx.json" because of the warning: warning: the record with the same fingerprint "efbfede8fbf74846237605989750ff483c19142cbb35f9aa5971acf184cae0d7" was already recorded at path "config/pod/openshift-multus/multus-x8xdx.json", recording another one with a different path "config/pod/openshift-multus/multus-x8xdx.json" I0429 08:48:20.192066 1 recorder.go:75] Recording config/pod/openshift-multus/logs/multus-x8xdx/kube-multus_previous.log with fingerprint=b5cefac161da4dea4a5eeea6993d3a65cb60b251e6103065dc7e9fb7d02d2795 I0429 08:48:20.192122 1 recorder.go:75] Recording config/pod/openshift-multus/logs/multus-x8xdx/kube-multus_current.log with fingerprint=d9dbcae0d76d76db2b69c919d28212e686053ec8bc27c1f0155cf48fa2f74419 I0429 08:48:20.192181 1 recorder.go:75] Recording config/pod/openshift-multus/network-metrics-daemon-2jttl with fingerprint=0922102108c9a1c17c97056702f8f034a1114178c171ba5f0c58c05c774e365f I0429 08:48:20.192236 1 recorder.go:75] Recording config/pod/openshift-multus/network-metrics-daemon-ddsxt with fingerprint=02a0df0d605231609b32116814a18c28b1783a4bd27be6a3151a1e34b7171eb8 I0429 08:48:20.192299 1 recorder.go:75] Recording config/pod/openshift-multus/network-metrics-daemon-lh9ht with fingerprint=1bd0baec6b333d6e5620815cf6ebfe4ff13f4bae39594e3394acbcb3212cd0e7 I0429 08:48:20.192357 1 recorder.go:75] Recording config/pod/openshift-ovn-kubernetes/logs/ovnkube-node-4669z/ovn-controller_current.log with fingerprint=1b7ec6bed8595b11eba754744469fab2c97db8b224a1edd372227f753babdff4 I0429 08:48:20.192378 1 recorder.go:75] Recording config/pod/openshift-ovn-kubernetes/logs/ovnkube-node-4669z/ovn-acl-logging_current.log with fingerprint=d1a80696f2b3396e4871656a11c6aeecb6f708e42f77f81f7961284c0a4ecef5 I0429 08:48:20.192405 1 recorder.go:75] Recording config/pod/openshift-ovn-kubernetes/logs/ovnkube-node-4669z/kube-rbac-proxy-node_current.log with fingerprint=9d8aa24fcd08753ab4e0fd10fd050854bf52f92b661ceed900b5dba3b8fc5678 I0429 
08:48:20.192428 1 recorder.go:75] Recording config/pod/openshift-ovn-kubernetes/logs/ovnkube-node-4669z/kube-rbac-proxy-ovn-metrics_current.log with fingerprint=67ea7662f986bf0e594dc2425b009eab207a5191ebf7a2750cc93dbbd45f4bbe
I0429 08:48:20.192466 1 recorder.go:75] Recording config/pod/openshift-ovn-kubernetes/logs/ovnkube-node-4669z/northd_current.log with fingerprint=550c8ccd2ffd6ba802af2cab79531ae679b328720b5f2db733c4db5b00f04fa2
I0429 08:48:20.192482 1 recorder.go:75] Recording config/pod/openshift-ovn-kubernetes/logs/ovnkube-node-4669z/nbdb_current.log with fingerprint=a456b539a74e255e59b77db7f261911f978802c29255d3b3740f38c571aed218
I0429 08:48:20.192494 1 recorder.go:75] Recording config/pod/openshift-ovn-kubernetes/logs/ovnkube-node-4669z/sbdb_current.log with fingerprint=ac8f033a3f851707c1bfec6831f134a617af90d0655eb2e54722f63deaafd496
I0429 08:48:20.192581 1 recorder.go:75] Recording config/pod/openshift-ovn-kubernetes/logs/ovnkube-node-4669z/ovnkube-controller_current.log with fingerprint=cb9d1be6f740028b1286d0a2ab5168a2f6429885137e78511bbdc251ec219f87
I0429 08:48:20.192854 1 recorder.go:75] Recording config/pod/openshift-ovn-kubernetes/ovnkube-node-dmqw5 with fingerprint=391c09fe411b28661c55f50202f83e261f8cbaaa4f7e43af345191b9279ca175
E0429 08:48:20.192863 1 gather.go:161] error recording gatherer "clusterconfig" function "operators_pods_and_events" result "config/pod/openshift-ovn-kubernetes/ovnkube-node-dmqw5.json" because of the error: the record with the same name "config/pod/openshift-ovn-kubernetes/ovnkube-node-dmqw5.json" was already recorded and had the fingerprint "391c09fe411b28661c55f50202f83e261f8cbaaa4f7e43af345191b9279ca175", overwriting with the record having fingerprint "391c09fe411b28661c55f50202f83e261f8cbaaa4f7e43af345191b9279ca175"
W0429 08:48:20.192871 1 gather.go:155] issue recording gatherer "clusterconfig" function "operators_pods_and_events" result "config/pod/openshift-ovn-kubernetes/ovnkube-node-dmqw5.json" because of the warning: warning: the record with the same fingerprint "391c09fe411b28661c55f50202f83e261f8cbaaa4f7e43af345191b9279ca175" was already recorded at path "config/pod/openshift-ovn-kubernetes/ovnkube-node-dmqw5.json", recording another one with a different path "config/pod/openshift-ovn-kubernetes/ovnkube-node-dmqw5.json"
I0429 08:48:20.192940 1 recorder.go:75] Recording config/pod/openshift-ovn-kubernetes/logs/ovnkube-node-dmqw5/ovnkube-controller_previous.log with fingerprint=25b5df8185f29f6bb6e06a0cafe34c05020a192432412ee8d8e2a27cc4d405a1
I0429 08:48:20.192992 1 recorder.go:75] Recording config/pod/openshift-ovn-kubernetes/logs/ovnkube-node-dmqw5/ovn-controller_current.log with fingerprint=59bf212d0aa094ae55a9c23d6293a4b93292201b5c1d3e856344b83c6bd6e6df
I0429 08:48:20.193020 1 recorder.go:75] Recording config/pod/openshift-ovn-kubernetes/logs/ovnkube-node-dmqw5/ovn-acl-logging_current.log with fingerprint=8e4c5b860df45e573d836ee57ec60f1bc85ea0d7d9b7c695552ac50e79a29ee8
I0429 08:48:20.193046 1 recorder.go:75] Recording config/pod/openshift-ovn-kubernetes/logs/ovnkube-node-dmqw5/kube-rbac-proxy-node_current.log with fingerprint=df47e90ef6c0a9d054f84c76e32e872976c0e3aa21278e148109aba750429204
I0429 08:48:20.193074 1 recorder.go:75] Recording config/pod/openshift-ovn-kubernetes/logs/ovnkube-node-dmqw5/kube-rbac-proxy-ovn-metrics_current.log with fingerprint=ad51fd01bd4559c30ac645a02359aafdae4c27dce15074fe2522fce83171ca5c
I0429 08:48:20.193097 1 recorder.go:75] Recording config/pod/openshift-ovn-kubernetes/logs/ovnkube-node-dmqw5/northd_current.log with fingerprint=2bc5e0024b4dd8f04a02dbbf857702a28f2c3f9f2a5c54b5e1f9d1f56a55b8eb
I0429 08:48:20.193109 1 recorder.go:75] Recording config/pod/openshift-ovn-kubernetes/logs/ovnkube-node-dmqw5/nbdb_current.log with fingerprint=fa8c61bd1249d7e029bd8f46e4e091644bedab6586e0a2650c1b8230994d25d7
I0429 08:48:20.193128 1 recorder.go:75] Recording config/pod/openshift-ovn-kubernetes/logs/ovnkube-node-dmqw5/sbdb_current.log with fingerprint=260a761a27e784fb4d3ddee3ff7dfd20b9901afbfd5c6cb6c07414f170af2796
I0429 08:48:20.193205 1 recorder.go:75] Recording config/pod/openshift-ovn-kubernetes/logs/ovnkube-node-dmqw5/ovnkube-controller_current.log with fingerprint=8e07846409611a3f90474d5071c3cb5b706ab86f6a4a56db1f1f26d25e026e38
I0429 08:48:20.193264 1 recorder.go:75] Recording config/pod/openshift-ovn-kubernetes/logs/ovnkube-node-tqqdh/ovn-controller_current.log with fingerprint=54966ee5c6ffabae896686f242bb29f2a075f13ee5dfd5533393921b8b388f2d
I0429 08:48:20.193287 1 recorder.go:75] Recording config/pod/openshift-ovn-kubernetes/logs/ovnkube-node-tqqdh/ovn-acl-logging_current.log with fingerprint=7e7677dcf3d7809e1971d21c5d70dd59f2f9692baa54ddcc208e0a54d0bb9ae8
I0429 08:48:20.193314 1 recorder.go:75] Recording config/pod/openshift-ovn-kubernetes/logs/ovnkube-node-tqqdh/kube-rbac-proxy-node_current.log with fingerprint=63d186965ff3bd3a64b0e5f0e2f0c3f89d5b668151edb9e6633f43cb73a77d1d
I0429 08:48:20.193338 1 recorder.go:75] Recording config/pod/openshift-ovn-kubernetes/logs/ovnkube-node-tqqdh/kube-rbac-proxy-ovn-metrics_current.log with fingerprint=f62e2ec511cca117fd05e584fd1a6c1bf7b3422679990033cf78a946a90a70f4
I0429 08:48:20.193358 1 recorder.go:75] Recording config/pod/openshift-ovn-kubernetes/logs/ovnkube-node-tqqdh/northd_current.log with fingerprint=706b4b61be50fa66b7e52a402305f34a8fc98b14d30ea574ad54fcf5ee9477d1
I0429 08:48:20.193372 1 recorder.go:75] Recording config/pod/openshift-ovn-kubernetes/logs/ovnkube-node-tqqdh/nbdb_current.log with fingerprint=a199e4613e391ff6dd196dcae7fc1cd625b529f4dff291ca386aacf2378f43df
I0429 08:48:20.193386 1 recorder.go:75] Recording config/pod/openshift-ovn-kubernetes/logs/ovnkube-node-tqqdh/sbdb_current.log with fingerprint=d4703477f6abed2ba15a989515567faddf970c56545201d2ab5d43b534831135
I0429 08:48:20.193503 1 recorder.go:75] Recording config/pod/openshift-ovn-kubernetes/logs/ovnkube-node-tqqdh/ovnkube-controller_current.log with fingerprint=c08b4e2a109aa6a0012a23f86f6681f676b11964d9efecc7f38193f1e1072d53
I0429 08:48:20.193522 1 recorder.go:75] Recording config/pod/openshift-network-diagnostics/logs/network-check-source-6b8cd5b79b-j4s7t/check-endpoints_current.log with fingerprint=44507f937bbe2b538488e63e0d2993c3bdb9dad76352263c6e63104d6338bc8d
I0429 08:48:20.193601 1 recorder.go:75] Recording config/pod/openshift-network-diagnostics/network-check-target-6h257 with fingerprint=e6d50e00b8194bb8b735311a7a7d592328c7ff10bab65f73536ec988e00de07d
I0429 08:48:20.193658 1 recorder.go:75] Recording config/pod/openshift-network-diagnostics/network-check-target-8ghsw with fingerprint=1e425766452a2c5fe4c5b4b3108bd3526b7fec053f4e18af062b1f72cd9b3ae0
I0429 08:48:20.193710 1 recorder.go:75] Recording config/pod/openshift-network-diagnostics/network-check-target-r8v5g with fingerprint=517c95b957ef2d87c09ffc92d71fcd61f7d21ad4fe34534340b9d604dc78c8e7
I0429 08:48:20.193771 1 recorder.go:75] Recording config/pod/openshift-network-console/networking-console-plugin-6ddbfdf749-bvqxv with fingerprint=5b48fef78437cb01068780994db5174a6b9ba4258c85eb1b62a7349de66c28c9
I0429 08:48:20.193832 1 recorder.go:75] Recording config/pod/openshift-network-console/networking-console-plugin-6ddbfdf749-ls7vd with fingerprint=e74335b68fd9a38f4cd867cae404dc34617cae3dcdce5c9b4e7f1683edc77eef
I0429 08:48:20.193842 1 recorder.go:75] Recording config/pod/openshift-network-operator/logs/iptables-alerter-jwb98/iptables-alerter_current.log with fingerprint=5635505ade78d69a268b25d9122852174b6edf3d006596cd1379e29f0d7d3be0
I0429 08:48:20.193851 1 recorder.go:75] Recording config/pod/openshift-network-operator/logs/iptables-alerter-nxl62/iptables-alerter_current.log with fingerprint=1c4563304b6e26a5cc845f2dde852077b07d41d46e005592a687b6e9c73d0da7
I0429 08:48:20.193855 1 recorder.go:75] Recording config/pod/openshift-network-operator/logs/iptables-alerter-rkjlz/iptables-alerter_current.log with fingerprint=eec63f8b8bc2966a3c9b829b63fc92038e46bf6a969da9289fd676c14a83e524
I0429 08:48:20.193861 1 gather.go:177] gatherer "clusterconfig" function "operators_pods_and_events" took 22.056019866s to process 83 records
E0429 08:48:20.193954 1 periodic.go:247] "Unhandled Error" err="clusterconfig failed after 22.062s with: function \"pod_network_connectivity_checks\" failed with an error, function \"machines\" failed with an error, function \"machine_healthchecks\" failed with an error, function \"support_secret\" failed with an error, function \"config_maps\" failed with an error, function \"ingress_certificates\" failed with an error, function \"dvo_metrics\" failed with an error, unable to record function \"operators_pods_and_events\" record \"config/pod/openshift-multus/multus-7xbbs.json\", unable to record function \"operators_pods_and_events\" record \"config/pod/openshift-multus/multus-pj9tl.json\", unable to record function \"operators_pods_and_events\" record \"config/pod/openshift-multus/multus-x8xdx.json\", unable to record function \"operators_pods_and_events\" record \"config/pod/openshift-ovn-kubernetes/ovnkube-node-dmqw5.json\""
I0429 08:48:20.195067 1 controllerstatus.go:89] name=periodic-clusterconfig healthy=false reason=PeriodicGatherFailed message=Source clusterconfig could not be retrieved: function "pod_network_connectivity_checks" failed with an error, function "machines" failed with an error, function "machine_healthchecks" failed with an error, function "support_secret" failed with an error, function "config_maps" failed with an error, function "ingress_certificates" failed with an error, function "dvo_metrics" failed with an error, unable to record function "operators_pods_and_events" record "config/pod/openshift-multus/multus-7xbbs.json", unable to record function "operators_pods_and_events" record "config/pod/openshift-multus/multus-pj9tl.json", unable to record function "operators_pods_and_events" record "config/pod/openshift-multus/multus-x8xdx.json", unable to record function "operators_pods_and_events" record "config/pod/openshift-ovn-kubernetes/ovnkube-node-dmqw5.json"
I0429 08:48:20.195085 1 periodic.go:209] Running workloads gatherer
I0429 08:48:20.195099 1 tasks_processing.go:45] number of workers: 2
I0429 08:48:20.195105 1 tasks_processing.go:69] worker 1 listening for tasks.
I0429 08:48:20.195109 1 tasks_processing.go:71] worker 1 working on workload_info task.
I0429 08:48:20.195112 1 tasks_processing.go:69] worker 0 listening for tasks.
I0429 08:48:20.195131 1 tasks_processing.go:71] worker 0 working on helmchart_info task.
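Aside: the recorder lines above store each gathered object under an archive path together with a SHA-256 content fingerprint, and the E/W pair for ovnkube-node-dmqw5.json shows what is reported when two gathered objects serialize to the same record name. The following is a minimal, illustrative Go sketch of that name-plus-fingerprint bookkeeping; it is not the operator's actual recorder, and all type and function names in it are hypothetical.

// fingerprint_sketch.go: hypothetical illustration of name + content-fingerprint
// bookkeeping similar to what the recorder.go:75 / gather.go:161 messages imply.
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// record pairs an archive path ("name") with serialized content.
type record struct {
	Name string
	Data []byte
}

// memoryRecorder remembers the fingerprint stored under each name so that a
// second record with the same name can be reported before it replaces the first.
type memoryRecorder struct {
	fingerprints map[string]string
}

// fingerprint returns the hex-encoded SHA-256 of the record content.
func fingerprint(data []byte) string {
	sum := sha256.Sum256(data)
	return hex.EncodeToString(sum[:])
}

func (r *memoryRecorder) Record(rec record) error {
	fp := fingerprint(rec.Data)
	prev, seen := r.fingerprints[rec.Name]
	r.fingerprints[rec.Name] = fp // later records overwrite earlier ones
	if seen {
		// Same name seen twice: surface it, mirroring the
		// "was already recorded ... overwriting" error in the log.
		return fmt.Errorf("record %q was already recorded with fingerprint %q, overwriting with %q",
			rec.Name, prev, fp)
	}
	fmt.Printf("Recording %s with fingerprint=%s\n", rec.Name, fp)
	return nil
}

func main() {
	r := &memoryRecorder{fingerprints: map[string]string{}}
	pod := record{
		Name: "config/pod/openshift-ovn-kubernetes/ovnkube-node-dmqw5.json",
		Data: []byte(`{"kind":"Pod"}`),
	}
	_ = r.Record(pod)
	if err := r.Record(pod); err != nil {
		fmt.Println("error recording:", err) // duplicate name, identical fingerprint
	}
}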
I0429 08:48:20.218233 1 gather_workloads_info.go:278] Loaded pods in 0s, will wait 22s for image data
I0429 08:48:20.218411 1 tasks_processing.go:74] worker 0 stopped.
I0429 08:48:20.218424 1 gather.go:177] gatherer "workloads" function "helmchart_info" took 23.261163ms to process 0 records
I0429 08:48:20.244566 1 gather_workloads_info.go:387] No image sha256:765f0d23b637f685f98a31bd47c131b03cf72a40761a3f9a9d6320faa3c33733 (27ms)
I0429 08:48:20.252994 1 gather_workloads_info.go:387] No image sha256:80748ba08e1c264a8c105e7f607eff386a66378e024443a844993ee9292858c1 (8ms)
I0429 08:48:20.260910 1 gather_workloads_info.go:387] No image sha256:2904a78e2eb73fd6a9bb94c105c2a056831fb4113fbb7b0607c50adc9d879c9b (8ms)
I0429 08:48:20.268763 1 gather_workloads_info.go:387] No image sha256:5a95c19d82767e0235b4edb4a0536482c816904897aae1dc3eb255cb52b87a9f (8ms)
I0429 08:48:20.276335 1 gather_workloads_info.go:387] No image sha256:e84cb128d930bd1ab867cc89b7b7bf2b2c0e41105ab93b5381069945b3ee9c57 (8ms)
I0429 08:48:20.285476 1 gather_workloads_info.go:387] No image sha256:289816958633a763a72dbc44e1dad40466223164e7e253039514f0d974ea5d21 (9ms)
I0429 08:48:20.292988 1 gather_workloads_info.go:387] No image sha256:653c666f842c13e0baae2e89a9b1efe0e2ef56f621ffb5b32005115d2a26ab8c (8ms)
I0429 08:48:20.300304 1 gather_workloads_info.go:387] No image sha256:2598489729a4b258e4ecda4a06f6875133f2a10ced5c5241f8a57a8a05418e36 (7ms)
I0429 08:48:20.307634 1 gather_workloads_info.go:387] No image sha256:91828234f107c068c8a4966d08370ae7b73e637651dbc6d92c18c4553402c22c (7ms)
I0429 08:48:20.315052 1 gather_workloads_info.go:387] No image sha256:a56211d075aa43cbb491f669a5b2e46ee023dc95b7d51dbac28f463948c5ad61 (7ms)
I0429 08:48:20.325963 1 gather_workloads_info.go:387] No image sha256:b3909bf664c77097f75b3768830863d642eed3815dab2bfb4415c771ca2d5007 (11ms)
I0429 08:48:20.427161 1 gather_workloads_info.go:387] No image sha256:47154813651033d59751fb655a384dbffb64dd26f10bd7f3be0c3128d0486356 (101ms)
I0429 08:48:20.527523 1 gather_workloads_info.go:387] No image sha256:a0105d1eb62cf6ac9e5e2ef28d3e89bf6dc514bc594fc7090fe5a5ee18a09c87 (100ms)
I0429 08:48:20.627229 1 gather_workloads_info.go:387] No image sha256:a258c226562adb14e3a163a1940938526ee6a0928982a7667d85d9a7334ce639 (100ms)
I0429 08:48:20.733405 1 gather_workloads_info.go:387] No image sha256:5f0b67cfbbc381243fb91ccc17345b56d05f4d717c667e8c644e5bf05633ba71 (106ms)
I0429 08:48:20.833775 1 gather_workloads_info.go:387] No image sha256:36b9e89c3cfcf1ab9ae500486e38afb6862cba48cb0b4d84a09508ab8f3d299f (100ms)
I0429 08:48:20.942352 1 gather_workloads_info.go:387] No image sha256:4556896f77307821531ef91b7b7faccb82b824ea695693b2989f597f0deca038 (109ms)
I0429 08:48:21.042118 1 gather_workloads_info.go:387] No image sha256:2e564f336c77116053f34d4201d364d8da04e789cfffa0ea422574c95f2d6404 (100ms)
I0429 08:48:21.132389 1 gather_workloads_info.go:387] No image sha256:7b31223098f08328f5ddea8e5b871dbbd5f5a61ec550e8956f66793c0c6031a9 (90ms)
I0429 08:48:21.237506 1 gather_workloads_info.go:387] No image sha256:ae7d3453fd734ecc865e5f9bb16f29244ebffe6291b77e1d4e496f71eb053174 (105ms)
I0429 08:48:21.332914 1 gather_workloads_info.go:387] No image sha256:a498046d64605bcccee2440aa4f04a4602baaae263cf01d977ec5208e876b1fd (95ms)
I0429 08:48:21.449018 1 gather_workloads_info.go:387] No image sha256:c940ea87e7d133d75ba0002ef00c0806825eed3db8094cdb260d1bac18127733 (116ms)
I0429 08:48:21.552718 1 gather_workloads_info.go:387] No image sha256:2e57e192c3c1240fd935dcd55c8fde5e70e78bf81d6176c96edf21fafe59f8ba (104ms)
I0429 08:48:21.649013 1 gather_workloads_info.go:387] No image sha256:695cf2f0cc07683c2a3ce1eaf3e56fe18abc6e2bac716f7d9843f5d173b9df52 (96ms)
I0429 08:48:21.735476 1 gather_workloads_info.go:387] No image sha256:0a99240166165eb5718e7516a43282fe32df9c7c5e809b31b58abe44e42ff94d (86ms)
I0429 08:48:21.830492 1 gather_workloads_info.go:387] No image sha256:56a85660a445eced5c79a595a0eccf590087c5672d50f49d4c25ad52f9a44f04 (95ms)
I0429 08:48:21.929501 1 gather_workloads_info.go:387] No image sha256:521712486e2c6e3c020dad6a1cb340db8e55665b69f7c208fab9cd9e965fd588 (99ms)
I0429 08:48:22.028301 1 gather_workloads_info.go:387] No image sha256:03cf4cd7ef1518610c6c7b3ad27d1622d82e98e3dc6e3f8e5d0fceb5c8d3786e (99ms)
I0429 08:48:22.052895 1 configmapobserver.go:84] configmaps "insights-config" not found
I0429 08:48:22.128712 1 gather_workloads_info.go:387] No image sha256:943018739e3db1763c3184b460dbc409e058abbac76d57b9927faad317be85e4 (100ms)
I0429 08:48:22.226480 1 gather_workloads_info.go:387] No image sha256:ca1344cb64140188b7cae7bbc51fb751566c0b0c97d5e39b5850e628032c4a5e (98ms)
I0429 08:48:22.326540 1 gather_workloads_info.go:387] No image sha256:7adc1eab05d6724c76ba751f6df816b08d6e70b78dee9eb94fa6fd9690542c98 (100ms)
I0429 08:48:22.426558 1 gather_workloads_info.go:387] No image sha256:50197f22710766515f67944a779e00dd9ae3d17b18732d7324a970353b11f292 (100ms)
I0429 08:48:22.526890 1 gather_workloads_info.go:387] No image sha256:1a2532940843248c57d52141185dd71fbc393ab28b65d48f682038632c1dbbad (100ms)
I0429 08:48:22.526919 1 tasks_processing.go:74] worker 1 stopped.
E0429 08:48:22.526930 1 gather.go:140] gatherer "workloads" function "workload_info" failed with the error: no running pods found for the insights-runtime-extractor statefulset
I0429 08:48:22.527313 1 recorder.go:75] Recording config/workload_info with fingerprint=7f60c39109137e172a0da8b69cec703240e5af75cefba32a288741601d00d96d
I0429 08:48:22.527330 1 gather.go:177] gatherer "workloads" function "workload_info" took 2.331802841s to process 1 records
E0429 08:48:22.527365 1 periodic.go:247] "Unhandled Error" err="workloads failed after 2.332s with: function \"workload_info\" failed with an error"
I0429 08:48:22.528484 1 controllerstatus.go:89] name=periodic-workloads healthy=false reason=PeriodicGatherFailed message=Source workloads could not be retrieved: function "workload_info" failed with an error
I0429 08:48:22.528498 1 periodic.go:209] Running conditional gatherer
I0429 08:48:22.533607 1 requests.go:294] Making HTTP GET request at: https://console.redhat.com/api/gathering/v2/4.20.8/gathering_rules
I0429 08:48:22.540214 1 conditional_gatherer.go:107] Get "https://console.redhat.com/api/gathering/v2/4.20.8/gathering_rules": dial tcp: lookup console.redhat.com on 172.30.0.10:53: read udp 10.129.0.11:60031->172.30.0.10:53: read: connection refused
E0429 08:48:22.540481 1 conditional_gatherer.go:322] unable to update alerts cache: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory
I0429 08:48:22.540539 1 conditional_gatherer.go:384] updating version cache for conditional gatherer
I0429 08:48:22.545897 1 conditional_gatherer.go:392] cluster version is '4.20.8'
E0429 08:48:22.545913 1 conditional_gatherer.go:209] error checking conditions for a gathering rule: alerts cache is missing
E0429 08:48:22.545918 1 conditional_gatherer.go:209] error checking conditions for a gathering rule: alerts cache is missing
E0429 08:48:22.545921 1 conditional_gatherer.go:209] error checking conditions for a gathering rule: alerts cache is missing
E0429 08:48:22.545925 1 conditional_gatherer.go:209] error checking conditions for a gathering rule: alerts cache is missing
E0429 08:48:22.545928 1 conditional_gatherer.go:209] error checking conditions for a gathering rule: alerts cache is missing
E0429 08:48:22.545931 1 conditional_gatherer.go:209] error checking conditions for a gathering rule: alerts cache is missing
E0429 08:48:22.545933 1 conditional_gatherer.go:209] error checking conditions for a gathering rule: alerts cache is missing
E0429 08:48:22.545937 1 conditional_gatherer.go:209] error checking conditions for a gathering rule: alerts cache is missing
E0429 08:48:22.545940 1 conditional_gatherer.go:209] error checking conditions for a gathering rule: alerts cache is missing
I0429 08:48:22.545956 1 tasks_processing.go:45] number of workers: 3
I0429 08:48:22.545975 1 tasks_processing.go:69] worker 2 listening for tasks.
I0429 08:48:22.545980 1 tasks_processing.go:71] worker 2 working on conditional_gatherer_rules task.
I0429 08:48:22.545978 1 tasks_processing.go:69] worker 0 listening for tasks.
I0429 08:48:22.545983 1 tasks_processing.go:69] worker 1 listening for tasks.
I0429 08:48:22.545993 1 tasks_processing.go:74] worker 1 stopped.
I0429 08:48:22.545995 1 tasks_processing.go:71] worker 0 working on remote_configuration task.
I0429 08:48:22.546003 1 tasks_processing.go:71] worker 2 working on rapid_container_logs task.
I0429 08:48:22.546063 1 recorder.go:75] Recording insights-operator/conditional-gatherer-rules with fingerprint=7034af97e7e41c22e4b775abdd4b9066c8ebb19da33eb7f69f39bfd2eb5f6406
I0429 08:48:22.546075 1 gather.go:177] gatherer "conditional" function "conditional_gatherer_rules" took 694ns to process 1 records
I0429 08:48:22.546105 1 recorder.go:75] Recording insights-operator/remote-configuration with fingerprint=0394430c431eec4d48bb1811a90918e95161d2282c59af26f2473613cc0959db
I0429 08:48:22.546112 1 gather.go:177] gatherer "conditional" function "remote_configuration" took 1.389µs to process 1 records
I0429 08:48:22.546118 1 tasks_processing.go:74] worker 0 stopped.
I0429 08:48:22.546252 1 tasks_processing.go:74] worker 2 stopped.
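Aside: the conditional gatherer above formats the cluster version into the remote-configuration endpoint and the GET then fails because cluster DNS cannot resolve console.redhat.com. The sketch below, which is only a hedged illustration and not the operator's conditional_gatherer code (the fetchGatheringRules name and error handling are assumptions), shows how such a request can be built and how the DNS failure can be told apart from other errors.

// remote_config_sketch.go: hypothetical client for a version-templated rules endpoint.
package main

import (
	"errors"
	"fmt"
	"io"
	"net"
	"net/http"
	"time"
)

const endpointTemplate = "https://console.redhat.com/api/gathering/v2/%s/gathering_rules"

// fetchGatheringRules substitutes the cluster version into the endpoint and
// distinguishes DNS failures from other transport or HTTP errors.
func fetchGatheringRules(clusterVersion string) ([]byte, error) {
	url := fmt.Sprintf(endpointTemplate, clusterVersion)
	client := &http.Client{Timeout: 30 * time.Second}
	resp, err := client.Get(url)
	if err != nil {
		var dnsErr *net.DNSError
		if errors.As(err, &dnsErr) {
			// Cluster DNS could not resolve the endpoint, as in the
			// "lookup console.redhat.com on 172.30.0.10:53 ... connection refused" line.
			return nil, fmt.Errorf("remote configuration unavailable (DNS lookup of %q failed): %w", dnsErr.Name, err)
		}
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("unexpected status %s from %s", resp.Status, url)
	}
	return io.ReadAll(resp.Body)
}

func main() {
	if _, err := fetchGatheringRules("4.20.8"); err != nil {
		fmt.Println("conditional gatherer:", err)
	}
}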
I0429 08:48:22.546264 1 gather.go:177] gatherer "conditional" function "rapid_container_logs" took 240.685µs to process 0 records
I0429 08:48:22.546291 1 controllerstatus.go:89] name=periodic-conditional healthy=false reason=NotAvailable message=Get "https://console.redhat.com/api/gathering/v2/4.20.8/gathering_rules": dial tcp: lookup console.redhat.com on 172.30.0.10:53: read udp 10.129.0.11:60031->172.30.0.10:53: read: connection refused
I0429 08:48:22.546310 1 recorder.go:75] Recording insights-operator/remote-configuration.json with fingerprint=359de9c990c741675cec72fda96b5c3682221efdb4799f5eaa6e9805bcd3b5c1
I0429 08:48:22.566853 1 recorder.go:75] Recording insights-operator/gathers with fingerprint=2854e6391f1c58bee95dd0acd4e2e8b5cd7162042085eac51ae8edb4c73798ba
I0429 08:48:22.566992 1 diskrecorder.go:70] Writing 177 records to /var/lib/insights-operator/insights-2026-04-29-084822.tar.gz
I0429 08:48:22.586476 1 diskrecorder.go:51] Wrote 177 records to disk in 19ms
I0429 08:48:22.586515 1 periodic.go:278] Gathering cluster info every 2h0m0s
I0429 08:48:22.586535 1 periodic.go:279] Configuration is dataReporting: interval: 2h0m0s, uploadEndpoint: https://console.redhat.com/api/ingress/v1/upload, storagePath: /var/lib/insights-operator, downloadEndpoint: https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/reports, conditionalGathererEndpoint: https://console.redhat.com/api/gathering/v2/%s/gathering_rules, obfuscation: [] sca: disabled: false, endpoint: https://api.openshift.com/api/accounts_mgmt/v1/entitlement_certificates, interval: 8h0m0s alerting: disabled: false clusterTransfer: endpoint: https://api.openshift.com/api/accounts_mgmt/v1/cluster_transfers/, interval: 12h0m0s proxy: httpProxy: , httpsProxy: , noProxy:
I0429 08:49:28.132624 1 diskrecorder.go:223] Found files to send: insights-2026-04-29-084822.tar.gz
I0429 08:49:28.132919 1 insightsuploader.go:150] Checking archives to upload periodically every 15m46.160056122s
I0429 08:49:28.132930 1 insightsuploader.go:165] Uploading latest report since 0001-01-01T00:00:00Z
I0429 08:49:28.143424 1 requests.go:46] Uploading application/vnd.redhat.openshift.periodic to https://console.redhat.com/api/ingress/v1/upload
I0429 08:49:28.415425 1 requests.go:87] Successfully reported id=2026-04-29T08:49:28Z x-rh-insights-request-id=70384eee2cc041d9ac2d426ff773f2fb, wrote=110887
I0429 08:49:28.415500 1 insightsuploader.go:187] Uploaded report successfully in 282.560232ms
I0429 08:49:28.415523 1 controller.go:128] Initializing last reported time to 2026-04-29T08:49:28Z
I0429 08:49:28.415593 1 insightsreport.go:304] Archive uploaded, starting pulling report...
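Aside: the diskrecorder lines show the gathered records being packed into a timestamped insights-*.tar.gz under /var/lib/insights-operator before upload. The sketch below is only a rough illustration of that packing step under the assumption that records are path-to-bytes pairs; the writeArchive function is hypothetical and not the operator's diskrecorder.

// diskrecorder_sketch.go: hypothetical packing of records into a gzip-compressed tar archive.
package main

import (
	"archive/tar"
	"compress/gzip"
	"fmt"
	"os"
	"time"
)

// writeArchive stores each record under its archive path inside a .tar.gz file.
func writeArchive(path string, records map[string][]byte) error {
	f, err := os.Create(path)
	if err != nil {
		return err
	}
	defer f.Close()

	gz := gzip.NewWriter(f)
	defer gz.Close()
	tw := tar.NewWriter(gz)
	defer tw.Close()

	for name, data := range records {
		hdr := &tar.Header{
			Name:    name,
			Mode:    0o644,
			Size:    int64(len(data)),
			ModTime: time.Now(),
		}
		if err := tw.WriteHeader(hdr); err != nil {
			return err
		}
		if _, err := tw.Write(data); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	// Placeholder records; the real archive holds the gathered cluster data.
	records := map[string][]byte{
		"config/workload_info":                   []byte(`{}`),
		"insights-operator/gathers":              []byte(`{}`),
		"insights-operator/remote-configuration": []byte(`{}`),
	}
	name := fmt.Sprintf("insights-%s.tar.gz", time.Now().UTC().Format("2006-01-02-150405"))
	if err := writeArchive(name, records); err != nil {
		fmt.Fprintln(os.Stderr, "write archive:", err)
		os.Exit(1)
	}
	fmt.Printf("Wrote %d records to %s\n", len(records), name)
}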
I0429 08:49:28.415606 1 insightsreport.go:215] Starting retrieving report from Smart Proxy
I0429 08:49:28.415614 1 insightsreport.go:221] Initial delay for pulling: 1m0s
I0429 08:49:28.420670 1 controller.go:512] The operator is healthy
I0429 08:49:32.856145 1 observer_polling.go:111] Observed file "/var/run/configmaps/service-ca-bundle/service-ca.crt" has been created (hash="0b2ab047c346ae908eba569771a7b6ba85bf0c02494281a80a32cb6535b689a8")
W0429 08:49:32.856176 1 builder.go:160] Restart triggered because of file /var/run/configmaps/service-ca-bundle/service-ca.crt was created
I0429 08:49:32.856224 1 observer_polling.go:111] Observed file "/var/run/secrets/serving-cert/tls.crt" has been created (hash="e3eea2169c8908d6c82b9b304a6e8b4a1c9934371148d213b2ddc9f27d13515c")
I0429 08:49:32.856227 1 simple_featuregate_reader.go:177] Shutting down feature-gate-detector
I0429 08:49:32.856253 1 genericapiserver.go:693] "[graceful-termination] pre-shutdown hooks completed" name="PreShutdownHooksStopped"
I0429 08:49:32.856264 1 observer_polling.go:111] Observed file "/var/run/secrets/serving-cert/tls.key" has been created (hash="8515db034a45e680c4eb569a135dcebf901de26c2edd760bc5739d4d4d71a578")
I0429 08:49:32.856282 1 periodic.go:170] Shutting down
E0429 08:49:32.856305 1 controller.go:299] Unable to write cluster operator status: client rate limiter Wait returned an error: context canceled
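Aside: the observer_polling lines report that the service CA bundle and serving certificate files appeared after startup, with a hash recorded for each, and the builder then triggers a restart so the new files are picked up. The following is a hedged sketch of that kind of polling file observer: hash a watched path on an interval and report when it appears or changes. The hashFile and watch functions are assumptions for illustration, not the operator's observer, and the real operator reacts by restarting itself rather than just logging.

// observer_polling_sketch.go: hypothetical polling file observer using content hashes.
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"os"
	"time"
)

// hashFile returns the hex SHA-256 of the file, or "" if it does not exist yet.
func hashFile(path string) (string, error) {
	data, err := os.ReadFile(path)
	if os.IsNotExist(err) {
		return "", nil
	}
	if err != nil {
		return "", err
	}
	sum := sha256.Sum256(data)
	return hex.EncodeToString(sum[:]), nil
}

// watch polls the path and invokes onChange whenever the hash differs from the last poll.
func watch(path string, interval time.Duration, onChange func(hash string)) error {
	last, err := hashFile(path)
	if err != nil {
		return err
	}
	for {
		time.Sleep(interval)
		cur, err := hashFile(path)
		if err != nil {
			return err
		}
		if cur != last {
			onChange(cur)
			last = cur
		}
	}
}

func main() {
	path := "/var/run/configmaps/service-ca-bundle/service-ca.crt"
	err := watch(path, 5*time.Second, func(hash string) {
		fmt.Printf("Observed file %q has been created or changed (hash=%q)\n", path, hash)
	})
	if err != nil {
		fmt.Fprintln(os.Stderr, "observer:", err)
	}
}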