W0327 15:22:24.167183 1 cmd.go:257] Using insecure, self-signed certificates
I0327 15:22:24.858479 1 start.go:138] Unable to read service ca bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory
I0327 15:22:24.858811 1 observer_polling.go:159] Starting file observer
I0327 15:22:25.730128 1 operator.go:60] Starting insights-operator v0.0.0-master+$Format:%H$
I0327 15:22:25.730379 1 legacy_config.go:327] Current config: {"report":false,"storagePath":"/var/lib/insights-operator","interval":"2h","endpoint":"https://console.redhat.com/api/ingress/v1/upload","conditionalGathererEndpoint":"https://console.redhat.com/api/gathering/v2/%s/gathering_rules","pull_report":{"endpoint":"https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/reports","delay":"60s","timeout":"3000s","min_retry":"30s"},"impersonate":"system:serviceaccount:openshift-insights:gather","enableGlobalObfuscation":false,"ocm":{"scaEndpoint":"https://api.openshift.com/api/accounts_mgmt/v1/entitlement_certificates","scaInterval":"8h","scaDisabled":false,"clusterTransferEndpoint":"https://api.openshift.com/api/accounts_mgmt/v1/cluster_transfers/","clusterTransferInterval":"12h"},"disableInsightsAlerts":false,"processingStatusEndpoint":"https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/request/%s/status","reportEndpointTechPreview":"https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/request/%s/report"}
I0327 15:22:25.730903 1 secure_serving.go:57] Forcing use of http/1.1 only
W0327 15:22:25.730925 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.
W0327 15:22:25.730930 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.
W0327 15:22:25.730935 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.
W0327 15:22:25.730939 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.
W0327 15:22:25.730942 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.
W0327 15:22:25.730945 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.
I0327 15:22:25.731038 1 simple_featuregate_reader.go:171] Starting feature-gate-detector
I0327 15:22:25.735172 1 operator.go:125] FeatureGates initialized: knownFeatureGates=[AdditionalRoutingCapabilities AdminNetworkPolicy AlibabaPlatform AzureWorkloadIdentity BuildCSIVolumes CPMSMachineNamePrefix ConsolePluginContentSecurityPolicy ExternalOIDC ExternalOIDCWithUIDAndExtraClaimMappings GatewayAPI GatewayAPIController HighlyAvailableArbiter ImageVolume IngressControllerLBSubnetsAWS KMSv1 MachineConfigNodes ManagedBootImages ManagedBootImagesAWS MetricsCollectionProfiles NetworkDiagnosticsConfig NetworkLiveMigration NetworkSegmentation PinnedImages ProcMountType RouteAdvertisements RouteExternalCertificate ServiceAccountTokenNodeBinding SetEIPForNLBIngressController SigstoreImageVerification StoragePerformantSecurityPolicy UpgradeStatus UserNamespacesPodSecurityStandards UserNamespacesSupport VSphereMultiDisk VSphereMultiNetworks AWSClusterHostedDNS AWSClusterHostedDNSInstall AWSDedicatedHosts AWSServiceLBNetworkSecurityGroup AutomatedEtcdBackup AzureClusterHostedDNSInstall AzureDedicatedHosts AzureMultiDisk BootImageSkewEnforcement BootcNodeManagement ClusterAPIInstall ClusterAPIInstallIBMCloud ClusterMonitoringConfig ClusterVersionOperatorConfiguration DNSNameResolver DualReplica DyanmicServiceEndpointIBMCloud DynamicResourceAllocation EtcdBackendQuota EventedPLEG Example Example2 ExternalSnapshotMetadata GCPClusterHostedDNS GCPClusterHostedDNSInstall GCPCustomAPIEndpoints GCPCustomAPIEndpointsInstall ImageModeStatusReporting ImageStreamImportMode IngressControllerDynamicConfigurationManager InsightsConfig InsightsConfigAPI InsightsOnDemandDataGather IrreconcilableMachineConfig KMSEncryptionProvider MachineAPIMigration MachineAPIOperatorDisableMachineHealthCheckController ManagedBootImagesAzure ManagedBootImagesvSphere MaxUnavailableStatefulSet MinimumKubeletVersion MixedCPUsAllocation MultiArchInstallAzure MultiDiskSetup MutatingAdmissionPolicy NewOLM NewOLMCatalogdAPIV1Metas NewOLMOwnSingleNamespace NewOLMPreflightPermissionChecks NewOLMWebhookProviderOpenshiftServiceCA NoRegistryClusterOperations NodeSwap NutanixMultiSubnets OVNObservability OpenShiftPodSecurityAdmission PreconfiguredUDNAddresses SELinuxMount ShortCertRotation SignatureStores SigstoreImageVerificationPKI TranslateStreamCloseWebsocketRequests VSphereConfigurableMaxAllowedBlockVolumesPerNode VSphereHostVMGroupZonal VSphereMixedNodeEnv VolumeAttributesClass VolumeGroupSnapshot]
I0327 15:22:25.735258 1 event.go:377] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-insights", Name:"insights-operator", UID:"325d3f84-3dd3-4f9e-9b58-24fa14dd2703", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'FeatureGatesInitialized' FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdditionalRoutingCapabilities", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BuildCSIVolumes", "CPMSMachineNamePrefix", "ConsolePluginContentSecurityPolicy", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GatewayAPI", "GatewayAPIController", "HighlyAvailableArbiter", "ImageVolume", "IngressControllerLBSubnetsAWS", "KMSv1", "MachineConfigNodes", "ManagedBootImages", "ManagedBootImagesAWS", "MetricsCollectionProfiles", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "PinnedImages", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SetEIPForNLBIngressController", "SigstoreImageVerification", "StoragePerformantSecurityPolicy", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiDisk", "VSphereMultiNetworks"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AWSClusterHostedDNSInstall", "AWSDedicatedHosts", "AWSServiceLBNetworkSecurityGroup", "AutomatedEtcdBackup", "AzureClusterHostedDNSInstall", "AzureDedicatedHosts", "AzureMultiDisk", "BootImageSkewEnforcement", "BootcNodeManagement", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ClusterVersionOperatorConfiguration", "DNSNameResolver", "DualReplica", "DyanmicServiceEndpointIBMCloud", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "Example2", "ExternalSnapshotMetadata", "GCPClusterHostedDNS", "GCPClusterHostedDNSInstall", "GCPCustomAPIEndpoints", "GCPCustomAPIEndpointsInstall", "ImageModeStatusReporting", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "IrreconcilableMachineConfig", "KMSEncryptionProvider", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "ManagedBootImagesAzure", "ManagedBootImagesvSphere", "MaxUnavailableStatefulSet", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "MultiDiskSetup", "MutatingAdmissionPolicy", "NewOLM", "NewOLMCatalogdAPIV1Metas", "NewOLMOwnSingleNamespace", "NewOLMPreflightPermissionChecks", "NewOLMWebhookProviderOpenshiftServiceCA", "NoRegistryClusterOperations", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PreconfiguredUDNAddresses", "SELinuxMount", "ShortCertRotation", "SignatureStores", "SigstoreImageVerificationPKI", "TranslateStreamCloseWebsocketRequests", "VSphereConfigurableMaxAllowedBlockVolumesPerNode", "VSphereHostVMGroupZonal", "VSphereMixedNodeEnv", "VolumeAttributesClass", "VolumeGroupSnapshot"}}
I0327 15:22:25.739642 1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
I0327 15:22:25.739662 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController
I0327 15:22:25.739664 1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I0327 15:22:25.739665 1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
I0327 15:22:25.739695 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0327 15:22:25.739710 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I0327 15:22:25.739963 1 dynamic_serving_content.go:135] "Starting controller" name="serving-cert::/tmp/serving-cert-540797896/tls.crt::/tmp/serving-cert-540797896/tls.key"
I0327 15:22:25.740193 1 secure_serving.go:213] Serving securely on [::]:8443
I0327 15:22:25.740236 1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
W0327 15:22:25.740730 1 configmapobserver.go:64] Cannot get the configuration config map: configmaps "insights-config" not found. Default configuration is used.
I0327 15:22:25.740752 1 secretconfigobserver.go:216] Legacy configuration set: enabled=false endpoint=https://console.redhat.com/api/ingress/v1/upload conditional_gatherer_endpoint=https://console.redhat.com/api/gathering/v2/%s/gathering_rules interval=2h0m0s token=false reportEndpoint=https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/reports initialPollingDelay=1m0s minRetryTime=30s pollingTimeout=50m0s processingStatusEndpoint=https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/request/%s/status
I0327 15:22:25.740857 1 base_controller.go:76] Waiting for caches to sync for ConfigController
I0327 15:22:25.746943 1 secretconfigobserver.go:249] Found cloud.openshift.com token
I0327 15:22:25.746968 1 secretconfigobserver.go:204] Legacy configuration updated: enabled=true endpoint=https://console.redhat.com/api/ingress/v1/upload conditional_gatherer_endpoint=https://console.redhat.com/api/gathering/v2/%s/gathering_rules interval=2h0m0s token=true reportEndpoint=https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/reports initialPollingDelay=1m0s minRetryTime=30s pollingTimeout=50m0s processingStatusEndpoint=https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/request/%s/status
I0327 15:22:25.751650 1 secretconfigobserver.go:119] support secret does not exist
I0327 15:22:25.756555 1 secretconfigobserver.go:249] Found cloud.openshift.com token
I0327 15:22:25.760968 1 secretconfigobserver.go:119] support secret does not exist
I0327 15:22:25.764485 1 recorder.go:161] Pruning old reports every 8h15m13s, max age is 288h0m0s
I0327 15:22:25.769657 1 controllerstatus.go:80] name=insightsuploader healthy=true reason= message=
I0327 15:22:25.769679 1 insightsuploader.go:86] Reporting status periodically to https://console.redhat.com/api/ingress/v1/upload every 2h0m0s, starting in 1m30s
I0327 15:22:25.769664 1 periodic.go:209] Running clusterconfig gatherer
I0327 15:22:25.769724 1 tasks_processing.go:45] number of workers: 64
I0327 15:22:25.769660 1 controllerstatus.go:80] name=insightsreport healthy=true reason= message=
I0327 15:22:25.769737 1 insightsreport.go:296] Starting report retriever
I0327 15:22:25.769741 1 insightsreport.go:298] Insights analysis reports will be downloaded from the https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/reports endpoint with a delay of 1m0s
I0327 15:22:25.769744 1 tasks_processing.go:69] worker 0 listening for tasks.
I0327 15:22:25.769752 1 tasks_processing.go:69] worker 4 listening for tasks.
I0327 15:22:25.769754 1 tasks_processing.go:69] worker 20 listening for tasks.
I0327 15:22:25.769757 1 tasks_processing.go:69] worker 8 listening for tasks.
I0327 15:22:25.769759 1 tasks_processing.go:69] worker 1 listening for tasks.
I0327 15:22:25.769766 1 tasks_processing.go:69] worker 16 listening for tasks.
I0327 15:22:25.769760 1 tasks_processing.go:69] worker 15 listening for tasks.
I0327 15:22:25.769763 1 tasks_processing.go:69] worker 2 listening for tasks.
I0327 15:22:25.769773 1 tasks_processing.go:69] worker 3 listening for tasks.
I0327 15:22:25.769774 1 tasks_processing.go:69] worker 5 listening for tasks.
I0327 15:22:25.769777 1 tasks_processing.go:69] worker 17 listening for tasks.
I0327 15:22:25.769779 1 tasks_processing.go:71] worker 3 working on version task.
I0327 15:22:25.769779 1 tasks_processing.go:71] worker 2 working on nodes task.
I0327 15:22:25.769783 1 tasks_processing.go:69] worker 6 listening for tasks.
I0327 15:22:25.769787 1 tasks_processing.go:69] worker 19 listening for tasks.
I0327 15:22:25.769788 1 tasks_processing.go:69] worker 9 listening for tasks.
I0327 15:22:25.769793 1 tasks_processing.go:69] worker 10 listening for tasks.
I0327 15:22:25.769792 1 tasks_processing.go:69] worker 7 listening for tasks.
I0327 15:22:25.769782 1 tasks_processing.go:69] worker 18 listening for tasks.
I0327 15:22:25.769801 1 tasks_processing.go:69] worker 12 listening for tasks.
I0327 15:22:25.769803 1 tasks_processing.go:69] worker 21 listening for tasks.
I0327 15:22:25.769790 1 tasks_processing.go:69] worker 39 listening for tasks.
I0327 15:22:25.769810 1 tasks_processing.go:69] worker 13 listening for tasks.
I0327 15:22:25.769806 1 tasks_processing.go:69] worker 11 listening for tasks.
I0327 15:22:25.769819 1 tasks_processing.go:69] worker 59 listening for tasks.
I0327 15:22:25.769805 1 tasks_processing.go:69] worker 40 listening for tasks.
I0327 15:22:25.769810 1 tasks_processing.go:69] worker 41 listening for tasks.
I0327 15:22:25.769822 1 tasks_processing.go:69] worker 63 listening for tasks.
I0327 15:22:25.769832 1 tasks_processing.go:69] worker 24 listening for tasks.
I0327 15:22:25.769832 1 tasks_processing.go:69] worker 60 listening for tasks.
I0327 15:22:25.769835 1 tasks_processing.go:69] worker 57 listening for tasks.
I0327 15:22:25.769815 1 tasks_processing.go:69] worker 42 listening for tasks.
I0327 15:22:25.769842 1 tasks_processing.go:69] worker 51 listening for tasks.
I0327 15:22:25.769845 1 tasks_processing.go:69] worker 26 listening for tasks.
I0327 15:22:25.769845 1 tasks_processing.go:69] worker 46 listening for tasks.
I0327 15:22:25.769847 1 tasks_processing.go:69] worker 47 listening for tasks.
I0327 15:22:25.769849 1 tasks_processing.go:69] worker 48 listening for tasks.
I0327 15:22:25.769853 1 tasks_processing.go:71] worker 15 working on silenced_alerts task.
I0327 15:22:25.769855 1 tasks_processing.go:69] worker 49 listening for tasks.
I0327 15:22:25.769857 1 tasks_processing.go:69] worker 53 listening for tasks.
I0327 15:22:25.769775 1 tasks_processing.go:69] worker 14 listening for tasks.
I0327 15:22:25.769856 1 tasks_processing.go:69] worker 58 listening for tasks.
I0327 15:22:25.769864 1 tasks_processing.go:69] worker 31 listening for tasks.
I0327 15:22:25.769864 1 tasks_processing.go:69] worker 34 listening for tasks.
I0327 15:22:25.769872 1 tasks_processing.go:69] worker 37 listening for tasks.
I0327 15:22:25.769872 1 tasks_processing.go:69] worker 29 listening for tasks.
I0327 15:22:25.769851 1 tasks_processing.go:69] worker 52 listening for tasks.
I0327 15:22:25.769882 1 tasks_processing.go:71] worker 31 working on oauths task.
I0327 15:22:25.769884 1 tasks_processing.go:71] worker 57 working on machine_config_pools task.
I0327 15:22:25.769887 1 tasks_processing.go:71] worker 39 working on infrastructures task.
I0327 15:22:25.769886 1 tasks_processing.go:71] worker 13 working on jaegers task.
I0327 15:22:25.769879 1 tasks_processing.go:69] worker 32 listening for tasks.
I0327 15:22:25.769893 1 tasks_processing.go:71] worker 10 working on olm_operators task.
I0327 15:22:25.769896 1 tasks_processing.go:69] worker 38 listening for tasks.
I0327 15:22:25.769898 1 tasks_processing.go:71] worker 59 working on sap_pods task.
I0327 15:22:25.769900 1 tasks_processing.go:71] worker 12 working on active_alerts task.
I0327 15:22:25.769902 1 tasks_processing.go:71] worker 51 working on cost_management_metrics_configs task.
I0327 15:22:25.769905 1 tasks_processing.go:71] worker 38 working on openstack_dataplanenodesets task.
I0327 15:22:25.769910 1 tasks_processing.go:71] worker 48 working on mutating_webhook_configurations task.
W0327 15:22:25.769937 1 gather_active_alerts.go:54] Unable to load alerts client, no alerts will be collected: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory
I0327 15:22:25.769951 1 tasks_processing.go:71] worker 49 working on overlapping_namespace_uids task.
I0327 15:22:25.769953 1 tasks_processing.go:71] worker 12 working on container_images task.
I0327 15:22:25.769966 1 tasks_processing.go:71] worker 40 working on node_logs task.
I0327 15:22:25.769982 1 gather.go:177] gatherer "clusterconfig" function "active_alerts" took 43.09µs to process 0 records
I0327 15:22:25.769997 1 tasks_processing.go:71] worker 41 working on metrics task.
W0327 15:22:25.770022 1 gather_most_recent_metrics.go:64] Unable to load metrics client, no metrics will be collected: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory
I0327 15:22:25.770039 1 gather.go:177] gatherer "clusterconfig" function "metrics" took 33.361µs to process 0 records
I0327 15:22:25.770048 1 tasks_processing.go:71] worker 47 working on machine_sets task.
I0327 15:22:25.770088 1 tasks_processing.go:71] worker 63 working on monitoring_persistent_volumes task.
I0327 15:22:25.770098 1 tasks_processing.go:71] worker 41 working on config_maps task.
I0327 15:22:25.769897 1 tasks_processing.go:71] worker 42 working on image task.
I0327 15:22:25.770138 1 tasks_processing.go:71] worker 53 working on machine_configs task.
I0327 15:22:25.769854 1 tasks_processing.go:69] worker 27 listening for tasks.
I0327 15:22:25.770235 1 tasks_processing.go:71] worker 27 working on machines task.
I0327 15:22:25.769816 1 tasks_processing.go:69] worker 22 listening for tasks.
I0327 15:22:25.769820 1 tasks_processing.go:69] worker 43 listening for tasks.
I0327 15:22:25.769822 1 tasks_processing.go:69] worker 23 listening for tasks.
I0327 15:22:25.769826 1 tasks_processing.go:69] worker 44 listening for tasks.
I0327 15:22:25.769828 1 tasks_processing.go:69] worker 56 listening for tasks.
I0327 15:22:25.769800 1 tasks_processing.go:69] worker 55 listening for tasks.
I0327 15:22:25.769834 1 tasks_processing.go:69] worker 45 listening for tasks.
I0327 15:22:25.769838 1 tasks_processing.go:69] worker 50 listening for tasks.
I0327 15:22:25.769839 1 tasks_processing.go:69] worker 25 listening for tasks.
I0327 15:22:25.769832 1 tasks_processing.go:69] worker 61 listening for tasks.
I0327 15:22:25.769839 1 tasks_processing.go:69] worker 62 listening for tasks.
I0327 15:22:25.769843 1 tasks_processing.go:71] worker 8 working on nodenetworkconfigurationpolicies task.
I0327 15:22:25.769961 1 tasks_processing.go:69] worker 54 listening for tasks.
I0327 15:22:25.770437 1 tasks_processing.go:71] worker 62 working on openshift_machine_api_events task.
I0327 15:22:25.769844 1 tasks_processing.go:71] worker 1 working on openstack_version task.
I0327 15:22:25.769899 1 tasks_processing.go:71] worker 32 working on ceph_cluster task.
I0327 15:22:25.770463 1 tasks_processing.go:71] worker 56 working on install_plans task.
I0327 15:22:25.770472 1 tasks_processing.go:71] worker 22 working on pdbs task.
I0327 15:22:25.770478 1 tasks_processing.go:71] worker 45 working on feature_gates task.
I0327 15:22:25.770482 1 tasks_processing.go:71] worker 23 working on proxies task.
I0327 15:22:25.770524 1 tasks_processing.go:71] worker 44 working on image_pruners task.
I0327 15:22:25.770466 1 tasks_processing.go:71] worker 43 working on nodenetworkstates task.
I0327 15:22:25.769851 1 tasks_processing.go:71] worker 4 working on machine_healthchecks task.
I0327 15:22:25.770473 1 tasks_processing.go:71] worker 55 working on clusterroles task.
I0327 15:22:25.769847 1 tasks_processing.go:71] worker 16 working on storage_classes task.
I0327 15:22:25.769853 1 tasks_processing.go:71] worker 20 working on cluster_apiserver task.
I0327 15:22:25.769863 1 tasks_processing.go:69] worker 36 listening for tasks.
I0327 15:22:25.769857 1 tasks_processing.go:69] worker 33 listening for tasks.
I0327 15:22:25.769864 1 tasks_processing.go:71] worker 7 working on validating_webhook_configurations task.
I0327 15:22:25.770724 1 tasks_processing.go:71] worker 33 working on aggregated_monitoring_cr_names task.
I0327 15:22:25.770438 1 tasks_processing.go:71] worker 54 working on lokistack task.
I0327 15:22:25.769868 1 tasks_processing.go:71] worker 14 working on container_runtime_configs task.
I0327 15:22:25.769870 1 tasks_processing.go:71] worker 58 working on openstack_controlplanes task.
I0327 15:22:25.769869 1 tasks_processing.go:69] worker 35 listening for tasks.
I0327 15:22:25.770764 1 tasks_processing.go:71] worker 35 working on openshift_logging task.
I0327 15:22:25.769869 1 tasks_processing.go:71] worker 5 working on operators_pods_and_events task.
I0327 15:22:25.769871 1 tasks_processing.go:69] worker 28 listening for tasks.
I0327 15:22:25.769874 1 tasks_processing.go:71] worker 17 working on operators task.
I0327 15:22:25.770792 1 tasks_processing.go:71] worker 28 working on openstack_dataplanedeployments task.
I0327 15:22:25.769877 1 tasks_processing.go:71] worker 34 working on sap_datahubs task.
I0327 15:22:25.769878 1 tasks_processing.go:71] worker 6 working on sap_config task.
I0327 15:22:25.769879 1 tasks_processing.go:71] worker 37 working on qemu_kubevirt_launcher_logs task.
I0327 15:22:25.769877 1 tasks_processing.go:71] worker 60 working on storage_cluster task.
I0327 15:22:25.769879 1 tasks_processing.go:71] worker 11 working on schedulers task.
I0327 15:22:25.770713 1 tasks_processing.go:71] worker 36 working on number_of_pods_and_netnamespaces_with_sdn_annotations task.
W0327 15:22:25.769876 1 gather_silenced_alerts.go:38] Unable to load alerts client, no alerts will be collected: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory
I0327 15:22:25.771948 1 tasks_processing.go:74] worker 15 stopped.
I0327 15:22:25.771973 1 gather.go:177] gatherer "clusterconfig" function "silenced_alerts" took 2.084583ms to process 0 records
I0327 15:22:25.769884 1 tasks_processing.go:71] worker 19 working on certificate_signing_requests task.
I0327 15:22:25.769885 1 tasks_processing.go:71] worker 29 working on crds task.
I0327 15:22:25.769887 1 tasks_processing.go:71] worker 52 working on support_secret task.
I0327 15:22:25.769888 1 tasks_processing.go:71] worker 9 working on networks task.
I0327 15:22:25.769893 1 tasks_processing.go:71] worker 46 working on authentication task.
I0327 15:22:25.769880 1 tasks_processing.go:71] worker 24 working on machine_autoscalers task.
I0327 15:22:25.769889 1 tasks_processing.go:71] worker 21 working on pod_network_connectivity_checks task.
I0327 15:22:25.769904 1 tasks_processing.go:71] worker 26 working on service_accounts task.
I0327 15:22:25.769848 1 tasks_processing.go:71] worker 0 working on ingress_certificates task.
I0327 15:22:25.770467 1 tasks_processing.go:71] worker 50 working on image_registries task.
I0327 15:22:25.770540 1 tasks_processing.go:71] worker 25 working on ingress task.
I0327 15:22:25.769864 1 tasks_processing.go:69] worker 30 listening for tasks.
I0327 15:22:25.773339 1 tasks_processing.go:74] worker 30 stopped.
I0327 15:22:25.769873 1 tasks_processing.go:71] worker 18 working on dvo_metrics task.
I0327 15:22:25.773060 1 tasks_processing.go:71] worker 61 working on tsdb_status task.
W0327 15:22:25.773490 1 gather_prometheus_tsdb_status.go:38] Unable to load metrics client, tsdb status cannot be collected: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory
I0327 15:22:25.773503 1 tasks_processing.go:74] worker 61 stopped.
I0327 15:22:25.773515 1 gather.go:177] gatherer "clusterconfig" function "tsdb_status" took 30.914µs to process 0 records
I0327 15:22:25.773680 1 tasks_processing.go:74] worker 13 stopped.
I0327 15:22:25.773693 1 gather.go:177] gatherer "clusterconfig" function "jaegers" took 3.779476ms to process 0 records
I0327 15:22:25.774285 1 tasks_processing.go:74] worker 38 stopped.
I0327 15:22:25.774301 1 gather.go:177] gatherer "clusterconfig" function "openstack_dataplanenodesets" took 4.369191ms to process 0 records
I0327 15:22:25.777052 1 tasks_processing.go:74] worker 51 stopped.
I0327 15:22:25.777070 1 gather.go:177] gatherer "clusterconfig" function "cost_management_metrics_configs" took 7.137294ms to process 0 records
I0327 15:22:25.777080 1 gather.go:177] gatherer "clusterconfig" function "machine_sets" took 6.99225ms to process 0 records
I0327 15:22:25.777088 1 tasks_processing.go:74] worker 47 stopped.
I0327 15:22:25.777376 1 tasks_processing.go:74] worker 10 stopped.
I0327 15:22:25.777390 1 gather.go:177] gatherer "clusterconfig" function "olm_operators" took 7.466585ms to process 0 records
I0327 15:22:25.777397 1 controller.go:128] Initializing last reported time to 0001-01-01T00:00:00Z
I0327 15:22:25.777415 1 controller.go:212] Source periodic-clusterconfig *controllerstatus.Simple is not ready
I0327 15:22:25.777425 1 controller.go:212] Source periodic-conditional *controllerstatus.Simple is not ready
I0327 15:22:25.777429 1 controller.go:212] Source periodic-workloads *controllerstatus.Simple is not ready
I0327 15:22:25.777447 1 controller.go:489] The operator is still being initialized
I0327 15:22:25.777457 1 controller.go:512] The operator is healthy
I0327 15:22:25.777484 1 tasks_processing.go:74] worker 39 stopped.
I0327 15:22:25.778102 1 recorder.go:75] Recording config/infrastructure with fingerprint=104e2a5c6af7d7fcfc3f737c1523416052ed32adbf036ff5a7df97a4370583e1
I0327 15:22:25.778118 1 gather.go:177] gatherer "clusterconfig" function "infrastructures" took 7.581599ms to process 1 records
I0327 15:22:25.778223 1 tasks_processing.go:74] worker 31 stopped.
I0327 15:22:25.778329 1 recorder.go:75] Recording config/oauth with fingerprint=ea49a9fca3a950118f336be32b5c6fc4b2bf3ad1a22c175f82f1cdc616a8daf5
I0327 15:22:25.778338 1 gather.go:177] gatherer "clusterconfig" function "oauths" took 7.673817ms to process 1 records
I0327 15:22:25.780309 1 tasks_processing.go:74] worker 57 stopped.
I0327 15:22:25.780325 1 gather.go:177] gatherer "clusterconfig" function "machine_config_pools" took 10.41593ms to process 0 records
I0327 15:22:25.781509 1 tasks_processing.go:74] worker 48 stopped.
I0327 15:22:25.781787 1 recorder.go:75] Recording config/mutatingwebhookconfigurations/aws-pod-identity with fingerprint=b82398e7ff36f4a48afeb038a6c16b32a885cd13b3b7ef9faa69e4373ce136ee
I0327 15:22:25.781834 1 recorder.go:75] Recording config/mutatingwebhookconfigurations/sre-podimagespec-mutation with fingerprint=a7e94c9296c92389ed95b757e7ab38dcf38dc1ac0234307e3d20cc2f1af93dc8
I0327 15:22:25.781873 1 recorder.go:75] Recording config/mutatingwebhookconfigurations/sre-service-mutation with fingerprint=ce2209fc23662c75cd3d193ac55456d6f89ad3c01227975e635b72a0b9d7cb67
I0327 15:22:25.781885 1 gather.go:177] gatherer "clusterconfig" function "mutating_webhook_configurations" took 11.582943ms to process 3 records
I0327 15:22:25.781967 1 tasks_processing.go:74] worker 42 stopped.
I0327 15:22:25.781986 1 recorder.go:75] Recording config/image with fingerprint=601782564cccc42bc29d818dd0d6938742ef6fcab7a3aecfdce1231e7daae5b2
I0327 15:22:25.781995 1 gather.go:177] gatherer "clusterconfig" function "image" took 11.410288ms to process 1 records
I0327 15:22:25.783183 1 tasks_processing.go:74] worker 6 stopped.
I0327 15:22:25.783195 1 gather.go:177] gatherer "clusterconfig" function "sap_config" took 12.224307ms to process 0 records
I0327 15:22:25.783202 1 gather.go:177] gatherer "clusterconfig" function "openstack_version" took 12.730031ms to process 0 records
I0327 15:22:25.783229 1 tasks_processing.go:74] worker 1 stopped.
I0327 15:22:25.783308 1 tasks_processing.go:74] worker 4 stopped.
E0327 15:22:25.783320 1 gather.go:140] gatherer "clusterconfig" function "machine_healthchecks" failed with the error: machinehealthchecks.machine.openshift.io is forbidden: User "system:serviceaccount:openshift-insights:gather" cannot list resource "machinehealthchecks" in API group "machine.openshift.io" at the cluster scope
I0327 15:22:25.783329 1 gather.go:177] gatherer "clusterconfig" function "machine_healthchecks" took 12.665224ms to process 0 records
I0327 15:22:25.785006 1 tasks_processing.go:74] worker 35 stopped.
I0327 15:22:25.785016 1 gather.go:177] gatherer "clusterconfig" function "openshift_logging" took 14.237021ms to process 0 records
I0327 15:22:25.788248 1 tasks_processing.go:74] worker 58 stopped.
I0327 15:22:25.788262 1 gather.go:177] gatherer "clusterconfig" function "openstack_controlplanes" took 17.486633ms to process 0 records
I0327 15:22:25.788597 1 tasks_processing.go:74] worker 21 stopped.
E0327 15:22:25.788611 1 gather.go:140] gatherer "clusterconfig" function "pod_network_connectivity_checks" failed with the error: the server could not find the requested resource (get podnetworkconnectivitychecks.controlplane.operator.openshift.io)
I0327 15:22:25.788622 1 gather.go:177] gatherer "clusterconfig" function "pod_network_connectivity_checks" took 15.823302ms to process 0 records
I0327 15:22:25.789181 1 tasks_processing.go:74] worker 49 stopped.
I0327 15:22:25.789228 1 recorder.go:75] Recording config/namespaces_with_overlapping_uids with fingerprint=4f53cda18c2baa0c0354bb5f9a3ecbe5ed12ab4d8e11ba873c2f11161202b945
I0327 15:22:25.789242 1 gather.go:177] gatherer "clusterconfig" function "overlapping_namespace_uids" took 19.216032ms to process 1 records
I0327 15:22:25.792254 1 tasks_processing.go:74] worker 28 stopped.
I0327 15:22:25.792269 1 gather.go:177] gatherer "clusterconfig" function "openstack_dataplanedeployments" took 21.454386ms to process 0 records
I0327 15:22:25.792280 1 gather.go:177] gatherer "clusterconfig" function "sap_datahubs" took 21.417344ms to process 0 records
I0327 15:22:25.792286 1 gather.go:177] gatherer "clusterconfig" function "ceph_cluster" took 21.823651ms to process 0 records
E0327 15:22:25.792293 1 gather.go:140] gatherer "clusterconfig" function "support_secret" failed with the error: secrets "support" not found
I0327 15:22:25.792297 1 tasks_processing.go:74] worker 34 stopped.
I0327 15:22:25.792302 1 gather.go:177] gatherer "clusterconfig" function "support_secret" took 20.055456ms to process 0 records
I0327 15:22:25.792309 1 gather.go:177] gatherer "clusterconfig" function "storage_cluster" took 20.747351ms to process 0 records
I0327 15:22:25.792315 1 gather.go:177] gatherer "clusterconfig" function "nodenetworkstates" took 21.691203ms to process 0 records
I0327 15:22:25.792316 1 tasks_processing.go:74] worker 32 stopped.
I0327 15:22:25.792322 1 tasks_processing.go:74] worker 43 stopped.
I0327 15:22:25.792326 1 tasks_processing.go:74] worker 60 stopped.
I0327 15:22:25.792330 1 tasks_processing.go:74] worker 27 stopped.
I0327 15:22:25.792337 1 tasks_processing.go:74] worker 52 stopped.
E0327 15:22:25.792338 1 gather.go:140] gatherer "clusterconfig" function "machines" failed with the error: machines.machine.openshift.io is forbidden: User "system:serviceaccount:openshift-insights:gather" cannot list resource "machines" in API group "machine.openshift.io" at the cluster scope
I0327 15:22:25.792352 1 gather.go:177] gatherer "clusterconfig" function "machines" took 22.086367ms to process 0 records
I0327 15:22:25.792518 1 tasks_processing.go:74] worker 8 stopped.
I0327 15:22:25.792529 1 gather.go:177] gatherer "clusterconfig" function "nodenetworkconfigurationpolicies" took 22.226895ms to process 0 records
I0327 15:22:25.792552 1 tasks_processing.go:74] worker 59 stopped.
I0327 15:22:25.792564 1 gather.go:177] gatherer "clusterconfig" function "sap_pods" took 22.645591ms to process 0 records
I0327 15:22:25.792642 1 tasks_processing.go:74] worker 25 stopped.
I0327 15:22:25.792721 1 recorder.go:75] Recording config/ingress with fingerprint=6ce4ca4ea5c0aea3564ab548cd112d818dd05bffa77199cc3a1eec2486bf1975
I0327 15:22:25.792739 1 gather.go:177] gatherer "clusterconfig" function "ingress" took 19.461269ms to process 1 records
I0327 15:22:25.792836 1 tasks_processing.go:74] worker 46 stopped.
I0327 15:22:25.792963 1 recorder.go:75] Recording config/authentication with fingerprint=f7d8c31d7e7a900f7fb0e492f26dda914625028b62fcdfb3244b1d7b2d410a49
I0327 15:22:25.792980 1 gather.go:177] gatherer "clusterconfig" function "authentication" took 20.065594ms to process 1 records
I0327 15:22:25.792986 1 gather.go:177] gatherer "clusterconfig" function "node_logs" took 22.6211ms to process 0 records
I0327 15:22:25.792992 1 tasks_processing.go:74] worker 40 stopped.
I0327 15:22:25.793004 1 tasks_processing.go:74] worker 62 stopped.
I0327 15:22:25.793021 1 gather.go:177] gatherer "clusterconfig" function "openshift_machine_api_events" took 22.556203ms to process 0 records
I0327 15:22:25.793199 1 tasks_processing.go:74] worker 20 stopped.
I0327 15:22:25.793321 1 recorder.go:75] Recording config/apiserver with fingerprint=b9c4ac18a2c371a868fa77fb142744d360379346918604857dac16efab4c078b
I0327 15:22:25.793331 1 gather.go:177] gatherer "clusterconfig" function "cluster_apiserver" took 22.515086ms to process 1 records
I0327 15:22:25.793340 1 gather.go:177] gatherer "clusterconfig" function "lokistack" took 22.47565ms to process 0 records
I0327 15:22:25.793343 1 gather.go:177] gatherer "clusterconfig" function "machine_autoscalers" took 20.598728ms to process 0 records
I0327 15:22:25.793348 1 tasks_processing.go:74] worker 24 stopped.
I0327 15:22:25.793353 1 tasks_processing.go:74] worker 54 stopped.
I0327 15:22:25.793576 1 tasks_processing.go:74] worker 44 stopped.
I0327 15:22:25.793769 1 recorder.go:75] Recording config/clusteroperator/imageregistry.operator.openshift.io/imagepruner/cluster with fingerprint=d9c5ae1073a85c2589130e84ed513ce4bfac0a59d5ecd4bc804a4a1a00cd4e23
I0327 15:22:25.793782 1 gather.go:177] gatherer "clusterconfig" function "image_pruners" took 22.975569ms to process 1 records
I0327 15:22:25.795289 1 tasks_processing.go:74] worker 14 stopped.
I0327 15:22:25.795306 1 gather.go:177] gatherer "clusterconfig" function "container_runtime_configs" took 24.530363ms to process 0 records
I0327 15:22:25.795358 1 tasks_processing.go:74] worker 63 stopped.
I0327 15:22:25.795370 1 gather.go:177] gatherer "clusterconfig" function "monitoring_persistent_volumes" took 25.256278ms to process 0 records
I0327 15:22:25.795459 1 tasks_processing.go:74] worker 22 stopped.
I0327 15:22:25.795550 1 recorder.go:75] Recording config/pdbs/openshift-image-registry/image-registry with fingerprint=7f28254052c7906dc255f0b4c3dcfed3fe35319b52da8d048dc52731efc4b4e2
I0327 15:22:25.795572 1 recorder.go:75] Recording config/pdbs/openshift-ingress/router-default with fingerprint=595dc52a21fbe7e1fc56705f43b3c21ba2cbc997c617e0ff6aba14bcc54aef20
I0327 15:22:25.795588 1 recorder.go:75] Recording config/pdbs/openshift-operator-lifecycle-manager/packageserver-pdb with fingerprint=0c4eed9b91a4dfa0101f0e888511ed5020db73e99c7172a85964bee889b55ebe
I0327 15:22:25.795594 1 gather.go:177] gatherer "clusterconfig" function "pdbs" took 24.977929ms to process 3 records
I0327 15:22:25.796024 1 tasks_processing.go:74] worker 23 stopped.
I0327 15:22:25.796681 1 recorder.go:75] Recording config/proxy with fingerprint=c9be2f0ed4ea58faa7da150392140fd2cdc77e29654c9f6dc8a7476e90748509
I0327 15:22:25.796752 1 gather.go:177] gatherer "clusterconfig" function "proxies" took 25.525956ms to process 1 records
I0327 15:22:25.797263 1 tasks_processing.go:74] worker 50 stopped.
I0327 15:22:25.798761 1 recorder.go:75] Recording config/clusteroperator/imageregistry.operator.openshift.io/config/cluster with fingerprint=4b7b387adcfe1869a63e734330f7298ac21a89125ea963c5f125df1afe3c3d80
I0327 15:22:25.798782 1 gather.go:177] gatherer "clusterconfig" function "image_registries" took 23.675886ms to process 1 records
I0327 15:22:25.800700 1 gather_logs.go:145] no pods in namespace were found
I0327 15:22:25.800719 1 tasks_processing.go:74] worker 37 stopped.
I0327 15:22:25.800728 1 gather.go:177] gatherer "clusterconfig" function "qemu_kubevirt_launcher_logs" took 29.442033ms to process 0 records
I0327 15:22:25.801244 1 tasks_processing.go:74] worker 16 stopped.
I0327 15:22:25.801347 1 recorder.go:75] Recording config/storage/storageclasses/gp2-csi with fingerprint=64535c12fb1e67247a096c71ab1dfc422ca696edd5e4dba7e12ad2522df582dc
I0327 15:22:25.801393 1 recorder.go:75] Recording config/storage/storageclasses/gp3-csi with fingerprint=94e67822f10b291417378df927a73c95e124a82c6d6d378b58807e23edb23679
I0327 15:22:25.801402 1 gather.go:177] gatherer "clusterconfig" function "storage_classes" took 30.592113ms to process 2 records
I0327 15:22:25.801416 1 recorder.go:75] Recording aggregated/unused_machine_configs_count with fingerprint=4bfc9fa984e5dfcd45848faaf05269de7619bf42edf9f781751af5ee05c1a499
I0327 15:22:25.801421 1 gather.go:177] gatherer "clusterconfig" function "machine_configs" took 31.11313ms to process 1 records
I0327 15:22:25.801427 1 tasks_processing.go:74] worker 53 stopped.
I0327 15:22:25.801447 1 tasks_processing.go:74] worker 2 stopped.
I0327 15:22:25.801835 1 recorder.go:75] Recording config/node/ip-10-0-0-108.ec2.internal with fingerprint=bf7a607ce21a9af98f9b578f7264a357ea8ada462671d1541a65b399847dc1f1
I0327 15:22:25.801935 1 recorder.go:75] Recording config/node/ip-10-0-1-101.ec2.internal with fingerprint=469173769c4f285d7c733556021f8570e16c1d56c274ba3b6c842599b1a8d229
I0327 15:22:25.801998 1 recorder.go:75] Recording config/node/ip-10-0-2-240.ec2.internal with fingerprint=8573a2ec2a97db2334c0199e8b0d01d6084c20a52a82b3d1d9583d91b36e9954
I0327 15:22:25.802007 1 gather.go:177] gatherer "clusterconfig" function "nodes" took 31.657794ms to process 3 records
I0327 15:22:25.802100 1 tasks_processing.go:74] worker 9 stopped.
I0327 15:22:25.802158 1 recorder.go:75] Recording config/network with fingerprint=e780ee1cef7e78d75fb20fc57b238da6296ce62220f69cde728d447513879d54
I0327 15:22:25.802169 1 gather.go:177] gatherer "clusterconfig" function "networks" took 29.372524ms to process 1 records
I0327 15:22:25.802225 1 tasks_processing.go:74] worker 7 stopped.
I0327 15:22:25.802322 1 recorder.go:75] Recording config/validatingwebhookconfigurations/multus.openshift.io with fingerprint=2952f216a942f608b37a713c7445ba2d314c228a7c639c7f286e8331e18f50fa
I0327 15:22:25.802422 1 recorder.go:75] Recording config/validatingwebhookconfigurations/network-node-identity.openshift.io with fingerprint=e22b91e4c57486deb21b01a538174556c7c6cf84003f19e4b43066c04080a7db
W0327 15:22:25.802436 1 gather_dvo_metrics.go:210] Failed to read the DVO metrics. Trying again.
I0327 15:22:25.802446 1 recorder.go:75] Recording config/validatingwebhookconfigurations/performance-addon-operator with fingerprint=72b4535e9a583e01f493e3b10fc4917dc1540c1b907e7499f7e6bc416f81b014
I0327 15:22:25.802484 1 recorder.go:75] Recording config/validatingwebhookconfigurations/sre-clusterrolebindings-validation with fingerprint=7e6aad200d5b2ba81c81d258a0eacd357f391e1e7f7e071fece35eb1b8b1f7a1
I0327 15:22:25.802523 1 recorder.go:75] Recording config/validatingwebhookconfigurations/sre-clusterroles-validation with fingerprint=a032533479597327b29353fbf3644662614871153f046c8d43e78119abf305ca
I0327 15:22:25.802561 1 recorder.go:75] Recording config/validatingwebhookconfigurations/sre-ingress-config-validation with fingerprint=19fd144fae9ae7a3e99fb1b9d7d2ee4cffa42ff42a41e656efd74311f3d50694
I0327 15:22:25.802610 1 recorder.go:75] Recording config/validatingwebhookconfigurations/sre-network-operator-validation with fingerprint=08ce6285f6e51857bb014e13f0568c048a4aec7acab79f6882506e39fe6cc1f7
I0327 15:22:25.802660 1 recorder.go:75] Recording config/validatingwebhookconfigurations/sre-regular-user-validation with fingerprint=286967dfb6617c8ea2a2bf20e0a78c4797867a231da6bbc3f73770a1adfa9077
I0327 15:22:25.802692 1 recorder.go:75] Recording config/validatingwebhookconfigurations/sre-scc-validation with fingerprint=1f357e3be24befa40f1a9c3024ff4f5dd6d67f20d84e387f8b0bf7e462ca2d1e
I0327 15:22:25.802727 1 recorder.go:75] Recording config/validatingwebhookconfigurations/sre-serviceaccount-validation with fingerprint=99850fa0fbdaa46f411265f3af2bb7bcc4a60e4ccbd8df7d3c93348eaa210e35
I0327 15:22:25.802753 1 recorder.go:75] Recording config/validatingwebhookconfigurations/sre-techpreviewnoupgrade-validation with fingerprint=35023aa65e313bc7ca7bdf00842982b1338cf75121bb447368ad032c4902d7d7
I0327 15:22:25.802761 1 gather.go:177] gatherer "clusterconfig" function "validating_webhook_configurations" took 31.475084ms to process 11 records
I0327 15:22:25.802841 1 tasks_processing.go:74] worker 11 stopped.
I0327 15:22:25.802943 1 recorder.go:75] Recording config/schedulers/cluster with fingerprint=823989762010f04fd06b0a7cd2eca39a9333ec9804208f2592da9a47e44f21cf
I0327 15:22:25.802960 1 gather.go:177] gatherer "clusterconfig" function "schedulers" took 31.075687ms to process 1 records
I0327 15:22:25.803313 1 sca.go:136] Pulling SCA certificates from https://api.openshift.com/api/accounts_mgmt/v1/entitlement_certificates. Next check is in 8h0m0s
I0327 15:22:25.803318 1 tasks_processing.go:74] worker 45 stopped.
I0327 15:22:25.803381 1 cluster_transfer.go:83] checking the availability of cluster transfer. Next check is in 12h0m0s
W0327 15:22:25.803457 1 operator.go:288] started
I0327 15:22:25.803477 1 base_controller.go:76] Waiting for caches to sync for LoggingSyncer
I0327 15:22:25.803478 1 recorder.go:75] Recording config/featuregate with fingerprint=8c91b5997b1e190c3103c90430e52626659789ec8ad84a6c3cd808261e3a490e
I0327 15:22:25.803492 1 gather.go:177] gatherer "clusterconfig" function "feature_gates" took 32.829388ms to process 1 records
I0327 15:22:25.805361 1 tasks_processing.go:74] worker 19 stopped.
I0327 15:22:25.805498 1 gather.go:177] gatherer "clusterconfig" function "certificate_signing_requests" took 33.363401ms to process 0 records
I0327 15:22:25.809712 1 tasks_processing.go:74] worker 33 stopped.
I0327 15:22:25.809727 1 gather.go:177] gatherer "clusterconfig" function "aggregated_monitoring_cr_names" took 38.974782ms to process 0 records
I0327 15:22:25.811165 1 tasks_processing.go:74] worker 55 stopped.
I0327 15:22:25.811459 1 recorder.go:75] Recording cluster-scoped-resources/rbac.authorization.k8s.io/clusterroles/admin with fingerprint=28053d517719903c30a08dccfbeb59465f1c6eacf5a99e6722e2654caca42d4c
I0327 15:22:25.811614 1 recorder.go:75] Recording cluster-scoped-resources/rbac.authorization.k8s.io/clusterroles/edit with fingerprint=b4fc52c6d641e815542caf8fce8a3842e53ee63235961a226338892630a94774
I0327 15:22:25.811626 1 gather.go:177] gatherer "clusterconfig" function "clusterroles" took 40.513654ms to process 2 records
I0327 15:22:25.814115 1 tasks_processing.go:74] worker 12 stopped.
I0327 15:22:25.816141 1 recorder.go:75] Recording config/pod/openshift-console-operator/console-operator-575cd97545-qfxw6 with fingerprint=56818952fcde0bc87f425f10fee8b211aa03d8197442a9708b5443f3db2a38f6
I0327 15:22:25.816344 1 recorder.go:75] Recording config/pod/openshift-multus/multus-kxfc4 with fingerprint=6badbea51a8654b6dbbe3282ec0b959c50d5240f67c754a3d5b6e3c22d7bf4ea
I0327 15:22:25.816507 1 recorder.go:75] Recording config/pod/openshift-multus/multus-n49jb with fingerprint=3fa9270e1043bf49a75c2022210c653e71911ebcf8331103fb4a4d2e54276046
I0327 15:22:25.816667 1 recorder.go:75] Recording config/pod/openshift-multus/multus-s5dwn with fingerprint=e89143a66217bd623d69173c314ef14129b3260a9b5f5c6e48973d47e56b50c5
I0327 15:22:25.817088 1 recorder.go:75] Recording config/pod/openshift-ovn-kubernetes/ovnkube-node-2kmgp with fingerprint=f363328c3f65eff37a73fc36f444c1f353396190b39cf5f5265d5d93b09ec8a8
I0327 15:22:25.817511 1 recorder.go:75] Recording config/pod/openshift-ovn-kubernetes/ovnkube-node-zpdq2 with fingerprint=c40a37ec6dd77beecd3b3229fddb55ee8763e938ee22f4080854f6341d35c763
I0327 15:22:25.817593 1 recorder.go:75] Recording config/running_containers with fingerprint=b4529ee3450d0f228c592f6edbe1e524cac04834b4906a67988eea1a26e1992d
I0327 15:22:25.817605 1 gather.go:177] gatherer "clusterconfig" function "container_images" took 44.144263ms to process 7 records
I0327 15:22:25.825334 1 controller.go:212] Source scaController *sca.Controller is not ready
I0327 15:22:25.825351 1 controller.go:212] Source clusterTransferController *clustertransfer.Controller is not ready
I0327 15:22:25.825359 1 controller.go:212] Source periodic-clusterconfig *controllerstatus.Simple is not ready
I0327 15:22:25.825364 1 controller.go:212] Source periodic-conditional *controllerstatus.Simple is not ready
I0327 15:22:25.825369 1 controller.go:212] Source periodic-workloads *controllerstatus.Simple is not ready
I0327 15:22:25.825395 1 controller.go:489] The operator is still being initialized
I0327 15:22:25.825403 1 controller.go:512] The operator is healthy
I0327 15:22:25.825415 1 tasks_processing.go:74] worker 29 stopped.
I0327 15:22:25.825441 1 prometheus_rules.go:88] Prometheus rules successfully created
I0327 15:22:25.826748 1 recorder.go:75] Recording config/crd/volumesnapshots.snapshot.storage.k8s.io with fingerprint=44bfb2dd401f887b2bfd0eebeb1c21343621cfb6a9b83228de5d03e94c72692d
I0327 15:22:25.827414 1 recorder.go:75] Recording config/crd/volumesnapshotcontents.snapshot.storage.k8s.io with fingerprint=e5bb0f0b45c4e3e6a1e4ac437345203c0ca86912551d8a881c16a5bc947ea599
I0327 15:22:25.827434 1 gather.go:177] gatherer "clusterconfig" function "crds" took 53.302841ms to process 2 records
E0327 15:22:25.836431 1 cluster_transfer.go:95] failed to pull cluster transfer: unable to retrieve cluster transfer data from https://api.openshift.com/api/accounts_mgmt/v1/cluster_transfers/: Get "https://api.openshift.com/api/accounts_mgmt/v1/cluster_transfers/?search=cluster_uuid+is+%270269c185-6f7c-4938-b7b6-c320b1902a79%27+and+status+is+%27accepted%27": dial tcp: lookup api.openshift.com on 172.30.0.10:53: read udp 10.130.0.15:48556->172.30.0.10:53: read: connection refused
I0327 15:22:25.836448 1 controllerstatus.go:80] name=clusterTransferController healthy=true reason=Disconnected message=failed to pull cluster transfer: unable to retrieve cluster transfer data from https://api.openshift.com/api/accounts_mgmt/v1/cluster_transfers/: Get "https://api.openshift.com/api/accounts_mgmt/v1/cluster_transfers/?search=cluster_uuid+is+%270269c185-6f7c-4938-b7b6-c320b1902a79%27+and+status+is+%27accepted%27": dial tcp: lookup api.openshift.com on 172.30.0.10:53: read udp 10.130.0.15:48556->172.30.0.10:53: read: connection refused
I0327 15:22:25.840505 1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I0327 15:22:25.840515 1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0327 15:22:25.840527 1 shared_informer.go:320] Caches are synced for RequestHeaderAuthRequestController
I0327 15:22:25.841395 1 base_controller.go:82] Caches are synced for ConfigController
I0327 15:22:25.841406 1 base_controller.go:119] Starting #1 worker of ConfigController controller ...
I0327 15:22:25.846249 1 configmapobserver.go:84] configmaps "insights-config" not found
I0327 15:22:25.848378 1 tasks_processing.go:74] worker 41 stopped.
E0327 15:22:25.848395 1 gather.go:140] gatherer "clusterconfig" function "config_maps" failed with the error: configmaps "cluster-monitoring-config" not found
E0327 15:22:25.848404 1 gather.go:140] gatherer "clusterconfig" function "config_maps" failed with the error: configmaps "gateway-mode-config" not found
E0327 15:22:25.848408 1 gather.go:140] gatherer "clusterconfig" function "config_maps" failed with the error: configmaps "insights-config" not found
I0327 15:22:25.848421 1 recorder.go:75] Recording config/configmaps/openshift-config/installer-images/images.json with fingerprint=26b6661162b099a0f5a279859b4f46c867929a79d9a4a41fde4be4e6fe138018
I0327 15:22:25.848454 1 recorder.go:75] Recording config/configmaps/openshift-config/kube-root-ca.crt/ca.crt with fingerprint=d476c7d3f5b104863f08f481b1264dcc68cc272ecefb0ecb709b18a6afab034d
I0327 15:22:25.848461 1 recorder.go:75] Recording config/configmaps/openshift-config/openshift-install/invoker with fingerprint=76b482f683cd3ef9da02debac5b26080a5aeb06ff768ee5c21117514dff29d8a
I0327 15:22:25.848467 1 recorder.go:75] Recording config/configmaps/openshift-config/openshift-install/version with fingerprint=c93090eb0d2a4736885abeb79c91680cfd01fda46464f83456b085d4dc8239f0
I0327 15:22:25.848471 1 recorder.go:75] Recording config/configmaps/openshift-config/openshift-service-ca.crt/service-ca.crt with fingerprint=e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
I0327 15:22:25.848508 1 recorder.go:75] Recording config/configmaps/openshift-config/rosa-brand-logo/rosa-brand-logo.svg with fingerprint=6ed8ca4dd7a8eee7249182bc006e9649ce84d76c551ddfaaa33e55d8c4cc1ed0
I0327 15:22:25.848515 1 recorder.go:75] Recording config/configmaps/kube-system/cluster-config-v1/install-config with fingerprint=ab3811c6b83fd7b8e920094cfa3080d1b4ee3c35ec4c8379437b21d27bd6608d
I0327 15:22:25.848521 1 gather.go:177] gatherer "clusterconfig" function "config_maps" took 78.26403ms to process 7 records
I0327 15:22:25.893167 1 tasks_processing.go:74] worker 3 stopped.
I0327 15:22:25.893604 1 recorder.go:75] Recording config/version with fingerprint=4b37a99193e91df108571de92d7e7f6b23ada473c8ffc5148c35e39e288e199c
I0327 15:22:25.893623 1 recorder.go:75] Recording config/id with fingerprint=1eca5076152f9d8dc3c8524e009ef22d94dcafb6e9da4726d7d7890cd2704e1f
I0327 15:22:25.893633 1 gather.go:177] gatherer "clusterconfig" function "version" took 123.371664ms to process 2 records
I0327 15:22:25.900783 1 requests.go:205] Asking for SCA certificate with "{"arch": ["x86_64"]}" payload
I0327 15:22:25.903577 1 base_controller.go:82] Caches are synced for LoggingSyncer
I0327 15:22:25.903597 1 base_controller.go:119] Starting #1 worker of LoggingSyncer controller ...
W0327 15:22:25.905661 1 sca.go:161] Failed to pull SCA certs from https://api.openshift.com/api/accounts_mgmt/v1/entitlement_certificates: unable to retrieve SCA certs data from https://api.openshift.com/api/accounts_mgmt/v1/entitlement_certificates: Post "https://api.openshift.com/api/accounts_mgmt/v1/entitlement_certificates": dial tcp: lookup api.openshift.com on 172.30.0.10:53: read udp 10.130.0.15:46567->172.30.0.10:53: read: connection refused
I0327 15:22:25.905677 1 controllerstatus.go:80] name=scaController healthy=true reason=NonHTTPError message=Failed to pull SCA certs from https://api.openshift.com/api/accounts_mgmt/v1/entitlement_certificates: unable to retrieve SCA certs data from https://api.openshift.com/api/accounts_mgmt/v1/entitlement_certificates: Post "https://api.openshift.com/api/accounts_mgmt/v1/entitlement_certificates": dial tcp: lookup api.openshift.com on 172.30.0.10:53: read udp 10.130.0.15:46567->172.30.0.10:53: read: connection refused
I0327 15:22:25.907644 1 tasks_processing.go:74] worker 36 stopped.
I0327 15:22:25.907659 1 gather.go:177] gatherer "clusterconfig" function "number_of_pods_and_netnamespaces_with_sdn_annotations" took 135.802951ms to process 0 records
I0327 15:22:25.923760 1 tasks_processing.go:74] worker 0 stopped.
E0327 15:22:25.923779 1 gather.go:140] gatherer "clusterconfig" function "ingress_certificates" failed with the error: failed to get secret 'router-certs-default' in namespace 'openshift-ingress': secrets "router-certs-default" not found
E0327 15:22:25.923785 1 gather.go:140] gatherer "clusterconfig" function "ingress_certificates" failed with the error: failed to get secret '2pa4o9p55519n7vtsgu7lekoc3hpuo6c-primary-cert-bundle-secret' in namespace 'openshift-ingress-operator': secrets "2pa4o9p55519n7vtsgu7lekoc3hpuo6c-primary-cert-bundle-secret" not found
I0327 15:22:25.923869 1 recorder.go:75] Recording aggregated/ingress_controllers_certs with fingerprint=a2de961a0b11263dd82a7ef6789c86962ac1d3437afce59e8de702a94ecfa75f
I0327 15:22:25.923905 1 gather.go:177] gatherer "clusterconfig" function "ingress_certificates" took 150.791906ms to process 1 records
W0327 15:22:26.799104 1 gather_dvo_metrics.go:210] Failed to read the DVO metrics. Trying again.
I0327 15:22:27.227064 1 gather_cluster_operators.go:184] Unable to get configs.samples.operator.openshift.io resource due to: configs.samples.operator.openshift.io "cluster" not found
I0327 15:22:27.634952 1 tasks_processing.go:74] worker 17 stopped.
I0327 15:22:27.635007 1 recorder.go:75] Recording config/clusteroperator/console with fingerprint=4874833766e0ed1ffff43de7c0539f7f4ddbd0d28312ff89b09322048c19fecc
I0327 15:22:27.635042 1 recorder.go:75] Recording config/clusteroperator/csi-snapshot-controller with fingerprint=3515b3c8b11c46c239ac6ed6600de01fbb582d3e802d3e686167ca65f37fe3d0
I0327 15:22:27.635070 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/csisnapshotcontroller/cluster with fingerprint=5adc514f4b63e2f1ecc68bf6f9c0af70c5eea04522a49524e102721b1c41f80e
I0327 15:22:27.635104 1 recorder.go:75] Recording config/clusteroperator/dns with fingerprint=3d4fb5b5b5e3fcf67d099dd92da0ff8e839d686b641949c0ed411f48d7e9da0d
I0327 15:22:27.635124 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/dns/default with fingerprint=9e7b4ce029030d3d8c3b49af92c556acdcc415000b40d3f969dbdc42c432b47f
I0327 15:22:27.635144 1 recorder.go:75] Recording config/clusteroperator/image-registry with fingerprint=68be717ecb11f5eabd32c3da01baa13e027d7afc28118a157d39e8105bbce75a
I0327 15:22:27.635181 1 recorder.go:75] Recording config/clusteroperator/ingress with fingerprint=86fe1242a89a279a723b22544e5f6d0ff9b3d440507e5de9f58a344789e2b8c3
I0327 15:22:27.635226 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/ingresscontroller/openshift-ingress-operator/default with fingerprint=28907585ce46554f2f3adfa151c84a38db88e85067fb08e6cc153bfaeff0d79b
I0327 15:22:27.635245 1 recorder.go:75] Recording config/clusteroperator/insights with fingerprint=e31ce1fc4ec33bc148698a69cdeeafdf85313b60739fb93dc1568be2d922fc41
I0327 15:22:27.635262 1 recorder.go:75] Recording config/clusteroperator/kube-apiserver with fingerprint=f3a38e611671d36b1711b18e6e2c01a32ab2452b6a40153c7344e96b9ea6ad3a
I0327 15:22:27.635272 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/kubeapiserver/cluster with fingerprint=51503bf0b784fcf65ea46bcaf1f72ac1a5c4d5dc211934f18f27871efed05762
I0327 15:22:27.635288 1 recorder.go:75] Recording config/clusteroperator/kube-controller-manager with fingerprint=18bf237231614d0bfe3386a0acbfd0e0aa37fc558ff3f4b556f1e4db69cbb92c
I0327 15:22:27.635300 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/kubecontrollermanager/cluster with fingerprint=ce90c0d4f367d7da085074268031798382ae7c54fdcb0a21f15a4818fe308c11
I0327 15:22:27.635320 1 recorder.go:75] Recording config/clusteroperator/kube-scheduler with fingerprint=0931c8dca26beb22a78faad86e76767c776603d412ef08231bdc3d6cb30a92ed
I0327 15:22:27.635330 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/kubescheduler/cluster with fingerprint=f2940fb9fd20c19951dfc295eb363b7fba0c505f5ae61f01967a063099e6b60a
I0327 15:22:27.635344 1 recorder.go:75] Recording config/clusteroperator/kube-storage-version-migrator with fingerprint=da5bf523855d7e87c013388eba66e2f8c4ce3455b800cb598386e33b3ee13292
I0327 15:22:27.635353 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/kubestorageversionmigrator/cluster with fingerprint=9351181aa7e6ada41ef581ab31e13516c6b934cc95710154bafb2eb222cb58db
I0327 15:22:27.635369 1 recorder.go:75] Recording config/clusteroperator/monitoring with fingerprint=21434c3e6b7f6d29e7a9b5c4504a797998703877549624740942c3a79d0da636
I0327 15:22:27.635511 1 recorder.go:75] Recording config/clusteroperator/network with fingerprint=7e95595953b173a2bf2afa37dbd437fe607e18759febdfcb384412a5f35cc11a
I0327 15:22:27.635521 1 recorder.go:75] Recording config/clusteroperator/network.operator.openshift.io/operatorpki/openshift-ovn-kubernetes/ovn with fingerprint=626a89d20e0deaed5b6dfb533acfe65f4bb1618bd200a703b62e60c5d16d94ab
I0327 15:22:27.635531 1 recorder.go:75] Recording config/clusteroperator/network.operator.openshift.io/operatorpki/openshift-ovn-kubernetes/signer with fingerprint=90410b16914712b85b3c4578716ad8c0ae072e688f4cd1e022bf76f20da3506d
I0327 15:22:27.635556 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/network/cluster with fingerprint=a93d15eaecb455a0e40ecb2826eeecc1533899204ddd3c3921d15ab70af7ae75
I0327 15:22:27.635579 1 recorder.go:75] Recording config/clusteroperator/node-tuning with fingerprint=70e6655c4e2d54e0992dc1f88ae6e9eb87bdd54ff09cd9eff202747fa779c3c9
I0327 15:22:27.635602 1 recorder.go:75] Recording config/clusteroperator/openshift-apiserver with fingerprint=3665eb04b2d9936a179e166a5d29989da9e52cea5fad45c7af62ed787b43c6a9
I0327 15:22:27.635615 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/openshiftapiserver/cluster with fingerprint=e712e6cf27339b441e4ed1f4cde91dbde7e952698ba93407e4457db63a4a4c76
I0327 15:22:27.635632 1 recorder.go:75] Recording config/clusteroperator/openshift-controller-manager with fingerprint=974d80fc1d48de5a601a76bf088b9c8d5b35b2d870a6b7d42d59684226aa18e9
I0327 15:22:27.635642 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/openshiftcontrollermanager/cluster with fingerprint=d71a0f4672f9b45d9fc8293bf1687afc650fd28d32e2e30de27523fe7b4eadf7
I0327 15:22:27.635655 1 recorder.go:75] Recording config/clusteroperator/openshift-samples with fingerprint=bd255049a691ff152ba2961503f7b2a6ac085994678cbd33fdd105e8823f88be
I0327 15:22:27.635670 1 recorder.go:75] Recording config/clusteroperator/operator-lifecycle-manager with fingerprint=c64c5448f9315a98a0070f6de3942f511ff45044763a521f4b80b6543dd146c5
I0327 15:22:27.635689 1 recorder.go:75] Recording config/clusteroperator/operator-lifecycle-manager-catalog with fingerprint=7d5c9c70df0af0dff7392987b9c6164574b61949fd5b9748a415332099910444
I0327 15:22:27.635708 1 recorder.go:75] Recording config/clusteroperator/operator-lifecycle-manager-packageserver with fingerprint=992af4f5a47b8bbc71332c678667f7746e478247e77774cf58001ebedac002c8
I0327 15:22:27.635721 1 recorder.go:75] Recording config/clusteroperator/service-ca with fingerprint=a964094a1119e07b9c6e9ec1db05cc31108bab53c09b2b923a3a8049473aabf6
I0327 15:22:27.635749 1 recorder.go:75] Recording config/clusteroperator/storage with fingerprint=49ede1575cadf4b849e2d7062051288dde3e87df03789e9eb4ea4aab335cd157
I0327 15:22:27.635766 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/clustercsidriver/ebs.csi.aws.com with fingerprint=510064d6f6bcced87ab5bd2ddaff3d0edd7f93f4a4f7af2641f29fc53ffab21e
I0327 15:22:27.635779 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/storage/cluster with fingerprint=8e480f8c1ce1b39baac42d8ec780c57c2592929ae0c801b61ffad49ba13f33ad
I0327 15:22:27.635786 1 gather.go:177] gatherer "clusterconfig" function "operators" took 1.864144935s to process 35 records
W0327 15:22:27.799375 1 gather_dvo_metrics.go:210] Failed to read the DVO metrics. Trying again.
W0327 15:22:28.799983 1 gather_dvo_metrics.go:210] Failed to read the DVO metrics. Trying again.
I0327 15:22:29.411957 1 gather_cluster_operator_pods_and_events.go:121] Found 35 pods with 78 containers
I0327 15:22:29.412257 1 gather_cluster_operator_pods_and_events.go:235] Maximum buffer size: 322638 bytes
I0327 15:22:29.412318 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for dns container dns-default-ch926 pod in namespace openshift-dns (previous: false).
I0327 15:22:29.642739 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for dns-default-ch926 pod in namespace openshift-dns for failing operator dns (previous: false): "container \"dns\" in pod \"dns-default-ch926\" is waiting to start: ContainerCreating"
I0327 15:22:29.642759 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"dns\" in pod \"dns-default-ch926\" is waiting to start: ContainerCreating"
I0327 15:22:29.642769 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for kube-rbac-proxy container dns-default-ch926 pod in namespace openshift-dns (previous: false).
W0327 15:22:29.799764 1 gather_dvo_metrics.go:210] Failed to read the DVO metrics. Trying again.
I0327 15:22:29.817393 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for dns-default-ch926 pod in namespace openshift-dns for failing operator kube-rbac-proxy (previous: false): "container \"kube-rbac-proxy\" in pod \"dns-default-ch926\" is waiting to start: ContainerCreating"
I0327 15:22:29.817412 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"kube-rbac-proxy\" in pod \"dns-default-ch926\" is waiting to start: ContainerCreating"
I0327 15:22:29.817422 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for dns-node-resolver container node-resolver-7988b pod in namespace openshift-dns (previous: false).
I0327 15:22:30.017652 1 gather_cluster_operator_pods_and_events.go:280] Error: "log buffer is empty"
I0327 15:22:30.017671 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for dns-node-resolver container node-resolver-9tmtv pod in namespace openshift-dns (previous: false).
I0327 15:22:30.237604 1 gather_cluster_operator_pods_and_events.go:280] Error: "log buffer is empty"
I0327 15:22:30.237625 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for dns-node-resolver container node-resolver-sgxrd pod in namespace openshift-dns (previous: false).
I0327 15:22:30.492353 1 gather_cluster_operator_pods_and_events.go:280] Error: "log buffer is empty"
I0327 15:22:30.492442 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for registry container image-registry-54cbd9bddf-98svq pod in namespace openshift-image-registry (previous: false).
I0327 15:22:30.616280 1 gather_cluster_operator_pods_and_events.go:280] Error: "log buffer is empty"
I0327 15:22:30.616354 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for registry container image-registry-54cbd9bddf-xhdpz pod in namespace openshift-image-registry (previous: false).
W0327 15:22:30.799473 1 gather_dvo_metrics.go:210] Failed to read the DVO metrics. Trying again.
W0327 15:22:30.799499 1 gather_dvo_metrics.go:117] Unable to read metrics from endpoint "http://deployment-validation-operator-metrics.openshift-deployment-validation-operator.svc:8383": DVO metrics service was not available within the 5s timeout: context deadline exceeded
I0327 15:22:30.799515 1 tasks_processing.go:74] worker 18 stopped.
E0327 15:22:30.799526 1 gather.go:140] gatherer "clusterconfig" function "dvo_metrics" failed with the error: DVO metrics service was not available within the 5s timeout: context deadline exceeded
I0327 15:22:30.799537 1 recorder.go:75] Recording config/dvo_metrics with fingerprint=e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
W0327 15:22:30.799550 1 gather.go:155] issue recording gatherer "clusterconfig" function "dvo_metrics" result "config/dvo_metrics" because of the warning: warning: the record with the same fingerprint "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855" was already recorded at path "config/configmaps/openshift-config/openshift-service-ca.crt/service-ca.crt", recording another one with a different path "config/dvo_metrics"
I0327 15:22:30.799563 1 gather.go:177] gatherer "clusterconfig" function "dvo_metrics" took 5.026155561s to process 1 records
I0327 15:22:30.813575 1 gather_cluster_operator_pods_and_events.go:280] Error: "log buffer is empty"
I0327 15:22:30.813619 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for registry container image-registry-55d4f94d9d-hjmgd pod in namespace openshift-image-registry (previous: false).
I0327 15:22:31.017764 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for image-registry-55d4f94d9d-hjmgd pod in namespace openshift-image-registry for failing operator registry (previous: false): "container \"registry\" in pod \"image-registry-55d4f94d9d-hjmgd\" is waiting to start: ContainerCreating"
I0327 15:22:31.017784 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"registry\" in pod \"image-registry-55d4f94d9d-hjmgd\" is waiting to start: ContainerCreating"
I0327 15:22:31.017798 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for node-ca container node-ca-5mcww pod in namespace openshift-image-registry (previous: false).
I0327 15:22:31.217022 1 gather_cluster_operator_pods_and_events.go:280] Error: "log buffer is empty"
I0327 15:22:31.217044 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for node-ca container node-ca-cd7ct pod in namespace openshift-image-registry (previous: false).
I0327 15:22:31.420559 1 gather_cluster_operator_pods_and_events.go:280] Error: "log buffer is empty"
I0327 15:22:31.420578 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for node-ca container node-ca-k6zq7 pod in namespace openshift-image-registry (previous: false).
I0327 15:22:31.628150 1 gather_cluster_operator_pods_and_events.go:280] Error: "log buffer is empty"
I0327 15:22:31.628220 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for router container router-default-68cf55c9f9-scmb6 pod in namespace openshift-ingress (previous: false).
I0327 15:22:31.823710 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for router-default-68cf55c9f9-scmb6 pod in namespace openshift-ingress for failing operator router (previous: false): "container \"router\" in pod \"router-default-68cf55c9f9-scmb6\" is waiting to start: ContainerCreating"
I0327 15:22:31.823729 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"router\" in pod \"router-default-68cf55c9f9-scmb6\" is waiting to start: ContainerCreating"
I0327 15:22:31.823768 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for router container router-default-7c466b5cc8-77gkj pod in namespace openshift-ingress (previous: false).
I0327 15:22:32.012741 1 gather_cluster_operator_pods_and_events.go:280] Error: "log buffer is empty"
I0327 15:22:32.012783 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for router container router-default-7c466b5cc8-kd4rr pod in namespace openshift-ingress (previous: false).
I0327 15:22:32.217759 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for router-default-7c466b5cc8-kd4rr pod in namespace openshift-ingress for failing operator router (previous: false): "container \"router\" in pod \"router-default-7c466b5cc8-kd4rr\" is waiting to start: ContainerCreating"
I0327 15:22:32.217780 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"router\" in pod \"router-default-7c466b5cc8-kd4rr\" is waiting to start: ContainerCreating"
I0327 15:22:32.217791 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for serve-healthcheck-canary container ingress-canary-gsg8d pod in namespace openshift-ingress-canary (previous: false).
I0327 15:22:32.417261 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for ingress-canary-gsg8d pod in namespace openshift-ingress-canary for failing operator serve-healthcheck-canary (previous: false): "container \"serve-healthcheck-canary\" in pod \"ingress-canary-gsg8d\" is waiting to start: ContainerCreating"
I0327 15:22:32.417280 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"serve-healthcheck-canary\" in pod \"ingress-canary-gsg8d\" is waiting to start: ContainerCreating"
I0327 15:22:32.417292 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for egress-router-binary-copy container multus-additional-cni-plugins-cccvt pod in namespace openshift-multus (previous: false).
I0327 15:22:32.617873 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for cni-plugins container multus-additional-cni-plugins-cccvt pod in namespace openshift-multus (previous: false).
I0327 15:22:32.817766 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for bond-cni-plugin container multus-additional-cni-plugins-cccvt pod in namespace openshift-multus (previous: false).
I0327 15:22:33.017770 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for routeoverride-cni container multus-additional-cni-plugins-cccvt pod in namespace openshift-multus (previous: false).
I0327 15:22:33.218939 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for whereabouts-cni-bincopy container multus-additional-cni-plugins-cccvt pod in namespace openshift-multus (previous: false).
I0327 15:22:33.417734 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for whereabouts-cni container multus-additional-cni-plugins-cccvt pod in namespace openshift-multus (previous: false).
I0327 15:22:33.616879 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for kube-multus-additional-cni-plugins container multus-additional-cni-plugins-cccvt pod in namespace openshift-multus (previous: false).
I0327 15:22:33.818669 1 gather_cluster_operator_pods_and_events.go:280] Error: "log buffer is empty"
I0327 15:22:33.818691 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for egress-router-binary-copy container multus-additional-cni-plugins-d42rr pod in namespace openshift-multus (previous: false).
I0327 15:22:34.016696 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for cni-plugins container multus-additional-cni-plugins-d42rr pod in namespace openshift-multus (previous: false).
I0327 15:22:34.229231 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for bond-cni-plugin container multus-additional-cni-plugins-d42rr pod in namespace openshift-multus (previous: false).
I0327 15:22:34.417759 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for routeoverride-cni container multus-additional-cni-plugins-d42rr pod in namespace openshift-multus (previous: false).
I0327 15:22:34.616821 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for whereabouts-cni-bincopy container multus-additional-cni-plugins-d42rr pod in namespace openshift-multus (previous: false).
I0327 15:22:34.816683 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for whereabouts-cni container multus-additional-cni-plugins-d42rr pod in namespace openshift-multus (previous: false).
I0327 15:22:35.016349 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for kube-multus-additional-cni-plugins container multus-additional-cni-plugins-d42rr pod in namespace openshift-multus (previous: false).
I0327 15:22:35.218536 1 gather_cluster_operator_pods_and_events.go:280] Error: "log buffer is empty"
I0327 15:22:35.218558 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for egress-router-binary-copy container multus-additional-cni-plugins-xsfhj pod in namespace openshift-multus (previous: false).
I0327 15:22:35.417504 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for cni-plugins container multus-additional-cni-plugins-xsfhj pod in namespace openshift-multus (previous: false).
I0327 15:22:35.619027 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for bond-cni-plugin container multus-additional-cni-plugins-xsfhj pod in namespace openshift-multus (previous: false).
I0327 15:22:35.817607 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for routeoverride-cni container multus-additional-cni-plugins-xsfhj pod in namespace openshift-multus (previous: false).
I0327 15:22:36.021539 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for whereabouts-cni-bincopy container multus-additional-cni-plugins-xsfhj pod in namespace openshift-multus (previous: false).
I0327 15:22:36.217806 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for whereabouts-cni container multus-additional-cni-plugins-xsfhj pod in namespace openshift-multus (previous: false).
I0327 15:22:36.418588 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for kube-multus-additional-cni-plugins container multus-additional-cni-plugins-xsfhj pod in namespace openshift-multus (previous: false).
I0327 15:22:36.618288 1 gather_cluster_operator_pods_and_events.go:280] Error: "log buffer is empty"
I0327 15:22:36.618336 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for kube-multus container multus-kxfc4 pod in namespace openshift-multus (previous: true).
I0327 15:22:36.817550 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for kube-multus container multus-kxfc4 pod in namespace openshift-multus (previous: false).
I0327 15:22:37.019846 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for kube-multus container multus-n49jb pod in namespace openshift-multus (previous: true).
I0327 15:22:37.217693 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for kube-multus container multus-n49jb pod in namespace openshift-multus (previous: false).
I0327 15:22:37.418503 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for kube-multus container multus-s5dwn pod in namespace openshift-multus (previous: true).
I0327 15:22:37.616422 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for kube-multus container multus-s5dwn pod in namespace openshift-multus (previous: false).
I0327 15:22:37.818056 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for network-metrics-daemon container network-metrics-daemon-cdrxt pod in namespace openshift-multus (previous: false).
I0327 15:22:38.016660 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for network-metrics-daemon-cdrxt pod in namespace openshift-multus for failing operator network-metrics-daemon (previous: false): "container \"network-metrics-daemon\" in pod \"network-metrics-daemon-cdrxt\" is waiting to start: ContainerCreating"
I0327 15:22:38.016680 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"network-metrics-daemon\" in pod \"network-metrics-daemon-cdrxt\" is waiting to start: ContainerCreating"
I0327 15:22:38.016688 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for kube-rbac-proxy container network-metrics-daemon-cdrxt pod in namespace openshift-multus (previous: false).
I0327 15:22:38.207449 1 tasks_processing.go:74] worker 56 stopped.
I0327 15:22:38.207488 1 recorder.go:75] Recording config/installplans with fingerprint=7b887df561a3a9e6ef0dc672845aa5d56e348505006b7496d3a2f83892b0c95b
I0327 15:22:38.207500 1 gather.go:177] gatherer "clusterconfig" function "install_plans" took 12.436964889s to process 1 records
I0327 15:22:38.216749 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for network-metrics-daemon-cdrxt pod in namespace openshift-multus for failing operator kube-rbac-proxy (previous: false): "container \"kube-rbac-proxy\" in pod \"network-metrics-daemon-cdrxt\" is waiting to start: ContainerCreating"
I0327 15:22:38.216762 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"kube-rbac-proxy\" in pod \"network-metrics-daemon-cdrxt\" is waiting to start: ContainerCreating"
I0327 15:22:38.216793 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for network-metrics-daemon container network-metrics-daemon-ch6cz pod in namespace openshift-multus (previous: false).
I0327 15:22:38.424245 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for network-metrics-daemon-ch6cz pod in namespace openshift-multus for failing operator network-metrics-daemon (previous: false): "container \"network-metrics-daemon\" in pod \"network-metrics-daemon-ch6cz\" is waiting to start: ContainerCreating"
I0327 15:22:38.424266 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"network-metrics-daemon\" in pod \"network-metrics-daemon-ch6cz\" is waiting to start: ContainerCreating"
I0327 15:22:38.424275 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for kube-rbac-proxy container network-metrics-daemon-ch6cz pod in namespace openshift-multus (previous: false).
I0327 15:22:38.617110 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for network-metrics-daemon-ch6cz pod in namespace openshift-multus for failing operator kube-rbac-proxy (previous: false): "container \"kube-rbac-proxy\" in pod \"network-metrics-daemon-ch6cz\" is waiting to start: ContainerCreating"
I0327 15:22:38.617132 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"kube-rbac-proxy\" in pod \"network-metrics-daemon-ch6cz\" is waiting to start: ContainerCreating"
I0327 15:22:38.617175 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for network-metrics-daemon container network-metrics-daemon-zgzng pod in namespace openshift-multus (previous: false).
I0327 15:22:38.817259 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for network-metrics-daemon-zgzng pod in namespace openshift-multus for failing operator network-metrics-daemon (previous: false): "container \"network-metrics-daemon\" in pod \"network-metrics-daemon-zgzng\" is waiting to start: ContainerCreating"
I0327 15:22:38.817279 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"network-metrics-daemon\" in pod \"network-metrics-daemon-zgzng\" is waiting to start: ContainerCreating"
I0327 15:22:38.817289 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for kube-rbac-proxy container network-metrics-daemon-zgzng pod in namespace openshift-multus (previous: false).
I0327 15:22:38.946464 1 configmapobserver.go:84] configmaps "insights-config" not found
I0327 15:22:38.980248 1 tasks_processing.go:74] worker 26 stopped.
I0327 15:22:38.980549 1 recorder.go:75] Recording config/serviceaccounts with fingerprint=5f3315562ab922847b80910efb7b38b1b30885aa3cdda0895bd6e02e66f43fde
I0327 15:22:38.980568 1 gather.go:177] gatherer "clusterconfig" function "service_accounts" took 13.207354922s to process 1 records
I0327 15:22:39.017758 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for network-metrics-daemon-zgzng pod in namespace openshift-multus for failing operator kube-rbac-proxy (previous: false): "container \"kube-rbac-proxy\" in pod \"network-metrics-daemon-zgzng\" is waiting to start: ContainerCreating"
I0327 15:22:39.017777 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"kube-rbac-proxy\" in pod \"network-metrics-daemon-zgzng\" is waiting to start: ContainerCreating"
I0327 15:22:39.017828 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for ovn-controller container ovnkube-node-2kmgp pod in namespace openshift-ovn-kubernetes (previous: true).
I0327 15:22:39.218585 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for ovnkube-node-2kmgp pod in namespace openshift-ovn-kubernetes for failing operator ovn-controller (previous: true): "previous terminated container \"ovn-controller\" in pod \"ovnkube-node-2kmgp\" not found"
I0327 15:22:39.218607 1 gather_cluster_operator_pods_and_events.go:280] Error: "previous terminated container \"ovn-controller\" in pod \"ovnkube-node-2kmgp\" not found"
I0327 15:22:39.218616 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for ovn-acl-logging container ovnkube-node-2kmgp pod in namespace openshift-ovn-kubernetes (previous: true).
I0327 15:22:39.417931 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for ovnkube-node-2kmgp pod in namespace openshift-ovn-kubernetes for failing operator ovn-acl-logging (previous: true): "previous terminated container \"ovn-acl-logging\" in pod \"ovnkube-node-2kmgp\" not found"
I0327 15:22:39.417955 1 gather_cluster_operator_pods_and_events.go:280] Error: "previous terminated container \"ovn-acl-logging\" in pod \"ovnkube-node-2kmgp\" not found"
I0327 15:22:39.417968 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for kube-rbac-proxy-node container ovnkube-node-2kmgp pod in namespace openshift-ovn-kubernetes (previous: true).
I0327 15:22:39.596989 1 configmapobserver.go:84] configmaps "insights-config" not found
I0327 15:22:39.617306 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for ovnkube-node-2kmgp pod in namespace openshift-ovn-kubernetes for failing operator kube-rbac-proxy-node (previous: true): "previous terminated container \"kube-rbac-proxy-node\" in pod \"ovnkube-node-2kmgp\" not found"
I0327 15:22:39.617327 1 gather_cluster_operator_pods_and_events.go:280] Error: "previous terminated container \"kube-rbac-proxy-node\" in pod \"ovnkube-node-2kmgp\" not found"
I0327 15:22:39.617336 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for kube-rbac-proxy-ovn-metrics container ovnkube-node-2kmgp pod in namespace openshift-ovn-kubernetes (previous: true).
I0327 15:22:39.794455 1 configmapobserver.go:84] configmaps "insights-config" not found
I0327 15:22:39.817531 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for ovnkube-node-2kmgp pod in namespace openshift-ovn-kubernetes for failing operator kube-rbac-proxy-ovn-metrics (previous: true): "previous terminated container \"kube-rbac-proxy-ovn-metrics\" in pod \"ovnkube-node-2kmgp\" not found"
I0327 15:22:39.817548 1 gather_cluster_operator_pods_and_events.go:280] Error: "previous terminated container \"kube-rbac-proxy-ovn-metrics\" in pod \"ovnkube-node-2kmgp\" not found"
I0327 15:22:39.817558 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for northd container ovnkube-node-2kmgp pod in namespace openshift-ovn-kubernetes (previous: true).
I0327 15:22:40.017273 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for ovnkube-node-2kmgp pod in namespace openshift-ovn-kubernetes for failing operator northd (previous: true): "previous terminated container \"northd\" in pod \"ovnkube-node-2kmgp\" not found"
I0327 15:22:40.017443 1 gather_cluster_operator_pods_and_events.go:280] Error: "previous terminated container \"northd\" in pod \"ovnkube-node-2kmgp\" not found"
I0327 15:22:40.017456 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for nbdb container ovnkube-node-2kmgp pod in namespace openshift-ovn-kubernetes (previous: true).
I0327 15:22:40.216705 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for ovnkube-node-2kmgp pod in namespace openshift-ovn-kubernetes for failing operator nbdb (previous: true): "previous terminated container \"nbdb\" in pod \"ovnkube-node-2kmgp\" not found"
I0327 15:22:40.216724 1 gather_cluster_operator_pods_and_events.go:280] Error: "previous terminated container \"nbdb\" in pod \"ovnkube-node-2kmgp\" not found"
I0327 15:22:40.216735 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for sbdb container ovnkube-node-2kmgp pod in namespace openshift-ovn-kubernetes (previous: true).
I0327 15:22:40.417095 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for ovnkube-node-2kmgp pod in namespace openshift-ovn-kubernetes for failing operator sbdb (previous: true): "previous terminated container \"sbdb\" in pod \"ovnkube-node-2kmgp\" not found"
I0327 15:22:40.417113 1 gather_cluster_operator_pods_and_events.go:280] Error: "previous terminated container \"sbdb\" in pod \"ovnkube-node-2kmgp\" not found"
I0327 15:22:40.417122 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for ovnkube-controller container ovnkube-node-2kmgp pod in namespace openshift-ovn-kubernetes (previous: true).
I0327 15:22:40.621828 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for ovn-controller container ovnkube-node-2kmgp pod in namespace openshift-ovn-kubernetes (previous: false).
I0327 15:22:40.848817 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for ovn-acl-logging container ovnkube-node-2kmgp pod in namespace openshift-ovn-kubernetes (previous: false).
I0327 15:22:41.021893 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for kube-rbac-proxy-node container ovnkube-node-2kmgp pod in namespace openshift-ovn-kubernetes (previous: false).
I0327 15:22:41.220274 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for kube-rbac-proxy-ovn-metrics container ovnkube-node-2kmgp pod in namespace openshift-ovn-kubernetes (previous: false).
I0327 15:22:41.423682 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for northd container ovnkube-node-2kmgp pod in namespace openshift-ovn-kubernetes (previous: false).
I0327 15:22:41.619172 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for nbdb container ovnkube-node-2kmgp pod in namespace openshift-ovn-kubernetes (previous: false).
I0327 15:22:41.818925 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for sbdb container ovnkube-node-2kmgp pod in namespace openshift-ovn-kubernetes (previous: false).
I0327 15:22:42.019367 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for ovnkube-controller container ovnkube-node-2kmgp pod in namespace openshift-ovn-kubernetes (previous: false).
I0327 15:22:42.221065 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for ovn-controller container ovnkube-node-mphjp pod in namespace openshift-ovn-kubernetes (previous: false).
I0327 15:22:42.434790 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for ovn-acl-logging container ovnkube-node-mphjp pod in namespace openshift-ovn-kubernetes (previous: false).
I0327 15:22:42.622314 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for kube-rbac-proxy-node container ovnkube-node-mphjp pod in namespace openshift-ovn-kubernetes (previous: false).
I0327 15:22:42.835373 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for kube-rbac-proxy-ovn-metrics container ovnkube-node-mphjp pod in namespace openshift-ovn-kubernetes (previous: false).
I0327 15:22:43.023246 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for northd container ovnkube-node-mphjp pod in namespace openshift-ovn-kubernetes (previous: false).
I0327 15:22:43.219002 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for nbdb container ovnkube-node-mphjp pod in namespace openshift-ovn-kubernetes (previous: false).
I0327 15:22:43.419261 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for sbdb container ovnkube-node-mphjp pod in namespace openshift-ovn-kubernetes (previous: false).
I0327 15:22:43.618841 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for ovnkube-controller container ovnkube-node-mphjp pod in namespace openshift-ovn-kubernetes (previous: false).
I0327 15:22:43.821332 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for ovn-controller container ovnkube-node-zpdq2 pod in namespace openshift-ovn-kubernetes (previous: true).
I0327 15:22:44.037870 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for ovnkube-node-zpdq2 pod in namespace openshift-ovn-kubernetes for failing operator ovn-controller (previous: true): "previous terminated container \"ovn-controller\" in pod \"ovnkube-node-zpdq2\" not found"
I0327 15:22:44.037890 1 gather_cluster_operator_pods_and_events.go:280] Error: "previous terminated container \"ovn-controller\" in pod \"ovnkube-node-zpdq2\" not found"
I0327 15:22:44.037899 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for ovn-acl-logging container ovnkube-node-zpdq2 pod in namespace openshift-ovn-kubernetes (previous: true).
I0327 15:22:44.220262 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for ovnkube-node-zpdq2 pod in namespace openshift-ovn-kubernetes for failing operator ovn-acl-logging (previous: true): "previous terminated container \"ovn-acl-logging\" in pod \"ovnkube-node-zpdq2\" not found"
I0327 15:22:44.220281 1 gather_cluster_operator_pods_and_events.go:280] Error: "previous terminated container \"ovn-acl-logging\" in pod \"ovnkube-node-zpdq2\" not found"
I0327 15:22:44.220293 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for kube-rbac-proxy-node container ovnkube-node-zpdq2 pod in namespace openshift-ovn-kubernetes (previous: true).
I0327 15:22:44.419940 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for ovnkube-node-zpdq2 pod in namespace openshift-ovn-kubernetes for failing operator kube-rbac-proxy-node (previous: true): "previous terminated container \"kube-rbac-proxy-node\" in pod \"ovnkube-node-zpdq2\" not found"
I0327 15:22:44.419959 1 gather_cluster_operator_pods_and_events.go:280] Error: "previous terminated container \"kube-rbac-proxy-node\" in pod \"ovnkube-node-zpdq2\" not found"
I0327 15:22:44.419969 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for kube-rbac-proxy-ovn-metrics container ovnkube-node-zpdq2 pod in namespace openshift-ovn-kubernetes (previous: true).
I0327 15:22:44.630313 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for ovnkube-node-zpdq2 pod in namespace openshift-ovn-kubernetes for failing operator kube-rbac-proxy-ovn-metrics (previous: true): "previous terminated container \"kube-rbac-proxy-ovn-metrics\" in pod \"ovnkube-node-zpdq2\" not found"
I0327 15:22:44.630339 1 gather_cluster_operator_pods_and_events.go:280] Error: "previous terminated container \"kube-rbac-proxy-ovn-metrics\" in pod \"ovnkube-node-zpdq2\" not found"
I0327 15:22:44.630359 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for northd container ovnkube-node-zpdq2 pod in namespace openshift-ovn-kubernetes (previous: true).
I0327 15:22:44.819941 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for ovnkube-node-zpdq2 pod in namespace openshift-ovn-kubernetes for failing operator northd (previous: true): "previous terminated container \"northd\" in pod \"ovnkube-node-zpdq2\" not found"
I0327 15:22:44.819962 1 gather_cluster_operator_pods_and_events.go:280] Error: "previous terminated container \"northd\" in pod \"ovnkube-node-zpdq2\" not found"
I0327 15:22:44.819974 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for nbdb container ovnkube-node-zpdq2 pod in namespace openshift-ovn-kubernetes (previous: true).
I0327 15:22:45.020344 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for ovnkube-node-zpdq2 pod in namespace openshift-ovn-kubernetes for failing operator nbdb (previous: true): "previous terminated container \"nbdb\" in pod \"ovnkube-node-zpdq2\" not found"
I0327 15:22:45.020364 1 gather_cluster_operator_pods_and_events.go:280] Error: "previous terminated container \"nbdb\" in pod \"ovnkube-node-zpdq2\" not found"
I0327 15:22:45.020375 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for sbdb container ovnkube-node-zpdq2 pod in namespace openshift-ovn-kubernetes (previous: true).
I0327 15:22:45.219043 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for ovnkube-node-zpdq2 pod in namespace openshift-ovn-kubernetes for failing operator sbdb (previous: true): "previous terminated container \"sbdb\" in pod \"ovnkube-node-zpdq2\" not found"
I0327 15:22:45.219062 1 gather_cluster_operator_pods_and_events.go:280] Error: "previous terminated container \"sbdb\" in pod \"ovnkube-node-zpdq2\" not found"
I0327 15:22:45.219073 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for ovnkube-controller container ovnkube-node-zpdq2 pod in namespace openshift-ovn-kubernetes (previous: true).
I0327 15:22:45.422802 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for ovn-controller container ovnkube-node-zpdq2 pod in namespace openshift-ovn-kubernetes (previous: false).
I0327 15:22:45.623330 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for ovn-acl-logging container ovnkube-node-zpdq2 pod in namespace openshift-ovn-kubernetes (previous: false).
I0327 15:22:45.822353 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for kube-rbac-proxy-node container ovnkube-node-zpdq2 pod in namespace openshift-ovn-kubernetes (previous: false).
I0327 15:22:46.022201 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for kube-rbac-proxy-ovn-metrics container ovnkube-node-zpdq2 pod in namespace openshift-ovn-kubernetes (previous: false).
I0327 15:22:46.223405 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for northd container ovnkube-node-zpdq2 pod in namespace openshift-ovn-kubernetes (previous: false).
I0327 15:22:46.421010 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for nbdb container ovnkube-node-zpdq2 pod in namespace openshift-ovn-kubernetes (previous: false).
I0327 15:22:46.620950 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for sbdb container ovnkube-node-zpdq2 pod in namespace openshift-ovn-kubernetes (previous: false).
I0327 15:22:46.820503 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for ovnkube-controller container ovnkube-node-zpdq2 pod in namespace openshift-ovn-kubernetes (previous: false).
I0327 15:22:47.035354 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for check-endpoints container network-check-source-6b8cd5b79b-xrszx pod in namespace openshift-network-diagnostics (previous: false).
I0327 15:22:47.218403 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for network-check-target-container container network-check-target-qztcs pod in namespace openshift-network-diagnostics (previous: false).
I0327 15:22:47.418409 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for network-check-target-qztcs pod in namespace openshift-network-diagnostics for failing operator network-check-target-container (previous: false): "container \"network-check-target-container\" in pod \"network-check-target-qztcs\" is waiting to start: ContainerCreating"
I0327 15:22:47.418428 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"network-check-target-container\" in pod \"network-check-target-qztcs\" is waiting to start: ContainerCreating"
I0327 15:22:47.418454 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for network-check-target-container container network-check-target-wtpjg pod in namespace openshift-network-diagnostics (previous: false).
I0327 15:22:47.618988 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for network-check-target-wtpjg pod in namespace openshift-network-diagnostics for failing operator network-check-target-container (previous: false): "container \"network-check-target-container\" in pod \"network-check-target-wtpjg\" is waiting to start: ContainerCreating"
I0327 15:22:47.619008 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"network-check-target-container\" in pod \"network-check-target-wtpjg\" is waiting to start: ContainerCreating"
I0327 15:22:47.619042 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for network-check-target-container container network-check-target-zrtfk pod in namespace openshift-network-diagnostics (previous: false).
I0327 15:22:47.818701 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for network-check-target-zrtfk pod in namespace openshift-network-diagnostics for failing operator network-check-target-container (previous: false): "container \"network-check-target-container\" in pod \"network-check-target-zrtfk\" is waiting to start: ContainerCreating"
I0327 15:22:47.818722 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"network-check-target-container\" in pod \"network-check-target-zrtfk\" is waiting to start: ContainerCreating"
I0327 15:22:47.818756 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for networking-console-plugin container networking-console-plugin-6ddbfdf749-9chfw pod in namespace openshift-network-console (previous: false).
I0327 15:22:48.018502 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for networking-console-plugin-6ddbfdf749-9chfw pod in namespace openshift-network-console for failing operator networking-console-plugin (previous: false): "container \"networking-console-plugin\" in pod \"networking-console-plugin-6ddbfdf749-9chfw\" is waiting to start: ContainerCreating"
I0327 15:22:48.018521 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"networking-console-plugin\" in pod \"networking-console-plugin-6ddbfdf749-9chfw\" is waiting to start: ContainerCreating"
I0327 15:22:48.018550 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for networking-console-plugin container networking-console-plugin-6ddbfdf749-b4r7z pod in namespace openshift-network-console (previous: false).
I0327 15:22:48.218777 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for networking-console-plugin-6ddbfdf749-b4r7z pod in namespace openshift-network-console for failing operator networking-console-plugin (previous: false): "container \"networking-console-plugin\" in pod \"networking-console-plugin-6ddbfdf749-b4r7z\" is waiting to start: ContainerCreating"
I0327 15:22:48.218799 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"networking-console-plugin\" in pod \"networking-console-plugin-6ddbfdf749-b4r7z\" is waiting to start: ContainerCreating"
I0327 15:22:48.218809 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for iptables-alerter container iptables-alerter-4mh9z pod in namespace openshift-network-operator (previous: false).
I0327 15:22:48.417612 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for iptables-alerter container iptables-alerter-jxtfb pod in namespace openshift-network-operator (previous: false).
I0327 15:22:48.617949 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for iptables-alerter container iptables-alerter-wnlc4 pod in namespace openshift-network-operator (previous: false).
I0327 15:22:48.819640 1 tasks_processing.go:74] worker 5 stopped.
I0327 15:22:48.819744 1 recorder.go:75] Recording events/openshift-dns-operator with fingerprint=1462554fd7049b81993541da5df05ede5a77f56cef7ad9b325bb0ca3cba89c84
I0327 15:22:48.819786 1 recorder.go:75] Recording events/openshift-dns with fingerprint=709d82bd35642bc1a07e63ca9d78e314dafc5c0831263ccab150412ca564f88e
I0327 15:22:48.819870 1 recorder.go:75] Recording events/openshift-image-registry with fingerprint=2a649ae9bde9c746878ec9398370c87bfd365b89d5e56757a67dd496ad1255ce
I0327 15:22:48.819897 1 recorder.go:75] Recording events/openshift-ingress-operator with fingerprint=33cb96f6260886e21974a9ca52eccf147ae75458c5e673eb9ba3f79b6a8a5bf2
I0327 15:22:48.819939 1 recorder.go:75] Recording events/openshift-ingress with fingerprint=633342e52e6116011b71f8ba5247539ccd7e619ef898ec71778fdf0b0722d61f
I0327 15:22:48.819951 1 recorder.go:75] Recording events/openshift-ingress-canary with fingerprint=cdd9a2f6530b5d3b9dfa9b20024d9f6c7ddde5c50b06577065b7c13cc2ec36ca
I0327 15:22:48.820129 1 recorder.go:75] Recording events/openshift-multus with fingerprint=db824e1027b86d285d4555dd129c5ecf64dc8a799027083fd7880be22ca8da05
I0327 15:22:48.820279 1 recorder.go:75] Recording events/openshift-ovn-kubernetes with fingerprint=c74b73e0c60d3661bb60cebb4a08d5f00444091f5094038970856ff758dfc149
I0327 15:22:48.820320 1 recorder.go:75] Recording events/openshift-network-diagnostics with fingerprint=37d5f9b8c110b30646c59861ecb2c0b4f851d56e57ebbf65ed46f17c9ee832e2
I0327 15:22:48.820334 1 recorder.go:75] Recording events/openshift-network-node-identity with fingerprint=e46453d9ccb571f9f312317266778ae584ebf6a25842afe8ff1b60cbecfa44fe
I0327 15:22:48.820353 1 recorder.go:75] Recording events/openshift-network-console with fingerprint=968c6c2dfdd4b9cf4b5df63681377c8e7bccd2316f4e2b0875fa9c37a8df8752
I0327 15:22:48.820402 1 recorder.go:75] Recording events/openshift-network-operator with fingerprint=5f03caaffebc289cdc3a4fd860b0b7f46cc94215f158cc6111d5add0e0a79504
I0327 15:22:48.820541 1 recorder.go:75] Recording config/pod/openshift-image-registry/image-registry-54cbd9bddf-98svq with fingerprint=56970c1cc1171e2840108b078997e81120ba96525313d0169b8c86e3d426bee6
I0327 15:22:48.820626 1 recorder.go:75] Recording config/pod/openshift-image-registry/image-registry-54cbd9bddf-xhdpz with fingerprint=bf56338b6cd5f5f9efa1e09fb5dc41c50a0d193bc39ea3394a55cbea0291b056
I0327 15:22:48.820722 1 recorder.go:75] Recording config/pod/openshift-image-registry/image-registry-55d4f94d9d-hjmgd with fingerprint=be99a08a24fe654e0bc15120ad095a535aec912275f6da95460c1d8b637e4d58
I0327 15:22:48.820821 1 recorder.go:75] Recording config/pod/openshift-ingress/router-default-68cf55c9f9-scmb6 with fingerprint=7e0bdc3100ea845763ba01dbdf00dcb7bc50fb784b67884289197f84ef782930
I0327 15:22:48.820895 1 recorder.go:75] Recording config/pod/openshift-ingress/router-default-7c466b5cc8-77gkj with fingerprint=8dee964842d0eb48125e6000a7c6c8a9a8afa86db636b85c38abd157b56d391e
I0327 15:22:48.820982 1 recorder.go:75] Recording config/pod/openshift-ingress/router-default-7c466b5cc8-kd4rr with fingerprint=55be9bc940759cdc9fbda750a43f7c2c06083d2c4060b7457bdd0762658174e4
I0327 15:22:48.821006 1 recorder.go:75] Recording config/pod/openshift-multus/logs/multus-additional-cni-plugins-cccvt/egress-router-binary-copy_current.log with fingerprint=d1193d7fa1f079afe7c9d227da7d99a2e5276f89a136c56a9770c86339ce9297
I0327 15:22:48.821027 1 recorder.go:75] Recording config/pod/openshift-multus/logs/multus-additional-cni-plugins-cccvt/cni-plugins_current.log with fingerprint=e9805860bb4eab06b2aa9085cc2aa1247949a659519b763f5e68e78536d5a0b2
I0327 15:22:48.821034 1 recorder.go:75] Recording config/pod/openshift-multus/logs/multus-additional-cni-plugins-cccvt/bond-cni-plugin_current.log with fingerprint=624d7b3aaeffa9bc576afc40f9b8524cf67efe2b9aa31af86b392d5e1dfd6544
I0327 15:22:48.821038 1 recorder.go:75] Recording config/pod/openshift-multus/logs/multus-additional-cni-plugins-cccvt/routeoverride-cni_current.log with fingerprint=1a5d5bd519974d35030937669869d788ae21d298066d67f46ea4a4edbc6d4dcc
I0327 15:22:48.821044 1 recorder.go:75] Recording config/pod/openshift-multus/logs/multus-additional-cni-plugins-cccvt/whereabouts-cni-bincopy_current.log with fingerprint=7599f24c1215eef1601e5b11672339c00be0a6c53d5811068301871f403662a5
I0327 15:22:48.821048 1 recorder.go:75] Recording config/pod/openshift-multus/logs/multus-additional-cni-plugins-cccvt/whereabouts-cni_current.log with fingerprint=43f791191f6db7362fd1a70e3bc40c4da64547ae1341a29eb02abcd9ebe92943
I0327 15:22:48.821053 1 recorder.go:75] Recording config/pod/openshift-multus/logs/multus-additional-cni-plugins-d42rr/egress-router-binary-copy_current.log with fingerprint=ea84d05b2a3e43937face7bd6b565a99adaf9d2c7531fa306dcd132622ac5d5c
I0327 15:22:48.821058 1 recorder.go:75] Recording config/pod/openshift-multus/logs/multus-additional-cni-plugins-d42rr/cni-plugins_current.log with fingerprint=9f66765de31b1bf8155a0e28b6c881768abdd0cf9658f563e61e64bee98e5aea
I0327 15:22:48.821064 1 recorder.go:75] Recording config/pod/openshift-multus/logs/multus-additional-cni-plugins-d42rr/bond-cni-plugin_current.log with fingerprint=fb561c1065ccce8fe03c6efb078aba002448d3c1e1a6ab1366e2df1c273c89db
I0327 15:22:48.821069 1 recorder.go:75] Recording config/pod/openshift-multus/logs/multus-additional-cni-plugins-d42rr/routeoverride-cni_current.log with fingerprint=b6f336fbb11a52d45c5d5f22b36f5f5288d3c18219a0f21fb4172b81afb8371e
I0327 15:22:48.821073 1 recorder.go:75] Recording config/pod/openshift-multus/logs/multus-additional-cni-plugins-d42rr/whereabouts-cni-bincopy_current.log with fingerprint=345b2ac58e38b789a0c1f88b299fc437859c197e0c7ecda669a80e5286b39961
I0327 15:22:48.821077 1 recorder.go:75] Recording config/pod/openshift-multus/logs/multus-additional-cni-plugins-d42rr/whereabouts-cni_current.log with fingerprint=7766a3d9aca9f8f62107d5f8d439d447487f7b9b9c10e67b0611fa325f3ae9c2
I0327 15:22:48.821083 1 recorder.go:75] Recording config/pod/openshift-multus/logs/multus-additional-cni-plugins-xsfhj/egress-router-binary-copy_current.log with fingerprint=f074275794ccd50213aec446d3707d33d76a729c1fca546de788276c4564f3f9
I0327 15:22:48.821087 1 recorder.go:75] Recording config/pod/openshift-multus/logs/multus-additional-cni-plugins-xsfhj/cni-plugins_current.log with fingerprint=c79bd698efb21adb436f4dce577799d01aba5eb493da7c1f3e2cc90255d4a2a4
I0327 15:22:48.821093 1 recorder.go:75] Recording config/pod/openshift-multus/logs/multus-additional-cni-plugins-xsfhj/bond-cni-plugin_current.log with fingerprint=bfd805145dafb6f4f9537dd0ce707ba20a957a263bc9df9f1fb023f84cc7cff5
I0327 15:22:48.821098 1 recorder.go:75] Recording config/pod/openshift-multus/logs/multus-additional-cni-plugins-xsfhj/routeoverride-cni_current.log with fingerprint=db400b5c2595b5ef3b302ef586059e8008b201b10f6514b9b273b59d85cfb21c
I0327 15:22:48.821115 1 recorder.go:75] Recording config/pod/openshift-multus/logs/multus-additional-cni-plugins-xsfhj/whereabouts-cni-bincopy_current.log with fingerprint=21a7594be910f5fcd5a88d81a0769f42f766eb946993ffa21e53171133366d90
I0327 15:22:48.821121 1 recorder.go:75] Recording config/pod/openshift-multus/logs/multus-additional-cni-plugins-xsfhj/whereabouts-cni_current.log with fingerprint=287f65f23d8229a718e54e387c1c03efef7355869106f7a4b2fad68506fc951b
I0327 15:22:48.821232 1 recorder.go:75] Recording config/pod/openshift-multus/multus-kxfc4 with fingerprint=6badbea51a8654b6dbbe3282ec0b959c50d5240f67c754a3d5b6e3c22d7bf4ea
E0327 15:22:48.821250 1 gather.go:161] error recording gatherer "clusterconfig" function "operators_pods_and_events" result "config/pod/openshift-multus/multus-kxfc4.json" because of the error: the record with the same name "config/pod/openshift-multus/multus-kxfc4.json" was already recorded and had the fingerprint "6badbea51a8654b6dbbe3282ec0b959c50d5240f67c754a3d5b6e3c22d7bf4ea", overwriting with the record having fingerprint "6badbea51a8654b6dbbe3282ec0b959c50d5240f67c754a3d5b6e3c22d7bf4ea"
W0327 15:22:48.821262 1 gather.go:155] issue recording gatherer "clusterconfig" function "operators_pods_and_events" result "config/pod/openshift-multus/multus-kxfc4.json" because of the warning: warning: the record with the same fingerprint "6badbea51a8654b6dbbe3282ec0b959c50d5240f67c754a3d5b6e3c22d7bf4ea" was already recorded at path "config/pod/openshift-multus/multus-kxfc4.json", recording another one with a different path "config/pod/openshift-multus/multus-kxfc4.json"
I0327 15:22:48.821277 1 recorder.go:75] Recording config/pod/openshift-multus/logs/multus-kxfc4/kube-multus_previous.log with fingerprint=a831d21b85598d32c83fcf1737639bd7c893b41fbc18774323df543687df8b95
I0327 15:22:48.821380 1 recorder.go:75] Recording config/pod/openshift-multus/logs/multus-kxfc4/kube-multus_current.log with fingerprint=85584769ea3e74418b72118a3b619decd11fe726b75053a11a8c525ecea1b7cd
I0327 15:22:48.821479 1 recorder.go:75] Recording config/pod/openshift-multus/multus-n49jb with fingerprint=3fa9270e1043bf49a75c2022210c653e71911ebcf8331103fb4a4d2e54276046
E0327 15:22:48.821492 1 gather.go:161] error recording gatherer "clusterconfig" function "operators_pods_and_events" result "config/pod/openshift-multus/multus-n49jb.json" because of the error: the record with the same name "config/pod/openshift-multus/multus-n49jb.json" was already recorded and had the fingerprint "3fa9270e1043bf49a75c2022210c653e71911ebcf8331103fb4a4d2e54276046", overwriting with the record having fingerprint "3fa9270e1043bf49a75c2022210c653e71911ebcf8331103fb4a4d2e54276046"
W0327 15:22:48.821500 1 gather.go:155] issue recording gatherer "clusterconfig" function "operators_pods_and_events" result "config/pod/openshift-multus/multus-n49jb.json" because of the warning: warning: the record with the same fingerprint "3fa9270e1043bf49a75c2022210c653e71911ebcf8331103fb4a4d2e54276046" was already recorded at path "config/pod/openshift-multus/multus-n49jb.json", recording another one with a different path "config/pod/openshift-multus/multus-n49jb.json"
I0327 15:22:48.821509 1 recorder.go:75] Recording config/pod/openshift-multus/logs/multus-n49jb/kube-multus_previous.log with fingerprint=ba090ff98faa8d623e6df876f33514f5e53a67cd2a911aab5843e71445e6ea5e
I0327 15:22:48.821517 1 recorder.go:75] Recording config/pod/openshift-multus/logs/multus-n49jb/kube-multus_current.log with fingerprint=ba090ff98faa8d623e6df876f33514f5e53a67cd2a911aab5843e71445e6ea5e
W0327 15:22:48.821524 1 gather.go:155] issue recording gatherer "clusterconfig" function "operators_pods_and_events" result "config/pod/openshift-multus/logs/multus-n49jb/kube-multus_current.log" because of the warning: warning: the record with the same fingerprint "ba090ff98faa8d623e6df876f33514f5e53a67cd2a911aab5843e71445e6ea5e" was already recorded at path "config/pod/openshift-multus/logs/multus-n49jb/kube-multus_previous.log", recording another one with a different path "config/pod/openshift-multus/logs/multus-n49jb/kube-multus_current.log"
I0327 15:22:48.821617 1 recorder.go:75] Recording config/pod/openshift-multus/multus-s5dwn with fingerprint=e89143a66217bd623d69173c314ef14129b3260a9b5f5c6e48973d47e56b50c5
E0327 15:22:48.821629 1 gather.go:161] error recording gatherer "clusterconfig" function "operators_pods_and_events" result "config/pod/openshift-multus/multus-s5dwn.json" because of the error: the record with the same name "config/pod/openshift-multus/multus-s5dwn.json" was already recorded and had the fingerprint "e89143a66217bd623d69173c314ef14129b3260a9b5f5c6e48973d47e56b50c5", overwriting with the record having fingerprint "e89143a66217bd623d69173c314ef14129b3260a9b5f5c6e48973d47e56b50c5"
W0327 15:22:48.821637 1 gather.go:155] issue recording gatherer "clusterconfig" function "operators_pods_and_events" result "config/pod/openshift-multus/multus-s5dwn.json" because of the warning: warning: the record with the same fingerprint "e89143a66217bd623d69173c314ef14129b3260a9b5f5c6e48973d47e56b50c5" was already recorded at path "config/pod/openshift-multus/multus-s5dwn.json", recording another one with a different path "config/pod/openshift-multus/multus-s5dwn.json"
I0327 15:22:48.821645 1 recorder.go:75] Recording config/pod/openshift-multus/logs/multus-s5dwn/kube-multus_previous.log with fingerprint=a3b929439b440f9d67a2147160f9bf32df0de684b31572a5400552deafe9d213
I0327 15:22:48.821652 1 recorder.go:75] Recording config/pod/openshift-multus/logs/multus-s5dwn/kube-multus_current.log with fingerprint=a3b929439b440f9d67a2147160f9bf32df0de684b31572a5400552deafe9d213
W0327 15:22:48.821659 1 gather.go:155] issue recording gatherer "clusterconfig" function "operators_pods_and_events" result "config/pod/openshift-multus/logs/multus-s5dwn/kube-multus_current.log" because of the warning: warning: the record with the same fingerprint "a3b929439b440f9d67a2147160f9bf32df0de684b31572a5400552deafe9d213" was already recorded at path "config/pod/openshift-multus/logs/multus-s5dwn/kube-multus_previous.log", recording another one with a different path "config/pod/openshift-multus/logs/multus-s5dwn/kube-multus_current.log"
I0327 15:22:48.821718 1 recorder.go:75] Recording config/pod/openshift-multus/network-metrics-daemon-cdrxt with fingerprint=0a8190a5c94fdd20a41cc682b910a9006d481587fa3435986c5a3ce5f1ae2477
I0327 15:22:48.821779 1 recorder.go:75] Recording config/pod/openshift-multus/network-metrics-daemon-ch6cz with fingerprint=7526421ed2ae8bcc73661d5309de3944c381eec945f9aea3a8110d9ac3f9d637
I0327 15:22:48.821866 1 recorder.go:75] Recording config/pod/openshift-multus/network-metrics-daemon-zgzng with fingerprint=88f2a0149f211af4a922a602c88682f7f4abdff32bf059d6dfeae9a070bdd305
I0327 15:22:48.822118 1 recorder.go:75] Recording config/pod/openshift-ovn-kubernetes/ovnkube-node-2kmgp with fingerprint=f363328c3f65eff37a73fc36f444c1f353396190b39cf5f5265d5d93b09ec8a8
E0327 15:22:48.822131 1 gather.go:161] error recording gatherer "clusterconfig" function "operators_pods_and_events" result "config/pod/openshift-ovn-kubernetes/ovnkube-node-2kmgp.json" because of the error: the record with the same name "config/pod/openshift-ovn-kubernetes/ovnkube-node-2kmgp.json" was already recorded and had the fingerprint "f363328c3f65eff37a73fc36f444c1f353396190b39cf5f5265d5d93b09ec8a8", overwriting with the record having fingerprint "f363328c3f65eff37a73fc36f444c1f353396190b39cf5f5265d5d93b09ec8a8"
W0327 15:22:48.822138 1 gather.go:155] issue recording gatherer "clusterconfig" function "operators_pods_and_events" result "config/pod/openshift-ovn-kubernetes/ovnkube-node-2kmgp.json" because of the warning: warning: the record with the same fingerprint "f363328c3f65eff37a73fc36f444c1f353396190b39cf5f5265d5d93b09ec8a8" was already recorded at path "config/pod/openshift-ovn-kubernetes/ovnkube-node-2kmgp.json", recording another one with a different path "config/pod/openshift-ovn-kubernetes/ovnkube-node-2kmgp.json"
I0327 15:22:48.822183 1 recorder.go:75] Recording config/pod/openshift-ovn-kubernetes/logs/ovnkube-node-2kmgp/ovnkube-controller_previous.log with fingerprint=6177c066c78d3a7f999e34cc71a4328c505dd4729fd6e8c49d40315117fbdfe6
I0327 15:22:48.822257 1 recorder.go:75] Recording config/pod/openshift-ovn-kubernetes/logs/ovnkube-node-2kmgp/ovn-controller_current.log with fingerprint=cda5da5fb88300789dc6185a64f936632d6d1e7ce41d4007bf17aff9a1fa4150
I0327 15:22:48.822280 1 recorder.go:75] Recording config/pod/openshift-ovn-kubernetes/logs/ovnkube-node-2kmgp/ovn-acl-logging_current.log with fingerprint=1bfa5b71cbb1239d7932fdc99e69e841dc2c9f0034ff3dd3c8f1938b0be82e0e
I0327 15:22:48.822312 1 recorder.go:75] Recording config/pod/openshift-ovn-kubernetes/logs/ovnkube-node-2kmgp/kube-rbac-proxy-node_current.log with fingerprint=2f0c725b45fd44b641c9dfc9058964e7c5336ac8328fdbed85505d96784703a6
I0327 15:22:48.822337 1 recorder.go:75] Recording config/pod/openshift-ovn-kubernetes/logs/ovnkube-node-2kmgp/kube-rbac-proxy-ovn-metrics_current.log with fingerprint=824a43caa72414c3239b3e0f63ea81cc68190cc2457d9cafe55782d5a69b1f90
I0327 15:22:48.822365 1 recorder.go:75] Recording config/pod/openshift-ovn-kubernetes/logs/ovnkube-node-2kmgp/northd_current.log with fingerprint=299858f5884cead444ec79c8d71b9b9c2c89a78ff994fb0155389be279025faf
I0327 15:22:48.822381 1 recorder.go:75] Recording config/pod/openshift-ovn-kubernetes/logs/ovnkube-node-2kmgp/nbdb_current.log with fingerprint=0a7a17ab89bc6c59722ddc78c3f33c07b92366aad4d9aabbbb26bf568db1d5b1
I0327 15:22:48.822391 1 recorder.go:75] Recording config/pod/openshift-ovn-kubernetes/logs/ovnkube-node-2kmgp/sbdb_current.log with fingerprint=e2d6767a8277931c390314d9bbf1fde06e210da9c0f13de3a344ce8caf01790f
I0327 15:22:48.822450 1 recorder.go:75] Recording config/pod/openshift-ovn-kubernetes/logs/ovnkube-node-2kmgp/ovnkube-controller_current.log with fingerprint=3c071c1691d59c6988fca07025446ddb71f5d763edbc98f45dffb5a0ec3ec1ca
I0327 15:22:48.822505 1 recorder.go:75] Recording config/pod/openshift-ovn-kubernetes/logs/ovnkube-node-mphjp/ovn-controller_current.log with fingerprint=4d93f820b13f3bdd6941a345d6571928d1fe835d57b22465a9dd2545d28f3610
I0327 15:22:48.822527 1 recorder.go:75] Recording config/pod/openshift-ovn-kubernetes/logs/ovnkube-node-mphjp/ovn-acl-logging_current.log with fingerprint=3d7529c84d1aa98ebed35f3e515126c793fbbfaa351a75a107eed1d9aa44b40e
I0327 15:22:48.822548 1 recorder.go:75] Recording config/pod/openshift-ovn-kubernetes/logs/ovnkube-node-mphjp/kube-rbac-proxy-node_current.log with fingerprint=56ee14e0ec8b017d7ecc2d0abe8273c060bf21429aed5e5f7a081545e0c61bdb
I0327 15:22:48.822571 1 recorder.go:75] Recording config/pod/openshift-ovn-kubernetes/logs/ovnkube-node-mphjp/kube-rbac-proxy-ovn-metrics_current.log with fingerprint=11d31f6ea903a4fe1b2ab719316d7da32350cd2ce1336bd9e5e5b8017a86743a
I0327 15:22:48.822591 1 recorder.go:75] Recording config/pod/openshift-ovn-kubernetes/logs/ovnkube-node-mphjp/northd_current.log with fingerprint=f0c97ba1e212f21672b114c569b7e1b492b0b9e8e76ab4ad385233a2bc7dbb34
I0327 15:22:48.822602 1 recorder.go:75] Recording config/pod/openshift-ovn-kubernetes/logs/ovnkube-node-mphjp/nbdb_current.log with fingerprint=26841bb5228b4de8261de12a2dfa7976c40bae0666722c985fef48b9d999e477
I0327 15:22:48.822612 1 recorder.go:75] Recording config/pod/openshift-ovn-kubernetes/logs/ovnkube-node-mphjp/sbdb_current.log with fingerprint=3c7e71c41a966de00e901dba6bd225b3a9d34fe7b6b80ee6970c59e1a921e18a
I0327 15:22:48.822717 1 recorder.go:75] Recording config/pod/openshift-ovn-kubernetes/logs/ovnkube-node-mphjp/ovnkube-controller_current.log with fingerprint=025ee5128abe362d947899daeb76779c96cb2b83ef4e862d549711a895519bd7
I0327 15:22:48.822972 1 recorder.go:75] Recording config/pod/openshift-ovn-kubernetes/ovnkube-node-zpdq2 with fingerprint=1f006cd18621a659e9fa52ffe71b1c642f96244ae87e62a057b7f6b25a1a2f07
E0327 15:22:48.822989 1 gather.go:161] error recording gatherer "clusterconfig" function "operators_pods_and_events" result "config/pod/openshift-ovn-kubernetes/ovnkube-node-zpdq2.json" because of the error: the record with the same name "config/pod/openshift-ovn-kubernetes/ovnkube-node-zpdq2.json" was already recorded and had the fingerprint "c40a37ec6dd77beecd3b3229fddb55ee8763e938ee22f4080854f6341d35c763", overwriting with the record having fingerprint "1f006cd18621a659e9fa52ffe71b1c642f96244ae87e62a057b7f6b25a1a2f07"
I0327 15:22:48.823043 1 recorder.go:75] Recording config/pod/openshift-ovn-kubernetes/logs/ovnkube-node-zpdq2/ovnkube-controller_previous.log with fingerprint=96fca686e7469e0e14ca291859bbea4c46320e9400e0baa36c39018644fcd407
I0327 15:22:48.823090 1 recorder.go:75] Recording config/pod/openshift-ovn-kubernetes/logs/ovnkube-node-zpdq2/ovn-controller_current.log with fingerprint=b0583b73162b56bfa3dd019b77fe74f6831e06a26961321094839b436f80133b
I0327 15:22:48.823112 1 recorder.go:75] Recording config/pod/openshift-ovn-kubernetes/logs/ovnkube-node-zpdq2/ovn-acl-logging_current.log with fingerprint=932c7bf3ef039cff29abb2e28cfdb2e691ba6dad00903538ad8547efbd1bde13
I0327 15:22:48.823135 1 recorder.go:75] Recording config/pod/openshift-ovn-kubernetes/logs/ovnkube-node-zpdq2/kube-rbac-proxy-node_current.log with fingerprint=51080cf53624225452e8b6351ea8b043f7b28ef14a72a5e443fff89f22202555
I0327 15:22:48.823157 1 recorder.go:75] Recording config/pod/openshift-ovn-kubernetes/logs/ovnkube-node-zpdq2/kube-rbac-proxy-ovn-metrics_current.log with fingerprint=0592ef9da41dbbb74fc75376616598e1d9fe6c33765ff649c44de80969fa160c
I0327 15:22:48.823178 1 recorder.go:75] Recording config/pod/openshift-ovn-kubernetes/logs/ovnkube-node-zpdq2/northd_current.log with fingerprint=626b5df8300bd2c3e244613d1c8f69e5bc3d0f098a81fb2ba965007c7755c740
I0327 15:22:48.823191 1 recorder.go:75] Recording config/pod/openshift-ovn-kubernetes/logs/ovnkube-node-zpdq2/nbdb_current.log with fingerprint=3ae187a6208e3890934ea1c6e1109ca96d9db48afc1802ea5de163ec518f8a5a
I0327 15:22:48.823202 1 recorder.go:75] Recording config/pod/openshift-ovn-kubernetes/logs/ovnkube-node-zpdq2/sbdb_current.log with fingerprint=012c549857a090f0501cd68f24e20e0af5b05354ce9ab7fc1e24534d6a3b9314
I0327 15:22:48.823278 1 recorder.go:75] Recording config/pod/openshift-ovn-kubernetes/logs/ovnkube-node-zpdq2/ovnkube-controller_current.log with fingerprint=9c779435affaf1f0fa0e4ffad77d596902cd1a28f7ee8cb38d1059de31332579
I0327 15:22:48.823297 1 recorder.go:75] Recording config/pod/openshift-network-diagnostics/logs/network-check-source-6b8cd5b79b-xrszx/check-endpoints_current.log with fingerprint=ba8dacdb7fd87f2994f70b83e1e40c8ccc0442f6173a7a9b40dbe43a69636d8d
I0327 15:22:48.823366 1 recorder.go:75] Recording config/pod/openshift-network-diagnostics/network-check-target-qztcs with fingerprint=6886390ef00039954836d68125a85a1e67122cc96ea214d163d2083696f1e4b6
I0327 15:22:48.823420 1 recorder.go:75] Recording config/pod/openshift-network-diagnostics/network-check-target-wtpjg with fingerprint=ac900944aa462fb7b643a43d43195df2553695641f116140adb3939ecef6a3d1
I0327 15:22:48.823470 1 recorder.go:75] Recording config/pod/openshift-network-diagnostics/network-check-target-zrtfk with fingerprint=9326d6345e2fd1bbe99a7d6e7da65c6ad1f9abdecaac2ee60f13161a3c9bfa18
I0327 15:22:48.823530 1 recorder.go:75] Recording config/pod/openshift-network-console/networking-console-plugin-6ddbfdf749-9chfw with fingerprint=90bf41bb6142ddb9624d64357ca1d85127988e5fe682156234d39b754db2bc00
I0327 15:22:48.823585 1 recorder.go:75] Recording config/pod/openshift-network-console/networking-console-plugin-6ddbfdf749-b4r7z with fingerprint=0034297832333ab43357864fd8f996a1fd9c00f5188543718927ede21aa274bf
I0327 15:22:48.823591 1 recorder.go:75] Recording config/pod/openshift-network-operator/logs/iptables-alerter-4mh9z/iptables-alerter_current.log with fingerprint=57d27781ab4b850c625622697af99772f3d827edb91570089c9e6e4efc86fb02
I0327 15:22:48.823596 1 recorder.go:75] Recording config/pod/openshift-network-operator/logs/iptables-alerter-jxtfb/iptables-alerter_current.log with fingerprint=cb3b34a7b2ab1bf81c4805b153be4fbc62d7c40277dac1e596cb58b15a6bd48e
I0327 15:22:48.823600 1 recorder.go:75] Recording config/pod/openshift-network-operator/logs/iptables-alerter-wnlc4/iptables-alerter_current.log with fingerprint=5017af120f41149b86dfb6f20fdb1b32cf4ae8b89f3c38d8b105e030954a2e76
I0327 15:22:48.823606 1 gather.go:177] gatherer "clusterconfig" function "operators_pods_and_events" took 23.048846441s to process 85 records
E0327 15:22:48.823677 1 periodic.go:247] "Unhandled Error" err="clusterconfig failed after 23.053s with: function \"machine_healthchecks\" failed with an error, function \"pod_network_connectivity_checks\" failed with an error, function \"support_secret\" failed with an error, function \"machines\" failed with an error, function \"config_maps\" failed with an error, function \"ingress_certificates\" failed with an error, function \"dvo_metrics\" failed with an error, unable to record function \"operators_pods_and_events\" record \"config/pod/openshift-multus/multus-kxfc4.json\", unable to record function \"operators_pods_and_events\" record \"config/pod/openshift-multus/multus-n49jb.json\", unable to record function \"operators_pods_and_events\" record \"config/pod/openshift-multus/multus-s5dwn.json\", unable to record function \"operators_pods_and_events\" record \"config/pod/openshift-ovn-kubernetes/ovnkube-node-2kmgp.json\", unable to record function \"operators_pods_and_events\" record \"config/pod/openshift-ovn-kubernetes/ovnkube-node-zpdq2.json\""
I0327 15:22:48.824793 1 controllerstatus.go:89] name=periodic-clusterconfig healthy=false reason=PeriodicGatherFailed message=Source clusterconfig could not be retrieved: function "machine_healthchecks" failed with an error, function "pod_network_connectivity_checks" failed with an error, function "support_secret" failed with an error, function "machines" failed with an error, function "config_maps" failed with an error, function "ingress_certificates" failed with an error, function "dvo_metrics" failed with an error, unable to record function "operators_pods_and_events" record "config/pod/openshift-multus/multus-kxfc4.json", unable to record function "operators_pods_and_events" record "config/pod/openshift-multus/multus-n49jb.json", unable to record function "operators_pods_and_events" record "config/pod/openshift-multus/multus-s5dwn.json", unable to record function "operators_pods_and_events" record "config/pod/openshift-ovn-kubernetes/ovnkube-node-2kmgp.json", unable to record function "operators_pods_and_events" record "config/pod/openshift-ovn-kubernetes/ovnkube-node-zpdq2.json"
I0327 15:22:48.824810 1 periodic.go:209] Running workloads gatherer
I0327 15:22:48.824826 1 tasks_processing.go:45] number of workers: 2
I0327 15:22:48.824832 1 tasks_processing.go:69] worker 1 listening for tasks.
I0327 15:22:48.824837 1 tasks_processing.go:71] worker 1 working on workload_info task.
I0327 15:22:48.824843 1 tasks_processing.go:69] worker 0 listening for tasks.
I0327 15:22:48.824862 1 tasks_processing.go:71] worker 0 working on helmchart_info task.
I0327 15:22:48.851391 1 gather_workloads_info.go:278] Loaded pods in 0s, will wait 21s for image data
I0327 15:22:48.859433 1 tasks_processing.go:74] worker 0 stopped.
I0327 15:22:48.859453 1 gather.go:177] gatherer "workloads" function "helmchart_info" took 34.547578ms to process 0 records
I0327 15:22:48.862523 1 gather_workloads_info.go:387] No image sha256:80748ba08e1c264a8c105e7f607eff386a66378e024443a844993ee9292858c1 (12ms)
I0327 15:22:48.872733 1 gather_workloads_info.go:387] No image sha256:2904a78e2eb73fd6a9bb94c105c2a056831fb4113fbb7b0607c50adc9d879c9b (10ms)
I0327 15:22:48.885254 1 gather_workloads_info.go:387] No image sha256:765f0d23b637f685f98a31bd47c131b03cf72a40761a3f9a9d6320faa3c33733 (12ms)
I0327 15:22:48.897051 1 gather_workloads_info.go:387] No image sha256:4556896f77307821531ef91b7b7faccb82b824ea695693b2989f597f0deca038 (12ms)
I0327 15:22:48.907200 1 gather_workloads_info.go:387] No image sha256:943018739e3db1763c3184b460dbc409e058abbac76d57b9927faad317be85e4 (10ms)
I0327 15:22:48.917672 1 gather_workloads_info.go:387] No image sha256:1a2532940843248c57d52141185dd71fbc393ab28b65d48f682038632c1dbbad (10ms)
I0327 15:22:48.927411 1 gather_workloads_info.go:387] No image sha256:ca1344cb64140188b7cae7bbc51fb751566c0b0c97d5e39b5850e628032c4a5e (10ms)
I0327 15:22:48.941309 1 gather_workloads_info.go:387] No image sha256:e84cb128d930bd1ab867cc89b7b7bf2b2c0e41105ab93b5381069945b3ee9c57 (14ms)
I0327 15:22:48.951133 1 gather_workloads_info.go:387] No image sha256:a0105d1eb62cf6ac9e5e2ef28d3e89bf6dc514bc594fc7090fe5a5ee18a09c87 (10ms)
I0327 15:22:48.960644 1 gather_workloads_info.go:387] No image sha256:2e57e192c3c1240fd935dcd55c8fde5e70e78bf81d6176c96edf21fafe59f8ba (9ms)
I0327 15:22:48.971006 1 gather_workloads_info.go:387] No image sha256:c940ea87e7d133d75ba0002ef00c0806825eed3db8094cdb260d1bac18127733 (10ms)
I0327 15:22:49.061398 1 gather_workloads_info.go:387] No image sha256:91828234f107c068c8a4966d08370ae7b73e637651dbc6d92c18c4553402c22c (90ms)
I0327 15:22:49.163272 1 gather_workloads_info.go:387] No image sha256:653c666f842c13e0baae2e89a9b1efe0e2ef56f621ffb5b32005115d2a26ab8c (102ms)
I0327 15:22:49.262285 1 gather_workloads_info.go:387] No image sha256:b3909bf664c77097f75b3768830863d642eed3815dab2bfb4415c771ca2d5007 (99ms)
I0327 15:22:49.362332 1 gather_workloads_info.go:387] No image sha256:03cf4cd7ef1518610c6c7b3ad27d1622d82e98e3dc6e3f8e5d0fceb5c8d3786e (100ms)
I0327 15:22:49.462384 1 gather_workloads_info.go:387] No image sha256:2598489729a4b258e4ecda4a06f6875133f2a10ced5c5241f8a57a8a05418e36 (100ms)
I0327 15:22:49.564134 1 gather_workloads_info.go:387] No image sha256:521712486e2c6e3c020dad6a1cb340db8e55665b69f7c208fab9cd9e965fd588 (102ms)
I0327 15:22:49.662091 1 gather_workloads_info.go:387] No image sha256:5a95c19d82767e0235b4edb4a0536482c816904897aae1dc3eb255cb52b87a9f (98ms)
I0327 15:22:49.762343 1 gather_workloads_info.go:387] No image sha256:7b31223098f08328f5ddea8e5b871dbbd5f5a61ec550e8956f66793c0c6031a9 (100ms)
I0327 15:22:49.867483 1 gather_workloads_info.go:387] No image sha256:289816958633a763a72dbc44e1dad40466223164e7e253039514f0d974ea5d21 (105ms)
I0327 15:22:49.961616 1 gather_workloads_info.go:387] No image sha256:2e564f336c77116053f34d4201d364d8da04e789cfffa0ea422574c95f2d6404 (94ms)
I0327 15:22:50.062081 1 gather_workloads_info.go:387] No image sha256:a498046d64605bcccee2440aa4f04a4602baaae263cf01d977ec5208e876b1fd (100ms)
I0327 15:22:50.161976 1 gather_workloads_info.go:387] No image sha256:c15ca0c0ad60fe8757c2d5d1723fcdd7a1ed6c0251a90d22a7e6cae6811d01aa (100ms)
I0327 15:22:50.263283 1 gather_workloads_info.go:387] No image sha256:0a99240166165eb5718e7516a43282fe32df9c7c5e809b31b58abe44e42ff94d (101ms)
I0327 15:22:50.364990 1 gather_workloads_info.go:387] No image sha256:36b9e89c3cfcf1ab9ae500486e38afb6862cba48cb0b4d84a09508ab8f3d299f (102ms)
I0327 15:22:50.471114 1 gather_workloads_info.go:387] No image sha256:56a85660a445eced5c79a595a0eccf590087c5672d50f49d4c25ad52f9a44f04 (106ms)
I0327 15:22:50.569192 1 gather_workloads_info.go:387] No image sha256:a258c226562adb14e3a163a1940938526ee6a0928982a7667d85d9a7334ce639 (98ms)
I0327 15:22:50.665510 1 gather_workloads_info.go:387] No image sha256:7adc1eab05d6724c76ba751f6df816b08d6e70b78dee9eb94fa6fd9690542c98 (96ms)
I0327 15:22:50.761960 1 gather_workloads_info.go:387] No image sha256:695cf2f0cc07683c2a3ce1eaf3e56fe18abc6e2bac716f7d9843f5d173b9df52 (96ms)
I0327 15:22:50.862169 1 gather_workloads_info.go:387] No image sha256:47154813651033d59751fb655a384dbffb64dd26f10bd7f3be0c3128d0486356 (100ms)
I0327 15:22:50.961948 1 gather_workloads_info.go:387] No image sha256:a56211d075aa43cbb491f669a5b2e46ee023dc95b7d51dbac28f463948c5ad61 (100ms)
I0327 15:22:51.062280 1 gather_workloads_info.go:387] No image sha256:5f0b67cfbbc381243fb91ccc17345b56d05f4d717c667e8c644e5bf05633ba71 (100ms)
I0327 15:22:51.062317 1 tasks_processing.go:74] worker 1 stopped.
E0327 15:22:51.062327 1 gather.go:140] gatherer "workloads" function "workload_info" failed with the error: no running pods found for the insights-runtime-extractor statefulset
I0327 15:22:51.062633 1 recorder.go:75] Recording config/workload_info with fingerprint=c1f362e725e213909c97bea3ab8a147d6417f542165038b98ea0838537fd311c
I0327 15:22:51.062648 1 gather.go:177] gatherer "workloads" function "workload_info" took 2.237470332s to process 1 records
E0327 15:22:51.062672 1 periodic.go:247] "Unhandled Error" err="workloads failed after 2.237s with: function \"workload_info\" failed with an error"
I0327 15:22:51.063781 1 controllerstatus.go:89] name=periodic-workloads healthy=false reason=PeriodicGatherFailed message=Source workloads could not be retrieved: function "workload_info" failed with an error
I0327 15:22:51.063796 1 periodic.go:209] Running conditional gatherer
I0327 15:22:51.070329 1 requests.go:294] Making HTTP GET request at: https://console.redhat.com/api/gathering/v2/4.20.8/gathering_rules
I0327 15:22:51.076881 1 conditional_gatherer.go:107] Get "https://console.redhat.com/api/gathering/v2/4.20.8/gathering_rules": dial tcp: lookup console.redhat.com on 172.30.0.10:53: read udp 10.130.0.15:38651->172.30.0.10:53: read: connection refused
E0327 15:22:51.077133 1 conditional_gatherer.go:322] unable to update alerts cache: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory
I0327 15:22:51.077201 1 conditional_gatherer.go:384] updating version cache for conditional gatherer
I0327 15:22:51.085023 1 conditional_gatherer.go:392] cluster version is '4.20.8'
E0327 15:22:51.085042 1 conditional_gatherer.go:209] error checking conditions for a gathering rule: alerts cache is missing
E0327 15:22:51.085049 1 conditional_gatherer.go:209] error checking conditions for a gathering rule: alerts cache is missing
E0327 15:22:51.085052 1 conditional_gatherer.go:209] error checking conditions for a gathering rule: alerts cache is missing
E0327 15:22:51.085055 1 conditional_gatherer.go:209] error checking conditions for a gathering rule: alerts cache is missing
E0327 15:22:51.085058 1 conditional_gatherer.go:209] error checking conditions for a gathering rule: alerts cache is missing
E0327 15:22:51.085062 1 conditional_gatherer.go:209] error checking conditions for a gathering rule: alerts cache is missing
E0327 15:22:51.085065 1 conditional_gatherer.go:209] error checking conditions for a gathering rule: alerts cache is missing
E0327 15:22:51.085068 1 conditional_gatherer.go:209] error checking conditions for a gathering rule: alerts cache is missing
E0327 15:22:51.085070 1 conditional_gatherer.go:209] error checking conditions for a gathering rule: alerts cache is missing
I0327 15:22:51.085088 1 tasks_processing.go:45] number of workers: 3
I0327 15:22:51.085109 1 tasks_processing.go:69] worker 2 listening for tasks.
I0327 15:22:51.085116 1 tasks_processing.go:71] worker 2 working on conditional_gatherer_rules task.
I0327 15:22:51.085118 1 tasks_processing.go:69] worker 0 listening for tasks.
I0327 15:22:51.085130 1 tasks_processing.go:71] worker 0 working on remote_configuration task.
I0327 15:22:51.085130 1 tasks_processing.go:69] worker 1 listening for tasks.
I0327 15:22:51.085140 1 tasks_processing.go:74] worker 1 stopped.
I0327 15:22:51.085148 1 tasks_processing.go:71] worker 2 working on rapid_container_logs task.
I0327 15:22:51.085223 1 recorder.go:75] Recording insights-operator/conditional-gatherer-rules with fingerprint=7034af97e7e41c22e4b775abdd4b9066c8ebb19da33eb7f69f39bfd2eb5f6406
I0327 15:22:51.085237 1 gather.go:177] gatherer "conditional" function "conditional_gatherer_rules" took 877ns to process 1 records
I0327 15:22:51.085272 1 recorder.go:75] Recording insights-operator/remote-configuration with fingerprint=0394430c431eec4d48bb1811a90918e95161d2282c59af26f2473613cc0959db
I0327 15:22:51.085281 1 gather.go:177] gatherer "conditional" function "remote_configuration" took 1.299µs to process 1 records
I0327 15:22:51.085288 1 tasks_processing.go:74] worker 0 stopped.
I0327 15:22:51.085484 1 tasks_processing.go:74] worker 2 stopped.
I0327 15:22:51.085498 1 gather.go:177] gatherer "conditional" function "rapid_container_logs" took 319.595µs to process 0 records
I0327 15:22:51.085522 1 controllerstatus.go:89] name=periodic-conditional healthy=false reason=NotAvailable message=Get "https://console.redhat.com/api/gathering/v2/4.20.8/gathering_rules": dial tcp: lookup console.redhat.com on 172.30.0.10:53: read udp 10.130.0.15:38651->172.30.0.10:53: read: connection refused
I0327 15:22:51.085540 1 recorder.go:75] Recording insights-operator/remote-configuration.json with fingerprint=359de9c990c741675cec72fda96b5c3682221efdb4799f5eaa6e9805bcd3b5c1
I0327 15:22:51.112393 1 recorder.go:75] Recording insights-operator/gathers with fingerprint=bbeb1004a63a04890205332f79819a15e6015caf7c584dec7a5fa99fb0b20290
I0327 15:22:51.112553 1 diskrecorder.go:70] Writing 179 records to /var/lib/insights-operator/insights-2026-03-27-152251.tar.gz
I0327 15:22:51.131057 1 diskrecorder.go:51] Wrote 179 records to disk in 18ms
I0327 15:22:51.131109 1 periodic.go:278] Gathering cluster info every 2h0m0s
I0327 15:22:51.131139 1 periodic.go:279] Configuration is dataReporting: interval: 2h0m0s, uploadEndpoint: https://console.redhat.com/api/ingress/v1/upload, storagePath: /var/lib/insights-operator,
downloadEndpoint: https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/reports, conditionalGathererEndpoint: https://console.redhat.com/api/gathering/v2/%s/gathering_rules, obfuscation: []
sca: disabled: false, endpoint: https://api.openshift.com/api/accounts_mgmt/v1/entitlement_certificates, interval: 8h0m0s
alerting: disabled: false
clusterTransfer: endpoint: https://api.openshift.com/api/accounts_mgmt/v1/cluster_transfers/, interval: 12h0m0s
proxy: httpProxy: , httpsProxy: , noProxy:
I0327 15:22:56.212003 1 configmapobserver.go:84] configmaps "insights-config" not found
I0327 15:23:29.859847 1 observer_polling.go:111] Observed file "/var/run/secrets/serving-cert/tls.crt" has been created (hash="c4c15b8a9a1313394cbc92bf5c3ecc2815287303353c93b91af1873ccbbf670b")
W0327 15:23:29.859881 1 builder.go:160] Restart triggered because of file /var/run/secrets/serving-cert/tls.crt was created
I0327 15:23:29.859927 1 observer_polling.go:111] Observed file "/var/run/secrets/serving-cert/tls.key" has been created (hash="4297af86703570a0e583f0f08c8ff1f2b10a6d085db5361776de3db6313d0d16")
I0327 15:23:29.859962 1 genericapiserver.go:548] "[graceful-termination] shutdown event" name="ShutdownInitiated"
I0327 15:23:29.859987 1 periodic.go:170] Shutting down
I0327 15:23:29.860004 1 observer_polling.go:111] Observed file "/var/run/configmaps/service-ca-bundle/service-ca.crt" has been created (hash="e0986abee50628759f980d365b8b817f0598bfba9dc0fb861da08036ff1039c8")
I0327 15:23:29.860006 1 base_controller.go:181] Shutting down LoggingSyncer ...
I0327 15:23:29.860017 1 base_controller.go:123] Shutting down worker of LoggingSyncer controller ...
I0327 15:23:29.860025 1 base_controller.go:113] All LoggingSyncer workers have been terminated
I0327 15:23:29.860032 1 base_controller.go:123] Shutting down worker of ConfigController controller ...
I0327 15:23:29.860041 1 genericapiserver.go:693] "[graceful-termination] pre-shutdown hooks completed" name="PreShutdownHooksStopped"
I0327 15:23:29.859972 1 base_controller.go:181] Shutting down ConfigController ...