W0420 07:44:34.379012 1 cmd.go:257] Using insecure, self-signed certificates
I0420 07:44:34.562005 1 start.go:138] Unable to read service ca bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory
I0420 07:44:34.562309 1 observer_polling.go:159] Starting file observer
I0420 07:44:34.990070 1 operator.go:60] Starting insights-operator v0.0.0-master+$Format:%H$
I0420 07:44:34.990327 1 legacy_config.go:327] Current config: {"report":false,"storagePath":"/var/lib/insights-operator","interval":"2h","endpoint":"https://console.redhat.com/api/ingress/v1/upload","conditionalGathererEndpoint":"https://console.redhat.com/api/gathering/v2/%s/gathering_rules","pull_report":{"endpoint":"https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/reports","delay":"60s","timeout":"3000s","min_retry":"30s"},"impersonate":"system:serviceaccount:openshift-insights:gather","enableGlobalObfuscation":false,"ocm":{"scaEndpoint":"https://api.openshift.com/api/accounts_mgmt/v1/entitlement_certificates","scaInterval":"8h","scaDisabled":false,"clusterTransferEndpoint":"https://api.openshift.com/api/accounts_mgmt/v1/cluster_transfers/","clusterTransferInterval":"12h"},"disableInsightsAlerts":false,"processingStatusEndpoint":"https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/request/%s/status","reportEndpointTechPreview":"https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/request/%s/report"}
I0420 07:44:34.991184 1 secure_serving.go:57] Forcing use of http/1.1 only
W0420 07:44:34.991230 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.
W0420 07:44:34.991236 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.
W0420 07:44:34.991251 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.
W0420 07:44:34.991256 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.
W0420 07:44:34.991261 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.
W0420 07:44:34.991265 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.
I0420 07:44:34.992519 1 simple_featuregate_reader.go:171] Starting feature-gate-detector
I0420 07:44:34.995750 1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
I0420 07:44:34.995764 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController
I0420 07:44:34.995777 1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I0420 07:44:34.995790 1 operator.go:125] FeatureGates initialized: knownFeatureGates=[AdditionalRoutingCapabilities AdminNetworkPolicy AlibabaPlatform AzureWorkloadIdentity BuildCSIVolumes CPMSMachineNamePrefix ConsolePluginContentSecurityPolicy ExternalOIDC ExternalOIDCWithUIDAndExtraClaimMappings GatewayAPI GatewayAPIController HighlyAvailableArbiter ImageVolume IngressControllerLBSubnetsAWS KMSv1 MachineConfigNodes ManagedBootImages ManagedBootImagesAWS MetricsCollectionProfiles NetworkDiagnosticsConfig NetworkLiveMigration NetworkSegmentation PinnedImages ProcMountType RouteAdvertisements RouteExternalCertificate ServiceAccountTokenNodeBinding SetEIPForNLBIngressController SigstoreImageVerification StoragePerformantSecurityPolicy UpgradeStatus UserNamespacesPodSecurityStandards UserNamespacesSupport VSphereMultiDisk VSphereMultiNetworks AWSClusterHostedDNS AWSClusterHostedDNSInstall AWSDedicatedHosts AWSServiceLBNetworkSecurityGroup AutomatedEtcdBackup AzureClusterHostedDNSInstall AzureDedicatedHosts AzureMultiDisk BootImageSkewEnforcement BootcNodeManagement ClusterAPIInstall ClusterAPIInstallIBMCloud ClusterMonitoringConfig ClusterVersionOperatorConfiguration DNSNameResolver DualReplica DyanmicServiceEndpointIBMCloud DynamicResourceAllocation EtcdBackendQuota EventedPLEG Example Example2 ExternalSnapshotMetadata GCPClusterHostedDNS GCPClusterHostedDNSInstall GCPCustomAPIEndpoints GCPCustomAPIEndpointsInstall ImageModeStatusReporting ImageStreamImportMode IngressControllerDynamicConfigurationManager InsightsConfig InsightsConfigAPI InsightsOnDemandDataGather IrreconcilableMachineConfig KMSEncryptionProvider MachineAPIMigration MachineAPIOperatorDisableMachineHealthCheckController ManagedBootImagesAzure ManagedBootImagesvSphere MaxUnavailableStatefulSet MinimumKubeletVersion MixedCPUsAllocation MultiArchInstallAzure MultiDiskSetup MutatingAdmissionPolicy NewOLM NewOLMCatalogdAPIV1Metas NewOLMOwnSingleNamespace NewOLMPreflightPermissionChecks NewOLMWebhookProviderOpenshiftServiceCA NoRegistryClusterOperations NodeSwap NutanixMultiSubnets OVNObservability OpenShiftPodSecurityAdmission PreconfiguredUDNAddresses SELinuxMount ShortCertRotation SignatureStores SigstoreImageVerificationPKI TranslateStreamCloseWebsocketRequests VSphereConfigurableMaxAllowedBlockVolumesPerNode VSphereHostVMGroupZonal VSphereMixedNodeEnv VolumeAttributesClass VolumeGroupSnapshot]
I0420 07:44:34.995830 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0420 07:44:34.995779 1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
I0420 07:44:34.995837 1 event.go:377] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-insights", Name:"insights-operator", UID:"cd42aef8-8d36-4af6-9ac2-14b489a9cbdf", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'FeatureGatesInitialized' FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdditionalRoutingCapabilities", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BuildCSIVolumes", "CPMSMachineNamePrefix", "ConsolePluginContentSecurityPolicy", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GatewayAPI", "GatewayAPIController", "HighlyAvailableArbiter", "ImageVolume", "IngressControllerLBSubnetsAWS", "KMSv1", "MachineConfigNodes", "ManagedBootImages", "ManagedBootImagesAWS", "MetricsCollectionProfiles", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "PinnedImages", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SetEIPForNLBIngressController", "SigstoreImageVerification", "StoragePerformantSecurityPolicy", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiDisk", "VSphereMultiNetworks"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AWSClusterHostedDNSInstall", "AWSDedicatedHosts", "AWSServiceLBNetworkSecurityGroup", "AutomatedEtcdBackup", "AzureClusterHostedDNSInstall", "AzureDedicatedHosts", "AzureMultiDisk", "BootImageSkewEnforcement", "BootcNodeManagement", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ClusterVersionOperatorConfiguration", "DNSNameResolver", "DualReplica", "DyanmicServiceEndpointIBMCloud", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "Example2", "ExternalSnapshotMetadata", "GCPClusterHostedDNS", "GCPClusterHostedDNSInstall", "GCPCustomAPIEndpoints", "GCPCustomAPIEndpointsInstall", "ImageModeStatusReporting", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "IrreconcilableMachineConfig", "KMSEncryptionProvider", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "ManagedBootImagesAzure", "ManagedBootImagesvSphere", "MaxUnavailableStatefulSet", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "MultiDiskSetup", "MutatingAdmissionPolicy", "NewOLM", "NewOLMCatalogdAPIV1Metas", "NewOLMOwnSingleNamespace", "NewOLMPreflightPermissionChecks", "NewOLMWebhookProviderOpenshiftServiceCA", "NoRegistryClusterOperations", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PreconfiguredUDNAddresses", "SELinuxMount", "ShortCertRotation", "SignatureStores", "SigstoreImageVerificationPKI", "TranslateStreamCloseWebsocketRequests", "VSphereConfigurableMaxAllowedBlockVolumesPerNode", "VSphereHostVMGroupZonal", "VSphereMixedNodeEnv", "VolumeAttributesClass", "VolumeGroupSnapshot"}}
I0420 07:44:34.995850 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I0420 07:44:34.995947 1 dynamic_serving_content.go:135] "Starting controller" name="serving-cert::/tmp/serving-cert-4283435476/tls.crt::/tmp/serving-cert-4283435476/tls.key"
I0420 07:44:34.996219 1 secure_serving.go:213] Serving securely on [::]:8443
I0420 07:44:34.996263 1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
W0420 07:44:35.001496 1 configmapobserver.go:64] Cannot get the configuration config map: configmaps "insights-config" not found. Default configuration is used.
I0420 07:44:35.001518 1 secretconfigobserver.go:216] Legacy configuration set: enabled=false endpoint=https://console.redhat.com/api/ingress/v1/upload conditional_gatherer_endpoint=https://console.redhat.com/api/gathering/v2/%s/gathering_rules interval=2h0m0s token=false reportEndpoint=https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/reports initialPollingDelay=1m0s minRetryTime=30s pollingTimeout=50m0s processingStatusEndpoint=https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/request/%s/status
I0420 07:44:35.001649 1 base_controller.go:76] Waiting for caches to sync for ConfigController
I0420 07:44:35.005393 1 secretconfigobserver.go:249] Found cloud.openshift.com token
I0420 07:44:35.005410 1 secretconfigobserver.go:204] Legacy configuration updated: enabled=true endpoint=https://console.redhat.com/api/ingress/v1/upload conditional_gatherer_endpoint=https://console.redhat.com/api/gathering/v2/%s/gathering_rules interval=2h0m0s token=true reportEndpoint=https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/reports initialPollingDelay=1m0s minRetryTime=30s pollingTimeout=50m0s processingStatusEndpoint=https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/request/%s/status
I0420 07:44:35.008658 1 secretconfigobserver.go:119] support secret does not exist
I0420 07:44:35.012042 1 secretconfigobserver.go:249] Found cloud.openshift.com token
I0420 07:44:35.015128 1 secretconfigobserver.go:119] support secret does not exist
I0420 07:44:35.018932 1 recorder.go:161] Pruning old reports every 5h40m27s, max age is 288h0m0s
I0420 07:44:35.023561 1 controllerstatus.go:80] name=insightsuploader healthy=true reason= message=
I0420 07:44:35.023584 1 periodic.go:209] Running clusterconfig gatherer
I0420 07:44:35.023587 1 controllerstatus.go:80] name=insightsreport healthy=true reason= message=
I0420 07:44:35.023600 1 insightsreport.go:296] Starting report retriever
I0420 07:44:35.023607 1 insightsreport.go:298] Insights analysis reports will be downloaded from the https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/reports endpoint with a delay of 1m0s
I0420 07:44:35.023646 1 tasks_processing.go:45] number of workers: 64
I0420 07:44:35.023674 1 tasks_processing.go:69] worker 8 listening for tasks.
I0420 07:44:35.023584 1 insightsuploader.go:86] Reporting status periodically to https://console.redhat.com/api/ingress/v1/upload every 2h0m0s, starting in 1m30s
I0420 07:44:35.023685 1 tasks_processing.go:69] worker 23 listening for tasks.
I0420 07:44:35.023688 1 tasks_processing.go:71] worker 8 working on container_runtime_configs task.
I0420 07:44:35.023691 1 tasks_processing.go:71] worker 23 working on pod_network_connectivity_checks task.
I0420 07:44:35.023699 1 tasks_processing.go:69] worker 4 listening for tasks.
I0420 07:44:35.023699 1 tasks_processing.go:69] worker 3 listening for tasks.
I0420 07:44:35.023707 1 tasks_processing.go:69] worker 0 listening for tasks.
I0420 07:44:35.023715 1 tasks_processing.go:69] worker 7 listening for tasks.
I0420 07:44:35.023718 1 tasks_processing.go:69] worker 1 listening for tasks.
I0420 07:44:35.023724 1 tasks_processing.go:69] worker 43 listening for tasks.
I0420 07:44:35.023721 1 tasks_processing.go:69] worker 6 listening for tasks.
I0420 07:44:35.023730 1 tasks_processing.go:69] worker 2 listening for tasks.
I0420 07:44:35.023732 1 tasks_processing.go:69] worker 24 listening for tasks.
I0420 07:44:35.023739 1 tasks_processing.go:69] worker 25 listening for tasks.
I0420 07:44:35.023736 1 tasks_processing.go:69] worker 5 listening for tasks.
I0420 07:44:35.023747 1 tasks_processing.go:69] worker 39 listening for tasks.
I0420 07:44:35.023742 1 tasks_processing.go:69] worker 34 listening for tasks.
I0420 07:44:35.023751 1 tasks_processing.go:69] worker 9 listening for tasks.
I0420 07:44:35.023750 1 tasks_processing.go:69] worker 53 listening for tasks.
I0420 07:44:35.023761 1 tasks_processing.go:69] worker 27 listening for tasks.
I0420 07:44:35.023762 1 tasks_processing.go:69] worker 42 listening for tasks.
I0420 07:44:35.023762 1 tasks_processing.go:69] worker 41 listening for tasks.
I0420 07:44:35.023763 1 tasks_processing.go:69] worker 38 listening for tasks.
I0420 07:44:35.023770 1 tasks_processing.go:69] worker 11 listening for tasks.
I0420 07:44:35.023775 1 tasks_processing.go:69] worker 28 listening for tasks.
I0420 07:44:35.023777 1 tasks_processing.go:69] worker 13 listening for tasks.
I0420 07:44:35.023777 1 tasks_processing.go:69] worker 45 listening for tasks.
I0420 07:44:35.023780 1 tasks_processing.go:69] worker 37 listening for tasks.
I0420 07:44:35.023783 1 tasks_processing.go:69] worker 12 listening for tasks.
I0420 07:44:35.023782 1 tasks_processing.go:69] worker 14 listening for tasks.
I0420 07:44:35.023773 1 tasks_processing.go:69] worker 58 listening for tasks.
I0420 07:44:35.023750 1 tasks_processing.go:69] worker 26 listening for tasks.
I0420 07:44:35.023791 1 tasks_processing.go:69] worker 46 listening for tasks.
I0420 07:44:35.023756 1 tasks_processing.go:69] worker 40 listening for tasks.
I0420 07:44:35.023795 1 tasks_processing.go:69] worker 36 listening for tasks.
I0420 07:44:35.023822 1 tasks_processing.go:69] worker 31 listening for tasks.
I0420 07:44:35.023763 1 tasks_processing.go:69] worker 10 listening for tasks.
I0420 07:44:35.023770 1 tasks_processing.go:69] worker 44 listening for tasks.
I0420 07:44:35.023786 1 tasks_processing.go:69] worker 29 listening for tasks.
I0420 07:44:35.023791 1 tasks_processing.go:69] worker 21 listening for tasks.
I0420 07:44:35.023745 1 tasks_processing.go:69] worker 15 listening for tasks.
I0420 07:44:35.023796 1 tasks_processing.go:69] worker 30 listening for tasks.
I0420 07:44:35.023796 1 tasks_processing.go:69] worker 20 listening for tasks.
I0420 07:44:35.023816 1 tasks_processing.go:69] worker 16 listening for tasks.
I0420 07:44:35.023826 1 tasks_processing.go:69] worker 18 listening for tasks.
I0420 07:44:35.023878 1 tasks_processing.go:69] worker 56 listening for tasks.
I0420 07:44:35.023893 1 tasks_processing.go:69] worker 63 listening for tasks.
I0420 07:44:35.023893 1 tasks_processing.go:69] worker 55 listening for tasks.
I0420 07:44:35.023901 1 tasks_processing.go:69] worker 33 listening for tasks.
I0420 07:44:35.023900 1 tasks_processing.go:69] worker 54 listening for tasks.
I0420 07:44:35.023905 1 tasks_processing.go:69] worker 49 listening for tasks.
I0420 07:44:35.023906 1 tasks_processing.go:69] worker 57 listening for tasks.
I0420 07:44:35.023912 1 tasks_processing.go:69] worker 51 listening for tasks.
I0420 07:44:35.023919 1 tasks_processing.go:71] worker 9 working on openshift_machine_api_events task.
I0420 07:44:35.023920 1 tasks_processing.go:69] worker 48 listening for tasks.
I0420 07:44:35.023923 1 tasks_processing.go:69] worker 52 listening for tasks.
I0420 07:44:35.023929 1 tasks_processing.go:71] worker 48 working on oauths task.
I0420 07:44:35.023925 1 tasks_processing.go:71] worker 51 working on install_plans task.
I0420 07:44:35.023933 1 tasks_processing.go:71] worker 27 working on overlapping_namespace_uids task.
I0420 07:44:35.023934 1 tasks_processing.go:69] worker 60 listening for tasks.
I0420 07:44:35.023944 1 tasks_processing.go:71] worker 6 working on cluster_apiserver task.
I0420 07:44:35.023954 1 tasks_processing.go:69] worker 50 listening for tasks.
I0420 07:44:35.023968 1 tasks_processing.go:69] worker 59 listening for tasks.
I0420 07:44:35.023976 1 tasks_processing.go:71] worker 25 working on clusterroles task.
I0420 07:44:35.023982 1 tasks_processing.go:69] worker 62 listening for tasks.
I0420 07:44:35.023985 1 tasks_processing.go:71] worker 2 working on validating_webhook_configurations task.
I0420 07:44:35.023992 1 tasks_processing.go:71] worker 39 working on machines task.
I0420 07:44:35.023995 1 tasks_processing.go:71] worker 5 working on qemu_kubevirt_launcher_logs task.
I0420 07:44:35.024010 1 tasks_processing.go:69] worker 61 listening for tasks.
I0420 07:44:35.024024 1 tasks_processing.go:69] worker 32 listening for tasks.
I0420 07:44:35.024029 1 tasks_processing.go:71] worker 24 working on operators task.
I0420 07:44:35.024037 1 tasks_processing.go:71] worker 34 working on ingress_certificates task.
I0420 07:44:35.024058 1 tasks_processing.go:69] worker 47 listening for tasks.
I0420 07:44:35.023929 1 tasks_processing.go:71] worker 53 working on crds task.
I0420 07:44:35.024078 1 tasks_processing.go:69] worker 22 listening for tasks.
I0420 07:44:35.024079 1 tasks_processing.go:71] worker 42 working on support_secret task.
I0420 07:44:35.024094 1 tasks_processing.go:71] worker 38 working on jaegers task.
I0420 07:44:35.024217 1 tasks_processing.go:71] worker 4 working on authentication task.
I0420 07:44:35.024309 1 tasks_processing.go:71] worker 10 working on machine_healthchecks task.
I0420 07:44:35.024320 1 tasks_processing.go:69] worker 35 listening for tasks.
I0420 07:44:35.024326 1 tasks_processing.go:69] worker 19 listening for tasks.
I0420 07:44:35.024339 1 tasks_processing.go:71] worker 18 working on feature_gates task.
I0420 07:44:35.024347 1 tasks_processing.go:71] worker 7 working on storage_cluster task.
I0420 07:44:35.024393 1 tasks_processing.go:71] worker 1 working on machine_autoscalers task.
I0420 07:44:35.024441 1 tasks_processing.go:71] worker 12 working on number_of_pods_and_netnamespaces_with_sdn_annotations task.
I0420 07:44:35.024075 1 tasks_processing.go:71] worker 31 working on image_registries task.
I0420 07:44:35.024476 1 tasks_processing.go:71] worker 19 working on machine_sets task.
I0420 07:44:35.024485 1 tasks_processing.go:71] worker 29 working on node_logs task.
I0420 07:44:35.024507 1 tasks_processing.go:71] worker 13 working on openstack_dataplanenodesets task.
I0420 07:44:35.024564 1 tasks_processing.go:71] worker 57 working on openshift_logging task.
I0420 07:44:35.024611 1 tasks_processing.go:71] worker 28 working on aggregated_monitoring_cr_names task.
I0420 07:44:35.024623 1 tasks_processing.go:71] worker 22 working on machine_configs task.
I0420 07:44:35.024653 1 tasks_processing.go:71] worker 59 working on tsdb_status task.
I0420 07:44:35.024610 1 tasks_processing.go:71] worker 61 working on silenced_alerts task.
I0420 07:44:35.024480 1 tasks_processing.go:71] worker 44 working on sap_datahubs task.
I0420 07:44:35.024541 1 tasks_processing.go:71] worker 56 working on pdbs task.
I0420 07:44:35.024534 1 tasks_processing.go:71] worker 11 working on openstack_dataplanedeployments task.
I0420 07:44:35.024544 1 tasks_processing.go:71] worker 0 working on sap_pods task.
I0420 07:44:35.024547 1 tasks_processing.go:71] worker 63 working on version task.
I0420 07:44:35.024551 1 tasks_processing.go:71] worker 14 working on infrastructures task.
I0420 07:44:35.024552 1 tasks_processing.go:71] worker 40 working on active_alerts task.
W0420 07:44:35.024995 1 gather_active_alerts.go:54] Unable to load alerts client, no alerts will be collected: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory
I0420 07:44:35.024553 1 tasks_processing.go:71] worker 55 working on image task.
I0420 07:44:35.025021 1 gather.go:177] gatherer "clusterconfig" function "active_alerts" took 33.408µs to process 0 records
I0420 07:44:35.024557 1 tasks_processing.go:71] worker 33 working on openstack_version task.
I0420 07:44:35.025081 1 tasks_processing.go:69] worker 17 listening for tasks.
I0420 07:44:35.025098 1 tasks_processing.go:71] worker 17 working on sap_config task.
I0420 07:44:35.024555 1 tasks_processing.go:71] worker 58 working on metrics task.
I0420 07:44:35.024557 1 tasks_processing.go:71] worker 46 working on nodenetworkstates task.
I0420 07:44:35.024560 1 tasks_processing.go:71] worker 54 working on cost_management_metrics_configs task.
W0420 07:44:35.025236 1 gather_most_recent_metrics.go:64] Unable to load metrics client, no metrics will be collected: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory
I0420 07:44:35.025249 1 tasks_processing.go:74] worker 58 stopped.
I0420 07:44:35.024565 1 tasks_processing.go:71] worker 36 working on ingress task.
I0420 07:44:35.025257 1 gather.go:177] gatherer "clusterconfig" function "metrics" took 38.883µs to process 0 records
I0420 07:44:35.024564 1 tasks_processing.go:71] worker 49 working on nodenetworkconfigurationpolicies task.
I0420 07:44:35.024571 1 tasks_processing.go:71] worker 30 working on olm_operators task.
I0420 07:44:35.024572 1 tasks_processing.go:71] worker 45 working on storage_classes task.
I0420 07:44:35.024575 1 tasks_processing.go:71] worker 21 working on image_pruners task.
I0420 07:44:35.024579 1 tasks_processing.go:71] worker 43 working on mutating_webhook_configurations task.
I0420 07:44:35.024579 1 tasks_processing.go:71] worker 15 working on networks task.
I0420 07:44:35.024584 1 tasks_processing.go:71] worker 37 working on nodes task.
I0420 07:44:35.024590 1 tasks_processing.go:71] worker 62 working on openstack_controlplanes task.
I0420 07:44:35.024595 1 tasks_processing.go:71] worker 50 working on lokistack task.
I0420 07:44:35.024593 1 tasks_processing.go:71] worker 60 working on monitoring_persistent_volumes task.
I0420 07:44:35.024594 1 tasks_processing.go:71] worker 52 working on operators_pods_and_events task.
I0420 07:44:35.024601 1 tasks_processing.go:71] worker 16 working on ceph_cluster task.
I0420 07:44:35.024604 1 tasks_processing.go:71] worker 20 working on machine_config_pools task.
I0420 07:44:35.024607 1 tasks_processing.go:71] worker 47 working on container_images task.
I0420 07:44:35.024084 1 tasks_processing.go:71] worker 41 working on dvo_metrics task.
I0420 07:44:35.024617 1 tasks_processing.go:71] worker 32 working on schedulers task.
I0420 07:44:35.024624 1 tasks_processing.go:71] worker 35 working on config_maps task.
W0420 07:44:35.024684 1 gather_prometheus_tsdb_status.go:38] Unable to load metrics client, tsdb status cannot be collected: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory
I0420 07:44:35.026211 1 tasks_processing.go:74] worker 59 stopped.
I0420 07:44:35.026222 1 gather.go:177] gatherer "clusterconfig" function "tsdb_status" took 1.541463ms to process 0 records
W0420 07:44:35.024694 1 gather_silenced_alerts.go:38] Unable to load alerts client, no alerts will be collected: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory
I0420 07:44:35.024537 1 tasks_processing.go:71] worker 3 working on proxies task.
I0420 07:44:35.026233 1 tasks_processing.go:74] worker 61 stopped.
I0420 07:44:35.024545 1 tasks_processing.go:71] worker 26 working on certificate_signing_requests task.
I0420 07:44:35.026244 1 gather.go:177] gatherer "clusterconfig" function "silenced_alerts" took 1.553142ms to process 0 records
I0420 07:44:35.025011 1 tasks_processing.go:71] worker 40 working on service_accounts task.
I0420 07:44:35.027127 1 tasks_processing.go:74] worker 8 stopped.
I0420 07:44:35.027143 1 gather.go:177] gatherer "clusterconfig" function "container_runtime_configs" took 3.429933ms to process 0 records
E0420 07:44:35.027154 1 gather.go:140] gatherer "clusterconfig" function "pod_network_connectivity_checks" failed with the error: the server could not find the requested resource (get podnetworkconnectivitychecks.controlplane.operator.openshift.io)
I0420 07:44:35.027160 1 gather.go:177] gatherer "clusterconfig" function "pod_network_connectivity_checks" took 3.44136ms to process 0 records
I0420 07:44:35.027165 1 tasks_processing.go:74] worker 23 stopped.
I0420 07:44:35.028550 1 controller.go:128] Initializing last reported time to 0001-01-01T00:00:00Z
I0420 07:44:35.028569 1 controller.go:212] Source periodic-clusterconfig *controllerstatus.Simple is not ready
I0420 07:44:35.028573 1 controller.go:212] Source periodic-conditional *controllerstatus.Simple is not ready
I0420 07:44:35.028576 1 controller.go:212] Source periodic-workloads *controllerstatus.Simple is not ready
I0420 07:44:35.028589 1 controller.go:489] The operator is still being initialized
I0420 07:44:35.028596 1 controller.go:512] The operator is healthy
I0420 07:44:35.028635 1 tasks_processing.go:74] worker 39 stopped.
E0420 07:44:35.028649 1 gather.go:140] gatherer "clusterconfig" function "machines" failed with the error: machines.machine.openshift.io is forbidden: User "system:serviceaccount:openshift-insights:gather" cannot list resource "machines" in API group "machine.openshift.io" at the cluster scope
I0420 07:44:35.028666 1 gather.go:177] gatherer "clusterconfig" function "machines" took 4.628682ms to process 0 records
I0420 07:44:35.032104 1 tasks_processing.go:74] worker 6 stopped.
I0420 07:44:35.032793 1 recorder.go:75] Recording config/apiserver with fingerprint=7085ca9a0415bd33c5ccab5d5fba55bdcdec9576c007315df160585b985278f3
I0420 07:44:35.032878 1 gather.go:177] gatherer "clusterconfig" function "cluster_apiserver" took 8.148212ms to process 1 records
I0420 07:44:35.032902 1 gather.go:177] gatherer "clusterconfig" function "storage_cluster" took 8.268071ms to process 0 records
I0420 07:44:35.032911 1 tasks_processing.go:74] worker 7 stopped.
I0420 07:44:35.034587 1 tasks_processing.go:74] worker 27 stopped.
I0420 07:44:35.034614 1 recorder.go:75] Recording config/namespaces_with_overlapping_uids with fingerprint=4f53cda18c2baa0c0354bb5f9a3ecbe5ed12ab4d8e11ba873c2f11161202b945
I0420 07:44:35.034622 1 gather.go:177] gatherer "clusterconfig" function "overlapping_namespace_uids" took 10.640223ms to process 1 records
I0420 07:44:35.046401 1 tasks_processing.go:74] worker 38 stopped.
I0420 07:44:35.046412 1 gather.go:177] gatherer "clusterconfig" function "jaegers" took 22.298734ms to process 0 records
I0420 07:44:35.046421 1 gather.go:177] gatherer "clusterconfig" function "machine_autoscalers" took 21.979534ms to process 0 records
E0420 07:44:35.046428 1 gather.go:140] gatherer "clusterconfig" function "machine_healthchecks" failed with the error: machinehealthchecks.machine.openshift.io is forbidden: User "system:serviceaccount:openshift-insights:gather" cannot list resource "machinehealthchecks" in API group "machine.openshift.io" at the cluster scope
I0420 07:44:35.046436 1 gather.go:177] gatherer "clusterconfig" function "machine_healthchecks" took 22.092727ms to process 0 records
I0420 07:44:35.046444 1 tasks_processing.go:74] worker 10 stopped.
I0420 07:44:35.046448 1 tasks_processing.go:74] worker 1 stopped.
I0420 07:44:35.046519 1 tasks_processing.go:74] worker 19 stopped.
I0420 07:44:35.046537 1 gather.go:177] gatherer "clusterconfig" function "machine_sets" took 22.028324ms to process 0 records
I0420 07:44:35.046547 1 gather.go:177] gatherer "clusterconfig" function "sap_pods" took 21.692957ms to process 0 records
I0420 07:44:35.046556 1 tasks_processing.go:74] worker 0 stopped.
I0420 07:44:35.046558 1 tasks_processing.go:74] worker 54 stopped.
I0420 07:44:35.046570 1 gather.go:177] gatherer "clusterconfig" function "cost_management_metrics_configs" took 21.314824ms to process 0 records
I0420 07:44:35.046660 1 tasks_processing.go:74] worker 29 stopped.
I0420 07:44:35.046673 1 gather.go:177] gatherer "clusterconfig" function "node_logs" took 22.165942ms to process 0 records
I0420 07:44:35.046766 1 tasks_processing.go:74] worker 42 stopped.
E0420 07:44:35.046784 1 gather.go:140] gatherer "clusterconfig" function "support_secret" failed with the error: secrets "support" not found
I0420 07:44:35.046795 1 gather.go:177] gatherer "clusterconfig" function "support_secret" took 22.668631ms to process 0 records
I0420 07:44:35.046958 1 tasks_processing.go:74] worker 33 stopped.
I0420 07:44:35.047028 1 gather.go:177] gatherer "clusterconfig" function "openstack_version" took 21.870593ms to process 0 records
I0420 07:44:35.047067 1 gather.go:177] gatherer "clusterconfig" function "nodenetworkstates" took 21.736123ms to process 0 records
I0420 07:44:35.047111 1 gather.go:177] gatherer "clusterconfig" function "nodenetworkconfigurationpolicies" took 21.694852ms to process 0 records
I0420 07:44:35.047259 1 tasks_processing.go:74] worker 46 stopped.
I0420 07:44:35.047277 1 tasks_processing.go:74] worker 49 stopped.
I0420 07:44:35.047323 1 gather_logs.go:145] no pods in namespace were found
I0420 07:44:35.047418 1 recorder.go:75] Recording config/oauth with fingerprint=487314d73d39f213b548bf7f95b02ae61c7e081348375de18dd44d2e894226a7
I0420 07:44:35.047428 1 tasks_processing.go:74] worker 48 stopped.
I0420 07:44:35.047433 1 gather.go:177] gatherer "clusterconfig" function "oauths" took 23.109821ms to process 1 records
I0420 07:44:35.047442 1 gather.go:177] gatherer "clusterconfig" function "openshift_machine_api_events" took 23.148909ms to process 0 records
I0420 07:44:35.047451 1 gather.go:177] gatherer "clusterconfig" function "openstack_dataplanenodesets" took 22.622216ms to process 0 records
I0420 07:44:35.047472 1 tasks_processing.go:74] worker 9 stopped.
I0420 07:44:35.047519 1 tasks_processing.go:74] worker 13 stopped.
I0420 07:44:35.047732 1 tasks_processing.go:74] worker 14 stopped.
I0420 07:44:35.048238 1 recorder.go:75] Recording config/infrastructure with fingerprint=0f01916ae3796f31bb12ca48403ff362cde99221bca791082d48fc4579bf615a
I0420 07:44:35.048253 1 gather.go:177] gatherer "clusterconfig" function "infrastructures" took 22.322582ms to process 1 records
I0420 07:44:35.048304 1 recorder.go:75] Recording config/schedulers/cluster with fingerprint=1c42fe21ac62a298979e89ca2004df2f997ae3ffedceb1aaaf2dd8cf338be863
I0420 07:44:35.048313 1 gather.go:177] gatherer "clusterconfig" function "schedulers" took 21.144105ms to process 1 records
I0420 07:44:35.048318 1 gather.go:177] gatherer "clusterconfig" function "qemu_kubevirt_launcher_logs" took 23.328491ms to process 0 records
I0420 07:44:35.048322 1 gather.go:177] gatherer "clusterconfig" function "monitoring_persistent_volumes" took 21.696882ms to process 0 records
I0420 07:44:35.048325 1 gather.go:177] gatherer "clusterconfig" function "openstack_dataplanedeployments" took 22.578653ms to process 0 records
I0420 07:44:35.048329 1 gather.go:177] gatherer "clusterconfig" function "sap_datahubs" took 22.649782ms to process 0 records
I0420 07:44:35.048337 1 tasks_processing.go:74] worker 32 stopped.
I0420 07:44:35.048351 1 tasks_processing.go:74] worker 11 stopped.
I0420 07:44:35.048352 1 tasks_processing.go:74] worker 5 stopped.
I0420 07:44:35.048342 1 tasks_processing.go:74] worker 60 stopped.
I0420 07:44:35.048365 1 tasks_processing.go:74] worker 44 stopped.
I0420 07:44:35.048396 1 tasks_processing.go:74] worker 18 stopped.
I0420 07:44:35.048448 1 recorder.go:75] Recording config/featuregate with fingerprint=67c8255b5b45ae7e2d4eb4c6f73f9b6b2ea523c6a335738ea41c0742790f2fb9
I0420 07:44:35.048457 1 gather.go:177] gatherer "clusterconfig" function "feature_gates" took 23.119121ms to process 1 records
I0420 07:44:35.048534 1 tasks_processing.go:74] worker 4 stopped.
I0420 07:44:35.048597 1 recorder.go:75] Recording config/authentication with fingerprint=b364fda4046ceab1894c9143ca06e56d900c68e8fff54a26fa67a1ce8207a24c
I0420 07:44:35.048609 1 gather.go:177] gatherer "clusterconfig" function "authentication" took 23.444977ms to process 1 records
I0420 07:44:35.048689 1 tasks_processing.go:74] worker 31 stopped.
I0420 07:44:35.049106 1 recorder.go:75] Recording config/clusteroperator/imageregistry.operator.openshift.io/config/cluster with fingerprint=74e127db942b704eca4f65d96f6d2c3b94028aaf802c880e278984fc1ee7afde
I0420 07:44:35.049117 1 gather.go:177] gatherer "clusterconfig" function "image_registries" took 23.466ms to process 1 records
I0420 07:44:35.049178 1 recorder.go:75] Recording config/proxy with fingerprint=3d1a7048df04797913a379e0b67a1863a316f74490ad8ea2061545084637fc41
I0420 07:44:35.049188 1 gather.go:177] gatherer "clusterconfig" function "proxies" took 21.75511ms to process 1 records
I0420 07:44:35.049198 1 tasks_processing.go:74] worker 3 stopped.
I0420 07:44:35.049254 1 recorder.go:75] Recording config/clusteroperator/imageregistry.operator.openshift.io/imagepruner/cluster with fingerprint=93eff0bddbb053ea9eefa7cddbdea53861ca3b51d9cafe4afc6af1fd26534807
I0420 07:44:35.049263 1 tasks_processing.go:74] worker 21 stopped.
I0420 07:44:35.049266 1 gather.go:177] gatherer "clusterconfig" function "image_pruners" took 22.709124ms to process 1 records
I0420 07:44:35.049345 1 tasks_processing.go:74] worker 2 stopped.
I0420 07:44:35.049400 1 recorder.go:75] Recording config/validatingwebhookconfigurations/multus.openshift.io with fingerprint=e4293314abdff93f949c7b709d505e6ead8362b482bdaf425d18916b9161aab1
I0420 07:44:35.049475 1 recorder.go:75] Recording config/validatingwebhookconfigurations/network-node-identity.openshift.io with fingerprint=7296f18d22e776cb2bd9d1ff317ea68c413323b4ea8dd7d192d494b796518815
I0420 07:44:35.049494 1 recorder.go:75] Recording config/validatingwebhookconfigurations/performance-addon-operator with fingerprint=31e68f49118c9f09560e5616564a7c8d52471e6e1a5983ee62c8ff9741d54126
I0420 07:44:35.049518 1 recorder.go:75] Recording config/validatingwebhookconfigurations/sre-clusterrolebindings-validation with fingerprint=d1130b8b3ba5d044a8ae2ab7552a3689c9ef327fffa2d7761448141be420a4cb
I0420 07:44:35.049541 1 recorder.go:75] Recording config/validatingwebhookconfigurations/sre-clusterroles-validation with fingerprint=f6e6eeb400ce0f92de6d139a8f6419d61a64fd2aca4b410af8bd6b6263be1d77
I0420 07:44:35.049566 1 recorder.go:75] Recording config/validatingwebhookconfigurations/sre-ingress-config-validation with fingerprint=6105903d23bf71fcadbe568c56c8e678bf61aed357db444584daad182052fe92
I0420 07:44:35.049590 1 recorder.go:75] Recording config/validatingwebhookconfigurations/sre-network-operator-validation with fingerprint=d07d8cb86c04d3e245600d110039bb4a8783573e3ac20358bb7ada9d7aa2b99f
I0420 07:44:35.049622 1 recorder.go:75] Recording config/validatingwebhookconfigurations/sre-regular-user-validation with fingerprint=655d257afc5770ede6254a83a501f4b0f3b2b42584a6b4313c33887f42bfded3
I0420 07:44:35.049645 1 recorder.go:75] Recording config/validatingwebhookconfigurations/sre-scc-validation with fingerprint=7050f04079049912b65fa73bd9081d465d3aaffddc880ae8262d4d5ddcaf0cda
I0420 07:44:35.049673 1 recorder.go:75] Recording config/validatingwebhookconfigurations/sre-serviceaccount-validation with fingerprint=57516994c0a5a04aacde7e139a753dc98bc60c592b764b8812f2f932664ff82c
I0420 07:44:35.049696 1 recorder.go:75] Recording config/validatingwebhookconfigurations/sre-techpreviewnoupgrade-validation with fingerprint=70b2a90b5869eb53485643307c0314ea87d5fb16c1c551e31e095cbe30bd0c9e
I0420 07:44:35.049702 1 gather.go:177] gatherer "clusterconfig" function "validating_webhook_configurations" took 24.26311ms to process 11 records
I0420 07:44:35.053145 1 tasks_processing.go:74] worker 16 stopped.
I0420 07:44:35.053162 1 gather.go:177] gatherer "clusterconfig" function "ceph_cluster" took 27.368305ms to process 0 records
I0420 07:44:35.053181 1 tasks_processing.go:74] worker 57 stopped.
I0420 07:44:35.053193 1 gather.go:177] gatherer "clusterconfig" function "openshift_logging" took 28.561586ms to process 0 records
I0420 07:44:35.053205 1 gather.go:177] gatherer "clusterconfig" function "openstack_controlplanes" took 27.709096ms to process 0 records
I0420 07:44:35.053213 1 tasks_processing.go:74] worker 62 stopped.
I0420 07:44:35.053383 1 tasks_processing.go:74] worker 43 stopped.
I0420 07:44:35.054374 1 recorder.go:75] Recording config/mutatingwebhookconfigurations/aws-pod-identity with fingerprint=2ea607bb029fcd2ff71e05f8a46acf3393c95e2051da21e21eae48a6b8538dbe
I0420 07:44:35.054491 1 recorder.go:75] Recording config/mutatingwebhookconfigurations/sre-podimagespec-mutation with fingerprint=8ed42748c6678156f4dca57c38a072d2f19b69058292db18a73362d922901a0a
I0420 07:44:35.054560 1 recorder.go:75] Recording config/mutatingwebhookconfigurations/sre-service-mutation with fingerprint=e75094b25906a9351ee6bfecd3acc7b406214104a49f3f26cfdf00153af280fe
I0420 07:44:35.054577 1 gather.go:177] gatherer "clusterconfig" function "mutating_webhook_configurations" took 27.954637ms to process 3 records
I0420 07:44:35.054951 1 tasks_processing.go:74] worker 15 stopped.
I0420 07:44:35.055238 1 recorder.go:75] Recording config/network with fingerprint=fa73c51b1e87b346d810f4353733f9f5ae757d19abd0779213439dc97140f186
I0420 07:44:35.055350 1 gather.go:177] gatherer "clusterconfig" function "networks" took 27.959677ms to process 1 records
I0420 07:44:35.055388 1 gather.go:177] gatherer "clusterconfig" function "lokistack" took 28.233753ms to process 0 records
I0420 07:44:35.055411 1 gather.go:177] gatherer "clusterconfig" function "sap_config" took 28.62848ms to process 0 records
I0420 07:44:35.055458 1 tasks_processing.go:74] worker 50 stopped.
I0420 07:44:35.055465 1 tasks_processing.go:74] worker 17 stopped.
I0420 07:44:35.055616 1 tasks_processing.go:74] worker 55 stopped.
I0420 07:44:35.056850 1 recorder.go:75] Recording config/image with fingerprint=39105b3f2f62deb5693ab91a36303a8c74eb18634210801e251053718d51322a
I0420 07:44:35.056869 1 gather.go:177] gatherer "clusterconfig" function "image" took 28.773624ms to process 1 records
I0420 07:44:35.056877 1 gather.go:177] gatherer "clusterconfig" function "olm_operators" took 28.882606ms to process 0 records
I0420 07:44:35.056922 1 tasks_processing.go:74] worker 30 stopped.
I0420 07:44:35.056990 1 recorder.go:75] Recording config/pdbs/openshift-image-registry/image-registry with fingerprint=d1c2816fcc10b1615a41a02308e864252f1546b98b7d5266839527e2113bbf0e
I0420 07:44:35.056990 1 tasks_processing.go:74] worker 56 stopped.
I0420 07:44:35.057018 1 recorder.go:75] Recording config/pdbs/openshift-ingress/router-default with fingerprint=619401c048c634fc152c0372d970391517c43fccc090a505019a0257d50e756f
I0420 07:44:35.057047 1 recorder.go:75] Recording config/pdbs/openshift-operator-lifecycle-manager/packageserver-pdb with fingerprint=bb3137483b5ca1751a5051f1a3b753a735a9da5fcfae0664713a6f08a7983b09
I0420 07:44:35.057073 1 gather.go:177] gatherer "clusterconfig" function "pdbs" took 29.721433ms to process 3 records
I0420 07:44:35.057154 1 recorder.go:75] Recording config/storage/storageclasses/gp2-csi with fingerprint=6862c2817aafb2cb4b37572a9f0e1003d88ef16b0c6ef2d69503f9367af294a0
I0420 07:44:35.057178 1 recorder.go:75] Recording config/storage/storageclasses/gp3-csi with fingerprint=fbbef904f0c4e147b4714e305f79c17ec376f6e2da7eb01d0605a9602230dcec
I0420 07:44:35.057185 1 gather.go:177] gatherer "clusterconfig" function "storage_classes" took 29.955596ms to process 2 records
I0420 07:44:35.057222 1 tasks_processing.go:74] worker 45 stopped.
I0420 07:44:35.057291 1 tasks_processing.go:74] worker 36 stopped.
I0420 07:44:35.057357 1 recorder.go:75] Recording config/ingress with fingerprint=db0aa45840d5280d05d822ca93d1cc57b07903c5f205dcb5d8c888e2f2e553e2
I0420 07:44:35.057374 1 gather.go:177] gatherer "clusterconfig" function "ingress" took 31.452582ms to process 1 records
I0420 07:44:35.059353 1 tasks_processing.go:74] worker 26 stopped.
I0420 07:44:35.059373 1 gather.go:177] gatherer "clusterconfig" function "certificate_signing_requests" took 33.105053ms to process 0 records
W0420 07:44:35.061757 1 gather_dvo_metrics.go:210] Failed to read the DVO metrics. Trying again.
I0420 07:44:35.067436 1 tasks_processing.go:74] worker 53 stopped.
I0420 07:44:35.068835 1 recorder.go:75] Recording config/crd/volumesnapshots.snapshot.storage.k8s.io with fingerprint=e7f8961410afe54c6383bf8f798be58826ae83f08c9891be37e9a76afd337153
I0420 07:44:35.069048 1 recorder.go:75] Recording config/crd/volumesnapshotcontents.snapshot.storage.k8s.io with fingerprint=499c8b93aed0cfbd4563b1afecfcdce78d1073b3c5b918dd034cda63fde17481
I0420 07:44:35.069058 1 gather.go:177] gatherer "clusterconfig" function "crds" took 43.349692ms to process 2 records
I0420 07:44:35.071629 1 tasks_processing.go:74] worker 12 stopped.
I0420 07:44:35.071641 1 gather.go:177] gatherer "clusterconfig" function "number_of_pods_and_netnamespaces_with_sdn_annotations" took 47.168345ms to process 0 records
I0420 07:44:35.071745 1 tasks_processing.go:74] worker 25 stopped.
I0420 07:44:35.071925 1 recorder.go:75] Recording cluster-scoped-resources/rbac.authorization.k8s.io/clusterroles/admin with fingerprint=4ad9cbfbe832f3492302a56c4d4124cb362e843b6de2d87b252d5f7341725388
I0420 07:44:35.072007 1 recorder.go:75] Recording cluster-scoped-resources/rbac.authorization.k8s.io/clusterroles/edit with fingerprint=bb04dcc41fd26f1c961999d5debff165a9246734f97c3c26ec8ab3b1be66f08c
I0420 07:44:35.072015 1 gather.go:177] gatherer "clusterconfig" function "clusterroles" took 47.752395ms to process 2 records
I0420 07:44:35.076094 1 tasks_processing.go:74] worker 28 stopped.
I0420 07:44:35.076109 1 gather.go:177] gatherer "clusterconfig" function "aggregated_monitoring_cr_names" took 51.452588ms to process 0 records
I0420 07:44:35.078893 1 sca.go:136] Pulling SCA certificates from https://api.openshift.com/api/accounts_mgmt/v1/entitlement_certificates. Next check is in 8h0m0s
I0420 07:44:35.078922 1 cluster_transfer.go:83] checking the availability of cluster transfer. Next check is in 12h0m0s
W0420 07:44:35.079015 1 operator.go:288] started
I0420 07:44:35.079040 1 base_controller.go:76] Waiting for caches to sync for LoggingSyncer
I0420 07:44:35.087355 1 controller.go:212] Source periodic-workloads *controllerstatus.Simple is not ready
I0420 07:44:35.087370 1 controller.go:212] Source scaController *sca.Controller is not ready
I0420 07:44:35.087373 1 controller.go:212] Source clusterTransferController *clustertransfer.Controller is not ready
I0420 07:44:35.087378 1 controller.go:212] Source periodic-clusterconfig *controllerstatus.Simple is not ready
I0420 07:44:35.087382 1 controller.go:212] Source periodic-conditional *controllerstatus.Simple is not ready
I0420 07:44:35.087404 1 controller.go:489] The operator is still being initialized
I0420 07:44:35.087415 1 controller.go:512] The operator is healthy
I0420 07:44:35.090455 1 prometheus_rules.go:88] Prometheus rules successfully created
I0420 07:44:35.093559 1 tasks_processing.go:74] worker 35 stopped.
E0420 07:44:35.093574 1 gather.go:140] gatherer "clusterconfig" function "config_maps" failed with the error: configmaps "cluster-monitoring-config" not found
E0420 07:44:35.093580 1 gather.go:140] gatherer "clusterconfig" function "config_maps" failed with the error: configmaps "gateway-mode-config" not found
E0420 07:44:35.093584 1 gather.go:140] gatherer "clusterconfig" function "config_maps" failed with the error: configmaps "insights-config" not found
I0420 07:44:35.093594 1 recorder.go:75] Recording config/configmaps/openshift-config/installer-images/images.json with fingerprint=26b6661162b099a0f5a279859b4f46c867929a79d9a4a41fde4be4e6fe138018
I0420 07:44:35.093630 1 recorder.go:75] Recording config/configmaps/openshift-config/kube-root-ca.crt/ca.crt with fingerprint=d476c7d3f5b104863f08f481b1264dcc68cc272ecefb0ecb709b18a6afab034d
I0420 07:44:35.093642 1 recorder.go:75] Recording config/configmaps/openshift-config/openshift-install/invoker with fingerprint=76b482f683cd3ef9da02debac5b26080a5aeb06ff768ee5c21117514dff29d8a
I0420 07:44:35.093649 1 recorder.go:75] Recording config/configmaps/openshift-config/openshift-install/version with fingerprint=c93090eb0d2a4736885abeb79c91680cfd01fda46464f83456b085d4dc8239f0
I0420 07:44:35.093656 1 recorder.go:75] Recording config/configmaps/openshift-config/openshift-service-ca.crt/service-ca.crt with fingerprint=e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
I0420 07:44:35.093725 1 recorder.go:75] Recording config/configmaps/openshift-config/rosa-brand-logo/rosa-brand-logo.svg with fingerprint=6ed8ca4dd7a8eee7249182bc006e9649ce84d76c551ddfaaa33e55d8c4cc1ed0
I0420 07:44:35.093734 1 recorder.go:75] Recording config/configmaps/kube-system/cluster-config-v1/install-config with fingerprint=ab3811c6b83fd7b8e920094cfa3080d1b4ee3c35ec4c8379437b21d27bd6608d
I0420 07:44:35.093740 1 gather.go:177] gatherer "clusterconfig" function "config_maps" took 67.389955ms to process 7 records
E0420 07:44:35.094914 1 cluster_transfer.go:95] failed to pull cluster transfer: unable to retrieve cluster transfer data from https://api.openshift.com/api/accounts_mgmt/v1/cluster_transfers/: Get "https://api.openshift.com/api/accounts_mgmt/v1/cluster_transfers/?search=cluster_uuid+is+%27913efe1d-81ec-4e1a-9459-cf0a958a7956%27+and+status+is+%27accepted%27": dial tcp: lookup api.openshift.com on 172.30.0.10:53: read udp 10.129.0.11:58016->172.30.0.10:53: read: connection refused
I0420 07:44:35.094931 1 controllerstatus.go:80] name=clusterTransferController healthy=true reason=Disconnected message=failed to pull cluster transfer: unable to retrieve cluster transfer data from https://api.openshift.com/api/accounts_mgmt/v1/cluster_transfers/: Get "https://api.openshift.com/api/accounts_mgmt/v1/cluster_transfers/?search=cluster_uuid+is+%27913efe1d-81ec-4e1a-9459-cf0a958a7956%27+and+status+is+%27accepted%27": dial tcp: lookup api.openshift.com on 172.30.0.10:53: read udp 10.129.0.11:58016->172.30.0.10:53: read: connection refused
I0420 07:44:35.095920 1 shared_informer.go:320] Caches are synced for RequestHeaderAuthRequestController
I0420 07:44:35.095934 1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0420 07:44:35.095950 1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I0420 07:44:35.102087 1 base_controller.go:82] Caches are synced for ConfigController
I0420 07:44:35.102101 1 base_controller.go:119] Starting #1 worker of ConfigController controller ...
I0420 07:44:35.120925 1 configmapobserver.go:84] configmaps "insights-config" not found
I0420 07:44:35.138884 1 tasks_processing.go:74] worker 20 stopped.
I0420 07:44:35.138905 1 gather.go:177] gatherer "clusterconfig" function "machine_config_pools" took 112.988156ms to process 0 records
I0420 07:44:35.140337 1 tasks_processing.go:74] worker 63 stopped.
I0420 07:44:35.140685 1 recorder.go:75] Recording config/version with fingerprint=4802919f12f23ca4cc0ef8c8d2829a6eef826701ae3c14761f7f9268b9e310c9
I0420 07:44:35.140703 1 recorder.go:75] Recording config/id with fingerprint=f83e3ced11f8fb7349e899af72acb97d605b998c862dad36fca9b3f1b9ebe347
I0420 07:44:35.140728 1 gather.go:177] gatherer "clusterconfig" function "version" took 115.446196ms to process 2 records
I0420 07:44:35.142098 1 tasks_processing.go:74] worker 37 stopped.
I0420 07:44:35.142399 1 recorder.go:75] Recording config/node/ip-10-0-0-252.ec2.internal with fingerprint=55f5900690adee7e27326aab100e4f6659a8e390bfe118e337435355db3bf473
I0420 07:44:35.142472 1 recorder.go:75] Recording config/node/ip-10-0-1-36.ec2.internal with fingerprint=0bf8d7d5ff1aaabc107a6de711a48471a15ea1f8f2ffab22fb8af6f69c51763a
I0420 07:44:35.142523 1 recorder.go:75] Recording config/node/ip-10-0-2-222.ec2.internal with fingerprint=39db51f2c4a1271561f37b437517c87bf79c526945134c82ab733f031c5b98c0
I0420 07:44:35.142533 1 gather.go:177] gatherer "clusterconfig" function "nodes" took 116.627681ms to process 3 records
I0420 07:44:35.144010 1 tasks_processing.go:74] worker 22 stopped.
I0420 07:44:35.144039 1 recorder.go:75] Recording aggregated/unused_machine_configs_count with fingerprint=4bfc9fa984e5dfcd45848faaf05269de7619bf42edf9f781751af5ee05c1a499
I0420 07:44:35.144047 1 gather.go:177] gatherer "clusterconfig" function "machine_configs" took 119.366519ms to process 1 records
I0420 07:44:35.151832 1 requests.go:205] Asking for SCA certificate with "{"arch": ["x86_64"]}" payload
W0420 07:44:35.157685 1 sca.go:161] Failed to pull SCA certs from https://api.openshift.com/api/accounts_mgmt/v1/entitlement_certificates: unable to retrieve SCA certs data from https://api.openshift.com/api/accounts_mgmt/v1/entitlement_certificates: Post "https://api.openshift.com/api/accounts_mgmt/v1/entitlement_certificates": dial tcp: lookup api.openshift.com on 172.30.0.10:53: read udp 10.129.0.11:41100->172.30.0.10:53: read: connection refused
I0420 07:44:35.157702 1 controllerstatus.go:80] name=scaController healthy=true reason=NonHTTPError message=Failed to pull SCA certs from https://api.openshift.com/api/accounts_mgmt/v1/entitlement_certificates: unable to retrieve SCA certs data from https://api.openshift.com/api/accounts_mgmt/v1/entitlement_certificates: Post "https://api.openshift.com/api/accounts_mgmt/v1/entitlement_certificates": dial tcp: lookup api.openshift.com on 172.30.0.10:53: read udp 10.129.0.11:41100->172.30.0.10:53: read: connection refused
I0420 07:44:35.159258 1 tasks_processing.go:74] worker 47 stopped.
I0420 07:44:35.159345 1 recorder.go:75] Recording config/running_containers with fingerprint=e29161611c65254e8de24259861375737e020d41e97db313c015d50ec213ce7f
I0420 07:44:35.159364 1 gather.go:177] gatherer "clusterconfig" function "container_images" took 133.312306ms to process 1 records
I0420 07:44:35.179340 1 base_controller.go:82] Caches are synced for LoggingSyncer
I0420 07:44:35.179356 1 base_controller.go:119] Starting #1 worker of LoggingSyncer controller ...
I0420 07:44:35.184910 1 tasks_processing.go:74] worker 34 stopped.
E0420 07:44:35.184927 1 gather.go:140] gatherer "clusterconfig" function "ingress_certificates" failed with the error: failed to get secret 'router-certs-default' in namespace 'openshift-ingress': secrets "router-certs-default" not found
E0420 07:44:35.184934 1 gather.go:140] gatherer "clusterconfig" function "ingress_certificates" failed with the error: failed to get secret '2ppoejm08poh4n8vgkpisbjtgiafe9am-primary-cert-bundle-secret' in namespace 'openshift-ingress-operator': secrets "2ppoejm08poh4n8vgkpisbjtgiafe9am-primary-cert-bundle-secret" not found
I0420 07:44:35.185004 1 recorder.go:75] Recording aggregated/ingress_controllers_certs with fingerprint=12832bc969a39691f78a062fb655fe946a632cbd2f3412a83b2bfeb0f0d9735e
I0420 07:44:35.185017 1 gather.go:177] gatherer "clusterconfig" function "ingress_certificates" took 160.856474ms to process 1 records
I0420 07:44:35.486763 1 gather_cluster_operator_pods_and_events.go:121] Found 18 pods with 21 containers
I0420 07:44:35.486779 1 gather_cluster_operator_pods_and_events.go:235] Maximum buffer size: 1198372 bytes
I0420 07:44:35.486992 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for dns container dns-default-47n6r pod in namespace openshift-dns (previous: false).
I0420 07:44:35.711320 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for dns-default-47n6r pod in namespace openshift-dns for failing operator dns (previous: false): "container \"dns\" in pod \"dns-default-47n6r\" is waiting to start: ContainerCreating"
I0420 07:44:35.711340 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"dns\" in pod \"dns-default-47n6r\" is waiting to start: ContainerCreating"
I0420 07:44:35.711348 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for kube-rbac-proxy container dns-default-47n6r pod in namespace openshift-dns (previous: false).
I0420 07:44:35.891433 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for dns-default-47n6r pod in namespace openshift-dns for failing operator kube-rbac-proxy (previous: false): "container \"kube-rbac-proxy\" in pod \"dns-default-47n6r\" is waiting to start: ContainerCreating"
I0420 07:44:35.891455 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"kube-rbac-proxy\" in pod \"dns-default-47n6r\" is waiting to start: ContainerCreating"
I0420 07:44:35.891479 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for dns container dns-default-7h4w2 pod in namespace openshift-dns (previous: false).
W0420 07:44:36.058180 1 gather_dvo_metrics.go:210] Failed to read the DVO metrics. Trying again.
I0420 07:44:36.114944 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for dns-default-7h4w2 pod in namespace openshift-dns for failing operator dns (previous: false): "container \"dns\" in pod \"dns-default-7h4w2\" is waiting to start: ContainerCreating"
I0420 07:44:36.114964 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"dns\" in pod \"dns-default-7h4w2\" is waiting to start: ContainerCreating"
I0420 07:44:36.114972 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for kube-rbac-proxy container dns-default-7h4w2 pod in namespace openshift-dns (previous: false).
I0420 07:44:36.292172 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for dns-default-7h4w2 pod in namespace openshift-dns for failing operator kube-rbac-proxy (previous: false): "container \"kube-rbac-proxy\" in pod \"dns-default-7h4w2\" is waiting to start: ContainerCreating"
I0420 07:44:36.292191 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"kube-rbac-proxy\" in pod \"dns-default-7h4w2\" is waiting to start: ContainerCreating"
I0420 07:44:36.292202 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for dns container dns-default-sq7g5 pod in namespace openshift-dns (previous: false).
I0420 07:44:36.481559 1 gather_cluster_operators.go:184] Unable to get configs.samples.operator.openshift.io resource due to: configs.samples.operator.openshift.io "cluster" not found
I0420 07:44:36.512593 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for dns-default-sq7g5 pod in namespace openshift-dns for failing operator dns (previous: false): "container \"dns\" in pod \"dns-default-sq7g5\" is waiting to start: ContainerCreating"
I0420 07:44:36.512609 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"dns\" in pod \"dns-default-sq7g5\" is waiting to start: ContainerCreating"
I0420 07:44:36.512617 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for kube-rbac-proxy container dns-default-sq7g5 pod in namespace openshift-dns (previous: false).
I0420 07:44:36.698336 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for dns-default-sq7g5 pod in namespace openshift-dns for failing operator kube-rbac-proxy (previous: false): "container \"kube-rbac-proxy\" in pod \"dns-default-sq7g5\" is waiting to start: ContainerCreating"
I0420 07:44:36.698353 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"kube-rbac-proxy\" in pod \"dns-default-sq7g5\" is waiting to start: ContainerCreating"
I0420 07:44:36.698363 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for dns-node-resolver container node-resolver-j4p4f pod in namespace openshift-dns (previous: false).
I0420 07:44:36.883180 1 tasks_processing.go:74] worker 24 stopped.
I0420 07:44:36.883233 1 recorder.go:75] Recording config/clusteroperator/console with fingerprint=4bab3af94aa6ef2f0caef6c7351682f8c691b48a4a187cbe02c023a7c93a8840
I0420 07:44:36.883264 1 recorder.go:75] Recording config/clusteroperator/csi-snapshot-controller with fingerprint=c9503ee9b56bb825b4b5d7e1a415b5af385077cfaa200a2dd5a033180360b692
I0420 07:44:36.883297 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/csisnapshotcontroller/cluster with fingerprint=5adc514f4b63e2f1ecc68bf6f9c0af70c5eea04522a49524e102721b1c41f80e
I0420 07:44:36.883322 1 recorder.go:75] Recording config/clusteroperator/dns with fingerprint=b201c561bdd3272d99df6f4ae9718d8438d3cfcecbcbdb6fd410b540941d3bcc
I0420 07:44:36.883339 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/dns/default with fingerprint=9e7b4ce029030d3d8c3b49af92c556acdcc415000b40d3f969dbdc42c432b47f
I0420 07:44:36.883361 1 recorder.go:75] Recording config/clusteroperator/image-registry with fingerprint=0a992aa4eb809129fee14f61f9659f203d0736c7c2ba970b89837e4960a58dc8
I0420 07:44:36.883408 1 recorder.go:75] Recording config/clusteroperator/ingress with fingerprint=2042d951c9c987c1407f3041d80e52737fb5b966c0b7b6f0e5bb9dd35a1e8651
I0420 07:44:36.883431 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/ingresscontroller/openshift-ingress-operator/default with fingerprint=a470146579fb17f2ab35a30eff063b54acc14c7ffbe021a6ed26329ef48760f1
I0420 07:44:36.883445 1 recorder.go:75] Recording config/clusteroperator/insights with fingerprint=a166067d034deb01345f83e9d41390d450f1fa419824b881b5ff82e4067136cb
I0420 07:44:36.883463 1 recorder.go:75] Recording config/clusteroperator/kube-apiserver with fingerprint=0376bb216a5f102f820dab405d801963be6f7e1e5fab3f794a14f67b137be407
I0420 07:44:36.883472 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/kubeapiserver/cluster with fingerprint=51503bf0b784fcf65ea46bcaf1f72ac1a5c4d5dc211934f18f27871efed05762
I0420 07:44:36.883488 1 recorder.go:75] Recording config/clusteroperator/kube-controller-manager with fingerprint=8b4983a9a6fc57a38b9b83d566474b5c2bb9bd816c998ba6d597d342fa1b1b5e
I0420 07:44:36.883498 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/kubecontrollermanager/cluster with fingerprint=ce90c0d4f367d7da085074268031798382ae7c54fdcb0a21f15a4818fe308c11
I0420 07:44:36.883514 1 recorder.go:75] Recording config/clusteroperator/kube-scheduler with fingerprint=cc9acb5dabc608cebe3ff200dfecc416b963aabe7259a7f284ef88f5bf0c083e
I0420 07:44:36.883522 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/kubescheduler/cluster with fingerprint=f2940fb9fd20c19951dfc295eb363b7fba0c505f5ae61f01967a063099e6b60a
I0420 07:44:36.883536 1 recorder.go:75] Recording config/clusteroperator/kube-storage-version-migrator with fingerprint=5f1eb831ed6c588a28b87280e1c5bbb16d9e31d58349e768e4bb416356eea462
I0420 07:44:36.883546 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/kubestorageversionmigrator/cluster with fingerprint=9351181aa7e6ada41ef581ab31e13516c6b934cc95710154bafb2eb222cb58db
I0420 07:44:36.883561 1 recorder.go:75] Recording config/clusteroperator/monitoring with fingerprint=a74aeb8cedd44afd03569bbb2c694e10f93968131e48324af5e4bd0583f76155
I0420 07:44:36.883680 1 recorder.go:75] Recording config/clusteroperator/network with fingerprint=b6d2acbf7a461993a3a191c153a4c89d73726b414a68c271fa943b812a8a378e
I0420 07:44:36.883691 1 recorder.go:75] Recording config/clusteroperator/network.operator.openshift.io/operatorpki/openshift-ovn-kubernetes/ovn with fingerprint=626a89d20e0deaed5b6dfb533acfe65f4bb1618bd200a703b62e60c5d16d94ab
I0420 07:44:36.883709 1 recorder.go:75] Recording config/clusteroperator/network.operator.openshift.io/operatorpki/openshift-ovn-kubernetes/signer with fingerprint=90410b16914712b85b3c4578716ad8c0ae072e688f4cd1e022bf76f20da3506d
I0420 07:44:36.883732 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/network/cluster with fingerprint=a93d15eaecb455a0e40ecb2826eeecc1533899204ddd3c3921d15ab70af7ae75
I0420 07:44:36.883754 1 recorder.go:75] Recording config/clusteroperator/node-tuning with fingerprint=dc11e51d605775dcddc91948375de87c40c4e12b99a41b39cc2fd3d928962aad
I0420 07:44:36.883778 1 recorder.go:75] Recording config/clusteroperator/openshift-apiserver with fingerprint=271fd3a8cf666beef4d7be7a253ad97e533ad343ef212c675fc4196b0c93f2ea
I0420 07:44:36.883787 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/openshiftapiserver/cluster with fingerprint=e712e6cf27339b441e4ed1f4cde91dbde7e952698ba93407e4457db63a4a4c76
I0420 07:44:36.883816 1 recorder.go:75] Recording config/clusteroperator/openshift-controller-manager with fingerprint=f1b606720285d0439c51fd4d41b8135765ac57eb80800e6de82fca5446854181
I0420 07:44:36.883826 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/openshiftcontrollermanager/cluster with fingerprint=d71a0f4672f9b45d9fc8293bf1687afc650fd28d32e2e30de27523fe7b4eadf7
I0420 07:44:36.883841 1 recorder.go:75] Recording config/clusteroperator/openshift-samples with fingerprint=0dff820391aee8d8c2ffa4645fe72b9dbcb4c3dde57fcd80d9525bd80d8a7005
I0420 07:44:36.883854 1 recorder.go:75] Recording config/clusteroperator/operator-lifecycle-manager with fingerprint=e562a91c245189681e3e680e467be6ddfdf2c3e7087fab35832ca1964e9b86ef
I0420 07:44:36.883868 1 recorder.go:75] Recording config/clusteroperator/operator-lifecycle-manager-catalog with fingerprint=86e29c6e2abeb9f5e3d0c2cecc33d6ef76a65083a913539ecbc6d7341046dadb
I0420 07:44:36.883883 1 recorder.go:75] Recording config/clusteroperator/operator-lifecycle-manager-packageserver with fingerprint=61c5cb5ba97ac708610b19fc719934649aad15a67d0f92434c43998595f747d4
I0420 07:44:36.883897 1 recorder.go:75] Recording config/clusteroperator/service-ca with fingerprint=bb2942702221eaac9423fac008dadae67b8f3edc54139fd03c45bfa36f683bc4
I0420 07:44:36.883930 1 recorder.go:75] Recording config/clusteroperator/storage with fingerprint=60a5db651052c7cfe25e70501266c763458ba328f79b32bbd2043ac10d696c71
I0420 07:44:36.883946 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/clustercsidriver/ebs.csi.aws.com with fingerprint=510064d6f6bcced87ab5bd2ddaff3d0edd7f93f4a4f7af2641f29fc53ffab21e
I0420 07:44:36.883954 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/storage/cluster with fingerprint=8e480f8c1ce1b39baac42d8ec780c57c2592929ae0c801b61ffad49ba13f33ad
I0420 07:44:36.883961 1 gather.go:177] gatherer "clusterconfig" function "operators" took 1.859134028s to process 35 records
I0420 07:44:36.893497 1 gather_cluster_operator_pods_and_events.go:280] Error: "log buffer is empty"
I0420 07:44:36.893515 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for dns-node-resolver container node-resolver-jjjtg pod in namespace openshift-dns (previous: false).
W0420 07:44:37.058465 1 gather_dvo_metrics.go:210] Failed to read the DVO metrics. Trying again.
I0420 07:44:37.092025 1 gather_cluster_operator_pods_and_events.go:280] Error: "log buffer is empty"
I0420 07:44:37.092042 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for dns-node-resolver container node-resolver-qkbnw pod in namespace openshift-dns (previous: false).
I0420 07:44:37.292406 1 gather_cluster_operator_pods_and_events.go:280] Error: "log buffer is empty"
I0420 07:44:37.292424 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for registry container image-registry-698fbcfbff-nt4s5 pod in namespace openshift-image-registry (previous: false).
I0420 07:44:37.491481 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for image-registry-698fbcfbff-nt4s5 pod in namespace openshift-image-registry for failing operator registry (previous: false): "container \"registry\" in pod \"image-registry-698fbcfbff-nt4s5\" is waiting to start: ContainerCreating"
I0420 07:44:37.491498 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"registry\" in pod \"image-registry-698fbcfbff-nt4s5\" is waiting to start: ContainerCreating"
I0420 07:44:37.491508 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for registry container image-registry-7d44746564-4zlzb pod in namespace openshift-image-registry (previous: false).
I0420 07:44:37.696433 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for image-registry-7d44746564-4zlzb pod in namespace openshift-image-registry for failing operator registry (previous: false): "container \"registry\" in pod \"image-registry-7d44746564-4zlzb\" is waiting to start: ContainerCreating"
I0420 07:44:37.696455 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"registry\" in pod \"image-registry-7d44746564-4zlzb\" is waiting to start: ContainerCreating"
I0420 07:44:37.696468 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for registry container image-registry-7d44746564-gcjfd pod in namespace openshift-image-registry (previous: false).
I0420 07:44:37.891463 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for image-registry-7d44746564-gcjfd pod in namespace openshift-image-registry for failing operator registry (previous: false): "container \"registry\" in pod \"image-registry-7d44746564-gcjfd\" is waiting to start: ContainerCreating"
I0420 07:44:37.891479 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"registry\" in pod \"image-registry-7d44746564-gcjfd\" is waiting to start: ContainerCreating"
I0420 07:44:37.891490 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for node-ca container node-ca-l8xmj pod in namespace openshift-image-registry (previous: false).
W0420 07:44:38.058548 1 gather_dvo_metrics.go:210] Failed to read the DVO metrics. Trying again.
I0420 07:44:38.093223 1 gather_cluster_operator_pods_and_events.go:280] Error: "log buffer is empty"
I0420 07:44:38.093243 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for node-ca container node-ca-psckh pod in namespace openshift-image-registry (previous: false).
I0420 07:44:38.291398 1 gather_cluster_operator_pods_and_events.go:280] Error: "log buffer is empty"
I0420 07:44:38.291416 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for node-ca container node-ca-wdc2c pod in namespace openshift-image-registry (previous: false).
I0420 07:44:38.492339 1 gather_cluster_operator_pods_and_events.go:280] Error: "log buffer is empty"
I0420 07:44:38.492359 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for router container router-default-57f6759659-qfql9 pod in namespace openshift-ingress (previous: false).
I0420 07:44:38.692276 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for router-default-57f6759659-qfql9 pod in namespace openshift-ingress for failing operator router (previous: false): "container \"router\" in pod \"router-default-57f6759659-qfql9\" is waiting to start: ContainerCreating"
I0420 07:44:38.692294 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"router\" in pod \"router-default-57f6759659-qfql9\" is waiting to start: ContainerCreating"
I0420 07:44:38.692305 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for router container router-default-7c46f5746b-9j9j6 pod in namespace openshift-ingress (previous: false).
I0420 07:44:38.891533 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for router-default-7c46f5746b-9j9j6 pod in namespace openshift-ingress for failing operator router (previous: false): "container \"router\" in pod \"router-default-7c46f5746b-9j9j6\" is waiting to start: ContainerCreating"
I0420 07:44:38.891550 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"router\" in pod \"router-default-7c46f5746b-9j9j6\" is waiting to start: ContainerCreating"
I0420 07:44:38.891561 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for router container router-default-7c46f5746b-c5tlt pod in namespace openshift-ingress (previous: false).
W0420 07:44:39.058695 1 gather_dvo_metrics.go:210] Failed to read the DVO metrics. Trying again.
I0420 07:44:39.092617 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for router-default-7c46f5746b-c5tlt pod in namespace openshift-ingress for failing operator router (previous: false): "container \"router\" in pod \"router-default-7c46f5746b-c5tlt\" is waiting to start: ContainerCreating"
I0420 07:44:39.092641 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"router\" in pod \"router-default-7c46f5746b-c5tlt\" is waiting to start: ContainerCreating"
I0420 07:44:39.092655 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for serve-healthcheck-canary container ingress-canary-85lzb pod in namespace openshift-ingress-canary (previous: false).
I0420 07:44:39.303481 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for ingress-canary-85lzb pod in namespace openshift-ingress-canary for failing operator serve-healthcheck-canary (previous: false): "container \"serve-healthcheck-canary\" in pod \"ingress-canary-85lzb\" is waiting to start: ContainerCreating"
I0420 07:44:39.303501 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"serve-healthcheck-canary\" in pod \"ingress-canary-85lzb\" is waiting to start: ContainerCreating"
I0420 07:44:39.303514 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for serve-healthcheck-canary container ingress-canary-l4fcx pod in namespace openshift-ingress-canary (previous: false).
I0420 07:44:39.492149 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for ingress-canary-l4fcx pod in namespace openshift-ingress-canary for failing operator serve-healthcheck-canary (previous: false): "container \"serve-healthcheck-canary\" in pod \"ingress-canary-l4fcx\" is waiting to start: ContainerCreating"
I0420 07:44:39.492170 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"serve-healthcheck-canary\" in pod \"ingress-canary-l4fcx\" is waiting to start: ContainerCreating"
I0420 07:44:39.492183 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for serve-healthcheck-canary container ingress-canary-rwqk7 pod in namespace openshift-ingress-canary (previous: false).
I0420 07:44:39.692318 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for ingress-canary-rwqk7 pod in namespace openshift-ingress-canary for failing operator serve-healthcheck-canary (previous: false): "container \"serve-healthcheck-canary\" in pod \"ingress-canary-rwqk7\" is waiting to start: ContainerCreating"
I0420 07:44:39.692339 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"serve-healthcheck-canary\" in pod \"ingress-canary-rwqk7\" is waiting to start: ContainerCreating"
I0420 07:44:39.692358 1 tasks_processing.go:74] worker 52 stopped.
I0420 07:44:39.692492 1 recorder.go:75] Recording events/openshift-dns-operator with fingerprint=15850d4a2f2499b3088fe95217a2a0e5827622213b909580e5aecd7c60f85c6f
I0420 07:44:39.692551 1 recorder.go:75] Recording events/openshift-dns with fingerprint=c3a55b4dd12082ff07aa6f49e363f09719482d2375e40c48d71b92a852c32d29
I0420 07:44:39.692661 1 recorder.go:75] Recording events/openshift-image-registry with fingerprint=d5441d54eb625894f254b85bb8ee56c6fb2c676e361dd1a04c1890e51b4c8112
I0420 07:44:39.692700 1 recorder.go:75] Recording events/openshift-ingress-operator with fingerprint=d0e0d4032e8d720a54323b4a929876da860269d553bfd4a73e33738d3cca285a
I0420 07:44:39.692751 1 recorder.go:75] Recording events/openshift-ingress with fingerprint=73068796b514f65d0a4ca8cec5f3f4218cd22ef3effdf86be52508ab5e6f46fa
I0420 07:44:39.692770 1 recorder.go:75] Recording events/openshift-ingress-canary with fingerprint=f03bb75a2826ff766194a5031cb373f8a0c261f7c3335813491009d184277084
I0420 07:44:39.692779 1 gather.go:177] gatherer "clusterconfig" function "operators_pods_and_events" took 4.66665782s to process 6 records
W0420 07:44:40.058138 1 gather_dvo_metrics.go:210] Failed to read the DVO metrics. Trying again.
W0420 07:44:40.058163 1 gather_dvo_metrics.go:117] Unable to read metrics from endpoint "http://deployment-validation-operator-metrics.openshift-deployment-validation-operator.svc:8383": DVO metrics service was not available within the 5s timeout: context deadline exceeded
I0420 07:44:40.058178 1 tasks_processing.go:74] worker 41 stopped.
E0420 07:44:40.058189 1 gather.go:140] gatherer "clusterconfig" function "dvo_metrics" failed with the error: DVO metrics service was not available within the 5s timeout: context deadline exceeded
I0420 07:44:40.058199 1 recorder.go:75] Recording config/dvo_metrics with fingerprint=e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
W0420 07:44:40.058213 1 gather.go:155] issue recording gatherer "clusterconfig" function "dvo_metrics" result "config/dvo_metrics" because of the warning: warning: the record with the same fingerprint "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855" was already recorded at path "config/configmaps/openshift-config/openshift-service-ca.crt/service-ca.crt", recording another one with a different path "config/dvo_metrics"
I0420 07:44:40.058225 1 gather.go:177] gatherer "clusterconfig" function "dvo_metrics" took 5.032203584s to process 1 records
I0420 07:44:47.461410 1 tasks_processing.go:74] worker 51 stopped.
I0420 07:44:47.461458 1 recorder.go:75] Recording config/installplans with fingerprint=7b887df561a3a9e6ef0dc672845aa5d56e348505006b7496d3a2f83892b0c95b
I0420 07:44:47.461474 1 gather.go:177] gatherer "clusterconfig" function "install_plans" took 12.437459278s to process 1 records
I0420 07:44:48.184011 1 configmapobserver.go:84] configmaps "insights-config" not found
I0420 07:44:48.232917 1 tasks_processing.go:74] worker 40 stopped.
I0420 07:44:48.233191 1 recorder.go:75] Recording config/serviceaccounts with fingerprint=5f3315562ab922847b80910efb7b38b1b30885aa3cdda0895bd6e02e66f43fde
I0420 07:44:48.233207 1 gather.go:177] gatherer "clusterconfig" function "service_accounts" took 13.206643472s to process 1 records
E0420 07:44:48.233263 1 periodic.go:247] "Unhandled Error" err="clusterconfig failed after 13.209s with: function \"pod_network_connectivity_checks\" failed with an error, function \"machines\" failed with an error, function \"machine_healthchecks\" failed with an error, function \"support_secret\" failed with an error, function \"config_maps\" failed with an error, function \"ingress_certificates\" failed with an error, function \"dvo_metrics\" failed with an error"
I0420 07:44:48.234370 1 controllerstatus.go:89] name=periodic-clusterconfig healthy=false reason=PeriodicGatherFailed message=Source clusterconfig could not be retrieved: function "pod_network_connectivity_checks" failed with an error, function "machines" failed with an error, function "machine_healthchecks" failed with an error, function "support_secret" failed with an error, function "config_maps" failed with an error, function "ingress_certificates" failed with an error, function "dvo_metrics" failed with an error
I0420 07:44:48.234384 1 periodic.go:209] Running workloads gatherer
I0420 07:44:48.234400 1 tasks_processing.go:45] number of workers: 2
I0420 07:44:48.234408 1 tasks_processing.go:69] worker 1 listening for tasks.
I0420 07:44:48.234412 1 tasks_processing.go:71] worker 1 working on workload_info task.
I0420 07:44:48.234420 1 tasks_processing.go:69] worker 0 listening for tasks.
I0420 07:44:48.234443 1 tasks_processing.go:71] worker 0 working on helmchart_info task.
I0420 07:44:48.260881 1 gather_workloads_info.go:278] Loaded pods in 0s, will wait 22s for image data
I0420 07:44:48.264416 1 tasks_processing.go:74] worker 0 stopped.
I0420 07:44:48.264434 1 gather.go:177] gatherer "workloads" function "helmchart_info" took 29.948976ms to process 0 records
I0420 07:44:48.270898 1 gather_workloads_info.go:387] No image sha256:2904a78e2eb73fd6a9bb94c105c2a056831fb4113fbb7b0607c50adc9d879c9b (11ms)
I0420 07:44:48.283793 1 gather_workloads_info.go:387] No image sha256:04c87c054a3f366a7dfbe0a93ebb0c80a098ee16842c5794b67c1202eec61996 (13ms)
I0420 07:44:48.298507 1 gather_workloads_info.go:387] No image sha256:765f0d23b637f685f98a31bd47c131b03cf72a40761a3f9a9d6320faa3c33733 (15ms)
I0420 07:44:48.305134 1 gather_workloads_info.go:387] No image sha256:4556896f77307821531ef91b7b7faccb82b824ea695693b2989f597f0deca038 (7ms)
I0420 07:44:48.315147 1 gather_workloads_info.go:387] No image sha256:a56211d075aa43cbb491f669a5b2e46ee023dc95b7d51dbac28f463948c5ad61 (10ms)
I0420 07:44:48.322128 1 gather_workloads_info.go:387] No image sha256:1a2532940843248c57d52141185dd71fbc393ab28b65d48f682038632c1dbbad (7ms)
I0420 07:44:48.328734 1 gather_workloads_info.go:387] No image sha256:a258c226562adb14e3a163a1940938526ee6a0928982a7667d85d9a7334ce639 (7ms)
I0420 07:44:48.335570 1 gather_workloads_info.go:387] No image sha256:91828234f107c068c8a4966d08370ae7b73e637651dbc6d92c18c4553402c22c (7ms)
I0420 07:44:48.343304 1 gather_workloads_info.go:387] No image sha256:e84cb128d930bd1ab867cc89b7b7bf2b2c0e41105ab93b5381069945b3ee9c57 (8ms)
I0420 07:44:48.354001 1 gather_workloads_info.go:387] No image sha256:5f0b67cfbbc381243fb91ccc17345b56d05f4d717c667e8c644e5bf05633ba71 (11ms)
I0420 07:44:48.366903 1 gather_workloads_info.go:387] No image sha256:0a99240166165eb5718e7516a43282fe32df9c7c5e809b31b58abe44e42ff94d (13ms)
I0420 07:44:48.468729 1 gather_workloads_info.go:387] No image sha256:47154813651033d59751fb655a384dbffb64dd26f10bd7f3be0c3128d0486356 (102ms)
I0420 07:44:48.567546 1 gather_workloads_info.go:387] No image sha256:2e57e192c3c1240fd935dcd55c8fde5e70e78bf81d6176c96edf21fafe59f8ba (99ms)
I0420 07:44:48.668673 1 gather_workloads_info.go:387] No image sha256:2598489729a4b258e4ecda4a06f6875133f2a10ced5c5241f8a57a8a05418e36 (101ms)
I0420 07:44:48.768698 1 gather_workloads_info.go:387] No image sha256:521712486e2c6e3c020dad6a1cb340db8e55665b69f7c208fab9cd9e965fd588 (100ms)
I0420 07:44:48.868601 1 gather_workloads_info.go:387] No image sha256:a498046d64605bcccee2440aa4f04a4602baaae263cf01d977ec5208e876b1fd (100ms)
I0420 07:44:48.968084 1 gather_workloads_info.go:387] No image sha256:653c666f842c13e0baae2e89a9b1efe0e2ef56f621ffb5b32005115d2a26ab8c (99ms)
I0420 07:44:49.068843 1 gather_workloads_info.go:387] No image sha256:36b9e89c3cfcf1ab9ae500486e38afb6862cba48cb0b4d84a09508ab8f3d299f (101ms)
I0420 07:44:49.168319 1 gather_workloads_info.go:387] No image sha256:b3909bf664c77097f75b3768830863d642eed3815dab2bfb4415c771ca2d5007 (99ms)
I0420 07:44:49.268391 1 gather_workloads_info.go:387] No image sha256:ca1344cb64140188b7cae7bbc51fb751566c0b0c97d5e39b5850e628032c4a5e (100ms)
I0420 07:44:49.368393 1 gather_workloads_info.go:387] No image sha256:7adc1eab05d6724c76ba751f6df816b08d6e70b78dee9eb94fa6fd9690542c98 (100ms)
I0420 07:44:49.468471 1 gather_workloads_info.go:387] No image sha256:5a95c19d82767e0235b4edb4a0536482c816904897aae1dc3eb255cb52b87a9f (100ms)
I0420 07:44:49.568755 1 gather_workloads_info.go:387] No image sha256:7b31223098f08328f5ddea8e5b871dbbd5f5a61ec550e8956f66793c0c6031a9 (100ms)
I0420 07:44:49.668190 1 gather_workloads_info.go:387] No image sha256:a0105d1eb62cf6ac9e5e2ef28d3e89bf6dc514bc594fc7090fe5a5ee18a09c87 (99ms)
I0420 07:44:49.768530 1 gather_workloads_info.go:387] No image sha256:c15ca0c0ad60fe8757c2d5d1723fcdd7a1ed6c0251a90d22a7e6cae6811d01aa (100ms)
I0420 07:44:49.868298 1 gather_workloads_info.go:387] No image sha256:695cf2f0cc07683c2a3ce1eaf3e56fe18abc6e2bac716f7d9843f5d173b9df52 (100ms)
I0420 07:44:49.967797 1 gather_workloads_info.go:387] No image sha256:56a85660a445eced5c79a595a0eccf590087c5672d50f49d4c25ad52f9a44f04 (99ms)
I0420 07:44:50.068130 1 gather_workloads_info.go:387] No image sha256:03cf4cd7ef1518610c6c7b3ad27d1622d82e98e3dc6e3f8e5d0fceb5c8d3786e (100ms)
I0420 07:44:50.168465 1 gather_workloads_info.go:387] No image sha256:289816958633a763a72dbc44e1dad40466223164e7e253039514f0d974ea5d21 (100ms)
I0420 07:44:50.268782 1 gather_workloads_info.go:387] No image sha256:2e564f336c77116053f34d4201d364d8da04e789cfffa0ea422574c95f2d6404 (100ms)
I0420 07:44:50.370011 1 gather_workloads_info.go:387] No image sha256:c940ea87e7d133d75ba0002ef00c0806825eed3db8094cdb260d1bac18127733 (101ms)
I0420 07:44:50.469489 1 gather_workloads_info.go:387] No image sha256:943018739e3db1763c3184b460dbc409e058abbac76d57b9927faad317be85e4 (99ms)
I0420 07:44:50.469520 1 tasks_processing.go:74] worker 1 stopped.
E0420 07:44:50.469530 1 gather.go:140] gatherer "workloads" function "workload_info" failed with the error: no running pods found for the insights-runtime-extractor statefulset
I0420 07:44:50.469765 1 recorder.go:75] Recording config/workload_info with fingerprint=668c67fab22e6b3f697ade8dc238976bdd6fc85e3107f25e11662c82a527547c
I0420 07:44:50.469779 1 gather.go:177] gatherer "workloads" function "workload_info" took 2.235100167s to process 1 records
E0420 07:44:50.469822 1 periodic.go:247] "Unhandled Error" err="workloads failed after 2.235s with: function \"workload_info\" failed with an error"
I0420 07:44:50.470923 1 controllerstatus.go:89] name=periodic-workloads healthy=false reason=PeriodicGatherFailed message=Source workloads could not be retrieved: function "workload_info" failed with an error
I0420 07:44:50.470937 1 periodic.go:209] Running conditional gatherer
I0420 07:44:50.476764 1 requests.go:294] Making HTTP GET request at: https://console.redhat.com/api/gathering/v2/4.20.8/gathering_rules
I0420 07:44:50.482876 1 conditional_gatherer.go:107] Get "https://console.redhat.com/api/gathering/v2/4.20.8/gathering_rules": dial tcp: lookup console.redhat.com on 172.30.0.10:53: read udp 10.129.0.11:36217->172.30.0.10:53: read: connection refused
E0420 07:44:50.483103 1 conditional_gatherer.go:322] unable to update alerts cache: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory
I0420 07:44:50.483160 1 conditional_gatherer.go:384] updating version cache for conditional gatherer
I0420 07:44:50.488178 1 conditional_gatherer.go:392] cluster version is '4.20.8'
E0420 07:44:50.488191 1 conditional_gatherer.go:209] error checking conditions for a gathering rule: alerts cache is missing
E0420 07:44:50.488196 1 conditional_gatherer.go:209] error checking conditions for a gathering rule: alerts cache is missing
E0420 07:44:50.488200 1 conditional_gatherer.go:209] error checking conditions for a gathering rule: alerts cache is missing
E0420 07:44:50.488203 1 conditional_gatherer.go:209] error checking conditions for a gathering rule: alerts cache is missing
E0420 07:44:50.488206 1 conditional_gatherer.go:209] error checking conditions for a gathering rule: alerts cache is missing
E0420 07:44:50.488210 1 conditional_gatherer.go:209] error checking conditions for a gathering rule: alerts cache is missing
E0420 07:44:50.488213 1 conditional_gatherer.go:209] error checking conditions for a gathering rule: alerts cache is missing
E0420 07:44:50.488215 1 conditional_gatherer.go:209] error checking conditions for a gathering rule: alerts cache is missing
E0420 07:44:50.488218 1 conditional_gatherer.go:209] error checking conditions for a gathering rule: alerts cache is missing
I0420 07:44:50.488231 1 tasks_processing.go:45] number of workers: 3
I0420 07:44:50.488240 1 tasks_processing.go:69] worker 2 listening for tasks.
I0420 07:44:50.488244 1 tasks_processing.go:71] worker 2 working on conditional_gatherer_rules task.
I0420 07:44:50.488252 1 tasks_processing.go:69] worker 0 listening for tasks.
I0420 07:44:50.488260 1 tasks_processing.go:69] worker 1 listening for tasks.
I0420 07:44:50.488266 1 tasks_processing.go:71] worker 0 working on remote_configuration task.
I0420 07:44:50.488267 1 tasks_processing.go:74] worker 1 stopped.
I0420 07:44:50.488283 1 tasks_processing.go:71] worker 2 working on rapid_container_logs task.
I0420 07:44:50.488320 1 recorder.go:75] Recording insights-operator/conditional-gatherer-rules with fingerprint=7034af97e7e41c22e4b775abdd4b9066c8ebb19da33eb7f69f39bfd2eb5f6406
I0420 07:44:50.488333 1 gather.go:177] gatherer "conditional" function "conditional_gatherer_rules" took 628ns to process 1 records
I0420 07:44:50.488363 1 recorder.go:75] Recording insights-operator/remote-configuration with fingerprint=0394430c431eec4d48bb1811a90918e95161d2282c59af26f2473613cc0959db
I0420 07:44:50.488371 1 gather.go:177] gatherer "conditional" function "remote_configuration" took 1.217µs to process 1 records
I0420 07:44:50.488376 1 tasks_processing.go:74] worker 0 stopped.
I0420 07:44:50.488550 1 tasks_processing.go:74] worker 2 stopped.
I0420 07:44:50.488564 1 gather.go:177] gatherer "conditional" function "rapid_container_logs" took 245.833µs to process 0 records
I0420 07:44:50.488594 1 controllerstatus.go:89] name=periodic-conditional healthy=false reason=NotAvailable message=Get "https://console.redhat.com/api/gathering/v2/4.20.8/gathering_rules": dial tcp: lookup console.redhat.com on 172.30.0.10:53: read udp 10.129.0.11:36217->172.30.0.10:53: read: connection refused
I0420 07:44:50.488612 1 recorder.go:75] Recording insights-operator/remote-configuration.json with fingerprint=359de9c990c741675cec72fda96b5c3682221efdb4799f5eaa6e9805bcd3b5c1
I0420 07:44:50.508696 1 recorder.go:75] Recording insights-operator/gathers with fingerprint=789de89ddb9ca47f3e3f86061b04c677b79dc03b2f09200291f37151dd15d2f0
I0420 07:44:50.508844 1 diskrecorder.go:70] Writing 99 records to /var/lib/insights-operator/insights-2026-04-20-074450.tar.gz
I0420 07:44:50.514429 1 diskrecorder.go:51] Wrote 99 records to disk in 5ms
I0420 07:44:50.514459 1 periodic.go:278] Gathering cluster info every 2h0m0s
I0420 07:44:50.514476 1 periodic.go:279] Configuration is dataReporting: interval: 2h0m0s, uploadEndpoint: https://console.redhat.com/api/ingress/v1/upload, storagePath: /var/lib/insights-operator, downloadEndpoint: https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/reports, conditionalGathererEndpoint: https://console.redhat.com/api/gathering/v2/%s/gathering_rules, obfuscation: [] sca: disabled: false, endpoint: https://api.openshift.com/api/accounts_mgmt/v1/entitlement_certificates, interval: 8h0m0s alerting: disabled: false clusterTransfer: endpoint: https://api.openshift.com/api/accounts_mgmt/v1/cluster_transfers/, interval: 12h0m0s proxy: httpProxy: , httpsProxy: , noProxy:
I0420 07:44:50.956899 1 configmapobserver.go:84] configmaps "insights-config" not found
I0420 07:44:51.160412 1 configmapobserver.go:84] configmaps "insights-config" not found
I0420 07:45:06.059296 1 configmapobserver.go:84] configmaps "insights-config" not found
I0420 07:45:49.563094 1 observer_polling.go:111] Observed file "/var/run/configmaps/service-ca-bundle/service-ca.crt" has been created (hash="7b12cf30c04877914d8a1c860e65864d01da73eca8c59230af2fdaf9c46693a5")
W0420 07:45:49.563126 1 builder.go:160] Restart triggered because of file /var/run/configmaps/service-ca-bundle/service-ca.crt was created
I0420 07:45:49.563186 1 observer_polling.go:111] Observed file "/var/run/secrets/serving-cert/tls.crt" has been created (hash="3039a8d4ec170dd27fc004d1ebff137557ba1c3a1afd793152b275bc14c9b03b")
I0420 07:45:49.563202 1 genericapiserver.go:548] "[graceful-termination] shutdown event" name="ShutdownInitiated"
I0420 07:45:49.563234 1 observer_polling.go:111] Observed file "/var/run/secrets/serving-cert/tls.key" has been created (hash="e83d18dd3e997126550b205fd717fa34a29326fb728c96a650d03bea628b3b38")
I0420 07:45:49.563251 1 base_controller.go:123] Shutting down worker of LoggingSyncer controller ...
I0420 07:45:49.563252 1 periodic.go:170] Shutting down
I0420 07:45:49.563235 1 base_controller.go:181] Shutting down LoggingSyncer ...
I0420 07:45:49.563305 1 base_controller.go:113] All LoggingSyncer workers have been terminated
I0420 07:45:49.563310 1 base_controller.go:123] Shutting down worker of ConfigController controller ...
E0420 07:45:49.563310 1 controller.go:299] Unable to write cluster operator status: client rate limiter Wait returned an error: context canceled
I0420 07:45:49.563305 1 base_controller.go:181] Shutting down ConfigController ...
I0420 07:45:49.563332 1 simple_featuregate_reader.go:177] Shutting down feature-gate-detector
E0420 07:45:49.563347 1 controller.go:299] Unable to write cluster operator status: client rate limiter Wait returned an error: context canceled