W0419 18:50:44.112192 1 cmd.go:257] Using insecure, self-signed certificates
I0419 18:50:44.848158 1 start.go:138] Unable to read service ca bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory
I0419 18:50:44.848497 1 observer_polling.go:159] Starting file observer
I0419 18:50:45.798576 1 operator.go:60] Starting insights-operator v0.0.0-master+$Format:%H$
I0419 18:50:45.798758 1 legacy_config.go:327] Current config: {"report":false,"storagePath":"/var/lib/insights-operator","interval":"2h","endpoint":"https://console.redhat.com/api/ingress/v1/upload","conditionalGathererEndpoint":"https://console.redhat.com/api/gathering/v2/%s/gathering_rules","pull_report":{"endpoint":"https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/reports","delay":"60s","timeout":"3000s","min_retry":"30s"},"impersonate":"system:serviceaccount:openshift-insights:gather","enableGlobalObfuscation":false,"ocm":{"scaEndpoint":"https://api.openshift.com/api/accounts_mgmt/v1/entitlement_certificates","scaInterval":"8h","scaDisabled":false,"clusterTransferEndpoint":"https://api.openshift.com/api/accounts_mgmt/v1/cluster_transfers/","clusterTransferInterval":"12h"},"disableInsightsAlerts":false,"processingStatusEndpoint":"https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/request/%s/status","reportEndpointTechPreview":"https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/request/%s/report"}
I0419 18:50:45.799310 1 secure_serving.go:57] Forcing use of http/1.1 only
W0419 18:50:45.799328 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.
W0419 18:50:45.799333 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.
W0419 18:50:45.799337 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.
W0419 18:50:45.799340 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.
W0419 18:50:45.799342 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.
W0419 18:50:45.799344 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.
I0419 18:50:45.799364 1 simple_featuregate_reader.go:171] Starting feature-gate-detector
I0419 18:50:45.802732 1 operator.go:125] FeatureGates initialized: knownFeatureGates=[AdditionalRoutingCapabilities AdminNetworkPolicy AlibabaPlatform AzureWorkloadIdentity BuildCSIVolumes CPMSMachineNamePrefix ConsolePluginContentSecurityPolicy ExternalOIDC ExternalOIDCWithUIDAndExtraClaimMappings GatewayAPI GatewayAPIController HighlyAvailableArbiter ImageVolume IngressControllerLBSubnetsAWS KMSv1 MachineConfigNodes ManagedBootImages ManagedBootImagesAWS MetricsCollectionProfiles NetworkDiagnosticsConfig NetworkLiveMigration NetworkSegmentation PinnedImages ProcMountType RouteAdvertisements RouteExternalCertificate ServiceAccountTokenNodeBinding SetEIPForNLBIngressController SigstoreImageVerification StoragePerformantSecurityPolicy UpgradeStatus UserNamespacesPodSecurityStandards UserNamespacesSupport VSphereMultiDisk VSphereMultiNetworks AWSClusterHostedDNS AWSClusterHostedDNSInstall AWSDedicatedHosts AWSServiceLBNetworkSecurityGroup AutomatedEtcdBackup AzureClusterHostedDNSInstall AzureDedicatedHosts AzureMultiDisk BootImageSkewEnforcement BootcNodeManagement ClusterAPIInstall ClusterAPIInstallIBMCloud ClusterMonitoringConfig ClusterVersionOperatorConfiguration DNSNameResolver DualReplica DyanmicServiceEndpointIBMCloud DynamicResourceAllocation EtcdBackendQuota EventedPLEG Example Example2 ExternalSnapshotMetadata GCPClusterHostedDNS GCPClusterHostedDNSInstall GCPCustomAPIEndpoints GCPCustomAPIEndpointsInstall ImageModeStatusReporting ImageStreamImportMode IngressControllerDynamicConfigurationManager InsightsConfig InsightsConfigAPI InsightsOnDemandDataGather IrreconcilableMachineConfig KMSEncryptionProvider MachineAPIMigration MachineAPIOperatorDisableMachineHealthCheckController ManagedBootImagesAzure ManagedBootImagesvSphere MaxUnavailableStatefulSet MinimumKubeletVersion MixedCPUsAllocation MultiArchInstallAzure MultiDiskSetup MutatingAdmissionPolicy NewOLM NewOLMCatalogdAPIV1Metas NewOLMOwnSingleNamespace NewOLMPreflightPermissionChecks NewOLMWebhookProviderOpenshiftServiceCA NoRegistryClusterOperations NodeSwap NutanixMultiSubnets OVNObservability OpenShiftPodSecurityAdmission PreconfiguredUDNAddresses SELinuxMount ShortCertRotation SignatureStores SigstoreImageVerificationPKI TranslateStreamCloseWebsocketRequests VSphereConfigurableMaxAllowedBlockVolumesPerNode VSphereHostVMGroupZonal VSphereMixedNodeEnv VolumeAttributesClass VolumeGroupSnapshot]
I0419 18:50:45.802758 1 event.go:377] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-insights", Name:"insights-operator", UID:"d63efa13-c947-4d47-bc7a-227a847b2e0b", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'FeatureGatesInitialized' FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdditionalRoutingCapabilities", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BuildCSIVolumes", "CPMSMachineNamePrefix", "ConsolePluginContentSecurityPolicy", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GatewayAPI", "GatewayAPIController", "HighlyAvailableArbiter", "ImageVolume", "IngressControllerLBSubnetsAWS", "KMSv1", "MachineConfigNodes", "ManagedBootImages", "ManagedBootImagesAWS", "MetricsCollectionProfiles", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "PinnedImages", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SetEIPForNLBIngressController", "SigstoreImageVerification", "StoragePerformantSecurityPolicy", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiDisk", "VSphereMultiNetworks"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AWSClusterHostedDNSInstall", "AWSDedicatedHosts", "AWSServiceLBNetworkSecurityGroup", "AutomatedEtcdBackup", "AzureClusterHostedDNSInstall", "AzureDedicatedHosts", "AzureMultiDisk", "BootImageSkewEnforcement", "BootcNodeManagement", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ClusterVersionOperatorConfiguration", "DNSNameResolver", "DualReplica", "DyanmicServiceEndpointIBMCloud", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "Example2", "ExternalSnapshotMetadata", "GCPClusterHostedDNS", "GCPClusterHostedDNSInstall", "GCPCustomAPIEndpoints", "GCPCustomAPIEndpointsInstall", "ImageModeStatusReporting", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "IrreconcilableMachineConfig", "KMSEncryptionProvider", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "ManagedBootImagesAzure", "ManagedBootImagesvSphere", "MaxUnavailableStatefulSet", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "MultiDiskSetup", "MutatingAdmissionPolicy", "NewOLM", "NewOLMCatalogdAPIV1Metas", "NewOLMOwnSingleNamespace", "NewOLMPreflightPermissionChecks", "NewOLMWebhookProviderOpenshiftServiceCA", "NoRegistryClusterOperations", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PreconfiguredUDNAddresses", "SELinuxMount", "ShortCertRotation", "SignatureStores", "SigstoreImageVerificationPKI", "TranslateStreamCloseWebsocketRequests", "VSphereConfigurableMaxAllowedBlockVolumesPerNode", "VSphereHostVMGroupZonal", "VSphereMixedNodeEnv", "VolumeAttributesClass", "VolumeGroupSnapshot"}}
I0419 18:50:45.804238 1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
I0419 18:50:45.804258 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController
I0419 18:50:45.804253 1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I0419 18:50:45.804268 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0419 18:50:45.804255 1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
I0419 18:50:45.804286 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I0419 18:50:45.804578 1 dynamic_serving_content.go:135] "Starting controller" name="serving-cert::/tmp/serving-cert-4180667758/tls.crt::/tmp/serving-cert-4180667758/tls.key"
I0419 18:50:45.804946 1 secure_serving.go:213] Serving securely on [::]:8443
I0419 18:50:45.805022 1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
W0419 18:50:45.807514 1 configmapobserver.go:64] Cannot get the configuration config map: configmaps "insights-config" not found. Default configuration is used.
I0419 18:50:45.807570 1 secretconfigobserver.go:216] Legacy configuration set: enabled=false endpoint=https://console.redhat.com/api/ingress/v1/upload conditional_gatherer_endpoint=https://console.redhat.com/api/gathering/v2/%s/gathering_rules interval=2h0m0s token=false reportEndpoint=https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/reports initialPollingDelay=1m0s minRetryTime=30s pollingTimeout=50m0s processingStatusEndpoint=https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/request/%s/status
I0419 18:50:45.807636 1 base_controller.go:76] Waiting for caches to sync for ConfigController
I0419 18:50:45.812616 1 secretconfigobserver.go:249] Found cloud.openshift.com token
I0419 18:50:45.812643 1 secretconfigobserver.go:204] Legacy configuration updated: enabled=true endpoint=https://console.redhat.com/api/ingress/v1/upload conditional_gatherer_endpoint=https://console.redhat.com/api/gathering/v2/%s/gathering_rules interval=2h0m0s token=true reportEndpoint=https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/reports initialPollingDelay=1m0s minRetryTime=30s pollingTimeout=50m0s processingStatusEndpoint=https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/request/%s/status
I0419 18:50:45.817191 1 secretconfigobserver.go:119] support secret does not exist
I0419 18:50:45.822128 1 secretconfigobserver.go:249] Found cloud.openshift.com token
I0419 18:50:45.826518 1 secretconfigobserver.go:119] support secret does not exist
I0419 18:50:45.830146 1 recorder.go:161] Pruning old reports every 8h16m55s, max age is 288h0m0s
I0419 18:50:45.841275 1 controllerstatus.go:80] name=insightsreport healthy=true reason= message=
I0419 18:50:45.841257 1 controllerstatus.go:80] name=insightsuploader healthy=true reason= message=
I0419 18:50:45.841287 1 insightsreport.go:296] Starting report retriever
I0419 18:50:45.841295 1 insightsreport.go:298] Insights analysis reports will be downloaded from the https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/reports endpoint with a delay of 1m0s
I0419 18:50:45.841296 1 insightsuploader.go:86] Reporting status periodically to https://console.redhat.com/api/ingress/v1/upload every 2h0m0s, starting in 1m30s
I0419 18:50:45.841319 1 periodic.go:209] Running clusterconfig gatherer
I0419 18:50:45.841382 1 tasks_processing.go:45] number of workers: 64
I0419 18:50:45.841416 1 tasks_processing.go:69] worker 3 listening for tasks.
I0419 18:50:45.841426 1 tasks_processing.go:69] worker 0 listening for tasks.
I0419 18:50:45.841431 1 tasks_processing.go:69] worker 1 listening for tasks.
I0419 18:50:45.841437 1 tasks_processing.go:69] worker 2 listening for tasks.
I0419 18:50:45.841437 1 tasks_processing.go:71] worker 1 working on image_pruners task.
I0419 18:50:45.841440 1 tasks_processing.go:71] worker 2 working on tsdb_status task.
I0419 18:50:45.841446 1 tasks_processing.go:69] worker 9 listening for tasks.
I0419 18:50:45.841448 1 tasks_processing.go:69] worker 10 listening for tasks.
I0419 18:50:45.841463 1 tasks_processing.go:69] worker 7 listening for tasks.
I0419 18:50:45.841461 1 tasks_processing.go:69] worker 6 listening for tasks.
I0419 18:50:45.841474 1 tasks_processing.go:69] worker 4 listening for tasks.
I0419 18:50:45.841480 1 tasks_processing.go:69] worker 8 listening for tasks.
W0419 18:50:45.841479 1 gather_prometheus_tsdb_status.go:38] Unable to load metrics client, tsdb status cannot be collected: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory
I0419 18:50:45.841484 1 tasks_processing.go:69] worker 5 listening for tasks.
I0419 18:50:45.841489 1 tasks_processing.go:69] worker 12 listening for tasks.
I0419 18:50:45.841490 1 tasks_processing.go:69] worker 11 listening for tasks.
I0419 18:50:45.841494 1 gather.go:177] gatherer "clusterconfig" function "tsdb_status" took 43.06µs to process 0 records
I0419 18:50:45.841492 1 tasks_processing.go:71] worker 5 working on oauths task.
I0419 18:50:45.841501 1 tasks_processing.go:71] worker 11 working on container_runtime_configs task.
I0419 18:50:45.841515 1 tasks_processing.go:69] worker 17 listening for tasks.
I0419 18:50:45.841521 1 tasks_processing.go:71] worker 10 working on cost_management_metrics_configs task.
I0419 18:50:45.841514 1 tasks_processing.go:69] worker 14 listening for tasks.
I0419 18:50:45.841531 1 tasks_processing.go:69] worker 42 listening for tasks.
I0419 18:50:45.841534 1 tasks_processing.go:71] worker 14 working on sap_pods task.
I0419 18:50:45.841533 1 tasks_processing.go:69] worker 18 listening for tasks.
I0419 18:50:45.841489 1 tasks_processing.go:69] worker 13 listening for tasks.
I0419 18:50:45.841536 1 tasks_processing.go:69] worker 34 listening for tasks.
I0419 18:50:45.841549 1 tasks_processing.go:69] worker 20 listening for tasks.
I0419 18:50:45.841552 1 tasks_processing.go:69] worker 48 listening for tasks.
I0419 18:50:45.841505 1 tasks_processing.go:69] worker 15 listening for tasks.
I0419 18:50:45.841560 1 tasks_processing.go:69] worker 21 listening for tasks.
I0419 18:50:45.841561 1 tasks_processing.go:69] worker 38 listening for tasks.
I0419 18:50:45.841567 1 tasks_processing.go:69] worker 57 listening for tasks.
I0419 18:50:45.841569 1 tasks_processing.go:69] worker 30 listening for tasks.
I0419 18:50:45.841574 1 tasks_processing.go:69] worker 39 listening for tasks.
I0419 18:50:45.841571 1 tasks_processing.go:69] worker 43 listening for tasks.
I0419 18:50:45.841578 1 tasks_processing.go:69] worker 58 listening for tasks.
I0419 18:50:45.841558 1 tasks_processing.go:69] worker 54 listening for tasks.
I0419 18:50:45.841585 1 tasks_processing.go:69] worker 51 listening for tasks.
I0419 18:50:45.841589 1 tasks_processing.go:69] worker 44 listening for tasks.
I0419 18:50:45.841560 1 tasks_processing.go:69] worker 29 listening for tasks.
I0419 18:50:45.841595 1 tasks_processing.go:69] worker 45 listening for tasks.
I0419 18:50:45.841597 1 tasks_processing.go:69] worker 33 listening for tasks.
I0419 18:50:45.841599 1 tasks_processing.go:69] worker 55 listening for tasks.
I0419 18:50:45.841603 1 tasks_processing.go:71] worker 8 working on clusterroles task.
I0419 18:50:45.841609 1 tasks_processing.go:71] worker 2 working on openshift_machine_api_events task.
I0419 18:50:45.841610 1 tasks_processing.go:69] worker 27 listening for tasks.
I0419 18:50:45.841610 1 tasks_processing.go:71] worker 0 working on image task.
I0419 18:50:45.841615 1 tasks_processing.go:69] worker 60 listening for tasks.
I0419 18:50:45.841613 1 tasks_processing.go:69] worker 61 listening for tasks.
I0419 18:50:45.841620 1 tasks_processing.go:69] worker 63 listening for tasks.
I0419 18:50:45.841495 1 tasks_processing.go:71] worker 9 working on machines task.
I0419 18:50:45.841650 1 tasks_processing.go:71] worker 7 working on validating_webhook_configurations task.
I0419 18:50:45.841720 1 tasks_processing.go:71] worker 39 working on schedulers task.
I0419 18:50:45.841750 1 tasks_processing.go:71] worker 51 working on nodenetworkstates task.
I0419 18:50:45.841771 1 tasks_processing.go:71] worker 48 working on pdbs task.
I0419 18:50:45.841815 1 tasks_processing.go:71] worker 38 working on aggregated_monitoring_cr_names task.
I0419 18:50:45.841887 1 tasks_processing.go:71] worker 33 working on service_accounts task.
I0419 18:50:45.841947 1 tasks_processing.go:71] worker 30 working on pod_network_connectivity_checks task.
I0419 18:50:45.841967 1 tasks_processing.go:71] worker 20 working on olm_operators task.
I0419 18:50:45.841496 1 tasks_processing.go:71] worker 12 working on active_alerts task.
W0419 18:50:45.842013 1 gather_active_alerts.go:54] Unable to load alerts client, no alerts will be collected: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory
I0419 18:50:45.842027 1 tasks_processing.go:71] worker 12 working on monitoring_persistent_volumes task.
I0419 18:50:45.842059 1 gather.go:177] gatherer "clusterconfig" function "active_alerts" took 31.509µs to process 0 records
I0419 18:50:45.841866 1 tasks_processing.go:71] worker 15 working on openstack_dataplanedeployments task.
I0419 18:50:45.841608 1 tasks_processing.go:71] worker 3 working on image_registries task.
I0419 18:50:45.841595 1 tasks_processing.go:69] worker 25 listening for tasks.
I0419 18:50:45.842191 1 tasks_processing.go:71] worker 25 working on machine_healthchecks task.
I0419 18:50:45.841947 1 tasks_processing.go:71] worker 13 working on node_logs task.
I0419 18:50:45.841523 1 tasks_processing.go:69] worker 53 listening for tasks.
I0419 18:50:45.841542 1 tasks_processing.go:69] worker 19 listening for tasks.
I0419 18:50:45.841505 1 tasks_processing.go:69] worker 28 listening for tasks.
I0419 18:50:45.841545 1 tasks_processing.go:69] worker 36 listening for tasks.
I0419 18:50:45.841545 1 tasks_processing.go:69] worker 59 listening for tasks.
I0419 18:50:45.841552 1 tasks_processing.go:69] worker 37 listening for tasks.
I0419 18:50:45.841525 1 tasks_processing.go:71] worker 17 working on openstack_version task.
I0419 18:50:45.842282 1 tasks_processing.go:71] worker 37 working on ingress_certificates task.
I0419 18:50:45.842355 1 tasks_processing.go:71] worker 28 working on infrastructures task.
I0419 18:50:45.842389 1 tasks_processing.go:71] worker 53 working on jaegers task.
I0419 18:50:45.842415 1 tasks_processing.go:71] worker 36 working on feature_gates task.
I0419 18:50:45.842437 1 tasks_processing.go:71] worker 19 working on openstack_dataplanenodesets task.
I0419 18:50:45.841559 1 tasks_processing.go:69] worker 47 listening for tasks.
I0419 18:50:45.842448 1 tasks_processing.go:71] worker 59 working on ingress task.
I0419 18:50:45.842453 1 tasks_processing.go:71] worker 47 working on openshift_logging task.
I0419 18:50:45.841568 1 tasks_processing.go:69] worker 49 listening for tasks.
I0419 18:50:45.841569 1 tasks_processing.go:69] worker 22 listening for tasks.
I0419 18:50:45.841576 1 tasks_processing.go:69] worker 50 listening for tasks.
I0419 18:50:45.841578 1 tasks_processing.go:69] worker 23 listening for tasks.
I0419 18:50:45.841581 1 tasks_processing.go:69] worker 31 listening for tasks.
I0419 18:50:45.841583 1 tasks_processing.go:69] worker 40 listening for tasks.
I0419 18:50:45.841585 1 tasks_processing.go:69] worker 24 listening for tasks.
I0419 18:50:45.841578 1 tasks_processing.go:69] worker 56 listening for tasks.
I0419 18:50:45.841588 1 tasks_processing.go:69] worker 32 listening for tasks.
I0419 18:50:45.841519 1 tasks_processing.go:69] worker 41 listening for tasks.
I0419 18:50:45.841594 1 tasks_processing.go:71] worker 4 working on ceph_cluster task.
I0419 18:50:45.842594 1 tasks_processing.go:71] worker 41 working on lokistack task.
I0419 18:50:45.841600 1 tasks_processing.go:69] worker 52 listening for tasks.
I0419 18:50:45.842633 1 tasks_processing.go:71] worker 49 working on certificate_signing_requests task.
I0419 18:50:45.842640 1 tasks_processing.go:71] worker 52 working on networks task.
I0419 18:50:45.842651 1 tasks_processing.go:71] worker 24 working on nodes task.
I0419 18:50:45.842659 1 tasks_processing.go:71] worker 50 working on support_secret task.
I0419 18:50:45.842671 1 tasks_processing.go:71] worker 56 working on machine_autoscalers task.
I0419 18:50:45.842691 1 tasks_processing.go:71] worker 40 working on openstack_controlplanes task.
I0419 18:50:45.841603 1 tasks_processing.go:69] worker 26 listening for tasks.
I0419 18:50:45.841601 1 tasks_processing.go:71] worker 6 working on qemu_kubevirt_launcher_logs task.
I0419 18:50:45.842895 1 tasks_processing.go:71] worker 22 working on number_of_pods_and_netnamespaces_with_sdn_annotations task.
I0419 18:50:45.842952 1 tasks_processing.go:71] worker 32 working on silenced_alerts task.
W0419 18:50:45.842979 1 gather_silenced_alerts.go:38] Unable to load alerts client, no alerts will be collected: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory
I0419 18:50:45.842994 1 gather.go:177] gatherer "clusterconfig" function "silenced_alerts" took 26.881µs to process 0 records
I0419 18:50:45.843000 1 tasks_processing.go:71] worker 23 working on mutating_webhook_configurations task.
I0419 18:50:45.843146 1 tasks_processing.go:71] worker 32 working on machine_sets task.
I0419 18:50:45.841602 1 tasks_processing.go:69] worker 46 listening for tasks.
I0419 18:50:45.843172 1 tasks_processing.go:71] worker 46 working on machine_config_pools task.
I0419 18:50:45.841436 1 tasks_processing.go:69] worker 16 listening for tasks.
I0419 18:50:45.841684 1 tasks_processing.go:69] worker 62 listening for tasks.
I0419 18:50:45.841733 1 tasks_processing.go:71] worker 29 working on machine_configs task.
I0419 18:50:45.843227 1 tasks_processing.go:71] worker 26 working on authentication task.
I0419 18:50:45.841737 1 tasks_processing.go:71] worker 43 working on cluster_apiserver task.
I0419 18:50:45.843770 1 tasks_processing.go:71] worker 16 working on version task.
I0419 18:50:45.843947 1 tasks_processing.go:74] worker 62 stopped.
I0419 18:50:45.841742 1 tasks_processing.go:71] worker 58 working on operators_pods_and_events task.
I0419 18:50:45.841747 1 tasks_processing.go:71] worker 54 working on container_images task.
I0419 18:50:45.841878 1 tasks_processing.go:71] worker 55 working on crds task.
I0419 18:50:45.841883 1 tasks_processing.go:71] worker 45 working on metrics task.
I0419 18:50:45.841905 1 tasks_processing.go:71] worker 63 working on proxies task.
W0419 18:50:45.844431 1 gather_most_recent_metrics.go:64] Unable to load metrics client, no metrics will be collected: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory
I0419 18:50:45.844443 1 tasks_processing.go:74] worker 45 stopped.
I0419 18:50:45.844454 1 gather.go:177] gatherer "clusterconfig" function "metrics" took 104.694µs to process 0 records
I0419 18:50:45.841911 1 tasks_processing.go:71] worker 27 working on config_maps task.
I0419 18:50:45.841916 1 tasks_processing.go:71] worker 60 working on dvo_metrics task.
I0419 18:50:45.841919 1 tasks_processing.go:71] worker 61 working on storage_cluster task.
I0419 18:50:45.841926 1 tasks_processing.go:71] worker 21 working on storage_classes task.
I0419 18:50:45.841932 1 tasks_processing.go:71] worker 44 working on operators task.
I0419 18:50:45.841941 1 tasks_processing.go:71] worker 57 working on nodenetworkconfigurationpolicies task.
I0419 18:50:45.841952 1 tasks_processing.go:71] worker 42 working on install_plans task.
I0419 18:50:45.841955 1 tasks_processing.go:71] worker 18 working on overlapping_namespace_uids task.
I0419 18:50:45.841962 1 tasks_processing.go:71] worker 34 working on sap_config task.
I0419 18:50:45.842627 1 tasks_processing.go:71] worker 31 working on sap_datahubs task.
I0419 18:50:45.841536 1 tasks_processing.go:69] worker 35 listening for tasks.
I0419 18:50:45.845233 1 tasks_processing.go:74] worker 35 stopped.
I0419 18:50:45.846140 1 tasks_processing.go:74] worker 11 stopped.
I0419 18:50:45.846154 1 gather.go:177] gatherer "clusterconfig" function "container_runtime_configs" took 4.62565ms to process 0 records
I0419 18:50:45.846363 1 controller.go:128] Initializing last reported time to 0001-01-01T00:00:00Z
I0419 18:50:45.846386 1 controller.go:212] Source periodic-clusterconfig *controllerstatus.Simple is not ready
I0419 18:50:45.846395 1 controller.go:212] Source periodic-conditional *controllerstatus.Simple is not ready
I0419 18:50:45.846400 1 controller.go:212] Source periodic-workloads *controllerstatus.Simple is not ready
I0419 18:50:45.846419 1 controller.go:489] The operator is still being initialized
I0419 18:50:45.846428 1 controller.go:512] The operator is healthy
I0419 18:50:45.847222 1 tasks_processing.go:74] worker 5 stopped.
I0419 18:50:45.847450 1 recorder.go:75] Recording config/oauth with fingerprint=3670aadea5823e61abbd2a9083be3abbdeb4f5b982a2b09b11de77cc6591f509
I0419 18:50:45.847463 1 gather.go:177] gatherer "clusterconfig" function "oauths" took 5.707906ms to process 1 records
I0419 18:50:45.854456 1 tasks_processing.go:74] worker 15 stopped.
I0419 18:50:45.854466 1 gather.go:177] gatherer "clusterconfig" function "openstack_dataplanedeployments" took 12.302575ms to process 0 records
I0419 18:50:45.854481 1 tasks_processing.go:74] worker 14 stopped.
I0419 18:50:45.854493 1 gather.go:177] gatherer "clusterconfig" function "sap_pods" took 12.938804ms to process 0 records
I0419 18:50:45.854499 1 gather.go:177] gatherer "clusterconfig" function "nodenetworkstates" took 12.733993ms to process 0 records
I0419 18:50:45.854505 1 gather.go:177] gatherer "clusterconfig" function "lokistack" took 11.890112ms to process 0 records
I0419 18:50:45.854507 1 tasks_processing.go:74] worker 51 stopped.
I0419 18:50:45.854510 1 gather.go:177] gatherer "clusterconfig" function "cost_management_metrics_configs" took 12.976713ms to process 0 records
I0419 18:50:45.854513 1 tasks_processing.go:74] worker 10 stopped.
I0419 18:50:45.854518 1 tasks_processing.go:74] worker 41 stopped.
I0419 18:50:45.854523 1 tasks_processing.go:74] worker 30 stopped.
E0419 18:50:45.854536 1 gather.go:140] gatherer "clusterconfig" function "pod_network_connectivity_checks" failed with the error: the server could not find the requested resource (get podnetworkconnectivitychecks.controlplane.operator.openshift.io)
I0419 18:50:45.854550 1 gather.go:177] gatherer "clusterconfig" function "pod_network_connectivity_checks" took 12.546663ms to process 0 records
I0419 18:50:45.854590 1 tasks_processing.go:74] worker 25 stopped.
E0419 18:50:45.854598 1 gather.go:140] gatherer "clusterconfig" function "machine_healthchecks" failed with the error: machinehealthchecks.machine.openshift.io is forbidden: User "system:serviceaccount:openshift-insights:gather" cannot list resource "machinehealthchecks" in API group "machine.openshift.io" at the cluster scope
I0419 18:50:45.854605 1 gather.go:177] gatherer "clusterconfig" function "machine_healthchecks" took 12.385436ms to process 0 records
E0419 18:50:45.854611 1 gather.go:140] gatherer "clusterconfig" function "machines" failed with the error: machines.machine.openshift.io is forbidden: User "system:serviceaccount:openshift-insights:gather" cannot list resource "machines" in API group "machine.openshift.io" at the cluster scope
I0419 18:50:45.854616 1 gather.go:177] gatherer "clusterconfig" function "machines" took 12.966274ms to process 0 records
I0419 18:50:45.854621 1 tasks_processing.go:74] worker 9 stopped.
I0419 18:50:45.860229 1 tasks_processing.go:74] worker 32 stopped.
I0419 18:50:45.860240 1 gather.go:177] gatherer "clusterconfig" function "machine_sets" took 17.063633ms to process 0 records
I0419 18:50:45.860247 1 gather.go:177] gatherer "clusterconfig" function "ceph_cluster" took 17.739454ms to process 0 records
I0419 18:50:45.860253 1 gather.go:177] gatherer "clusterconfig" function "openstack_controlplanes" took 17.532573ms to process 0 records
I0419 18:50:45.860258 1 tasks_processing.go:74] worker 4 stopped.
I0419 18:50:45.860260 1 gather.go:177] gatherer "clusterconfig" function "jaegers" took 17.846492ms to process 0 records
I0419 18:50:45.860267 1 tasks_processing.go:74] worker 40 stopped.
I0419 18:50:45.860264 1 tasks_processing.go:74] worker 53 stopped.
I0419 18:50:45.860343 1 tasks_processing.go:74] worker 19 stopped.
I0419 18:50:45.860352 1 gather.go:177] gatherer "clusterconfig" function "openstack_dataplanenodesets" took 17.897455ms to process 0 records
I0419 18:50:45.860380 1 gather_logs.go:145] no pods in namespace were found
I0419 18:50:45.860391 1 tasks_processing.go:74] worker 6 stopped.
I0419 18:50:45.860395 1 gather.go:177] gatherer "clusterconfig" function "qemu_kubevirt_launcher_logs" took 17.597249ms to process 0 records
I0419 18:50:45.860471 1 tasks_processing.go:74] worker 1 stopped.
I0419 18:50:45.860670 1 recorder.go:75] Recording config/clusteroperator/imageregistry.operator.openshift.io/imagepruner/cluster with fingerprint=8cdfa7611d7e5d689981343b4884a6487f9e999105ecdefef7128b2ce5ea01ee
I0419 18:50:45.860681 1 gather.go:177] gatherer "clusterconfig" function "image_pruners" took 19.020149ms to process 1 records
I0419 18:50:45.862543 1 tasks_processing.go:74] worker 17 stopped.
I0419 18:50:45.862555 1 gather.go:177] gatherer "clusterconfig" function "openstack_version" took 20.265946ms to process 0 records
I0419 18:50:45.862571 1 tasks_processing.go:74] worker 47 stopped.
I0419 18:50:45.862575 1 gather.go:177] gatherer "clusterconfig" function "openshift_logging" took 20.112093ms to process 0 records
I0419 18:50:45.862589 1 tasks_processing.go:74] worker 56 stopped.
I0419 18:50:45.862595 1 gather.go:177] gatherer "clusterconfig" function "machine_autoscalers" took 19.910245ms to process 0 records
I0419 18:50:45.870051 1 tasks_processing.go:74] worker 3 stopped.
I0419 18:50:45.870348 1 recorder.go:75] Recording config/clusteroperator/imageregistry.operator.openshift.io/config/cluster with fingerprint=83919dfdb3b3b41d52ab9c3e59060da1cd428a91c6d208b03e8443d80445ab4f
I0419 18:50:45.870361 1 gather.go:177] gatherer "clusterconfig" function "image_registries" took 27.86092ms to process 1 records
I0419 18:50:45.877547 1 tasks_processing.go:74] worker 34 stopped.
I0419 18:50:45.877559 1 gather.go:177] gatherer "clusterconfig" function "sap_config" took 32.4083ms to process 0 records
I0419 18:50:45.877574 1 tasks_processing.go:74] worker 0 stopped.
I0419 18:50:45.877663 1 recorder.go:75] Recording config/image with fingerprint=b14dc6e2b2ac153c985cd472e418e0cbfc1f4e0e4af7a4036158c430bbe109e9
I0419 18:50:45.877676 1 gather.go:177] gatherer "clusterconfig" function "image" took 35.953851ms to process 1 records
I0419 18:50:45.877685 1 gather.go:177] gatherer "clusterconfig" function "nodenetworkconfigurationpolicies" took 32.507876ms to process 0 records
I0419 18:50:45.877694 1 gather.go:177] gatherer "clusterconfig" function "monitoring_persistent_volumes" took 35.564573ms to process 0 records
I0419 18:50:45.877700 1 tasks_processing.go:74] worker 12 stopped.
I0419 18:50:45.877705 1 tasks_processing.go:74] worker 57 stopped.
I0419 18:50:45.877726 1 tasks_processing.go:74] worker 31 stopped.
I0419 18:50:45.877739 1 gather.go:177] gatherer "clusterconfig" function "sap_datahubs" took 32.583984ms to process 0 records
I0419 18:50:45.877873 1 tasks_processing.go:74] worker 59 stopped.
I0419 18:50:45.878017 1 recorder.go:75] Recording config/ingress with fingerprint=4ad3fef1448abe2724888aecab1ee549a074187dd9306feeb8063d1d33ceb099
I0419 18:50:45.878030 1 gather.go:177] gatherer "clusterconfig" function "ingress" took 35.41544ms to process 1 records
I0419 18:50:45.878122 1 tasks_processing.go:74] worker 26 stopped.
I0419 18:50:45.878186 1 recorder.go:75] Recording config/authentication with fingerprint=79ca2d017df150739a7cf3372043bfd3eca3fa86aae27b27498806029487a3d7
I0419 18:50:45.878195 1 gather.go:177] gatherer "clusterconfig" function "authentication" took 34.700055ms to process 1 records
I0419 18:50:45.878279 1 tasks_processing.go:74] worker 24 stopped.
I0419 18:50:45.878437 1 recorder.go:75] Recording config/node/ip-10-0-0-19.ec2.internal with fingerprint=8344fa75a2db1243e624ceccd9d71579559a64d9da60697f59770a47cca378a3
I0419 18:50:45.878493 1 recorder.go:75] Recording config/node/ip-10-0-1-14.ec2.internal with fingerprint=a61f128d67473093674f8da36f5718a9b62fe91fb654252da3b002fe1973a051
I0419 18:50:45.878543 1 recorder.go:75] Recording config/node/ip-10-0-2-40.ec2.internal with fingerprint=3bb3b3cfeae9328e19d3dfdbd0fee5aa951c379d0db404b60a5adbfd6435cee8
I0419 18:50:45.878554 1 gather.go:177] gatherer "clusterconfig" function "nodes" took 35.459311ms to process 3 records
I0419 18:50:45.880488 1 tasks_processing.go:74] worker 61 stopped.
I0419 18:50:45.880502 1 gather.go:177] gatherer "clusterconfig" function "storage_cluster" took 35.71352ms to process 0 records
I0419 18:50:45.880672 1 tasks_processing.go:74] worker 48 stopped.
I0419 18:50:45.880769 1 recorder.go:75] Recording config/pdbs/openshift-image-registry/image-registry with fingerprint=8d98aa50effc3c26102b7a6ce2e93e62934a8c8f0a97849d911612728a4c0022
I0419 18:50:45.880789 1 recorder.go:75] Recording config/pdbs/openshift-ingress/router-default with fingerprint=0ab7eb8ce7ca0f48b86b442393e01572b50ba7ea0c7b7c96fd34415ee586f264
I0419 18:50:45.880804 1 recorder.go:75] Recording config/pdbs/openshift-operator-lifecycle-manager/packageserver-pdb with fingerprint=3948ef045b0534b4bf8d3f75e01e4be046079149d8a25b00594163644c9e6dca
I0419 18:50:45.880810 1 gather.go:177] gatherer "clusterconfig" function "pdbs" took 38.884662ms to process 3 records
I0419 18:50:45.884156 1 tasks_processing.go:74] worker 2 stopped.
I0419 18:50:45.884179 1 gather.go:177] gatherer "clusterconfig" function "openshift_machine_api_events" took 42.536281ms to process 0 records
E0419 18:50:45.884191 1 gather.go:140] gatherer "clusterconfig" function "support_secret" failed with the error: secrets "support" not found
I0419 18:50:45.884208 1 gather.go:177] gatherer "clusterconfig" function "support_secret" took 41.48431ms to process 0 records
I0419 18:50:45.884217 1 tasks_processing.go:74] worker 50 stopped.
I0419 18:50:45.884370 1 tasks_processing.go:74] worker 36 stopped.
I0419 18:50:45.884626 1 recorder.go:75] Recording config/featuregate with fingerprint=cdb019baa62b33eff0206687d98ccab9f4ba6dd8d170f685860292c7257705df
I0419 18:50:45.884643 1 gather.go:177] gatherer "clusterconfig" function "feature_gates" took 41.940217ms to process 1 records
I0419 18:50:45.884778 1 tasks_processing.go:74] worker 28 stopped.
I0419 18:50:45.887024 1 recorder.go:75] Recording config/infrastructure with fingerprint=48524ce0ddd3ad553d45f385c68c21eda7d7c05c0a7a7736076cef3e157b13e1
I0419 18:50:45.887045 1 gather.go:177] gatherer "clusterconfig" function "infrastructures" took 42.007873ms to process 1 records
I0419 18:50:45.887143 1 tasks_processing.go:74] worker 52 stopped.
I0419 18:50:45.887363 1 recorder.go:75] Recording config/network with fingerprint=d8b81c0300fc4ae58d951e07668eec4af07c3223b639d2abc1025496026d9d2a
I0419 18:50:45.887381 1 gather.go:177] gatherer "clusterconfig" function "networks" took 41.994341ms to process 1 records
I0419 18:50:45.887509 1 tasks_processing.go:74] worker 39 stopped.
I0419 18:50:45.887514 1 recorder.go:75] Recording config/schedulers/cluster with fingerprint=1907995b4d2d04e76ae49fe507e209bcacc000950716fe4c74f1471bd96e52cf
I0419 18:50:45.887539 1 gather.go:177] gatherer "clusterconfig" function "schedulers" took 44.338151ms to process 1 records
I0419 18:50:45.887548 1 gather.go:177] gatherer "clusterconfig" function "aggregated_monitoring_cr_names" took 44.67918ms to process 0 records
I0419 18:50:45.887590 1 tasks_processing.go:74] worker 38 stopped.
I0419 18:50:45.887608 1 tasks_processing.go:74] worker 7 stopped.
I0419 18:50:45.888236 1 recorder.go:75] Recording config/validatingwebhookconfigurations/multus.openshift.io with fingerprint=9593ce1cd55522470ee9c932e3113e3e5b79c268745e473bc1fce38b1278abef
I0419 18:50:45.888337 1 recorder.go:75] Recording config/validatingwebhookconfigurations/network-node-identity.openshift.io with fingerprint=1cf9b47e3cf0dd059be53660767763987030db30d9ea05678b68525d86d0564d
I0419 18:50:45.888372 1 recorder.go:75] Recording config/validatingwebhookconfigurations/performance-addon-operator with fingerprint=de75329ccb8bc2ce91aede26f72c44d95613ccbe13f09308d3e4addee7614429
I0419 18:50:45.888440 1 recorder.go:75] Recording config/validatingwebhookconfigurations/sre-clusterrolebindings-validation with fingerprint=fc5e9a0ebc0bbc3633fa1714d1236f68b5197bfb0559f3a4b0642d11165c56a1
I0419 18:50:45.888489 1 recorder.go:75] Recording config/validatingwebhookconfigurations/sre-clusterroles-validation with fingerprint=9dbe94af555e2e7ac23dcd321432a388f2c2c681b40f82b5ca3e0309edc19742
I0419 18:50:45.888521 1 recorder.go:75] Recording config/validatingwebhookconfigurations/sre-ingress-config-validation with fingerprint=ed694cec5d4c330450a71ab9d9d2df0b11fd1bab92a17a6c64339dbc3d1a3674
I0419 18:50:45.888557 1 recorder.go:75] Recording config/validatingwebhookconfigurations/sre-network-operator-validation with fingerprint=f1716c3b80eeebfbb05c3cc623d87905c5ccc0aed31ae27afd089944316426ab
I0419 18:50:45.888591 1 recorder.go:75] Recording config/validatingwebhookconfigurations/sre-regular-user-validation with fingerprint=f383a91ebebcab5219206ec0fbe165ae7644668df4f8cc6d51c4043ae472dfb3
I0419 18:50:45.888613 1 recorder.go:75] Recording config/validatingwebhookconfigurations/sre-scc-validation with fingerprint=3dbf834eb0614802500e10ac22733d3213e2ec2f2a4d7dc650748fbc15210fec
I0419 18:50:45.888636 1 recorder.go:75] Recording config/validatingwebhookconfigurations/sre-serviceaccount-validation with fingerprint=3e49916fc2358e36d334105e8c36ff4fac47121ebca9d9d84b3d9e26fdcc9967
I0419 18:50:45.888675 1 recorder.go:75] Recording config/validatingwebhookconfigurations/sre-techpreviewnoupgrade-validation with fingerprint=41386624adf1e17b846cbb185ea029bb737f0105a70389ee67e3b6a2cb8d36e8
I0419 18:50:45.888686 1 gather.go:177] gatherer "clusterconfig" function "validating_webhook_configurations" took 44.937844ms to process 11 records
I0419 18:50:45.888773 1 recorder.go:75] Recording config/olm_operators with fingerprint=30a3292ba291ed07c468489c8274693b73f9e4bb4b85f27ec6c0fbfcf378ae2d
I0419 18:50:45.888783 1 gather.go:177] gatherer "clusterconfig" function "olm_operators" took 45.077621ms to process 1 records
I0419 18:50:45.888773 1 tasks_processing.go:74] worker 20 stopped.
I0419 18:50:45.889312 1 tasks_processing.go:74] worker 49 stopped.
I0419 18:50:45.889326 1 gather.go:177] gatherer "clusterconfig" function "certificate_signing_requests" took 46.662551ms to process 0 records
W0419 18:50:45.890708 1 gather_dvo_metrics.go:210] Failed to read the DVO metrics. Trying again.
I0419 18:50:45.892290 1 tasks_processing.go:74] worker 54 stopped.
I0419 18:50:45.893496 1 recorder.go:75] Recording config/pod/openshift-console-operator/console-operator-575cd97545-fjfzj with fingerprint=5cde18d5527950578471a8e5e2ff518b0dcbfa9cb9d7d507f83816727bd1761b
I0419 18:50:45.893555 1 recorder.go:75] Recording config/running_containers with fingerprint=b366a27d340c8058534da043881e33d940c090d954c588e7beb3de09a2c15527
I0419 18:50:45.893565 1 gather.go:177] gatherer "clusterconfig" function "container_images" took 48.153527ms to process 2 records
I0419 18:50:45.893680 1 tasks_processing.go:74] worker 63 stopped.
I0419 18:50:45.893731 1 recorder.go:75] Recording config/proxy with fingerprint=0ff79c9ed77e3fdf9095dbfecc4f668e04262779eec61f1e287bd861b13e4a58
I0419 18:50:45.893742 1 gather.go:177] gatherer "clusterconfig" function "proxies" took 49.256377ms to process 1 records
I0419 18:50:45.893776 1 sca.go:136] Pulling SCA certificates from https://api.openshift.com/api/accounts_mgmt/v1/entitlement_certificates. Next check is in 8h0m0s
I0419 18:50:45.893830 1 tasks_processing.go:74] worker 8 stopped.
I0419 18:50:45.893865 1 cluster_transfer.go:83] checking the availability of cluster transfer. Next check is in 12h0m0s
I0419 18:50:45.893899 1 recorder.go:75] Recording cluster-scoped-resources/rbac.authorization.k8s.io/clusterroles/admin with fingerprint=8464911ba5b4dcf477c8eee79fe907ca5c5f87f12fbdd372a90d2c9afe605ae7
W0419 18:50:45.893940 1 operator.go:288] started
I0419 18:50:45.893960 1 base_controller.go:76] Waiting for caches to sync for LoggingSyncer
I0419 18:50:45.893980 1 recorder.go:75] Recording cluster-scoped-resources/rbac.authorization.k8s.io/clusterroles/edit with fingerprint=b2b53502f0b8599bf81c5c7fb0ecd680d2e9484bc029f03bf61e9a538b733405
I0419 18:50:45.893985 1 gather.go:177] gatherer "clusterconfig" function "clusterroles" took 52.10128ms to process 2 records
I0419 18:50:45.894076 1 tasks_processing.go:74] worker 43 stopped.
I0419 18:50:45.894085 1 recorder.go:75] Recording config/apiserver with fingerprint=f99366f1bda14911ff0b0fb15279d83e1ed7a5c8112e86189604224a2c4ddce8
I0419 18:50:45.894091 1 gather.go:177] gatherer "clusterconfig" function "cluster_apiserver" took 50.131732ms to process 1 records
I0419 18:50:45.896808 1 tasks_processing.go:74] worker 21 stopped.
I0419 18:50:45.897035 1 recorder.go:75] Recording config/storage/storageclasses/gp2-csi with fingerprint=909b8c74b2c801d8cf68128dcf45b315e65722581448a8ddcfbe74ebdd57ccba
I0419 18:50:45.897116 1 recorder.go:75] Recording config/storage/storageclasses/gp3-csi with fingerprint=90ffb5ed68ece5a0a912d010d4b5b5c0264bab2f992432663aad01103d8d8ada
I0419 18:50:45.897158 1 gather.go:177] gatherer "clusterconfig" function "storage_classes" took 51.882402ms to process 2 records
I0419 18:50:45.904050 1 tasks_processing.go:74] worker 23 stopped.
I0419 18:50:45.904169 1 recorder.go:75] Recording config/mutatingwebhookconfigurations/aws-pod-identity with fingerprint=584bbb16e62331bbcd2d1911e72f43f8533b3dd0e7682831cca67c6c00c2c6df
I0419 18:50:45.904227 1 recorder.go:75] Recording config/mutatingwebhookconfigurations/sre-podimagespec-mutation with fingerprint=07a7b50da78adb28e80a4848ab55b90f1e0ddd994654151b6b6ce599e2a9e1fa
I0419 18:50:45.904259 1 recorder.go:75] Recording config/mutatingwebhookconfigurations/sre-service-mutation with fingerprint=abf67b97afe3bbae2a55855b3290cd606a54738ab10d26a0935d294cd33b9b76
I0419 18:50:45.904268 1 gather.go:177] gatherer "clusterconfig" function "mutating_webhook_configurations" took 61.038943ms to process 3 records
I0419 18:50:45.904285 1 shared_informer.go:320] Caches are synced for RequestHeaderAuthRequestController
I0419 18:50:45.904333 1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I0419 18:50:45.904338 1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0419 18:50:45.904644 1 tasks_processing.go:74] worker 55 stopped.
I0419 18:50:45.905288 1 controller.go:212] Source scaController *sca.Controller is not ready
I0419 18:50:45.905299 1 controller.go:212] Source clusterTransferController *clustertransfer.Controller is not ready
I0419 18:50:45.905302 1 controller.go:212] Source periodic-clusterconfig *controllerstatus.Simple is not ready
I0419 18:50:45.905305 1 controller.go:212] Source periodic-conditional *controllerstatus.Simple is not ready
I0419 18:50:45.905307 1 controller.go:212] Source periodic-workloads *controllerstatus.Simple is not ready
I0419 18:50:45.905320 1 controller.go:489] The operator is still being initialized
I0419 18:50:45.905328 1 controller.go:512] The operator is healthy
I0419 18:50:45.905355 1 recorder.go:75] Recording config/crd/volumesnapshots.snapshot.storage.k8s.io with fingerprint=7bceab3ffdec895cf5c5618d2ad07e6d113cac1be176d2d507619e8b462bb753
I0419 18:50:45.905703 1 recorder.go:75] Recording config/crd/volumesnapshotcontents.snapshot.storage.k8s.io with fingerprint=a8c033679ff44980827e2f8b5e899f003b918219f4e1244f0de965c9397b4721
I0419 18:50:45.905718 1 gather.go:177] gatherer "clusterconfig" function "crds" took 60.402222ms to process 2 records
I0419 18:50:45.905733 1 gather.go:177] gatherer "clusterconfig" function "number_of_pods_and_netnamespaces_with_sdn_annotations" took 62.356238ms to process 0 records
I0419 18:50:45.905747 1 tasks_processing.go:74] worker 22 stopped.
I0419 18:50:45.907769 1 base_controller.go:82] Caches are synced for ConfigController
I0419 18:50:45.907777 1 base_controller.go:119] Starting #1 worker of ConfigController controller ...
I0419 18:50:45.908199 1 prometheus_rules.go:88] Prometheus rules successfully created
E0419 18:50:45.911289 1 cluster_transfer.go:95] failed to pull cluster transfer: unable to retrieve cluster transfer data from https://api.openshift.com/api/accounts_mgmt/v1/cluster_transfers/: Get "https://api.openshift.com/api/accounts_mgmt/v1/cluster_transfers/?search=cluster_uuid+is+%277e3e2f7d-7dfb-4d63-9470-79c542d790f6%27+and+status+is+%27accepted%27": dial tcp: lookup api.openshift.com on 172.30.0.10:53: read udp 10.129.0.15:60023->172.30.0.10:53: read: connection refused
I0419 18:50:45.911302 1 controllerstatus.go:80] name=clusterTransferController healthy=true reason=Disconnected message=failed to pull cluster transfer: unable to retrieve cluster transfer data from https://api.openshift.com/api/accounts_mgmt/v1/cluster_transfers/: Get "https://api.openshift.com/api/accounts_mgmt/v1/cluster_transfers/?search=cluster_uuid+is+%277e3e2f7d-7dfb-4d63-9470-79c542d790f6%27+and+status+is+%27accepted%27": dial tcp: lookup api.openshift.com on 172.30.0.10:53: read udp 10.129.0.15:60023->172.30.0.10:53: read: connection refused
I0419 18:50:45.931378 1 tasks_processing.go:74] worker 18 stopped.
E0419 18:50:45.931396 1 gather.go:140] gatherer "clusterconfig" function "overlapping_namespace_uids" failed with the error: can't read uid range of the openshift-service-ca namespace
I0419 18:50:45.931420 1 recorder.go:75] Recording config/namespaces_with_overlapping_uids with fingerprint=4f53cda18c2baa0c0354bb5f9a3ecbe5ed12ab4d8e11ba873c2f11161202b945
I0419 18:50:45.931428 1 gather.go:177] gatherer "clusterconfig" function "overlapping_namespace_uids" took 86.240847ms to process 1 records
I0419 18:50:45.948355 1 configmapobserver.go:84] configmaps "insights-config" not found
I0419 18:50:45.961132 1 tasks_processing.go:74] worker 16 stopped.
I0419 18:50:45.961377 1 recorder.go:75] Recording config/version with fingerprint=8b38b627998710ad9f3b03569e3c73d4cc5714e208a568cd2e9bb8a61552755b
I0419 18:50:45.961388 1 recorder.go:75] Recording config/id with fingerprint=91571fe73ef47b5ddd1326412bfab43e63104eea2fd884e76d086dd97eeaf9f2
I0419 18:50:45.961394 1 gather.go:177] gatherer "clusterconfig" function "version" took 117.306916ms to process 2 records
I0419 18:50:45.964915 1 tasks_processing.go:74] worker 37 stopped.
E0419 18:50:45.964928 1 gather.go:140] gatherer "clusterconfig" function "ingress_certificates" failed with the error: failed to get secret 'router-certs-default' in namespace 'openshift-ingress': secrets "router-certs-default" not found
E0419 18:50:45.964934 1 gather.go:140] gatherer "clusterconfig" function "ingress_certificates" failed with the error: failed to get secret '2ppd202t29rie73gmjlbphesfe3hmvfj-primary-cert-bundle-secret' in namespace 'openshift-ingress-operator': secrets "2ppd202t29rie73gmjlbphesfe3hmvfj-primary-cert-bundle-secret" not found
I0419 18:50:45.964981 1 recorder.go:75] Recording aggregated/ingress_controllers_certs with fingerprint=48de2ac9d7ae71b4306641e2bf5b0f69fed31b59a82f28c649ab91ae0e271f21
I0419 18:50:45.964991 1 gather.go:177] gatherer "clusterconfig" function "ingress_certificates" took 122.620187ms to process 1 records
I0419 18:50:45.965409 1 tasks_processing.go:74] worker 27 stopped.
E0419 18:50:45.965422 1 gather.go:140] gatherer "clusterconfig" function "config_maps" failed with the error: configmaps "cluster-monitoring-config" not found
E0419 18:50:45.965427 1 gather.go:140] gatherer "clusterconfig" function "config_maps" failed with the error: configmaps "gateway-mode-config" not found
E0419 18:50:45.965431 1 gather.go:140] gatherer "clusterconfig" function "config_maps" failed with the error: configmaps "insights-config" not found
I0419 18:50:45.965439 1 recorder.go:75] Recording config/configmaps/openshift-config/installer-images/images.json with fingerprint=26b6661162b099a0f5a279859b4f46c867929a79d9a4a41fde4be4e6fe138018
I0419 18:50:45.965463 1 recorder.go:75] Recording config/configmaps/openshift-config/kube-root-ca.crt/ca.crt with fingerprint=d476c7d3f5b104863f08f481b1264dcc68cc272ecefb0ecb709b18a6afab034d
I0419 18:50:45.965470 1 recorder.go:75] Recording config/configmaps/openshift-config/openshift-install/invoker with fingerprint=76b482f683cd3ef9da02debac5b26080a5aeb06ff768ee5c21117514dff29d8a
I0419 18:50:45.965475 1 recorder.go:75] Recording config/configmaps/openshift-config/openshift-install/version with fingerprint=c93090eb0d2a4736885abeb79c91680cfd01fda46464f83456b085d4dc8239f0
I0419 18:50:45.965478 1 recorder.go:75] Recording config/configmaps/openshift-config/openshift-service-ca.crt/service-ca.crt with fingerprint=e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
I0419 18:50:45.965516 1 recorder.go:75] Recording config/configmaps/openshift-config/rosa-brand-logo/rosa-brand-logo.svg with fingerprint=6ed8ca4dd7a8eee7249182bc006e9649ce84d76c551ddfaaa33e55d8c4cc1ed0
I0419 18:50:45.965523 1 recorder.go:75] Recording config/configmaps/kube-system/cluster-config-v1/install-config with fingerprint=ab3811c6b83fd7b8e920094cfa3080d1b4ee3c35ec4c8379437b21d27bd6608d
I0419 18:50:45.965529 1 gather.go:177] gatherer "clusterconfig" function "config_maps" took 120.830081ms to process 7 records
I0419 18:50:45.986120 1 tasks_processing.go:74] worker 13 stopped.
I0419 18:50:45.986132 1 gather.go:177] gatherer "clusterconfig" function "node_logs" took 143.889568ms to process 0 records
I0419 18:50:45.994783 1 base_controller.go:82] Caches are synced for LoggingSyncer
I0419 18:50:45.994793 1 base_controller.go:119] Starting #1 worker of LoggingSyncer controller ...
I0419 18:50:45.999304 1 requests.go:205] Asking for SCA certificate with "{"arch": ["x86_64"]}" payload
W0419 18:50:46.002898 1 sca.go:161] Failed to pull SCA certs from https://api.openshift.com/api/accounts_mgmt/v1/entitlement_certificates: unable to retrieve SCA certs data from https://api.openshift.com/api/accounts_mgmt/v1/entitlement_certificates: Post "https://api.openshift.com/api/accounts_mgmt/v1/entitlement_certificates": dial tcp: lookup api.openshift.com on 172.30.0.10:53: read udp 10.129.0.15:59341->172.30.0.10:53: read: connection refused
I0419 18:50:46.002912 1 controllerstatus.go:80] name=scaController healthy=true reason=NonHTTPError message=Failed to pull SCA certs from https://api.openshift.com/api/accounts_mgmt/v1/entitlement_certificates: unable to retrieve SCA certs data from https://api.openshift.com/api/accounts_mgmt/v1/entitlement_certificates: Post "https://api.openshift.com/api/accounts_mgmt/v1/entitlement_certificates": dial tcp: lookup api.openshift.com on 172.30.0.10:53: read udp 10.129.0.15:59341->172.30.0.10:53: read: connection refused
I0419 18:50:46.009215 1 gather_cluster_operators.go:184] Unable to get dnsrecords.ingress.operator.openshift.io resource due to: dnsrecords.ingress.operator.openshift.io "default" not found
I0419 18:50:46.014094 1 gather_cluster_operators.go:184] Unable to get dnsrecords.ingress.operator.openshift.io resource due to: dnsrecords.ingress.operator.openshift.io "default" not found
W0419 18:50:46.890117 1 gather_dvo_metrics.go:210] Failed to read the DVO metrics. Trying again.
I0419 18:50:46.903875 1 tasks_processing.go:74] worker 46 stopped.
I0419 18:50:46.903894 1 gather.go:177] gatherer "clusterconfig" function "machine_config_pools" took 1.060688276s to process 0 records
I0419 18:50:46.913680 1 tasks_processing.go:74] worker 29 stopped.
I0419 18:50:46.913721 1 recorder.go:75] Recording aggregated/unused_machine_configs_count with fingerprint=4bfc9fa984e5dfcd45848faaf05269de7619bf42edf9f781751af5ee05c1a499
I0419 18:50:46.913739 1 gather.go:177] gatherer "clusterconfig" function "machine_configs" took 1.070444709s to process 1 records
I0419 18:50:47.106143 1 gather_cluster_operator_pods_and_events.go:121] Found 20 pods with 24 containers
I0419 18:50:47.106158 1 gather_cluster_operator_pods_and_events.go:235] Maximum buffer size: 1048576 bytes
I0419 18:50:47.106891 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for dns container dns-default-5stlr pod in namespace openshift-dns (previous: false).
I0419 18:50:47.320782 1 gather_cluster_operators.go:184] Unable to get configs.samples.operator.openshift.io resource due to: configs.samples.operator.openshift.io "cluster" not found
I0419 18:50:47.332793 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for dns-default-5stlr pod in namespace openshift-dns for failing operator dns (previous: false): "container \"dns\" in pod \"dns-default-5stlr\" is waiting to start: ContainerCreating"
I0419 18:50:47.332807 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"dns\" in pod \"dns-default-5stlr\" is waiting to start: ContainerCreating"
I0419 18:50:47.332815 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for kube-rbac-proxy container dns-default-5stlr pod in namespace openshift-dns (previous: false).
I0419 18:50:47.511409 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for dns-default-5stlr pod in namespace openshift-dns for failing operator kube-rbac-proxy (previous: false): "container \"kube-rbac-proxy\" in pod \"dns-default-5stlr\" is waiting to start: ContainerCreating"
I0419 18:50:47.511424 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"kube-rbac-proxy\" in pod \"dns-default-5stlr\" is waiting to start: ContainerCreating"
I0419 18:50:47.511454 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for dns container dns-default-8lg8j pod in namespace openshift-dns (previous: false).
I0419 18:50:47.731193 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for dns-default-8lg8j pod in namespace openshift-dns for failing operator dns (previous: false): "container \"dns\" in pod \"dns-default-8lg8j\" is waiting to start: ContainerCreating"
I0419 18:50:47.731208 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"dns\" in pod \"dns-default-8lg8j\" is waiting to start: ContainerCreating"
I0419 18:50:47.731216 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for kube-rbac-proxy container dns-default-8lg8j pod in namespace openshift-dns (previous: false).
W0419 18:50:47.890297 1 gather_dvo_metrics.go:210] Failed to read the DVO metrics. Trying again.
I0419 18:50:47.913277 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for dns-default-8lg8j pod in namespace openshift-dns for failing operator kube-rbac-proxy (previous: false): "container \"kube-rbac-proxy\" in pod \"dns-default-8lg8j\" is waiting to start: ContainerCreating"
I0419 18:50:47.913294 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"kube-rbac-proxy\" in pod \"dns-default-8lg8j\" is waiting to start: ContainerCreating"
I0419 18:50:47.913325 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for dns container dns-default-rfcxb pod in namespace openshift-dns (previous: false).
I0419 18:50:47.921826 1 tasks_processing.go:74] worker 44 stopped.
I0419 18:50:47.921895 1 recorder.go:75] Recording config/clusteroperator/console with fingerprint=4d3f68c7b396d306340a2b41c8179a9e75216b8817cb430433f0d1864cb14177
I0419 18:50:47.921926 1 recorder.go:75] Recording config/clusteroperator/csi-snapshot-controller with fingerprint=3fca161302491c8d279475ca9e3ac1e691b03cd2697a369c7ed692864c647e0a
I0419 18:50:47.921957 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/csisnapshotcontroller/cluster with fingerprint=5adc514f4b63e2f1ecc68bf6f9c0af70c5eea04522a49524e102721b1c41f80e
I0419 18:50:47.921984 1 recorder.go:75] Recording config/clusteroperator/dns with fingerprint=31a83e4e2cd371deaadc68bae26265907160d8cfb9b31e2b2eb56a231e33983b
I0419 18:50:47.922019 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/dns/default with fingerprint=9e7b4ce029030d3d8c3b49af92c556acdcc415000b40d3f969dbdc42c432b47f
I0419 18:50:47.922045 1 recorder.go:75] Recording config/clusteroperator/image-registry with fingerprint=51c0bb1bdcba29d302787f92e9c6246068200086605a08efc24c26bf06328843
I0419 18:50:47.922078 1 recorder.go:75] Recording config/clusteroperator/ingress with fingerprint=d7228bcbd6716ec0f4fa83254576d8a6bcd1651cde705c66e8a594bab1c46c96
I0419 18:50:47.922104 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/ingresscontroller/openshift-ingress-operator/default with fingerprint=c591fa4cff18f9eff099d57920ccf6897f2ab3aed66427e80a5375475faad0e4
I0419 18:50:47.922119 1 recorder.go:75] Recording config/clusteroperator/insights with fingerprint=cf3a5fb069de0fe0179e0ba0b022825c69249a9a887ef7922aa1c44fdc699451
I0419 18:50:47.922139 1 recorder.go:75] Recording config/clusteroperator/kube-apiserver with fingerprint=a2177e22228c1b110537aa45204b556779cf3f26fea5d14b766046d13234c25e
I0419 18:50:47.922149 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/kubeapiserver/cluster with fingerprint=51503bf0b784fcf65ea46bcaf1f72ac1a5c4d5dc211934f18f27871efed05762
I0419 18:50:47.922165 1 recorder.go:75] Recording config/clusteroperator/kube-controller-manager with fingerprint=62714030829d5c0bc09dcfbdd0f3970fd4d3d2bebe666df062ebd9eae12f8b90
I0419 18:50:47.922176 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/kubecontrollermanager/cluster with fingerprint=ce90c0d4f367d7da085074268031798382ae7c54fdcb0a21f15a4818fe308c11
I0419 18:50:47.922192 1 recorder.go:75] Recording config/clusteroperator/kube-scheduler with fingerprint=6fe8840495b31946a6f4a2b26683a671603d38b3a4bf686bec5b4db485d41f16
I0419 18:50:47.922201 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/kubescheduler/cluster with fingerprint=f2940fb9fd20c19951dfc295eb363b7fba0c505f5ae61f01967a063099e6b60a
I0419 18:50:47.922221 1 recorder.go:75] Recording config/clusteroperator/kube-storage-version-migrator with fingerprint=366ec0bf656be847958140709f8eb82228e8394abcd9f14f322768c413fca87e
I0419 18:50:47.922242 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/kubestorageversionmigrator/cluster with fingerprint=9351181aa7e6ada41ef581ab31e13516c6b934cc95710154bafb2eb222cb58db
I0419 18:50:47.922261 1 recorder.go:75] Recording config/clusteroperator/monitoring with fingerprint=edf95f9c2b80b94ece8416caf76d64c2fa88e480de52e1962850b0eecc6cf26a
I0419 18:50:47.922377 1 recorder.go:75] Recording config/clusteroperator/network with fingerprint=1f0a4b3b937ee1df05effaf1bbd3e667516808648916db58ad531da48b13f7c0
I0419 18:50:47.922387 1 recorder.go:75] Recording config/clusteroperator/network.operator.openshift.io/operatorpki/openshift-ovn-kubernetes/ovn with fingerprint=626a89d20e0deaed5b6dfb533acfe65f4bb1618bd200a703b62e60c5d16d94ab
I0419 18:50:47.922394 1 recorder.go:75] Recording config/clusteroperator/network.operator.openshift.io/operatorpki/openshift-ovn-kubernetes/signer with fingerprint=90410b16914712b85b3c4578716ad8c0ae072e688f4cd1e022bf76f20da3506d
I0419 18:50:47.922417 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/network/cluster with fingerprint=a93d15eaecb455a0e40ecb2826eeecc1533899204ddd3c3921d15ab70af7ae75
I0419 18:50:47.922439 1 recorder.go:75] Recording config/clusteroperator/node-tuning with fingerprint=8cefb4f28c4f1b47dde0b2a8f5271830dc79f1775d2d435a0f025aafe01f4458
I0419 18:50:47.922462 1 recorder.go:75] Recording config/clusteroperator/openshift-apiserver with fingerprint=4bae75be976929f0a39d083532df2ffe5f016b68349493147d6b1e4dbecca80b
I0419 18:50:47.922473 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/openshiftapiserver/cluster with fingerprint=e712e6cf27339b441e4ed1f4cde91dbde7e952698ba93407e4457db63a4a4c76
I0419 18:50:47.922496 1 recorder.go:75] Recording config/clusteroperator/openshift-controller-manager with fingerprint=6a613f0319f80410931102216cc91e8b518b730085f860a480a91802c6e75acc
I0419 18:50:47.922505 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/openshiftcontrollermanager/cluster with fingerprint=d71a0f4672f9b45d9fc8293bf1687afc650fd28d32e2e30de27523fe7b4eadf7
I0419 18:50:47.922529 1 recorder.go:75] Recording config/clusteroperator/openshift-samples with fingerprint=bf27adabcc6084cf6ec00c1dc669d3a3e006ba95e8105650bcf8f03d660864f9
I0419 18:50:47.922543 1 recorder.go:75] Recording config/clusteroperator/operator-lifecycle-manager with fingerprint=7f37b10cc8bf791832f945e7f06f87c865379b7a877c4d720c16fd418631e6ef
I0419 18:50:47.922557 1 recorder.go:75] Recording config/clusteroperator/operator-lifecycle-manager-catalog with fingerprint=cff2847080a7f32fcbe064e1e30050696b14b5ab3a2d8d82ba053b4166ba2fdf
I0419 18:50:47.922572 1 recorder.go:75] Recording config/clusteroperator/operator-lifecycle-manager-packageserver with fingerprint=46b0278e62b09604682c6f45e6209727ff61a66a7a52994c1ea8a788aa739e1b
I0419 18:50:47.922595 1 recorder.go:75] Recording config/clusteroperator/service-ca with fingerprint=ddb8f9a09ce94405ac039055514cac2f6d2b813eae78d723f1c18c290f4d01e9
I0419 18:50:47.922604 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/serviceca/cluster with fingerprint=812f7edc2cdb30e61e7f2b29454357a40b1a507a4b0c2b7729193b67f0e3b4aa
I0419 18:50:47.922627 1 recorder.go:75] Recording config/clusteroperator/storage with fingerprint=37a39721205b25acaaa7ce83ab2cbfa4bea8bc206f24e8db0f608eb8663e1b31
I0419 18:50:47.922644 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/clustercsidriver/ebs.csi.aws.com with fingerprint=510064d6f6bcced87ab5bd2ddaff3d0edd7f93f4a4f7af2641f29fc53ffab21e
I0419 18:50:47.922653 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/storage/cluster with fingerprint=8e480f8c1ce1b39baac42d8ec780c57c2592929ae0c801b61ffad49ba13f33ad
I0419 18:50:47.922660 1 gather.go:177] gatherer "clusterconfig" function "operators" took 2.076793373s to process 36 records
I0419 18:50:48.130303 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for dns-default-rfcxb pod in namespace openshift-dns for failing operator dns (previous: false): "container \"dns\" in pod \"dns-default-rfcxb\" is waiting to start: ContainerCreating"
I0419 18:50:48.130319 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"dns\" in pod \"dns-default-rfcxb\" is waiting to start: ContainerCreating"
I0419 18:50:48.130328 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for kube-rbac-proxy container dns-default-rfcxb pod in namespace openshift-dns (previous: false).
I0419 18:50:48.310676 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for dns-default-rfcxb pod in namespace openshift-dns for failing operator kube-rbac-proxy (previous: false): "container \"kube-rbac-proxy\" in pod \"dns-default-rfcxb\" is waiting to start: ContainerCreating"
I0419 18:50:48.310696 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"kube-rbac-proxy\" in pod \"dns-default-rfcxb\" is waiting to start: ContainerCreating"
I0419 18:50:48.310707 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for dns-node-resolver container node-resolver-24vtq pod in namespace openshift-dns (previous: false).
I0419 18:50:48.511291 1 gather_cluster_operator_pods_and_events.go:280] Error: "log buffer is empty"
I0419 18:50:48.511307 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for dns-node-resolver container node-resolver-fvvgw pod in namespace openshift-dns (previous: false).
I0419 18:50:48.710893 1 gather_cluster_operator_pods_and_events.go:280] Error: "log buffer is empty"
I0419 18:50:48.710908 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for dns-node-resolver container node-resolver-xfx4p pod in namespace openshift-dns (previous: false).
W0419 18:50:48.889830 1 gather_dvo_metrics.go:210] Failed to read the DVO metrics. Trying again.
I0419 18:50:48.910876 1 gather_cluster_operator_pods_and_events.go:280] Error: "log buffer is empty"
I0419 18:50:48.910937 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for registry container image-registry-55f9d5c65d-bmpqm pod in namespace openshift-image-registry (previous: false).
I0419 18:50:49.112847 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for image-registry-55f9d5c65d-bmpqm pod in namespace openshift-image-registry for failing operator registry (previous: false): "container \"registry\" in pod \"image-registry-55f9d5c65d-bmpqm\" is waiting to start: ContainerCreating"
I0419 18:50:49.112874 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"registry\" in pod \"image-registry-55f9d5c65d-bmpqm\" is waiting to start: ContainerCreating"
I0419 18:50:49.112919 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for registry container image-registry-55f9d5c65d-nknwk pod in namespace openshift-image-registry (previous: false).
I0419 18:50:49.311802 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for image-registry-55f9d5c65d-nknwk pod in namespace openshift-image-registry for failing operator registry (previous: false): "container \"registry\" in pod \"image-registry-55f9d5c65d-nknwk\" is waiting to start: ContainerCreating"
I0419 18:50:49.311817 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"registry\" in pod \"image-registry-55f9d5c65d-nknwk\" is waiting to start: ContainerCreating"
I0419 18:50:49.311846 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for registry container image-registry-7f485cb77-czkt4 pod in namespace openshift-image-registry (previous: false).
I0419 18:50:49.511593 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for image-registry-7f485cb77-czkt4 pod in namespace openshift-image-registry for failing operator registry (previous: false): "container \"registry\" in pod \"image-registry-7f485cb77-czkt4\" is waiting to start: ContainerCreating"
I0419 18:50:49.511608 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"registry\" in pod \"image-registry-7f485cb77-czkt4\" is waiting to start: ContainerCreating"
I0419 18:50:49.511617 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for node-ca container node-ca-9dt82 pod in namespace openshift-image-registry (previous: false).
I0419 18:50:49.713232 1 gather_cluster_operator_pods_and_events.go:280] Error: "log buffer is empty"
I0419 18:50:49.713247 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for node-ca container node-ca-bp6mc pod in namespace openshift-image-registry (previous: false).
W0419 18:50:49.889914 1 gather_dvo_metrics.go:210] Failed to read the DVO metrics. Trying again.
I0419 18:50:49.910451 1 gather_cluster_operator_pods_and_events.go:280] Error: "log buffer is empty"
I0419 18:50:49.910466 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for node-ca container node-ca-xcd7s pod in namespace openshift-image-registry (previous: false).
I0419 18:50:50.113156 1 gather_cluster_operator_pods_and_events.go:280] Error: "log buffer is empty"
I0419 18:50:50.113172 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for router container router-default-5b96bdc5bc-q9z5s pod in namespace openshift-ingress (previous: false).
I0419 18:50:50.312169 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for router-default-5b96bdc5bc-q9z5s pod in namespace openshift-ingress for failing operator router (previous: false): "container \"router\" in pod \"router-default-5b96bdc5bc-q9z5s\" is waiting to start: ContainerCreating"
I0419 18:50:50.312187 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"router\" in pod \"router-default-5b96bdc5bc-q9z5s\" is waiting to start: ContainerCreating"
I0419 18:50:50.312198 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for router container router-default-6bc5d9d4ff-22bfg pod in namespace openshift-ingress (previous: false).
I0419 18:50:50.534427 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for router-default-6bc5d9d4ff-22bfg pod in namespace openshift-ingress for failing operator router (previous: false): "container \"router\" in pod \"router-default-6bc5d9d4ff-22bfg\" is waiting to start: ContainerCreating"
I0419 18:50:50.534441 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"router\" in pod \"router-default-6bc5d9d4ff-22bfg\" is waiting to start: ContainerCreating"
I0419 18:50:50.534450 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for router container router-default-6bc5d9d4ff-tmlgt pod in namespace openshift-ingress (previous: false).
I0419 18:50:50.726967 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for router-default-6bc5d9d4ff-tmlgt pod in namespace openshift-ingress for failing operator router (previous: false): "container \"router\" in pod \"router-default-6bc5d9d4ff-tmlgt\" is waiting to start: ContainerCreating"
I0419 18:50:50.726980 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"router\" in pod \"router-default-6bc5d9d4ff-tmlgt\" is waiting to start: ContainerCreating"
I0419 18:50:50.727006 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for serve-healthcheck-canary container ingress-canary-5d8tg pod in namespace openshift-ingress-canary (previous: false).
W0419 18:50:50.886034 1 gather_dvo_metrics.go:117] Unable to read metrics from endpoint "http://deployment-validation-operator-metrics.openshift-deployment-validation-operator.svc:8383": DVO metrics service was not available within the 5s timeout: context deadline exceeded
I0419 18:50:50.886055 1 tasks_processing.go:74] worker 60 stopped.
E0419 18:50:50.886074 1 gather.go:140] gatherer "clusterconfig" function "dvo_metrics" failed with the error: DVO metrics service was not available within the 5s timeout: context deadline exceeded
I0419 18:50:50.886092 1 recorder.go:75] Recording config/dvo_metrics with fingerprint=e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
W0419 18:50:50.886108 1 gather.go:155] issue recording gatherer "clusterconfig" function "dvo_metrics" result "config/dvo_metrics" because of the warning: warning: the record with the same fingerprint "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855" was already recorded at path "config/configmaps/openshift-config/openshift-service-ca.crt/service-ca.crt", recording another one with a different path "config/dvo_metrics"
I0419 18:50:50.886133 1 gather.go:177] gatherer "clusterconfig" function "dvo_metrics" took 5.041351719s to process 1 records
I0419 18:50:50.910967 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for ingress-canary-5d8tg pod in namespace openshift-ingress-canary for failing operator serve-healthcheck-canary (previous: false): "container \"serve-healthcheck-canary\" in pod \"ingress-canary-5d8tg\" is waiting to start: ContainerCreating"
I0419 18:50:50.910979 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"serve-healthcheck-canary\" in pod \"ingress-canary-5d8tg\" is waiting to start: ContainerCreating"
I0419 18:50:50.911004 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for serve-healthcheck-canary container ingress-canary-9bgzm pod in namespace openshift-ingress-canary (previous: false).
I0419 18:50:51.111167 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for ingress-canary-9bgzm pod in namespace openshift-ingress-canary for failing operator serve-healthcheck-canary (previous: false): "container \"serve-healthcheck-canary\" in pod \"ingress-canary-9bgzm\" is waiting to start: ContainerCreating"
I0419 18:50:51.111180 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"serve-healthcheck-canary\" in pod \"ingress-canary-9bgzm\" is waiting to start: ContainerCreating"
I0419 18:50:51.111206 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for serve-healthcheck-canary container ingress-canary-qmfrp pod in namespace openshift-ingress-canary (previous: false).
I0419 18:50:51.324197 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for ingress-canary-qmfrp pod in namespace openshift-ingress-canary for failing operator serve-healthcheck-canary (previous: false): "container \"serve-healthcheck-canary\" in pod \"ingress-canary-qmfrp\" is waiting to start: ContainerCreating"
I0419 18:50:51.324214 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"serve-healthcheck-canary\" in pod \"ingress-canary-qmfrp\" is waiting to start: ContainerCreating"
I0419 18:50:51.324224 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for migrator container migrator-7d5f866c57-h4ncl pod in namespace openshift-kube-storage-version-migrator (previous: false).
I0419 18:50:51.516191 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for graceful-termination container migrator-7d5f866c57-h4ncl pod in namespace openshift-kube-storage-version-migrator (previous: false).
I0419 18:50:51.711023 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for kube-storage-version-migrator-operator container kube-storage-version-migrator-operator-74848b4cb9-4xgfq pod in namespace openshift-kube-storage-version-migrator-operator (previous: false).
I0419 18:50:51.923772 1 tasks_processing.go:74] worker 58 stopped.
I0419 18:50:51.923913 1 recorder.go:75] Recording events/openshift-dns-operator with fingerprint=4cee687e01433ece207089ec00fc1f147a84e4c0feac4623c4195c8eb2436d5b
I0419 18:50:51.923995 1 recorder.go:75] Recording events/openshift-dns with fingerprint=3fc14d4b342b7b0704a19517a2675020eb87ee3e80e4b9044818d781f53fdc3f
I0419 18:50:51.924095 1 recorder.go:75] Recording events/openshift-image-registry with fingerprint=5af55105b14f0c6d5bee9ba0bf9da6ff46f7b5b040b50e0b50e279dedf0b8016
I0419 18:50:51.924136 1 recorder.go:75] Recording events/openshift-ingress-operator with fingerprint=caf98429525981d72a700288259e3a848176c2f9aca29868d9e648c6680df4a1
I0419 18:50:51.924196 1 recorder.go:75] Recording events/openshift-ingress with fingerprint=9622734d84490ea6996bc1160155739f0f2f3aea82a98fc5e3fc0b505efe1659
I0419 18:50:51.924224 1 recorder.go:75] Recording events/openshift-ingress-canary with fingerprint=a34d2cd28c1cb455437718e294d012a981ada0206cf4e8c7ecca7104b723d090
I0419 18:50:51.924250 1 recorder.go:75] Recording events/openshift-kube-storage-version-migrator with fingerprint=99202f20ee6174588f36dd778e8284624b31233b497b20c9877dd4d4b67b86a4
I0419 18:50:51.924312 1 recorder.go:75] Recording events/openshift-kube-storage-version-migrator-operator with fingerprint=4aa1b6532d45662981c2f618a352cd59a4d52a31ac1ef05e422d4fe8a6af0388
I0419 18:50:51.924478 1 recorder.go:75] Recording config/pod/openshift-dns/dns-default-5stlr with fingerprint=11e04bafd2b3eba421297aa5459fbb1ca9c7aff691e2f8f23d945499adccc762
I0419 18:50:51.924603 1 recorder.go:75] Recording config/pod/openshift-dns/dns-default-8lg8j with fingerprint=c61d0226f5d324a2c9d3ffe668e4eeb8affbf40b9e3fbf3a0124c9d42d1bdc1b
I0419 18:50:51.924698 1 recorder.go:75] Recording config/pod/openshift-dns/dns-default-rfcxb with fingerprint=25d813e8c0b3773077b5c7b71bbbdfab656fa5964411171bd506e8b3ba6aa00b
I0419 18:50:51.924820 1 recorder.go:75] Recording config/pod/openshift-image-registry/image-registry-55f9d5c65d-bmpqm with fingerprint=9f54add36df10dc398f7d012d118880ca45df0c271ed94104fafd702c80845a6
I0419 18:50:51.924948 1 recorder.go:75] Recording config/pod/openshift-image-registry/image-registry-55f9d5c65d-nknwk with fingerprint=ea6273a33f92c47b33d9e3051e117c7ff8803de6025b5683947c543b4d3de8a6
I0419 18:50:51.925059 1 recorder.go:75] Recording config/pod/openshift-image-registry/image-registry-7f485cb77-czkt4 with fingerprint=83c17eb32560d6b7c11aad728d64f290467714d1417510a9f89c55be761e62e1
I0419 18:50:51.925131 1 recorder.go:75] Recording config/pod/openshift-ingress-canary/ingress-canary-5d8tg with fingerprint=b208468fa679451955ac6b93105c2b98fa7345193fda4d4df8a4f3b0d6caad75
I0419 18:50:51.925198 1 recorder.go:75] Recording config/pod/openshift-ingress-canary/ingress-canary-9bgzm with fingerprint=aeb23a13cfc8d84db535d27ee557d7875e5c56a003bddcfe1ebf4a89ff65eac9
I0419 18:50:51.925264 1 recorder.go:75] Recording config/pod/openshift-ingress-canary/ingress-canary-qmfrp with fingerprint=705b448403eadcb32f64d240ec464b45dc2b579901bc7b51949327a8b2dfcee7
I0419 18:50:51.925291 1 recorder.go:75] Recording config/pod/openshift-kube-storage-version-migrator/logs/migrator-7d5f866c57-h4ncl/migrator_current.log with fingerprint=a2fb3bad57b588210dbd90cd1d5bfc8d43b259cd4015d012a54c2765234a236e
I0419 18:50:51.925301 1 recorder.go:75] Recording config/pod/openshift-kube-storage-version-migrator/logs/migrator-7d5f866c57-h4ncl/graceful-termination_current.log with fingerprint=a30a8524ebe7defc82c03ec694e58c76c42fc852b33cd365b6e6059e2cb68680
I0419 18:50:51.925386 1 recorder.go:75] Recording config/pod/openshift-kube-storage-version-migrator-operator/logs/kube-storage-version-migrator-operator-74848b4cb9-4xgfq/kube-storage-version-migrator-operator_current.log with fingerprint=2a9cb68d6fab7951d1773d44eafa7602c02db2dd7fc323e725433e7800b7d87d
I0419 18:50:51.925400 1 gather.go:177] gatherer "clusterconfig" function "operators_pods_and_events" took 6.079700158s to process 20 records
I0419 18:50:58.437725 1 configmapobserver.go:84] configmaps "insights-config" not found
I0419 18:50:58.738994 1 tasks_processing.go:74] worker 42 stopped.
I0419 18:50:58.739045 1 recorder.go:75] Recording config/installplans with fingerprint=f17dbfacc3bfddf27ca3b213b39495434cd4c4e9e3dbd69566ffb3845bbcf539
I0419 18:50:58.739064 1 gather.go:177] gatherer "clusterconfig" function "install_plans" took 12.893855743s to process 1 records
I0419 18:50:59.248988 1 tasks_processing.go:74] worker 33 stopped.
I0419 18:50:59.249405 1 recorder.go:75] Recording config/serviceaccounts with fingerprint=c89e48e19f2ee679cbbc8ca0a7fdb135e655b3c822496630920a7fdbed5c113c
I0419 18:50:59.249428 1 gather.go:177] gatherer "clusterconfig" function "service_accounts" took 13.407025678s to process 1 records
E0419 18:50:59.249498 1 periodic.go:247] "Unhandled Error" err="clusterconfig failed after 13.408s with: function \"pod_network_connectivity_checks\" failed with an error, function \"machine_healthchecks\" failed with an error, function \"machines\" failed with an error, function \"support_secret\" failed with an error, function \"overlapping_namespace_uids\" failed with an error, function \"ingress_certificates\" failed with an error, function \"config_maps\" failed with an error, function \"dvo_metrics\" failed with an error"
I0419 18:50:59.250611 1 controllerstatus.go:89] name=periodic-clusterconfig healthy=false reason=PeriodicGatherFailed message=Source clusterconfig could not be retrieved: function "pod_network_connectivity_checks" failed with an error, function "machine_healthchecks" failed with an error, function "machines" failed with an error, function "support_secret" failed with an error, function "overlapping_namespace_uids" failed with an error, function "ingress_certificates" failed with an error, function "config_maps" failed with an error, function "dvo_metrics" failed with an error
I0419 18:50:59.250628 1 periodic.go:209] Running workloads gatherer
I0419 18:50:59.250645 1 tasks_processing.go:45] number of workers: 2
I0419 18:50:59.250651 1 tasks_processing.go:69] worker 1 listening for tasks.
I0419 18:50:59.250655 1 tasks_processing.go:71] worker 1 working on workload_info task.
I0419 18:50:59.250671 1 tasks_processing.go:69] worker 0 listening for tasks.
I0419 18:50:59.250691 1 tasks_processing.go:71] worker 0 working on helmchart_info task.
I0419 18:50:59.278233 1 gather_workloads_info.go:278] Loaded pods in 0s, will wait 22s for image data
I0419 18:50:59.283256 1 tasks_processing.go:74] worker 0 stopped.
I0419 18:50:59.283272 1 gather.go:177] gatherer "workloads" function "helmchart_info" took 32.545579ms to process 0 records
I0419 18:50:59.288163 1 gather_workloads_info.go:387] No image sha256:765f0d23b637f685f98a31bd47c131b03cf72a40761a3f9a9d6320faa3c33733 (11ms)
I0419 18:50:59.298044 1 gather_workloads_info.go:387] No image sha256:04c87c054a3f366a7dfbe0a93ebb0c80a098ee16842c5794b67c1202eec61996 (10ms)
I0419 18:50:59.308008 1 gather_workloads_info.go:387] No image sha256:2904a78e2eb73fd6a9bb94c105c2a056831fb4113fbb7b0607c50adc9d879c9b (10ms)
I0419 18:50:59.317658 1 gather_workloads_info.go:387] No image sha256:b3909bf664c77097f75b3768830863d642eed3815dab2bfb4415c771ca2d5007 (10ms)
I0419 18:50:59.327484 1 gather_workloads_info.go:387] No image sha256:4556896f77307821531ef91b7b7faccb82b824ea695693b2989f597f0deca038 (10ms)
I0419 18:50:59.337966 1 gather_workloads_info.go:387] No image sha256:a498046d64605bcccee2440aa4f04a4602baaae263cf01d977ec5208e876b1fd (10ms)
I0419 18:50:59.347532 1 gather_workloads_info.go:387] No image sha256:1a2532940843248c57d52141185dd71fbc393ab28b65d48f682038632c1dbbad (10ms)
I0419 18:50:59.357995 1 gather_workloads_info.go:387] No image sha256:695cf2f0cc07683c2a3ce1eaf3e56fe18abc6e2bac716f7d9843f5d173b9df52 (10ms)
I0419 18:50:59.368876 1 gather_workloads_info.go:387] No image sha256:a258c226562adb14e3a163a1940938526ee6a0928982a7667d85d9a7334ce639 (11ms)
I0419 18:50:59.379571 1 gather_workloads_info.go:387] No image sha256:91828234f107c068c8a4966d08370ae7b73e637651dbc6d92c18c4553402c22c (11ms)
I0419 18:50:59.390593 1 gather_workloads_info.go:387] No image sha256:5f0b67cfbbc381243fb91ccc17345b56d05f4d717c667e8c644e5bf05633ba71 (11ms)
I0419 18:50:59.481081 1 configmapobserver.go:84] configmaps "insights-config" not found
I0419 18:50:59.488913 1 gather_workloads_info.go:387] No image sha256:521712486e2c6e3c020dad6a1cb340db8e55665b69f7c208fab9cd9e965fd588 (98ms)
I0419 18:50:59.589147 1 gather_workloads_info.go:387] No image sha256:0a99240166165eb5718e7516a43282fe32df9c7c5e809b31b58abe44e42ff94d (100ms)
I0419 18:50:59.684341 1 configmapobserver.go:84] configmaps "insights-config" not found
I0419 18:50:59.689217 1 gather_workloads_info.go:387] No image sha256:56a85660a445eced5c79a595a0eccf590087c5672d50f49d4c25ad52f9a44f04 (100ms)
I0419 18:50:59.789070 1 gather_workloads_info.go:387] No image sha256:36b9e89c3cfcf1ab9ae500486e38afb6862cba48cb0b4d84a09508ab8f3d299f (100ms)
I0419 18:50:59.888358 1 gather_workloads_info.go:387] No image sha256:a0105d1eb62cf6ac9e5e2ef28d3e89bf6dc514bc594fc7090fe5a5ee18a09c87 (99ms)
I0419 18:50:59.988278 1 gather_workloads_info.go:387] No image sha256:2e564f336c77116053f34d4201d364d8da04e789cfffa0ea422574c95f2d6404 (100ms)
I0419 18:51:00.088841 1 gather_workloads_info.go:387] No image sha256:943018739e3db1763c3184b460dbc409e058abbac76d57b9927faad317be85e4 (101ms)
I0419 18:51:00.188779 1 gather_workloads_info.go:387] No image sha256:47154813651033d59751fb655a384dbffb64dd26f10bd7f3be0c3128d0486356 (100ms)
I0419 18:51:00.289390 1 gather_workloads_info.go:387] No image sha256:03cf4cd7ef1518610c6c7b3ad27d1622d82e98e3dc6e3f8e5d0fceb5c8d3786e (101ms)
I0419 18:51:00.390036 1 gather_workloads_info.go:387] No image sha256:ca1344cb64140188b7cae7bbc51fb751566c0b0c97d5e39b5850e628032c4a5e (101ms)
I0419 18:51:00.488716 1 gather_workloads_info.go:387] No image sha256:653c666f842c13e0baae2e89a9b1efe0e2ef56f621ffb5b32005115d2a26ab8c (99ms)
I0419 18:51:00.588351 1 gather_workloads_info.go:387] No image sha256:7adc1eab05d6724c76ba751f6df816b08d6e70b78dee9eb94fa6fd9690542c98 (100ms)
I0419 18:51:00.702462 1 gather_workloads_info.go:387] No image sha256:e84cb128d930bd1ab867cc89b7b7bf2b2c0e41105ab93b5381069945b3ee9c57 (114ms)
I0419 18:51:00.793343 1 gather_workloads_info.go:387] No image sha256:c940ea87e7d133d75ba0002ef00c0806825eed3db8094cdb260d1bac18127733 (91ms)
I0419 18:51:00.889964 1 gather_workloads_info.go:387] No image sha256:2598489729a4b258e4ecda4a06f6875133f2a10ced5c5241f8a57a8a05418e36 (97ms)
I0419 18:51:00.988909 1 gather_workloads_info.go:387] No image sha256:c15ca0c0ad60fe8757c2d5d1723fcdd7a1ed6c0251a90d22a7e6cae6811d01aa (99ms)
I0419 18:51:01.089110 1 gather_workloads_info.go:387] No image sha256:2e57e192c3c1240fd935dcd55c8fde5e70e78bf81d6176c96edf21fafe59f8ba (100ms)
I0419 18:51:01.192040 1 gather_workloads_info.go:387] No image sha256:5a95c19d82767e0235b4edb4a0536482c816904897aae1dc3eb255cb52b87a9f (103ms)
I0419 18:51:01.290055 1 gather_workloads_info.go:387] No image sha256:a56211d075aa43cbb491f669a5b2e46ee023dc95b7d51dbac28f463948c5ad61 (98ms)
I0419 18:51:01.388843 1 gather_workloads_info.go:387] No image sha256:7b31223098f08328f5ddea8e5b871dbbd5f5a61ec550e8956f66793c0c6031a9 (99ms)
I0419 18:51:01.488779 1 gather_workloads_info.go:387] No image sha256:289816958633a763a72dbc44e1dad40466223164e7e253039514f0d974ea5d21 (100ms)
I0419 18:51:01.488806 1 tasks_processing.go:74] worker 1 stopped.
E0419 18:51:01.488816 1 gather.go:140] gatherer "workloads" function "workload_info" failed with the error: no running pods found for the insights-runtime-extractor statefulset
I0419 18:51:01.489123 1 recorder.go:75] Recording config/workload_info with fingerprint=17a253f60266fd2a50f02a4ff123b86d979da6903c0be60bb8a6fe8a80364d5b
I0419 18:51:01.489155 1 gather.go:177] gatherer "workloads" function "workload_info" took 2.238143079s to process 1 records
E0419 18:51:01.489184 1 periodic.go:247] "Unhandled Error" err="workloads failed after 2.238s with: function \"workload_info\" failed with an error"
I0419 18:51:01.490286 1 controllerstatus.go:89] name=periodic-workloads healthy=false reason=PeriodicGatherFailed message=Source workloads could not be retrieved: function "workload_info" failed with an error
I0419 18:51:01.490298 1 periodic.go:209] Running conditional gatherer
I0419 18:51:01.496624 1 requests.go:294] Making HTTP GET request at: https://console.redhat.com/api/gathering/v2/4.20.8/gathering_rules
I0419 18:51:01.502995 1 conditional_gatherer.go:107] Get "https://console.redhat.com/api/gathering/v2/4.20.8/gathering_rules": dial tcp: lookup console.redhat.com on 172.30.0.10:53: read udp 10.129.0.15:48853->172.30.0.10:53: read: connection refused
E0419 18:51:01.503232 1 conditional_gatherer.go:322] unable to update alerts cache: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory
I0419 18:51:01.503288 1 conditional_gatherer.go:384] updating version cache for conditional gatherer
I0419 18:51:01.509977 1 conditional_gatherer.go:392] cluster version is '4.20.8'
E0419 18:51:01.509990 1 conditional_gatherer.go:209] error checking conditions for a gathering rule: alerts cache is missing
E0419 18:51:01.509995 1 conditional_gatherer.go:209] error checking conditions for a gathering rule: alerts cache is missing
E0419 18:51:01.509999 1 conditional_gatherer.go:209] error checking conditions for a gathering rule: alerts cache is missing
E0419 18:51:01.510002 1 conditional_gatherer.go:209] error checking conditions for a gathering rule: alerts cache is missing
E0419 18:51:01.510005 1 conditional_gatherer.go:209] error checking conditions for a gathering rule: alerts cache is missing
E0419 18:51:01.510008 1 conditional_gatherer.go:209] error checking conditions for a gathering rule: alerts cache is missing
E0419 18:51:01.510011 1 conditional_gatherer.go:209] error checking conditions for a gathering rule: alerts cache is missing
E0419 18:51:01.510013 1 conditional_gatherer.go:209] error checking conditions for a gathering rule: alerts cache is missing
E0419 18:51:01.510016 1 conditional_gatherer.go:209] error checking conditions for a gathering rule: alerts cache is missing
I0419 18:51:01.510030 1 tasks_processing.go:45] number of workers: 3
I0419 18:51:01.510038 1 tasks_processing.go:69] worker 2 listening for tasks.
I0419 18:51:01.510042 1 tasks_processing.go:71] worker 2 working on conditional_gatherer_rules task.
I0419 18:51:01.510048 1 tasks_processing.go:69] worker 0 listening for tasks.
I0419 18:51:01.510056 1 tasks_processing.go:69] worker 1 listening for tasks.
I0419 18:51:01.510070 1 tasks_processing.go:74] worker 1 stopped.
I0419 18:51:01.510083 1 tasks_processing.go:71] worker 0 working on remote_configuration task.
I0419 18:51:01.510084 1 tasks_processing.go:71] worker 2 working on rapid_container_logs task.
I0419 18:51:01.510113 1 recorder.go:75] Recording insights-operator/conditional-gatherer-rules with fingerprint=7034af97e7e41c22e4b775abdd4b9066c8ebb19da33eb7f69f39bfd2eb5f6406
I0419 18:51:01.510138 1 gather.go:177] gatherer "conditional" function "conditional_gatherer_rules" took 722ns to process 1 records
I0419 18:51:01.510172 1 recorder.go:75] Recording insights-operator/remote-configuration with fingerprint=0394430c431eec4d48bb1811a90918e95161d2282c59af26f2473613cc0959db
I0419 18:51:01.510179 1 gather.go:177] gatherer "conditional" function "remote_configuration" took 1.164µs to process 1 records
I0419 18:51:01.510185 1 tasks_processing.go:74] worker 0 stopped.
I0419 18:51:01.510341 1 tasks_processing.go:74] worker 2 stopped.
I0419 18:51:01.510353 1 gather.go:177] gatherer "conditional" function "rapid_container_logs" took 248.597µs to process 0 records
I0419 18:51:01.510373 1 controllerstatus.go:89] name=periodic-conditional healthy=false reason=NotAvailable message=Get "https://console.redhat.com/api/gathering/v2/4.20.8/gathering_rules": dial tcp: lookup console.redhat.com on 172.30.0.10:53: read udp 10.129.0.15:48853->172.30.0.10:53: read: connection refused
I0419 18:51:01.510391 1 recorder.go:75] Recording insights-operator/remote-configuration.json with fingerprint=359de9c990c741675cec72fda96b5c3682221efdb4799f5eaa6e9805bcd3b5c1
I0419 18:51:01.533824 1 recorder.go:75] Recording insights-operator/gathers with fingerprint=eade2953d4f52daa3c0eb2aa00f3444ec2c9fa3630717d050f6b09b6493f66d5
I0419 18:51:01.533964 1 diskrecorder.go:70] Writing 116 records to /var/lib/insights-operator/insights-2026-04-19-185101.tar.gz
I0419 18:51:01.541450 1 diskrecorder.go:51] Wrote 116 records to disk in 7ms
I0419 18:51:01.541479 1 periodic.go:278] Gathering cluster info every 2h0m0s
I0419 18:51:01.541494 1 periodic.go:279] Configuration is dataReporting: interval: 2h0m0s, uploadEndpoint: https://console.redhat.com/api/ingress/v1/upload, storagePath: /var/lib/insights-operator, downloadEndpoint: https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/reports, conditionalGathererEndpoint: https://console.redhat.com/api/gathering/v2/%s/gathering_rules, obfuscation: [] sca: disabled: false, endpoint: https://api.openshift.com/api/accounts_mgmt/v1/entitlement_certificates, interval: 8h0m0s alerting: disabled: false clusterTransfer: endpoint: https://api.openshift.com/api/accounts_mgmt/v1/cluster_transfers/, interval: 12h0m0s proxy: httpProxy: , httpsProxy: , noProxy:
I0419 18:51:12.398233 1 configmapobserver.go:84] configmaps "insights-config" not found
I0419 18:51:54.848742 1 observer_polling.go:111] Observed file "/var/run/secrets/serving-cert/tls.key" has been created (hash="6cfd2c86e287ff7a0e6e60ea8f6332af20b0ebc00dd3b9a6f1146265dde7b1b3")
W0419 18:51:54.848770 1 builder.go:160] Restart triggered because of file /var/run/secrets/serving-cert/tls.key was created
I0419 18:51:54.848817 1 simple_featuregate_reader.go:177] Shutting down feature-gate-detector
I0419 18:51:54.848863 1 genericapiserver.go:693] "[graceful-termination] pre-shutdown hooks completed" name="PreShutdownHooksStopped"
I0419 18:51:54.848873 1 observer_polling.go:111] Observed file "/var/run/configmaps/service-ca-bundle/service-ca.crt" has been created (hash="1b586d71d5cec2b590ce38a13d1f58d589fbd8d36ce301cdf904e678f653e95d")
I0419 18:51:54.848889 1 base_controller.go:181] Shutting down LoggingSyncer ...
I0419 18:51:54.848893 1 genericapiserver.go:548] "[graceful-termination] shutdown event" name="ShutdownInitiated"
I0419 18:51:54.848907 1 periodic.go:170] Shutting down
I0419 18:51:54.848918 1 base_controller.go:123] Shutting down worker of LoggingSyncer controller ...
I0419 18:51:54.848925 1 base_controller.go:113] All LoggingSyncer workers have been terminated
I0419 18:51:54.848934 1 genericapiserver.go:651] "[graceful-termination] not going to wait for active watch request(s) to drain"
I0419 18:51:54.848951 1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
I0419 18:51:54.848961 1 requestheader_controller.go:194] Shutting down RequestHeaderAuthRequestController
I0419 18:51:54.848972 1 base_controller.go:123] Shutting down worker of ConfigController controller ...
I0419 18:51:54.848952 1 object_count_tracker.go:151] "StorageObjectCountTracker pruner is exiting"
I0419 18:51:54.848984 1 observer_polling.go:111] Observed file "/var/run/secrets/serving-cert/tls.crt" has been created (hash="6647663ed5255fe89aea093d82ebeafe758bb2657b3ad7fb8e162a9f45f36e27")
I0419 18:51:54.848958 1 base_controller.go:181] Shutting down ConfigController ...
I0419 18:51:54.849006 1 base_controller.go:113] All ConfigController workers have been terminated
I0419 18:51:54.848995 1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
E0419 18:51:54.849012 1 controller.go:299] Unable to write cluster operator status: client rate limiter Wait returned an error: context canceled