W0419 19:00:26.050184 1 cmd.go:257] Using insecure, self-signed certificates
I0419 19:00:26.307535 1 start.go:138] Unable to read service ca bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory
I0419 19:00:26.307840 1 observer_polling.go:159] Starting file observer
I0419 19:00:26.758867 1 operator.go:60] Starting insights-operator v0.0.0-master+$Format:%H$
I0419 19:00:26.759061 1 legacy_config.go:327] Current config: {"report":false,"storagePath":"/var/lib/insights-operator","interval":"2h","endpoint":"https://console.redhat.com/api/ingress/v1/upload","conditionalGathererEndpoint":"https://console.redhat.com/api/gathering/v2/%s/gathering_rules","pull_report":{"endpoint":"https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/reports","delay":"60s","timeout":"3000s","min_retry":"30s"},"impersonate":"system:serviceaccount:openshift-insights:gather","enableGlobalObfuscation":false,"ocm":{"scaEndpoint":"https://api.openshift.com/api/accounts_mgmt/v1/entitlement_certificates","scaInterval":"8h","scaDisabled":false,"clusterTransferEndpoint":"https://api.openshift.com/api/accounts_mgmt/v1/cluster_transfers/","clusterTransferInterval":"12h"},"disableInsightsAlerts":false,"processingStatusEndpoint":"https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/request/%s/status","reportEndpointTechPreview":"https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/request/%s/report"}
I0419 19:00:26.759539 1 secure_serving.go:57] Forcing use of http/1.1 only
W0419 19:00:26.759560 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.
W0419 19:00:26.759565 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.
W0419 19:00:26.759569 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.
I0419 19:00:26.759565 1 simple_featuregate_reader.go:171] Starting feature-gate-detector
W0419 19:00:26.759573 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.
W0419 19:00:26.759604 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.
W0419 19:00:26.759608 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.
I0419 19:00:26.762807 1 operator.go:125] FeatureGates initialized: knownFeatureGates=[AdditionalRoutingCapabilities AdminNetworkPolicy AlibabaPlatform AzureWorkloadIdentity BuildCSIVolumes CPMSMachineNamePrefix ConsolePluginContentSecurityPolicy ExternalOIDC ExternalOIDCWithUIDAndExtraClaimMappings GatewayAPI GatewayAPIController HighlyAvailableArbiter ImageVolume IngressControllerLBSubnetsAWS KMSv1 MachineConfigNodes ManagedBootImages ManagedBootImagesAWS MetricsCollectionProfiles NetworkDiagnosticsConfig NetworkLiveMigration NetworkSegmentation PinnedImages ProcMountType RouteAdvertisements RouteExternalCertificate ServiceAccountTokenNodeBinding SetEIPForNLBIngressController SigstoreImageVerification StoragePerformantSecurityPolicy UpgradeStatus UserNamespacesPodSecurityStandards UserNamespacesSupport VSphereMultiDisk VSphereMultiNetworks AWSClusterHostedDNS AWSClusterHostedDNSInstall AWSDedicatedHosts AWSServiceLBNetworkSecurityGroup AutomatedEtcdBackup AzureClusterHostedDNSInstall AzureDedicatedHosts AzureMultiDisk BootImageSkewEnforcement BootcNodeManagement ClusterAPIInstall ClusterAPIInstallIBMCloud ClusterMonitoringConfig ClusterVersionOperatorConfiguration DNSNameResolver DualReplica DyanmicServiceEndpointIBMCloud DynamicResourceAllocation EtcdBackendQuota EventedPLEG Example Example2 ExternalSnapshotMetadata GCPClusterHostedDNS GCPClusterHostedDNSInstall GCPCustomAPIEndpoints GCPCustomAPIEndpointsInstall ImageModeStatusReporting ImageStreamImportMode IngressControllerDynamicConfigurationManager InsightsConfig InsightsConfigAPI InsightsOnDemandDataGather IrreconcilableMachineConfig KMSEncryptionProvider MachineAPIMigration MachineAPIOperatorDisableMachineHealthCheckController ManagedBootImagesAzure ManagedBootImagesvSphere MaxUnavailableStatefulSet MinimumKubeletVersion MixedCPUsAllocation MultiArchInstallAzure MultiDiskSetup MutatingAdmissionPolicy NewOLM NewOLMCatalogdAPIV1Metas NewOLMOwnSingleNamespace NewOLMPreflightPermissionChecks NewOLMWebhookProviderOpenshiftServiceCA NoRegistryClusterOperations NodeSwap NutanixMultiSubnets OVNObservability OpenShiftPodSecurityAdmission PreconfiguredUDNAddresses SELinuxMount ShortCertRotation SignatureStores SigstoreImageVerificationPKI TranslateStreamCloseWebsocketRequests VSphereConfigurableMaxAllowedBlockVolumesPerNode VSphereHostVMGroupZonal VSphereMixedNodeEnv VolumeAttributesClass VolumeGroupSnapshot]
I0419 19:00:26.762824 1 event.go:377] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-insights", Name:"insights-operator", UID:"5ead446f-4f34-4821-be87-aec76139278f", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'FeatureGatesInitialized' FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdditionalRoutingCapabilities", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BuildCSIVolumes", "CPMSMachineNamePrefix", "ConsolePluginContentSecurityPolicy", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GatewayAPI", "GatewayAPIController", "HighlyAvailableArbiter", "ImageVolume", "IngressControllerLBSubnetsAWS", "KMSv1", "MachineConfigNodes", "ManagedBootImages", "ManagedBootImagesAWS", "MetricsCollectionProfiles", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "PinnedImages", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SetEIPForNLBIngressController", "SigstoreImageVerification", "StoragePerformantSecurityPolicy", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiDisk", "VSphereMultiNetworks"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AWSClusterHostedDNSInstall", "AWSDedicatedHosts", "AWSServiceLBNetworkSecurityGroup", "AutomatedEtcdBackup", "AzureClusterHostedDNSInstall", "AzureDedicatedHosts", "AzureMultiDisk", "BootImageSkewEnforcement", "BootcNodeManagement", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ClusterVersionOperatorConfiguration", "DNSNameResolver", "DualReplica", "DyanmicServiceEndpointIBMCloud", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "Example2", "ExternalSnapshotMetadata", "GCPClusterHostedDNS", "GCPClusterHostedDNSInstall", "GCPCustomAPIEndpoints", "GCPCustomAPIEndpointsInstall", "ImageModeStatusReporting", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "IrreconcilableMachineConfig", "KMSEncryptionProvider", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "ManagedBootImagesAzure", "ManagedBootImagesvSphere", "MaxUnavailableStatefulSet", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "MultiDiskSetup", "MutatingAdmissionPolicy", "NewOLM", "NewOLMCatalogdAPIV1Metas", "NewOLMOwnSingleNamespace", "NewOLMPreflightPermissionChecks", "NewOLMWebhookProviderOpenshiftServiceCA", "NoRegistryClusterOperations", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PreconfiguredUDNAddresses", "SELinuxMount", "ShortCertRotation", "SignatureStores", "SigstoreImageVerificationPKI", "TranslateStreamCloseWebsocketRequests", "VSphereConfigurableMaxAllowedBlockVolumesPerNode", "VSphereHostVMGroupZonal", "VSphereMixedNodeEnv", "VolumeAttributesClass", "VolumeGroupSnapshot"}}
I0419 19:00:26.763705 1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
I0419 19:00:26.763709 1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I0419 19:00:26.763727 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController
I0419 19:00:26.763727 1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
I0419 19:00:26.763741 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I0419 19:00:26.763732 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0419 19:00:26.763965 1 dynamic_serving_content.go:135] "Starting controller" name="serving-cert::/tmp/serving-cert-1384778109/tls.crt::/tmp/serving-cert-1384778109/tls.key"
I0419 19:00:26.764244 1 secure_serving.go:213] Serving securely on [::]:8443
I0419 19:00:26.764269 1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
W0419 19:00:26.770330 1 configmapobserver.go:64] Cannot get the configuration config map: configmaps "insights-config" not found. Default configuration is used.
I0419 19:00:26.770353 1 secretconfigobserver.go:216] Legacy configuration set: enabled=false endpoint=https://console.redhat.com/api/ingress/v1/upload conditional_gatherer_endpoint=https://console.redhat.com/api/gathering/v2/%s/gathering_rules interval=2h0m0s token=false reportEndpoint=https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/reports initialPollingDelay=1m0s minRetryTime=30s pollingTimeout=50m0s processingStatusEndpoint=https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/request/%s/status
I0419 19:00:26.770449 1 base_controller.go:76] Waiting for caches to sync for ConfigController
I0419 19:00:26.774905 1 secretconfigobserver.go:249] Found cloud.openshift.com token
I0419 19:00:26.774933 1 secretconfigobserver.go:204] Legacy configuration updated: enabled=true endpoint=https://console.redhat.com/api/ingress/v1/upload conditional_gatherer_endpoint=https://console.redhat.com/api/gathering/v2/%s/gathering_rules interval=2h0m0s token=true reportEndpoint=https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/reports initialPollingDelay=1m0s minRetryTime=30s pollingTimeout=50m0s processingStatusEndpoint=https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/request/%s/status
I0419 19:00:26.778861 1 secretconfigobserver.go:119] support secret does not exist
I0419 19:00:26.782586 1 secretconfigobserver.go:249] Found cloud.openshift.com token
I0419 19:00:26.786031 1 secretconfigobserver.go:119] support secret does not exist
I0419 19:00:26.788290 1 recorder.go:161] Pruning old reports every 7h48m21s, max age is 288h0m0s
I0419 19:00:26.792010 1 controllerstatus.go:80] name=insightsuploader healthy=true reason= message=
I0419 19:00:26.792025 1 insightsuploader.go:86] Reporting status periodically to https://console.redhat.com/api/ingress/v1/upload every 2h0m0s, starting in 1m30s
I0419 19:00:26.792031 1 periodic.go:209] Running clusterconfig gatherer
I0419 19:00:26.792063 1 tasks_processing.go:45] number of workers: 64
I0419 19:00:26.792099 1 tasks_processing.go:69] worker 63 listening for tasks.
I0419 19:00:26.792105 1 tasks_processing.go:71] worker 63 working on openstack_dataplanedeployments task.
I0419 19:00:26.792107 1 tasks_processing.go:69] worker 30 listening for tasks.
I0419 19:00:26.792029 1 controllerstatus.go:80] name=insightsreport healthy=true reason= message=
I0419 19:00:26.792115 1 tasks_processing.go:69] worker 46 listening for tasks.
I0419 19:00:26.792118 1 insightsreport.go:296] Starting report retriever
I0419 19:00:26.792122 1 tasks_processing.go:69] worker 39 listening for tasks.
I0419 19:00:26.792125 1 insightsreport.go:298] Insights analysis reports will be downloaded from the https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/reports endpoint with a delay of 1m0s
I0419 19:00:26.792128 1 tasks_processing.go:69] worker 40 listening for tasks.
I0419 19:00:26.792132 1 tasks_processing.go:69] worker 44 listening for tasks.
I0419 19:00:26.792134 1 tasks_processing.go:69] worker 0 listening for tasks.
I0419 19:00:26.792138 1 tasks_processing.go:69] worker 45 listening for tasks.
I0419 19:00:26.792140 1 tasks_processing.go:69] worker 1 listening for tasks.
I0419 19:00:26.792137 1 tasks_processing.go:69] worker 43 listening for tasks.
I0419 19:00:26.792144 1 tasks_processing.go:69] worker 41 listening for tasks.
I0419 19:00:26.792148 1 tasks_processing.go:69] worker 54 listening for tasks.
I0419 19:00:26.792142 1 tasks_processing.go:69] worker 38 listening for tasks.
I0419 19:00:26.792146 1 tasks_processing.go:69] worker 16 listening for tasks.
I0419 19:00:26.792153 1 tasks_processing.go:69] worker 42 listening for tasks.
I0419 19:00:26.792155 1 tasks_processing.go:69] worker 31 listening for tasks.
I0419 19:00:26.792159 1 tasks_processing.go:69] worker 47 listening for tasks.
I0419 19:00:26.792155 1 tasks_processing.go:69] worker 17 listening for tasks.
I0419 19:00:26.792160 1 tasks_processing.go:69] worker 15 listening for tasks.
I0419 19:00:26.792168 1 tasks_processing.go:69] worker 49 listening for tasks.
I0419 19:00:26.792170 1 tasks_processing.go:69] worker 19 listening for tasks.
I0419 19:00:26.792170 1 tasks_processing.go:69] worker 59 listening for tasks.
I0419 19:00:26.792169 1 tasks_processing.go:69] worker 58 listening for tasks.
I0419 19:00:26.792178 1 tasks_processing.go:69] worker 20 listening for tasks.
I0419 19:00:26.792180 1 tasks_processing.go:69] worker 5 listening for tasks.
I0419 19:00:26.792179 1 tasks_processing.go:69] worker 51 listening for tasks.
I0419 19:00:26.792184 1 tasks_processing.go:69] worker 52 listening for tasks.
I0419 19:00:26.792187 1 tasks_processing.go:69] worker 55 listening for tasks.
I0419 19:00:26.792187 1 tasks_processing.go:69] worker 3 listening for tasks.
I0419 19:00:26.792191 1 tasks_processing.go:69] worker 4 listening for tasks.
I0419 19:00:26.792195 1 tasks_processing.go:69] worker 11 listening for tasks.
I0419 19:00:26.792198 1 tasks_processing.go:69] worker 13 listening for tasks.
I0419 19:00:26.792200 1 tasks_processing.go:69] worker 61 listening for tasks.
I0419 19:00:26.792200 1 tasks_processing.go:69] worker 57 listening for tasks.
I0419 19:00:26.792202 1 tasks_processing.go:69] worker 22 listening for tasks.
I0419 19:00:26.792205 1 tasks_processing.go:69] worker 12 listening for tasks.
I0419 19:00:26.792206 1 tasks_processing.go:69] worker 36 listening for tasks.
I0419 19:00:26.792210 1 tasks_processing.go:69] worker 27 listening for tasks.
I0419 19:00:26.792212 1 tasks_processing.go:69] worker 24 listening for tasks.
I0419 19:00:26.792213 1 tasks_processing.go:69] worker 25 listening for tasks.
I0419 19:00:26.792213 1 tasks_processing.go:69] worker 28 listening for tasks.
I0419 19:00:26.792211 1 tasks_processing.go:69] worker 56 listening for tasks.
I0419 19:00:26.792214 1 tasks_processing.go:69] worker 7 listening for tasks.
I0419 19:00:26.792207 1 tasks_processing.go:69] worker 26 listening for tasks.
I0419 19:00:26.792198 1 tasks_processing.go:69] worker 35 listening for tasks.
I0419 19:00:26.792227 1 tasks_processing.go:69] worker 6 listening for tasks.
I0419 19:00:26.792162 1 tasks_processing.go:69] worker 18 listening for tasks.
I0419 19:00:26.792195 1 tasks_processing.go:69] worker 21 listening for tasks.
I0419 19:00:26.792175 1 tasks_processing.go:69] worker 50 listening for tasks.
I0419 19:00:26.792165 1 tasks_processing.go:69] worker 60 listening for tasks.
I0419 19:00:26.792164 1 tasks_processing.go:69] worker 48 listening for tasks.
I0419 19:00:26.792172 1 tasks_processing.go:69] worker 32 listening for tasks.
I0419 19:00:26.792176 1 tasks_processing.go:69] worker 2 listening for tasks.
I0419 19:00:26.792181 1 tasks_processing.go:69] worker 9 listening for tasks.
I0419 19:00:26.792180 1 tasks_processing.go:69] worker 33 listening for tasks.
I0419 19:00:26.792188 1 tasks_processing.go:69] worker 10 listening for tasks.
I0419 19:00:26.792191 1 tasks_processing.go:69] worker 34 listening for tasks.
I0419 19:00:26.792191 1 tasks_processing.go:69] worker 53 listening for tasks.
I0419 19:00:26.792198 1 tasks_processing.go:69] worker 8 listening for tasks.
I0419 19:00:26.792205 1 tasks_processing.go:69] worker 14 listening for tasks.
I0419 19:00:26.792215 1 tasks_processing.go:69] worker 62 listening for tasks.
I0419 19:00:26.792216 1 tasks_processing.go:69] worker 37 listening for tasks.
I0419 19:00:26.792218 1 tasks_processing.go:69] worker 23 listening for tasks.
I0419 19:00:26.792219 1 tasks_processing.go:69] worker 29 listening for tasks.
I0419 19:00:26.792285 1 tasks_processing.go:71] worker 40 working on authentication task.
I0419 19:00:26.792289 1 tasks_processing.go:71] worker 39 working on pod_network_connectivity_checks task.
I0419 19:00:26.792293 1 tasks_processing.go:71] worker 43 working on nodenetworkconfigurationpolicies task.
I0419 19:00:26.792295 1 tasks_processing.go:71] worker 29 working on qemu_kubevirt_launcher_logs task.
I0419 19:00:26.792299 1 tasks_processing.go:71] worker 0 working on container_runtime_configs task.
I0419 19:00:26.792300 1 tasks_processing.go:71] worker 30 working on storage_cluster task.
I0419 19:00:26.792290 1 tasks_processing.go:71] worker 1 working on dvo_metrics task.
I0419 19:00:26.792355 1 tasks_processing.go:71] worker 45 working on ceph_cluster task.
I0419 19:00:26.792361 1 tasks_processing.go:71] worker 27 working on ingress task.
I0419 19:00:26.792369 1 tasks_processing.go:71] worker 48 working on cost_management_metrics_configs task.
I0419 19:00:26.792382 1 tasks_processing.go:71] worker 26 working on cluster_apiserver task.
I0419 19:00:26.792396 1 tasks_processing.go:71] worker 41 working on image_registries task.
I0419 19:00:26.792420 1 tasks_processing.go:71] worker 5 working on feature_gates task.
I0419 19:00:26.792446 1 tasks_processing.go:71] worker 15 working on sap_datahubs task.
I0419 19:00:26.792437 1 tasks_processing.go:71] worker 54 working on node_logs task.
I0419 19:00:26.792501 1 tasks_processing.go:71] worker 18 working on machine_healthchecks task.
I0419 19:00:26.792510 1 tasks_processing.go:71] worker 28 working on olm_operators task.
I0419 19:00:26.792516 1 tasks_processing.go:71] worker 59 working on lokistack task.
I0419 19:00:26.792532 1 tasks_processing.go:71] worker 58 working on jaegers task.
I0419 19:00:26.792602 1 tasks_processing.go:71] worker 21 working on proxies task.
I0419 19:00:26.792644 1 tasks_processing.go:71] worker 16 working on validating_webhook_configurations task.
I0419 19:00:26.792652 1 tasks_processing.go:71] worker 35 working on install_plans task.
I0419 19:00:26.792673 1 tasks_processing.go:71] worker 49 working on version task.
I0419 19:00:26.792717 1 tasks_processing.go:71] worker 24 working on machine_configs task.
I0419 19:00:26.792837 1 tasks_processing.go:71] worker 13 working on mutating_webhook_configurations task.
I0419 19:00:26.792903 1 tasks_processing.go:71] worker 50 working on nodenetworkstates task.
I0419 19:00:26.792935 1 tasks_processing.go:71] worker 6 working on support_secret task.
I0419 19:00:26.793001 1 tasks_processing.go:71] worker 19 working on silenced_alerts task.
W0419 19:00:26.793029 1 gather_silenced_alerts.go:38] Unable to load alerts client, no alerts will be collected: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory
I0419 19:00:26.793036 1 tasks_processing.go:71] worker 19 working on service_accounts task.
I0419 19:00:26.793065 1 gather.go:177] gatherer "clusterconfig" function "silenced_alerts" took 23.975µs to process 0 records
I0419 19:00:26.793068 1 tasks_processing.go:71] worker 11 working on certificate_signing_requests task.
I0419 19:00:26.793084 1 tasks_processing.go:71] worker 52 working on config_maps task.
I0419 19:00:26.793087 1 tasks_processing.go:71] worker 56 working on metrics task.
W0419 19:00:26.793105 1 gather_most_recent_metrics.go:64] Unable to load metrics client, no metrics will be collected: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory
I0419 19:00:26.793117 1 gather.go:177] gatherer "clusterconfig" function "metrics" took 19.979µs to process 0 records
I0419 19:00:26.793124 1 tasks_processing.go:71] worker 7 working on clusterroles task.
I0419 19:00:26.793141 1 tasks_processing.go:71] worker 20 working on openshift_machine_api_events task.
I0419 19:00:26.793232 1 tasks_processing.go:71] worker 25 working on image task.
I0419 19:00:26.793242 1 tasks_processing.go:71] worker 2 working on machines task.
I0419 19:00:26.793256 1 tasks_processing.go:71] worker 61 working on infrastructures task.
I0419 19:00:26.793269 1 tasks_processing.go:71] worker 22 working on machine_autoscalers task.
I0419 19:00:26.793455 1 tasks_processing.go:71] worker 53 working on overlapping_namespace_uids task.
I0419 19:00:26.793498 1 tasks_processing.go:71] worker 57 working on image_pruners task.
I0419 19:00:26.792293 1 tasks_processing.go:71] worker 46 working on tsdb_status task.
I0419 19:00:26.793080 1 tasks_processing.go:71] worker 51 working on nodes task.
I0419 19:00:26.793513 1 tasks_processing.go:71] worker 33 working on operators_pods_and_events task.
I0419 19:00:26.793576 1 tasks_processing.go:71] worker 10 working on oauths task.
I0419 19:00:26.793737 1 tasks_processing.go:71] worker 42 working on networks task.
I0419 19:00:26.793816 1 tasks_processing.go:71] worker 3 working on schedulers task.
I0419 19:00:26.793935 1 tasks_processing.go:71] worker 47 working on monitoring_persistent_volumes task.
I0419 19:00:26.793984 1 tasks_processing.go:71] worker 9 working on operators task.
I0419 19:00:26.794031 1 tasks_processing.go:71] worker 23 working on ingress_certificates task.
I0419 19:00:26.794186 1 tasks_processing.go:71] worker 60 working on pdbs task.
I0419 19:00:26.794235 1 tasks_processing.go:71] worker 31 working on openstack_controlplanes task.
I0419 19:00:26.792295 1 tasks_processing.go:71] worker 44 working on storage_classes task.
I0419 19:00:26.794291 1 tasks_processing.go:71] worker 37 working on number_of_pods_and_netnamespaces_with_sdn_annotations task.
W0419 19:00:26.794346 1 gather_prometheus_tsdb_status.go:38] Unable to load metrics client, tsdb status cannot be collected: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory
I0419 19:00:26.794351 1 tasks_processing.go:71] worker 55 working on crds task.
I0419 19:00:26.794368 1 gather.go:177] gatherer "clusterconfig" function "tsdb_status" took 32.692µs to process 0 records
I0419 19:00:26.794429 1 tasks_processing.go:74] worker 56 stopped.
I0419 19:00:26.793232 1 tasks_processing.go:71] worker 34 working on openstack_version task.
I0419 19:00:26.794449 1 tasks_processing.go:74] worker 46 stopped.
I0419 19:00:26.793962 1 tasks_processing.go:71] worker 12 working on container_images task.
I0419 19:00:26.794980 1 tasks_processing.go:74] worker 43 stopped.
I0419 19:00:26.795109 1 gather.go:177] gatherer "clusterconfig" function "nodenetworkconfigurationpolicies" took 2.676959ms to process 0 records
E0419 19:00:26.795126 1 gather.go:140] gatherer "clusterconfig" function "pod_network_connectivity_checks" failed with the error: the server could not find the requested resource (get podnetworkconnectivitychecks.controlplane.operator.openshift.io)
I0419 19:00:26.793967 1 tasks_processing.go:71] worker 36 working on machine_config_pools task.
I0419 19:00:26.795132 1 gather.go:177] gatherer "clusterconfig" function "pod_network_connectivity_checks" took 2.727926ms to process 0 records
I0419 19:00:26.795141 1 gather.go:177] gatherer "clusterconfig" function "openstack_dataplanedeployments" took 2.995378ms to process 0 records
I0419 19:00:26.795154 1 tasks_processing.go:74] worker 63 stopped.
I0419 19:00:26.795159 1 tasks_processing.go:74] worker 39 stopped.
I0419 19:00:26.793977 1 tasks_processing.go:71] worker 4 working on machine_sets task.
I0419 19:00:26.792289 1 tasks_processing.go:71] worker 38 working on active_alerts task.
W0419 19:00:26.795466 1 gather_active_alerts.go:54] Unable to load alerts client, no alerts will be collected: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory
I0419 19:00:26.795482 1 tasks_processing.go:74] worker 38 stopped.
I0419 19:00:26.793982 1 tasks_processing.go:71] worker 62 working on openshift_logging task.
I0419 19:00:26.793985 1 tasks_processing.go:71] worker 8 working on aggregated_monitoring_cr_names task.
I0419 19:00:26.793989 1 tasks_processing.go:71] worker 14 working on sap_pods task.
I0419 19:00:26.793238 1 tasks_processing.go:71] worker 32 working on sap_config task.
I0419 19:00:26.793976 1 tasks_processing.go:71] worker 17 working on openstack_dataplanenodesets task.
I0419 19:00:26.795543 1 gather.go:177] gatherer "clusterconfig" function "active_alerts" took 155.612µs to process 0 records
I0419 19:00:26.797326 1 tasks_processing.go:74] worker 45 stopped.
I0419 19:00:26.797337 1 gather.go:177] gatherer "clusterconfig" function "ceph_cluster" took 4.95836ms to process 0 records
I0419 19:00:26.799736 1 tasks_processing.go:74] worker 0 stopped.
I0419 19:00:26.799749 1 gather.go:177] gatherer "clusterconfig" function "container_runtime_configs" took 7.42828ms to process 0 records
I0419 19:00:26.799900 1 gather_logs.go:145] no pods in namespace were found
I0419 19:00:26.799916 1 tasks_processing.go:74] worker 29 stopped.
I0419 19:00:26.799922 1 gather.go:177] gatherer "clusterconfig" function "qemu_kubevirt_launcher_logs" took 7.613524ms to process 0 records
I0419 19:00:26.799931 1 gather.go:177] gatherer "clusterconfig" function "lokistack" took 7.396856ms to process 0 records
I0419 19:00:26.799937 1 tasks_processing.go:74] worker 59 stopped.
I0419 19:00:26.799951 1 controller.go:128] Initializing last reported time to 0001-01-01T00:00:00Z
I0419 19:00:26.799964 1 controller.go:212] Source periodic-workloads *controllerstatus.Simple is not ready
I0419 19:00:26.799973 1 controller.go:212] Source periodic-clusterconfig *controllerstatus.Simple is not ready
I0419 19:00:26.799981 1 controller.go:212] Source periodic-conditional *controllerstatus.Simple is not ready
I0419 19:00:26.799995 1 controller.go:489] The operator is still being initialized
I0419 19:00:26.800002 1 controller.go:512] The operator is healthy
I0419 19:00:26.800013 1 tasks_processing.go:74] worker 30 stopped.
I0419 19:00:26.800027 1 gather.go:177] gatherer "clusterconfig" function "storage_cluster" took 7.700946ms to process 0 records
I0419 19:00:26.800043 1 gather.go:177] gatherer "clusterconfig" function "sap_datahubs" took 7.558772ms to process 0 records
I0419 19:00:26.800053 1 tasks_processing.go:74] worker 15 stopped.
I0419 19:00:26.800126 1 tasks_processing.go:74] worker 40 stopped.
I0419 19:00:26.800438 1 recorder.go:75] Recording config/authentication with fingerprint=14acbb4aca3d572145aacebc4831e0e41a2bab681041b4c4fbd5f5b652438d75
I0419 19:00:26.800455 1 gather.go:177] gatherer "clusterconfig" function "authentication" took 7.749572ms to process 1 records
E0419 19:00:26.800468 1 gather.go:140] gatherer "clusterconfig" function "support_secret" failed with the error: secrets "support" not found
I0419 19:00:26.800478 1 gather.go:177] gatherer "clusterconfig" function "support_secret" took 7.185393ms to process 0 records
I0419 19:00:26.800485 1 gather.go:177] gatherer "clusterconfig" function "cost_management_metrics_configs" took 7.762013ms to process 0 records
I0419 19:00:26.800533 1 tasks_processing.go:74] worker 6 stopped.
I0419 19:00:26.800543 1 tasks_processing.go:74] worker 48 stopped.
I0419 19:00:26.800600 1 tasks_processing.go:74] worker 26 stopped.
I0419 19:00:26.800625 1 recorder.go:75] Recording config/apiserver with fingerprint=8452e5a225d043602ca1e7874de8e144e26bac1b47442397d0a7b87b4a9ff74c
I0419 19:00:26.800635 1 gather.go:177] gatherer "clusterconfig" function "cluster_apiserver" took 7.782643ms to process 1 records
E0419 19:00:26.800647 1 gather.go:140] gatherer "clusterconfig" function "machine_healthchecks" failed with the error: machinehealthchecks.machine.openshift.io is forbidden: User "system:serviceaccount:openshift-insights:gather" cannot list resource "machinehealthchecks" in API group "machine.openshift.io" at the cluster scope
I0419 19:00:26.800656 1 gather.go:177] gatherer "clusterconfig" function "machine_healthchecks" took 7.700534ms to process 0 records
I0419 19:00:26.800712 1 tasks_processing.go:74] worker 18 stopped.
I0419 19:00:26.800717 1 recorder.go:75] Recording config/proxy with fingerprint=6385683bd09c1b1787e2493706c448b5108a5d9b0d95d18859c4563aa7f56d38
I0419 19:00:26.800720 1 tasks_processing.go:74] worker 21 stopped.
I0419 19:00:26.800725 1 gather.go:177] gatherer "clusterconfig" function "proxies" took 7.622977ms to process 1 records
I0419 19:00:26.800837 1 tasks_processing.go:74] worker 27 stopped.
I0419 19:00:26.800902 1 recorder.go:75] Recording config/ingress with fingerprint=371b213ba9b4a291d9361d03131fb82289fb3e6291bcb5b4b4392a9222889a1e
I0419 19:00:26.800912 1 gather.go:177] gatherer "clusterconfig" function "ingress" took 8.196808ms to process 1 records
I0419 19:00:26.800993 1 tasks_processing.go:74] worker 41 stopped.
I0419 19:00:26.801491 1 recorder.go:75] Recording config/clusteroperator/imageregistry.operator.openshift.io/config/cluster with fingerprint=a4c0eba277b1e2936cbaf72ae4efc8a24ebbd79f7bedbaf377086538bdec2ae9
I0419 19:00:26.801504 1 gather.go:177] gatherer "clusterconfig" function "image_registries" took 8.371301ms to process 1 records
I0419 19:00:26.805793 1 tasks_processing.go:74] worker 5 stopped.
I0419 19:00:26.805937 1 recorder.go:75] Recording config/featuregate with fingerprint=71e20fbe69ab210e7a9aae74a8ccd6ed87a9ad7373241a06b60d1f3173f083c3
I0419 19:00:26.805952 1 gather.go:177] gatherer "clusterconfig" function "feature_gates" took 13.353283ms to process 1 records
I0419 19:00:26.813531 1 tasks_processing.go:74] worker 2 stopped.
E0419 19:00:26.813550 1 gather.go:140] gatherer "clusterconfig" function "machines" failed with the error: machines.machine.openshift.io is forbidden: User "system:serviceaccount:openshift-insights:gather" cannot list resource "machines" in API group "machine.openshift.io" at the cluster scope
I0419 19:00:26.813560 1 gather.go:177] gatherer "clusterconfig" function "machines" took 20.27927ms to process 0 records
I0419 19:00:26.813608 1 tasks_processing.go:74] worker 4 stopped.
I0419 19:00:26.813619 1 gather.go:177] gatherer "clusterconfig" function "machine_sets" took 18.414038ms to process 0 records
I0419 19:00:26.813628 1 gather.go:177] gatherer "clusterconfig" function "openstack_version" took 19.157458ms to process 0 records
I0419 19:00:26.813633 1 gather.go:177] gatherer "clusterconfig" function "openstack_dataplanenodesets" took 17.670934ms to process 0 records
I0419 19:00:26.813636 1 tasks_processing.go:74] worker 34 stopped.
I0419 19:00:26.813638 1 gather.go:177] gatherer "clusterconfig" function "openstack_controlplanes" took 19.381522ms to process 0 records
I0419 19:00:26.813649 1 tasks_processing.go:74] worker 17 stopped.
I0419 19:00:26.813645 1 tasks_processing.go:74] worker 31 stopped.
I0419 19:00:26.813660 1 tasks_processing.go:74] worker 44 stopped.
I0419 19:00:26.813779 1 recorder.go:75] Recording config/storage/storageclasses/gp2-csi with fingerprint=0a3e29b99e16ded5ebd0762b14fb592ddebedb6eb656f869572f37268cfef632
I0419 19:00:26.813817 1 recorder.go:75] Recording config/storage/storageclasses/gp3-csi with fingerprint=3324bad4c1519024308ee75f5dce308573453d1a02479c691d0d8ad6ea546bd3
I0419 19:00:26.813825 1 gather.go:177] gatherer "clusterconfig" function "storage_classes" took 19.36536ms to process 2 records
I0419 19:00:26.813901 1 tasks_processing.go:74] worker 13 stopped.
I0419 19:00:26.814051 1 recorder.go:75] Recording config/mutatingwebhookconfigurations/aws-pod-identity with fingerprint=ec7485ffc4b975f2fee0ad0fb621a8eba7498a064d2dce03951f62e6d224f847
I0419 19:00:26.814093 1 recorder.go:75] Recording config/mutatingwebhookconfigurations/sre-podimagespec-mutation with fingerprint=22e2b521110bdc983f401232bfc8359d46b788cf1167255b132953990a22448d
I0419 19:00:26.814125 1 recorder.go:75] Recording config/mutatingwebhookconfigurations/sre-service-mutation with fingerprint=2b7effd496cffa8b7c1cd287a4d530958bf4c10f72adcddbe8b96471c44b24d5
I0419 19:00:26.814136 1 gather.go:177] gatherer "clusterconfig" function "mutating_webhook_configurations" took 20.87996ms to process 3 records
I0419 19:00:26.814148 1 gather.go:177] gatherer "clusterconfig" function "jaegers" took 21.250817ms to process 0 records
I0419 19:00:26.814153 1 gather.go:177] gatherer "clusterconfig" function "machine_autoscalers" took 20.518785ms to process 0 records
I0419 19:00:26.814158 1 gather.go:177] gatherer "clusterconfig" function "node_logs" took 21.361638ms to process 0 records
I0419 19:00:26.814162 1 tasks_processing.go:74] worker 58 stopped.
I0419 19:00:26.814165 1 tasks_processing.go:74] worker 54 stopped.
I0419 19:00:26.814173 1 tasks_processing.go:74] worker 22 stopped.
I0419 19:00:26.815211 1 tasks_processing.go:74] worker 62 stopped.
I0419 19:00:26.815223 1 gather.go:177] gatherer "clusterconfig" function "openshift_logging" took 19.70475ms to process 0 records
I0419 19:00:26.815235 1 tasks_processing.go:74] worker 32 stopped.
I0419 19:00:26.815249 1 gather.go:177] gatherer "clusterconfig" function "sap_config" took 19.3769ms to process 0 records
I0419 19:00:26.815262 1 gather.go:177] gatherer "clusterconfig" function "sap_pods" took 19.442651ms to process 0 records
I0419 19:00:26.815270 1 gather.go:177] gatherer "clusterconfig" function "nodenetworkstates" took 22.338993ms to process 0 records
I0419 19:00:26.815276 1 tasks_processing.go:74] worker 50 stopped.
I0419 19:00:26.815281 1 tasks_processing.go:74] worker 14 stopped.
I0419 19:00:26.815454 1 tasks_processing.go:74] worker 25 stopped.
I0419 19:00:26.815558 1 recorder.go:75] Recording config/image with fingerprint=d7ab0f6a89e2ce775f29a6905478e196634dbe9e149da56207b3caab42f095f9
I0419 19:00:26.815573 1 gather.go:177] gatherer "clusterconfig" function "image" took 22.208246ms to process 1 records
I0419 19:00:26.815700 1 tasks_processing.go:74] worker 10 stopped.
I0419 19:00:26.815867 1 recorder.go:75] Recording config/oauth with fingerprint=b37f2371f9c4235b1f1f69055586e114741fcd3f6fde171f8bee086ef0dac7a0
I0419 19:00:26.815895 1 gather.go:177] gatherer "clusterconfig" function "oauths" took 22.106896ms to process 1 records
I0419 19:00:26.817143 1 tasks_processing.go:74] worker 11 stopped.
I0419 19:00:26.817159 1 gather.go:177] gatherer "clusterconfig" function "certificate_signing_requests" took 24.045656ms to process 0 records
W0419 19:00:26.817394 1 gather_dvo_metrics.go:210] Failed to read the DVO metrics. Trying again.
I0419 19:00:26.823930 1 tasks_processing.go:74] worker 20 stopped.
I0419 19:00:26.823944 1 gather.go:177] gatherer "clusterconfig" function "openshift_machine_api_events" took 30.77151ms to process 0 records
I0419 19:00:26.824035 1 tasks_processing.go:74] worker 51 stopped.
I0419 19:00:26.824307 1 recorder.go:75] Recording config/node/ip-10-0-0-242.ec2.internal with fingerprint=b932e6a668fceb6d4fa5a3ce362848eb2984421c413b636eae56369a94ee2dae
I0419 19:00:26.824365 1 recorder.go:75] Recording config/node/ip-10-0-1-72.ec2.internal with fingerprint=46fd778bdc2bbcbff05aedb744b7631ea282704f7ad30cd499e0be5cbf792879
I0419 19:00:26.824420 1 recorder.go:75] Recording config/node/ip-10-0-2-31.ec2.internal with fingerprint=7a336162467a2980ce97ff1a7fdfa9addb90f86821deaeeea810614705f34e25
I0419 19:00:26.824427 1 gather.go:177] gatherer "clusterconfig" function "nodes" took 30.515915ms to process 3 records
I0419 19:00:26.824498 1 tasks_processing.go:74] worker 61 stopped.
I0419 19:00:26.824837 1 recorder.go:75] Recording config/infrastructure with fingerprint=2144e8d4338b09644a9a611b1935ae765cf5afc64240ad0e092c0dbe86033393
I0419 19:00:26.824845 1 gather.go:177] gatherer "clusterconfig" function "infrastructures" took 31.017817ms to process 1 records
I0419 19:00:26.824899 1 tasks_processing.go:74] worker 60 stopped.
I0419 19:00:26.824937 1 recorder.go:75] Recording config/pdbs/openshift-image-registry/image-registry with fingerprint=10b57521101da4ecad8fc2740cc517baafca3b1cbda40e8fd098a24caecf6495
I0419 19:00:26.824956 1 recorder.go:75] Recording config/pdbs/openshift-ingress/router-default with fingerprint=7ad1331e9d15ebdef45551e18554fd590dabb76e6b4574fc7cfc701d4010da84
I0419 19:00:26.824974 1 recorder.go:75] Recording config/pdbs/openshift-operator-lifecycle-manager/packageserver-pdb with fingerprint=bf7fadfb3cefc580cfd26770f980f4d11443ea308a2f74d640226dda724def8a
I0419 19:00:26.824984 1 gather.go:177] gatherer "clusterconfig" function "pdbs" took 30.497218ms to process 3 records
I0419 19:00:26.825075 1 tasks_processing.go:74] worker 42 stopped.
I0419 19:00:26.825104 1 recorder.go:75] Recording config/network with fingerprint=1d828d8323b57c5d454ede6c39ee7e4930dbf718fca228a01763dd23a4e21327
I0419 19:00:26.825113 1 gather.go:177] gatherer "clusterconfig" function "networks" took 31.040805ms to process 1 records
I0419 19:00:26.829590 1 tasks_processing.go:74] worker 3 stopped.
I0419 19:00:26.829672 1 recorder.go:75] Recording config/schedulers/cluster with fingerprint=0971c7ccc13c786c52e228a495328f65282d93b7a9cb43ecf824db351521a605
I0419 19:00:26.829687 1 gather.go:177] gatherer "clusterconfig" function "schedulers" took 35.756272ms to process 1 records
I0419 19:00:26.832298 1 tasks_processing.go:74] worker 47 stopped.
I0419 19:00:26.832314 1 gather.go:177] gatherer "clusterconfig" function "monitoring_persistent_volumes" took 38.305043ms to process 0 records
I0419 19:00:26.832686 1 cluster_transfer.go:83] checking the availability of cluster transfer. Next check is in 12h0m0s
I0419 19:00:26.832689 1 sca.go:136] Pulling SCA certificates from https://api.openshift.com/api/accounts_mgmt/v1/entitlement_certificates. Next check is in 8h0m0s
W0419 19:00:26.832742 1 operator.go:288] started
I0419 19:00:26.832774 1 base_controller.go:76] Waiting for caches to sync for LoggingSyncer
I0419 19:00:26.833984 1 tasks_processing.go:74] worker 7 stopped.
I0419 19:00:26.834174 1 recorder.go:75] Recording cluster-scoped-resources/rbac.authorization.k8s.io/clusterroles/admin with fingerprint=91d782de64b89e256a2dd69385ff71fc889770e132e52fc82bbc2e0c30d9c796
I0419 19:00:26.834266 1 recorder.go:75] Recording cluster-scoped-resources/rbac.authorization.k8s.io/clusterroles/edit with fingerprint=ff8cd54ca35f403cab5c41d1099715572496e192ad64b3d64ef3b79cfe4a7daf
I0419 19:00:26.834277 1 gather.go:177] gatherer "clusterconfig" function "clusterroles" took 40.853161ms to process 2 records
I0419 19:00:26.834362 1 tasks_processing.go:74] worker 16 stopped.
I0419 19:00:26.834382 1 recorder.go:75] Recording config/validatingwebhookconfigurations/multus.openshift.io with fingerprint=2ff7f15fe4d6c53fec8f0bc8d8691b9b8604589fa00fa0ee35a425bff7a07b67
I0419 19:00:26.834457 1 recorder.go:75] Recording config/validatingwebhookconfigurations/network-node-identity.openshift.io with fingerprint=891d6d1e796399a439ad9bc10354ee3b9f5d739bf9d111618095dc5ac4f4dbcf
I0419 19:00:26.834475 1 recorder.go:75] Recording config/validatingwebhookconfigurations/performance-addon-operator with fingerprint=fd1c7306404a417a8421501a6c18ba00435a4bc00a90d85a1bcea5616a801222
I0419 19:00:26.834499 1 recorder.go:75] Recording config/validatingwebhookconfigurations/sre-clusterrolebindings-validation with fingerprint=51b0eab2d6c4a531e27aa4e9dc1261a6ab1e8da7329ddf366a0ab7217b3fc837
I0419 19:00:26.834529 1 recorder.go:75] Recording config/validatingwebhookconfigurations/sre-clusterroles-validation with fingerprint=3f53679f61e1ac59053dfc6a7b8706548194c3de0ad532ef7ce24b869b2edf52
I0419 19:00:26.834558 1 recorder.go:75] Recording config/validatingwebhookconfigurations/sre-ingress-config-validation with fingerprint=1e3bfb911509fbba0b1291d74c2823278c1ee8a8f70bd3a7b752b85eac0ec61a
I0419 19:00:26.834582 1 recorder.go:75] Recording config/validatingwebhookconfigurations/sre-network-operator-validation with fingerprint=fa8c08d0e87bdd19a68dc1d7511d0f32306c787199924dab52c912a419b9acbb
I0419 19:00:26.834613 1 recorder.go:75] Recording config/validatingwebhookconfigurations/sre-regular-user-validation with fingerprint=33cb38dd7547b6f09f16f331536426a63017a40a4dfba0d3b709700be3172565
I0419 19:00:26.834635 1 recorder.go:75] Recording config/validatingwebhookconfigurations/sre-scc-validation with fingerprint=0c4461fc08601cb016b95e3fa557714fdda9498af1008add978e973d559b47fe
I0419 19:00:26.834656 1 recorder.go:75] Recording config/validatingwebhookconfigurations/sre-serviceaccount-validation with fingerprint=3b9a096ea9e0a3b2f3473033073ef53188839ee92e5fabcbfa8f41aea21ae6d0
I0419 19:00:26.834680 1 recorder.go:75] Recording config/validatingwebhookconfigurations/sre-techpreviewnoupgrade-validation with fingerprint=5bd9477de82ebd35ba6269836f74ddb69982e8af72ffb1c6ef7d697e5aa5d39d
I0419 19:00:26.834687 1 gather.go:177] gatherer "clusterconfig" function "validating_webhook_configurations" took 41.597553ms to process 11 records
I0419 19:00:26.835026 1 tasks_processing.go:74] worker 53 stopped.
I0419 19:00:26.835053 1 recorder.go:75] Recording config/namespaces_with_overlapping_uids with fingerprint=4f53cda18c2baa0c0354bb5f9a3ecbe5ed12ab4d8e11ba873c2f11161202b945
I0419 19:00:26.835065 1 gather.go:177] gatherer "clusterconfig" function "overlapping_namespace_uids" took 41.557095ms to process 1 records
I0419 19:00:26.846273 1 tasks_processing.go:74] worker 55 stopped.
I0419 19:00:26.847619 1 recorder.go:75] Recording config/crd/volumesnapshots.snapshot.storage.k8s.io with fingerprint=cc29375c69f25939c6099db026143f88a6a2052beeff57d9d23a10623ef55721
I0419 19:00:26.847857 1 recorder.go:75] Recording config/crd/volumesnapshotcontents.snapshot.storage.k8s.io with fingerprint=4b220a844ccd30e3f2734621413012ec898522bc9488ab4bc028f45c566366e5
I0419 19:00:26.847867 1 gather.go:177] gatherer "clusterconfig" function "crds" took 51.899156ms to process 2 records
I0419 19:00:26.848159 1 tasks_processing.go:74] worker 12 stopped.
I0419 19:00:26.848213 1 recorder.go:75] Recording config/running_containers with fingerprint=ca11b47f35fe83d64b9fc22edc3314b4b6210067e7d8757743610372150c437b
I0419 19:00:26.848224 1 gather.go:177] gatherer "clusterconfig" function "container_images" took 53.438662ms to process 1 records
I0419 19:00:26.850564 1 tasks_processing.go:74] worker 37 stopped.
I0419 19:00:26.850576 1 gather.go:177] gatherer "clusterconfig" function "number_of_pods_and_netnamespaces_with_sdn_annotations" took 56.260534ms to process 0 records
I0419 19:00:26.856731 1 controller.go:212] Source scaController *sca.Controller is not ready
I0419 19:00:26.856748 1 controller.go:212] Source clusterTransferController *clustertransfer.Controller is not ready
I0419 19:00:26.856753 1 controller.go:212] Source periodic-clusterconfig *controllerstatus.Simple is not ready
I0419 19:00:26.856757 1 controller.go:212] Source periodic-conditional *controllerstatus.Simple is not ready
I0419 19:00:26.856760 1 controller.go:212] Source periodic-workloads *controllerstatus.Simple is not ready
I0419 19:00:26.856781 1 controller.go:489] The operator is still being initialized
I0419 19:00:26.856789 1 controller.go:512] The operator is healthy
I0419 19:00:26.858615 1 tasks_processing.go:74] worker 49 stopped.
I0419 19:00:26.858866 1 prometheus_rules.go:88] Prometheus rules successfully created
I0419 19:00:26.858921 1 recorder.go:75] Recording config/version with fingerprint=6d9d9625615c2583e4810755e868e53e924e18f6bb1f76575354966191639afd
I0419 19:00:26.858934 1 recorder.go:75] Recording config/id with fingerprint=d6bdd73db97cb8c68c31bc0e13f81cee2ea50d0b714693ef481b02c92162fafa
I0419 19:00:26.858940 1 gather.go:177] gatherer "clusterconfig" function "version" took 65.928179ms to process 2 records
E0419 19:00:26.862835 1 cluster_transfer.go:95] failed to pull cluster transfer: unable to retrieve cluster transfer data from https://api.openshift.com/api/accounts_mgmt/v1/cluster_transfers/: Get "https://api.openshift.com/api/accounts_mgmt/v1/cluster_transfers/?search=cluster_uuid+is+%273d961f82-3262-4ae5-995e-19bb8b9aaf8a%27+and+status+is+%27accepted%27": dial tcp: lookup api.openshift.com on 172.30.0.10:53: read udp 10.129.0.12:35155->172.30.0.10:53: read: connection refused
I0419 19:00:26.862846 1 controllerstatus.go:80] name=clusterTransferController healthy=true reason=Disconnected message=failed to pull cluster transfer: unable to retrieve cluster transfer data from https://api.openshift.com/api/accounts_mgmt/v1/cluster_transfers/: Get "https://api.openshift.com/api/accounts_mgmt/v1/cluster_transfers/?search=cluster_uuid+is+%273d961f82-3262-4ae5-995e-19bb8b9aaf8a%27+and+status+is+%27accepted%27": dial tcp: lookup api.openshift.com on 172.30.0.10:53: read udp 10.129.0.12:35155->172.30.0.10:53: read: connection refused
I0419 19:00:26.864230 1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0419 19:00:26.864235 1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I0419 19:00:26.864304 1 shared_informer.go:320] Caches are synced for RequestHeaderAuthRequestController
I0419 19:00:26.871433 1 base_controller.go:82] Caches are synced for ConfigController
I0419 19:00:26.871445 1 base_controller.go:119] Starting #1 worker of ConfigController controller ...
I0419 19:00:26.874377 1 tasks_processing.go:74] worker 52 stopped.
E0419 19:00:26.874390 1 gather.go:140] gatherer "clusterconfig" function "config_maps" failed with the error: configmaps "cluster-monitoring-config" not found
E0419 19:00:26.874395 1 gather.go:140] gatherer "clusterconfig" function "config_maps" failed with the error: configmaps "gateway-mode-config" not found
E0419 19:00:26.874399 1 gather.go:140] gatherer "clusterconfig" function "config_maps" failed with the error: configmaps "insights-config" not found
I0419 19:00:26.874408 1 recorder.go:75] Recording config/configmaps/openshift-config/installer-images/images.json with fingerprint=26b6661162b099a0f5a279859b4f46c867929a79d9a4a41fde4be4e6fe138018
I0419 19:00:26.874429 1 recorder.go:75] Recording config/configmaps/openshift-config/kube-root-ca.crt/ca.crt with fingerprint=d476c7d3f5b104863f08f481b1264dcc68cc272ecefb0ecb709b18a6afab034d
I0419 19:00:26.874435 1 recorder.go:75] Recording config/configmaps/openshift-config/openshift-install/invoker with fingerprint=76b482f683cd3ef9da02debac5b26080a5aeb06ff768ee5c21117514dff29d8a
I0419 19:00:26.874440 1 recorder.go:75] Recording config/configmaps/openshift-config/openshift-install/version with fingerprint=c93090eb0d2a4736885abeb79c91680cfd01fda46464f83456b085d4dc8239f0
I0419 19:00:26.874444 1 recorder.go:75] Recording config/configmaps/openshift-config/openshift-service-ca.crt/service-ca.crt with fingerprint=e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
I0419 19:00:26.874489 1 recorder.go:75] Recording config/configmaps/openshift-config/rosa-brand-logo/rosa-brand-logo.svg with fingerprint=6ed8ca4dd7a8eee7249182bc006e9649ce84d76c551ddfaaa33e55d8c4cc1ed0
I0419 19:00:26.874500 1 recorder.go:75] Recording config/configmaps/kube-system/cluster-config-v1/install-config with fingerprint=ab3811c6b83fd7b8e920094cfa3080d1b4ee3c35ec4c8379437b21d27bd6608d
I0419 19:00:26.874505 1 gather.go:177] gatherer "clusterconfig" function "config_maps" took 81.278399ms to process 7 records
I0419 19:00:26.877166 1 configmapobserver.go:84] configmaps "insights-config" not found
I0419 19:00:26.877299 1 tasks_processing.go:74] worker 57 stopped.
I0419 19:00:26.877372 1 recorder.go:75] Recording config/clusteroperator/imageregistry.operator.openshift.io/imagepruner/cluster with fingerprint=ee78f00a21f90085c3ab957e9b42f15bb4f16ffe8b9dc74c8113cd87622cb17d
I0419 19:00:26.877383 1 gather.go:177] gatherer "clusterconfig" function "image_pruners" took 83.295073ms to process 1 records
I0419 19:00:26.908824 1 tasks_processing.go:74] worker 24 stopped.
I0419 19:00:26.908847 1 recorder.go:75] Recording aggregated/unused_machine_configs_count with fingerprint=4bfc9fa984e5dfcd45848faaf05269de7619bf42edf9f781751af5ee05c1a499
I0419 19:00:26.908858 1 gather.go:177] gatherer "clusterconfig" function "machine_configs" took 116.077503ms to process 1 records
I0419 19:00:26.914655 1 requests.go:205] Asking for SCA certificate with "{"arch": ["x86_64"]}" payload
I0419 19:00:26.916511 1 tasks_processing.go:74] worker 8 stopped.
I0419 19:00:26.916524 1 gather.go:177] gatherer "clusterconfig" function "aggregated_monitoring_cr_names" took 120.922426ms to process 0 records
W0419 19:00:26.919699 1 sca.go:161] Failed to pull SCA certs from https://api.openshift.com/api/accounts_mgmt/v1/entitlement_certificates: unable to retrieve SCA certs data from https://api.openshift.com/api/accounts_mgmt/v1/entitlement_certificates: Post "https://api.openshift.com/api/accounts_mgmt/v1/entitlement_certificates": dial tcp: lookup api.openshift.com on 172.30.0.10:53: read udp 10.129.0.12:37294->172.30.0.10:53: read: connection refused
I0419 19:00:26.919710 1 controllerstatus.go:80] name=scaController healthy=true reason=NonHTTPError message=Failed to pull SCA certs from https://api.openshift.com/api/accounts_mgmt/v1/entitlement_certificates: unable to retrieve SCA certs data from https://api.openshift.com/api/accounts_mgmt/v1/entitlement_certificates: Post "https://api.openshift.com/api/accounts_mgmt/v1/entitlement_certificates": dial tcp: lookup api.openshift.com on 172.30.0.10:53: read udp 10.129.0.12:37294->172.30.0.10:53: read: connection refused
I0419 19:00:26.933556 1 base_controller.go:82] Caches are synced for LoggingSyncer
I0419 19:00:26.933579 1 base_controller.go:119] Starting #1 worker of LoggingSyncer controller ...
I0419 19:00:26.952954 1 tasks_processing.go:74] worker 36 stopped.
I0419 19:00:26.952974 1 gather.go:177] gatherer "clusterconfig" function "machine_config_pools" took 157.811794ms to process 0 records
I0419 19:00:26.958432 1 tasks_processing.go:74] worker 23 stopped.
E0419 19:00:26.958445 1 gather.go:140] gatherer "clusterconfig" function "ingress_certificates" failed with the error: failed to get secret 'router-certs-default' in namespace 'openshift-ingress': secrets "router-certs-default" not found
E0419 19:00:26.958452 1 gather.go:140] gatherer "clusterconfig" function "ingress_certificates" failed with the error: failed to get secret '2ppd8cdu7k8gjuo2pt9paevfufd41bsn-primary-cert-bundle-secret' in namespace 'openshift-ingress-operator': secrets "2ppd8cdu7k8gjuo2pt9paevfufd41bsn-primary-cert-bundle-secret" not found
I0419 19:00:26.958522 1 recorder.go:75] Recording aggregated/ingress_controllers_certs with fingerprint=328b7af96f85afc03844798d4617d2f59fe6358a26c0fae308fb4c8193d378d2
I0419 19:00:26.958534 1 gather.go:177] gatherer "clusterconfig" function "ingress_certificates" took 164.346403ms to process 1 records
I0419 19:00:27.232496 1 gather_cluster_operator_pods_and_events.go:121] Found 18 pods with 21 containers
I0419 19:00:27.232510 1 gather_cluster_operator_pods_and_events.go:235] Maximum buffer size: 1198372 bytes
I0419 19:00:27.233109 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for dns container dns-default-c9qvh pod in namespace openshift-dns (previous: false).
I0419 19:00:27.464313 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for dns-default-c9qvh pod in namespace openshift-dns for failing operator dns (previous: false): "container \"dns\" in pod \"dns-default-c9qvh\" is waiting to start: ContainerCreating"
I0419 19:00:27.464334 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"dns\" in pod \"dns-default-c9qvh\" is waiting to start: ContainerCreating"
I0419 19:00:27.464345 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for kube-rbac-proxy container dns-default-c9qvh pod in namespace openshift-dns (previous: false).
I0419 19:00:27.641212 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for dns-default-c9qvh pod in namespace openshift-dns for failing operator kube-rbac-proxy (previous: false): "container \"kube-rbac-proxy\" in pod \"dns-default-c9qvh\" is waiting to start: ContainerCreating"
I0419 19:00:27.641232 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"kube-rbac-proxy\" in pod \"dns-default-c9qvh\" is waiting to start: ContainerCreating"
I0419 19:00:27.641243 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for dns container dns-default-s2v68 pod in namespace openshift-dns (previous: false).
W0419 19:00:27.818365 1 gather_dvo_metrics.go:210] Failed to read the DVO metrics. Trying again.
I0419 19:00:27.840903 1 tasks_processing.go:74] worker 28 stopped.
I0419 19:00:27.840917 1 gather.go:177] gatherer "clusterconfig" function "olm_operators" took 1.048373532s to process 0 records
I0419 19:00:27.862042 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for dns-default-s2v68 pod in namespace openshift-dns for failing operator dns (previous: false): "container \"dns\" in pod \"dns-default-s2v68\" is waiting to start: ContainerCreating"
I0419 19:00:27.862060 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"dns\" in pod \"dns-default-s2v68\" is waiting to start: ContainerCreating"
I0419 19:00:27.862076 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for kube-rbac-proxy container dns-default-s2v68 pod in namespace openshift-dns (previous: false).
I0419 19:00:28.037734 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for dns-default-s2v68 pod in namespace openshift-dns for failing operator kube-rbac-proxy (previous: false): "container \"kube-rbac-proxy\" in pod \"dns-default-s2v68\" is waiting to start: ContainerCreating"
I0419 19:00:28.037748 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"kube-rbac-proxy\" in pod \"dns-default-s2v68\" is waiting to start: ContainerCreating"
I0419 19:00:28.037761 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for dns container dns-default-wtx56 pod in namespace openshift-dns (previous: false).
I0419 19:00:28.262329 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for dns-default-wtx56 pod in namespace openshift-dns for failing operator dns (previous: false): "container \"dns\" in pod \"dns-default-wtx56\" is waiting to start: ContainerCreating"
I0419 19:00:28.262344 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"dns\" in pod \"dns-default-wtx56\" is waiting to start: ContainerCreating"
I0419 19:00:28.262352 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for kube-rbac-proxy container dns-default-wtx56 pod in namespace openshift-dns (previous: false).
I0419 19:00:28.265906 1 gather_cluster_operators.go:184] Unable to get configs.samples.operator.openshift.io resource due to: configs.samples.operator.openshift.io "cluster" not found
I0419 19:00:28.439586 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for dns-default-wtx56 pod in namespace openshift-dns for failing operator kube-rbac-proxy (previous: false): "container \"kube-rbac-proxy\" in pod \"dns-default-wtx56\" is waiting to start: ContainerCreating"
I0419 19:00:28.439600 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"kube-rbac-proxy\" in pod \"dns-default-wtx56\" is waiting to start: ContainerCreating"
I0419 19:00:28.439610 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for dns-node-resolver container node-resolver-6lm45 pod in namespace openshift-dns (previous: false).
I0419 19:00:28.641679 1 gather_cluster_operator_pods_and_events.go:280] Error: "log buffer is empty"
I0419 19:00:28.641695 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for dns-node-resolver container node-resolver-7h9gs pod in namespace openshift-dns (previous: false).
I0419 19:00:28.650328 1 tasks_processing.go:74] worker 9 stopped.
I0419 19:00:28.650381 1 recorder.go:75] Recording config/clusteroperator/console with fingerprint=c4be9fd60a45febde89719fe5873fa36db8bef8329f170140b364c1136767ac0
I0419 19:00:28.650413 1 recorder.go:75] Recording config/clusteroperator/csi-snapshot-controller with fingerprint=ff7051bc1ac15744b87682d715364d0be115236418f350583cc2ce46931e9320
I0419 19:00:28.650441 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/csisnapshotcontroller/cluster with fingerprint=5adc514f4b63e2f1ecc68bf6f9c0af70c5eea04522a49524e102721b1c41f80e
I0419 19:00:28.650467 1 recorder.go:75] Recording config/clusteroperator/dns with fingerprint=c14de0b6c559f5f28d8d281234ef90f806b765dbd21ad1a80634ae014d3c3d6a
I0419 19:00:28.650483 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/dns/default with fingerprint=9e7b4ce029030d3d8c3b49af92c556acdcc415000b40d3f969dbdc42c432b47f
I0419 19:00:28.650505 1 recorder.go:75] Recording config/clusteroperator/image-registry with fingerprint=b57c6b84dacc91c9b9ef2dd1afcec093ee1cde0101b38a46a80ab9e74f05f05a
I0419 19:00:28.650549 1 recorder.go:75] Recording config/clusteroperator/ingress with fingerprint=5ada8a4360f2566d5c15946f0e0a25d0a3b9455175fba2a903dcb926b34088f5
I0419 19:00:28.650573 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/ingresscontroller/openshift-ingress-operator/default with fingerprint=c43c1b7fa211239d1a73e193201d6d320a44b082b3254b405b2ce7a38bf4e730
I0419 19:00:28.650588 1 recorder.go:75] Recording config/clusteroperator/insights with fingerprint=5e12e5e2b3b9127fcc6d033b17950df256a3fe322170b9b2afb8c4d9cf55c81c
I0419 19:00:28.650607 1 recorder.go:75] Recording config/clusteroperator/kube-apiserver with fingerprint=35378f0ae0cef35def6d0edd16c3079ab1eb5ad4dffbbf3c901a218efb6fb5f9
I0419 19:00:28.650617 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/kubeapiserver/cluster with fingerprint=51503bf0b784fcf65ea46bcaf1f72ac1a5c4d5dc211934f18f27871efed05762
I0419 19:00:28.650632 1 recorder.go:75] Recording config/clusteroperator/kube-controller-manager with fingerprint=dbfad3d9f5bdd3011c028f736259fbca48e253acd3b2a4c25c596e4b7e6f046c
I0419 19:00:28.650643 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/kubecontrollermanager/cluster with fingerprint=ce90c0d4f367d7da085074268031798382ae7c54fdcb0a21f15a4818fe308c11
I0419 19:00:28.650659 1 recorder.go:75] Recording config/clusteroperator/kube-scheduler with fingerprint=06bb0c6bcd9e8f598650406afcbf44bbd04e26a703cb12aaf3de14214f744450
I0419 19:00:28.650668 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/kubescheduler/cluster with fingerprint=f2940fb9fd20c19951dfc295eb363b7fba0c505f5ae61f01967a063099e6b60a
I0419 19:00:28.650681 1 recorder.go:75] Recording config/clusteroperator/kube-storage-version-migrator with fingerprint=647e08937cf5a3b20cb1e480f5379c2957a6d7b8541f423b0532895ad501d6f8
I0419 19:00:28.650690 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/kubestorageversionmigrator/cluster with fingerprint=9351181aa7e6ada41ef581ab31e13516c6b934cc95710154bafb2eb222cb58db
I0419 19:00:28.650705 1 recorder.go:75] Recording config/clusteroperator/monitoring with fingerprint=3ebab5b9722769f024d50ce1affdea1b340848201f606129db840731fad6a1ec
I0419 19:00:28.650823 1 recorder.go:75] Recording config/clusteroperator/network with fingerprint=e42c80cdd1cb74a177fa87a12aaceb56c64cd62e317f598a5bd0ff390bbce7b2
I0419 19:00:28.650832 1 recorder.go:75] Recording config/clusteroperator/network.operator.openshift.io/operatorpki/openshift-ovn-kubernetes/ovn with fingerprint=626a89d20e0deaed5b6dfb533acfe65f4bb1618bd200a703b62e60c5d16d94ab
I0419 19:00:28.650839 1 recorder.go:75] Recording config/clusteroperator/network.operator.openshift.io/operatorpki/openshift-ovn-kubernetes/signer with fingerprint=90410b16914712b85b3c4578716ad8c0ae072e688f4cd1e022bf76f20da3506d
I0419 19:00:28.650858 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/network/cluster with fingerprint=a93d15eaecb455a0e40ecb2826eeecc1533899204ddd3c3921d15ab70af7ae75
I0419 19:00:28.650906 1 recorder.go:75] Recording config/clusteroperator/node-tuning with fingerprint=6d9f66577e088046e30d07a16a43e4119f6d671806c2ddf89fa88bd20b49bdfc
I0419 19:00:28.650930 1 recorder.go:75] Recording config/clusteroperator/openshift-apiserver with fingerprint=fc0e37ecb2b1081f3d8438cac29d8ea4cf222ae0dea2115aef97a533757473b6
I0419 19:00:28.650939 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/openshiftapiserver/cluster with fingerprint=e712e6cf27339b441e4ed1f4cde91dbde7e952698ba93407e4457db63a4a4c76
I0419 19:00:28.650962 1 recorder.go:75] Recording config/clusteroperator/openshift-controller-manager with fingerprint=1706de274698b8f32615fbb2d40c253de8a7755a6799c2a34f5213593196a410
I0419 19:00:28.650972 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/openshiftcontrollermanager/cluster with fingerprint=d71a0f4672f9b45d9fc8293bf1687afc650fd28d32e2e30de27523fe7b4eadf7
I0419 19:00:28.650984 1 recorder.go:75] Recording config/clusteroperator/openshift-samples with fingerprint=2fa36f4002b893121e4d3a3da014605384dc689959163c35c327d7e44e911299
I0419 19:00:28.650998 1 recorder.go:75] Recording config/clusteroperator/operator-lifecycle-manager with fingerprint=bcb4166e3fc9d59492d93de8f868e936f37676ad568fb9c9ca2b77856201cd27
I0419 19:00:28.651011 1 recorder.go:75] Recording config/clusteroperator/operator-lifecycle-manager-catalog with fingerprint=781bb7b71621881db4b75453a8e2c669e4f0394d5cf5b7d06848519cd716b5b7
I0419 19:00:28.651026 1 recorder.go:75] Recording config/clusteroperator/operator-lifecycle-manager-packageserver with fingerprint=291f7383709baab8e7edc22fd2392610f5de9338c4e0b73efe7b079c07804d5f
I0419 19:00:28.651039 1 recorder.go:75] Recording config/clusteroperator/service-ca with fingerprint=8b57adfba6eb55522a7f439595facab89d6e2d3c3ef1571d28faf17144dfdb70
I0419 19:00:28.651064 1 recorder.go:75] Recording config/clusteroperator/storage with fingerprint=86ea31cbd67c46815fe86079c9a06ed3fdd4facc397bbdfbfc616918b56571f0
I0419 19:00:28.651079 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/clustercsidriver/ebs.csi.aws.com with fingerprint=510064d6f6bcced87ab5bd2ddaff3d0edd7f93f4a4f7af2641f29fc53ffab21e
I0419 19:00:28.651087 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/storage/cluster with fingerprint=8e480f8c1ce1b39baac42d8ec780c57c2592929ae0c801b61ffad49ba13f33ad
I0419 19:00:28.651094 1 gather.go:177] gatherer "clusterconfig" function "operators" took 1.85628431s to process 35 records
W0419 19:00:28.817331 1 gather_dvo_metrics.go:210] Failed to read the DVO metrics. Trying again.
I0419 19:00:28.838217 1 gather_cluster_operator_pods_and_events.go:280] Error: "log buffer is empty"
I0419 19:00:28.838231 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for dns-node-resolver container node-resolver-dxzf2 pod in namespace openshift-dns (previous: false).
I0419 19:00:29.043656 1 gather_cluster_operator_pods_and_events.go:280] Error: "log buffer is empty"
I0419 19:00:29.043705 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for registry container image-registry-7cb577f7d4-jc6gt pod in namespace openshift-image-registry (previous: false).
I0419 19:00:29.238496 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for image-registry-7cb577f7d4-jc6gt pod in namespace openshift-image-registry for failing operator registry (previous: false): "container \"registry\" in pod \"image-registry-7cb577f7d4-jc6gt\" is waiting to start: ContainerCreating"
I0419 19:00:29.238510 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"registry\" in pod \"image-registry-7cb577f7d4-jc6gt\" is waiting to start: ContainerCreating"
I0419 19:00:29.238555 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for registry container image-registry-7cb577f7d4-pj4wj pod in namespace openshift-image-registry (previous: false).
I0419 19:00:29.437476 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for image-registry-7cb577f7d4-pj4wj pod in namespace openshift-image-registry for failing operator registry (previous: false): "container \"registry\" in pod \"image-registry-7cb577f7d4-pj4wj\" is waiting to start: ContainerCreating"
I0419 19:00:29.437488 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"registry\" in pod \"image-registry-7cb577f7d4-pj4wj\" is waiting to start: ContainerCreating"
I0419 19:00:29.437522 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for registry container image-registry-bd7dddc47-9l9qt pod in namespace openshift-image-registry (previous: false).
I0419 19:00:29.639215 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for image-registry-bd7dddc47-9l9qt pod in namespace openshift-image-registry for failing operator registry (previous: false): "container \"registry\" in pod \"image-registry-bd7dddc47-9l9qt\" is waiting to start: ContainerCreating"
I0419 19:00:29.639235 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"registry\" in pod \"image-registry-bd7dddc47-9l9qt\" is waiting to start: ContainerCreating"
I0419 19:00:29.639257 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for node-ca container node-ca-jq2zz pod in namespace openshift-image-registry (previous: false).
W0419 19:00:29.817045 1 gather_dvo_metrics.go:210] Failed to read the DVO metrics. Trying again.
I0419 19:00:29.839197 1 gather_cluster_operator_pods_and_events.go:280] Error: "log buffer is empty"
I0419 19:00:29.839211 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for node-ca container node-ca-p2sxs pod in namespace openshift-image-registry (previous: false).
I0419 19:00:30.038757 1 gather_cluster_operator_pods_and_events.go:280] Error: "log buffer is empty"
I0419 19:00:30.038769 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for node-ca container node-ca-rm8tp pod in namespace openshift-image-registry (previous: false).
I0419 19:00:30.239494 1 gather_cluster_operator_pods_and_events.go:280] Error: "log buffer is empty"
I0419 19:00:30.239509 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for router container router-default-559b47f4dd-4kxbm pod in namespace openshift-ingress (previous: false).
I0419 19:00:30.438432 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for router-default-559b47f4dd-4kxbm pod in namespace openshift-ingress for failing operator router (previous: false): "container \"router\" in pod \"router-default-559b47f4dd-4kxbm\" is waiting to start: ContainerCreating"
I0419 19:00:30.438445 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"router\" in pod \"router-default-559b47f4dd-4kxbm\" is waiting to start: ContainerCreating"
I0419 19:00:30.438458 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for router container router-default-559b47f4dd-fv9dp pod in namespace openshift-ingress (previous: false).
I0419 19:00:30.639405 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for router-default-559b47f4dd-fv9dp pod in namespace openshift-ingress for failing operator router (previous: false): "container \"router\" in pod \"router-default-559b47f4dd-fv9dp\" is waiting to start: ContainerCreating"
I0419 19:00:30.639428 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"router\" in pod \"router-default-559b47f4dd-fv9dp\" is waiting to start: ContainerCreating"
I0419 19:00:30.639440 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for router container router-default-b69c6c7f9-dkps4 pod in namespace openshift-ingress (previous: false).
W0419 19:00:30.817816 1 gather_dvo_metrics.go:210] Failed to read the DVO metrics. Trying again.
I0419 19:00:30.839088 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for router-default-b69c6c7f9-dkps4 pod in namespace openshift-ingress for failing operator router (previous: false): "container \"router\" in pod \"router-default-b69c6c7f9-dkps4\" is waiting to start: ContainerCreating"
I0419 19:00:30.839111 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"router\" in pod \"router-default-b69c6c7f9-dkps4\" is waiting to start: ContainerCreating"
I0419 19:00:30.839128 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for serve-healthcheck-canary container ingress-canary-67ntv pod in namespace openshift-ingress-canary (previous: false).
I0419 19:00:31.039272 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for ingress-canary-67ntv pod in namespace openshift-ingress-canary for failing operator serve-healthcheck-canary (previous: false): "container \"serve-healthcheck-canary\" in pod \"ingress-canary-67ntv\" is waiting to start: ContainerCreating"
I0419 19:00:31.039287 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"serve-healthcheck-canary\" in pod \"ingress-canary-67ntv\" is waiting to start: ContainerCreating"
I0419 19:00:31.039299 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for serve-healthcheck-canary container ingress-canary-fv58m pod in namespace openshift-ingress-canary (previous: false).
I0419 19:00:31.238807 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for ingress-canary-fv58m pod in namespace openshift-ingress-canary for failing operator serve-healthcheck-canary (previous: false): "container \"serve-healthcheck-canary\" in pod \"ingress-canary-fv58m\" is waiting to start: ContainerCreating"
I0419 19:00:31.238820 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"serve-healthcheck-canary\" in pod \"ingress-canary-fv58m\" is waiting to start: ContainerCreating"
I0419 19:00:31.238829 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for serve-healthcheck-canary container ingress-canary-g586t pod in namespace openshift-ingress-canary (previous: false).
I0419 19:00:31.438588 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for ingress-canary-g586t pod in namespace openshift-ingress-canary for failing operator serve-healthcheck-canary (previous: false): "container \"serve-healthcheck-canary\" in pod \"ingress-canary-g586t\" is waiting to start: ContainerCreating"
I0419 19:00:31.438599 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"serve-healthcheck-canary\" in pod \"ingress-canary-g586t\" is waiting to start: ContainerCreating"
I0419 19:00:31.438616 1 tasks_processing.go:74] worker 33 stopped.
I0419 19:00:31.438702 1 recorder.go:75] Recording events/openshift-dns-operator with fingerprint=9a7b464512390012a594debb211ccc8e81f76f64078a7f029b7b25adcf764dc4
I0419 19:00:31.438753 1 recorder.go:75] Recording events/openshift-dns with fingerprint=d4b40a240037078bad7c238d34fb3dc663fb3fe9bcd07a8569ed922f3a42faba
I0419 19:00:31.438839 1 recorder.go:75] Recording events/openshift-image-registry with fingerprint=e29336aa229d1f38b9c4e167d98a9aed9206571e354e5bff9dfbce1f0c7ec472
I0419 19:00:31.438872 1 recorder.go:75] Recording events/openshift-ingress-operator with fingerprint=c43712f6f3bb3d46f5b645ac49ef29d74adbc5cdd999064494c8100e1661b0bb
I0419 19:00:31.438939 1 recorder.go:75] Recording events/openshift-ingress with fingerprint=fb19b69f6a3e77f51972b19062583447abd4a529027beeb6991f40137ed9f672
I0419 19:00:31.438958 1 recorder.go:75] Recording events/openshift-ingress-canary with fingerprint=8224fa37f5e92c1d59cfd60bec76555aa40a571807661a17b8ff7f4c949faf30
I0419 19:00:31.440362 1 recorder.go:75] Recording config/pod/openshift-image-registry/image-registry-7cb577f7d4-jc6gt with fingerprint=f8b0dd9fdc87cab2d904eb36029949a782207d50e9bacb63ee6c1bf395aaddb3
I0419 19:00:31.440482 1 recorder.go:75] Recording config/pod/openshift-image-registry/image-registry-7cb577f7d4-pj4wj with fingerprint=ae43ad51e911bca75bc9f2ba32a4df3fbedfadcca2894d0f87fe998274a765c5
I0419 19:00:31.440594 1 recorder.go:75] Recording config/pod/openshift-image-registry/image-registry-bd7dddc47-9l9qt with fingerprint=1342db41c5648f5b5e5cdd2b2c4985262753d1b706235b1e5b399f343e67b015
I0419 19:00:31.440605 1 gather.go:177] gatherer "clusterconfig" function "operators_pods_and_events" took 4.645088538s to process 9 records
W0419 19:00:31.817682 1 gather_dvo_metrics.go:210] Failed to read the DVO metrics. Trying again.
W0419 19:00:31.817707 1 gather_dvo_metrics.go:117] Unable to read metrics from endpoint "http://deployment-validation-operator-metrics.openshift-deployment-validation-operator.svc:8383": DVO metrics service was not available within the 5s timeout: context deadline exceeded
I0419 19:00:31.817721 1 tasks_processing.go:74] worker 1 stopped.
E0419 19:00:31.817728 1 gather.go:140] gatherer "clusterconfig" function "dvo_metrics" failed with the error: DVO metrics service was not available within the 5s timeout: context deadline exceeded
I0419 19:00:31.817738 1 recorder.go:75] Recording config/dvo_metrics with fingerprint=e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
W0419 19:00:31.817750 1 gather.go:155] issue recording gatherer "clusterconfig" function "dvo_metrics" result "config/dvo_metrics" because of the warning: warning: the record with the same fingerprint "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855" was already recorded at path "config/configmaps/openshift-config/openshift-service-ca.crt/service-ca.crt", recording another one with a different path "config/dvo_metrics"
I0419 19:00:31.817768 1 gather.go:177] gatherer "clusterconfig" function "dvo_metrics" took 5.025358273s to process 1 records
I0419 19:00:39.240076 1 tasks_processing.go:74] worker 35 stopped.
I0419 19:00:39.240114 1 recorder.go:75] Recording config/installplans with fingerprint=7b887df561a3a9e6ef0dc672845aa5d56e348505006b7496d3a2f83892b0c95b
I0419 19:00:39.240126 1 gather.go:177] gatherer "clusterconfig" function "install_plans" took 12.447404376s to process 1 records
I0419 19:00:39.963523 1 configmapobserver.go:84] configmaps "insights-config" not found
I0419 19:00:39.999154 1 tasks_processing.go:74] worker 19 stopped.
I0419 19:00:39.999393 1 recorder.go:75] Recording config/serviceaccounts with fingerprint=5f3315562ab922847b80910efb7b38b1b30885aa3cdda0895bd6e02e66f43fde
I0419 19:00:39.999408 1 gather.go:177] gatherer "clusterconfig" function "service_accounts" took 13.206105399s to process 1 records
E0419 19:00:39.999463 1 periodic.go:247] "Unhandled Error" err="clusterconfig failed after 13.207s with: function \"pod_network_connectivity_checks\" failed with an error, function \"support_secret\" failed with an error, function \"machine_healthchecks\" failed with an error, function \"machines\" failed with an error, function \"config_maps\" failed with an error, function \"ingress_certificates\" failed with an error, function \"dvo_metrics\" failed with an error"
I0419 19:00:40.000558 1 controllerstatus.go:89] name=periodic-clusterconfig healthy=false reason=PeriodicGatherFailed message=Source clusterconfig could not be retrieved: function "pod_network_connectivity_checks" failed with an error, function "support_secret" failed with an error, function "machine_healthchecks" failed with an error, function "machines" failed with an error, function "config_maps" failed with an error, function "ingress_certificates" failed with an error, function "dvo_metrics" failed with an error
I0419 19:00:40.000571 1 periodic.go:209] Running workloads gatherer
I0419 19:00:40.000584 1 tasks_processing.go:45] number of workers: 2
I0419 19:00:40.000594 1 tasks_processing.go:69] worker 1 listening for tasks.
I0419 19:00:40.000598 1 tasks_processing.go:71] worker 1 working on helmchart_info task.
I0419 19:00:40.000603 1 tasks_processing.go:69] worker 0 listening for tasks.
I0419 19:00:40.000673 1 tasks_processing.go:71] worker 0 working on workload_info task.
I0419 19:00:40.022798 1 tasks_processing.go:74] worker 1 stopped.
I0419 19:00:40.022818 1 gather.go:177] gatherer "workloads" function "helmchart_info" took 22.18536ms to process 0 records
I0419 19:00:40.022859 1 gather_workloads_info.go:278] Loaded pods in 0s, will wait 22s for image data
I0419 19:00:40.030109 1 gather_workloads_info.go:387] No image sha256:04c87c054a3f366a7dfbe0a93ebb0c80a098ee16842c5794b67c1202eec61996 (9ms)
I0419 19:00:40.037041 1 gather_workloads_info.go:387] No image sha256:2904a78e2eb73fd6a9bb94c105c2a056831fb4113fbb7b0607c50adc9d879c9b (7ms)
I0419 19:00:40.043947 1 gather_workloads_info.go:387] No image sha256:765f0d23b637f685f98a31bd47c131b03cf72a40761a3f9a9d6320faa3c33733 (7ms)
I0419 19:00:40.051148 1 gather_workloads_info.go:387] No image sha256:521712486e2c6e3c020dad6a1cb340db8e55665b69f7c208fab9cd9e965fd588 (7ms)
I0419 19:00:40.057840 1 gather_workloads_info.go:387] No image sha256:943018739e3db1763c3184b460dbc409e058abbac76d57b9927faad317be85e4 (7ms)
I0419 19:00:40.064418 1 gather_workloads_info.go:387] No image sha256:2598489729a4b258e4ecda4a06f6875133f2a10ced5c5241f8a57a8a05418e36 (7ms)
I0419 19:00:40.071043 1 gather_workloads_info.go:387] No image sha256:a0105d1eb62cf6ac9e5e2ef28d3e89bf6dc514bc594fc7090fe5a5ee18a09c87 (7ms)
I0419 19:00:40.077637 1 gather_workloads_info.go:387] No image sha256:7adc1eab05d6724c76ba751f6df816b08d6e70b78dee9eb94fa6fd9690542c98 (7ms)
I0419 19:00:40.086670 1 gather_workloads_info.go:387] No image sha256:0a99240166165eb5718e7516a43282fe32df9c7c5e809b31b58abe44e42ff94d (9ms)
I0419 19:00:40.093294 1 gather_workloads_info.go:387] No image sha256:289816958633a763a72dbc44e1dad40466223164e7e253039514f0d974ea5d21 (7ms)
I0419 19:00:40.130171 1 gather_workloads_info.go:387] No image sha256:c15ca0c0ad60fe8757c2d5d1723fcdd7a1ed6c0251a90d22a7e6cae6811d01aa (37ms)
I0419 19:00:40.229510 1 gather_workloads_info.go:387] No image sha256:7b31223098f08328f5ddea8e5b871dbbd5f5a61ec550e8956f66793c0c6031a9 (99ms)
I0419 19:00:40.329091 1 gather_workloads_info.go:387] No image sha256:03cf4cd7ef1518610c6c7b3ad27d1622d82e98e3dc6e3f8e5d0fceb5c8d3786e (100ms)
I0419 19:00:40.431046 1 gather_workloads_info.go:387] No image sha256:653c666f842c13e0baae2e89a9b1efe0e2ef56f621ffb5b32005115d2a26ab8c (102ms)
I0419 19:00:40.528913 1 gather_workloads_info.go:387] No image sha256:a56211d075aa43cbb491f669a5b2e46ee023dc95b7d51dbac28f463948c5ad61 (98ms)
I0419 19:00:40.629147 1 gather_workloads_info.go:387] No image sha256:2e564f336c77116053f34d4201d364d8da04e789cfffa0ea422574c95f2d6404 (100ms)
I0419 19:00:40.729675 1 gather_workloads_info.go:387] No image sha256:5a95c19d82767e0235b4edb4a0536482c816904897aae1dc3eb255cb52b87a9f (101ms)
I0419 19:00:40.832072 1 gather_workloads_info.go:387] No image sha256:4556896f77307821531ef91b7b7faccb82b824ea695693b2989f597f0deca038 (102ms)
I0419 19:00:40.928926 1 gather_workloads_info.go:387] No image sha256:b3909bf664c77097f75b3768830863d642eed3815dab2bfb4415c771ca2d5007 (97ms)
I0419 19:00:41.029097 1 gather_workloads_info.go:387] No image sha256:e84cb128d930bd1ab867cc89b7b7bf2b2c0e41105ab93b5381069945b3ee9c57 (100ms)
I0419 19:00:41.129214 1 gather_workloads_info.go:387] No image sha256:1a2532940843248c57d52141185dd71fbc393ab28b65d48f682038632c1dbbad (100ms)
I0419 19:00:41.228347 1 gather_workloads_info.go:387] No image sha256:a498046d64605bcccee2440aa4f04a4602baaae263cf01d977ec5208e876b1fd (99ms)
I0419 19:00:41.329967 1 gather_workloads_info.go:387] No image sha256:a258c226562adb14e3a163a1940938526ee6a0928982a7667d85d9a7334ce639 (102ms)
I0419 19:00:41.428913 1 gather_workloads_info.go:387] No image sha256:ca1344cb64140188b7cae7bbc51fb751566c0b0c97d5e39b5850e628032c4a5e (99ms)
I0419 19:00:41.528832 1 gather_workloads_info.go:387] No image sha256:695cf2f0cc07683c2a3ce1eaf3e56fe18abc6e2bac716f7d9843f5d173b9df52 (100ms)
I0419 19:00:41.629344 1 gather_workloads_info.go:387] No image sha256:91828234f107c068c8a4966d08370ae7b73e637651dbc6d92c18c4553402c22c (100ms)
I0419 19:00:41.729051 1 gather_workloads_info.go:387] No image sha256:c940ea87e7d133d75ba0002ef00c0806825eed3db8094cdb260d1bac18127733 (100ms)
I0419 19:00:41.828984 1 gather_workloads_info.go:387] No image sha256:47154813651033d59751fb655a384dbffb64dd26f10bd7f3be0c3128d0486356 (100ms)
I0419 19:00:41.929982 1 gather_workloads_info.go:387] No image sha256:36b9e89c3cfcf1ab9ae500486e38afb6862cba48cb0b4d84a09508ab8f3d299f (101ms)
I0419 19:00:42.030688 1 gather_workloads_info.go:387] No image sha256:2e57e192c3c1240fd935dcd55c8fde5e70e78bf81d6176c96edf21fafe59f8ba (101ms)
I0419 19:00:42.129661 1 gather_workloads_info.go:387] No image sha256:56a85660a445eced5c79a595a0eccf590087c5672d50f49d4c25ad52f9a44f04 (99ms)
I0419 19:00:42.229328 1 gather_workloads_info.go:387] No image sha256:5f0b67cfbbc381243fb91ccc17345b56d05f4d717c667e8c644e5bf05633ba71 (100ms)
I0419 19:00:42.229353 1 tasks_processing.go:74] worker 0 stopped.
E0419 19:00:42.229362 1 gather.go:140] gatherer "workloads" function "workload_info" failed with the error: no running pods found for the insights-runtime-extractor statefulset
I0419 19:00:42.229685 1 recorder.go:75] Recording config/workload_info with fingerprint=d06b25a0d3e07f06592ead94f7e9565c868471b7d0e176991c0ea2b8086dd97f
I0419 19:00:42.229700 1 gather.go:177] gatherer "workloads" function "workload_info" took 2.228671158s to process 1 records
E0419 19:00:42.229736 1 periodic.go:247] "Unhandled Error" err="workloads failed after 2.229s with: function \"workload_info\" failed with an error"
I0419 19:00:42.230833 1 controllerstatus.go:89] name=periodic-workloads healthy=false reason=PeriodicGatherFailed message=Source workloads could not be retrieved: function "workload_info" failed with an error
I0419 19:00:42.230847 1 periodic.go:209] Running conditional gatherer
I0419 19:00:42.235716 1 requests.go:294] Making HTTP GET request at: https://console.redhat.com/api/gathering/v2/4.20.8/gathering_rules
I0419 19:00:42.241899 1 conditional_gatherer.go:107] Get "https://console.redhat.com/api/gathering/v2/4.20.8/gathering_rules": dial tcp: lookup console.redhat.com on 172.30.0.10:53: read udp 10.129.0.12:36544->172.30.0.10:53: read: connection refused
E0419 19:00:42.242126 1 conditional_gatherer.go:322] unable to update alerts cache: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory
I0419 19:00:42.242181 1 conditional_gatherer.go:384] updating version cache for conditional gatherer
I0419 19:00:42.247148 1 conditional_gatherer.go:392] cluster version is '4.20.8'
E0419 19:00:42.247161 1 conditional_gatherer.go:209] error checking conditions for a gathering rule: alerts cache is missing
E0419 19:00:42.247167 1 conditional_gatherer.go:209] error checking conditions for a gathering rule: alerts cache is missing
E0419 19:00:42.247170 1 conditional_gatherer.go:209] error checking conditions for a gathering rule: alerts cache is missing
E0419 19:00:42.247174 1 conditional_gatherer.go:209] error checking conditions for a gathering rule: alerts cache is missing
E0419 19:00:42.247178 1 conditional_gatherer.go:209] error checking conditions for a gathering rule: alerts cache is missing
E0419 19:00:42.247181 1 conditional_gatherer.go:209] error checking conditions for a gathering rule: alerts cache is missing
E0419 19:00:42.247184 1 conditional_gatherer.go:209] error checking conditions for a gathering rule: alerts cache is missing
E0419 19:00:42.247187 1 conditional_gatherer.go:209] error checking conditions for a gathering rule: alerts cache is missing
E0419 19:00:42.247190 1 conditional_gatherer.go:209] error checking conditions for a gathering rule: alerts cache is missing
I0419 19:00:42.247203 1 tasks_processing.go:45] number of workers: 3
I0419 19:00:42.247214 1 tasks_processing.go:69] worker 2 listening for tasks.
I0419 19:00:42.247218 1 tasks_processing.go:71] worker 2 working on conditional_gatherer_rules task.
I0419 19:00:42.247222 1 tasks_processing.go:69] worker 0 listening for tasks.
I0419 19:00:42.247232 1 tasks_processing.go:71] worker 0 working on remote_configuration task.
I0419 19:00:42.247234 1 tasks_processing.go:69] worker 1 listening for tasks.
I0419 19:00:42.247237 1 tasks_processing.go:71] worker 2 working on rapid_container_logs task.
I0419 19:00:42.247242 1 tasks_processing.go:74] worker 1 stopped.
I0419 19:00:42.247289 1 recorder.go:75] Recording insights-operator/conditional-gatherer-rules with fingerprint=7034af97e7e41c22e4b775abdd4b9066c8ebb19da33eb7f69f39bfd2eb5f6406
I0419 19:00:42.247300 1 gather.go:177] gatherer "conditional" function "conditional_gatherer_rules" took 772ns to process 1 records
I0419 19:00:42.247329 1 recorder.go:75] Recording insights-operator/remote-configuration with fingerprint=0394430c431eec4d48bb1811a90918e95161d2282c59af26f2473613cc0959db
I0419 19:00:42.247336 1 gather.go:177] gatherer "conditional" function "remote_configuration" took 940ns to process 1 records
I0419 19:00:42.247341 1 tasks_processing.go:74] worker 0 stopped.
I0419 19:00:42.247464 1 tasks_processing.go:74] worker 2 stopped.
I0419 19:00:42.247475 1 gather.go:177] gatherer "conditional" function "rapid_container_logs" took 219.967µs to process 0 records
I0419 19:00:42.247494 1 controllerstatus.go:89] name=periodic-conditional healthy=false reason=NotAvailable message=Get "https://console.redhat.com/api/gathering/v2/4.20.8/gathering_rules": dial tcp: lookup console.redhat.com on 172.30.0.10:53: read udp 10.129.0.12:36544->172.30.0.10:53: read: connection refused
I0419 19:00:42.247510 1 recorder.go:75] Recording insights-operator/remote-configuration.json with fingerprint=359de9c990c741675cec72fda96b5c3682221efdb4799f5eaa6e9805bcd3b5c1
I0419 19:00:42.266426 1 recorder.go:75] Recording insights-operator/gathers with fingerprint=40af8c2d4ad7a5cf46e5eb613f80a1f12a15522a8c95c5e550276a943de9b4ce
I0419 19:00:42.266538 1 diskrecorder.go:70] Writing 102 records to /var/lib/insights-operator/insights-2026-04-19-190042.tar.gz
I0419 19:00:42.272570 1 diskrecorder.go:51] Wrote 102 records to disk in 6ms
I0419 19:00:42.272597 1 periodic.go:278] Gathering cluster info every 2h0m0s
I0419 19:00:42.272611 1 periodic.go:279] Configuration is dataReporting: interval: 2h0m0s, uploadEndpoint: https://console.redhat.com/api/ingress/v1/upload, storagePath: /var/lib/insights-operator, downloadEndpoint: https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/reports, conditionalGathererEndpoint: https://console.redhat.com/api/gathering/v2/%s/gathering_rules, obfuscation: [] sca: disabled: false, endpoint: https://api.openshift.com/api/accounts_mgmt/v1/entitlement_certificates, interval: 8h0m0s alerting: disabled: false clusterTransfer: endpoint: https://api.openshift.com/api/accounts_mgmt/v1/cluster_transfers/, interval: 12h0m0s proxy: httpProxy: , httpsProxy: , noProxy:
I0419 19:00:43.201946 1 configmapobserver.go:84] configmaps "insights-config" not found
I0419 19:00:43.403802 1 configmapobserver.go:84] configmaps "insights-config" not found
I0419 19:00:51.478525 1 configmapobserver.go:84] configmaps "insights-config" not found
I0419 19:01:41.308447 1 observer_polling.go:111] Observed file "/var/run/secrets/serving-cert/tls.crt" has been created (hash="401ff1bc18aabe0b14baf705d3d7f3f14379fe738b5042c1de8139077ef21ca2")
W0419 19:01:41.308476 1 builder.go:160] Restart triggered because of file /var/run/secrets/serving-cert/tls.crt was created
I0419 19:01:41.308521 1 observer_polling.go:111] Observed file "/var/run/secrets/serving-cert/tls.key" has been created (hash="7ad01c18de832250e57fcac1da4041ae1ca0a5eb25da4c88989d624fa1d5bcbf")
I0419 19:01:41.308534 1 genericapiserver.go:548] "[graceful-termination] shutdown event" name="ShutdownInitiated"
I0419 19:01:41.308551 1 base_controller.go:181] Shutting down ConfigController ...
I0419 19:01:41.308574 1 genericapiserver.go:693] "[graceful-termination] pre-shutdown hooks completed" name="PreShutdownHooksStopped"
I0419 19:01:41.308592 1 periodic.go:170] Shutting down
I0419 19:01:41.308606 1 base_controller.go:181] Shutting down LoggingSyncer ...
I0419 19:01:41.308608 1 observer_polling.go:111] Observed file "/var/run/configmaps/service-ca-bundle/service-ca.crt" has been created (hash="9a9cedcc0dfd43a093f62f7fbdb41678f6639e36e88ae760b14a58a2cd828ba6")
I0419 19:01:41.308622 1 base_controller.go:123] Shutting down worker of ConfigController controller ...
I0419 19:01:41.308629 1 base_controller.go:113] All ConfigController workers have been terminated
I0419 19:01:41.308645 1 dynamic_serving_content.go:149] "Shutting down controller" name="serving-cert::/tmp/serving-cert-1384778109/tls.crt::/tmp/serving-cert-1384778109/tls.key"
I0419 19:01:41.308651 1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I0419 19:01:41.308658 1 base_controller.go:123] Shutting down worker of LoggingSyncer controller ...
I0419 19:01:41.308665 1 base_controller.go:113] All LoggingSyncer workers have been terminated
I0419 19:01:41.308656 1 genericapiserver.go:651] "[graceful-termination] not going to wait for active watch request(s) to drain"
I0419 19:01:41.308673 1 requestheader_controller.go:194] Shutting down RequestHeaderAuthRequestController
I0419 19:01:41.308681 1 object_count_tracker.go:151] "StorageObjectCountTracker pruner is exiting"
I0419 19:01:41.308684 1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
I0419 19:01:41.308699 1 simple_featuregate_reader.go:177] Shutting down feature-gate-detector
I0419 19:01:41.308697 1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"