W0428 11:10:52.463218 1 cmd.go:257] Using insecure, self-signed certificates
I0428 11:10:53.119915 1 start.go:138] Unable to read service ca bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory
I0428 11:10:53.120241 1 observer_polling.go:159] Starting file observer
I0428 11:10:53.758151 1 operator.go:60] Starting insights-operator v0.0.0-master+$Format:%H$
I0428 11:10:53.758374 1 legacy_config.go:327] Current config: {"report":false,"storagePath":"/var/lib/insights-operator","interval":"2h","endpoint":"https://console.redhat.com/api/ingress/v1/upload","conditionalGathererEndpoint":"https://console.redhat.com/api/gathering/v2/%s/gathering_rules","pull_report":{"endpoint":"https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/reports","delay":"60s","timeout":"3000s","min_retry":"30s"},"impersonate":"system:serviceaccount:openshift-insights:gather","enableGlobalObfuscation":false,"ocm":{"scaEndpoint":"https://api.openshift.com/api/accounts_mgmt/v1/entitlement_certificates","scaInterval":"8h","scaDisabled":false,"clusterTransferEndpoint":"https://api.openshift.com/api/accounts_mgmt/v1/cluster_transfers/","clusterTransferInterval":"12h"},"disableInsightsAlerts":false,"processingStatusEndpoint":"https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/request/%s/status","reportEndpointTechPreview":"https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/request/%s/report"}
I0428 11:10:53.758883 1 secure_serving.go:57] Forcing use of http/1.1 only
W0428 11:10:53.758902 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.
W0428 11:10:53.758906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.
W0428 11:10:53.758910 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.
W0428 11:10:53.758912 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.
W0428 11:10:53.758915 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.
W0428 11:10:53.758917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.
I0428 11:10:53.758972 1 simple_featuregate_reader.go:171] Starting feature-gate-detector
I0428 11:10:53.762704 1 operator.go:125] FeatureGates initialized: knownFeatureGates=[AdditionalRoutingCapabilities AdminNetworkPolicy AlibabaPlatform AzureWorkloadIdentity BuildCSIVolumes CPMSMachineNamePrefix ConsolePluginContentSecurityPolicy ExternalOIDC ExternalOIDCWithUIDAndExtraClaimMappings GatewayAPI GatewayAPIController HighlyAvailableArbiter ImageVolume IngressControllerLBSubnetsAWS KMSv1 MachineConfigNodes ManagedBootImages ManagedBootImagesAWS MetricsCollectionProfiles NetworkDiagnosticsConfig NetworkLiveMigration NetworkSegmentation PinnedImages ProcMountType RouteAdvertisements RouteExternalCertificate ServiceAccountTokenNodeBinding SetEIPForNLBIngressController SigstoreImageVerification StoragePerformantSecurityPolicy UpgradeStatus UserNamespacesPodSecurityStandards UserNamespacesSupport VSphereMultiDisk VSphereMultiNetworks AWSClusterHostedDNS AWSClusterHostedDNSInstall AWSDedicatedHosts AWSServiceLBNetworkSecurityGroup AutomatedEtcdBackup AzureClusterHostedDNSInstall AzureDedicatedHosts AzureMultiDisk BootImageSkewEnforcement BootcNodeManagement ClusterAPIInstall ClusterAPIInstallIBMCloud ClusterMonitoringConfig ClusterVersionOperatorConfiguration DNSNameResolver DualReplica DyanmicServiceEndpointIBMCloud DynamicResourceAllocation EtcdBackendQuota EventedPLEG Example Example2 ExternalSnapshotMetadata GCPClusterHostedDNS GCPClusterHostedDNSInstall GCPCustomAPIEndpoints GCPCustomAPIEndpointsInstall ImageModeStatusReporting ImageStreamImportMode IngressControllerDynamicConfigurationManager InsightsConfig InsightsConfigAPI InsightsOnDemandDataGather IrreconcilableMachineConfig KMSEncryptionProvider MachineAPIMigration MachineAPIOperatorDisableMachineHealthCheckController ManagedBootImagesAzure ManagedBootImagesvSphere MaxUnavailableStatefulSet MinimumKubeletVersion MixedCPUsAllocation MultiArchInstallAzure MultiDiskSetup MutatingAdmissionPolicy NewOLM NewOLMCatalogdAPIV1Metas NewOLMOwnSingleNamespace NewOLMPreflightPermissionChecks NewOLMWebhookProviderOpenshiftServiceCA NoRegistryClusterOperations NodeSwap NutanixMultiSubnets OVNObservability OpenShiftPodSecurityAdmission PreconfiguredUDNAddresses SELinuxMount ShortCertRotation SignatureStores SigstoreImageVerificationPKI TranslateStreamCloseWebsocketRequests VSphereConfigurableMaxAllowedBlockVolumesPerNode VSphereHostVMGroupZonal VSphereMixedNodeEnv VolumeAttributesClass VolumeGroupSnapshot]
I0428 11:10:53.762720 1 event.go:377] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-insights", Name:"insights-operator", UID:"12a5dc5c-d339-4bea-9ce2-a2e8e6fe7d73", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'FeatureGatesInitialized' FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdditionalRoutingCapabilities", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BuildCSIVolumes", "CPMSMachineNamePrefix", "ConsolePluginContentSecurityPolicy", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GatewayAPI", "GatewayAPIController", "HighlyAvailableArbiter", "ImageVolume", "IngressControllerLBSubnetsAWS", "KMSv1", "MachineConfigNodes", "ManagedBootImages", "ManagedBootImagesAWS", "MetricsCollectionProfiles", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "PinnedImages", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SetEIPForNLBIngressController", "SigstoreImageVerification", "StoragePerformantSecurityPolicy", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiDisk", "VSphereMultiNetworks"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AWSClusterHostedDNSInstall", "AWSDedicatedHosts", "AWSServiceLBNetworkSecurityGroup", "AutomatedEtcdBackup", "AzureClusterHostedDNSInstall", "AzureDedicatedHosts", "AzureMultiDisk", "BootImageSkewEnforcement", "BootcNodeManagement", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ClusterVersionOperatorConfiguration", "DNSNameResolver", "DualReplica", "DyanmicServiceEndpointIBMCloud", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "Example2", "ExternalSnapshotMetadata", "GCPClusterHostedDNS", "GCPClusterHostedDNSInstall", "GCPCustomAPIEndpoints", "GCPCustomAPIEndpointsInstall", "ImageModeStatusReporting", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "IrreconcilableMachineConfig", "KMSEncryptionProvider", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "ManagedBootImagesAzure", "ManagedBootImagesvSphere", "MaxUnavailableStatefulSet", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "MultiDiskSetup", "MutatingAdmissionPolicy", "NewOLM", "NewOLMCatalogdAPIV1Metas", "NewOLMOwnSingleNamespace", "NewOLMPreflightPermissionChecks", "NewOLMWebhookProviderOpenshiftServiceCA", "NoRegistryClusterOperations", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PreconfiguredUDNAddresses", "SELinuxMount", "ShortCertRotation", "SignatureStores", "SigstoreImageVerificationPKI", "TranslateStreamCloseWebsocketRequests", "VSphereConfigurableMaxAllowedBlockVolumesPerNode", "VSphereHostVMGroupZonal", "VSphereMixedNodeEnv", "VolumeAttributesClass", "VolumeGroupSnapshot"}}
I0428 11:10:53.764023 1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
I0428 11:10:53.764038 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController
I0428 11:10:53.764039 1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I0428 11:10:53.764042 1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
I0428 11:10:53.764055 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0428 11:10:53.764059 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I0428 11:10:53.764303 1 dynamic_serving_content.go:135] "Starting controller" name="serving-cert::/tmp/serving-cert-3970985648/tls.crt::/tmp/serving-cert-3970985648/tls.key"
I0428 11:10:53.764558 1 secure_serving.go:213] Serving securely on [::]:8443
I0428 11:10:53.764585 1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
W0428 11:10:53.767925 1 configmapobserver.go:64] Cannot get the configuration config map: configmaps "insights-config" not found. Default configuration is used.
I0428 11:10:53.767952 1 secretconfigobserver.go:216] Legacy configuration set: enabled=false endpoint=https://console.redhat.com/api/ingress/v1/upload conditional_gatherer_endpoint=https://console.redhat.com/api/gathering/v2/%s/gathering_rules interval=2h0m0s token=false reportEndpoint=https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/reports initialPollingDelay=1m0s minRetryTime=30s pollingTimeout=50m0s processingStatusEndpoint=https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/request/%s/status
I0428 11:10:53.768029 1 base_controller.go:76] Waiting for caches to sync for ConfigController
I0428 11:10:53.774182 1 secretconfigobserver.go:249] Found cloud.openshift.com token
I0428 11:10:53.774201 1 secretconfigobserver.go:204] Legacy configuration updated: enabled=true endpoint=https://console.redhat.com/api/ingress/v1/upload conditional_gatherer_endpoint=https://console.redhat.com/api/gathering/v2/%s/gathering_rules interval=2h0m0s token=true reportEndpoint=https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/reports initialPollingDelay=1m0s minRetryTime=30s pollingTimeout=50m0s processingStatusEndpoint=https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/request/%s/status
I0428 11:10:53.778526 1 secretconfigobserver.go:119] support secret does not exist
I0428 11:10:53.785061 1 secretconfigobserver.go:249] Found cloud.openshift.com token
I0428 11:10:53.788769 1 secretconfigobserver.go:119] support secret does not exist
I0428 11:10:53.790844 1 recorder.go:161] Pruning old reports every 4h19m29s, max age is 288h0m0s
I0428 11:10:53.795124 1 controllerstatus.go:80] name=insightsuploader healthy=true reason= message=
I0428 11:10:53.795144 1 insightsuploader.go:86] Reporting status periodically to https://console.redhat.com/api/ingress/v1/upload every 2h0m0s, starting in 1m30s
I0428 11:10:53.795147 1 controllerstatus.go:80] name=insightsreport healthy=true reason= message=
I0428 11:10:53.795150 1 periodic.go:209] Running clusterconfig gatherer
I0428 11:10:53.795157 1 insightsreport.go:296] Starting report retriever
I0428 11:10:53.795167 1 insightsreport.go:298] Insights analysis reports will be downloaded from the https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/reports endpoint with a delay of 1m0s
I0428 11:10:53.795188 1 tasks_processing.go:45] number of workers: 64
I0428 11:10:53.795210 1 tasks_processing.go:69] worker 2 listening for tasks.
I0428 11:10:53.795220 1 tasks_processing.go:69] worker 1 listening for tasks.
I0428 11:10:53.795220 1 tasks_processing.go:69] worker 0 listening for tasks.
I0428 11:10:53.795226 1 tasks_processing.go:69] worker 38 listening for tasks.
I0428 11:10:53.795227 1 tasks_processing.go:69] worker 22 listening for tasks.
I0428 11:10:53.795229 1 tasks_processing.go:71] worker 0 working on ingress_certificates task.
I0428 11:10:53.795234 1 tasks_processing.go:69] worker 3 listening for tasks.
I0428 11:10:53.795230 1 tasks_processing.go:69] worker 50 listening for tasks.
I0428 11:10:53.795248 1 tasks_processing.go:71] worker 3 working on overlapping_namespace_uids task.
I0428 11:10:53.795255 1 tasks_processing.go:69] worker 6 listening for tasks.
I0428 11:10:53.795255 1 tasks_processing.go:69] worker 39 listening for tasks.
I0428 11:10:53.795260 1 tasks_processing.go:71] worker 6 working on openstack_controlplanes task.
I0428 11:10:53.795263 1 tasks_processing.go:69] worker 40 listening for tasks.
I0428 11:10:53.795265 1 tasks_processing.go:69] worker 55 listening for tasks.
I0428 11:10:53.795262 1 tasks_processing.go:69] worker 54 listening for tasks.
I0428 11:10:53.795244 1 tasks_processing.go:69] worker 4 listening for tasks.
I0428 11:10:53.795249 1 tasks_processing.go:69] worker 5 listening for tasks.
I0428 11:10:53.795283 1 tasks_processing.go:69] worker 44 listening for tasks.
I0428 11:10:53.795284 1 tasks_processing.go:69] worker 56 listening for tasks.
I0428 11:10:53.795287 1 tasks_processing.go:69] worker 53 listening for tasks.
I0428 11:10:53.795294 1 tasks_processing.go:69] worker 57 listening for tasks.
I0428 11:10:53.795298 1 tasks_processing.go:69] worker 14 listening for tasks.
I0428 11:10:53.795300 1 tasks_processing.go:69] worker 10 listening for tasks.
I0428 11:10:53.795269 1 tasks_processing.go:69] worker 41 listening for tasks.
I0428 11:10:53.795273 1 tasks_processing.go:69] worker 42 listening for tasks.
I0428 11:10:53.795274 1 tasks_processing.go:69] worker 51 listening for tasks.
I0428 11:10:53.795308 1 tasks_processing.go:69] worker 9 listening for tasks.
I0428 11:10:53.795278 1 tasks_processing.go:69] worker 43 listening for tasks.
I0428 11:10:53.795315 1 tasks_processing.go:69] worker 12 listening for tasks.
I0428 11:10:53.795282 1 tasks_processing.go:69] worker 52 listening for tasks.
I0428 11:10:53.795318 1 tasks_processing.go:69] worker 30 listening for tasks.
I0428 11:10:53.795296 1 tasks_processing.go:69] worker 46 listening for tasks.
I0428 11:10:53.795326 1 tasks_processing.go:69] worker 7 listening for tasks.
I0428 11:10:53.795306 1 tasks_processing.go:69] worker 11 listening for tasks.
I0428 11:10:53.795311 1 tasks_processing.go:69] worker 62 listening for tasks.
I0428 11:10:53.795292 1 tasks_processing.go:69] worker 45 listening for tasks.
I0428 11:10:53.795327 1 tasks_processing.go:69] worker 59 listening for tasks.
I0428 11:10:53.795334 1 tasks_processing.go:69] worker 60 listening for tasks.
I0428 11:10:53.795341 1 tasks_processing.go:69] worker 61 listening for tasks.
I0428 11:10:53.795345 1 tasks_processing.go:69] worker 26 listening for tasks.
I0428 11:10:53.795416 1 tasks_processing.go:69] worker 47 listening for tasks.
I0428 11:10:53.795230 1 tasks_processing.go:71] worker 38 working on feature_gates task.
I0428 11:10:53.795436 1 tasks_processing.go:69] worker 35 listening for tasks.
I0428 11:10:53.795449 1 tasks_processing.go:69] worker 48 listening for tasks.
I0428 11:10:53.795449 1 tasks_processing.go:69] worker 25 listening for tasks.
I0428 11:10:53.795453 1 tasks_processing.go:69] worker 37 listening for tasks.
I0428 11:10:53.795459 1 tasks_processing.go:69] worker 27 listening for tasks.
I0428 11:10:53.795463 1 tasks_processing.go:69] worker 24 listening for tasks.
I0428 11:10:53.795462 1 tasks_processing.go:71] worker 2 working on machine_autoscalers task.
I0428 11:10:53.795468 1 tasks_processing.go:69] worker 63 listening for tasks.
I0428 11:10:53.795468 1 tasks_processing.go:71] worker 1 working on support_secret task.
I0428 11:10:53.795478 1 tasks_processing.go:69] worker 29 listening for tasks.
I0428 11:10:53.795481 1 tasks_processing.go:69] worker 15 listening for tasks.
I0428 11:10:53.795480 1 tasks_processing.go:69] worker 28 listening for tasks.
I0428 11:10:53.795482 1 tasks_processing.go:69] worker 36 listening for tasks.
I0428 11:10:53.795489 1 tasks_processing.go:69] worker 21 listening for tasks.
I0428 11:10:53.795491 1 tasks_processing.go:69] worker 16 listening for tasks.
I0428 11:10:53.795498 1 tasks_processing.go:69] worker 20 listening for tasks.
I0428 11:10:53.795498 1 tasks_processing.go:69] worker 17 listening for tasks.
I0428 11:10:53.795505 1 tasks_processing.go:69] worker 18 listening for tasks.
I0428 11:10:53.795501 1 tasks_processing.go:71] worker 22 working on sap_datahubs task.
I0428 11:10:53.795249 1 tasks_processing.go:69] worker 58 listening for tasks.
I0428 11:10:53.795515 1 tasks_processing.go:69] worker 33 listening for tasks.
I0428 11:10:53.795508 1 tasks_processing.go:69] worker 8 listening for tasks.
I0428 11:10:53.795526 1 tasks_processing.go:71] worker 40 working on ceph_cluster task.
I0428 11:10:53.795539 1 tasks_processing.go:71] worker 8 working on dvo_metrics task.
I0428 11:10:53.795540 1 tasks_processing.go:71] worker 59 working on nodenetworkstates task.
I0428 11:10:53.795547 1 tasks_processing.go:71] worker 55 working on image_registries task.
I0428 11:10:53.795551 1 tasks_processing.go:71] worker 42 working on openshift_logging task.
I0428 11:10:53.795587 1 tasks_processing.go:69] worker 32 listening for tasks.
I0428 11:10:53.795604 1 tasks_processing.go:71] worker 60 working on aggregated_monitoring_cr_names task.
I0428 11:10:53.795609 1 tasks_processing.go:71] worker 63 working on oauths task.
I0428 11:10:53.795699 1 tasks_processing.go:71] worker 32 working on number_of_pods_and_netnamespaces_with_sdn_annotations task.
I0428 11:10:53.795750 1 tasks_processing.go:69] worker 13 listening for tasks.
I0428 11:10:53.795768 1 tasks_processing.go:71] worker 13 working on config_maps task.
I0428 11:10:53.795772 1 tasks_processing.go:69] worker 23 listening for tasks.
I0428 11:10:53.795538 1 tasks_processing.go:71] worker 45 working on machine_sets task.
I0428 11:10:53.795775 1 tasks_processing.go:69] worker 49 listening for tasks.
I0428 11:10:53.795249 1 tasks_processing.go:71] worker 50 working on olm_operators task.
I0428 11:10:53.795814 1 tasks_processing.go:71] worker 21 working on certificate_signing_requests task.
I0428 11:10:53.795868 1 tasks_processing.go:71] worker 56 working on validating_webhook_configurations task.
I0428 11:10:53.795992 1 tasks_processing.go:71] worker 48 working on version task.
I0428 11:10:53.796180 1 tasks_processing.go:71] worker 52 working on openstack_dataplanenodesets task.
I0428 11:10:53.796232 1 tasks_processing.go:71] worker 4 working on proxies task.
I0428 11:10:53.796269 1 tasks_processing.go:71] worker 49 working on cost_management_metrics_configs task.
I0428 11:10:53.796292 1 tasks_processing.go:71] worker 37 working on install_plans task.
I0428 11:10:53.796410 1 tasks_processing.go:71] worker 54 working on cluster_apiserver task.
I0428 11:10:53.796431 1 tasks_processing.go:71] worker 23 working on sap_config task.
I0428 11:10:53.795511 1 tasks_processing.go:69] worker 34 listening for tasks.
I0428 11:10:53.796488 1 tasks_processing.go:71] worker 61 working on machines task.
I0428 11:10:53.796523 1 tasks_processing.go:71] worker 25 working on nodenetworkconfigurationpolicies task.
I0428 11:10:53.796576 1 tasks_processing.go:71] worker 30 working on active_alerts task.
W0428 11:10:53.796647 1 gather_active_alerts.go:54] Unable to load alerts client, no alerts will be collected: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory
I0428 11:10:53.796679 1 tasks_processing.go:71] worker 30 working on lokistack task.
I0428 11:10:53.796688 1 tasks_processing.go:71] worker 41 working on networks task.
I0428 11:10:53.796760 1 tasks_processing.go:71] worker 47 working on mutating_webhook_configurations task.
I0428 11:10:53.796865 1 tasks_processing.go:71] worker 46 working on nodes task.
I0428 11:10:53.796902 1 tasks_processing.go:71] worker 35 working on qemu_kubevirt_launcher_logs task.
I0428 11:10:53.796912 1 tasks_processing.go:71] worker 17 working on operators_pods_and_events task.
I0428 11:10:53.795523 1 tasks_processing.go:71] worker 39 working on authentication task.
I0428 11:10:53.795535 1 tasks_processing.go:71] worker 33 working on schedulers task.
I0428 11:10:53.796209 1 tasks_processing.go:71] worker 14 working on metrics task.
I0428 11:10:53.796929 1 tasks_processing.go:71] worker 9 working on jaegers task.
I0428 11:10:53.796941 1 tasks_processing.go:71] worker 11 working on crds task.
I0428 11:10:53.796944 1 gather.go:177] gatherer "clusterconfig" function "active_alerts" took 49.409µs to process 0 records
I0428 11:10:53.796960 1 tasks_processing.go:71] worker 58 working on container_runtime_configs task.
W0428 11:10:53.796964 1 gather_most_recent_metrics.go:64] Unable to load metrics client, no metrics will be collected: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory
I0428 11:10:53.796981 1 tasks_processing.go:71] worker 14 working on infrastructures task.
I0428 11:10:53.797079 1 tasks_processing.go:71] worker 62 working on image task.
I0428 11:10:53.797163 1 tasks_processing.go:71] worker 36 working on pdbs task.
I0428 11:10:53.796218 1 tasks_processing.go:71] worker 53 working on ingress task.
I0428 11:10:53.797966 1 gather.go:177] gatherer "clusterconfig" function "metrics" took 41.484µs to process 0 records
I0428 11:10:53.798027 1 tasks_processing.go:71] worker 16 working on silenced_alerts task.
W0428 11:10:53.798061 1 gather_silenced_alerts.go:38] Unable to load alerts client, no alerts will be collected: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory
I0428 11:10:53.798075 1 tasks_processing.go:71] worker 16 working on openstack_version task.
I0428 11:10:53.798322 1 tasks_processing.go:71] worker 20 working on sap_pods task.
I0428 11:10:53.796222 1 tasks_processing.go:71] worker 57 working on openstack_dataplanedeployments task.
I0428 11:10:53.798705 1 gather.go:177] gatherer "clusterconfig" function "silenced_alerts" took 32.35µs to process 0 records
I0428 11:10:53.796874 1 tasks_processing.go:71] worker 24 working on service_accounts task.
I0428 11:10:53.795474 1 tasks_processing.go:69] worker 19 listening for tasks.
I0428 11:10:53.799074 1 tasks_processing.go:74] worker 19 stopped.
I0428 11:10:53.796507 1 tasks_processing.go:71] worker 10 working on node_logs task.
I0428 11:10:53.796480 1 tasks_processing.go:71] worker 26 working on operators task.
I0428 11:10:53.796709 1 tasks_processing.go:71] worker 15 working on storage_classes task.
I0428 11:10:53.796715 1 tasks_processing.go:71] worker 29 working on machine_config_pools task.
I0428 11:10:53.796559 1 tasks_processing.go:71] worker 27 working on pod_network_connectivity_checks task.
I0428 11:10:53.799463 1 tasks_processing.go:74] worker 22 stopped.
I0428 11:10:53.799481 1 gather.go:177] gatherer "clusterconfig" function "sap_datahubs" took 3.939798ms to process 0 records
I0428 11:10:53.796499 1 tasks_processing.go:71] worker 34 working on clusterroles task.
I0428 11:10:53.796915 1 tasks_processing.go:71] worker 28 working on monitoring_persistent_volumes task.
I0428 11:10:53.796920 1 tasks_processing.go:71] worker 18 working on machine_configs task.
I0428 11:10:53.796924 1 tasks_processing.go:71] worker 51 working on container_images task.
I0428 11:10:53.796925 1 tasks_processing.go:71] worker 43 working on machine_healthchecks task.
I0428 11:10:53.796929 1 tasks_processing.go:71] worker 12 working on image_pruners task.
I0428 11:10:53.795509 1 tasks_processing.go:69] worker 31 listening for tasks.
I0428 11:10:53.796570 1 tasks_processing.go:71] worker 7 working on storage_cluster task.
I0428 11:10:53.799603 1 gather.go:177] gatherer "clusterconfig" function "openstack_controlplanes" took 4.272851ms to process 0 records
I0428 11:10:53.799613 1 tasks_processing.go:74] worker 6 stopped.
I0428 11:10:53.796731 1 tasks_processing.go:71] worker 5 working on openshift_machine_api_events task.
I0428 11:10:53.799770 1 tasks_processing.go:74] worker 31 stopped.
I0428 11:10:53.796744 1 tasks_processing.go:71] worker 44 working on tsdb_status task.
I0428 11:10:53.800289 1 controller.go:128] Initializing last reported time to 0001-01-01T00:00:00Z
I0428 11:10:53.800318 1 controller.go:212] Source periodic-clusterconfig *controllerstatus.Simple is not ready
I0428 11:10:53.800326 1 controller.go:212] Source periodic-conditional *controllerstatus.Simple is not ready
I0428 11:10:53.800330 1 controller.go:212] Source periodic-workloads *controllerstatus.Simple is not ready
I0428 11:10:53.800354 1 controller.go:489] The operator is still being initialized
I0428 11:10:53.800366 1 controller.go:512] The operator is healthy
W0428 11:10:53.800503 1 gather_prometheus_tsdb_status.go:38] Unable to load metrics client, tsdb status cannot be collected: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory
I0428 11:10:53.800522 1 tasks_processing.go:74] worker 44 stopped.
I0428 11:10:53.800532 1 gather.go:177] gatherer "clusterconfig" function "tsdb_status" took 707.636µs to process 0 records
I0428 11:10:53.802611 1 tasks_processing.go:74] worker 59 stopped.
I0428 11:10:53.802623 1 gather.go:177] gatherer "clusterconfig" function "nodenetworkstates" took 7.055872ms to process 0 records
I0428 11:10:53.803253 1 tasks_processing.go:74] worker 38 stopped.
I0428 11:10:53.803386 1 recorder.go:75] Recording config/featuregate with fingerprint=b7d16d115238807835fd556fbb96dd8e486391443c25a05246c1e7bfd17b6fd7
I0428 11:10:53.803399 1 gather.go:177] gatherer "clusterconfig" function "feature_gates" took 7.8146ms to process 1 records
I0428 11:10:53.803551 1 tasks_processing.go:74] worker 3 stopped.
I0428 11:10:53.803577 1 recorder.go:75] Recording config/namespaces_with_overlapping_uids with fingerprint=4f53cda18c2baa0c0354bb5f9a3ecbe5ed12ab4d8e11ba873c2f11161202b945
I0428 11:10:53.803588 1 gather.go:177] gatherer "clusterconfig" function "overlapping_namespace_uids" took 8.288164ms to process 1 records
I0428 11:10:53.812381 1 tasks_processing.go:74] worker 14 stopped.
I0428 11:10:53.813165 1 recorder.go:75] Recording config/infrastructure with fingerprint=c30e741ca2155f497c55e3653b0a3ba236a12ee642cf183853ef95073925b621
I0428 11:10:53.813181 1 gather.go:177] gatherer "clusterconfig" function "infrastructures" took 15.38265ms to process 1 records
W0428 11:10:53.816061 1 gather_dvo_metrics.go:210] Failed to read the DVO metrics. Trying again.
I0428 11:10:53.819237 1 tasks_processing.go:74] worker 36 stopped.
I0428 11:10:53.819346 1 recorder.go:75] Recording config/pdbs/openshift-image-registry/image-registry with fingerprint=ca096baa1e4df2f0ceeab6f0df4cb7049679277d142c5e20e6460201d961638c
I0428 11:10:53.819366 1 recorder.go:75] Recording config/pdbs/openshift-ingress/router-default with fingerprint=4164ddc8df810a88b9dfece90935348d69297031f041dad3195b64ccf7b86f1e
I0428 11:10:53.819381 1 recorder.go:75] Recording config/pdbs/openshift-operator-lifecycle-manager/packageserver-pdb with fingerprint=63d39f3e97ecfd7a841fb6eaf460e66ff259bbe65296bab473320d129e3f6de9
I0428 11:10:53.819387 1 gather.go:177] gatherer "clusterconfig" function "pdbs" took 22.044924ms to process 3 records
I0428 11:10:53.823871 1 tasks_processing.go:74] worker 57 stopped.
I0428 11:10:53.823887 1 gather.go:177] gatherer "clusterconfig" function "openstack_dataplanedeployments" took 25.254917ms to process 0 records
I0428 11:10:53.823912 1 tasks_processing.go:74] worker 40 stopped.
I0428 11:10:53.823928 1 gather.go:177] gatherer "clusterconfig" function "ceph_cluster" took 28.366467ms to process 0 records
I0428 11:10:53.823947 1 gather.go:177] gatherer "clusterconfig" function "openstack_dataplanenodesets" took 27.677537ms to process 0 records
I0428 11:10:53.823953 1 gather.go:177] gatherer "clusterconfig" function "cost_management_metrics_configs" took 27.660467ms to process 0 records
I0428 11:10:53.823958 1 tasks_processing.go:74] worker 49 stopped.
I0428 11:10:53.823962 1 tasks_processing.go:74] worker 52 stopped.
I0428 11:10:53.824002 1 tasks_processing.go:74] worker 23 stopped.
I0428 11:10:53.824014 1 gather.go:177] gatherer "clusterconfig" function "sap_config" took 27.559259ms to process 0 records
I0428 11:10:53.824022 1 gather.go:177] gatherer "clusterconfig" function "lokistack" took 27.31421ms to process 0 records
I0428 11:10:53.824032 1 tasks_processing.go:74] worker 30 stopped.
I0428 11:10:53.824151 1 tasks_processing.go:74] worker 61 stopped.
E0428 11:10:53.824162 1 gather.go:140] gatherer "clusterconfig" function "machines" failed with the error: machines.machine.openshift.io is forbidden: User "system:serviceaccount:openshift-insights:gather" cannot list resource "machines" in API group "machine.openshift.io" at the cluster scope
I0428 11:10:53.824172 1 gather.go:177] gatherer "clusterconfig" function "machines" took 27.606023ms to process 0 records
I0428 11:10:53.824202 1 tasks_processing.go:74] worker 42 stopped.
I0428 11:10:53.824213 1 gather.go:177] gatherer "clusterconfig" function "openshift_logging" took 28.644812ms to process 0 records
I0428 11:10:53.824364 1 tasks_processing.go:74] worker 7 stopped.
I0428 11:10:53.824376 1 gather.go:177] gatherer "clusterconfig" function "storage_cluster" took 24.581563ms to process 0 records
I0428 11:10:53.824389 1 gather.go:177] gatherer "clusterconfig" function "machine_autoscalers" took 28.900983ms to process 0 records
I0428 11:10:53.824395 1 gather.go:177] gatherer "clusterconfig" function "sap_pods" took 26.040946ms to process 0 records
I0428 11:10:53.824401 1 tasks_processing.go:74] worker 20 stopped.
I0428 11:10:53.824400 1 tasks_processing.go:74] worker 2 stopped.
I0428 11:10:53.824607 1 tasks_processing.go:74] worker 46 stopped.
I0428 11:10:53.825061 1 recorder.go:75] Recording config/node/ip-10-0-0-81.ec2.internal with fingerprint=8bddadca4eca1860502103c9dee20e9f184cc2235ef8425adfed333aa3ee0a8d
I0428 11:10:53.825175 1 recorder.go:75] Recording config/node/ip-10-0-1-103.ec2.internal with fingerprint=9cd04874c5525a10b95252a7011bc08f44a199bb4a480c15a06906b46bca7d8c
I0428 11:10:53.825269 1 recorder.go:75] Recording config/node/ip-10-0-2-198.ec2.internal with fingerprint=b67f1f578623dbc1dcc680cd1a5f2d625822c2245a49c50404bf57f96339f4e6
I0428 11:10:53.825285 1 gather.go:177] gatherer "clusterconfig" function "nodes" took 27.696076ms to process 3 records
I0428 11:10:53.825322 1 tasks_processing.go:74] worker 55 stopped.
I0428 11:10:53.825970 1 recorder.go:75] Recording config/clusteroperator/imageregistry.operator.openshift.io/config/cluster with fingerprint=0a2b9743c8fd1e593baa1c19c10441aedbbd8f627906494c9ff29eb01e0e0b5c
I0428 11:10:53.825984 1 gather.go:177] gatherer "clusterconfig" function "image_registries" took 29.760988ms to process 1 records
I0428 11:10:53.828388 1 sca.go:136] Pulling SCA certificates from https://api.openshift.com/api/accounts_mgmt/v1/entitlement_certificates. Next check is in 8h0m0s
I0428 11:10:53.828455 1 cluster_transfer.go:83] checking the availability of cluster transfer. Next check is in 12h0m0s
W0428 11:10:53.828524 1 operator.go:288] started
I0428 11:10:53.828550 1 base_controller.go:76] Waiting for caches to sync for LoggingSyncer
I0428 11:10:53.835767 1 tasks_processing.go:74] worker 1 stopped.
E0428 11:10:53.835783 1 gather.go:140] gatherer "clusterconfig" function "support_secret" failed with the error: secrets "support" not found
I0428 11:10:53.835792 1 gather.go:177] gatherer "clusterconfig" function "support_secret" took 40.280425ms to process 0 records
I0428 11:10:53.836117 1 tasks_processing.go:74] worker 62 stopped.
I0428 11:10:53.836619 1 recorder.go:75] Recording config/image with fingerprint=073e0fa780aaf48bb3fa9e09f6abf8c01fe75ecfe21a5a20a3d95286a64a62bf
I0428 11:10:53.836654 1 gather.go:177] gatherer "clusterconfig" function "image" took 38.933107ms to process 1 records
I0428 11:10:53.836998 1 recorder.go:75] Recording config/authentication with fingerprint=720ba9608cdaadde819abfb594a26376bd13a4a16fec8eb3c75a0b6d95200da7
I0428 11:10:53.837017 1 gather.go:177] gatherer "clusterconfig" function "authentication" took 38.255449ms to process 1 records
I0428 11:10:53.837109 1 recorder.go:75] Recording config/proxy with fingerprint=c8c36c93933ddc877ef1d1c463917ab8171810dd18317571243f78239e5bced3
I0428 11:10:53.837142 1 gather.go:177] gatherer "clusterconfig" function "proxies" took 39.886736ms to process 1 records
I0428 11:10:53.837395 1 tasks_processing.go:74] worker 39 stopped.
I0428 11:10:53.837409 1 tasks_processing.go:74] worker 4 stopped.
I0428 11:10:53.837519 1 recorder.go:75] Recording config/oauth with fingerprint=00886f1dc5790e9e18d537f20b8daf97510d6bf58409e4fecae5a259bb8a916f
I0428 11:10:53.837535 1 gather.go:177] gatherer "clusterconfig" function "oauths" took 40.814598ms to process 1 records
I0428 11:10:53.837726 1 tasks_processing.go:74] worker 63 stopped.
I0428 11:10:53.837816 1 recorder.go:75] Recording config/apiserver with fingerprint=2117584cc63deeadeec8c8f37302ea5f91c4155989a211c2c3bc99fa7e183fe2
I0428 11:10:53.837833 1 tasks_processing.go:74] worker 54 stopped.
I0428 11:10:53.837842 1 gather.go:177] gatherer "clusterconfig" function "cluster_apiserver" took 40.04643ms to process 1 records
I0428 11:10:53.837896 1 gather.go:177] gatherer "clusterconfig" function "openstack_version" took 39.716814ms to process 0 records
I0428 11:10:53.837975 1 tasks_processing.go:74] worker 16 stopped.
I0428 11:10:53.840075 1 tasks_processing.go:74] worker 9 stopped.
I0428 11:10:53.840085 1 gather.go:177] gatherer "clusterconfig" function "jaegers" took 43.127139ms to process 0 records
E0428 11:10:53.840107 1 gather.go:140] gatherer "clusterconfig" function "pod_network_connectivity_checks" failed with the error: the server could not find the requested resource (get podnetworkconnectivitychecks.controlplane.operator.openshift.io)
I0428 11:10:53.840113 1 gather.go:177] gatherer "clusterconfig" function "pod_network_connectivity_checks" took 40.61806ms to process 0 records
I0428 11:10:53.840119 1 gather.go:177] gatherer "clusterconfig" function "container_runtime_configs" took 43.122928ms to process 0 records
I0428 11:10:53.840124 1 tasks_processing.go:74] worker 58 stopped.
I0428 11:10:53.840124 1 tasks_processing.go:74] worker 27 stopped.
I0428 11:10:53.855310 1 tasks_processing.go:74] worker 45 stopped.
I0428 11:10:53.855325 1 gather.go:177] gatherer "clusterconfig" function "machine_sets" took 59.520714ms to process 0 records
I0428 11:10:53.855334 1 gather.go:177] gatherer "clusterconfig" function "openshift_machine_api_events" took 55.56503ms to process 0 records
I0428 11:10:53.855340 1 tasks_processing.go:74] worker 5 stopped.
I0428 11:10:53.855505 1 tasks_processing.go:74] worker 25 stopped.
I0428 11:10:53.855523 1 gather.go:177] gatherer "clusterconfig" function "nodenetworkconfigurationpolicies" took 58.92188ms to process 0 records
I0428 11:10:53.855578 1 tasks_processing.go:74] worker 43 stopped.
E0428 11:10:53.855594 1 gather.go:140] gatherer "clusterconfig" function "machine_healthchecks" failed with the error: machinehealthchecks.machine.openshift.io is forbidden: User "system:serviceaccount:openshift-insights:gather" cannot list resource "machinehealthchecks" in API group "machine.openshift.io" at the cluster scope
I0428 11:10:53.855602 1 gather.go:177] gatherer "clusterconfig" function "machine_healthchecks" took 55.840153ms to process 0 records
I0428 11:10:53.855686 1 tasks_processing.go:74] worker 53 stopped.
I0428 11:10:53.855763 1 recorder.go:75] Recording config/ingress with fingerprint=a1a3a745dfb50fbb7b521f9a481d75ffd667aefc92f36c21efb89f1189206818
I0428 11:10:53.855779 1 gather.go:177] gatherer "clusterconfig" function "ingress" took 57.714805ms to process 1 records
I0428 11:10:53.855910 1 tasks_processing.go:74] worker 15 stopped.
I0428 11:10:53.855995 1 recorder.go:75] Recording config/storage/storageclasses/gp2-csi with fingerprint=f99a2aa98f658637b71f7b834b553089462aaf828cc28955700f03ebb173bc39
I0428 11:10:53.856018 1 recorder.go:75] Recording config/storage/storageclasses/gp3-csi with fingerprint=727cbcd4c0e0c406d3a3942bc5c72fa221fd14f97f0890b1fb43dbe92658c9ca
I0428 11:10:53.856028 1 gather.go:177] gatherer "clusterconfig" function "storage_classes" took 56.636483ms to process 2 records
I0428 11:10:53.856192 1 tasks_processing.go:74] worker 12 stopped.
I0428 11:10:53.856335 1 recorder.go:75] Recording config/clusteroperator/imageregistry.operator.openshift.io/imagepruner/cluster with fingerprint=4db0b2c37074ec985e4612579e959071d833ede6858913972360b726e1cda455
I0428 11:10:53.856353 1 gather.go:177] gatherer "clusterconfig" function "image_pruners" took 56.447795ms to process 1 records
I0428 11:10:53.857286 1 tasks_processing.go:74] worker 41 stopped.
I0428 11:10:53.857398 1 recorder.go:75] Recording config/network with fingerprint=b182006587ed146f1448da1338d0361078dc3f30633cc4f1c0a5ea15702a0ae4
I0428 11:10:53.857410 1 gather.go:177] gatherer "clusterconfig" function "networks" took 60.519761ms to process 1 records
I0428 11:10:53.857487 1 tasks_processing.go:74] worker 47 stopped.
I0428 11:10:53.857539 1 recorder.go:75] Recording config/mutatingwebhookconfigurations/aws-pod-identity with fingerprint=35acd26a5d297fa6587c157e672cdb099be40eb895f994021c31a5ee9a1d3dba
I0428 11:10:53.857587 1 recorder.go:75] Recording config/mutatingwebhookconfigurations/sre-podimagespec-mutation with fingerprint=437a3eda6cd0dab50f7007da14dd61c639dfb1e18770e187182accf210491403
I0428 11:10:53.857630 1 recorder.go:75] Recording config/mutatingwebhookconfigurations/sre-service-mutation with fingerprint=2300ee99ecc7b532af8e585e38c170834c2e10621e33e689f77e7ef7cca818d6
I0428 11:10:53.857639 1 gather.go:177] gatherer "clusterconfig" function "mutating_webhook_configurations" took 60.575899ms to process 3 records
I0428 11:10:53.857648 1 gather.go:177] gatherer "clusterconfig" function "machine_config_pools" took 58.081104ms to process 0 records
I0428 11:10:53.857661 1 tasks_processing.go:74] worker 29 stopped.
I0428 11:10:53.857693 1 recorder.go:75] Recording config/schedulers/cluster with fingerprint=f678c6bbef7b944d36a20d9ad6ae17dbab6231f009a36b3328dfbee021355b2b
I0428 11:10:53.857701 1 gather.go:177] gatherer "clusterconfig" function "schedulers" took 60.350324ms to process 1 records
I0428 11:10:53.857705 1 gather.go:177] gatherer "clusterconfig" function "monitoring_persistent_volumes" took 58.026435ms to process 0 records
I0428 11:10:53.857704 1 tasks_processing.go:74] worker 33 stopped.
I0428 11:10:53.857711 1 tasks_processing.go:74] worker 28 stopped.
I0428 11:10:53.857767 1 gather_logs.go:145] no pods in namespace were found
I0428 11:10:53.857781 1 tasks_processing.go:74] worker 35 stopped.
I0428 11:10:53.857787 1 gather.go:177] gatherer "clusterconfig" function "qemu_kubevirt_launcher_logs" took 60.863032ms to process 0 records
I0428 11:10:53.858542 1 tasks_processing.go:74] worker 56 stopped.
I0428 11:10:53.858633 1 controller.go:212] Source periodic-clusterconfig *controllerstatus.Simple is not ready
I0428 11:10:53.858644 1 controller.go:212] Source periodic-conditional *controllerstatus.Simple is not ready
I0428 11:10:53.858647 1 controller.go:212] Source periodic-workloads *controllerstatus.Simple is not ready
I0428 11:10:53.858651 1 controller.go:212] Source scaController *sca.Controller is not ready
I0428 11:10:53.858654 1 controller.go:212] Source clusterTransferController *clustertransfer.Controller is not ready
I0428 11:10:53.858670 1 controller.go:489] The operator is still being initialized
I0428 11:10:53.858678 1 controller.go:512] The operator is healthy
I0428 11:10:53.858733 1 recorder.go:75] Recording config/validatingwebhookconfigurations/multus.openshift.io with fingerprint=8897d370a81fce78e254725e4e3ca1c0e68d1d5901cd621e9124f346b8480738
I0428 11:10:53.858870 1 recorder.go:75] Recording config/validatingwebhookconfigurations/network-node-identity.openshift.io with fingerprint=5bbf92aa9987a14291fe27296a0cbea3f2c1db9c2acb1e5eab3bdc16229e0d8f
I0428 11:10:53.858902 1 recorder.go:75] Recording config/validatingwebhookconfigurations/performance-addon-operator with fingerprint=cec656a78667960743f2899fd4c5dd7a494e05688fafb05ff3f199c28c5fe5f4
I0428 11:10:53.858967 1 recorder.go:75] Recording config/validatingwebhookconfigurations/sre-clusterrolebindings-validation with fingerprint=fa8df1837648e8048ffd45c44040111d6d43fccedb08b4f20bd2006b4fc8cd7e
I0428 11:10:53.859025 1 recorder.go:75] Recording config/validatingwebhookconfigurations/sre-clusterroles-validation with fingerprint=e6ab36223386a19fa3b7732766365db1ce60c732ca70ff8ef8d050c03e6e586b
I0428 11:10:53.859089 1 recorder.go:75] Recording config/validatingwebhookconfigurations/sre-ingress-config-validation with fingerprint=78f8375ad1e5f4d7bdcb488a5896a99584a2034bbe1bcd882651388d40e2e42f
I0428 11:10:53.859158 1 recorder.go:75] Recording config/validatingwebhookconfigurations/sre-network-operator-validation with fingerprint=308de0196b750c6a5e6fcd34a358f41215cd2d91ce32af70542f9d7a45767054
I0428 11:10:53.859231 1 recorder.go:75] Recording config/validatingwebhookconfigurations/sre-regular-user-validation with fingerprint=0b447e486a10b41f387f34b4f13a41e8581052dcc78dfae0cb96a8177a6d929a
I0428 11:10:53.859279 1 recorder.go:75] Recording config/validatingwebhookconfigurations/sre-scc-validation with fingerprint=a8bca54d6cfbc2a037071a05f6e7396d34445504a6fd52a2b40fdde41be1e59d
I0428 11:10:53.859337 1 recorder.go:75] Recording config/validatingwebhookconfigurations/sre-serviceaccount-validation with fingerprint=1da7ea905f1eb64408376067ed12d8bbfed35e7eb5177fceb68ec218f7ee9a8e
I0428 11:10:53.859393 1 recorder.go:75] Recording config/validatingwebhookconfigurations/sre-techpreviewnoupgrade-validation with fingerprint=25b9c4c2e3f0135b092a52902de8fbbc59616134d410a113ac19b4e1c4e1d0a0
I0428 11:10:53.859410 1 gather.go:177] gatherer "clusterconfig" function "validating_webhook_configurations" took 62.615488ms to process 11 records
I0428 11:10:53.861137 1 tasks_processing.go:74] worker 21 stopped.
I0428 11:10:53.861157 1 gather.go:177] gatherer "clusterconfig" function "certificate_signing_requests" took 65.305104ms to process 0 records
I0428 11:10:53.864431 1 shared_informer.go:320] Caches are synced for RequestHeaderAuthRequestController
I0428 11:10:53.864442 1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0428 11:10:53.864467 1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I0428 11:10:53.868113 1 base_controller.go:82] Caches are synced for ConfigController
I0428 11:10:53.868126 1 base_controller.go:119] Starting #1 worker of ConfigController controller ...
I0428 11:10:53.880627 1 prometheus_rules.go:88] Prometheus rules successfully created
I0428 11:10:53.885404 1 tasks_processing.go:74] worker 34 stopped.
I0428 11:10:53.885554 1 configmapobserver.go:84] configmaps "insights-config" not found
I0428 11:10:53.885579 1 recorder.go:75] Recording cluster-scoped-resources/rbac.authorization.k8s.io/clusterroles/admin with fingerprint=2eb43fb3c3a84f825c2d02a560035b0439dd8d1294c360fa32c28c4b52abced6
I0428 11:10:53.885675 1 recorder.go:75] Recording cluster-scoped-resources/rbac.authorization.k8s.io/clusterroles/edit with fingerprint=d24b3330dcef4fddd749157a61ab7fb8074989e3460c30f3086d8251956e53f0
I0428 11:10:53.885686 1 gather.go:177] gatherer "clusterconfig" function "clusterroles" took 85.685807ms to process 2 records
I0428 11:10:53.885975 1 tasks_processing.go:74] worker 11 stopped.
I0428 11:10:53.886552 1 recorder.go:75] Recording config/crd/volumesnapshots.snapshot.storage.k8s.io with fingerprint=442d25708c77138e6ec5179c4fa15e312d1cbdc8f8396a70246d07736e13668b
I0428 11:10:53.886834 1 recorder.go:75] Recording config/crd/volumesnapshotcontents.snapshot.storage.k8s.io with fingerprint=f29e70667fa8055792c327e67a61f2844674859c1066d873af4c902db453cc36
I0428 11:10:53.886850 1 gather.go:177] gatherer "clusterconfig" function "crds" took 89.01691ms to process 2 records
I0428 11:10:53.886903 1 recorder.go:75] Recording config/olm_operators with fingerprint=5610ee69fdd591e058d4977b34e90b3a8ebc9b0ae4d30095937b6eabf67494bf
I0428 11:10:53.886913 1 gather.go:177] gatherer "clusterconfig" function "olm_operators" took 90.475221ms to process 1 records
I0428 11:10:53.886921 1 tasks_processing.go:74] worker 50 stopped.
E0428 11:10:53.892559 1 cluster_transfer.go:95] failed to pull cluster transfer: unable to retrieve cluster transfer data from https://api.openshift.com/api/accounts_mgmt/v1/cluster_transfers/: Get "https://api.openshift.com/api/accounts_mgmt/v1/cluster_transfers/?search=cluster_uuid+is+%277d71e732-364e-42c1-bc45-a40669a21c59%27+and+status+is+%27accepted%27": dial tcp: lookup api.openshift.com on 172.30.0.10:53: read udp 10.128.0.16:56585->172.30.0.10:53: read: connection refused
I0428 11:10:53.892628 1 controllerstatus.go:80] name=clusterTransferController healthy=true reason=Disconnected message=failed to pull cluster transfer: unable to retrieve cluster transfer data from https://api.openshift.com/api/accounts_mgmt/v1/cluster_transfers/: Get "https://api.openshift.com/api/accounts_mgmt/v1/cluster_transfers/?search=cluster_uuid+is+%277d71e732-364e-42c1-bc45-a40669a21c59%27+and+status+is+%27accepted%27": dial tcp: lookup api.openshift.com on 172.30.0.10:53: read udp 10.128.0.16:56585->172.30.0.10:53: read: connection refused
I0428 11:10:53.895719 1 tasks_processing.go:74] worker 60 stopped.
I0428 11:10:53.895736 1 gather.go:177] gatherer "clusterconfig" function "aggregated_monitoring_cr_names" took 100.10113ms to process 0 records
I0428 11:10:53.900313 1 tasks_processing.go:74] worker 32 stopped.
I0428 11:10:53.900326 1 gather.go:177] gatherer "clusterconfig" function "number_of_pods_and_netnamespaces_with_sdn_annotations" took 104.588061ms to process 0 records
I0428 11:10:53.914079 1 tasks_processing.go:74] worker 13 stopped.
E0428 11:10:53.914108 1 gather.go:140] gatherer "clusterconfig" function "config_maps" failed with the error: configmaps "cluster-monitoring-config" not found
E0428 11:10:53.914116 1 gather.go:140] gatherer "clusterconfig" function "config_maps" failed with the error: configmaps "gateway-mode-config" not found
E0428 11:10:53.914120 1 gather.go:140] gatherer "clusterconfig" function "config_maps" failed with the error: configmaps "insights-config" not found
I0428 11:10:53.914130 1 recorder.go:75] Recording config/configmaps/openshift-config/installer-images/images.json with fingerprint=26b6661162b099a0f5a279859b4f46c867929a79d9a4a41fde4be4e6fe138018
I0428 11:10:53.914154 1 recorder.go:75] Recording config/configmaps/openshift-config/kube-root-ca.crt/ca.crt with fingerprint=d476c7d3f5b104863f08f481b1264dcc68cc272ecefb0ecb709b18a6afab034d
I0428 11:10:53.914162 1 recorder.go:75] Recording config/configmaps/openshift-config/openshift-install/invoker with fingerprint=76b482f683cd3ef9da02debac5b26080a5aeb06ff768ee5c21117514dff29d8a
I0428 11:10:53.914166 1 recorder.go:75] Recording config/configmaps/openshift-config/openshift-install/version with fingerprint=c93090eb0d2a4736885abeb79c91680cfd01fda46464f83456b085d4dc8239f0
I0428 11:10:53.914171 1 recorder.go:75] Recording config/configmaps/openshift-config/openshift-service-ca.crt/service-ca.crt with fingerprint=e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
I0428 11:10:53.914221 1 recorder.go:75] Recording config/configmaps/openshift-config/rosa-brand-logo/rosa-brand-logo.svg with fingerprint=6ed8ca4dd7a8eee7249182bc006e9649ce84d76c551ddfaaa33e55d8c4cc1ed0
I0428 11:10:53.914229 1 recorder.go:75] Recording config/configmaps/kube-system/cluster-config-v1/install-config with fingerprint=ab3811c6b83fd7b8e920094cfa3080d1b4ee3c35ec4c8379437b21d27bd6608d
I0428 11:10:53.914234 1 gather.go:177] gatherer "clusterconfig" function "config_maps" took 118.295141ms to process 7 records
I0428 11:10:53.915289 1 tasks_processing.go:74] worker 10 stopped.
I0428 11:10:53.915300 1 gather.go:177] gatherer "clusterconfig" function "node_logs" took 116.195077ms to process 0 records
I0428 11:10:53.928847 1 base_controller.go:82] Caches are synced for LoggingSyncer
I0428 11:10:53.928859 1 base_controller.go:119] Starting #1 worker of LoggingSyncer controller ...
I0428 11:10:53.936430 1 requests.go:205] Asking for SCA certificate with "{"arch": ["x86_64"]}" payload
W0428 11:10:53.939669 1 sca.go:161] Failed to pull SCA certs from https://api.openshift.com/api/accounts_mgmt/v1/entitlement_certificates: unable to retrieve SCA certs data from https://api.openshift.com/api/accounts_mgmt/v1/entitlement_certificates: Post "https://api.openshift.com/api/accounts_mgmt/v1/entitlement_certificates": dial tcp: lookup api.openshift.com on 172.30.0.10:53: read udp 10.128.0.16:49345->172.30.0.10:53: read: connection refused
I0428 11:10:53.939681 1 controllerstatus.go:80] name=scaController healthy=true reason=NonHTTPError message=Failed to pull SCA certs from https://api.openshift.com/api/accounts_mgmt/v1/entitlement_certificates: unable to retrieve SCA certs data from https://api.openshift.com/api/accounts_mgmt/v1/entitlement_certificates: Post "https://api.openshift.com/api/accounts_mgmt/v1/entitlement_certificates": dial tcp: lookup api.openshift.com on 172.30.0.10:53: read udp 10.128.0.16:49345->172.30.0.10:53: read: connection refused
I0428 11:10:53.940720 1 tasks_processing.go:74] worker 48 stopped.
I0428 11:10:53.940979 1 recorder.go:75] Recording config/version with fingerprint=4dad20f9560c8663cbfcdcbb834ff2cae7f61054b97499b65f8944c3533a1090
I0428 11:10:53.940992 1 recorder.go:75] Recording config/id with fingerprint=218dd3d264b37d94a85dc30e68469a029e1873286128f4ff9b0c71d7a7cb7495
I0428 11:10:53.940998 1 gather.go:177] gatherer "clusterconfig" function "version" took 144.71155ms to process 2 records
I0428 11:10:53.960694 1 tasks_processing.go:74] worker 51 stopped.
I0428 11:10:53.960770 1 recorder.go:75] Recording config/running_containers with fingerprint=9400f2931e3adfd9406e91e80aeec992bf0b6827b0d783daa469c4f759c4d807
I0428 11:10:53.960788 1 gather.go:177] gatherer "clusterconfig" function "container_images" took 160.947182ms to process 1 records
I0428 11:10:53.969690 1 gather_cluster_operators.go:184] Unable to get dnsrecords.ingress.operator.openshift.io resource due to: dnsrecords.ingress.operator.openshift.io "default" not found
I0428 11:10:53.975114 1 gather_cluster_operators.go:184] Unable to get dnsrecords.ingress.operator.openshift.io resource due to: dnsrecords.ingress.operator.openshift.io "default" not found
I0428 11:10:53.986629 1 tasks_processing.go:74] worker 0 stopped.
E0428 11:10:53.986653 1 gather.go:140] gatherer "clusterconfig" function "ingress_certificates" failed with the error: failed to get secret 'router-certs-default' in namespace 'openshift-ingress': secrets "router-certs-default" not found
E0428 11:10:53.986662 1 gather.go:140] gatherer "clusterconfig" function "ingress_certificates" failed with the error: failed to get secret '2pv452d4je1stus611mdg5b7ospocvkj-primary-cert-bundle-secret' in namespace 'openshift-ingress-operator': secrets "2pv452d4je1stus611mdg5b7ospocvkj-primary-cert-bundle-secret" not found
I0428 11:10:53.986716 1 recorder.go:75] Recording aggregated/ingress_controllers_certs with fingerprint=793a3ff90dbc7f1ff3aec4fc3dd3eeb0d19fcce4c4af0ef79bac77dc0f219092
I0428 11:10:53.986729 1 gather.go:177] gatherer "clusterconfig" function "ingress_certificates" took 191.390983ms to process 1 records
I0428 11:10:54.015674 1 tasks_processing.go:74] worker 18 stopped.
I0428 11:10:54.015701 1 recorder.go:75] Recording aggregated/unused_machine_configs_count with fingerprint=4bfc9fa984e5dfcd45848faaf05269de7619bf42edf9f781751af5ee05c1a499
I0428 11:10:54.015711 1 gather.go:177] gatherer "clusterconfig" function "machine_configs" took 215.981375ms to process 1 records
I0428 11:10:54.269905 1 gather_cluster_operator_pods_and_events.go:121] Found 18 pods with 21 containers
I0428 11:10:54.269918 1 gather_cluster_operator_pods_and_events.go:235] Maximum buffer size: 1198372 bytes
I0428 11:10:54.270354 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for dns container dns-default-89g47 pod in namespace openshift-dns (previous: false).
I0428 11:10:54.500235 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for dns-default-89g47 pod in namespace openshift-dns for failing operator dns (previous: false): "container \"dns\" in pod \"dns-default-89g47\" is waiting to start: ContainerCreating"
I0428 11:10:54.500250 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"dns\" in pod \"dns-default-89g47\" is waiting to start: ContainerCreating"
I0428 11:10:54.500258 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for kube-rbac-proxy container dns-default-89g47 pod in namespace openshift-dns (previous: false).
I0428 11:10:54.673410 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for dns-default-89g47 pod in namespace openshift-dns for failing operator kube-rbac-proxy (previous: false): "container \"kube-rbac-proxy\" in pod \"dns-default-89g47\" is waiting to start: ContainerCreating"
I0428 11:10:54.673424 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"kube-rbac-proxy\" in pod \"dns-default-89g47\" is waiting to start: ContainerCreating"
I0428 11:10:54.673434 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for dns container dns-default-khzt4 pod in namespace openshift-dns (previous: false).
W0428 11:10:54.815752 1 gather_dvo_metrics.go:210] Failed to read the DVO metrics. Trying again.
I0428 11:10:54.901543 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for dns-default-khzt4 pod in namespace openshift-dns for failing operator dns (previous: false): "container \"dns\" in pod \"dns-default-khzt4\" is waiting to start: ContainerCreating"
I0428 11:10:54.901562 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"dns\" in pod \"dns-default-khzt4\" is waiting to start: ContainerCreating"
I0428 11:10:54.901575 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for kube-rbac-proxy container dns-default-khzt4 pod in namespace openshift-dns (previous: false).
I0428 11:10:55.071537 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for dns-default-khzt4 pod in namespace openshift-dns for failing operator kube-rbac-proxy (previous: false): "container \"kube-rbac-proxy\" in pod \"dns-default-khzt4\" is waiting to start: ContainerCreating"
I0428 11:10:55.071551 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"kube-rbac-proxy\" in pod \"dns-default-khzt4\" is waiting to start: ContainerCreating"
I0428 11:10:55.071581 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for dns container dns-default-rw7zm pod in namespace openshift-dns (previous: false).
I0428 11:10:55.292502 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for dns-default-rw7zm pod in namespace openshift-dns for failing operator dns (previous: false): "container \"dns\" in pod \"dns-default-rw7zm\" is waiting to start: ContainerCreating"
I0428 11:10:55.292517 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"dns\" in pod \"dns-default-rw7zm\" is waiting to start: ContainerCreating"
I0428 11:10:55.292525 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for kube-rbac-proxy container dns-default-rw7zm pod in namespace openshift-dns (previous: false).
I0428 11:10:55.470190 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for dns-default-rw7zm pod in namespace openshift-dns for failing operator kube-rbac-proxy (previous: false): "container \"kube-rbac-proxy\" in pod \"dns-default-rw7zm\" is waiting to start: ContainerCreating"
I0428 11:10:55.470204 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"kube-rbac-proxy\" in pod \"dns-default-rw7zm\" is waiting to start: ContainerCreating"
I0428 11:10:55.470214 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for dns-node-resolver container node-resolver-5sshx pod in namespace openshift-dns (previous: false).
I0428 11:10:55.503612 1 gather_cluster_operators.go:184] Unable to get configs.samples.operator.openshift.io resource due to: configs.samples.operator.openshift.io "cluster" not found
I0428 11:10:55.672838 1 gather_cluster_operator_pods_and_events.go:280] Error: "log buffer is empty"
I0428 11:10:55.672854 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for dns-node-resolver container node-resolver-prqk9 pod in namespace openshift-dns (previous: false).
W0428 11:10:55.815945 1 gather_dvo_metrics.go:210] Failed to read the DVO metrics. Trying again.
I0428 11:10:55.871535 1 gather_cluster_operator_pods_and_events.go:280] Error: "log buffer is empty"
I0428 11:10:55.871553 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for dns-node-resolver container node-resolver-zpqxk pod in namespace openshift-dns (previous: false).
I0428 11:10:56.076231 1 gather_cluster_operator_pods_and_events.go:280] Error: "log buffer is empty"
I0428 11:10:56.076287 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for registry container image-registry-588dc4b66b-l68w9 pod in namespace openshift-image-registry (previous: false).
I0428 11:10:56.107090 1 tasks_processing.go:74] worker 26 stopped.
I0428 11:10:56.107271 1 recorder.go:75] Recording config/clusteroperator/console with fingerprint=73d57cf29cb9058f630542475e822ce19ba5e26392289f40c005a230a5c4df95
I0428 11:10:56.107352 1 recorder.go:75] Recording config/clusteroperator/csi-snapshot-controller with fingerprint=4a70b312a384aa00a9875de59f16495dddae0fd4dbf8795cd265b534af83feb4
I0428 11:10:56.107430 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/csisnapshotcontroller/cluster with fingerprint=5adc514f4b63e2f1ecc68bf6f9c0af70c5eea04522a49524e102721b1c41f80e
I0428 11:10:56.107501 1 recorder.go:75] Recording config/clusteroperator/dns with fingerprint=53b67dc8d620b81f07ed2bddc3966788587838864729dd0d896ced5f4996fe89
I0428 11:10:56.107548 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/dns/default with fingerprint=9e7b4ce029030d3d8c3b49af92c556acdcc415000b40d3f969dbdc42c432b47f
I0428 11:10:56.107648 1 recorder.go:75] Recording config/clusteroperator/image-registry with fingerprint=38aecfb387905bf99124df62faff3cdc8d9e7f32c47a0087d8293955a93cf55e
I0428 11:10:56.107712 1 recorder.go:75] Recording config/clusteroperator/ingress with fingerprint=b17ded1262fc038000402565f9d241a851ec810b3787b5856f9660d52b163297
I0428 11:10:56.107790 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/ingresscontroller/openshift-ingress-operator/default with fingerprint=952233e939bf2ec2c7f547fa4c41c05c514f1b8f52cdf51e84032f887a2050bf
I0428 11:10:56.107863 1 recorder.go:75] Recording config/clusteroperator/insights with fingerprint=6f394a03b7fb5226a6f41d54835073e1c1ec400fca6e97bfcc2bdb82a4b93c7a
I0428 11:10:56.107896 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/insightsoperator/cluster with fingerprint=e5ff11d57817f84a678f6fa9565af55bd1120227c16a21933637ab62675a6d70
I0428 11:10:56.107950 1 recorder.go:75] Recording config/clusteroperator/kube-apiserver with fingerprint=b713978cfba68e789def4b80cba7f70126a08418f151b4f6c6fbf547f6125f00
I0428 11:10:56.107982 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/kubeapiserver/cluster with fingerprint=51503bf0b784fcf65ea46bcaf1f72ac1a5c4d5dc211934f18f27871efed05762
I0428 11:10:56.108026 1 recorder.go:75] Recording config/clusteroperator/kube-controller-manager with fingerprint=4ff456ba6cd4727f4a9c7d7cf479895e6f1d994b93e3f9d1f13931aafcfb5191
I0428 11:10:56.108074 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/kubecontrollermanager/cluster with fingerprint=ce90c0d4f367d7da085074268031798382ae7c54fdcb0a21f15a4818fe308c11
I0428 11:10:56.108514 1 recorder.go:75] Recording config/clusteroperator/kube-scheduler with fingerprint=f501fba5d71260eb7174dd78be8b386d6703e428e8178f0e821631d86ac64929
I0428 11:10:56.108545 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/kubescheduler/cluster with fingerprint=f2940fb9fd20c19951dfc295eb363b7fba0c505f5ae61f01967a063099e6b60a
I0428 11:10:56.108567 1 recorder.go:75] Recording config/clusteroperator/kube-storage-version-migrator with fingerprint=88e3c3ce8ba376e65d1f8523ad51c6c872d3f98871fdac513bcc7bb4749984da
I0428 11:10:56.108578 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/kubestorageversionmigrator/cluster with fingerprint=9351181aa7e6ada41ef581ab31e13516c6b934cc95710154bafb2eb222cb58db
I0428 11:10:56.108596 1 recorder.go:75] Recording config/clusteroperator/monitoring with fingerprint=8702843c8850b01e73f96615eb0c8adec3b553e90130ee5d7c1fe5ce9e902aff
I0428 11:10:56.108766 1 recorder.go:75] Recording config/clusteroperator/network with fingerprint=49e9c6c6921943878a2d670e5b8a7779c8077dd22a12573f9867881d61a42447
I0428 11:10:56.108778 1 recorder.go:75] Recording config/clusteroperator/network.operator.openshift.io/operatorpki/openshift-ovn-kubernetes/ovn with fingerprint=626a89d20e0deaed5b6dfb533acfe65f4bb1618bd200a703b62e60c5d16d94ab
I0428 11:10:56.108786 1 recorder.go:75] Recording config/clusteroperator/network.operator.openshift.io/operatorpki/openshift-ovn-kubernetes/signer with fingerprint=90410b16914712b85b3c4578716ad8c0ae072e688f4cd1e022bf76f20da3506d
I0428 11:10:56.108809 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/network/cluster with fingerprint=a93d15eaecb455a0e40ecb2826eeecc1533899204ddd3c3921d15ab70af7ae75
I0428 11:10:56.108832 1 recorder.go:75] Recording config/clusteroperator/node-tuning with fingerprint=fe67a61eecce03919d09852330afdd03ceb71fe7f0d402b83b333ba0adb198bf
I0428 11:10:56.108857 1 recorder.go:75] Recording config/clusteroperator/openshift-apiserver with fingerprint=b841f8318a12eaac3d522cc33c67c20c07818ca3fcd92702860c9c36265b7dd8
I0428 11:10:56.108868 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/openshiftapiserver/cluster with fingerprint=e712e6cf27339b441e4ed1f4cde91dbde7e952698ba93407e4457db63a4a4c76
I0428 11:10:56.108883 1 recorder.go:75] Recording config/clusteroperator/openshift-controller-manager with fingerprint=1ac47b71d38089d524925da5ed722927b62343999f0dd4f10d873f365558c411
I0428 11:10:56.108892 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/openshiftcontrollermanager/cluster with fingerprint=d71a0f4672f9b45d9fc8293bf1687afc650fd28d32e2e30de27523fe7b4eadf7
I0428 11:10:56.108905 1 recorder.go:75] Recording config/clusteroperator/openshift-samples with fingerprint=33773c37dd371b221d43e44ff859588484367dbe8045270f40c54a7a807789c7
I0428 11:10:56.108920 1 recorder.go:75] Recording config/clusteroperator/operator-lifecycle-manager with fingerprint=c87ef23987b6057e937882c2e1b9c95f02e528023d74c404caeb7f738e00f3c1
I0428 11:10:56.108933 1 recorder.go:75] Recording config/clusteroperator/operator-lifecycle-manager-catalog with fingerprint=e1302135765d17a5eaaf682e00f9690e1b57d22888d2a91909d45d4f79eeeddd
I0428 11:10:56.108949 1 recorder.go:75] Recording config/clusteroperator/operator-lifecycle-manager-packageserver with fingerprint=4f20f28d86f761d07cc0bbdf62e98e1ea95a263a4e628069a448fbb2e7249b94
I0428 11:10:56.108968 1 recorder.go:75] Recording config/clusteroperator/service-ca with fingerprint=87b927c6b5fd45ccf13b26316ee335c132a2c87349cdb4f01d577148caeee4dd
I0428 11:10:56.108978 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/serviceca/cluster with fingerprint=812f7edc2cdb30e61e7f2b29454357a40b1a507a4b0c2b7729193b67f0e3b4aa
I0428 11:10:56.109007 1 recorder.go:75] Recording config/clusteroperator/storage with fingerprint=f02a421e6ece94c717249def7dcac7f42a2c0a6a25c5f328710df2a8eab7bfa3
I0428 11:10:56.109023 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/clustercsidriver/ebs.csi.aws.com with fingerprint=510064d6f6bcced87ab5bd2ddaff3d0edd7f93f4a4f7af2641f29fc53ffab21e
I0428 11:10:56.109032 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/storage/cluster with fingerprint=8e480f8c1ce1b39baac42d8ec780c57c2592929ae0c801b61ffad49ba13f33ad
I0428 11:10:56.109040 1 gather.go:177] gatherer "clusterconfig" function "operators" took 2.307964807s to process 37 records
I0428 11:10:56.272156 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for image-registry-588dc4b66b-l68w9 pod in namespace openshift-image-registry for failing operator registry (previous: false): "container \"registry\" in pod \"image-registry-588dc4b66b-l68w9\" is waiting to start: ContainerCreating"
I0428 11:10:56.272173 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"registry\" in pod \"image-registry-588dc4b66b-l68w9\" is waiting to start: ContainerCreating"
I0428 11:10:56.272210 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for registry container image-registry-7946545b77-66qjn pod in namespace openshift-image-registry (previous: false).
I0428 11:10:56.481273 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for image-registry-7946545b77-66qjn pod in namespace openshift-image-registry for failing operator registry (previous: false): "container \"registry\" in pod \"image-registry-7946545b77-66qjn\" is waiting to start: ContainerCreating"
I0428 11:10:56.481294 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"registry\" in pod \"image-registry-7946545b77-66qjn\" is waiting to start: ContainerCreating"
I0428 11:10:56.481327 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for registry container image-registry-7946545b77-868tb pod in namespace openshift-image-registry (previous: false).
I0428 11:10:56.670625 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for image-registry-7946545b77-868tb pod in namespace openshift-image-registry for failing operator registry (previous: false): "container \"registry\" in pod \"image-registry-7946545b77-868tb\" is waiting to start: ContainerCreating"
I0428 11:10:56.670643 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"registry\" in pod \"image-registry-7946545b77-868tb\" is waiting to start: ContainerCreating"
I0428 11:10:56.670654 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for node-ca container node-ca-45rbz pod in namespace openshift-image-registry (previous: false).
W0428 11:10:56.816144 1 gather_dvo_metrics.go:210] Failed to read the DVO metrics. Trying again.
I0428 11:10:56.876390 1 gather_cluster_operator_pods_and_events.go:280] Error: "log buffer is empty"
I0428 11:10:56.876408 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for node-ca container node-ca-cdw6c pod in namespace openshift-image-registry (previous: false).
I0428 11:10:57.070337 1 gather_cluster_operator_pods_and_events.go:280] Error: "log buffer is empty"
I0428 11:10:57.070355 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for node-ca container node-ca-lj6dr pod in namespace openshift-image-registry (previous: false).
I0428 11:10:57.273303 1 gather_cluster_operator_pods_and_events.go:280] Error: "log buffer is empty"
I0428 11:10:57.273328 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for router container router-default-5cd7958c4c-wlq5w pod in namespace openshift-ingress (previous: false).
I0428 11:10:57.482191 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for router-default-5cd7958c4c-wlq5w pod in namespace openshift-ingress for failing operator router (previous: false): "container \"router\" in pod \"router-default-5cd7958c4c-wlq5w\" is waiting to start: ContainerCreating"
I0428 11:10:57.482210 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"router\" in pod \"router-default-5cd7958c4c-wlq5w\" is waiting to start: ContainerCreating"
I0428 11:10:57.482222 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for router container router-default-8675d896b6-f5dxp pod in namespace openshift-ingress (previous: false).
I0428 11:10:57.675313 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for router-default-8675d896b6-f5dxp pod in namespace openshift-ingress for failing operator router (previous: false): "container \"router\" in pod \"router-default-8675d896b6-f5dxp\" is waiting to start: ContainerCreating"
I0428 11:10:57.675336 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"router\" in pod \"router-default-8675d896b6-f5dxp\" is waiting to start: ContainerCreating"
I0428 11:10:57.675352 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for router container router-default-8675d896b6-ts4dx pod in namespace openshift-ingress (previous: false).
W0428 11:10:57.816348 1 gather_dvo_metrics.go:210] Failed to read the DVO metrics. Trying again.
I0428 11:10:57.869976 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for router-default-8675d896b6-ts4dx pod in namespace openshift-ingress for failing operator router (previous: false): "container \"router\" in pod \"router-default-8675d896b6-ts4dx\" is waiting to start: ContainerCreating"
I0428 11:10:57.869994 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"router\" in pod \"router-default-8675d896b6-ts4dx\" is waiting to start: ContainerCreating"
I0428 11:10:57.870005 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for serve-healthcheck-canary container ingress-canary-swrnh pod in namespace openshift-ingress-canary (previous: false).
I0428 11:10:58.073086 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for ingress-canary-swrnh pod in namespace openshift-ingress-canary for failing operator serve-healthcheck-canary (previous: false): "container \"serve-healthcheck-canary\" in pod \"ingress-canary-swrnh\" is waiting to start: ContainerCreating"
I0428 11:10:58.073115 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"serve-healthcheck-canary\" in pod \"ingress-canary-swrnh\" is waiting to start: ContainerCreating"
I0428 11:10:58.073142 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for serve-healthcheck-canary container ingress-canary-x2bqr pod in namespace openshift-ingress-canary (previous: false).
I0428 11:10:58.271729 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for ingress-canary-x2bqr pod in namespace openshift-ingress-canary for failing operator serve-healthcheck-canary (previous: false): "container \"serve-healthcheck-canary\" in pod \"ingress-canary-x2bqr\" is waiting to start: ContainerCreating"
I0428 11:10:58.271746 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"serve-healthcheck-canary\" in pod \"ingress-canary-x2bqr\" is waiting to start: ContainerCreating"
I0428 11:10:58.271774 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for serve-healthcheck-canary container ingress-canary-zp9ml pod in namespace openshift-ingress-canary (previous: false).
I0428 11:10:58.473671 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for ingress-canary-zp9ml pod in namespace openshift-ingress-canary for failing operator serve-healthcheck-canary (previous: false): "container \"serve-healthcheck-canary\" in pod \"ingress-canary-zp9ml\" is waiting to start: ContainerCreating"
I0428 11:10:58.473687 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"serve-healthcheck-canary\" in pod \"ingress-canary-zp9ml\" is waiting to start: ContainerCreating"
I0428 11:10:58.473703 1 tasks_processing.go:74] worker 17 stopped.
I0428 11:10:58.473795 1 recorder.go:75] Recording events/openshift-dns-operator with fingerprint=94929633da54d3543f4f0b5430fe6becab300e7bc262cb28008e32e2a44e4ca9
I0428 11:10:58.473853 1 recorder.go:75] Recording events/openshift-dns with fingerprint=2f10802d7f0fe2418c1747074c9ef8516048e706f87b4ea9761f5716ceefee60
I0428 11:10:58.473952 1 recorder.go:75] Recording events/openshift-image-registry with fingerprint=39f3b51e1b11d2a64605fb76f8c07d36ece26c70adb6a0d39b26eda61e752386
I0428 11:10:58.473989 1 recorder.go:75] Recording events/openshift-ingress-operator with fingerprint=b94e2f165e7b775cd678ed825737d1f1a56abd0956d06c47db1424ef93906d9d
I0428 11:10:58.474043 1 recorder.go:75] Recording events/openshift-ingress with fingerprint=4b527d18a79251e5871806322749369deafe0cdd829fc8fe818bcd29c3515942
I0428 11:10:58.474070 1 recorder.go:75] Recording events/openshift-ingress-canary with fingerprint=34dfbb474e750c4ed70abb932f8f3f5723887b5d8661ed9f34321279f62f620b
I0428 11:10:58.475453 1 recorder.go:75] Recording config/pod/openshift-dns/dns-default-89g47 with fingerprint=dc570bdfa05d87956ab1490b2f372b8312617d0d9551fbad3793110332d6f373
I0428 11:10:58.475570 1 recorder.go:75] Recording config/pod/openshift-dns/dns-default-rw7zm with fingerprint=3fc0dbe5ac8cffa43151544f047bd664ec81aa63d38ecbd05148a9f3cafca430
I0428 11:10:58.475687 1 recorder.go:75] Recording config/pod/openshift-image-registry/image-registry-588dc4b66b-l68w9 with fingerprint=fe0f25400a192d31b3a52e13aa9debedc18d41168503d3673d97d82036ecbc51
I0428 11:10:58.475798 1 recorder.go:75] Recording config/pod/openshift-image-registry/image-registry-7946545b77-66qjn with fingerprint=421dfc983d65ea0e721aa9fd2f474abce6be2b930d057200c2a90129cc27b7e5
I0428 11:10:58.475913 1 recorder.go:75] Recording config/pod/openshift-image-registry/image-registry-7946545b77-868tb with fingerprint=18c8f071d242471a22c2a1472a93bd48f4a55d896962b5bb7842a441053c597e
I0428 11:10:58.475985 1 recorder.go:75] Recording config/pod/openshift-ingress-canary/ingress-canary-x2bqr with fingerprint=aa21a0561ef264dd8cf57e38612fdb2d6d04bba854014d9e47ebcc03344b371a
I0428 11:10:58.476054 1 recorder.go:75] Recording config/pod/openshift-ingress-canary/ingress-canary-zp9ml with fingerprint=0ee9509a4e47c8ecc28ba70bdaea76c0c82acf252799edbe19e9f66889db03c6
I0428 11:10:58.476067 1 gather.go:177] gatherer "clusterconfig" function "operators_pods_and_events" took 4.676568912s to process 13 records
W0428 11:10:58.815921 1 gather_dvo_metrics.go:210] Failed to read the DVO metrics. Trying again.
W0428 11:10:58.815948 1 gather_dvo_metrics.go:117] Unable to read metrics from endpoint "http://deployment-validation-operator-metrics.openshift-deployment-validation-operator.svc:8383": DVO metrics service was not available within the 5s timeout: context deadline exceeded
I0428 11:10:58.815963 1 tasks_processing.go:74] worker 8 stopped.
E0428 11:10:58.815974 1 gather.go:140] gatherer "clusterconfig" function "dvo_metrics" failed with the error: DVO metrics service was not available within the 5s timeout: context deadline exceeded
I0428 11:10:58.815987 1 recorder.go:75] Recording config/dvo_metrics with fingerprint=e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
W0428 11:10:58.816007 1 gather.go:155] issue recording gatherer "clusterconfig" function "dvo_metrics" result "config/dvo_metrics" because of the warning: warning: the record with the same fingerprint "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855" was already recorded at path "config/configmaps/openshift-config/openshift-service-ca.crt/service-ca.crt", recording another one with a different path "config/dvo_metrics"
I0428 11:10:58.816026 1 gather.go:177] gatherer "clusterconfig" function "dvo_metrics" took 5.020413748s to process 1 records
I0428 11:11:06.463291 1 tasks_processing.go:74] worker 37 stopped.
I0428 11:11:06.463334 1 recorder.go:75] Recording config/installplans with fingerprint=95dfd5f33a9a46199239158a0ec0183b629818eca24ae41f4d706d7a3d604aec
I0428 11:11:06.463345 1 gather.go:177] gatherer "clusterconfig" function "install_plans" took 12.666941591s to process 1 records
I0428 11:11:06.687734 1 configmapobserver.go:84] configmaps "insights-config" not found
I0428 11:11:07.205272 1 tasks_processing.go:74] worker 24 stopped.
I0428 11:11:07.205712 1 recorder.go:75] Recording config/serviceaccounts with fingerprint=5bfe903b1da30878a989600cf95950087bbfed0fc0418c904f1a71b56765f5ff
I0428 11:11:07.205733 1 gather.go:177] gatherer "clusterconfig" function "service_accounts" took 13.406392462s to process 1 records
E0428 11:11:07.205807 1 periodic.go:247] "Unhandled Error" err="clusterconfig failed after 13.41s with: function \"machines\" failed with an error, function \"support_secret\" failed with an error, function \"pod_network_connectivity_checks\" failed with an error, function \"machine_healthchecks\" failed with an error, function \"config_maps\" failed with an error, function \"ingress_certificates\" failed with an error, function \"dvo_metrics\" failed with an error"
I0428 11:11:07.206912 1 controllerstatus.go:89] name=periodic-clusterconfig healthy=false reason=PeriodicGatherFailed message=Source clusterconfig could not be retrieved: function "machines" failed with an error, function "support_secret" failed with an error, function "pod_network_connectivity_checks" failed with an error, function "machine_healthchecks" failed with an error, function "config_maps" failed with an error, function "ingress_certificates" failed with an error, function "dvo_metrics" failed with an error
I0428 11:11:07.206926 1 periodic.go:209] Running workloads gatherer
I0428 11:11:07.206940 1 tasks_processing.go:45] number of workers: 2
I0428 11:11:07.206947 1 tasks_processing.go:69] worker 1 listening for tasks.
I0428 11:11:07.206951 1 tasks_processing.go:71] worker 1 working on workload_info task.
I0428 11:11:07.207018 1 tasks_processing.go:69] worker 0 listening for tasks.
I0428 11:11:07.207121 1 tasks_processing.go:71] worker 0 working on helmchart_info task.
I0428 11:11:07.232757 1 gather_workloads_info.go:278] Loaded pods in 0s, will wait 22s for image data
I0428 11:11:07.235338 1 tasks_processing.go:74] worker 0 stopped.
I0428 11:11:07.235351 1 gather.go:177] gatherer "workloads" function "helmchart_info" took 28.195347ms to process 0 records
I0428 11:11:07.243499 1 gather_workloads_info.go:387] No image sha256:2904a78e2eb73fd6a9bb94c105c2a056831fb4113fbb7b0607c50adc9d879c9b (12ms)
I0428 11:11:07.252257 1 gather_workloads_info.go:387] No image sha256:765f0d23b637f685f98a31bd47c131b03cf72a40761a3f9a9d6320faa3c33733 (9ms)
I0428 11:11:07.260612 1 gather_workloads_info.go:387] No image sha256:04c87c054a3f366a7dfbe0a93ebb0c80a098ee16842c5794b67c1202eec61996 (8ms)
I0428 11:11:07.269347 1 gather_workloads_info.go:387] No image sha256:2598489729a4b258e4ecda4a06f6875133f2a10ced5c5241f8a57a8a05418e36 (9ms)
I0428 11:11:07.277112 1 gather_workloads_info.go:387] No image sha256:03cf4cd7ef1518610c6c7b3ad27d1622d82e98e3dc6e3f8e5d0fceb5c8d3786e (8ms)
I0428 11:11:07.284493 1 gather_workloads_info.go:387] No image sha256:2e57e192c3c1240fd935dcd55c8fde5e70e78bf81d6176c96edf21fafe59f8ba (7ms)
I0428 11:11:07.291913 1 gather_workloads_info.go:387] No image sha256:653c666f842c13e0baae2e89a9b1efe0e2ef56f621ffb5b32005115d2a26ab8c (7ms)
I0428 11:11:07.303958 1 gather_workloads_info.go:387] No image sha256:b3909bf664c77097f75b3768830863d642eed3815dab2bfb4415c771ca2d5007 (12ms)
I0428 11:11:07.312357 1 gather_workloads_info.go:387] No image sha256:1a2532940843248c57d52141185dd71fbc393ab28b65d48f682038632c1dbbad (8ms)
I0428 11:11:07.320661 1 gather_workloads_info.go:387] No image sha256:91828234f107c068c8a4966d08370ae7b73e637651dbc6d92c18c4553402c22c (8ms)
I0428 11:11:07.341474 1 gather_workloads_info.go:387] No image sha256:521712486e2c6e3c020dad6a1cb340db8e55665b69f7c208fab9cd9e965fd588 (21ms)
I0428 11:11:07.441842 1 gather_workloads_info.go:387] No image sha256:a258c226562adb14e3a163a1940938526ee6a0928982a7667d85d9a7334ce639 (100ms)
I0428 11:11:07.541583 1 gather_workloads_info.go:387] No image sha256:7adc1eab05d6724c76ba751f6df816b08d6e70b78dee9eb94fa6fd9690542c98 (100ms)
I0428 11:11:07.641822 1 gather_workloads_info.go:387] No image sha256:5a95c19d82767e0235b4edb4a0536482c816904897aae1dc3eb255cb52b87a9f (100ms)
I0428 11:11:07.741824 1 gather_workloads_info.go:387] No image sha256:7b31223098f08328f5ddea8e5b871dbbd5f5a61ec550e8956f66793c0c6031a9 (100ms)
I0428 11:11:07.841887 1 gather_workloads_info.go:387] No image sha256:0a99240166165eb5718e7516a43282fe32df9c7c5e809b31b58abe44e42ff94d (100ms)
I0428 11:11:07.940876 1 gather_workloads_info.go:387] No image sha256:289816958633a763a72dbc44e1dad40466223164e7e253039514f0d974ea5d21 (99ms)
I0428 11:11:08.041185 1 gather_workloads_info.go:387] No image sha256:36b9e89c3cfcf1ab9ae500486e38afb6862cba48cb0b4d84a09508ab8f3d299f (100ms)
I0428 11:11:08.141347 1 gather_workloads_info.go:387] No image sha256:ae7d3453fd734ecc865e5f9bb16f29244ebffe6291b77e1d4e496f71eb053174 (100ms)
I0428 11:11:08.241685 1 gather_workloads_info.go:387] No image sha256:695cf2f0cc07683c2a3ce1eaf3e56fe18abc6e2bac716f7d9843f5d173b9df52 (100ms)
I0428 11:11:08.341918 1 gather_workloads_info.go:387] No image sha256:c940ea87e7d133d75ba0002ef00c0806825eed3db8094cdb260d1bac18127733 (100ms)
I0428 11:11:08.446167 1 gather_workloads_info.go:387] No image sha256:e84cb128d930bd1ab867cc89b7b7bf2b2c0e41105ab93b5381069945b3ee9c57 (104ms)
I0428 11:11:08.545189 1 gather_workloads_info.go:387] No image sha256:a498046d64605bcccee2440aa4f04a4602baaae263cf01d977ec5208e876b1fd (99ms)
I0428 11:11:08.643130 1 gather_workloads_info.go:387] No image sha256:a0105d1eb62cf6ac9e5e2ef28d3e89bf6dc514bc594fc7090fe5a5ee18a09c87 (98ms)
I0428 11:11:08.711992 1 configmapobserver.go:84] configmaps "insights-config" not found
I0428 11:11:08.742424 1 gather_workloads_info.go:387] No image sha256:a56211d075aa43cbb491f669a5b2e46ee023dc95b7d51dbac28f463948c5ad61 (99ms)
I0428 11:11:08.841456 1 gather_workloads_info.go:387] No image sha256:943018739e3db1763c3184b460dbc409e058abbac76d57b9927faad317be85e4 (99ms)
I0428 11:11:08.906547 1 configmapobserver.go:84] configmaps "insights-config" not found
I0428 11:11:08.941266 1 gather_workloads_info.go:387] No image sha256:4556896f77307821531ef91b7b7faccb82b824ea695693b2989f597f0deca038 (100ms)
I0428 11:11:09.041408 1 gather_workloads_info.go:387] No image sha256:5f0b67cfbbc381243fb91ccc17345b56d05f4d717c667e8c644e5bf05633ba71 (100ms)
I0428 11:11:09.142384 1 gather_workloads_info.go:387] No image sha256:47154813651033d59751fb655a384dbffb64dd26f10bd7f3be0c3128d0486356 (101ms)
I0428 11:11:09.241794 1 gather_workloads_info.go:387] No image sha256:56a85660a445eced5c79a595a0eccf590087c5672d50f49d4c25ad52f9a44f04 (99ms)
I0428 11:11:09.341203 1 gather_workloads_info.go:387] No image sha256:2e564f336c77116053f34d4201d364d8da04e789cfffa0ea422574c95f2d6404 (99ms)
I0428 11:11:09.441290 1 gather_workloads_info.go:387] No image sha256:ca1344cb64140188b7cae7bbc51fb751566c0b0c97d5e39b5850e628032c4a5e (100ms)
I0428 11:11:09.441324 1 tasks_processing.go:74] worker 1 stopped.
E0428 11:11:09.441343 1 gather.go:140] gatherer "workloads" function "workload_info" failed with the error: no running pods found for the insights-runtime-extractor statefulset
I0428 11:11:09.441654 1 recorder.go:75] Recording config/workload_info with fingerprint=c0148e2f13a017d70f1b0738aafedbeb31e206128c3011d4a92f7afbc61cef03
I0428 11:11:09.441671 1 gather.go:177] gatherer "workloads" function "workload_info" took 2.234365544s to process 1 records
E0428 11:11:09.441697 1 periodic.go:247] "Unhandled Error" err="workloads failed after 2.234s with: function \"workload_info\" failed with an error"
I0428 11:11:09.442796 1 controllerstatus.go:89] name=periodic-workloads healthy=false reason=PeriodicGatherFailed message=Source workloads could not be retrieved: function "workload_info" failed with an error
I0428 11:11:09.442808 1 periodic.go:209] Running conditional gatherer
I0428 11:11:09.449610 1 requests.go:294] Making HTTP GET request at: https://console.redhat.com/api/gathering/v2/4.20.8/gathering_rules
I0428 11:11:09.455619 1 conditional_gatherer.go:107] Get "https://console.redhat.com/api/gathering/v2/4.20.8/gathering_rules": dial tcp: lookup console.redhat.com on 172.30.0.10:53: read udp 10.128.0.16:36836->172.30.0.10:53: read: connection refused
E0428 11:11:09.455852 1 conditional_gatherer.go:322] unable to update alerts cache: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory
I0428 11:11:09.455917 1 conditional_gatherer.go:384] updating version cache for conditional gatherer
I0428 11:11:09.461545 1 conditional_gatherer.go:392] cluster version is '4.20.8'
E0428 11:11:09.461557 1 conditional_gatherer.go:209] error checking conditions for a gathering rule: alerts cache is missing
E0428 11:11:09.461562 1 conditional_gatherer.go:209] error checking conditions for a gathering rule: alerts cache is missing
E0428 11:11:09.461565 1 conditional_gatherer.go:209] error checking conditions for a gathering rule: alerts cache is missing
E0428 11:11:09.461568 1 conditional_gatherer.go:209] error checking conditions for a gathering rule: alerts cache is missing
E0428 11:11:09.461571 1 conditional_gatherer.go:209] error checking conditions for a gathering rule: alerts cache is missing
E0428 11:11:09.461575 1 conditional_gatherer.go:209] error checking conditions for a gathering rule: alerts cache is missing
E0428 11:11:09.461578 1 conditional_gatherer.go:209] error checking conditions for a gathering rule: alerts cache is missing
E0428 11:11:09.461580 1 conditional_gatherer.go:209] error checking conditions for a gathering rule: alerts cache is missing
E0428 11:11:09.461583 1 conditional_gatherer.go:209] error checking conditions for a gathering rule: alerts cache is missing
I0428 11:11:09.461598 1 tasks_processing.go:45] number of workers: 3
I0428 11:11:09.461611 1 tasks_processing.go:69] worker 2 listening for tasks.
I0428 11:11:09.461615 1 tasks_processing.go:71] worker 2 working on conditional_gatherer_rules task.
I0428 11:11:09.461620 1 tasks_processing.go:69] worker 0 listening for tasks.
I0428 11:11:09.461631 1 tasks_processing.go:71] worker 0 working on remote_configuration task.
I0428 11:11:09.461636 1 tasks_processing.go:71] worker 2 working on rapid_container_logs task.
I0428 11:11:09.461639 1 tasks_processing.go:69] worker 1 listening for tasks.
I0428 11:11:09.461649 1 tasks_processing.go:74] worker 1 stopped.
I0428 11:11:09.461687 1 recorder.go:75] Recording insights-operator/conditional-gatherer-rules with fingerprint=7034af97e7e41c22e4b775abdd4b9066c8ebb19da33eb7f69f39bfd2eb5f6406
I0428 11:11:09.461699 1 gather.go:177] gatherer "conditional" function "conditional_gatherer_rules" took 590ns to process 1 records
I0428 11:11:09.461730 1 recorder.go:75] Recording insights-operator/remote-configuration with fingerprint=0394430c431eec4d48bb1811a90918e95161d2282c59af26f2473613cc0959db
I0428 11:11:09.461739 1 gather.go:177] gatherer "conditional" function "remote_configuration" took 1.152µs to process 1 records
I0428 11:11:09.461744 1 tasks_processing.go:74] worker 0 stopped.
I0428 11:11:09.461873 1 tasks_processing.go:74] worker 2 stopped.
I0428 11:11:09.461891 1 gather.go:177] gatherer "conditional" function "rapid_container_logs" took 228.858µs to process 0 records
I0428 11:11:09.461918 1 controllerstatus.go:89] name=periodic-conditional healthy=false reason=NotAvailable message=Get "https://console.redhat.com/api/gathering/v2/4.20.8/gathering_rules": dial tcp: lookup console.redhat.com on 172.30.0.10:53: read udp 10.128.0.16:36836->172.30.0.10:53: read: connection refused
I0428 11:11:09.461945 1 recorder.go:75] Recording insights-operator/remote-configuration.json with fingerprint=359de9c990c741675cec72fda96b5c3682221efdb4799f5eaa6e9805bcd3b5c1
I0428 11:11:09.484920 1 recorder.go:75] Recording insights-operator/gathers with fingerprint=b1a0624e7e3efa68684c4d8b74b247b6426c8c525d5b54f8e008fbf456e440c3
I0428 11:11:09.485039 1 diskrecorder.go:70] Writing 109 records to /var/lib/insights-operator/insights-2026-04-28-111109.tar.gz
I0428 11:11:09.491725 1 diskrecorder.go:51] Wrote 109 records to disk in 6ms
I0428 11:11:09.491753 1 periodic.go:278] Gathering cluster info every 2h0m0s
I0428 11:11:09.491767 1 periodic.go:279] Configuration is dataReporting: interval: 2h0m0s, uploadEndpoint: https://console.redhat.com/api/ingress/v1/upload, storagePath: /var/lib/insights-operator, downloadEndpoint: https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/reports, conditionalGathererEndpoint: https://console.redhat.com/api/gathering/v2/%s/gathering_rules, obfuscation: [] sca: disabled: false, endpoint: https://api.openshift.com/api/accounts_mgmt/v1/entitlement_certificates, interval: 8h0m0s alerting: disabled: false clusterTransfer: endpoint: https://api.openshift.com/api/accounts_mgmt/v1/cluster_transfers/, interval: 12h0m0s proxy: httpProxy: , httpsProxy: , noProxy:
I0428 11:11:20.631614 1 configmapobserver.go:84] configmaps "insights-config" not found
I0428 11:11:58.120984 1 observer_polling.go:111] Observed file "/var/run/secrets/serving-cert/tls.crt" has been created (hash="dc7a72f064d0403e6dcfb356580659eaed899f9f8daf1780ae3f1e0d23f4e1b1")
W0428 11:11:58.121021 1 builder.go:160] Restart triggered because of file /var/run/secrets/serving-cert/tls.crt was created
I0428 11:11:58.121064 1 observer_polling.go:111] Observed file "/var/run/secrets/serving-cert/tls.key" has been created (hash="b45eb825f4f71eadf7d92c623dfb620b3f76f2d22df8d514c0cce3caa6e98d60")