W0310 19:33:01.055252 1 cmd.go:257] Using insecure, self-signed certificates
I0310 19:33:01.374505 1 start.go:138] Unable to read service ca bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory
I0310 19:33:01.374912 1 observer_polling.go:159] Starting file observer
W0310 19:33:01.401868 1 builder.go:272] unable to get owner reference (falling back to namespace): replicasets.apps "insights-operator-5ff5cb4f99" is forbidden: User "system:serviceaccount:openshift-insights:operator" cannot get resource "replicasets" in API group "apps" in the namespace "openshift-insights"
I0310 19:33:01.695090 1 operator.go:59] Starting insights-operator v0.0.0-master+$Format:%H$
I0310 19:33:01.696173 1 legacy_config.go:327] Current config: {"report":false,"storagePath":"/var/lib/insights-operator","interval":"2h","endpoint":"https://console.redhat.com/api/ingress/v1/upload","conditionalGathererEndpoint":"https://console.redhat.com/api/gathering/v2/%s/gathering_rules","pull_report":{"endpoint":"https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/reports","delay":"60s","timeout":"3000s","min_retry":"30s"},"impersonate":"system:serviceaccount:openshift-insights:gather","enableGlobalObfuscation":false,"ocm":{"scaEndpoint":"https://api.openshift.com/api/accounts_mgmt/v1/entitlement_certificates","scaInterval":"8h","scaDisabled":false,"clusterTransferEndpoint":"https://api.openshift.com/api/accounts_mgmt/v1/cluster_transfers/","clusterTransferInterval":"12h"},"disableInsightsAlerts":false,"processingStatusEndpoint":"https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/request/%s/status","reportEndpointTechPreview":"https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/request/%s/report"}
I0310 19:33:01.696409 1 secure_serving.go:57] Forcing use of http/1.1 only
W0310 19:33:01.696428 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.
W0310 19:33:01.696432 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.
W0310 19:33:01.696436 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.
W0310 19:33:01.696440 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.
W0310 19:33:01.696443 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.
W0310 19:33:01.696446 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.
I0310 19:33:01.696826 1 simple_featuregate_reader.go:171] Starting feature-gate-detector
I0310 19:33:01.700548 1 operator.go:124] FeatureGates initialized: knownFeatureGates=[AWSEFSDriverVolumeMetrics AdditionalRoutingCapabilities AdminNetworkPolicy AlibabaPlatform AzureWorkloadIdentity BareMetalLoadBalancer BuildCSIVolumes CPMSMachineNamePrefix ChunkSizeMiB CloudDualStackNodeIPs ConsolePluginContentSecurityPolicy DisableKubeletCloudCredentialProviders ExternalOIDC GCPLabelsTags GatewayAPI GatewayAPIController HardwareSpeed IngressControllerLBSubnetsAWS KMSv1 ManagedBootImages ManagedBootImagesAWS MetricsCollectionProfiles MultiArchInstallAWS MultiArchInstallGCP NetworkDiagnosticsConfig NetworkLiveMigration NetworkSegmentation NodeDisruptionPolicy OnClusterBuild PersistentIPsForVirtualization PrivateHostedZoneAWS RouteExternalCertificate ServiceAccountTokenNodeBinding SetEIPForNLBIngressController VSphereDriverConfiguration VSphereMultiVCenters ValidatingAdmissionPolicy AWSClusterHostedDNS AutomatedEtcdBackup BootcNodeManagement ClusterAPIInstall ClusterAPIInstallIBMCloud ClusterMonitoringConfig ClusterVersionOperatorConfiguration DNSNameResolver DualReplica DyanmicServiceEndpointIBMCloud DynamicResourceAllocation EtcdBackendQuota EventedPLEG Example Example2 ExternalOIDCWithUIDAndExtraClaimMappings GCPClusterHostedDNS GCPCustomAPIEndpoints HighlyAvailableArbiter ImageStreamImportMode IngressControllerDynamicConfigurationManager InsightsConfig InsightsConfigAPI InsightsOnDemandDataGather InsightsRuntimeExtractor KMSEncryptionProvider MachineAPIMigration MachineAPIOperatorDisableMachineHealthCheckController MachineAPIProviderOpenStack MachineConfigNodes MaxUnavailableStatefulSet MinimumKubeletVersion MixedCPUsAllocation MultiArchInstallAzure NewOLM NewOLMCatalogdAPIV1Metas NewOLMOwnSingleNamespace NewOLMPreflightPermissionChecks NodeSwap NutanixMultiSubnets OVNObservability OpenShiftPodSecurityAdmission PinnedImages PlatformOperators ProcMountType RouteAdvertisements SELinuxChangePolicy SELinuxMount ShortCertRotation SignatureStores SigstoreImageVerification SigstoreImageVerificationPKI TranslateStreamCloseWebsocketRequests UpgradeStatus UserNamespacesPodSecurityStandards UserNamespacesSupport VSphereConfigurableMaxAllowedBlockVolumesPerNode VSphereHostVMGroupZonal VSphereMultiDisk VSphereMultiNetworks VolumeAttributesClass VolumeGroupSnapshot]
I0310 19:33:01.700617 1 event.go:377] Event(v1.ObjectReference{Kind:"Namespace", Namespace:"openshift-insights", Name:"openshift-insights", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'FeatureGatesInitialized' FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdditionalRoutingCapabilities", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CPMSMachineNamePrefix", "ChunkSizeMiB", "CloudDualStackNodeIPs", "ConsolePluginContentSecurityPolicy", "DisableKubeletCloudCredentialProviders", "ExternalOIDC", "GCPLabelsTags", "GatewayAPI", "GatewayAPIController", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MetricsCollectionProfiles", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SetEIPForNLBIngressController", "VSphereDriverConfiguration", "VSphereMultiVCenters", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AutomatedEtcdBackup", "BootcNodeManagement", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ClusterVersionOperatorConfiguration", "DNSNameResolver", "DualReplica", "DyanmicServiceEndpointIBMCloud", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "Example2", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GCPCustomAPIEndpoints", "HighlyAvailableArbiter", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "KMSEncryptionProvider", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NewOLM", "NewOLMCatalogdAPIV1Metas", "NewOLMOwnSingleNamespace", "NewOLMPreflightPermissionChecks", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "SELinuxChangePolicy", "SELinuxMount", "ShortCertRotation", "SignatureStores", "SigstoreImageVerification", "SigstoreImageVerificationPKI", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereConfigurableMaxAllowedBlockVolumesPerNode", "VSphereHostVMGroupZonal", "VSphereMultiDisk", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}
I0310 19:33:01.703062 1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
I0310 19:33:01.703077 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController
I0310 19:33:01.703079 1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I0310 19:33:01.703095 1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
I0310 19:33:01.703107 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0310 19:33:01.703111 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I0310 19:33:01.703363 1 dynamic_serving_content.go:135] "Starting controller" name="serving-cert::/tmp/serving-cert-2940981092/tls.crt::/tmp/serving-cert-2940981092/tls.key"
I0310 19:33:01.703607 1 secure_serving.go:213] Serving securely on [::]:8443
E0310 19:33:01.703642 1 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:serviceaccount:openshift-insights:operator\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-insights\"" event="&Event{ObjectMeta:{openshift-insights.189b91cb8b381d0b openshift-insights 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Namespace,Namespace:openshift-insights,Name:openshift-insights,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:FeatureGatesInitialized,Message:FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{\"AWSEFSDriverVolumeMetrics\", \"AdditionalRoutingCapabilities\", \"AdminNetworkPolicy\", \"AlibabaPlatform\", \"AzureWorkloadIdentity\", \"BareMetalLoadBalancer\", \"BuildCSIVolumes\", \"CPMSMachineNamePrefix\", \"ChunkSizeMiB\", \"CloudDualStackNodeIPs\", \"ConsolePluginContentSecurityPolicy\", \"DisableKubeletCloudCredentialProviders\", \"ExternalOIDC\", \"GCPLabelsTags\", \"GatewayAPI\", \"GatewayAPIController\", \"HardwareSpeed\", \"IngressControllerLBSubnetsAWS\", \"KMSv1\", \"ManagedBootImages\", \"ManagedBootImagesAWS\", \"MetricsCollectionProfiles\", \"MultiArchInstallAWS\", \"MultiArchInstallGCP\", \"NetworkDiagnosticsConfig\", \"NetworkLiveMigration\", \"NetworkSegmentation\", \"NodeDisruptionPolicy\", \"OnClusterBuild\", \"PersistentIPsForVirtualization\", \"PrivateHostedZoneAWS\", \"RouteExternalCertificate\", \"ServiceAccountTokenNodeBinding\", \"SetEIPForNLBIngressController\", \"VSphereDriverConfiguration\", \"VSphereMultiVCenters\", \"ValidatingAdmissionPolicy\"}, Disabled:[]v1.FeatureGateName{\"AWSClusterHostedDNS\", \"AutomatedEtcdBackup\", \"BootcNodeManagement\", \"ClusterAPIInstall\", \"ClusterAPIInstallIBMCloud\", \"ClusterMonitoringConfig\", \"ClusterVersionOperatorConfiguration\", \"DNSNameResolver\", \"DualReplica\", \"DyanmicServiceEndpointIBMCloud\", \"DynamicResourceAllocation\", \"EtcdBackendQuota\", \"EventedPLEG\", \"Example\", \"Example2\", \"ExternalOIDCWithUIDAndExtraClaimMappings\", \"GCPClusterHostedDNS\", \"GCPCustomAPIEndpoints\", \"HighlyAvailableArbiter\", \"ImageStreamImportMode\", \"IngressControllerDynamicConfigurationManager\", \"InsightsConfig\", \"InsightsConfigAPI\", \"InsightsOnDemandDataGather\", \"InsightsRuntimeExtractor\", \"KMSEncryptionProvider\", \"MachineAPIMigration\", \"MachineAPIOperatorDisableMachineHealthCheckController\", \"MachineAPIProviderOpenStack\", \"MachineConfigNodes\", \"MaxUnavailableStatefulSet\", \"MinimumKubeletVersion\", \"MixedCPUsAllocation\", \"MultiArchInstallAzure\", \"NewOLM\", \"NewOLMCatalogdAPIV1Metas\", \"NewOLMOwnSingleNamespace\", \"NewOLMPreflightPermissionChecks\", \"NodeSwap\", \"NutanixMultiSubnets\", \"OVNObservability\", \"OpenShiftPodSecurityAdmission\", \"PinnedImages\", \"PlatformOperators\", \"ProcMountType\", \"RouteAdvertisements\", \"SELinuxChangePolicy\", \"SELinuxMount\", \"ShortCertRotation\", \"SignatureStores\", \"SigstoreImageVerification\", \"SigstoreImageVerificationPKI\", \"TranslateStreamCloseWebsocketRequests\", \"UpgradeStatus\", \"UserNamespacesPodSecurityStandards\", \"UserNamespacesSupport\", \"VSphereConfigurableMaxAllowedBlockVolumesPerNode\", \"VSphereHostVMGroupZonal\", \"VSphereMultiDisk\", \"VSphereMultiNetworks\", \"VolumeAttributesClass\", \"VolumeGroupSnapshot\"}},Source:EventSource{Component:openshift-insights-operator,Host:,},FirstTimestamp:2026-03-10 19:33:01.700521227 +0000 UTC m=+0.693539858,LastTimestamp:2026-03-10 19:33:01.700521227 +0000 UTC m=+0.693539858,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-insights-operator,ReportingInstance:,}"
I0310 19:33:01.703650 1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
W0310 19:33:01.707070 1 configmapobserver.go:64] Cannot get the configuration config map: configmaps "insights-config" not found. Default configuration is used.
I0310 19:33:01.707108 1 secretconfigobserver.go:216] Legacy configuration set: enabled=false endpoint=https://console.redhat.com/api/ingress/v1/upload conditional_gatherer_endpoint=https://console.redhat.com/api/gathering/v2/%s/gathering_rules interval=2h0m0s token=false reportEndpoint=https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/reports initialPollingDelay=1m0s minRetryTime=30s pollingTimeout=50m0s processingStatusEndpoint=https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/request/%s/status
I0310 19:33:01.707217 1 base_controller.go:76] Waiting for caches to sync for ConfigController
I0310 19:33:01.713507 1 secretconfigobserver.go:249] Found cloud.openshift.com token
I0310 19:33:01.713526 1 secretconfigobserver.go:204] Legacy configuration updated: enabled=true endpoint=https://console.redhat.com/api/ingress/v1/upload conditional_gatherer_endpoint=https://console.redhat.com/api/gathering/v2/%s/gathering_rules interval=2h0m0s token=true reportEndpoint=https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/reports initialPollingDelay=1m0s minRetryTime=30s pollingTimeout=50m0s processingStatusEndpoint=https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/request/%s/status
I0310 19:33:01.720131 1 secretconfigobserver.go:119] support secret does not exist
I0310 19:33:01.725882 1 secretconfigobserver.go:249] Found cloud.openshift.com token
I0310 19:33:01.732002 1 secretconfigobserver.go:119] support secret does not exist
I0310 19:33:01.737167 1 recorder.go:156] Pruning old reports every 5h52m30s, max age is 288h0m0s
I0310 19:33:01.744200 1 controllerstatus.go:80] name=insightsuploader healthy=true reason= message=
I0310 19:33:01.744215 1 controllerstatus.go:80] name=insightsreport healthy=true reason= message=
I0310 19:33:01.744220 1 insightsuploader.go:86] Reporting status periodically to https://console.redhat.com/api/ingress/v1/upload every 2h0m0s, starting in 1m30s
I0310 19:33:01.744224 1 insightsreport.go:296] Starting report retriever
I0310 19:33:01.744229 1 insightsreport.go:298] Insights analysis reports will be downloaded from the https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/reports endpoint with a delay of 1m0s
I0310 19:33:01.744201 1 periodic.go:212] Running clusterconfig gatherer
I0310 19:33:01.744276 1 tasks_processing.go:45] number of workers: 32
I0310 19:33:01.744296 1 tasks_processing.go:69] worker 0 listening for tasks.
I0310 19:33:01.744309 1 tasks_processing.go:69] worker 31 listening for tasks.
I0310 19:33:01.744309 1 tasks_processing.go:71] worker 0 working on version task.
I0310 19:33:01.744318 1 tasks_processing.go:69] worker 19 listening for tasks.
I0310 19:33:01.744319 1 tasks_processing.go:69] worker 20 listening for tasks.
I0310 19:33:01.744326 1 tasks_processing.go:69] worker 2 listening for tasks.
I0310 19:33:01.744319 1 tasks_processing.go:69] worker 1 listening for tasks.
I0310 19:33:01.744333 1 tasks_processing.go:69] worker 9 listening for tasks.
I0310 19:33:01.744334 1 tasks_processing.go:69] worker 27 listening for tasks.
I0310 19:33:01.744327 1 tasks_processing.go:69] worker 26 listening for tasks.
I0310 19:33:01.744341 1 tasks_processing.go:69] worker 3 listening for tasks.
I0310 19:33:01.744343 1 tasks_processing.go:69] worker 15 listening for tasks.
I0310 19:33:01.744347 1 tasks_processing.go:69] worker 28 listening for tasks.
I0310 19:33:01.744350 1 tasks_processing.go:69] worker 4 listening for tasks.
I0310 19:33:01.744352 1 tasks_processing.go:69] worker 16 listening for tasks.
I0310 19:33:01.744349 1 tasks_processing.go:69] worker 25 listening for tasks.
I0310 19:33:01.744357 1 tasks_processing.go:69] worker 29 listening for tasks.
I0310 19:33:01.744357 1 tasks_processing.go:69] worker 14 listening for tasks.
I0310 19:33:01.744363 1 tasks_processing.go:69] worker 17 listening for tasks.
I0310 19:33:01.744365 1 tasks_processing.go:69] worker 30 listening for tasks.
I0310 19:33:01.744358 1 tasks_processing.go:69] worker 21 listening for tasks.
I0310 19:33:01.744368 1 tasks_processing.go:69] worker 22 listening for tasks.
I0310 19:33:01.744373 1 tasks_processing.go:69] worker 18 listening for tasks.
I0310 19:33:01.744376 1 tasks_processing.go:69] worker 7 listening for tasks.
I0310 19:33:01.744368 1 tasks_processing.go:69] worker 6 listening for tasks.
I0310 19:33:01.744379 1 tasks_processing.go:69] worker 13 listening for tasks.
I0310 19:33:01.744377 1 tasks_processing.go:69] worker 11 listening for tasks.
I0310 19:33:01.744387 1 tasks_processing.go:69] worker 24 listening for tasks.
I0310 19:33:01.744390 1 tasks_processing.go:71] worker 26 working on number_of_pods_and_netnamespaces_with_sdn_annotations task.
I0310 19:33:01.744388 1 tasks_processing.go:71] worker 2 working on node_logs task.
I0310 19:33:01.744394 1 tasks_processing.go:71] worker 31 working on sap_datahubs task.
I0310 19:33:01.744381 1 tasks_processing.go:69] worker 23 listening for tasks.
I0310 19:33:01.744402 1 tasks_processing.go:71] worker 15 working on nodes task.
I0310 19:33:01.744403 1 tasks_processing.go:71] worker 24 working on oauths task.
I0310 19:33:01.744410 1 tasks_processing.go:71] worker 17 working on dvo_metrics task.
I0310 19:33:01.744447 1 tasks_processing.go:71] worker 9 working on proxies task.
I0310 19:33:01.744481 1 tasks_processing.go:71] worker 18 working on machine_healthchecks task.
I0310 19:33:01.744526 1 tasks_processing.go:71] worker 14 working on storage_cluster task.
I0310 19:33:01.744372 1 tasks_processing.go:69] worker 12 listening for tasks.
I0310 19:33:01.744685 1 tasks_processing.go:71] worker 12 working on certificate_signing_requests task.
I0310 19:33:01.744482 1 tasks_processing.go:71] worker 7 working on overlapping_namespace_uids task.
I0310 19:33:01.744404 1 tasks_processing.go:71] worker 23 working on sap_config task.
I0310 19:33:01.744389 1 tasks_processing.go:71] worker 19 working on metrics task.
W0310 19:33:01.744973 1 gather_most_recent_metrics.go:64] Unable to load metrics client, no metrics will be collected: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory
I0310 19:33:01.744395 1 tasks_processing.go:71] worker 11 working on schedulers task.
I0310 19:33:01.744393 1 tasks_processing.go:71] worker 20 working on clusterroles task.
I0310 19:33:01.744396 1 tasks_processing.go:71] worker 3 working on openstack_controlplanes task.
I0310 19:33:01.744378 1 tasks_processing.go:69] worker 10 listening for tasks.
I0310 19:33:01.744474 1 tasks_processing.go:71] worker 30 working on aggregated_monitoring_cr_names task.
I0310 19:33:01.745258 1 tasks_processing.go:71] worker 10 working on storage_classes task.
I0310 19:33:01.744475 1 tasks_processing.go:71] worker 1 working on image_registries task.
I0310 19:33:01.744477 1 tasks_processing.go:71] worker 21 working on pdbs task.
I0310 19:33:01.744486 1 tasks_processing.go:71] worker 6 working on jaegers task.
I0310 19:33:01.744491 1 tasks_processing.go:71] worker 27 working on openshift_logging task.
I0310 19:33:01.744497 1 tasks_processing.go:71] worker 16 working on cost_management_metrics_configs task.
I0310 19:33:01.744501 1 tasks_processing.go:71] worker 28 working on silenced_alerts task.
I0310 19:33:01.744504 1 tasks_processing.go:71] worker 4 working on container_images task.
W0310 19:33:01.745872 1 gather_silenced_alerts.go:38] Unable to load alerts client, no alerts will be collected: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory
I0310 19:33:01.744511 1 tasks_processing.go:71] worker 22 working on monitoring_persistent_volumes task.
I0310 19:33:01.744516 1 tasks_processing.go:71] worker 29 working on service_accounts task.
I0310 19:33:01.744520 1 tasks_processing.go:71] worker 25 working on openstack_dataplanenodesets task.
I0310 19:33:01.744361 1 tasks_processing.go:69] worker 5 listening for tasks.
I0310 19:33:01.746248 1 tasks_processing.go:71] worker 5 working on openstack_dataplanedeployments task.
I0310 19:33:01.744309 1 tasks_processing.go:69] worker 8 listening for tasks.
I0310 19:33:01.744392 1 tasks_processing.go:71] worker 13 working on lokistack task.
I0310 19:33:01.746408 1 tasks_processing.go:71] worker 8 working on olm_operators task.
I0310 19:33:01.745010 1 tasks_processing.go:71] worker 19 working on machine_configs task.
I0310 19:33:01.745021 1 gather.go:177] gatherer "clusterconfig" function "metrics" took 56.624µs to process 0 records
I0310 19:33:01.746482 1 gather.go:177] gatherer "clusterconfig" function "silenced_alerts" took 31.631µs to process 0 records
I0310 19:33:01.746505 1 tasks_processing.go:71] worker 28 working on machines task.
I0310 19:33:01.748703 1 tasks_processing.go:71] worker 14 working on networks task.
I0310 19:33:01.748716 1 gather.go:177] gatherer "clusterconfig" function "storage_cluster" took 4.133905ms to process 0 records
E0310 19:33:01.748836 1 gather.go:140] gatherer "clusterconfig" function "machine_healthchecks" failed with the error: machinehealthchecks.machine.openshift.io is forbidden: User "system:serviceaccount:openshift-insights:gather" cannot list resource "machinehealthchecks" in API group "machine.openshift.io" at the cluster scope
I0310 19:33:01.748847 1 gather.go:177] gatherer "clusterconfig" function "machine_healthchecks" took 4.220732ms to process 0 records
I0310 19:33:01.748857 1 tasks_processing.go:71] worker 18 working on image task.
I0310 19:33:01.754087 1 tasks_processing.go:71] worker 23 working on machine_sets task.
I0310 19:33:01.754122 1 gather.go:177] gatherer "clusterconfig" function "sap_config" took 9.187794ms to process 0 records
I0310 19:33:01.754134 1 gather.go:177] gatherer "clusterconfig" function "sap_datahubs" took 9.713578ms to process 0 records
I0310 19:33:01.754143 1 gather.go:177] gatherer "clusterconfig" function "cost_management_metrics_configs" took 8.434744ms to process 0 records
I0310 19:33:01.754152 1 gather.go:177] gatherer "clusterconfig" function "jaegers" took 8.588288ms to process 0 records
I0310 19:33:01.754153 1 tasks_processing.go:71] worker 31 working on openshift_machine_api_events task.
I0310 19:33:01.754159 1 gather.go:177] gatherer "clusterconfig" function "openshift_logging" took 8.496149ms to process 0 records
I0310 19:33:01.754167 1 tasks_processing.go:71] worker 27 working on install_plans task.
I0310 19:33:01.754175 1 gather.go:177] gatherer "clusterconfig" function "openstack_dataplanedeployments" took 7.912826ms to process 0 records
I0310 19:33:01.754189 1 gather.go:177] gatherer "clusterconfig" function "lokistack" took 7.909835ms to process 0 records
I0310 19:33:01.754212 1 tasks_processing.go:71] worker 13 working on machine_config_pools task.
I0310 19:33:01.754250 1 tasks_processing.go:71] worker 16 working on active_alerts task.
I0310 19:33:01.754280 1 tasks_processing.go:71] worker 6 working on ingress task.
W0310 19:33:01.754322 1 gather_active_alerts.go:54] Unable to load alerts client, no alerts will be collected: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory
I0310 19:33:01.754356 1 tasks_processing.go:71] worker 16 working on mutating_webhook_configurations task.
I0310 19:33:01.754376 1 gather.go:177] gatherer "clusterconfig" function "active_alerts" took 68.578µs to process 0 records
I0310 19:33:01.754531 1 tasks_processing.go:71] worker 5 working on operators_pods_and_events task.
I0310 19:33:01.764004 1 tasks_processing.go:71] worker 25 working on machine_autoscalers task.
I0310 19:33:01.764263 1 gather.go:177] gatherer "clusterconfig" function "openstack_dataplanenodesets" took 17.839094ms to process 0 records
E0310 19:33:01.764320 1 gather.go:140] gatherer "clusterconfig" function "machines" failed with the error: machines.machine.openshift.io is forbidden: User "system:serviceaccount:openshift-insights:gather" cannot list resource "machines" in API group "machine.openshift.io" at the cluster scope
I0310 19:33:01.764352 1 gather.go:177] gatherer "clusterconfig" function "machines" took 17.519729ms to process 0 records
I0310 19:33:01.764505 1 recorder.go:70] Recording config/proxy with fingerprint=3ab48ef8ccf126ca04f6bdc6ec35413198643ceb45394b055d2fb0b170b6c618
I0310 19:33:01.764552 1 gather.go:177] gatherer "clusterconfig" function "proxies" took 19.740539ms to process 1 records
I0310 19:33:01.764585 1 tasks_processing.go:71] worker 9 working on feature_gates task.
I0310 19:33:01.764844 1 recorder.go:70] Recording config/pdbs/openshift-image-registry/image-registry with fingerprint=e542372b7472625b062535240f1ef2cf4c469ad183da268cf3dd2ea19215d7ac
I0310 19:33:01.764883 1 recorder.go:70] Recording config/pdbs/openshift-ingress/router-default with fingerprint=a1250af9b7aa0ccaa99845a61a41e64a79ce985f93e2b6cf53427a518f0f626f
I0310 19:33:01.764909 1 recorder.go:70] Recording config/pdbs/openshift-operator-lifecycle-manager/packageserver-pdb with fingerprint=e0447f794246126f34dd6af4b4b556027be2eab3b916f11bef8def4fdf852d83
I0310 19:33:01.764926 1 gather.go:177] gatherer "clusterconfig" function "pdbs" took 19.12661ms to process 3 records
I0310 19:33:01.764934 1 gather.go:177] gatherer "clusterconfig" function "openstack_controlplanes" took 19.636079ms to process 0 records
I0310 19:33:01.764943 1 tasks_processing.go:71] worker 3 working on support_secret task.
I0310 19:33:01.764914 1 tasks_processing.go:71] worker 28 working on pod_network_connectivity_checks task.
I0310 19:33:01.765338 1 tasks_processing.go:71] worker 24 working on crds task.
I0310 19:33:01.765345 1 tasks_processing.go:71] worker 21 working on sap_pods task.
I0310 19:33:01.765360 1 recorder.go:70] Recording config/oauth with fingerprint=086dca6e9fb279e54ed4f8b63c0a62e861d6667858c1d02dacc365125dc16510
I0310 19:33:01.765373 1 gather.go:177] gatherer "clusterconfig" function "oauths" took 20.569273ms to process 1 records
I0310 19:33:01.765479 1 recorder.go:70] Recording config/storage/storageclasses/gp2-csi with fingerprint=4de072d811bcf9824eb4432282429b9bb6d3e445dfbeb8a5860c1a759b80ee65
I0310 19:33:01.765543 1 recorder.go:70] Recording config/storage/storageclasses/gp3-csi with fingerprint=5db8c47ce4521357bb2f6ff39f54a9ee2f1b5812cb88e41cc9e5e3ec49df3600
I0310 19:33:01.766764 1 gather.go:177] gatherer "clusterconfig" function "storage_classes" took 19.94281ms to process 2 records
I0310 19:33:01.770451 1 recorder.go:70] Recording config/node/ip-10-0-0-93.ec2.internal with fingerprint=bf91ce3c98f323a8b895fc16a7a34225205a69362753b9f2131c433e630c5818
I0310 19:33:01.770644 1 recorder.go:70] Recording config/node/ip-10-0-1-31.ec2.internal with fingerprint=4aa423f455967017341d1c63ebc140065b8bcdf6b91de556022b25a0dda27771
I0310 19:33:01.770824 1 recorder.go:70] Recording config/node/ip-10-0-2-94.ec2.internal with fingerprint=e1cf1faa448f24c80803a784ff582802c499d00b68fc3f6dc51518c2c40eb14d
I0310 19:33:01.770875 1 gather.go:177] gatherer "clusterconfig" function "nodes" took 20.839877ms to process 3 records
I0310 19:33:01.770995 1 recorder.go:70] Recording config/schedulers/cluster with fingerprint=63d925be688b016f696741f305c0e3b4694487ac92caf486cca20fa905aca005
I0310 19:33:01.771009 1 gather.go:177] gatherer "clusterconfig" function "schedulers" took 20.291728ms to process 1 records
I0310 19:33:01.771017 1 gather.go:177] gatherer "clusterconfig" function "node_logs" took 22.410929ms to process 0 records
I0310 19:33:01.771023 1 gather.go:177] gatherer "clusterconfig" function "certificate_signing_requests" took 26.273338ms to process 0 records
I0310 19:33:01.771038 1 tasks_processing.go:71] worker 12 working on authentication task.
I0310 19:33:01.766837 1 controller.go:119] Initializing last reported time to 0001-01-01T00:00:00Z
I0310 19:33:01.771252 1 controller.go:203] Source periodic-clusterconfig *controllerstatus.Simple is not ready
I0310 19:33:01.771263 1 controller.go:203] Source periodic-conditional *controllerstatus.Simple is not ready
I0310 19:33:01.771268 1 controller.go:203] Source periodic-workloads *controllerstatus.Simple is not ready
I0310 19:33:01.771294 1 controller.go:458] The operator is still being initialized
I0310 19:33:01.771302 1 controller.go:481] The operator is healthy
I0310 19:33:01.771479 1 tasks_processing.go:71] worker 2 working on config_maps task.
I0310 19:33:01.771533 1 tasks_processing.go:71] worker 15 working on validating_webhook_configurations task.
I0310 19:33:01.771521 1 tasks_processing.go:71] worker 11 working on image_pruners task.
I0310 19:33:01.768866 1 tasks_processing.go:71] worker 10 working on nodenetworkconfigurationpolicies task.
I0310 19:33:01.771709 1 tasks_processing.go:71] worker 1 working on infrastructures task.
I0310 19:33:01.773198 1 recorder.go:70] Recording config/clusteroperator/imageregistry.operator.openshift.io/config/cluster with fingerprint=cc211ac546a0119503c2905b77dac9a03197ed09ce30f8883d58c8055811535e
I0310 19:33:01.773227 1 gather.go:177] gatherer "clusterconfig" function "image_registries" took 26.40832ms to process 1 records
I0310 19:33:01.774052 1 tasks_processing.go:71] worker 7 working on operators task.
I0310 19:33:01.774085 1 recorder.go:70] Recording config/namespaces_with_overlapping_uids with fingerprint=4f53cda18c2baa0c0354bb5f9a3ecbe5ed12ab4d8e11ba873c2f11161202b945
I0310 19:33:01.774097 1 gather.go:177] gatherer "clusterconfig" function "overlapping_namespace_uids" took 29.221359ms to process 1 records
I0310 19:33:01.774808 1 tasks_processing.go:71] worker 28 working on tsdb_status task.
W0310 19:33:01.774842 1 gather_prometheus_tsdb_status.go:38] Unable to load metrics client, tsdb status cannot be collected: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory
E0310 19:33:01.774863 1 gather.go:140] gatherer "clusterconfig" function "pod_network_connectivity_checks" failed with the error: the server could not find the requested resource (get podnetworkconnectivitychecks.controlplane.operator.openshift.io)
I0310 19:33:01.774909 1 gather.go:177] gatherer "clusterconfig" function "pod_network_connectivity_checks" took 9.815131ms to process 0 records
I0310 19:33:01.774919 1 gather.go:177] gatherer "clusterconfig" function "machine_sets" took 20.74062ms to process 0 records
I0310 19:33:01.774925 1 gather.go:177] gatherer "clusterconfig" function "tsdb_status" took 29.219µs to process 0 records
I0310 19:33:01.774931 1 gather.go:177] gatherer "clusterconfig" function "monitoring_persistent_volumes" took 28.919238ms to process 0 records
I0310 19:33:01.774949 1 gather.go:177] gatherer "clusterconfig" function "machine_autoscalers" took 10.816819ms to process 0 records
I0310 19:33:01.774957 1 tasks_processing.go:71] worker 25 working on ceph_cluster task.
I0310 19:33:01.775104 1 tasks_processing.go:71] worker 22 working on openstack_version task.
I0310 19:33:01.775157 1 tasks_processing.go:71] worker 28 working on container_runtime_configs task.
I0310 19:33:01.775183 1 tasks_processing.go:71] worker 23 working on cluster_apiserver task.
I0310 19:33:01.775266 1 tasks_processing.go:71] worker 13 working on ingress_certificates task.
I0310 19:33:01.775341 1 gather.go:177] gatherer "clusterconfig" function "machine_config_pools" took 21.039018ms to process 0 records
I0310 19:33:01.775665 1 tasks_processing.go:71] worker 19 working on nodenetworkstates task.
E0310 19:33:01.775870 1 gather.go:140] gatherer "clusterconfig" function "machine_configs" failed with the error: getting MachineConfigPools failed: the server could not find the requested resource (get machineconfigpools.machineconfiguration.openshift.io)
I0310 19:33:01.775951 1 recorder.go:70] Recording aggregated/unused_machine_configs_count with fingerprint=4bfc9fa984e5dfcd45848faaf05269de7619bf42edf9f781751af5ee05c1a499
I0310 19:33:01.775975 1 gather.go:177] gatherer "clusterconfig" function "machine_configs" took 29.122115ms to process 1 records
W0310 19:33:01.778771 1 gather_dvo_metrics.go:210] Failed to read the DVO metrics. Trying again.
I0310 19:33:01.779039 1 tasks_processing.go:74] worker 21 stopped.
I0310 19:33:01.779096 1 gather.go:177] gatherer "clusterconfig" function "sap_pods" took 13.679346ms to process 0 records
I0310 19:33:01.782615 1 tasks_processing.go:74] worker 10 stopped.
I0310 19:33:01.782633 1 gather.go:177] gatherer "clusterconfig" function "nodenetworkconfigurationpolicies" took 10.541886ms to process 0 records
I0310 19:33:01.782864 1 tasks_processing.go:74] worker 18 stopped.
I0310 19:33:01.783000 1 recorder.go:70] Recording config/image with fingerprint=97fd117f08a1e06a56162f8586591aaf0f65f31c54f4df54b843fa1d0459c412
I0310 19:33:01.783061 1 gather.go:177] gatherer "clusterconfig" function "image" took 33.989341ms to process 1 records
I0310 19:33:01.783165 1 tasks_processing.go:74] worker 14 stopped.
I0310 19:33:01.783350 1 recorder.go:70] Recording config/network with fingerprint=00da2ccdd77c8dfa00971cb2a103c69280eb3edc7bf4f148b80042953bc3adff
I0310 19:33:01.783369 1 gather.go:177] gatherer "clusterconfig" function "networks" took 34.107653ms to process 1 records
I0310 19:33:01.783892 1 tasks_processing.go:74] worker 4 stopped.
I0310 19:33:01.785499 1 recorder.go:70] Recording config/pod/openshift-ovn-kubernetes/ovnkube-node-cjqb6 with fingerprint=ffdfa9be034b1c490593132f0aafc4204d22091c4ed1bd8509aaa020ca772e9b
I0310 19:33:01.785849 1 recorder.go:70] Recording config/pod/openshift-ovn-kubernetes/ovnkube-node-glx2x with fingerprint=59990eb18f5c36be82927e8dd0061b63c3e88009d0323b746bf132c3fc3ac71d
I0310 19:33:01.786201 1 recorder.go:70] Recording config/pod/openshift-ovn-kubernetes/ovnkube-node-rx8fr with fingerprint=8632e8f712e44aa25aa4442fccfa6e871277d487fa39e8f61d10835ebb275da4
I0310 19:33:01.786280 1 recorder.go:70] Recording config/running_containers with fingerprint=58c92ee7b2395e24027e69218d4713a463acbc30b8acdb1d0f5335952e426ad7
I0310 19:33:01.786295 1 gather.go:177] gatherer "clusterconfig" function "container_images" took 38.015877ms to process 4 records
I0310 19:33:01.786805 1 tasks_processing.go:74] worker 31 stopped.
I0310 19:33:01.786890 1 gather.go:177] gatherer "clusterconfig" function "openshift_machine_api_events" took 32.64165ms to process 0 records
I0310 19:33:01.786971 1 gather.go:177] gatherer "clusterconfig" function "openstack_version" took 11.73781ms to process 0 records
I0310 19:33:01.786983 1 gather.go:177] gatherer "clusterconfig" function "container_runtime_configs" took 11.760135ms to process 0 records
I0310 19:33:01.786993 1 gather.go:177] gatherer "clusterconfig" function "ceph_cluster" took 11.996811ms to process 0 records
I0310 19:33:01.787002 1 tasks_processing.go:74] worker 25 stopped.
I0310 19:33:01.787008 1 tasks_processing.go:74] worker 22 stopped.
I0310 19:33:01.787014 1 tasks_processing.go:74] worker 28 stopped.
I0310 19:33:01.787124 1 tasks_processing.go:74] worker 19 stopped.
I0310 19:33:01.787141 1 gather.go:177] gatherer "clusterconfig" function "nodenetworkstates" took 11.44167ms to process 0 records
I0310 19:33:01.791320 1 tasks_processing.go:74] worker 3 stopped.
E0310 19:33:01.791365 1 gather.go:140] gatherer "clusterconfig" function "support_secret" failed with the error: secrets "support" not found
I0310 19:33:01.791423 1 gather.go:177] gatherer "clusterconfig" function "support_secret" took 26.369698ms to process 0 records
I0310 19:33:01.791593 1 recorder.go:70] Recording config/clusteroperator/imageregistry.operator.openshift.io/imagepruner/cluster with fingerprint=105afd5163c93f0480427f3406920029dbfa1a745c34ce65306d3ad210ed6d14
I0310 19:33:01.791650 1 gather.go:177] gatherer "clusterconfig" function "image_pruners" took 19.64803ms to process 1 records
I0310 19:33:01.791722 1 tasks_processing.go:74] worker 11 stopped.
I0310 19:33:01.791869 1 recorder.go:70] Recording config/ingress with fingerprint=303185b902decad00a9e7e815a3846fd6b6603de8af435a7ca74fcb4e8d42cd2
I0310 19:33:01.791927 1 gather.go:177] gatherer "clusterconfig" function "ingress" took 37.328225ms to process 1 records
I0310 19:33:01.791892 1 tasks_processing.go:74] worker 6 stopped.
I0310 19:33:01.792068 1 tasks_processing.go:74] worker 20 stopped.
I0310 19:33:01.792115 1 recorder.go:70] Recording cluster-scoped-resources/rbac.authorization.k8s.io/clusterroles/admin with fingerprint=1dbab9aa49cb2b5afc2d2551558fe5b6a1d96b337df80c2c2156481157febbf7
I0310 19:33:01.792201 1 recorder.go:70] Recording cluster-scoped-resources/rbac.authorization.k8s.io/clusterroles/edit with fingerprint=546c54d8aaf0e1fb8cd203962f4682eb892bab925566f4c851a8900375ed69c4
I0310 19:33:01.792211 1 gather.go:177] gatherer "clusterconfig" function "clusterroles" took 46.496499ms to process 2 records
I0310 19:33:01.792297 1 tasks_processing.go:74] worker 1 stopped.
I0310 19:33:01.792805 1 recorder.go:70] Recording config/infrastructure with fingerprint=970b627dc18fb31da39d9cf395f9baccab81323f6724e348cea36edc5c532b14
I0310 19:33:01.792820 1 gather.go:177] gatherer "clusterconfig" function "infrastructures" took 19.925611ms to process 1 records
I0310 19:33:01.792902 1 tasks_processing.go:74] worker 12 stopped.
I0310 19:33:01.792979 1 recorder.go:70] Recording config/authentication with fingerprint=b148b7589b946741f5cda922c7c69bf2c9f1b445990606ae73ce97e389487754
I0310 19:33:01.792989 1 gather.go:177] gatherer "clusterconfig" function "authentication" took 21.167417ms to process 1 records
I0310 19:33:01.794161 1 tasks_processing.go:74] worker 9 stopped.
I0310 19:33:01.794366 1 recorder.go:70] Recording config/featuregate with fingerprint=b842e10d08e7497063935a3b488aa7890849cfccc4a00e124033305725d3bd73
I0310 19:33:01.794433 1 gather.go:177] gatherer "clusterconfig" function "feature_gates" took 29.544882ms to process 1 records
I0310 19:33:01.803372 1 shared_informer.go:320] Caches are synced for RequestHeaderAuthRequestController
I0310 19:33:01.803445 1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0310 19:33:01.803453 1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I0310 19:33:01.804611 1 tasks_processing.go:74] worker 8 stopped.
I0310 19:33:01.804674 1 recorder.go:70] Recording config/olm_operators with fingerprint=ccd47bde0effbfc030be5271fb335ce5b4babf5177739f92841ef384b4df1a7b
I0310 19:33:01.804690 1 gather.go:177] gatherer "clusterconfig" function "olm_operators" took 58.152336ms to process 1 records
I0310 19:33:01.807417 1 base_controller.go:82] Caches are synced for ConfigController
I0310 19:33:01.807431 1 base_controller.go:119] Starting #1 worker of ConfigController controller ...
I0310 19:33:01.807555 1 tasks_processing.go:74] worker 26 stopped.
I0310 19:33:01.807598 1 gather.go:177] gatherer "clusterconfig" function "number_of_pods_and_netnamespaces_with_sdn_annotations" took 63.153162ms to process 0 records
I0310 19:33:01.807926 1 sca.go:136] Pulling SCA certificates from https://api.openshift.com/api/accounts_mgmt/v1/entitlement_certificates. Next check is in 8h0m0s
I0310 19:33:01.807992 1 cluster_transfer.go:83] checking the availability of cluster transfer. Next check is in 12h0m0s
W0310 19:33:01.808081 1 operator.go:287] started
I0310 19:33:01.808104 1 base_controller.go:76] Waiting for caches to sync for LoggingSyncer
I0310 19:33:01.812181 1 tasks_processing.go:74] worker 23 stopped.
I0310 19:33:01.812401 1 recorder.go:70] Recording config/apiserver with fingerprint=5c3d756ec26067948996c603329af214c6d8fd02e6d7d522341aa9351e0394a1
I0310 19:33:01.812425 1 gather.go:177] gatherer "clusterconfig" function "cluster_apiserver" took 36.987672ms to process 1 records
I0310 19:33:01.812485 1 tasks_processing.go:74] worker 30 stopped.
I0310 19:33:01.812501 1 gather.go:177] gatherer "clusterconfig" function "aggregated_monitoring_cr_names" took 67.220705ms to process 0 records
I0310 19:33:01.824281 1 tasks_processing.go:74] worker 24 stopped.
I0310 19:33:01.825027 1 recorder.go:70] Recording config/crd/volumesnapshots.snapshot.storage.k8s.io with fingerprint=e9dc18544ee4ea5c321f4f5d4d7028d7b8501578f2ef4017dcc4083ae44ea0b1
I0310 19:33:01.825411 1 recorder.go:70] Recording config/crd/volumesnapshotcontents.snapshot.storage.k8s.io with fingerprint=84996649a7971b532d706eb7d756d2a40659a1dfd10212a1349eb1263735ba76
I0310 19:33:01.825429 1 gather.go:177] gatherer "clusterconfig" function "crds" took 58.91549ms to process 2 records
I0310 19:33:01.834163 1 tasks_processing.go:74] worker 15 stopped.
I0310 19:33:01.834439 1 recorder.go:70] Recording config/validatingwebhookconfigurations/multus.openshift.io with fingerprint=7ac2c846118b169ce73acbda0a2b228f27374aa092430a6c7c50bbeb1fb5c54f
I0310 19:33:01.834530 1 recorder.go:70] Recording config/validatingwebhookconfigurations/network-node-identity.openshift.io with fingerprint=d2ddd5204466741f6a9c858037b13af6df371b8597bf0ba9805fee54a512de6a
I0310 19:33:01.834553 1 recorder.go:70] Recording config/validatingwebhookconfigurations/performance-addon-operator with fingerprint=bf6a3a7728a811120836a1e60cb41e94c74bdc176be24954fcc090c2974a057a
I0310 19:33:01.834586 1 recorder.go:70] Recording config/validatingwebhookconfigurations/sre-clusterrolebindings-validation with fingerprint=1a48f4f0a2d6cf1d1be543b7f7926de5895a6ead408754e848569213dda66803
I0310 19:33:01.834631 1 recorder.go:70] Recording config/validatingwebhookconfigurations/sre-clusterroles-validation with fingerprint=ac8a094d5572e58427841737d39d072df2ae03a9771a4f3ecf18e740bc99ce63
I0310 19:33:01.834663 1 recorder.go:70] Recording config/validatingwebhookconfigurations/sre-ingress-config-validation with fingerprint=862fd4d110e33c75995eaba53db51de00ae60b8fc4a391414e66043a1c29c55e
I0310 19:33:01.834709 1 recorder.go:70] Recording config/validatingwebhookconfigurations/sre-regular-user-validation with fingerprint=0340499f685fbdf0b461f421552b543e75d317cdfd4b650df620a1b2339fb00d
I0310 19:33:01.834759 1 recorder.go:70] Recording config/validatingwebhookconfigurations/sre-scc-validation with fingerprint=d7f7fbd2bf316b36d46edc23d53f9cb52cb3fdc4ce65dd0d65040c83b05183b3
I0310 19:33:01.834793 1 recorder.go:70] Recording config/validatingwebhookconfigurations/sre-serviceaccount-validation with fingerprint=739c51a280e57ccf20ec8c23482311dd2859847ec856b2dcd40644b3538fe0b4
I0310 19:33:01.834833 1 recorder.go:70] Recording config/validatingwebhookconfigurations/sre-techpreviewnoupgrade-validation with fingerprint=75fcc99ff8c99d30fc9977b43b7c792a6af9f2a2c616cbd24b4a2293e4d28a89
I0310 19:33:01.834843 1 gather.go:177] gatherer "clusterconfig" function "validating_webhook_configurations" took 62.586051ms to process 10 records
I0310 19:33:01.836756 1 configmapobserver.go:84] configmaps "insights-config" not found
I0310 19:33:01.836951 1 controller.go:203] Source scaController *sca.Controller is not ready
I0310 19:33:01.837004 1 controller.go:203] Source clusterTransferController *clustertransfer.Controller is not ready
I0310 19:33:01.837030 1 controller.go:203] Source periodic-clusterconfig *controllerstatus.Simple is not ready
I0310 19:33:01.837053 1 controller.go:203] Source periodic-conditional *controllerstatus.Simple is not ready
I0310 19:33:01.837063 1 controller.go:203] Source periodic-workloads *controllerstatus.Simple is not ready
I0310 19:33:01.837088 1 controller.go:458] The operator is still being initialized
I0310 19:33:01.837096 1 controller.go:481] The operator is healthy
I0310 19:33:01.837116 1 prometheus_rules.go:88] Prometheus rules successfully created
E0310 19:33:01.864975 1 cluster_transfer.go:95] failed to pull cluster transfer: unable to retrieve cluster transfer data from https://api.openshift.com/api/accounts_mgmt/v1/cluster_transfers/: Get "https://api.openshift.com/api/accounts_mgmt/v1/cluster_transfers/?search=cluster_uuid+is+%27fafeefed-f842-42a4-954a-e9b801536622%27+and+status+is+%27accepted%27": dial tcp: lookup api.openshift.com on 172.30.0.10:53: read udp 10.129.0.9:44363->172.30.0.10:53: read: connection refused
I0310 19:33:01.864991 1 controllerstatus.go:80] name=clusterTransferController healthy=true reason=Disconnected message=failed to pull cluster transfer: unable to retrieve cluster transfer data from https://api.openshift.com/api/accounts_mgmt/v1/cluster_transfers/: Get "https://api.openshift.com/api/accounts_mgmt/v1/cluster_transfers/?search=cluster_uuid+is+%27fafeefed-f842-42a4-954a-e9b801536622%27+and+status+is+%27accepted%27": dial tcp: lookup api.openshift.com on 172.30.0.10:53: read udp 10.129.0.9:44363->172.30.0.10:53: read: connection refused
I0310 19:33:01.902016 1 tasks_processing.go:74] worker 0 stopped.
I0310 19:33:01.902614 1 recorder.go:70] Recording config/version with fingerprint=e2e7d798bdea0afaa8038a6a9c69e79481494e10fef9876d944bdf439782d6c0
I0310 19:33:01.902639 1 recorder.go:70] Recording config/id with fingerprint=42a86d5f645aff85f6be9f2e220ab7faa856941bb8e9cb55d227f5cbb8ae5e8e
I0310 19:33:01.902647 1 gather.go:177] gatherer "clusterconfig" function "version" took 157.693132ms to process 2 records
I0310 19:33:01.902756 1 tasks_processing.go:74] worker 16 stopped.
I0310 19:33:01.902767 1 recorder.go:70] Recording config/mutatingwebhookconfigurations/aws-pod-identity with fingerprint=5300d53f4f87b6cb4d3cc07ec3b10055d2e701db88712227b9e9ee4866d01d9b
I0310 19:33:01.902855 1 recorder.go:70] Recording config/mutatingwebhookconfigurations/sre-podimagespec-mutation with fingerprint=5b44318aa32ada6e92e4f04adc60993a5a17f2be14f1653ff4857780a5be4200
I0310 19:33:01.902907 1 recorder.go:70] Recording config/mutatingwebhookconfigurations/sre-service-mutation with fingerprint=45d0201d87b2d84699e223b86aeeba0fc4eae873d4b1ddcbf89d9e802f4c7d9c
I0310 19:33:01.902921 1 gather.go:177] gatherer "clusterconfig" function "mutating_webhook_configurations" took 147.903233ms to process 3 records
I0310 19:33:01.909123 1 base_controller.go:82] Caches are synced for LoggingSyncer
I0310 19:33:01.909138 1 base_controller.go:119] Starting #1 worker of LoggingSyncer controller ...
I0310 19:33:02.013097 1 requests.go:205] Asking for SCA certificate with "{"arch": ["x86_64"]}" payload
W0310 19:33:02.016762 1 sca.go:161] Failed to pull SCA certs from https://api.openshift.com/api/accounts_mgmt/v1/entitlement_certificates: unable to retrieve SCA certs data from https://api.openshift.com/api/accounts_mgmt/v1/entitlement_certificates: Post "https://api.openshift.com/api/accounts_mgmt/v1/entitlement_certificates": dial tcp: lookup api.openshift.com on 172.30.0.10:53: read udp 10.129.0.9:43017->172.30.0.10:53: read: connection refused
I0310 19:33:02.016777 1 controllerstatus.go:80] name=scaController healthy=true reason=NonHTTPError message=Failed to pull SCA certs from https://api.openshift.com/api/accounts_mgmt/v1/entitlement_certificates: unable to retrieve SCA certs data from https://api.openshift.com/api/accounts_mgmt/v1/entitlement_certificates: Post "https://api.openshift.com/api/accounts_mgmt/v1/entitlement_certificates": dial tcp: lookup api.openshift.com on 172.30.0.10:53: read udp 10.129.0.9:43017->172.30.0.10:53: read: connection refused
I0310 19:33:02.021227 1 tasks_processing.go:74] worker 2 stopped.
E0310 19:33:02.021240 1 gather.go:140] gatherer "clusterconfig" function "config_maps" failed with the error: configmaps "cluster-monitoring-config" not found
E0310 19:33:02.021246 1 gather.go:140] gatherer "clusterconfig" function "config_maps" failed with the error: configmaps "gateway-mode-config" not found
E0310 19:33:02.021252 1 gather.go:140] gatherer "clusterconfig" function "config_maps" failed with the error: configmaps "insights-config" not found
I0310 19:33:02.021264 1 recorder.go:70] Recording config/configmaps/openshift-config/installer-images/images.json with fingerprint=295a2bc7810c628251f1f312c209b5eb4da888326706c635ac0c12aec0969e17
I0310 19:33:02.021294 1 recorder.go:70] Recording config/configmaps/openshift-config/kube-root-ca.crt/ca.crt with fingerprint=d476c7d3f5b104863f08f481b1264dcc68cc272ecefb0ecb709b18a6afab034d
I0310 19:33:02.021302 1 recorder.go:70] Recording config/configmaps/openshift-config/openshift-install/invoker with fingerprint=76b482f683cd3ef9da02debac5b26080a5aeb06ff768ee5c21117514dff29d8a
I0310 19:33:02.021308 1 recorder.go:70] Recording config/configmaps/openshift-config/openshift-install/version with fingerprint=fe7abfa7f7aea852d8bca6b7df9b4d5de32045254ff2774f9bd30f0b6dcb7dc4
I0310 19:33:02.021312 1 recorder.go:70] Recording config/configmaps/openshift-config/openshift-service-ca.crt/service-ca.crt with fingerprint=e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
I0310 19:33:02.021357 1 recorder.go:70] Recording config/configmaps/openshift-config/rosa-brand-logo/rosa-brand-logo.svg with fingerprint=6ed8ca4dd7a8eee7249182bc006e9649ce84d76c551ddfaaa33e55d8c4cc1ed0
I0310 19:33:02.021367 1 recorder.go:70] Recording config/configmaps/kube-system/cluster-config-v1/install-config with fingerprint=ab3811c6b83fd7b8e920094cfa3080d1b4ee3c35ec4c8379437b21d27bd6608d
I0310 19:33:02.021379 1 gather.go:177] gatherer "clusterconfig" function "config_maps" took 249.675544ms to process 7 records
I0310 19:33:02.031520 1 tasks_processing.go:74] worker 13 stopped.
E0310 19:33:02.031538 1 gather.go:140] gatherer "clusterconfig" function "ingress_certificates" failed with the error: failed to get secret 'router-certs-default' in namespace 'openshift-ingress': secrets "router-certs-default" not found
E0310 19:33:02.031546 1 gather.go:140] gatherer "clusterconfig" function "ingress_certificates" failed with the error: failed to get secret '2ov1t6uc9nv2m79j3ja91bl0ks3h5r3s-primary-cert-bundle-secret' in namespace 'openshift-ingress-operator': secrets "2ov1t6uc9nv2m79j3ja91bl0ks3h5r3s-primary-cert-bundle-secret" not found
I0310 19:33:02.031628 1 recorder.go:70] Recording aggregated/ingress_controllers_certs with fingerprint=804aedb72599df1beb79201ce982bad202bff485007b0ab6752c730fac2ca035
I0310 19:33:02.031651 1 gather.go:177] gatherer "clusterconfig" function "ingress_certificates" took 256.209892ms to process 1 records
I0310 19:33:02.098565 1 gather_cluster_operators.go:184] Unable to get dnsrecords.ingress.operator.openshift.io resource due to: dnsrecords.ingress.operator.openshift.io "default" is forbidden: User "system:serviceaccount:openshift-insights:gather" cannot get resource "dnsrecords" in API group "ingress.operator.openshift.io" in the namespace "openshift-ingress-operator"
I0310 19:33:02.103586 1 gather_cluster_operators.go:184] Unable to get dnsrecords.ingress.operator.openshift.io resource due to: dnsrecords.ingress.operator.openshift.io "default" is forbidden: User "system:serviceaccount:openshift-insights:gather" cannot get resource "dnsrecords" in API group "ingress.operator.openshift.io" in the namespace "openshift-ingress"
I0310 19:33:02.275260 1 gather_cluster_operator_pods_and_events.go:121] Found 18 pods with 21 containers
I0310 19:33:02.275282 1 gather_cluster_operator_pods_and_events.go:235] Maximum buffer size: 1198372 bytes
I0310 19:33:02.275515 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for dns container dns-default-6r2gc pod in namespace openshift-dns (previous: false).
I0310 19:33:02.439098 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for dns-default-6r2gc pod in namespace openshift-dns for failing operator dns (previous: false): "container \"dns\" in pod \"dns-default-6r2gc\" is waiting to start: ContainerCreating"
I0310 19:33:02.439118 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"dns\" in pod \"dns-default-6r2gc\" is waiting to start: ContainerCreating"
I0310 19:33:02.439126 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for kube-rbac-proxy container dns-default-6r2gc pod in namespace openshift-dns (previous: false).
I0310 19:33:02.448292 1 gather_cluster_operators.go:184] Unable to get operatorpkis.network.operator.openshift.io resource due to: operatorpkis.network.operator.openshift.io "ovn" is forbidden: User "system:serviceaccount:openshift-insights:gather" cannot get resource "operatorpkis" in API group "network.operator.openshift.io" in the namespace "openshift-ovn-kubernetes"
I0310 19:33:02.630308 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for dns-default-6r2gc pod in namespace openshift-dns for failing operator kube-rbac-proxy (previous: false): "container \"kube-rbac-proxy\" in pod \"dns-default-6r2gc\" is waiting to start: ContainerCreating"
I0310 19:33:02.630327 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"kube-rbac-proxy\" in pod \"dns-default-6r2gc\" is waiting to start: ContainerCreating"
I0310 19:33:02.630354 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for dns container dns-default-ds46j pod in namespace openshift-dns (previous: false).
I0310 19:33:02.647465 1 gather_cluster_operators.go:184] Unable to get operatorpkis.network.operator.openshift.io resource due to: operatorpkis.network.operator.openshift.io "signer" is forbidden: User "system:serviceaccount:openshift-insights:gather" cannot get resource "operatorpkis" in API group "network.operator.openshift.io" in the namespace "openshift-ovn-kubernetes"
W0310 19:33:02.776319 1 gather_dvo_metrics.go:210] Failed to read the DVO metrics. Trying again.
I0310 19:33:02.845032 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for dns-default-ds46j pod in namespace openshift-dns for failing operator dns (previous: false): "container \"dns\" in pod \"dns-default-ds46j\" is waiting to start: ContainerCreating"
I0310 19:33:02.845053 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"dns\" in pod \"dns-default-ds46j\" is waiting to start: ContainerCreating"
I0310 19:33:02.845065 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for kube-rbac-proxy container dns-default-ds46j pod in namespace openshift-dns (previous: false).
I0310 19:33:03.021725 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for dns-default-ds46j pod in namespace openshift-dns for failing operator kube-rbac-proxy (previous: false): "container \"kube-rbac-proxy\" in pod \"dns-default-ds46j\" is waiting to start: ContainerCreating"
I0310 19:33:03.021755 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"kube-rbac-proxy\" in pod \"dns-default-ds46j\" is waiting to start: ContainerCreating"
I0310 19:33:03.021795 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for dns container dns-default-kdxgn pod in namespace openshift-dns (previous: false).
I0310 19:33:03.244654 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for dns-default-kdxgn pod in namespace openshift-dns for failing operator dns (previous: false): "container \"dns\" in pod \"dns-default-kdxgn\" is waiting to start: ContainerCreating"
I0310 19:33:03.244673 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"dns\" in pod \"dns-default-kdxgn\" is waiting to start: ContainerCreating"
I0310 19:33:03.244681 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for kube-rbac-proxy container dns-default-kdxgn pod in namespace openshift-dns (previous: false).
I0310 19:33:03.418054 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for dns-default-kdxgn pod in namespace openshift-dns for failing operator kube-rbac-proxy (previous: false): "container \"kube-rbac-proxy\" in pod \"dns-default-kdxgn\" is waiting to start: ContainerCreating"
I0310 19:33:03.418073 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"kube-rbac-proxy\" in pod \"dns-default-kdxgn\" is waiting to start: ContainerCreating"
I0310 19:33:03.418083 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for dns-node-resolver container node-resolver-j75lm pod in namespace openshift-dns (previous: false).
I0310 19:33:03.451461 1 gather_cluster_operators.go:184] Unable to get configs.samples.operator.openshift.io resource due to: configs.samples.operator.openshift.io "cluster" not found
I0310 19:33:03.616845 1 gather_cluster_operator_pods_and_events.go:280] Error: "log buffer is empty"
I0310 19:33:03.616863 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for dns-node-resolver container node-resolver-j8k2b pod in namespace openshift-dns (previous: false).
W0310 19:33:03.776265 1 gather_dvo_metrics.go:210] Failed to read the DVO metrics. Trying again.
I0310 19:33:03.816952 1 gather_cluster_operator_pods_and_events.go:280] Error: "log buffer is empty"
I0310 19:33:03.816970 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for dns-node-resolver container node-resolver-wx6k2 pod in namespace openshift-dns (previous: false).
I0310 19:33:03.853538 1 tasks_processing.go:74] worker 7 stopped.
I0310 19:33:03.853593 1 recorder.go:70] Recording config/clusteroperator/console with fingerprint=31015e5ec039d69ffca8e0b3492017b458d5f93596343d536dd5a644c0f64da5
I0310 19:33:03.853629 1 recorder.go:70] Recording config/clusteroperator/csi-snapshot-controller with fingerprint=b8c06bb028dce9b36db7743220e0cacbe7f9ca8ddd20fc6a3832cde83364b1fa
I0310 19:33:03.853660 1 recorder.go:70] Recording config/clusteroperator/operator.openshift.io/csisnapshotcontroller/cluster with fingerprint=5adc514f4b63e2f1ecc68bf6f9c0af70c5eea04522a49524e102721b1c41f80e
I0310 19:33:03.853692 1 recorder.go:70] Recording config/clusteroperator/dns with fingerprint=3522a21ad2fc319ba292364b98dcdfeb1e03232074a5be86755f882162a58e16
I0310 19:33:03.853711 1 recorder.go:70] Recording config/clusteroperator/operator.openshift.io/dns/default with fingerprint=9e7b4ce029030d3d8c3b49af92c556acdcc415000b40d3f969dbdc42c432b47f
I0310 19:33:03.853748 1 recorder.go:70] Recording config/clusteroperator/image-registry with fingerprint=44c2f019b437efa666593f9cbdddff3f371dae8f75ad03acca891771060ee637
I0310 19:33:03.853780 1 recorder.go:70] Recording config/clusteroperator/ingress with fingerprint=4ea2ae8e2a55b5c5e1751cab0d19708a82414a2b48f7fa3f4db1942880015e02
I0310 19:33:03.853804 1 recorder.go:70] Recording config/clusteroperator/operator.openshift.io/ingresscontroller/openshift-ingress-operator/default with fingerprint=5140fc294a1b5a26320c253d2d67aba83fac0c233ed647001e7a3f633cd83398
I0310 19:33:03.853834 1 recorder.go:70] Recording config/clusteroperator/insights with fingerprint=04365223f16bfae0567ccb127133b959a8e5fa12fadeab7856972d4d909db7cc
I0310 19:33:03.853845 1 recorder.go:70] Recording config/clusteroperator/operator.openshift.io/insightsoperator/cluster with fingerprint=e5ff11d57817f84a678f6fa9565af55bd1120227c16a21933637ab62675a6d70
I0310 19:33:03.853863 1 recorder.go:70] Recording config/clusteroperator/kube-apiserver with fingerprint=01c68b04408009386b49e0f57f1976dbdb3e22a6e70f0f114213d41d56e49de2
I0310 19:33:03.853873 1 recorder.go:70] Recording config/clusteroperator/operator.openshift.io/kubeapiserver/cluster with fingerprint=51503bf0b784fcf65ea46bcaf1f72ac1a5c4d5dc211934f18f27871efed05762
I0310 19:33:03.853889 1 recorder.go:70] Recording config/clusteroperator/kube-controller-manager with fingerprint=ee3070ad33b45ac79c863aeb290f99d3ac48bd73a7eed36bf3b624ccbb2677bc
I0310 19:33:03.853902 1 recorder.go:70] Recording config/clusteroperator/operator.openshift.io/kubecontrollermanager/cluster with fingerprint=ce90c0d4f367d7da085074268031798382ae7c54fdcb0a21f15a4818fe308c11
I0310 19:33:03.853919 1 recorder.go:70] Recording config/clusteroperator/kube-scheduler with fingerprint=df15af0f2dcb9f9b09e6727eaf65d054a320d9331074fd3c0dcfa3b578d90e2a
I0310 19:33:03.853930 1 recorder.go:70] Recording config/clusteroperator/operator.openshift.io/kubescheduler/cluster with fingerprint=f2940fb9fd20c19951dfc295eb363b7fba0c505f5ae61f01967a063099e6b60a
I0310 19:33:03.853951 1 recorder.go:70] Recording config/clusteroperator/kube-storage-version-migrator with fingerprint=dd21825844392a21ad424fb1db8f4c35af822a8047e5025e9df7aebd9905e53e
I0310 19:33:03.853962 1 recorder.go:70] Recording config/clusteroperator/operator.openshift.io/kubestorageversionmigrator/cluster with fingerprint=9351181aa7e6ada41ef581ab31e13516c6b934cc95710154bafb2eb222cb58db
I0310 19:33:03.854004 1 recorder.go:70] Recording config/clusteroperator/monitoring with fingerprint=7beaade9e582d135ea1880c8a4d6f12a2ddd1bef7084f08bf5fbdc5af60653bc
I0310 19:33:03.854125 1 recorder.go:70] Recording config/clusteroperator/network with fingerprint=b566fff26daf05230b3faf3f05b31daf6a6f237a2d9011f10aca1a4ff44851c9
I0310 19:33:03.854151 1 recorder.go:70] Recording config/clusteroperator/operator.openshift.io/network/cluster with fingerprint=a93d15eaecb455a0e40ecb2826eeecc1533899204ddd3c3921d15ab70af7ae75
I0310 19:33:03.854173 1 recorder.go:70] Recording config/clusteroperator/node-tuning with fingerprint=a619f77077149a8e51ff79f8d2ee9a45c216a90d5d9077a7d21ae9f6c8daf075
I0310 19:33:03.854197 1 recorder.go:70] Recording config/clusteroperator/openshift-apiserver with fingerprint=f0571a20c494faa9fd2c0219906a5f7739684d3aa6ddc74ad95883761fbb797e
I0310 19:33:03.854206 1 recorder.go:70] Recording config/clusteroperator/operator.openshift.io/openshiftapiserver/cluster with fingerprint=e712e6cf27339b441e4ed1f4cde91dbde7e952698ba93407e4457db63a4a4c76
I0310 19:33:03.854223 1 recorder.go:70] Recording config/clusteroperator/openshift-controller-manager with fingerprint=9c32bcc01d65a7a3ce2b2dd469e5cd4d3ef62e0da74489ab42953954b8e0b4b9
I0310 19:33:03.854232 1 recorder.go:70] Recording config/clusteroperator/operator.openshift.io/openshiftcontrollermanager/cluster with fingerprint=d71a0f4672f9b45d9fc8293bf1687afc650fd28d32e2e30de27523fe7b4eadf7
I0310 19:33:03.854245 1 recorder.go:70] Recording config/clusteroperator/openshift-samples with fingerprint=3b13aa1fb0e65da11b8d8423dad130ff896935bac2647d261a6642f5ad2ca47a
I0310 19:33:03.854261 1 recorder.go:70] Recording config/clusteroperator/operator-lifecycle-manager with fingerprint=fc41b0352bca3856451fc7a5c2e22d92f6243058183ba1af4825520d62570f18
I0310 19:33:03.854278 1 recorder.go:70] Recording config/clusteroperator/operator-lifecycle-manager-catalog with fingerprint=fcb3907083e8fef173475cd9364c1ba1a7e28fad4ab46cfe2d6f64f71efed5a1
I0310 19:33:03.854296 1 recorder.go:70] Recording config/clusteroperator/operator-lifecycle-manager-packageserver with fingerprint=d2c61cf75e08c770a2b254f243267e17282d96ef3fcec6667120f198c727b65f
I0310 19:33:03.854309 1 recorder.go:70] Recording config/clusteroperator/service-ca with fingerprint=0b75ce2c57fed2fc3b388d37917c32ef97dc574edc3d568815058f9b4f274cf1
I0310 19:33:03.854331 1 recorder.go:70] Recording config/clusteroperator/storage with fingerprint=78b04b586e883e9c7c7b2ab60b043847aaa69896fbd3001e6ba8573dc8217588
I0310 19:33:03.854350 1 recorder.go:70] Recording config/clusteroperator/operator.openshift.io/clustercsidriver/ebs.csi.aws.com with fingerprint=510064d6f6bcced87ab5bd2ddaff3d0edd7f93f4a4f7af2641f29fc53ffab21e
I0310 19:33:03.854360 1 recorder.go:70] Recording config/clusteroperator/operator.openshift.io/storage/cluster with fingerprint=8e480f8c1ce1b39baac42d8ec780c57c2592929ae0c801b61ffad49ba13f33ad
I0310 19:33:03.854367 1 gather.go:177] gatherer "clusterconfig" function "operators" took 2.079461765s to process 34 records
I0310 19:33:04.016824 1 gather_cluster_operator_pods_and_events.go:280] Error: "log buffer is empty"
I0310 19:33:04.016884 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for registry container image-registry-6587b4c985-8tv9m pod in namespace openshift-image-registry (previous: false).
I0310 19:33:04.218674 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for image-registry-6587b4c985-8tv9m pod in namespace openshift-image-registry for failing operator registry (previous: false): "container \"registry\" in pod \"image-registry-6587b4c985-8tv9m\" is waiting to start: ContainerCreating"
I0310 19:33:04.218692 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"registry\" in pod \"image-registry-6587b4c985-8tv9m\" is waiting to start: ContainerCreating"
I0310 19:33:04.218760 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for registry container image-registry-6c8fd6cf54-gzp5v pod in namespace openshift-image-registry (previous: false).
I0310 19:33:04.419295 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for image-registry-6c8fd6cf54-gzp5v pod in namespace openshift-image-registry for failing operator registry (previous: false): "container \"registry\" in pod \"image-registry-6c8fd6cf54-gzp5v\" is waiting to start: ContainerCreating"
I0310 19:33:04.419315 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"registry\" in pod \"image-registry-6c8fd6cf54-gzp5v\" is waiting to start: ContainerCreating"
I0310 19:33:04.419357 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for registry container image-registry-6c8fd6cf54-wht99 pod in namespace openshift-image-registry (previous: false).
I0310 19:33:04.618586 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for image-registry-6c8fd6cf54-wht99 pod in namespace openshift-image-registry for failing operator registry (previous: false): "container \"registry\" in pod \"image-registry-6c8fd6cf54-wht99\" is waiting to start: ContainerCreating"
I0310 19:33:04.618615 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"registry\" in pod \"image-registry-6c8fd6cf54-wht99\" is waiting to start: ContainerCreating"
I0310 19:33:04.618630 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for node-ca container node-ca-6px7d pod in namespace openshift-image-registry (previous: false).
W0310 19:33:04.776378 1 gather_dvo_metrics.go:210] Failed to read the DVO metrics. Trying again.
I0310 19:33:04.817377 1 gather_cluster_operator_pods_and_events.go:280] Error: "log buffer is empty"
I0310 19:33:04.817396 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for node-ca container node-ca-n6jjg pod in namespace openshift-image-registry (previous: false).
I0310 19:33:05.018118 1 gather_cluster_operator_pods_and_events.go:280] Error: "log buffer is empty"
I0310 19:33:05.018135 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for node-ca container node-ca-v56q4 pod in namespace openshift-image-registry (previous: false).
I0310 19:33:05.216540 1 gather_cluster_operator_pods_and_events.go:280] Error: "log buffer is empty"
I0310 19:33:05.216558 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for router container router-default-545fc58fb5-sf6tn pod in namespace openshift-ingress (previous: false).
I0310 19:33:05.419066 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for router-default-545fc58fb5-sf6tn pod in namespace openshift-ingress for failing operator router (previous: false): "container \"router\" in pod \"router-default-545fc58fb5-sf6tn\" is waiting to start: ContainerCreating"
I0310 19:33:05.419085 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"router\" in pod \"router-default-545fc58fb5-sf6tn\" is waiting to start: ContainerCreating"
I0310 19:33:05.419096 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for router container router-default-6d95686b9-62xm4 pod in namespace openshift-ingress (previous: false).
I0310 19:33:05.619296 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for router-default-6d95686b9-62xm4 pod in namespace openshift-ingress for failing operator router (previous: false): "container \"router\" in pod \"router-default-6d95686b9-62xm4\" is waiting to start: ContainerCreating"
I0310 19:33:05.619316 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"router\" in pod \"router-default-6d95686b9-62xm4\" is waiting to start: ContainerCreating"
I0310 19:33:05.619329 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for router container router-default-6d95686b9-dq9qz pod in namespace openshift-ingress (previous: false).
W0310 19:33:05.776676 1 gather_dvo_metrics.go:210] Failed to read the DVO metrics. Trying again.
I0310 19:33:05.821537 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for router-default-6d95686b9-dq9qz pod in namespace openshift-ingress for failing operator router (previous: false): "container \"router\" in pod \"router-default-6d95686b9-dq9qz\" is waiting to start: ContainerCreating"
I0310 19:33:05.821560 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"router\" in pod \"router-default-6d95686b9-dq9qz\" is waiting to start: ContainerCreating"
I0310 19:33:05.821599 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for serve-healthcheck-canary container ingress-canary-m9x8d pod in namespace openshift-ingress-canary (previous: false).
I0310 19:33:06.026030 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for ingress-canary-m9x8d pod in namespace openshift-ingress-canary for failing operator serve-healthcheck-canary (previous: false): "container \"serve-healthcheck-canary\" in pod \"ingress-canary-m9x8d\" is waiting to start: ContainerCreating"
I0310 19:33:06.026049 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"serve-healthcheck-canary\" in pod \"ingress-canary-m9x8d\" is waiting to start: ContainerCreating"
I0310 19:33:06.026092 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for serve-healthcheck-canary container ingress-canary-mx94p pod in namespace openshift-ingress-canary (previous: false).
I0310 19:33:06.217210 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for ingress-canary-mx94p pod in namespace openshift-ingress-canary for failing operator serve-healthcheck-canary (previous: false): "container \"serve-healthcheck-canary\" in pod \"ingress-canary-mx94p\" is waiting to start: ContainerCreating"
I0310 19:33:06.217232 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"serve-healthcheck-canary\" in pod \"ingress-canary-mx94p\" is waiting to start: ContainerCreating"
I0310 19:33:06.217271 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for serve-healthcheck-canary container ingress-canary-nsdv7 pod in namespace openshift-ingress-canary (previous: false).
I0310 19:33:06.433128 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for ingress-canary-nsdv7 pod in namespace openshift-ingress-canary for failing operator serve-healthcheck-canary (previous: false): "container \"serve-healthcheck-canary\" in pod \"ingress-canary-nsdv7\" is waiting to start: ContainerCreating"
I0310 19:33:06.433146 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"serve-healthcheck-canary\" in pod \"ingress-canary-nsdv7\" is waiting to start: ContainerCreating"
I0310 19:33:06.433175 1 tasks_processing.go:74] worker 5 stopped.
I0310 19:33:06.433283 1 recorder.go:70] Recording events/openshift-dns with fingerprint=c6e1fd5bde51173be85f41168ab8d40a0df2ce8f81271b885943b0863466afd3
I0310 19:33:06.433382 1 recorder.go:70] Recording events/openshift-image-registry with fingerprint=0202cbebce1b8e2bc4864f4fa7174378054e50fa66d80409161fef951a991ddc
I0310 19:33:06.433438 1 recorder.go:70] Recording events/openshift-ingress-operator with fingerprint=20b9c34b49c3300b337dfdf91537af0376ce25cc1e38ba6d714bc530f67a01ba
I0310 19:33:06.433485 1 recorder.go:70] Recording events/openshift-ingress with fingerprint=7ff693a2dd8813bbd2760eb298abb3c4a1cfbfd8ffe42b10d8d31455a6a15afd
I0310 19:33:06.433503 1 recorder.go:70] Recording events/openshift-ingress-canary with fingerprint=84957f2ba0660f33e676049c2fcf1ec85ad652764aa043658898cb22ede8f177
I0310 19:33:06.433634 1 recorder.go:70] Recording config/pod/openshift-dns/dns-default-6r2gc with fingerprint=bc222ab823d406c1363c65c62ea0ced54613b268fd6782b9b791f8882fcec7e9
I0310 19:33:06.433712 1 recorder.go:70] Recording config/pod/openshift-dns/dns-default-ds46j with fingerprint=8dddec533ec5c9aed17be838ce9cfdd67dfb956020c059c40e6574f494f320f4
I0310 19:33:06.433841 1 recorder.go:70] Recording config/pod/openshift-dns/dns-default-kdxgn with fingerprint=8cc36ccb7f3a7b5c84645a015131329383c9900652becc16f7578d9fb8f49a93
I0310 19:33:06.433953 1 recorder.go:70] Recording config/pod/openshift-image-registry/image-registry-6587b4c985-8tv9m with fingerprint=3da0e30ca94c396f1384e6d3a3894dc002d23cee13aba8909e456c0ff53a2e4c
I0310 19:33:06.434044 1 recorder.go:70] Recording config/pod/openshift-image-registry/image-registry-6c8fd6cf54-gzp5v with fingerprint=eb646fc92d43fa83ba936e5ccf3c8d060d4618fcfd304352c72ce28eb110a120
I0310 19:33:06.434133 1 recorder.go:70] Recording config/pod/openshift-image-registry/image-registry-6c8fd6cf54-wht99 with fingerprint=44cff8727ae6e4cfc5ccff70163b58694d9ce60c37ba9de79427350802a0929c
I0310 19:33:06.434188 1 recorder.go:70] Recording config/pod/openshift-ingress-canary/ingress-canary-m9x8d with fingerprint=22eaad2681093e96e15c21eb6ce5ba9b3860d11f548c9f2fc8e0ac7214bea3bc
I0310 19:33:06.434242 1 recorder.go:70] Recording config/pod/openshift-ingress-canary/ingress-canary-mx94p with fingerprint=ab325cf136ca5b23e473bfa1af7c65a8915f08a82bdda95c8024d95b12cf1f2d
I0310 19:33:06.434304 1 recorder.go:70] Recording config/pod/openshift-ingress-canary/ingress-canary-nsdv7 with fingerprint=be426e1012cdbb165f568157f19dadaf9dd2cee03b7573c3306494d84227d0bb
I0310 19:33:06.434314 1 gather.go:177] gatherer "clusterconfig" function "operators_pods_and_events" took 4.678626407s to process 14 records
W0310 19:33:06.776689 1 gather_dvo_metrics.go:210] Failed to read the DVO metrics. Trying again.
W0310 19:33:06.776718 1 gather_dvo_metrics.go:117] Unable to read metrics from endpoint "http://deployment-validation-operator-metrics.openshift-deployment-validation-operator.svc:8383": DVO metrics service was not available within the 5s timeout: context deadline exceeded
I0310 19:33:06.776753 1 tasks_processing.go:74] worker 17 stopped.
E0310 19:33:06.776766 1 gather.go:140] gatherer "clusterconfig" function "dvo_metrics" failed with the error: DVO metrics service was not available within the 5s timeout: context deadline exceeded
I0310 19:33:06.776782 1 recorder.go:70] Recording config/dvo_metrics with fingerprint=e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
W0310 19:33:06.776803 1 gather.go:155] issue recording gatherer "clusterconfig" function "dvo_metrics" result "config/dvo_metrics" because of the warning: warning: the record with the same fingerprint "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855" was already recorded at path "config/configmaps/openshift-config/openshift-service-ca.crt/service-ca.crt", recording another one with a different path "config/dvo_metrics"
I0310 19:33:06.776821 1 gather.go:177] gatherer "clusterconfig" function "dvo_metrics" took 5.032314533s to process 1 records
I0310 19:33:14.080327 1 configmapobserver.go:84] configmaps "insights-config" not found
I0310 19:33:14.416642 1 tasks_processing.go:74] worker 27 stopped.
I0310 19:33:14.416692 1 recorder.go:70] Recording config/installplans with fingerprint=5f64cb901ee13f1d49fc4a78ffb6d526a92f22b5daf9b44b941f5370d837c9f9
I0310 19:33:14.416708 1 gather.go:177] gatherer "clusterconfig" function "install_plans" took 12.662454147s to process 1 records
I0310 19:33:14.937533 1 configmapobserver.go:84] configmaps "insights-config" not found
I0310 19:33:14.954467 1 tasks_processing.go:74] worker 29 stopped.
I0310 19:33:14.954751 1 recorder.go:70] Recording config/serviceaccounts with fingerprint=6a25405fe5ae5697551c08cd61c8ee577eba432556fcc5dc2605b343adedffa8
I0310 19:33:14.954771 1 gather.go:177] gatherer "clusterconfig" function "service_accounts" took 13.20843396s to process 1 records
E0310 19:33:14.954830 1 periodic.go:250] "Unhandled Error" err="clusterconfig failed after 13.21s with: function \"machine_healthchecks\" failed with an error, function \"machines\" failed with an error, function \"pod_network_connectivity_checks\" failed with an error, function \"machine_configs\" failed with an error, function \"support_secret\" failed with an error, function \"config_maps\" failed with an error, function \"ingress_certificates\" failed with an error, function \"dvo_metrics\" failed with an error"
I0310 19:33:14.955938 1 controllerstatus.go:89] name=periodic-clusterconfig healthy=false reason=PeriodicGatherFailed message=Source clusterconfig could not be retrieved: function "machine_healthchecks" failed with an error, function "machines" failed with an error, function "pod_network_connectivity_checks" failed with an error, function "machine_configs" failed with an error, function "support_secret" failed with an error, function "config_maps" failed with an error, function "ingress_certificates" failed with an error, function "dvo_metrics" failed with an error
I0310 19:33:14.955954 1 periodic.go:212] Running workloads gatherer
I0310 19:33:14.955972 1 tasks_processing.go:45] number of workers: 2
I0310 19:33:14.955980 1 tasks_processing.go:69] worker 1 listening for tasks.
I0310 19:33:14.955984 1 tasks_processing.go:71] worker 1 working on workload_info task.
I0310 19:33:14.955988 1 tasks_processing.go:69] worker 0 listening for tasks.
I0310 19:33:14.956058 1 tasks_processing.go:71] worker 0 working on helmchart_info task.
I0310 19:33:14.983111 1 gather_workloads_info.go:278] Loaded pods in 0s, will wait 22s for image data
I0310 19:33:14.994200 1 gather_workloads_info.go:387] No image sha256:bae7a33f8db8a3d4b3c4c05498aba85a0ce463f85322067c48e79663710e616e (12ms)
I0310 19:33:14.996530 1 tasks_processing.go:74] worker 0 stopped.
I0310 19:33:14.996546 1 gather.go:177] gatherer "workloads" function "helmchart_info" took 40.45637ms to process 0 records
I0310 19:33:15.005917 1 gather_workloads_info.go:387] No image sha256:a7c71f3c9714cf1717c45e12ec817be6d6209f00989e6a31d634e527f0c8147d (12ms)
I0310 19:33:15.017816 1 gather_workloads_info.go:387] No image sha256:6e67c980b7300e769fb4b2adaaf006d0f8274e43b10586701b204ea5153f15fc (12ms)
I0310 19:33:15.028835 1 gather_workloads_info.go:387] No image sha256:373614619b9420b110d1508c5f17e066ce69c4c226fd04d02fbb959d9ba41eb6 (11ms)
I0310 19:33:15.040156 1 gather_workloads_info.go:387] No image sha256:824a3b8f78e19aa21a9f6444aefe8e0b624886cca18fe828b13acbab55e6e868 (11ms)
I0310 19:33:15.051849 1 gather_workloads_info.go:387] No image sha256:9697bc2258bdfa9ae8c1866cc7eb0b3b46851998db827b121bdf77417a881eb3 (12ms)
I0310 19:33:15.066953 1 gather_workloads_info.go:387] No image sha256:6d6ebff54b8adac74f4d1b12ac8aa0f16cd7b28370e0d6aa847d1c457e03a5b6 (15ms)
I0310 19:33:15.078690 1 gather_workloads_info.go:387] No image sha256:420c12f61c53e54eab2d99476759c0de339fca98ce0a3a782bc6545cc0e97a9c (12ms)
I0310 19:33:15.089971 1 gather_workloads_info.go:387] No image sha256:87efe06afd1f04426fca7f86c0f74c4ee75c311ba199ddabd5b849b877bc59fa (11ms)
I0310 19:33:15.102800 1 gather_workloads_info.go:387] No image sha256:ceab8a8c340801fcebf800620d3fef5493d8a443077a21c6e56984051bd3abde (13ms)
I0310 19:33:15.114510 1 gather_workloads_info.go:387] No image sha256:2d077e73dd76873ee2a1583aa3b3e76a1b408737f0af2e22b2aa055604d89e81 (12ms)
I0310 19:33:15.138331 1 configmapobserver.go:84] configmaps "insights-config" not found
I0310 19:33:15.194559 1 gather_workloads_info.go:387] No image sha256:3d0634fe58641d5242649c44ddba70ed67fdd6d2dcb5c2261df5cee8b33de9fd (80ms)
I0310 19:33:15.295404 1 gather_workloads_info.go:387] No image sha256:8ef4bdf07dda423fab73484dbc66d527ce8ba8cc2d4f99210bc1bc24ea08c0cb (101ms)
I0310 19:33:15.394768 1 gather_workloads_info.go:387] No image sha256:14703f73bb1ca69ea03b726e340b3b68f8e294e1f22c80d1a16666ea3d4a88a3 (99ms)
I0310 19:33:15.495240 1 gather_workloads_info.go:387] No image sha256:48404bd61d05dc738ccce4d22e36b30dbcdc6015b06b4afb604fd1baeee35bf2 (100ms)
I0310 19:33:15.594677 1 gather_workloads_info.go:387] No image sha256:7941b5d9a758b8667e99cd9236f7f96ec036af61df5cbc96e16077d36700d7c7 (99ms)
I0310 19:33:15.694861 1 gather_workloads_info.go:387] No image sha256:0a8a0473029ace3adbb66f490dbe560f0e7782a38cdf32c1f7dd3e092e1d191e (100ms)
I0310 19:33:15.795250 1 gather_workloads_info.go:387] No image sha256:ccdda82b4adfa8ee4a84b3bad19693be6a42a9881a7b83788d835912129c49a4 (100ms)
I0310 19:33:15.894208 1 gather_workloads_info.go:387] No image sha256:a9e885d6f0456a2cc10f9e5da71fe9403f5e4c639b9b7ea15bd403b272ccb824 (99ms)
I0310 19:33:15.994384 1 gather_workloads_info.go:387] No image sha256:752be69b2262be713df12c47f4bac8c2dafed272c401e9a89f8060f053d68054 (100ms)
I0310 19:33:16.094753 1 gather_workloads_info.go:387] No image sha256:d28734effaadd66434e77a1a5fbe2e8a4ca2066cd9f8868c22ade9475539bfd7 (100ms)
I0310 19:33:16.194782 1 gather_workloads_info.go:387] No image sha256:4f175ee49f51dc4379e9993fecd1657c7a9c4c3dc096772cb198d0212b9eea47 (100ms)
I0310 19:33:16.294968 1 gather_workloads_info.go:387] No image sha256:8b1e32ce43eb1247b0558230b6e0baa85ac62f1a5e2a2089d40b4fea90538529 (100ms)
I0310 19:33:16.394834 1 gather_workloads_info.go:387] No image sha256:23c4fe84047a4e6cbe7f75f470e7d6fe0e61e7910dd17945b1b61bc4b72f3f2a (100ms)
I0310 19:33:16.495221 1 gather_workloads_info.go:387] No image sha256:944d9261ba7a143131fe8267c172defc5f37acc1cea3d4d373ec6fc5d8bfcc31 (100ms)
I0310 19:33:16.593968 1 gather_workloads_info.go:387] No image sha256:f2c46054a8f64a0e949cbf30295f31b4a35a0203c6ad03fa7ec922b4101dcbf8 (99ms)
I0310 19:33:16.694867 1 gather_workloads_info.go:387] No image sha256:76f820bd9bd138d29305d545d7d49bfe63e2923f1fb1ec2e8eac81a388359024 (101ms)
I0310 19:33:16.795233 1 gather_workloads_info.go:387] No image sha256:0a92dd43975f972e3dc707fb37854e046813531553038147e9c90d54a8d9df73 (100ms)
I0310 19:33:16.893939 1 gather_workloads_info.go:387] No image sha256:18956d5fbc8d0369a26cb1e00eb5c3aff1973ae8cbd3983920a0cf2851f05358 (99ms)
I0310 19:33:16.995228 1 gather_workloads_info.go:387] No image sha256:2fac754deaeade3456361eed52e344318ff16d04819384432759f0ea35530114 (101ms)
I0310 19:33:17.096407 1 gather_workloads_info.go:387] No image sha256:c69559260f8b618abc6561da7b0327a2da59e6c09d27f249a14c4b2733ed0384 (101ms)
I0310 19:33:17.096435 1 tasks_processing.go:74] worker 1 stopped.
E0310 19:33:17.096445 1 gather.go:140] gatherer "workloads" function "workload_info" failed with the error: no running pods found for the insights-runtime-extractor statefulset
I0310 19:33:17.096700 1 recorder.go:70] Recording config/workload_info with fingerprint=e5992cd613a4160e2d098358b26eff9fd8bc9f13e849cb39ca7e3cbd1744e7ef
I0310 19:33:17.096721 1 gather.go:177] gatherer "workloads" function "workload_info" took 2.140444271s to process 1 records
E0310 19:33:17.096765 1 periodic.go:250] "Unhandled Error" err="workloads failed after 2.14s with: function \"workload_info\" failed with an error"
I0310 19:33:17.097869 1 controllerstatus.go:89] name=periodic-workloads healthy=false reason=PeriodicGatherFailed message=Source workloads could not be retrieved: function "workload_info" failed with an error
I0310 19:33:17.097881 1 periodic.go:212] Running conditional gatherer
I0310 19:33:17.107977 1 requests.go:294] Making HTTP GET request at: https://console.redhat.com/api/gathering/v2/4.19.9/gathering_rules
I0310 19:33:17.114441 1 conditional_gatherer.go:107] Get "https://console.redhat.com/api/gathering/v2/4.19.9/gathering_rules": dial tcp: lookup console.redhat.com on 172.30.0.10:53: read udp 10.129.0.9:42717->172.30.0.10:53: read: connection refused
E0310 19:33:17.114716 1 conditional_gatherer.go:320] unable to update alerts cache: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory
I0310 19:33:17.114787 1 conditional_gatherer.go:382] updating version cache for conditional gatherer
I0310 19:33:17.125350 1 conditional_gatherer.go:390] cluster version is '4.19.9'
E0310 19:33:17.125364 1 conditional_gatherer.go:207] error checking conditions for a gathering rule: alerts cache is missing
E0310 19:33:17.125369 1 conditional_gatherer.go:207] error checking conditions for a gathering rule: alerts cache is missing
E0310 19:33:17.125372 1 conditional_gatherer.go:207] error checking conditions for a gathering rule: alerts cache is missing
E0310 19:33:17.125376 1 conditional_gatherer.go:207] error checking conditions for a gathering rule: alerts cache is missing
E0310 19:33:17.125379 1 conditional_gatherer.go:207] error checking conditions for a gathering rule: alerts cache is missing
E0310 19:33:17.125382 1 conditional_gatherer.go:207] error checking conditions for a gathering rule: alerts cache is missing
E0310 19:33:17.125385 1 conditional_gatherer.go:207] error checking conditions for a gathering rule: alerts cache is missing
E0310 19:33:17.125388 1 conditional_gatherer.go:207] error checking conditions for a gathering rule: alerts cache is missing
E0310 19:33:17.125392 1 conditional_gatherer.go:207] error checking conditions for a gathering rule: alerts cache is missing
I0310 19:33:17.125407 1 tasks_processing.go:45] number of workers: 3
I0310 19:33:17.125418 1 tasks_processing.go:69] worker 2 listening for tasks.
I0310 19:33:17.125423 1 tasks_processing.go:71] worker 2 working on conditional_gatherer_rules task.
I0310 19:33:17.125435 1 tasks_processing.go:69] worker 0 listening for tasks.
I0310 19:33:17.125447 1 tasks_processing.go:69] worker 1 listening for tasks.
I0310 19:33:17.125459 1 tasks_processing.go:74] worker 1 stopped.
I0310 19:33:17.125473 1 tasks_processing.go:71] worker 0 working on remote_configuration task.
I0310 19:33:17.125470 1 tasks_processing.go:71] worker 2 working on rapid_container_logs task.
I0310 19:33:17.125522 1 recorder.go:70] Recording insights-operator/conditional-gatherer-rules with fingerprint=7034af97e7e41c22e4b775abdd4b9066c8ebb19da33eb7f69f39bfd2eb5f6406
I0310 19:33:17.125539 1 gather.go:177] gatherer "conditional" function "conditional_gatherer_rules" took 948ns to process 1 records
I0310 19:33:17.125572 1 recorder.go:70] Recording insights-operator/remote-configuration with fingerprint=0394430c431eec4d48bb1811a90918e95161d2282c59af26f2473613cc0959db
I0310 19:33:17.125582 1 gather.go:177] gatherer "conditional" function "remote_configuration" took 1.12µs to process 1 records
I0310 19:33:17.125588 1 tasks_processing.go:74] worker 0 stopped.
I0310 19:33:17.125766 1 tasks_processing.go:74] worker 2 stopped.
I0310 19:33:17.125779 1 gather.go:177] gatherer "conditional" function "rapid_container_logs" took 284.06µs to process 0 records
I0310 19:33:17.125800 1 controllerstatus.go:89] name=periodic-conditional healthy=false reason=NotAvailable message=Get "https://console.redhat.com/api/gathering/v2/4.19.9/gathering_rules": dial tcp: lookup console.redhat.com on 172.30.0.10:53: read udp 10.129.0.9:42717->172.30.0.10:53: read: connection refused
I0310 19:33:17.125817 1 recorder.go:70] Recording insights-operator/remote-configuration.json with fingerprint=359de9c990c741675cec72fda96b5c3682221efdb4799f5eaa6e9805bcd3b5c1
I0310 19:33:17.152331 1 recorder.go:70] Recording insights-operator/gathers with fingerprint=961c872be37530e9692b813291204d290ce2cefb978a8209de172d0059731a9f
I0310 19:33:17.152523 1 diskrecorder.go:70] Writing 109 records to /var/lib/insights-operator/insights-2026-03-10-193317.tar.gz
I0310 19:33:17.160398 1 diskrecorder.go:51] Wrote 109 records to disk in 7ms
I0310 19:33:17.160428 1 periodic.go:281] Gathering cluster info every 2h0m0s
I0310 19:33:17.160446 1 periodic.go:282] Configuration is dataReporting: interval: 2h0m0s, uploadEndpoint: https://console.redhat.com/api/ingress/v1/upload, storagePath: /var/lib/insights-operator, downloadEndpoint: https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/reports, conditionalGathererEndpoint: https://console.redhat.com/api/gathering/v2/%s/gathering_rules, obfuscation: [] sca: disabled: false, endpoint: https://api.openshift.com/api/accounts_mgmt/v1/entitlement_certificates, interval: 8h0m0s alerting: disabled: false clusterTransfer: endpoint: https://api.openshift.com/api/accounts_mgmt/v1/cluster_transfers/, interval: 12h0m0s proxy: httpProxy: , httpsProxy: , noProxy:
I0310 19:33:24.418342 1 configmapobserver.go:84] configmaps "insights-config" not found
I0310 19:34:31.745347 1 diskrecorder.go:223] Found files to send: insights-2026-03-10-193317.tar.gz
I0310 19:34:31.745376 1 insightsuploader.go:150] Checking archives to upload periodically every 15m30.29787461s
I0310 19:34:31.745385 1 insightsuploader.go:165] Uploading latest report since 0001-01-01T00:00:00Z
I0310 19:34:31.761317 1 requests.go:46] Uploading application/vnd.redhat.openshift.periodic to https://console.redhat.com/api/ingress/v1/upload
I0310 19:34:32.084197 1 requests.go:87] Successfully reported id=2026-03-10T19:34:31Z x-rh-insights-request-id=b25a45386a624ae8b1bf92e3cca8d492, wrote=55905
I0310 19:34:32.084262 1 insightsuploader.go:187] Uploaded report successfully in 338.866714ms
I0310 19:34:32.084291 1 controller.go:119] Initializing last reported time to 2026-03-10T19:34:31Z
I0310 19:34:32.084309 1 insightsreport.go:304] Archive uploaded, starting pulling report...
I0310 19:34:32.084320 1 insightsreport.go:215] Starting retrieving report from Smart Proxy
I0310 19:34:32.084333 1 insightsreport.go:221] Initial delay for pulling: 1m0s
I0310 19:34:32.092283 1 controller.go:481] The operator is healthy
I0310 19:34:36.375445 1 observer_polling.go:111] Observed file "/var/run/secrets/serving-cert/tls.crt" has been created (hash="caee657a2a7aab8b7bccd4271bb6a01ac3439651798963db8bed9e63f796914e")
W0310 19:34:36.375487 1 builder.go:160] Restart triggered because of file /var/run/secrets/serving-cert/tls.crt was created
I0310 19:34:36.375551 1 observer_polling.go:111] Observed file "/var/run/secrets/serving-cert/tls.key" has been created (hash="949daac519394d6141900ca6dcf3244c669ca425b135499e2caa1d6121c7aa1f")
I0310 19:34:36.375605 1 base_controller.go:181] Shutting down ConfigController ...
I0310 19:34:36.375714 1 observer_polling.go:111] Observed file "/var/run/configmaps/service-ca-bundle/service-ca.crt" has been created (hash="e97aea10e687bb9a27752558fbb9ff62c480bbdb8b9473b96ccb93d9a00e411c")
I0310 19:34:36.375613 1 genericapiserver.go:693] "[graceful-termination] pre-shutdown hooks completed" name="PreShutdownHooksStopped"
I0310 19:34:36.375573 1 genericapiserver.go:548] "[graceful-termination] shutdown event" name="ShutdownInitiated"
I0310 19:34:36.375622 1 simple_featuregate_reader.go:177] Shutting down feature-gate-detector
E0310 19:34:36.375639 1 controller.go:290] Unable to write cluster operator status: client rate limiter Wait returned an error: context canceled
I0310 19:34:36.375956 1 object_count_tracker.go:151] "StorageObjectCountTracker pruner is exiting"
I0310 19:34:36.375986 1 secure_serving.go:258] Stopped listening on [::]:8443
I0310 19:34:36.375652 1 base_controller.go:181] Shutting down LoggingSyncer ...
I0310 19:34:36.375667 1 base_controller.go:123] Shutting down worker of LoggingSyncer controller ...
I0310 19:34:36.375671 1 base_controller.go:123] Shutting down worker of ConfigController controller ...
I0310 19:34:36.376061 1 base_controller.go:113] All LoggingSyncer workers have been terminated