W0310 19:28:13.983482 1 cmd.go:257] Using insecure, self-signed certificates
I0310 19:28:14.496020 1 start.go:138] Unable to read service ca bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory
I0310 19:28:14.496326 1 observer_polling.go:159] Starting file observer
W0310 19:28:14.521431 1 builder.go:272] unable to get owner reference (falling back to namespace): replicasets.apps "insights-operator-5ff5cb4f99" is forbidden: User "system:serviceaccount:openshift-insights:operator" cannot get resource "replicasets" in API group "apps" in the namespace "openshift-insights"
I0310 19:28:14.817354 1 operator.go:59] Starting insights-operator v0.0.0-master+$Format:%H$
I0310 19:28:14.817574 1 legacy_config.go:327] Current config: {"report":false,"storagePath":"/var/lib/insights-operator","interval":"2h","endpoint":"https://console.redhat.com/api/ingress/v1/upload","conditionalGathererEndpoint":"https://console.redhat.com/api/gathering/v2/%s/gathering_rules","pull_report":{"endpoint":"https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/reports","delay":"60s","timeout":"3000s","min_retry":"30s"},"impersonate":"system:serviceaccount:openshift-insights:gather","enableGlobalObfuscation":false,"ocm":{"scaEndpoint":"https://api.openshift.com/api/accounts_mgmt/v1/entitlement_certificates","scaInterval":"8h","scaDisabled":false,"clusterTransferEndpoint":"https://api.openshift.com/api/accounts_mgmt/v1/cluster_transfers/","clusterTransferInterval":"12h"},"disableInsightsAlerts":false,"processingStatusEndpoint":"https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/request/%s/status","reportEndpointTechPreview":"https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/request/%s/report"}
I0310 19:28:14.818165 1 secure_serving.go:57] Forcing use of http/1.1 only
I0310 19:28:14.818175 1 simple_featuregate_reader.go:171] Starting feature-gate-detector
W0310 19:28:14.818191 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.
W0310 19:28:14.818198 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.
W0310 19:28:14.818204 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.
W0310 19:28:14.818208 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.
W0310 19:28:14.818213 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.
W0310 19:28:14.818217 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.
I0310 19:28:14.823075 1 operator.go:124] FeatureGates initialized: knownFeatureGates=[AWSEFSDriverVolumeMetrics AdditionalRoutingCapabilities AdminNetworkPolicy AlibabaPlatform AzureWorkloadIdentity BareMetalLoadBalancer BuildCSIVolumes CPMSMachineNamePrefix ChunkSizeMiB CloudDualStackNodeIPs ConsolePluginContentSecurityPolicy DisableKubeletCloudCredentialProviders ExternalOIDC GCPLabelsTags GatewayAPI GatewayAPIController HardwareSpeed IngressControllerLBSubnetsAWS KMSv1 ManagedBootImages ManagedBootImagesAWS MetricsCollectionProfiles MultiArchInstallAWS MultiArchInstallGCP NetworkDiagnosticsConfig NetworkLiveMigration NetworkSegmentation NodeDisruptionPolicy OnClusterBuild PersistentIPsForVirtualization PrivateHostedZoneAWS RouteExternalCertificate ServiceAccountTokenNodeBinding SetEIPForNLBIngressController VSphereDriverConfiguration VSphereMultiVCenters ValidatingAdmissionPolicy AWSClusterHostedDNS AutomatedEtcdBackup BootcNodeManagement ClusterAPIInstall ClusterAPIInstallIBMCloud ClusterMonitoringConfig ClusterVersionOperatorConfiguration DNSNameResolver DualReplica DyanmicServiceEndpointIBMCloud DynamicResourceAllocation EtcdBackendQuota EventedPLEG Example Example2 ExternalOIDCWithUIDAndExtraClaimMappings GCPClusterHostedDNS GCPCustomAPIEndpoints HighlyAvailableArbiter ImageStreamImportMode IngressControllerDynamicConfigurationManager InsightsConfig InsightsConfigAPI InsightsOnDemandDataGather InsightsRuntimeExtractor KMSEncryptionProvider MachineAPIMigration MachineAPIOperatorDisableMachineHealthCheckController MachineAPIProviderOpenStack MachineConfigNodes MaxUnavailableStatefulSet MinimumKubeletVersion MixedCPUsAllocation MultiArchInstallAzure NewOLM NewOLMCatalogdAPIV1Metas NewOLMOwnSingleNamespace NewOLMPreflightPermissionChecks NodeSwap NutanixMultiSubnets OVNObservability OpenShiftPodSecurityAdmission PinnedImages PlatformOperators ProcMountType RouteAdvertisements SELinuxChangePolicy SELinuxMount ShortCertRotation SignatureStores SigstoreImageVerification SigstoreImageVerificationPKI TranslateStreamCloseWebsocketRequests UpgradeStatus UserNamespacesPodSecurityStandards UserNamespacesSupport VSphereConfigurableMaxAllowedBlockVolumesPerNode VSphereHostVMGroupZonal VSphereMultiDisk VSphereMultiNetworks VolumeAttributesClass VolumeGroupSnapshot]
I0310 19:28:14.823142 1 event.go:377] Event(v1.ObjectReference{Kind:"Namespace", Namespace:"openshift-insights", Name:"openshift-insights", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'FeatureGatesInitialized' FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdditionalRoutingCapabilities", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CPMSMachineNamePrefix", "ChunkSizeMiB", "CloudDualStackNodeIPs", "ConsolePluginContentSecurityPolicy", "DisableKubeletCloudCredentialProviders", "ExternalOIDC", "GCPLabelsTags", "GatewayAPI", "GatewayAPIController", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MetricsCollectionProfiles", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SetEIPForNLBIngressController", "VSphereDriverConfiguration", "VSphereMultiVCenters", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AutomatedEtcdBackup", "BootcNodeManagement", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ClusterVersionOperatorConfiguration", "DNSNameResolver", "DualReplica", "DyanmicServiceEndpointIBMCloud", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "Example2", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GCPCustomAPIEndpoints", "HighlyAvailableArbiter", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "KMSEncryptionProvider", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NewOLM", "NewOLMCatalogdAPIV1Metas", "NewOLMOwnSingleNamespace", "NewOLMPreflightPermissionChecks", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "SELinuxChangePolicy", "SELinuxMount", "ShortCertRotation", "SignatureStores", "SigstoreImageVerification", "SigstoreImageVerificationPKI", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereConfigurableMaxAllowedBlockVolumesPerNode", "VSphereHostVMGroupZonal", "VSphereMultiDisk", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}
I0310 19:28:14.825061 1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
I0310 19:28:14.825080 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController
I0310 19:28:14.825090 1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I0310 19:28:14.825123 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0310 19:28:14.825135 1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
I0310 19:28:14.825202 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I0310 19:28:14.825380 1 dynamic_serving_content.go:135] "Starting controller" name="serving-cert::/tmp/serving-cert-3228657374/tls.crt::/tmp/serving-cert-3228657374/tls.key"
I0310 19:28:14.825556 1 secure_serving.go:213] Serving securely on [::]:8443
I0310 19:28:14.825587 1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
E0310 19:28:14.827966 1 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:serviceaccount:openshift-insights:operator\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-insights\"" event="&Event{ObjectMeta:{openshift-insights.189b9188bffd65be openshift-insights 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Namespace,Namespace:openshift-insights,Name:openshift-insights,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:FeatureGatesInitialized,Message:FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{\"AWSEFSDriverVolumeMetrics\", \"AdditionalRoutingCapabilities\", \"AdminNetworkPolicy\", \"AlibabaPlatform\", \"AzureWorkloadIdentity\", \"BareMetalLoadBalancer\", \"BuildCSIVolumes\", \"CPMSMachineNamePrefix\", \"ChunkSizeMiB\", \"CloudDualStackNodeIPs\", \"ConsolePluginContentSecurityPolicy\", \"DisableKubeletCloudCredentialProviders\", \"ExternalOIDC\", \"GCPLabelsTags\", \"GatewayAPI\", \"GatewayAPIController\", \"HardwareSpeed\", \"IngressControllerLBSubnetsAWS\", \"KMSv1\", \"ManagedBootImages\", \"ManagedBootImagesAWS\", \"MetricsCollectionProfiles\", \"MultiArchInstallAWS\", \"MultiArchInstallGCP\", \"NetworkDiagnosticsConfig\", \"NetworkLiveMigration\", \"NetworkSegmentation\", \"NodeDisruptionPolicy\", \"OnClusterBuild\", \"PersistentIPsForVirtualization\", \"PrivateHostedZoneAWS\", \"RouteExternalCertificate\", \"ServiceAccountTokenNodeBinding\", \"SetEIPForNLBIngressController\", \"VSphereDriverConfiguration\", \"VSphereMultiVCenters\", \"ValidatingAdmissionPolicy\"}, Disabled:[]v1.FeatureGateName{\"AWSClusterHostedDNS\", \"AutomatedEtcdBackup\", \"BootcNodeManagement\", \"ClusterAPIInstall\", \"ClusterAPIInstallIBMCloud\", \"ClusterMonitoringConfig\", \"ClusterVersionOperatorConfiguration\", \"DNSNameResolver\", \"DualReplica\", \"DyanmicServiceEndpointIBMCloud\", \"DynamicResourceAllocation\", \"EtcdBackendQuota\", \"EventedPLEG\", \"Example\", \"Example2\", \"ExternalOIDCWithUIDAndExtraClaimMappings\", \"GCPClusterHostedDNS\", \"GCPCustomAPIEndpoints\", \"HighlyAvailableArbiter\", \"ImageStreamImportMode\", \"IngressControllerDynamicConfigurationManager\", \"InsightsConfig\", \"InsightsConfigAPI\", \"InsightsOnDemandDataGather\", \"InsightsRuntimeExtractor\", \"KMSEncryptionProvider\", \"MachineAPIMigration\", \"MachineAPIOperatorDisableMachineHealthCheckController\", \"MachineAPIProviderOpenStack\", \"MachineConfigNodes\", \"MaxUnavailableStatefulSet\", \"MinimumKubeletVersion\", \"MixedCPUsAllocation\", \"MultiArchInstallAzure\", \"NewOLM\", \"NewOLMCatalogdAPIV1Metas\", \"NewOLMOwnSingleNamespace\", \"NewOLMPreflightPermissionChecks\", \"NodeSwap\", \"NutanixMultiSubnets\", \"OVNObservability\", \"OpenShiftPodSecurityAdmission\", \"PinnedImages\", \"PlatformOperators\", \"ProcMountType\", \"RouteAdvertisements\", \"SELinuxChangePolicy\", \"SELinuxMount\", \"ShortCertRotation\", \"SignatureStores\", \"SigstoreImageVerification\", \"SigstoreImageVerificationPKI\", \"TranslateStreamCloseWebsocketRequests\", \"UpgradeStatus\", \"UserNamespacesPodSecurityStandards\", \"UserNamespacesSupport\", \"VSphereConfigurableMaxAllowedBlockVolumesPerNode\", \"VSphereHostVMGroupZonal\", \"VSphereMultiDisk\", \"VSphereMultiNetworks\", \"VolumeAttributesClass\", \"VolumeGroupSnapshot\"}},Source:EventSource{Component:openshift-insights-operator,Host:,},FirstTimestamp:2026-03-10 19:28:14.82305683 +0000 UTC m=+0.876408952,LastTimestamp:2026-03-10 19:28:14.82305683 +0000 UTC m=+0.876408952,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-insights-operator,ReportingInstance:,}"
W0310 19:28:14.829951 1 configmapobserver.go:64] Cannot get the configuration config map: configmaps "insights-config" not found. Default configuration is used.
I0310 19:28:14.829977 1 secretconfigobserver.go:216] Legacy configuration set: enabled=false endpoint=https://console.redhat.com/api/ingress/v1/upload conditional_gatherer_endpoint=https://console.redhat.com/api/gathering/v2/%s/gathering_rules interval=2h0m0s token=false reportEndpoint=https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/reports initialPollingDelay=1m0s minRetryTime=30s pollingTimeout=50m0s processingStatusEndpoint=https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/request/%s/status
I0310 19:28:14.830076 1 base_controller.go:76] Waiting for caches to sync for ConfigController
I0310 19:28:14.836361 1 secretconfigobserver.go:249] Found cloud.openshift.com token
I0310 19:28:14.836382 1 secretconfigobserver.go:204] Legacy configuration updated: enabled=true endpoint=https://console.redhat.com/api/ingress/v1/upload conditional_gatherer_endpoint=https://console.redhat.com/api/gathering/v2/%s/gathering_rules interval=2h0m0s token=true reportEndpoint=https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/reports initialPollingDelay=1m0s minRetryTime=30s pollingTimeout=50m0s processingStatusEndpoint=https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/request/%s/status
I0310 19:28:14.843995 1 secretconfigobserver.go:119] support secret does not exist
I0310 19:28:14.849174 1 secretconfigobserver.go:249] Found cloud.openshift.com token
I0310 19:28:14.854009 1 secretconfigobserver.go:119] support secret does not exist
I0310 19:28:14.856393 1 recorder.go:156] Pruning old reports every 7h41m37s, max age is 288h0m0s
I0310 19:28:14.861973 1 controllerstatus.go:80] name=insightsuploader healthy=true reason= message=
I0310 19:28:14.861989 1 periodic.go:212] Running clusterconfig gatherer
I0310 19:28:14.862002 1 controllerstatus.go:80] name=insightsreport healthy=true reason= message=
I0310 19:28:14.862008 1 insightsreport.go:296] Starting report retriever
I0310 19:28:14.862012 1 insightsreport.go:298] Insights analysis reports will be downloaded from the https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/reports endpoint with a delay of 1m0s
I0310 19:28:14.862035 1 tasks_processing.go:45] number of workers: 32
I0310 19:28:14.861990 1 insightsuploader.go:86] Reporting status periodically to https://console.redhat.com/api/ingress/v1/upload every 2h0m0s, starting in 1m30s
I0310 19:28:14.862066 1 tasks_processing.go:69] worker 31 listening for tasks.
I0310 19:28:14.862074 1 tasks_processing.go:71] worker 31 working on openstack_version task.
I0310 19:28:14.862081 1 tasks_processing.go:69] worker 1 listening for tasks.
I0310 19:28:14.862085 1 tasks_processing.go:69] worker 23 listening for tasks.
I0310 19:28:14.862091 1 tasks_processing.go:69] worker 2 listening for tasks.
I0310 19:28:14.862093 1 tasks_processing.go:69] worker 15 listening for tasks.
I0310 19:28:14.862099 1 tasks_processing.go:69] worker 16 listening for tasks.
I0310 19:28:14.862099 1 tasks_processing.go:69] worker 9 listening for tasks.
I0310 19:28:14.862099 1 tasks_processing.go:69] worker 8 listening for tasks.
I0310 19:28:14.862106 1 tasks_processing.go:69] worker 10 listening for tasks.
I0310 19:28:14.862107 1 tasks_processing.go:69] worker 6 listening for tasks.
I0310 19:28:14.862112 1 tasks_processing.go:69] worker 17 listening for tasks.
I0310 19:28:14.862115 1 tasks_processing.go:69] worker 7 listening for tasks.
I0310 19:28:14.862120 1 tasks_processing.go:69] worker 18 listening for tasks.
I0310 19:28:14.862121 1 tasks_processing.go:69] worker 11 listening for tasks.
I0310 19:28:14.862123 1 tasks_processing.go:69] worker 5 listening for tasks.
I0310 19:28:14.862129 1 tasks_processing.go:69] worker 21 listening for tasks.
I0310 19:28:14.862125 1 tasks_processing.go:69] worker 20 listening for tasks.
I0310 19:28:14.862132 1 tasks_processing.go:69] worker 19 listening for tasks.
I0310 19:28:14.862131 1 tasks_processing.go:69] worker 12 listening for tasks.
I0310 19:28:14.862137 1 tasks_processing.go:69] worker 13 listening for tasks.
I0310 19:28:14.862142 1 tasks_processing.go:69] worker 4 listening for tasks.
I0310 19:28:14.862160 1 tasks_processing.go:69] worker 28 listening for tasks.
I0310 19:28:14.862138 1 tasks_processing.go:69] worker 22 listening for tasks.
I0310 19:28:14.862068 1 tasks_processing.go:69] worker 14 listening for tasks.
I0310 19:28:14.862170 1 tasks_processing.go:69] worker 26 listening for tasks.
I0310 19:28:14.862170 1 tasks_processing.go:69] worker 24 listening for tasks.
I0310 19:28:14.862171 1 tasks_processing.go:69] worker 29 listening for tasks.
I0310 19:28:14.862178 1 tasks_processing.go:69] worker 27 listening for tasks.
I0310 19:28:14.862179 1 tasks_processing.go:71] worker 24 working on machine_healthchecks task.
I0310 19:28:14.862177 1 tasks_processing.go:71] worker 1 working on tsdb_status task.
I0310 19:28:14.862186 1 tasks_processing.go:71] worker 27 working on image_pruners task.
I0310 19:28:14.862191 1 tasks_processing.go:71] worker 15 working on machine_config_pools task.
I0310 19:28:14.862195 1 tasks_processing.go:71] worker 21 working on version task.
W0310 19:28:14.862216 1 gather_prometheus_tsdb_status.go:38] Unable to load metrics client, tsdb status cannot be collected: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory
I0310 19:28:14.862235 1 tasks_processing.go:71] worker 1 working on feature_gates task.
I0310 19:28:14.862274 1 tasks_processing.go:71] worker 20 working on ceph_cluster task.
I0310 19:28:14.862074 1 tasks_processing.go:69] worker 0 listening for tasks.
I0310 19:28:14.862173 1 tasks_processing.go:71] worker 14 working on sap_config task.
I0310 19:28:14.862163 1 tasks_processing.go:69] worker 25 listening for tasks.
I0310 19:28:14.862181 1 tasks_processing.go:71] worker 29 working on machine_sets task.
I0310 19:28:14.862470 1 tasks_processing.go:71] worker 19 working on validating_webhook_configurations task.
I0310 19:28:14.862525 1 tasks_processing.go:71] worker 16 working on overlapping_namespace_uids task.
I0310 19:28:14.862164 1 tasks_processing.go:69] worker 30 listening for tasks.
I0310 19:28:14.862182 1 tasks_processing.go:71] worker 18 working on image task.
I0310 19:28:14.862179 1 tasks_processing.go:71] worker 7 working on pdbs task.
I0310 19:28:14.862731 1 tasks_processing.go:71] worker 26 working on clusterroles task.
I0310 19:28:14.862793 1 tasks_processing.go:69] worker 3 listening for tasks.
I0310 19:28:14.863133 1 tasks_processing.go:71] worker 30 working on storage_classes task.
I0310 19:28:14.863218 1 tasks_processing.go:71] worker 3 working on machines task.
I0310 19:28:14.863297 1 tasks_processing.go:71] worker 6 working on container_images task.
I0310 19:28:14.862182 1 tasks_processing.go:71] worker 23 working on active_alerts task.
I0310 19:28:14.863656 1 tasks_processing.go:71] worker 17 working on image_registries task.
W0310 19:28:14.863711 1 gather_active_alerts.go:54] Unable to load alerts client, no alerts will be collected: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory
I0310 19:28:14.862191 1 tasks_processing.go:71] worker 5 working on openshift_machine_api_events task.
I0310 19:28:14.863849 1 tasks_processing.go:71] worker 8 working on cost_management_metrics_configs task.
I0310 19:28:14.863887 1 tasks_processing.go:71] worker 10 working on service_accounts task.
I0310 19:28:14.862187 1 tasks_processing.go:71] worker 11 working on install_plans task.
I0310 19:28:14.862172 1 tasks_processing.go:71] worker 22 working on oauths task.
I0310 19:28:14.862968 1 tasks_processing.go:71] worker 12 working on metrics task.
I0310 19:28:14.863043 1 tasks_processing.go:71] worker 13 working on nodes task.
I0310 19:28:14.864101 1 tasks_processing.go:71] worker 25 working on certificate_signing_requests task.
I0310 19:28:14.862187 1 tasks_processing.go:71] worker 2 working on ingress_certificates task.
W0310 19:28:14.864896 1 gather_most_recent_metrics.go:64] Unable to load metrics client, no metrics will be collected: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory
I0310 19:28:14.863118 1 tasks_processing.go:71] worker 4 working on storage_cluster task.
I0310 19:28:14.863122 1 tasks_processing.go:71] worker 28 working on proxies task.
I0310 19:28:14.863981 1 tasks_processing.go:71] worker 0 working on container_runtime_configs task.
I0310 19:28:14.863121 1 gather.go:177] gatherer "clusterconfig" function "tsdb_status" took 44.132µs to process 0 records
I0310 19:28:14.865564 1 gather.go:177] gatherer "clusterconfig" function "active_alerts" took 192.573µs to process 0 records
I0310 19:28:14.865577 1 gather.go:177] gatherer "clusterconfig" function "metrics" took 69.109µs to process 0 records
I0310 19:28:14.865587 1 tasks_processing.go:71] worker 12 working on openstack_controlplanes task.
I0310 19:28:14.863992 1 tasks_processing.go:71] worker 9 working on dvo_metrics task.
I0310 19:28:14.865903 1 tasks_processing.go:71] worker 23 working on nodenetworkstates task.
I0310 19:28:14.866848 1 tasks_processing.go:71] worker 31 working on sap_datahubs task.
I0310 19:28:14.866856 1 gather.go:177] gatherer "clusterconfig" function "openstack_version" took 4.764173ms to process 0 records
I0310 19:28:14.866867 1 gather.go:177] gatherer "clusterconfig" function "machine_config_pools" took 4.651568ms to process 0 records
I0310 19:28:14.866876 1 gather.go:177] gatherer "clusterconfig" function "sap_config" took 4.46632ms to process 0 records
I0310 19:28:14.866882 1 gather.go:177] gatherer "clusterconfig" function "ceph_cluster" took 4.581035ms to process 0 records
E0310 19:28:14.866922 1 gather.go:140] gatherer "clusterconfig" function "machine_healthchecks" failed with the error: machinehealthchecks.machine.openshift.io is forbidden: User "system:serviceaccount:openshift-insights:gather" cannot list resource "machinehealthchecks" in API group "machine.openshift.io" at the cluster scope
I0310 19:28:14.866940 1 gather.go:177] gatherer "clusterconfig" function "machine_healthchecks" took 4.727743ms to process 0 records
I0310 19:28:14.866949 1 tasks_processing.go:71] worker 15 working on machine_autoscalers task.
I0310 19:28:14.866949 1 tasks_processing.go:71] worker 24 working on lokistack task.
I0310 19:28:14.866994 1 tasks_processing.go:71] worker 14 working on silenced_alerts task.
I0310 19:28:14.867004 1 tasks_processing.go:71] worker 20 working on openshift_logging task.
W0310 19:28:14.867025 1 gather_silenced_alerts.go:38] Unable to load alerts client, no alerts will be collected: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory
I0310 19:28:14.867038 1 tasks_processing.go:71] worker 14 working on number_of_pods_and_netnamespaces_with_sdn_annotations task.
I0310 19:28:14.867062 1 gather.go:177] gatherer "clusterconfig" function "silenced_alerts" took 27.197µs to process 0 records
I0310 19:28:14.868779 1 tasks_processing.go:71] worker 29 working on support_secret task.
I0310 19:28:14.868789 1 gather.go:177] gatherer "clusterconfig" function "machine_sets" took 6.351106ms to process 0 records
I0310 19:28:14.869304 1 tasks_processing.go:71] worker 3 working on nodenetworkconfigurationpolicies task.
E0310 19:28:14.869312 1 gather.go:140] gatherer "clusterconfig" function "machines" failed with the error: machines.machine.openshift.io is forbidden: User "system:serviceaccount:openshift-insights:gather" cannot list resource "machines" in API group "machine.openshift.io" at the cluster scope
I0310 19:28:14.869321 1 gather.go:177] gatherer "clusterconfig" function "machines" took 6.028544ms to process 0 records
I0310 19:28:14.872902 1 tasks_processing.go:71] worker 8 working on openstack_dataplanedeployments task.
I0310 19:28:14.872949 1 gather.go:177] gatherer "clusterconfig" function "cost_management_metrics_configs" took 9.030771ms to process 0 records
I0310 19:28:14.872964 1 gather.go:177] gatherer "clusterconfig" function "sap_datahubs" took 6.079959ms to process 0 records
I0310 19:28:14.872974 1 tasks_processing.go:71] worker 31 working on monitoring_persistent_volumes task.
I0310 19:28:14.873287 1 tasks_processing.go:71] worker 23 working on jaegers task.
I0310 19:28:14.873292 1 gather.go:177] gatherer "clusterconfig" function "nodenetworkstates" took 7.329796ms to process 0 records
I0310 19:28:14.873305 1 gather.go:177] gatherer "clusterconfig" function "container_runtime_configs" took 7.742294ms to process 0 records
I0310 19:28:14.873313 1 gather.go:177] gatherer "clusterconfig" function "openshift_logging" took 6.287244ms to process 0 records
I0310 19:28:14.873318 1 tasks_processing.go:71] worker 0 working on config_maps task.
I0310 19:28:14.873387 1 tasks_processing.go:71] worker 20 working on mutating_webhook_configurations task.
I0310 19:28:14.873390 1 tasks_processing.go:71] worker 4 working on machine_configs task.
I0310 19:28:14.873320 1 gather.go:177] gatherer "clusterconfig" function "storage_cluster" took 8.302358ms to process 0 records
I0310 19:28:14.875689 1 tasks_processing.go:71] worker 24 working on ingress task.
I0310 19:28:14.875699 1 gather.go:177] gatherer "clusterconfig" function "lokistack" took 8.721761ms to process 0 records
I0310 19:28:14.875708 1 gather.go:177] gatherer "clusterconfig" function "nodenetworkconfigurationpolicies" took 6.378189ms to process 0 records
I0310 19:28:14.875718 1 tasks_processing.go:71] worker 3 working on node_logs task.
I0310 19:28:14.876266 1 tasks_processing.go:71] worker 12 working on olm_operators task.
I0310 19:28:14.876308 1 gather.go:177] gatherer "clusterconfig" function "openstack_controlplanes" took 10.669082ms to process 0 records
I0310 19:28:14.876324 1 gather.go:177] gatherer "clusterconfig" function "machine_autoscalers" took 9.341871ms to process 0 records
I0310 19:28:14.876330 1 tasks_processing.go:71] worker 15 working on networks task.
I0310 19:28:14.876500 1 controller.go:119] Initializing last reported time to 0001-01-01T00:00:00Z
I0310 19:28:14.876555 1 controller.go:203] Source periodic-clusterconfig *controllerstatus.Simple is not ready
I0310 19:28:14.876564 1 controller.go:203] Source periodic-conditional *controllerstatus.Simple is not ready
I0310 19:28:14.876569 1 controller.go:203] Source periodic-workloads *controllerstatus.Simple is not ready
I0310 19:28:14.876587 1 controller.go:458] The operator is still being initialized
I0310 19:28:14.876594 1 controller.go:481] The operator is healthy
I0310 19:28:14.880986 1 tasks_processing.go:71] worker 18 working on cluster_apiserver task.
I0310 19:28:14.881168 1 recorder.go:70] Recording config/image with fingerprint=1430ab4eb080389f16bac8ad0c91aa0fc00fe561196d5d906dc0fc49b619ae2b
I0310 19:28:14.881184 1 gather.go:177] gatherer "clusterconfig" function "image" took 18.318467ms to process 1 records
I0310 19:28:14.881210 1 tasks_processing.go:71] worker 1 working on authentication task.
I0310 19:28:14.881331 1 recorder.go:70] Recording config/featuregate with fingerprint=72f0d57fe44ff77193fffea56dd4cc130b31052a016583b49b13900b73db76cf
I0310 19:28:14.881340 1 gather.go:177] gatherer "clusterconfig" function "feature_gates" took 18.963848ms to process 1 records
I0310 19:28:14.881454 1 tasks_processing.go:71] worker 27 working on sap_pods task.
I0310 19:28:14.881806 1 recorder.go:70] Recording config/clusteroperator/imageregistry.operator.openshift.io/imagepruner/cluster with fingerprint=3a9c7d1b110dd9fb2c866e75a42fd74964eb4119d45131f5bfb44595bd45a27d
I0310 19:28:14.881818 1 gather.go:177] gatherer "clusterconfig" function "image_pruners" took 19.252894ms to process 1 records
I0310 19:28:14.888113 1 tasks_processing.go:71] worker 29 working on operators task.
E0310 19:28:14.888120 1 gather.go:140] gatherer "clusterconfig" function "support_secret" failed with the error: secrets "support" not found
I0310 19:28:14.888128 1 gather.go:177] gatherer "clusterconfig" function "support_secret" took 19.318225ms to process 0 records
I0310 19:28:14.889286 1 tasks_processing.go:71] worker 23 working on schedulers task.
I0310 19:28:14.889290 1 gather.go:177] gatherer "clusterconfig" function "jaegers" took 15.982383ms to process 0 records
I0310 19:28:14.889499 1 tasks_processing.go:71] worker 5 working on pod_network_connectivity_checks task.
I0310 19:28:14.889559 1 gather.go:177] gatherer "clusterconfig" function "openshift_machine_api_events" took 25.725607ms to process 0 records
I0310 19:28:14.889884 1 tasks_processing.go:71] worker 28 working on crds task.
I0310 19:28:14.889962 1 recorder.go:70] Recording config/proxy with fingerprint=d949174cf301c7a2fea6e8f892127730a450588154be212c627b40b098a55b83
I0310 19:28:14.889980 1 gather.go:177] gatherer "clusterconfig" function "proxies" took 24.688129ms to process 1 records
I0310 19:28:14.890108 1 tasks_processing.go:71] worker 22 working on operators_pods_and_events task.
I0310 19:28:14.890284 1 recorder.go:70] Recording config/oauth with fingerprint=92deb5ff46805081f7730a1d839837ab13e7bf25c275c0bd5a60648bfe3d2032
I0310 19:28:14.890300 1 gather.go:177] gatherer "clusterconfig" function "oauths" took 25.707277ms to process 1 records
I0310 19:28:14.890310 1 gather.go:177] gatherer "clusterconfig" function "monitoring_persistent_volumes" took 17.057583ms to process 0 records
I0310 19:28:14.890316 1 gather.go:177] gatherer "clusterconfig" function "openstack_dataplanedeployments" took 17.199791ms to process 0 records
I0310 19:28:14.890338 1 tasks_processing.go:71] worker 31 working on infrastructures task.
I0310 19:28:14.890374 1 tasks_processing.go:71] worker 8 working on openstack_dataplanenodesets task.
I0310 19:28:14.890414 1 tasks_processing.go:71] worker 17 working on aggregated_monitoring_cr_names task.
I0310 19:28:14.890850 1 recorder.go:70] Recording config/clusteroperator/imageregistry.operator.openshift.io/config/cluster with fingerprint=995c29547f89e3fa506f92e4ac2d58069fe7447625167f6ce44983328740f609
I0310 19:28:14.890863 1 gather.go:177] gatherer "clusterconfig" function "image_registries" took 26.588045ms to process 1 records
I0310 19:28:14.890998 1 tasks_processing.go:74] worker 3 stopped.
I0310 19:28:14.891014 1 gather.go:177] gatherer "clusterconfig" function "node_logs" took 15.270809ms to process 0 records
I0310 19:28:14.891140 1 tasks_processing.go:74] worker 30 stopped.
I0310 19:28:14.891317 1 recorder.go:70] Recording config/storage/storageclasses/gp2-csi with fingerprint=9003bdf5e3aae7e093e77334fde596313007fbc5f4bf4f80c2d5e70bb0a9c811
I0310 19:28:14.891353 1 recorder.go:70] Recording config/storage/storageclasses/gp3-csi with fingerprint=c2fa15a9b069226dd18be5b2995d5d4ff81235dda4d73806c46cdf8941d2d372
I0310 19:28:14.891366 1 gather.go:177] gatherer "clusterconfig" function "storage_classes" took 27.914592ms to process 2 records
I0310 19:28:14.891508 1 recorder.go:70] Recording config/pdbs/openshift-image-registry/image-registry with fingerprint=e11c17b46134173954bed2d33b7f2c560fe910435f8cfab2b551ed479ba4fc90
I0310 19:28:14.891523 1 tasks_processing.go:74] worker 7 stopped.
I0310 19:28:14.891548 1 recorder.go:70] Recording config/pdbs/openshift-ingress/router-default with fingerprint=ac8d67bf646db2889eeb8657e3fe7198d89a4daeee353e16f98762f212f92261
I0310 19:28:14.891577 1 recorder.go:70] Recording config/pdbs/openshift-operator-lifecycle-manager/packageserver-pdb with fingerprint=1cb0346fbcabee0d1d4705025ed94680e14d3dbfa43b2e3554c2c63c3ccb9cf1
I0310 19:28:14.891587 1 gather.go:177] gatherer "clusterconfig" function "pdbs" took 28.490708ms to process 3 records
I0310 19:28:14.891671 1 tasks_processing.go:74] worker 15 stopped.
I0310 19:28:14.891742 1 recorder.go:70] Recording config/network with fingerprint=d9f4fc3e0d6b4cfabde8ece7bf4355e88bf84f087b3b2b7580dfd7a688b52293
I0310 19:28:14.891754 1 gather.go:177] gatherer "clusterconfig" function "networks" took 15.036873ms to process 1 records
I0310 19:28:14.891765 1 gather.go:177] gatherer "clusterconfig" function "olm_operators" took 15.237733ms to process 0 records
I0310 19:28:14.891839 1 tasks_processing.go:74] worker 12 stopped.
I0310 19:28:14.891896 1 recorder.go:70] Recording config/ingress with fingerprint=97ba00c79fde0d54d0760ed795c43abda0242278c8c6d494e51b1813a31b8997
I0310 19:28:14.891906 1 tasks_processing.go:74] worker 24 stopped.
I0310 19:28:14.891907 1 gather.go:177] gatherer "clusterconfig" function "ingress" took 15.995739ms to process 1 records
I0310 19:28:14.892392 1 tasks_processing.go:74] worker 27 stopped.
I0310 19:28:14.892410 1 gather.go:177] gatherer "clusterconfig" function "sap_pods" took 10.925143ms to process 0 records
I0310 19:28:14.892529 1 tasks_processing.go:74] worker 20 stopped.
I0310 19:28:14.892583 1 recorder.go:70] Recording config/mutatingwebhookconfigurations/aws-pod-identity with fingerprint=7acc46632ad1568b6f002d6b484262ed6a7dbb7228f7d783f319b0817588df06
I0310 19:28:14.892907 1 recorder.go:70] Recording config/mutatingwebhookconfigurations/sre-podimagespec-mutation with fingerprint=60fda0c134c248de1d5cf3dbe543c1cf9f3af3f7290ce0bc1b62cf007d19cd1c
I0310 19:28:14.893015 1 recorder.go:70] Recording config/mutatingwebhookconfigurations/sre-service-mutation with fingerprint=31bbd1f09189e6d9bbd67f6bc30b5a3cdeab7f719039c0e95c927d82aaab0279
I0310 19:28:14.893091 1 gather.go:177] gatherer "clusterconfig" function "mutating_webhook_configurations" took 18.995277ms to process 3 records
I0310 19:28:14.898241 1 tasks_processing.go:74] worker 19 stopped.
I0310 19:28:14.898405 1 recorder.go:70] Recording config/validatingwebhookconfigurations/multus.openshift.io with fingerprint=e04d170ca1c756d4ac0acf031e75115e87f8aac2171ed5cbc87b3d27f292321d
I0310 19:28:14.898525 1 recorder.go:70] Recording config/validatingwebhookconfigurations/network-node-identity.openshift.io with fingerprint=36efb7d54fd503a0e46ace425bd1f6c76ca0e2a2a81084883b125cbe6bc877c7
W0310 19:28:14.898556 1 gather_dvo_metrics.go:210] Failed to read the DVO metrics. Trying again.
I0310 19:28:14.898561 1 recorder.go:70] Recording config/validatingwebhookconfigurations/performance-addon-operator with fingerprint=f28c0b9b07deebcc8aa2613e0618e962b371ae8a46dbef54f9464942b3ca136e
I0310 19:28:14.898596 1 recorder.go:70] Recording config/validatingwebhookconfigurations/sre-clusterrolebindings-validation with fingerprint=9c925dfd3c07c5ba6b770b2945b4bcd42d2ea0c109e7bf17fd9d58dbcd1ce20b
I0310 19:28:14.898632 1 recorder.go:70] Recording config/validatingwebhookconfigurations/sre-clusterroles-validation with fingerprint=1632d482741b9f62257570167c340f1a1b2b1d3f19dc5b7536de29bcf7a1be0f
I0310 19:28:14.898669 1 recorder.go:70] Recording config/validatingwebhookconfigurations/sre-ingress-config-validation with fingerprint=d34463f1c3463182ce641e56c75f712299e4df5a52477b1baf628a1bcff3f6bf
I0310 19:28:14.898738 1 recorder.go:70] Recording config/validatingwebhookconfigurations/sre-regular-user-validation with fingerprint=97b4bf3c5b64ba4d1f8203924f55d64d391d561795f235db841835c5cbe04350
I0310 19:28:14.898778 1 recorder.go:70] Recording config/validatingwebhookconfigurations/sre-scc-validation with fingerprint=ca5b87eec8c55e22fb2774f6c4a2357c72e4ea6ef2ef4271d4b456e9a5225d29
I0310 19:28:14.898819 1 recorder.go:70] Recording config/validatingwebhookconfigurations/sre-serviceaccount-validation with fingerprint=b250dff3994b9dc2dcf5c780bcdcaf4c789c384c13aa604f0338d018cb1ccf8a
I0310 19:28:14.898863 1 recorder.go:70] Recording config/validatingwebhookconfigurations/sre-techpreviewnoupgrade-validation with fingerprint=88cca81487fc79aa446d501264f765cccf525cb14b068532afc0d311b48d0310
I0310 19:28:14.898909 1 gather.go:177] gatherer "clusterconfig" function "validating_webhook_configurations" took 35.74449ms to process 10 records
I0310 19:28:14.899063 1 tasks_processing.go:74] worker 13 stopped.
I0310 19:28:14.899493 1 recorder.go:70] Recording config/node/ip-10-0-0-193.ec2.internal with fingerprint=a082972696e0fb6bdafe6f9cadaa86405247d972deae945a67a2ff8508b4bc14
I0310 19:28:14.899662 1 recorder.go:70] Recording config/node/ip-10-0-1-9.ec2.internal with fingerprint=92d1de84b7afd265ed7ea3d238ee7128a9e4c2ddaf2a42bad845a5591c640055
I0310 19:28:14.899785 1 recorder.go:70] Recording config/node/ip-10-0-2-240.ec2.internal with fingerprint=170be0c209317d7a0dcf00e56b1e7d0bca71fa2f84d163f22c5941d1ff00a55a
I0310 19:28:14.899832 1 gather.go:177] gatherer "clusterconfig" function "nodes" took 34.771493ms to process 3 records
I0310 19:28:14.899921 1 recorder.go:70] Recording config/namespaces_with_overlapping_uids with fingerprint=4f53cda18c2baa0c0354bb5f9a3ecbe5ed12ab4d8e11ba873c2f11161202b945
I0310 19:28:14.899970 1 gather.go:177] gatherer "clusterconfig" function "overlapping_namespace_uids" took 36.869544ms to process 1 records
I0310 19:28:14.899971 1 tasks_processing.go:74] worker 16 stopped.
I0310 19:28:14.900224 1 recorder.go:70] Recording config/apiserver with fingerprint=9fdd9f9b5ffd4591098204d4f83d9982d141e8ca67d1646b5cb7f549c6c3c9bf
I0310 19:28:14.900271 1 gather.go:177] gatherer "clusterconfig" function "cluster_apiserver" took 18.466586ms to process 1 records
W0310 19:28:14.900314 1 operator.go:287] started
I0310 19:28:14.900349 1 sca.go:136] Pulling SCA certificates from https://api.openshift.com/api/accounts_mgmt/v1/entitlement_certificates. Next check is in 8h0m0s
I0310 19:28:14.900525 1 cluster_transfer.go:83] checking the availability of cluster transfer. Next check is in 12h0m0s
I0310 19:28:14.900579 1 base_controller.go:76] Waiting for caches to sync for LoggingSyncer
I0310 19:28:14.900359 1 tasks_processing.go:74] worker 1 stopped.
I0310 19:28:14.900364 1 tasks_processing.go:74] worker 18 stopped.
I0310 19:28:14.900853 1 recorder.go:70] Recording config/authentication with fingerprint=b1db5c0a95b57d30ebe8e0c82f6dbd37b4406e7f053565d2246cf4127d0867c3
I0310 19:28:14.900870 1 gather.go:177] gatherer "clusterconfig" function "authentication" took 18.272767ms to process 1 records
E0310 19:28:14.900883 1 gather.go:140] gatherer "clusterconfig" function "machine_configs" failed with the error: getting MachineConfigPools failed: the server could not find the requested resource (get machineconfigpools.machineconfiguration.openshift.io)
I0310 19:28:14.900915 1 recorder.go:70] Recording aggregated/unused_machine_configs_count with fingerprint=4bfc9fa984e5dfcd45848faaf05269de7619bf42edf9f781751af5ee05c1a499
I0310 19:28:14.900924 1 gather.go:177] gatherer "clusterconfig" function "machine_configs" took 26.284426ms to process 1 records
I0310 19:28:14.900929 1 gather.go:177] gatherer "clusterconfig" function "openstack_dataplanenodesets" took 9.353447ms to process 0 records
I0310 19:28:14.900952 1 tasks_processing.go:74] worker 4 stopped.
I0310 19:28:14.900958 1 tasks_processing.go:74] worker 8 stopped.
I0310 19:28:14.900994 1 recorder.go:70] Recording config/schedulers/cluster with fingerprint=8a486d9218921da82e63ed2be6641575d640367490ab6613e2b37c254050e86d
I0310 19:28:14.901003 1 gather.go:177] gatherer "clusterconfig" function "schedulers" took 10.674295ms to process 1 records
E0310 19:28:14.901008 1 gather.go:140] gatherer "clusterconfig" function "pod_network_connectivity_checks" failed with the error: the server could not find the requested resource (get podnetworkconnectivitychecks.controlplane.operator.openshift.io)
I0310 19:28:14.901015 1 gather.go:177] gatherer "clusterconfig" function "pod_network_connectivity_checks" took 10.873498ms to process 0 records
I0310 19:28:14.901023 1 tasks_processing.go:74] worker 5 stopped.
I0310 19:28:14.901027 1 tasks_processing.go:74] worker 23 stopped.
I0310 19:28:14.901195 1 tasks_processing.go:74] worker 25 stopped.
I0310 19:28:14.901210 1 gather.go:177] gatherer "clusterconfig" function "certificate_signing_requests" took 37.021166ms to process 0 records
I0310 19:28:14.904872 1 tasks_processing.go:74] worker 31 stopped.
I0310 19:28:14.906696 1 recorder.go:70] Recording config/infrastructure with fingerprint=fd13e449ce62971763cefe0264a1d79d264891ac3daf85f9765922d339c51e20
I0310 19:28:14.906771 1 gather.go:177] gatherer "clusterconfig" function "infrastructures" took 14.518231ms to process 1 records
I0310 19:28:14.906973 1 tasks_processing.go:74] worker 26 stopped.
I0310 19:28:14.907210 1 recorder.go:70] Recording cluster-scoped-resources/rbac.authorization.k8s.io/clusterroles/admin with fingerprint=6b372fb33653dd1811b22c6bf21eb12e870a4a981d52d55f7bb6e627ae2fed96
I0310 19:28:14.907421 1 recorder.go:70] Recording cluster-scoped-resources/rbac.authorization.k8s.io/clusterroles/edit with fingerprint=60cc6b478087d316396453fb01837620f5a22926402b6e1a2ce366617cb8bc47
I0310 19:28:14.907463 1 gather.go:177] gatherer "clusterconfig" function "clusterroles" took 42.375012ms to process 2 records
I0310 19:28:14.913125 1 tasks_processing.go:74] worker 6 stopped.
I0310 19:28:14.914557 1 recorder.go:70] Recording config/pod/openshift-ovn-kubernetes/ovnkube-node-r6mcz with fingerprint=d87ea65ee810e550a1dc3bf361e1446930efaa96ffed1110ca918849e0029653
I0310 19:28:14.914613 1 recorder.go:70] Recording config/running_containers with fingerprint=efddc068c61ed38726cb8db65f6daade7c5ff8e126f1df7ce83e178e052c5956
I0310 19:28:14.914623 1 gather.go:177] gatherer "clusterconfig" function "container_images" took 49.771529ms to process 2 records
I0310 19:28:14.915075 1 controller.go:203] Source periodic-clusterconfig *controllerstatus.Simple is not ready
I0310 19:28:14.915092 1 controller.go:203] Source periodic-conditional *controllerstatus.Simple is not ready
I0310 19:28:14.915098 1 controller.go:203] Source periodic-workloads *controllerstatus.Simple is not ready
I0310 19:28:14.915104 1 controller.go:203] Source scaController *sca.Controller is not ready
I0310 19:28:14.915111 1 controller.go:203] Source clusterTransferController *clustertransfer.Controller is not ready
I0310 19:28:14.915116 1 tasks_processing.go:74] worker 28 stopped.
I0310 19:28:14.915131 1 controller.go:458] The operator is still being initialized
I0310 19:28:14.915139 1 controller.go:481] The operator is healthy
I0310 19:28:14.915674 1 recorder.go:70] Recording config/crd/volumesnapshots.snapshot.storage.k8s.io with fingerprint=c9d8d47c9e52299a42caec3162636fdf2b537c06bf393f82bd9c3b45bfb6bd09
I0310 19:28:14.915873 1 recorder.go:70] Recording config/crd/volumesnapshotcontents.snapshot.storage.k8s.io with fingerprint=a46fa29e195fed5c2f9928c2c58153485ed0e6b18863c4b75ffd20507d5a3c4e
I0310 19:28:14.915883 1 gather.go:177] gatherer "clusterconfig" function "crds" took 25.203591ms to process 2 records
I0310 19:28:14.921262 1 tasks_processing.go:74] worker 14 stopped.
I0310 19:28:14.921296 1 gather.go:177] gatherer "clusterconfig" function "number_of_pods_and_netnamespaces_with_sdn_annotations" took 54.214768ms to process 0 records
I0310 19:28:14.923213 1 prometheus_rules.go:88] Prometheus rules successfully created
I0310 19:28:14.925246 1 shared_informer.go:320] Caches are synced for RequestHeaderAuthRequestController
I0310 19:28:14.925325 1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I0310 19:28:14.925347 1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0310 19:28:14.925689 1 tasks_processing.go:74] worker 17 stopped.
I0310 19:28:14.925709 1 gather.go:177] gatherer "clusterconfig" function "aggregated_monitoring_cr_names" took 35.258547ms to process 0 records
E0310 19:28:14.930038 1 cluster_transfer.go:95] failed to pull cluster transfer: unable to retrieve cluster transfer data from https://api.openshift.com/api/accounts_mgmt/v1/cluster_transfers/: Get "https://api.openshift.com/api/accounts_mgmt/v1/cluster_transfers/?search=cluster_uuid+is+%27fc164592-9a42-4099-87a7-f60f30de6444%27+and+status+is+%27accepted%27": dial tcp: lookup api.openshift.com on 172.30.0.10:53: read udp 10.128.0.9:39114->172.30.0.10:53: read: connection refused
I0310 19:28:14.930050 1 controllerstatus.go:80] name=clusterTransferController healthy=true reason=Disconnected message=failed to pull cluster transfer: unable to retrieve cluster transfer data from https://api.openshift.com/api/accounts_mgmt/v1/cluster_transfers/: Get "https://api.openshift.com/api/accounts_mgmt/v1/cluster_transfers/?search=cluster_uuid+is+%27fc164592-9a42-4099-87a7-f60f30de6444%27+and+status+is+%27accepted%27": dial tcp: lookup api.openshift.com on 172.30.0.10:53: read udp 10.128.0.9:39114->172.30.0.10:53: read: connection refused
I0310 19:28:14.931072 1 base_controller.go:82] Caches are synced for ConfigController
I0310 19:28:14.931084 1 base_controller.go:119] Starting #1 worker of ConfigController controller ...
I0310 19:28:14.940311 1 configmapobserver.go:84] configmaps "insights-config" not found
I0310 19:28:14.949440 1 tasks_processing.go:74] worker 0 stopped.
E0310 19:28:14.949461 1 gather.go:140] gatherer "clusterconfig" function "config_maps" failed with the error: configmaps "cluster-monitoring-config" not found
E0310 19:28:14.949470 1 gather.go:140] gatherer "clusterconfig" function "config_maps" failed with the error: configmaps "gateway-mode-config" not found
E0310 19:28:14.949474 1 gather.go:140] gatherer "clusterconfig" function "config_maps" failed with the error: configmaps "insights-config" not found
I0310 19:28:14.949485 1 recorder.go:70] Recording config/configmaps/openshift-config/installer-images/images.json with fingerprint=295a2bc7810c628251f1f312c209b5eb4da888326706c635ac0c12aec0969e17
I0310 19:28:14.949512 1 recorder.go:70] Recording config/configmaps/openshift-config/kube-root-ca.crt/ca.crt with fingerprint=d476c7d3f5b104863f08f481b1264dcc68cc272ecefb0ecb709b18a6afab034d
I0310 19:28:14.949519 1 recorder.go:70] Recording config/configmaps/openshift-config/openshift-install/invoker with fingerprint=76b482f683cd3ef9da02debac5b26080a5aeb06ff768ee5c21117514dff29d8a
I0310 19:28:14.949524 1 recorder.go:70] Recording config/configmaps/openshift-config/openshift-install/version with fingerprint=fe7abfa7f7aea852d8bca6b7df9b4d5de32045254ff2774f9bd30f0b6dcb7dc4
I0310 19:28:14.949529 1 recorder.go:70] Recording config/configmaps/openshift-config/openshift-service-ca.crt/service-ca.crt with fingerprint=e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
I0310 19:28:14.949567 1 recorder.go:70] Recording config/configmaps/openshift-config/rosa-brand-logo/rosa-brand-logo.svg with fingerprint=6ed8ca4dd7a8eee7249182bc006e9649ce84d76c551ddfaaa33e55d8c4cc1ed0
I0310 19:28:14.949575 1 recorder.go:70] Recording config/configmaps/kube-system/cluster-config-v1/install-config with fingerprint=ab3811c6b83fd7b8e920094cfa3080d1b4ee3c35ec4c8379437b21d27bd6608d
I0310 19:28:14.949580 1 gather.go:177] gatherer "clusterconfig" function "config_maps" took 76.102052ms to process 7 records
I0310 19:28:14.984839 1 tasks_processing.go:74] worker 21 stopped.
I0310 19:28:14.985367 1 recorder.go:70] Recording config/version with fingerprint=deebde23775550111aa98987ce21500b11ef50d3aeb3c4fc204a3b4706593d95
I0310 19:28:14.985385 1 recorder.go:70] Recording config/id with fingerprint=bf2a29503efa78c70835e626065f513031c0adfca3374385f932be577fbbfa47
I0310 19:28:14.985394 1 gather.go:177] gatherer "clusterconfig" function "version" took 122.632081ms to process 2 records
I0310 19:28:14.997286 1 requests.go:205] Asking for SCA certificate with "{"arch": ["x86_64"]}" payload
I0310 19:28:15.000803 1 base_controller.go:82] Caches are synced for LoggingSyncer
W0310 19:28:15.000811 1 sca.go:161] Failed to pull SCA certs from https://api.openshift.com/api/accounts_mgmt/v1/entitlement_certificates: unable to retrieve SCA certs data from https://api.openshift.com/api/accounts_mgmt/v1/entitlement_certificates: Post "https://api.openshift.com/api/accounts_mgmt/v1/entitlement_certificates": dial tcp: lookup api.openshift.com on 172.30.0.10:53: read udp 10.128.0.9:35789->172.30.0.10:53: read: connection refused
I0310 19:28:15.000816 1 base_controller.go:119] Starting #1 worker of LoggingSyncer controller ...
I0310 19:28:15.000822 1 controllerstatus.go:80] name=scaController healthy=true reason=NonHTTPError message=Failed to pull SCA certs from https://api.openshift.com/api/accounts_mgmt/v1/entitlement_certificates: unable to retrieve SCA certs data from https://api.openshift.com/api/accounts_mgmt/v1/entitlement_certificates: Post "https://api.openshift.com/api/accounts_mgmt/v1/entitlement_certificates": dial tcp: lookup api.openshift.com on 172.30.0.10:53: read udp 10.128.0.9:35789->172.30.0.10:53: read: connection refused
I0310 19:28:15.014855 1 tasks_processing.go:74] worker 2 stopped.
E0310 19:28:15.014875 1 gather.go:140] gatherer "clusterconfig" function "ingress_certificates" failed with the error: failed to get secret 'router-certs-default' in namespace 'openshift-ingress': secrets "router-certs-default" not found
E0310 19:28:15.014881 1 gather.go:140] gatherer "clusterconfig" function "ingress_certificates" failed with the error: failed to get secret '2ov1rmga1lmertbm3075pr3fjmubsbm3-primary-cert-bundle-secret' in namespace 'openshift-ingress-operator': secrets "2ov1rmga1lmertbm3075pr3fjmubsbm3-primary-cert-bundle-secret" not found
I0310 19:28:15.014949 1 recorder.go:70] Recording aggregated/ingress_controllers_certs with fingerprint=bacccc99d68b2b635fb907ccbc82870e0941867e83cfda4b7f4340e13c616370
I0310 19:28:15.014966 1 gather.go:177] gatherer "clusterconfig" function "ingress_certificates" took 150.250232ms to process 1 records
I0310 19:28:15.318517 1 gather_cluster_operator_pods_and_events.go:121] Found 18 pods with 21 containers
I0310 19:28:15.318532 1 gather_cluster_operator_pods_and_events.go:235] Maximum buffer size: 1198372 bytes
I0310 19:28:15.319018 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for dns container dns-default-d44w8 pod in namespace openshift-dns (previous: false).
I0310 19:28:15.544308 1 gather_cluster_operators.go:184] Unable to get operatorpkis.network.operator.openshift.io resource due to: operatorpkis.network.operator.openshift.io "ovn" is forbidden: User "system:serviceaccount:openshift-insights:gather" cannot get resource "operatorpkis" in API group "network.operator.openshift.io" in the namespace "openshift-ovn-kubernetes"
I0310 19:28:15.546002 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for dns-default-d44w8 pod in namespace openshift-dns for failing operator dns (previous: false): "container \"dns\" in pod \"dns-default-d44w8\" is waiting to start: ContainerCreating"
I0310 19:28:15.546019 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"dns\" in pod \"dns-default-d44w8\" is waiting to start: ContainerCreating"
I0310 19:28:15.546029 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for kube-rbac-proxy container dns-default-d44w8 pod in namespace openshift-dns (previous: false).
I0310 19:28:15.725611 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for dns-default-d44w8 pod in namespace openshift-dns for failing operator kube-rbac-proxy (previous: false): "container \"kube-rbac-proxy\" in pod \"dns-default-d44w8\" is waiting to start: ContainerCreating"
I0310 19:28:15.725628 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"kube-rbac-proxy\" in pod \"dns-default-d44w8\" is waiting to start: ContainerCreating"
I0310 19:28:15.725639 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for dns container dns-default-pnwsp pod in namespace openshift-dns (previous: false).
I0310 19:28:15.744433 1 gather_cluster_operators.go:184] Unable to get operatorpkis.network.operator.openshift.io resource due to: operatorpkis.network.operator.openshift.io "signer" is forbidden: User "system:serviceaccount:openshift-insights:gather" cannot get resource "operatorpkis" in API group "network.operator.openshift.io" in the namespace "openshift-ovn-kubernetes"
W0310 19:28:15.895512 1 gather_dvo_metrics.go:210] Failed to read the DVO metrics. Trying again.
I0310 19:28:15.943286 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for dns-default-pnwsp pod in namespace openshift-dns for failing operator dns (previous: false): "container \"dns\" in pod \"dns-default-pnwsp\" is waiting to start: ContainerCreating"
I0310 19:28:15.943302 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"dns\" in pod \"dns-default-pnwsp\" is waiting to start: ContainerCreating"
I0310 19:28:15.943310 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for kube-rbac-proxy container dns-default-pnwsp pod in namespace openshift-dns (previous: false).
I0310 19:28:16.122657 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for dns-default-pnwsp pod in namespace openshift-dns for failing operator kube-rbac-proxy (previous: false): "container \"kube-rbac-proxy\" in pod \"dns-default-pnwsp\" is waiting to start: ContainerCreating"
I0310 19:28:16.122676 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"kube-rbac-proxy\" in pod \"dns-default-pnwsp\" is waiting to start: ContainerCreating"
I0310 19:28:16.122686 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for dns container dns-default-vk6zd pod in namespace openshift-dns (previous: false).
I0310 19:28:16.361898 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for dns-default-vk6zd pod in namespace openshift-dns for failing operator dns (previous: false): "container \"dns\" in pod \"dns-default-vk6zd\" is waiting to start: ContainerCreating"
I0310 19:28:16.361915 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"dns\" in pod \"dns-default-vk6zd\" is waiting to start: ContainerCreating"
I0310 19:28:16.361923 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for kube-rbac-proxy container dns-default-vk6zd pod in namespace openshift-dns (previous: false).
I0310 19:28:16.524587 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for dns-default-vk6zd pod in namespace openshift-dns for failing operator kube-rbac-proxy (previous: false): "container \"kube-rbac-proxy\" in pod \"dns-default-vk6zd\" is waiting to start: ContainerCreating"
I0310 19:28:16.524603 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"kube-rbac-proxy\" in pod \"dns-default-vk6zd\" is waiting to start: ContainerCreating"
I0310 19:28:16.524613 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for dns-node-resolver container node-resolver-2q4w6 pod in namespace openshift-dns (previous: false).
I0310 19:28:16.545847 1 gather_cluster_operators.go:184] Unable to get configs.samples.operator.openshift.io resource due to: configs.samples.operator.openshift.io "cluster" not found
I0310 19:28:16.723654 1 gather_cluster_operator_pods_and_events.go:280] Error: "log buffer is empty"
I0310 19:28:16.723670 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for dns-node-resolver container node-resolver-b48xs pod in namespace openshift-dns (previous: false).
W0310 19:28:16.895566 1 gather_dvo_metrics.go:210] Failed to read the DVO metrics. Trying again.
I0310 19:28:16.922932 1 gather_cluster_operator_pods_and_events.go:280] Error: "log buffer is empty"
I0310 19:28:16.922946 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for dns-node-resolver container node-resolver-pn6zc pod in namespace openshift-dns (previous: false).
I0310 19:28:16.947404 1 tasks_processing.go:74] worker 29 stopped.
I0310 19:28:16.947447 1 recorder.go:70] Recording config/clusteroperator/console with fingerprint=e26202dff7387e68460cdee25da0d5fd53980941d30e78b10d505b4291b4c943
I0310 19:28:16.947473 1 recorder.go:70] Recording config/clusteroperator/csi-snapshot-controller with fingerprint=eaab8574c23cf91563ca8f780d13cf6d14d4722601dec468c5764a3e57d20d1e
I0310 19:28:16.947511 1 recorder.go:70] Recording config/clusteroperator/operator.openshift.io/csisnapshotcontroller/cluster with fingerprint=5adc514f4b63e2f1ecc68bf6f9c0af70c5eea04522a49524e102721b1c41f80e
I0310 19:28:16.947539 1 recorder.go:70] Recording config/clusteroperator/dns with fingerprint=0b39738645d29a7b262f60802724b7ef7fce43bea2c9cd664daa5fc029cba03c
I0310 19:28:16.947560 1 recorder.go:70] Recording config/clusteroperator/operator.openshift.io/dns/default with fingerprint=9e7b4ce029030d3d8c3b49af92c556acdcc415000b40d3f969dbdc42c432b47f
I0310 19:28:16.947586 1 recorder.go:70] Recording config/clusteroperator/image-registry with fingerprint=66aa2653d15e3b6877fb7d14327a5de7099f4f76cb9b75d0f9e4179dfab87cb9
I0310 19:28:16.947619 1 recorder.go:70] Recording config/clusteroperator/ingress with fingerprint=59762b55e049f782f55026d55e967a1c4dfbf70d599019690144429607633770
I0310 19:28:16.947645 1 recorder.go:70] Recording config/clusteroperator/operator.openshift.io/ingresscontroller/openshift-ingress-operator/default with fingerprint=594aa8620142ce20d67e19499f56baa08968d004729afa9c7b1f48af7be6ca4c
I0310 19:28:16.947675 1 recorder.go:70] Recording config/clusteroperator/insights with fingerprint=36dce936c56684e10ff7ba890ef506d79ef9cbf4ef5a46f8bb1e8d4859cd29d3
I0310 19:28:16.947686 1 recorder.go:70] Recording config/clusteroperator/operator.openshift.io/insightsoperator/cluster with fingerprint=e5ff11d57817f84a678f6fa9565af55bd1120227c16a21933637ab62675a6d70
I0310 19:28:16.947703 1 recorder.go:70] Recording config/clusteroperator/kube-apiserver with fingerprint=327905e593af13bce4f754ec0baad740a93c141c3c61b81b951296e2862b143b
I0310 19:28:16.947713 1 recorder.go:70] Recording config/clusteroperator/operator.openshift.io/kubeapiserver/cluster with fingerprint=51503bf0b784fcf65ea46bcaf1f72ac1a5c4d5dc211934f18f27871efed05762
I0310 19:28:16.947730 1 recorder.go:70] Recording config/clusteroperator/kube-controller-manager with fingerprint=4ebb852f1b1a20b79b884a52890f21fbc519844fac11fba1b8c374a4c960277a
I0310 19:28:16.947739 1 recorder.go:70] Recording config/clusteroperator/operator.openshift.io/kubecontrollermanager/cluster with fingerprint=ce90c0d4f367d7da085074268031798382ae7c54fdcb0a21f15a4818fe308c11
I0310 19:28:16.947755 1 recorder.go:70] Recording config/clusteroperator/kube-scheduler with fingerprint=ede4d8e705047e694cf1de8b22020c8d3686a63baef42ab3e17ea933df3f897a
I0310 19:28:16.947765 1 recorder.go:70] Recording config/clusteroperator/operator.openshift.io/kubescheduler/cluster with fingerprint=f2940fb9fd20c19951dfc295eb363b7fba0c505f5ae61f01967a063099e6b60a
I0310 19:28:16.947779 1 recorder.go:70] Recording config/clusteroperator/kube-storage-version-migrator with fingerprint=63a2d34a6c3d30c9e5bea6a8c8afd1eb17668833932eba543e9c00fcce48436f
I0310 19:28:16.947787 1 recorder.go:70] Recording config/clusteroperator/operator.openshift.io/kubestorageversionmigrator/cluster with fingerprint=9351181aa7e6ada41ef581ab31e13516c6b934cc95710154bafb2eb222cb58db
I0310 19:28:16.947803 1 recorder.go:70] Recording config/clusteroperator/monitoring with fingerprint=2d018cd9d0daea2c003b9f18ad1b1fb4e769e089474eb99702816b099367f93e
I0310 19:28:16.947925 1 recorder.go:70] Recording config/clusteroperator/network with fingerprint=df311f39583f023092d63ef99ae39942ceae0927502cdf7e3fe41c6dbe1b9e2a
I0310 19:28:16.947956 1 recorder.go:70] Recording config/clusteroperator/operator.openshift.io/network/cluster with fingerprint=a93d15eaecb455a0e40ecb2826eeecc1533899204ddd3c3921d15ab70af7ae75
I0310 19:28:16.947980 1 recorder.go:70] Recording config/clusteroperator/node-tuning with fingerprint=b6f04e6dccf790a90fb543d5bc21c72f427bc595ae3e5d48ac84d474f5735188
I0310 19:28:16.948004 1 recorder.go:70] Recording config/clusteroperator/openshift-apiserver with fingerprint=0bc468e38d504f84058fb50a690232e61b2f28ca7b04c6bc54f03769667f703d
I0310 19:28:16.948014 1 recorder.go:70] Recording config/clusteroperator/operator.openshift.io/openshiftapiserver/cluster with fingerprint=e712e6cf27339b441e4ed1f4cde91dbde7e952698ba93407e4457db63a4a4c76
I0310 19:28:16.948029 1 recorder.go:70] Recording config/clusteroperator/openshift-controller-manager with fingerprint=4b66da8a0d32e1a9cd0f042d7e74894fcd722ce99f342f865cdc46622de76571
I0310 19:28:16.948039 1 recorder.go:70] Recording config/clusteroperator/operator.openshift.io/openshiftcontrollermanager/cluster with fingerprint=d71a0f4672f9b45d9fc8293bf1687afc650fd28d32e2e30de27523fe7b4eadf7
I0310 19:28:16.948055 1 recorder.go:70] Recording config/clusteroperator/openshift-samples with fingerprint=79d05c6782d372e99dabd436a6404d8b997f7fcbc283c90d4b457009c5fa9aba
I0310 19:28:16.948070 1 recorder.go:70] Recording config/clusteroperator/operator-lifecycle-manager with fingerprint=995d18cf04d90ee7b2abe7bcc9fc63f279d924ac7314037b561416b949ffa240
I0310 19:28:16.948087 1 recorder.go:70] Recording config/clusteroperator/operator-lifecycle-manager-catalog with fingerprint=4e7cf8c2324d8c8a28cf8ad413fbf28623515d7bc098981564a07ce591874fd3
I0310 19:28:16.948103 1 recorder.go:70] Recording config/clusteroperator/operator-lifecycle-manager-packageserver with fingerprint=022efafc09374a14125a51098b61b7553fd40cc81b99c7562b925fac91e58036
I0310 19:28:16.948117 1 recorder.go:70] Recording config/clusteroperator/service-ca with fingerprint=c895155f8a5cbf877e1b29c4b8e28cf9f24176f002ea9842c2a11770f16c65f3
I0310 19:28:16.948141 1 recorder.go:70] Recording config/clusteroperator/storage with fingerprint=4f92d268442b39966a012fb3d73fccf318bb0592ce1304b8c6701aefb1f2c7c6
I0310 19:28:16.948171 1 recorder.go:70] Recording config/clusteroperator/operator.openshift.io/clustercsidriver/ebs.csi.aws.com with fingerprint=510064d6f6bcced87ab5bd2ddaff3d0edd7f93f4a4f7af2641f29fc53ffab21e
I0310 19:28:16.948179 1 recorder.go:70] Recording config/clusteroperator/operator.openshift.io/storage/cluster with fingerprint=8e480f8c1ce1b39baac42d8ec780c57c2592929ae0c801b61ffad49ba13f33ad
I0310 19:28:16.948186 1 gather.go:177] gatherer "clusterconfig" function "operators" took 2.059267826s to process 34 records
I0310 19:28:17.122497 1 gather_cluster_operator_pods_and_events.go:280] Error: "log buffer is empty"
I0310 19:28:17.122553 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for registry container image-registry-64bbd4587d-fk6jt pod in namespace openshift-image-registry (previous: false).
I0310 19:28:17.322436 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for image-registry-64bbd4587d-fk6jt pod in namespace openshift-image-registry for failing operator registry (previous: false): "container \"registry\" in pod \"image-registry-64bbd4587d-fk6jt\" is waiting to start: ContainerCreating"
I0310 19:28:17.322455 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"registry\" in pod \"image-registry-64bbd4587d-fk6jt\" is waiting to start: ContainerCreating"
I0310 19:28:17.322506 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for registry container image-registry-64bbd4587d-skkkf pod in namespace openshift-image-registry (previous: false).
I0310 19:28:17.522783 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for image-registry-64bbd4587d-skkkf pod in namespace openshift-image-registry for failing operator registry (previous: false): "container \"registry\" in pod \"image-registry-64bbd4587d-skkkf\" is waiting to start: ContainerCreating"
I0310 19:28:17.522802 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"registry\" in pod \"image-registry-64bbd4587d-skkkf\" is waiting to start: ContainerCreating"
I0310 19:28:17.522836 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for registry container image-registry-656ddd88f7-dbckh pod in namespace openshift-image-registry (previous: false).
I0310 19:28:17.723752 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for image-registry-656ddd88f7-dbckh pod in namespace openshift-image-registry for failing operator registry (previous: false): "container \"registry\" in pod \"image-registry-656ddd88f7-dbckh\" is waiting to start: ContainerCreating"
I0310 19:28:17.723769 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"registry\" in pod \"image-registry-656ddd88f7-dbckh\" is waiting to start: ContainerCreating"
I0310 19:28:17.723779 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for node-ca container node-ca-54vjn pod in namespace openshift-image-registry (previous: false).
W0310 19:28:17.895122 1 gather_dvo_metrics.go:210] Failed to read the DVO metrics. Trying again.
I0310 19:28:17.925258 1 gather_cluster_operator_pods_and_events.go:280] Error: "log buffer is empty"
I0310 19:28:17.925274 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for node-ca container node-ca-d46s4 pod in namespace openshift-image-registry (previous: false).
I0310 19:28:18.123376 1 gather_cluster_operator_pods_and_events.go:280] Error: "log buffer is empty"
I0310 19:28:18.123393 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for node-ca container node-ca-vl274 pod in namespace openshift-image-registry (previous: false).
I0310 19:28:18.323405 1 gather_cluster_operator_pods_and_events.go:280] Error: "log buffer is empty"
I0310 19:28:18.323424 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for router container router-default-5c456977fc-sw2gs pod in namespace openshift-ingress (previous: false).
I0310 19:28:18.523457 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for router-default-5c456977fc-sw2gs pod in namespace openshift-ingress for failing operator router (previous: false): "container \"router\" in pod \"router-default-5c456977fc-sw2gs\" is waiting to start: ContainerCreating"
I0310 19:28:18.523476 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"router\" in pod \"router-default-5c456977fc-sw2gs\" is waiting to start: ContainerCreating"
I0310 19:28:18.523487 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for router container router-default-6d8df857b5-h5ll8 pod in namespace openshift-ingress (previous: false).
I0310 19:28:18.724267 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for router-default-6d8df857b5-h5ll8 pod in namespace openshift-ingress for failing operator router (previous: false): "container \"router\" in pod \"router-default-6d8df857b5-h5ll8\" is waiting to start: ContainerCreating"
I0310 19:28:18.724284 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"router\" in pod \"router-default-6d8df857b5-h5ll8\" is waiting to start: ContainerCreating"
I0310 19:28:18.724295 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for router container router-default-6d8df857b5-pjbns pod in namespace openshift-ingress (previous: false).
W0310 19:28:18.894763 1 gather_dvo_metrics.go:210] Failed to read the DVO metrics. Trying again.
I0310 19:28:18.924266 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for router-default-6d8df857b5-pjbns pod in namespace openshift-ingress for failing operator router (previous: false): "container \"router\" in pod \"router-default-6d8df857b5-pjbns\" is waiting to start: ContainerCreating"
I0310 19:28:18.924282 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"router\" in pod \"router-default-6d8df857b5-pjbns\" is waiting to start: ContainerCreating"
I0310 19:28:18.924293 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for serve-healthcheck-canary container ingress-canary-8cftp pod in namespace openshift-ingress-canary (previous: false).
I0310 19:28:19.127395 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for ingress-canary-8cftp pod in namespace openshift-ingress-canary for failing operator serve-healthcheck-canary (previous: false): "container \"serve-healthcheck-canary\" in pod \"ingress-canary-8cftp\" is waiting to start: ContainerCreating"
I0310 19:28:19.127411 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"serve-healthcheck-canary\" in pod \"ingress-canary-8cftp\" is waiting to start: ContainerCreating"
I0310 19:28:19.127422 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for serve-healthcheck-canary container ingress-canary-s9b57 pod in namespace openshift-ingress-canary (previous: false).
I0310 19:28:19.323205 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for ingress-canary-s9b57 pod in namespace openshift-ingress-canary for failing operator serve-healthcheck-canary (previous: false): "container \"serve-healthcheck-canary\" in pod \"ingress-canary-s9b57\" is waiting to start: ContainerCreating"
I0310 19:28:19.323224 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"serve-healthcheck-canary\" in pod \"ingress-canary-s9b57\" is waiting to start: ContainerCreating"
I0310 19:28:19.323234 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for serve-healthcheck-canary container ingress-canary-xmvsx pod in namespace openshift-ingress-canary (previous: false).
I0310 19:28:19.523077 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for ingress-canary-xmvsx pod in namespace openshift-ingress-canary for failing operator serve-healthcheck-canary (previous: false): "container \"serve-healthcheck-canary\" in pod \"ingress-canary-xmvsx\" is waiting to start: ContainerCreating"
I0310 19:28:19.523094 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"serve-healthcheck-canary\" in pod \"ingress-canary-xmvsx\" is waiting to start: ContainerCreating"
I0310 19:28:19.523109 1 tasks_processing.go:74] worker 22 stopped.
I0310 19:28:19.523238 1 recorder.go:70] Recording events/openshift-dns with fingerprint=7a2b63a203a184b9c6425a618e3d85deece6888751c5f29b56b590edb77502d1
I0310 19:28:19.523327 1 recorder.go:70] Recording events/openshift-image-registry with fingerprint=1c704137ca4312847bef27814955d81bdab3b251c32235e5fa3fc037f4ec0d46
I0310 19:28:19.523356 1 recorder.go:70] Recording events/openshift-ingress-operator with fingerprint=a1a80b28179185ac5fb35a74ad096229676ac76c9e1aa16fb82c1785ec09a71e
I0310 19:28:19.523403 1 recorder.go:70] Recording events/openshift-ingress with fingerprint=93299850c88955253b2a022938d3bd60640bf6edf73c68d8127bc7e086378a19
I0310 19:28:19.523421 1 recorder.go:70] Recording events/openshift-ingress-canary with fingerprint=2f5e3776bfa1657c0d5565adfa9a5890fcf4257be64a29baaed9bcfac6f374e7
I0310 19:28:19.523579 1 recorder.go:70] Recording config/pod/openshift-image-registry/image-registry-64bbd4587d-fk6jt with fingerprint=a04814ed78dc8fc34a87ccdd2b4b5f17c6b8992720fce6d8f16153cee0c9cbf7
I0310 19:28:19.523687 1 recorder.go:70] Recording config/pod/openshift-image-registry/image-registry-64bbd4587d-skkkf with fingerprint=c32400a1029eedf34ff333c090daa6711f028ef2075d7c6c929f0a04187ff12a
I0310 19:28:19.523777 1 recorder.go:70] Recording config/pod/openshift-image-registry/image-registry-656ddd88f7-dbckh with fingerprint=fb6ed2bca82d1c3045b2fd497d2e5d2dae97d3d6231718421e4fe64a9ddcd8ce
I0310 19:28:19.523787 1 gather.go:177] gatherer "clusterconfig" function "operators_pods_and_events" took 4.632984725s to process 8 records
W0310 19:28:19.895191 1 gather_dvo_metrics.go:210] Failed to read the DVO metrics. Trying again.
W0310 19:28:19.895225 1 gather_dvo_metrics.go:117] Unable to read metrics from endpoint "http://deployment-validation-operator-metrics.openshift-deployment-validation-operator.svc:8383": DVO metrics service was not available within the 5s timeout: context deadline exceeded
I0310 19:28:19.895240 1 tasks_processing.go:74] worker 9 stopped.
E0310 19:28:19.895250 1 gather.go:140] gatherer "clusterconfig" function "dvo_metrics" failed with the error: DVO metrics service was not available within the 5s timeout: context deadline exceeded
I0310 19:28:19.895261 1 recorder.go:70] Recording config/dvo_metrics with fingerprint=e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
W0310 19:28:19.895273 1 gather.go:155] issue recording gatherer "clusterconfig" function "dvo_metrics" result "config/dvo_metrics" because of the warning: warning: the record with the same fingerprint "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855" was already recorded at path "config/configmaps/openshift-config/openshift-service-ca.crt/service-ca.crt", recording another one with a different path "config/dvo_metrics"
I0310 19:28:19.895283 1 gather.go:177] gatherer "clusterconfig" function "dvo_metrics" took 5.029517969s to process 1 records
I0310 19:28:27.161093 1 configmapobserver.go:84] configmaps "insights-config" not found
I0310 19:28:27.306902 1 tasks_processing.go:74] worker 11 stopped.
I0310 19:28:27.306941 1 recorder.go:70] Recording config/installplans with fingerprint=7b887df561a3a9e6ef0dc672845aa5d56e348505006b7496d3a2f83892b0c95b
I0310 19:28:27.306954 1 gather.go:177] gatherer "clusterconfig" function "install_plans" took 12.44288965s to process 1 records
I0310 19:28:28.075415 1 tasks_processing.go:74] worker 10 stopped.
I0310 19:28:28.075718 1 recorder.go:70] Recording config/serviceaccounts with fingerprint=29ecf786211bc49b69ab0dc7a4cfc0a4d8b0e3c5cbe08dd4c30acd97b260957f
I0310 19:28:28.075737 1 gather.go:177] gatherer "clusterconfig" function "service_accounts" took 13.210460969s to process 1 records
E0310 19:28:28.075801 1 periodic.go:250] "Unhandled Error" err="clusterconfig failed after 13.213s with: function \"machine_healthchecks\" failed with an error, function \"machines\" failed with an error, function \"support_secret\" failed with an error, function \"machine_configs\" failed with an error, function \"pod_network_connectivity_checks\" failed with an error, function \"config_maps\" failed with an error, function \"ingress_certificates\" failed with an error, function \"dvo_metrics\" failed with an error"
I0310 19:28:28.076913 1 controllerstatus.go:89] name=periodic-clusterconfig healthy=false reason=PeriodicGatherFailed message=Source clusterconfig could not be retrieved: function "machine_healthchecks" failed with an error, function "machines" failed with an error, function "support_secret" failed with an error, function "machine_configs" failed with an error, function "pod_network_connectivity_checks" failed with an error, function "config_maps" failed with an error, function "ingress_certificates" failed with an error, function "dvo_metrics" failed with an error
I0310 19:28:28.076929 1 periodic.go:212] Running workloads gatherer
I0310 19:28:28.076945 1 tasks_processing.go:45] number of workers: 2
I0310 19:28:28.076959 1 tasks_processing.go:69] worker 1 listening for tasks.
I0310 19:28:28.076965 1 tasks_processing.go:71] worker 1 working on workload_info task.
I0310 19:28:28.076973 1 tasks_processing.go:69] worker 0 listening for tasks.
I0310 19:28:28.077051 1 tasks_processing.go:71] worker 0 working on helmchart_info task.
I0310 19:28:28.102121 1 gather_workloads_info.go:278] Loaded pods in 0s, will wait 21s for image data
I0310 19:28:28.103765 1 tasks_processing.go:74] worker 0 stopped.
I0310 19:28:28.103780 1 gather.go:177] gatherer "workloads" function "helmchart_info" took 26.692412ms to process 0 records
I0310 19:28:28.112538 1 gather_workloads_info.go:387] No image sha256:bae7a33f8db8a3d4b3c4c05498aba85a0ce463f85322067c48e79663710e616e (11ms)
I0310 19:28:28.123596 1 gather_workloads_info.go:387] No image sha256:a7c71f3c9714cf1717c45e12ec817be6d6209f00989e6a31d634e527f0c8147d (11ms)
I0310 19:28:28.134256 1 gather_workloads_info.go:387] No image sha256:2d077e73dd76873ee2a1583aa3b3e76a1b408737f0af2e22b2aa055604d89e81 (11ms)
I0310 19:28:28.147036 1 gather_workloads_info.go:387] No image sha256:87efe06afd1f04426fca7f86c0f74c4ee75c311ba199ddabd5b849b877bc59fa (13ms)
I0310 19:28:28.157734 1 gather_workloads_info.go:387] No image sha256:7941b5d9a758b8667e99cd9236f7f96ec036af61df5cbc96e16077d36700d7c7 (11ms)
I0310 19:28:28.168778 1 gather_workloads_info.go:387] No image sha256:8ef4bdf07dda423fab73484dbc66d527ce8ba8cc2d4f99210bc1bc24ea08c0cb (11ms)
I0310 19:28:28.180253 1 gather_workloads_info.go:387] No image sha256:824a3b8f78e19aa21a9f6444aefe8e0b624886cca18fe828b13acbab55e6e868 (11ms)
I0310 19:28:28.190950 1 gather_workloads_info.go:387] No image sha256:373614619b9420b110d1508c5f17e066ce69c4c226fd04d02fbb959d9ba41eb6 (11ms)
I0310 19:28:28.201660 1 gather_workloads_info.go:387] No image sha256:48404bd61d05dc738ccce4d22e36b30dbcdc6015b06b4afb604fd1baeee35bf2 (11ms)
I0310 19:28:28.212564 1 gather_workloads_info.go:387] No image sha256:9697bc2258bdfa9ae8c1866cc7eb0b3b46851998db827b121bdf77417a881eb3 (11ms)
I0310 19:28:28.223055 1 gather_workloads_info.go:387] No image sha256:420c12f61c53e54eab2d99476759c0de339fca98ce0a3a782bc6545cc0e97a9c (10ms)
I0310 19:28:28.314140 1 gather_workloads_info.go:387] No image sha256:944d9261ba7a143131fe8267c172defc5f37acc1cea3d4d373ec6fc5d8bfcc31 (91ms)
I0310 19:28:28.413707 1 gather_workloads_info.go:387] No image sha256:6d6ebff54b8adac74f4d1b12ac8aa0f16cd7b28370e0d6aa847d1c457e03a5b6 (100ms)
I0310 19:28:28.513926 1 gather_workloads_info.go:387] No image sha256:4f175ee49f51dc4379e9993fecd1657c7a9c4c3dc096772cb198d0212b9eea47 (100ms)
I0310 19:28:28.613960 1 gather_workloads_info.go:387] No image sha256:c69559260f8b618abc6561da7b0327a2da59e6c09d27f249a14c4b2733ed0384 (100ms)
I0310 19:28:28.713908 1 gather_workloads_info.go:387] No image sha256:d28734effaadd66434e77a1a5fbe2e8a4ca2066cd9f8868c22ade9475539bfd7 (100ms)
I0310 19:28:28.814044 1 gather_workloads_info.go:387] No image sha256:23c4fe84047a4e6cbe7f75f470e7d6fe0e61e7910dd17945b1b61bc4b72f3f2a (100ms)
I0310 19:28:28.913741 1 gather_workloads_info.go:387] No image sha256:2fac754deaeade3456361eed52e344318ff16d04819384432759f0ea35530114 (100ms)
I0310 19:28:29.013407 1 gather_workloads_info.go:387] No image sha256:752be69b2262be713df12c47f4bac8c2dafed272c401e9a89f8060f053d68054 (100ms)
I0310 19:28:29.113347 1 gather_workloads_info.go:387] No image sha256:76f820bd9bd138d29305d545d7d49bfe63e2923f1fb1ec2e8eac81a388359024 (100ms)
I0310 19:28:29.213555 1 gather_workloads_info.go:387] No image sha256:3d0634fe58641d5242649c44ddba70ed67fdd6d2dcb5c2261df5cee8b33de9fd (100ms)
I0310 19:28:29.313594 1 gather_workloads_info.go:387] No image sha256:6e67c980b7300e769fb4b2adaaf006d0f8274e43b10586701b204ea5153f15fc (100ms)
I0310 19:28:29.413718 1 gather_workloads_info.go:387] No image sha256:14703f73bb1ca69ea03b726e340b3b68f8e294e1f22c80d1a16666ea3d4a88a3 (100ms)
I0310 19:28:29.513913 1 gather_workloads_info.go:387] No image sha256:0a8a0473029ace3adbb66f490dbe560f0e7782a38cdf32c1f7dd3e092e1d191e (100ms)
I0310 19:28:29.614047 1 gather_workloads_info.go:387] No image sha256:f2c46054a8f64a0e949cbf30295f31b4a35a0203c6ad03fa7ec922b4101dcbf8 (100ms)
I0310 19:28:29.714761 1 gather_workloads_info.go:387] No image sha256:0a92dd43975f972e3dc707fb37854e046813531553038147e9c90d54a8d9df73 (101ms)
I0310 19:28:29.813926 1 gather_workloads_info.go:387] No image sha256:ceab8a8c340801fcebf800620d3fef5493d8a443077a21c6e56984051bd3abde (99ms)
I0310 19:28:29.913675 1 gather_workloads_info.go:387] No image sha256:8b1e32ce43eb1247b0558230b6e0baa85ac62f1a5e2a2089d40b4fea90538529 (100ms)
I0310 19:28:30.013877 1 gather_workloads_info.go:387] No image sha256:18956d5fbc8d0369a26cb1e00eb5c3aff1973ae8cbd3983920a0cf2851f05358 (100ms)
I0310 19:28:30.114081 1 gather_workloads_info.go:387] No image sha256:ccdda82b4adfa8ee4a84b3bad19693be6a42a9881a7b83788d835912129c49a4 (100ms)
I0310 19:28:30.180345 1 configmapobserver.go:84] configmaps "insights-config" not found
I0310 19:28:30.212763 1 gather_workloads_info.go:387] No image sha256:a9e885d6f0456a2cc10f9e5da71fe9403f5e4c639b9b7ea15bd403b272ccb824 (99ms)
I0310 19:28:30.212791 1 tasks_processing.go:74] worker 1 stopped.
E0310 19:28:30.212810 1 gather.go:140] gatherer "workloads" function "workload_info" failed with the error: no running pods found for the insights-runtime-extractor statefulset
I0310 19:28:30.213095 1 recorder.go:70] Recording config/workload_info with fingerprint=82c5145d955ed6c45aa364a49309db19d60fb7d8d04062048bc06fc44e98e2e1
I0310 19:28:30.213115 1 gather.go:177] gatherer "workloads" function "workload_info" took 2.135818211s to process 1 records
E0310 19:28:30.213171 1 periodic.go:250] "Unhandled Error" err="workloads failed after 2.136s with: function \"workload_info\" failed with an error"
I0310 19:28:30.214268 1 controllerstatus.go:89] name=periodic-workloads healthy=false reason=PeriodicGatherFailed message=Source workloads could not be retrieved: function "workload_info" failed with an error
I0310 19:28:30.214279 1 periodic.go:212] Running conditional gatherer
I0310 19:28:30.222725 1 requests.go:294] Making HTTP GET request at: https://console.redhat.com/api/gathering/v2/4.19.9/gathering_rules
I0310 19:28:30.229603 1 conditional_gatherer.go:107] Get "https://console.redhat.com/api/gathering/v2/4.19.9/gathering_rules": dial tcp: lookup console.redhat.com on 172.30.0.10:53: read udp 10.128.0.9:48205->172.30.0.10:53: read: connection refused
E0310 19:28:30.229880 1 conditional_gatherer.go:320] unable to update alerts cache: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory
I0310 19:28:30.229947 1 conditional_gatherer.go:382] updating version cache for conditional gatherer
I0310 19:28:30.239306 1 conditional_gatherer.go:390] cluster version is '4.19.9'
E0310 19:28:30.239319 1 conditional_gatherer.go:207] error checking conditions for a gathering rule: alerts cache is missing
E0310 19:28:30.239324 1 conditional_gatherer.go:207] error checking conditions for a gathering rule: alerts cache is missing
E0310 19:28:30.239327 1 conditional_gatherer.go:207] error checking conditions for a gathering rule: alerts cache is missing
E0310 19:28:30.239331 1 conditional_gatherer.go:207] error checking conditions for a gathering rule: alerts cache is missing
E0310 19:28:30.239334 1 conditional_gatherer.go:207] error checking conditions for a gathering rule: alerts cache is missing
E0310 19:28:30.239337 1 conditional_gatherer.go:207] error checking conditions for a gathering rule: alerts cache is missing
E0310 19:28:30.239340 1 conditional_gatherer.go:207] error checking conditions for a gathering rule: alerts cache is missing
E0310 19:28:30.239344 1 conditional_gatherer.go:207] error checking conditions for a gathering rule: alerts cache is missing
E0310 19:28:30.239346 1 conditional_gatherer.go:207] error checking conditions for a gathering rule: alerts cache is missing
I0310 19:28:30.239362 1 tasks_processing.go:45] number of workers: 3
I0310 19:28:30.239376 1 tasks_processing.go:69] worker 2 listening for tasks.
I0310 19:28:30.239382 1 tasks_processing.go:71] worker 2 working on conditional_gatherer_rules task.
I0310 19:28:30.239383 1 tasks_processing.go:69] worker 0 listening for tasks.
I0310 19:28:30.239391 1 tasks_processing.go:69] worker 1 listening for tasks.
I0310 19:28:30.239403 1 tasks_processing.go:74] worker 1 stopped.
I0310 19:28:30.239414 1 tasks_processing.go:71] worker 0 working on remote_configuration task.
I0310 19:28:30.239414 1 tasks_processing.go:71] worker 2 working on rapid_container_logs task.
I0310 19:28:30.239482 1 recorder.go:70] Recording insights-operator/conditional-gatherer-rules with fingerprint=7034af97e7e41c22e4b775abdd4b9066c8ebb19da33eb7f69f39bfd2eb5f6406
I0310 19:28:30.239495 1 gather.go:177] gatherer "conditional" function "conditional_gatherer_rules" took 1.309µs to process 1 records
I0310 19:28:30.239528 1 recorder.go:70] Recording insights-operator/remote-configuration with fingerprint=0394430c431eec4d48bb1811a90918e95161d2282c59af26f2473613cc0959db
I0310 19:28:30.239536 1 gather.go:177] gatherer "conditional" function "remote_configuration" took 1.373µs to process 1 records
I0310 19:28:30.239541 1 tasks_processing.go:74] worker 0 stopped.
I0310 19:28:30.239687 1 tasks_processing.go:74] worker 2 stopped.
I0310 19:28:30.239699 1 gather.go:177] gatherer "conditional" function "rapid_container_logs" took 259.379µs to process 0 records
I0310 19:28:30.239719 1 controllerstatus.go:89] name=periodic-conditional healthy=false reason=NotAvailable message=Get "https://console.redhat.com/api/gathering/v2/4.19.9/gathering_rules": dial tcp: lookup console.redhat.com on 172.30.0.10:53: read udp 10.128.0.9:48205->172.30.0.10:53: read: connection refused
I0310 19:28:30.239736 1 recorder.go:70] Recording insights-operator/remote-configuration.json with fingerprint=359de9c990c741675cec72fda96b5c3682221efdb4799f5eaa6e9805bcd3b5c1
I0310 19:28:30.266586 1 recorder.go:70] Recording insights-operator/gathers with fingerprint=469e220a307c06108c4c79cc17053350a161affc5df7d4b6f360738e130c2988
I0310 19:28:30.266700 1 diskrecorder.go:70] Writing 100 records to /var/lib/insights-operator/insights-2026-03-10-192830.tar.gz
I0310 19:28:30.273366 1 diskrecorder.go:51] Wrote 100 records to disk in 6ms
I0310 19:28:30.273393 1 periodic.go:281] Gathering cluster info every 2h0m0s
I0310 19:28:30.273408 1 periodic.go:282] Configuration is dataReporting: interval: 2h0m0s, uploadEndpoint: https://console.redhat.com/api/ingress/v1/upload, storagePath: /var/lib/insights-operator, downloadEndpoint: https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/reports, conditionalGathererEndpoint: https://console.redhat.com/api/gathering/v2/%s/gathering_rules, obfuscation: [] sca: disabled: false, endpoint: https://api.openshift.com/api/accounts_mgmt/v1/entitlement_certificates, interval: 8h0m0s alerting: disabled: false clusterTransfer: endpoint: https://api.openshift.com/api/accounts_mgmt/v1/cluster_transfers/, interval: 12h0m0s proxy: httpProxy: , httpsProxy: , noProxy:
I0310 19:28:30.384343 1 configmapobserver.go:84] configmaps "insights-config" not found
I0310 19:28:38.847199 1 configmapobserver.go:84] configmaps "insights-config" not found
I0310 19:29:39.497501 1 observer_polling.go:111] Observed file "/var/run/configmaps/service-ca-bundle/service-ca.crt" has been created (hash="9bea3feb0e90ace51efdb948b4f196da41b8a4aa27df6855e45a8b84e08db752")
W0310 19:29:39.497534 1 builder.go:160] Restart triggered because of file /var/run/configmaps/service-ca-bundle/service-ca.crt was created
I0310 19:29:39.497584 1 observer_polling.go:111] Observed file "/var/run/secrets/serving-cert/tls.crt" has been created (hash="80754afb9eb3e40b01df9cc0c9482c47c8d7d01b7f38db02e0b76e3224abbfdf")
I0310 19:29:39.497587 1 simple_featuregate_reader.go:177] Shutting down feature-gate-detector
I0310 19:29:39.497629 1 observer_polling.go:111] Observed file "/var/run/secrets/serving-cert/tls.key" has been created (hash="643702f930e82f68143a692d6ae2a02d30002ddeda1077df5003a5ce3e42aa80")
I0310 19:29:39.497664 1 genericapiserver.go:693] "[graceful-termination] pre-shutdown hooks completed" name="PreShutdownHooksStopped"
I0310 19:29:39.497675 1 genericapiserver.go:548] "[graceful-termination] shutdown event" name="ShutdownInitiated"
I0310 19:29:39.497697 1 base_controller.go:181] Shutting down LoggingSyncer ...
I0310 19:29:39.497701 1 object_count_tracker.go:151] "StorageObjectCountTracker pruner is exiting"
I0310 19:29:39.497720 1 genericapiserver.go:651] "[graceful-termination] not going to wait for active watch request(s) to drain"
I0310 19:29:39.497764 1 base_controller.go:181] Shutting down ConfigController ...
I0310 19:29:39.497779 1 base_controller.go:123] Shutting down worker of ConfigController controller ...
I0310 19:29:39.497786 1 base_controller.go:113] All ConfigController workers have been terminated
I0310 19:29:39.497792 1 secure_serving.go:258] Stopped listening on [::]:8443
I0310 19:29:39.497807 1 genericapiserver.go:600] "[graceful-termination] shutdown event" name="HTTPServerStoppedListening"
I0310 19:29:39.497881 1 requestheader_controller.go:194] Shutting down RequestHeaderAuthRequestController
I0310 19:29:39.497817 1 periodic.go:173] Shutting down
I0310 19:29:39.497826 1 base_controller.go:123] Shutting down worker of LoggingSyncer controller ...
I0310 19:29:39.497962 1 base_controller.go:113] All LoggingSyncer workers have been terminated
I0310 19:29:39.497827 1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
I0310 19:29:39.497852 1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"