W0319 14:00:07.546876 1 cmd.go:257] Using insecure, self-signed certificates
I0319 14:00:07.947576 1 start.go:138] Unable to read service ca bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory
I0319 14:00:07.947927 1 observer_polling.go:159] Starting file observer
I0319 14:00:08.353787 1 operator.go:60] Starting insights-operator v0.0.0-master+$Format:%H$
I0319 14:00:08.353982 1 legacy_config.go:327] Current config: {"report":false,"storagePath":"/var/lib/insights-operator","interval":"2h","endpoint":"https://console.redhat.com/api/ingress/v1/upload","conditionalGathererEndpoint":"https://console.redhat.com/api/gathering/v2/%s/gathering_rules","pull_report":{"endpoint":"https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/reports","delay":"60s","timeout":"3000s","min_retry":"30s"},"impersonate":"system:serviceaccount:openshift-insights:gather","enableGlobalObfuscation":false,"ocm":{"scaEndpoint":"https://api.openshift.com/api/accounts_mgmt/v1/entitlement_certificates","scaInterval":"8h","scaDisabled":false,"clusterTransferEndpoint":"https://api.openshift.com/api/accounts_mgmt/v1/cluster_transfers/","clusterTransferInterval":"12h"},"disableInsightsAlerts":false,"processingStatusEndpoint":"https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/request/%s/status","reportEndpointTechPreview":"https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/request/%s/report"}
I0319 14:00:08.354680 1 simple_featuregate_reader.go:171] Starting feature-gate-detector
I0319 14:00:08.355037 1 secure_serving.go:57] Forcing use of http/1.1 only
W0319 14:00:08.355062 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.
W0319 14:00:08.355068 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.
W0319 14:00:08.355074 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.
W0319 14:00:08.355078 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.
W0319 14:00:08.355082 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.
W0319 14:00:08.355086 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.
I0319 14:00:08.359009 1 operator.go:125] FeatureGates initialized: knownFeatureGates=[AdditionalRoutingCapabilities AdminNetworkPolicy AlibabaPlatform AzureWorkloadIdentity BuildCSIVolumes CPMSMachineNamePrefix ConsolePluginContentSecurityPolicy ExternalOIDC ExternalOIDCWithUIDAndExtraClaimMappings GatewayAPI GatewayAPIController HighlyAvailableArbiter ImageVolume IngressControllerLBSubnetsAWS KMSv1 MachineConfigNodes ManagedBootImages ManagedBootImagesAWS MetricsCollectionProfiles NetworkDiagnosticsConfig NetworkLiveMigration NetworkSegmentation PinnedImages ProcMountType RouteAdvertisements RouteExternalCertificate ServiceAccountTokenNodeBinding SetEIPForNLBIngressController SigstoreImageVerification StoragePerformantSecurityPolicy UpgradeStatus UserNamespacesPodSecurityStandards UserNamespacesSupport VSphereMultiDisk VSphereMultiNetworks AWSClusterHostedDNS AWSClusterHostedDNSInstall AWSDedicatedHosts AWSServiceLBNetworkSecurityGroup AutomatedEtcdBackup AzureClusterHostedDNSInstall AzureDedicatedHosts AzureMultiDisk BootImageSkewEnforcement BootcNodeManagement ClusterAPIInstall ClusterAPIInstallIBMCloud ClusterMonitoringConfig ClusterVersionOperatorConfiguration DNSNameResolver DualReplica DyanmicServiceEndpointIBMCloud DynamicResourceAllocation EtcdBackendQuota EventedPLEG Example Example2 ExternalSnapshotMetadata GCPClusterHostedDNS GCPClusterHostedDNSInstall GCPCustomAPIEndpoints GCPCustomAPIEndpointsInstall ImageModeStatusReporting ImageStreamImportMode IngressControllerDynamicConfigurationManager InsightsConfig InsightsConfigAPI InsightsOnDemandDataGather IrreconcilableMachineConfig KMSEncryptionProvider MachineAPIMigration MachineAPIOperatorDisableMachineHealthCheckController ManagedBootImagesAzure ManagedBootImagesvSphere MaxUnavailableStatefulSet MinimumKubeletVersion MixedCPUsAllocation MultiArchInstallAzure MultiDiskSetup MutatingAdmissionPolicy NewOLM NewOLMCatalogdAPIV1Metas NewOLMOwnSingleNamespace NewOLMPreflightPermissionChecks NewOLMWebhookProviderOpenshiftServiceCA NoRegistryClusterOperations NodeSwap NutanixMultiSubnets OVNObservability OpenShiftPodSecurityAdmission PreconfiguredUDNAddresses SELinuxMount ShortCertRotation SignatureStores SigstoreImageVerificationPKI TranslateStreamCloseWebsocketRequests VSphereConfigurableMaxAllowedBlockVolumesPerNode VSphereHostVMGroupZonal VSphereMixedNodeEnv VolumeAttributesClass VolumeGroupSnapshot]
I0319 14:00:08.359052 1 event.go:377] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-insights", Name:"insights-operator", UID:"1adc1221-c25b-4bd5-b4f1-330bbfe8caf2", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'FeatureGatesInitialized' FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdditionalRoutingCapabilities", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BuildCSIVolumes", "CPMSMachineNamePrefix", "ConsolePluginContentSecurityPolicy", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GatewayAPI", "GatewayAPIController", "HighlyAvailableArbiter", "ImageVolume", "IngressControllerLBSubnetsAWS", "KMSv1", "MachineConfigNodes", "ManagedBootImages", "ManagedBootImagesAWS", "MetricsCollectionProfiles", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "PinnedImages", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SetEIPForNLBIngressController", "SigstoreImageVerification", "StoragePerformantSecurityPolicy", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiDisk", "VSphereMultiNetworks"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AWSClusterHostedDNSInstall", "AWSDedicatedHosts", "AWSServiceLBNetworkSecurityGroup", "AutomatedEtcdBackup", "AzureClusterHostedDNSInstall", "AzureDedicatedHosts", "AzureMultiDisk", "BootImageSkewEnforcement", "BootcNodeManagement", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ClusterVersionOperatorConfiguration", "DNSNameResolver", "DualReplica", "DyanmicServiceEndpointIBMCloud", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "Example2", "ExternalSnapshotMetadata", "GCPClusterHostedDNS", "GCPClusterHostedDNSInstall", "GCPCustomAPIEndpoints", "GCPCustomAPIEndpointsInstall", "ImageModeStatusReporting", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "IrreconcilableMachineConfig", "KMSEncryptionProvider", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "ManagedBootImagesAzure", "ManagedBootImagesvSphere", "MaxUnavailableStatefulSet", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "MultiDiskSetup", "MutatingAdmissionPolicy", "NewOLM", "NewOLMCatalogdAPIV1Metas", "NewOLMOwnSingleNamespace", "NewOLMPreflightPermissionChecks", "NewOLMWebhookProviderOpenshiftServiceCA", "NoRegistryClusterOperations", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PreconfiguredUDNAddresses", "SELinuxMount", "ShortCertRotation", "SignatureStores", "SigstoreImageVerificationPKI", "TranslateStreamCloseWebsocketRequests", "VSphereConfigurableMaxAllowedBlockVolumesPerNode", "VSphereHostVMGroupZonal", "VSphereMixedNodeEnv", "VolumeAttributesClass", "VolumeGroupSnapshot"}}
I0319 14:00:08.359274 1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
I0319 14:00:08.359294 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController
I0319 14:00:08.359336 1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I0319 14:00:08.359356 1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
I0319 14:00:08.359453 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I0319 14:00:08.359396 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0319 14:00:08.359698 1 dynamic_serving_content.go:135] "Starting controller" name="serving-cert::/tmp/serving-cert-236620080/tls.crt::/tmp/serving-cert-236620080/tls.key"
I0319 14:00:08.360099 1 secure_serving.go:213] Serving securely on [::]:8443
I0319 14:00:08.360131 1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
W0319 14:00:08.365105 1 configmapobserver.go:64] Cannot get the configuration config map: configmaps "insights-config" not found. Default configuration is used.
I0319 14:00:08.365136 1 secretconfigobserver.go:216] Legacy configuration set: enabled=false endpoint=https://console.redhat.com/api/ingress/v1/upload conditional_gatherer_endpoint=https://console.redhat.com/api/gathering/v2/%s/gathering_rules interval=2h0m0s token=false reportEndpoint=https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/reports initialPollingDelay=1m0s minRetryTime=30s pollingTimeout=50m0s processingStatusEndpoint=https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/request/%s/status
I0319 14:00:08.365254 1 base_controller.go:76] Waiting for caches to sync for ConfigController
I0319 14:00:08.371491 1 secretconfigobserver.go:249] Found cloud.openshift.com token
I0319 14:00:08.371566 1 secretconfigobserver.go:204] Legacy configuration updated: enabled=true endpoint=https://console.redhat.com/api/ingress/v1/upload conditional_gatherer_endpoint=https://console.redhat.com/api/gathering/v2/%s/gathering_rules interval=2h0m0s token=true reportEndpoint=https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/reports initialPollingDelay=1m0s minRetryTime=30s pollingTimeout=50m0s processingStatusEndpoint=https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/request/%s/status
I0319 14:00:08.377865 1 secretconfigobserver.go:119] support secret does not exist
I0319 14:00:08.385061 1 secretconfigobserver.go:249] Found cloud.openshift.com token
I0319 14:00:08.390898 1 secretconfigobserver.go:119] support secret does not exist
I0319 14:00:08.392544 1 recorder.go:161] Pruning old reports every 5h25m10s, max age is 288h0m0s
I0319 14:00:08.400222 1 controllerstatus.go:80] name=insightsuploader healthy=true reason= message=
I0319 14:00:08.400252 1 insightsuploader.go:86] Reporting status periodically to https://console.redhat.com/api/ingress/v1/upload every 2h0m0s, starting in 1m30s
I0319 14:00:08.400262 1 periodic.go:209] Running clusterconfig gatherer
I0319 14:00:08.400270 1 controllerstatus.go:80] name=insightsreport healthy=true reason= message=
I0319 14:00:08.400281 1 insightsreport.go:296] Starting report retriever
I0319 14:00:08.400287 1 insightsreport.go:298] Insights analysis reports will be downloaded from the https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/reports endpoint with a delay of 1m0s
I0319 14:00:08.400313 1 tasks_processing.go:45] number of workers: 32
I0319 14:00:08.400341 1 tasks_processing.go:69] worker 2 listening for tasks.
I0319 14:00:08.400349 1 tasks_processing.go:69] worker 0 listening for tasks.
I0319 14:00:08.400353 1 tasks_processing.go:69] worker 31 listening for tasks.
I0319 14:00:08.400358 1 tasks_processing.go:71] worker 0 working on metrics task.
I0319 14:00:08.400359 1 tasks_processing.go:69] worker 16 listening for tasks.
I0319 14:00:08.400363 1 tasks_processing.go:69] worker 17 listening for tasks.
I0319 14:00:08.400362 1 tasks_processing.go:69] worker 1 listening for tasks.
I0319 14:00:08.400350 1 tasks_processing.go:71] worker 2 working on ingress_certificates task.
I0319 14:00:08.400377 1 tasks_processing.go:69] worker 9 listening for tasks.
I0319 14:00:08.400377 1 tasks_processing.go:69] worker 19 listening for tasks.
I0319 14:00:08.400385 1 tasks_processing.go:69] worker 14 listening for tasks.
I0319 14:00:08.400371 1 tasks_processing.go:69] worker 18 listening for tasks.
I0319 14:00:08.400378 1 tasks_processing.go:69] worker 10 listening for tasks.
I0319 14:00:08.400393 1 tasks_processing.go:69] worker 4 listening for tasks.
I0319 14:00:08.400385 1 tasks_processing.go:69] worker 20 listening for tasks.
I0319 14:00:08.400397 1 tasks_processing.go:69] worker 13 listening for tasks.
I0319 14:00:08.400419 1 tasks_processing.go:69] worker 7 listening for tasks.
I0319 14:00:08.400392 1 tasks_processing.go:69] worker 21 listening for tasks.
I0319 14:00:08.400425 1 tasks_processing.go:71] worker 9 working on image_registries task.
I0319 14:00:08.400431 1 tasks_processing.go:71] worker 18 working on node_logs task.
I0319 14:00:08.400432 1 tasks_processing.go:71] worker 19 working on storage_cluster task.
I0319 14:00:08.400433 1 tasks_processing.go:71] worker 21 working on olm_operators task.
I0319 14:00:08.400444 1 tasks_processing.go:69] worker 28 listening for tasks.
I0319 14:00:08.400449 1 tasks_processing.go:69] worker 12 listening for tasks.
I0319 14:00:08.400456 1 tasks_processing.go:71] worker 12 working on openstack_dataplanenodesets task.
I0319 14:00:08.400458 1 tasks_processing.go:69] worker 29 listening for tasks.
I0319 14:00:08.400468 1 tasks_processing.go:69] worker 30 listening for tasks.
I0319 14:00:08.400434 1 tasks_processing.go:69] worker 27 listening for tasks.
I0319 14:00:08.400399 1 tasks_processing.go:69] worker 22 listening for tasks.
I0319 14:00:08.400400 1 tasks_processing.go:69] worker 15 listening for tasks.
I0319 14:00:08.400405 1 tasks_processing.go:71] worker 20 working on silenced_alerts task.
W0319 14:00:08.400502 1 gather_silenced_alerts.go:38] Unable to load alerts client, no alerts will be collected: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory
I0319 14:00:08.400559 1 gather.go:177] gatherer "clusterconfig" function "silenced_alerts" took 68.987µs to process 0 records
I0319 14:00:08.400451 1 tasks_processing.go:71] worker 28 working on machine_healthchecks task.
W0319 14:00:08.400391 1 gather_most_recent_metrics.go:64] Unable to load metrics client, no metrics will be collected: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory
I0319 14:00:08.400663 1 gather.go:177] gatherer "clusterconfig" function "metrics" took 293.243µs to process 0 records
I0319 14:00:08.400518 1 tasks_processing.go:71] worker 7 working on networks task.
I0319 14:00:08.400713 1 tasks_processing.go:71] worker 15 working on image_pruners task.
I0319 14:00:08.400813 1 tasks_processing.go:71] worker 20 working on certificate_signing_requests task.
I0319 14:00:08.400408 1 tasks_processing.go:69] worker 5 listening for tasks.
I0319 14:00:08.400370 1 tasks_processing.go:69] worker 3 listening for tasks.
I0319 14:00:08.400904 1 tasks_processing.go:71] worker 5 working on feature_gates task.
I0319 14:00:08.400409 1 tasks_processing.go:69] worker 11 listening for tasks.
I0319 14:00:08.400414 1 tasks_processing.go:69] worker 24 listening for tasks.
I0319 14:00:08.400415 1 tasks_processing.go:71] worker 16 working on container_images task.
I0319 14:00:08.400415 1 tasks_processing.go:69] worker 6 listening for tasks.
I0319 14:00:08.400950 1 tasks_processing.go:71] worker 24 working on mutating_webhook_configurations task.
I0319 14:00:08.400954 1 tasks_processing.go:71] worker 6 working on infrastructures task.
I0319 14:00:08.401048 1 tasks_processing.go:71] worker 3 working on machine_config_pools task.
I0319 14:00:08.400419 1 tasks_processing.go:71] worker 17 working on authentication task.
I0319 14:00:08.401117 1 tasks_processing.go:71] worker 11 working on install_plans task.
I0319 14:00:08.400561 1 tasks_processing.go:71] worker 14 working on ceph_cluster task.
I0319 14:00:08.400422 1 tasks_processing.go:71] worker 1 working on version task.
I0319 14:00:08.400424 1 tasks_processing.go:69] worker 8 listening for tasks.
I0319 14:00:08.401487 1 tasks_processing.go:71] worker 8 working on lokistack task.
I0319 14:00:08.400426 1 tasks_processing.go:69] worker 26 listening for tasks.
I0319 14:00:08.400426 1 tasks_processing.go:71] worker 13 working on sap_pods task.
I0319 14:00:08.400532 1 tasks_processing.go:71] worker 10 working on number_of_pods_and_netnamespaces_with_sdn_annotations task.
I0319 14:00:08.400540 1 tasks_processing.go:71] worker 4 working on schedulers task.
I0319 14:00:08.401621 1 tasks_processing.go:71] worker 26 working on openstack_controlplanes task.
I0319 14:00:08.400406 1 tasks_processing.go:69] worker 23 listening for tasks.
I0319 14:00:08.400691 1 tasks_processing.go:71] worker 0 working on pod_network_connectivity_checks task.
I0319 14:00:08.400697 1 tasks_processing.go:71] worker 29 working on cost_management_metrics_configs task.
I0319 14:00:08.400701 1 tasks_processing.go:71] worker 30 working on nodenetworkstates task.
I0319 14:00:08.400705 1 tasks_processing.go:71] worker 27 working on pdbs task.
I0319 14:00:08.400709 1 tasks_processing.go:71] worker 22 working on openshift_logging task.
I0319 14:00:08.400410 1 tasks_processing.go:71] worker 31 working on openstack_version task.
I0319 14:00:08.400420 1 tasks_processing.go:69] worker 25 listening for tasks.
I0319 14:00:08.402411 1 tasks_processing.go:71] worker 25 working on machine_autoscalers task.
I0319 14:00:08.401781 1 tasks_processing.go:71] worker 23 working on validating_webhook_configurations task.
I0319 14:00:08.406061 1 tasks_processing.go:71] worker 19 working on ingress task.
I0319 14:00:08.406071 1 gather.go:177] gatherer "clusterconfig" function "storage_cluster" took 5.615161ms to process 0 records
I0319 14:00:08.407132 1 controller.go:128] Initializing last reported time to 0001-01-01T00:00:00Z
I0319 14:00:08.407152 1 controller.go:212] Source periodic-workloads *controllerstatus.Simple is not ready
I0319 14:00:08.407158 1 controller.go:212] Source periodic-clusterconfig *controllerstatus.Simple is not ready
I0319 14:00:08.407162 1 controller.go:212] Source periodic-conditional *controllerstatus.Simple is not ready
I0319 14:00:08.407179 1 controller.go:489] The operator is still being initialized
I0319 14:00:08.407189 1 controller.go:512] The operator is healthy
I0319 14:00:08.409104 1 tasks_processing.go:71] worker 14 working on operators task.
I0319 14:00:08.409114 1 gather.go:177] gatherer "clusterconfig" function "ceph_cluster" took 7.878245ms to process 0 records
I0319 14:00:08.410183 1 tasks_processing.go:71] worker 18 working on sap_config task.
I0319 14:00:08.410226 1 gather.go:177] gatherer "clusterconfig" function "node_logs" took 9.743012ms to process 0 records
I0319 14:00:08.410460 1 tasks_processing.go:71] worker 7 working on machine_configs task.
I0319 14:00:08.410706 1 recorder.go:75] Recording config/network with fingerprint=59f6d003abb90b0542cd9b2084508d4ca542e392e0acd1b53d4a065ab4c241cc
I0319 14:00:08.410724 1 gather.go:177] gatherer "clusterconfig" function "networks" took 9.733855ms to process 1 records
I0319 14:00:08.411678 1 tasks_processing.go:71] worker 28 working on sap_datahubs task.
E0319 14:00:08.411693 1 gather.go:140] gatherer "clusterconfig" function "machine_healthchecks" failed with the error: machinehealthchecks.machine.openshift.io is forbidden: User "system:serviceaccount:openshift-insights:gather" cannot list resource "machinehealthchecks" in API group "machine.openshift.io" at the cluster scope
I0319 14:00:08.411708 1 gather.go:177] gatherer "clusterconfig" function "machine_healthchecks" took 11.075294ms to process 0 records
I0319 14:00:08.413476 1 tasks_processing.go:71] worker 12 working on cluster_apiserver task.
I0319 14:00:08.413478 1 gather.go:177] gatherer "clusterconfig" function "openstack_dataplanenodesets" took 13.011197ms to process 0 records
I0319 14:00:08.413581 1 gather.go:177] gatherer "clusterconfig" function "cost_management_metrics_configs" took 11.519291ms to process 0 records
I0319 14:00:08.413589 1 gather.go:177] gatherer "clusterconfig" function "lokistack" took 11.959336ms to process 0 records
I0319 14:00:08.413600 1 tasks_processing.go:71] worker 8 working on clusterroles task.
I0319 14:00:08.413711 1 tasks_processing.go:71] worker 29 working on openshift_machine_api_events task.
I0319 14:00:08.413744 1 tasks_processing.go:71] worker 19 working on crds task.
I0319 14:00:08.413976 1 recorder.go:75] Recording config/ingress with fingerprint=2a8153bdcfdc5cd05849a37bf1f04cce8beb1f98c19ae66cb26337755382a2fd
I0319 14:00:08.413990 1 gather.go:177] gatherer "clusterconfig" function "ingress" took 7.669132ms to process 1 records
I0319 14:00:08.414236 1 tasks_processing.go:71] worker 15 working on jaegers task.
I0319 14:00:08.416036 1 recorder.go:75] Recording config/clusteroperator/imageregistry.operator.openshift.io/imagepruner/cluster with fingerprint=d8c64a007ece0b84931fa85ef35a816796421651f9f4cd8ff3d53c3db89e7b2d
I0319 14:00:08.416266 1 gather.go:177] gatherer "clusterconfig" function "image_pruners" took 13.08871ms to process 1 records
I0319 14:00:08.416392 1 tasks_processing.go:71] worker 22 working on monitoring_persistent_volumes task.
I0319 14:00:08.416422 1 gather.go:177] gatherer "clusterconfig" function "openshift_logging" took 14.175841ms to process 0 records
I0319 14:00:08.419569 1 tasks_processing.go:71] worker 18 working on container_runtime_configs task.
I0319 14:00:08.419578 1 gather.go:177] gatherer "clusterconfig" function "sap_config" took 9.36737ms to process 0 records
I0319 14:00:08.419842 1 tasks_processing.go:71] worker 28 working on tsdb_status task.
I0319 14:00:08.419852 1 gather.go:177] gatherer "clusterconfig" function "sap_datahubs" took 8.14624ms to process 0 records
W0319 14:00:08.419870 1 gather_prometheus_tsdb_status.go:38] Unable to load metrics client, tsdb status cannot be collected: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory
I0319 14:00:08.419880 1 tasks_processing.go:71] worker 28 working on machine_sets task.
I0319 14:00:08.419924 1 gather.go:177] gatherer "clusterconfig" function "tsdb_status" took 27.939µs to process 0 records
I0319 14:00:08.421366 1 tasks_processing.go:71] worker 26 working on support_secret task.
I0319 14:00:08.421371 1 gather.go:177] gatherer "clusterconfig" function "openstack_controlplanes" took 19.722991ms to process 0 records
I0319 14:00:08.421809 1 tasks_processing.go:71] worker 31 working on image task.
I0319 14:00:08.421817 1 gather.go:177] gatherer "clusterconfig" function "openstack_version" took 19.506225ms to process 0 records
I0319 14:00:08.422070 1 tasks_processing.go:71] worker 0 working on nodes task.
E0319 14:00:08.422075 1 gather.go:140] gatherer "clusterconfig" function "pod_network_connectivity_checks" failed with the error: the server could not find the requested resource (get podnetworkconnectivitychecks.controlplane.operator.openshift.io)
I0319 14:00:08.422087 1 gather.go:177] gatherer "clusterconfig" function "pod_network_connectivity_checks" took 20.190809ms to process 0 records
I0319 14:00:08.422106 1 gather.go:177] gatherer "clusterconfig" function "machine_autoscalers" took 19.614032ms to process 0 records
I0319 14:00:08.422116 1 tasks_processing.go:71] worker 25 working on qemu_kubevirt_launcher_logs task.
I0319 14:00:08.422394 1 tasks_processing.go:71] worker 13 working on config_maps task.
I0319 14:00:08.422439 1 gather.go:177] gatherer "clusterconfig" function "sap_pods" took 20.810754ms to process 0 records
I0319 14:00:08.422649 1 tasks_processing.go:71] worker 30 working on nodenetworkconfigurationpolicies task.
I0319 14:00:08.422664 1 gather.go:177] gatherer "clusterconfig" function "nodenetworkstates" took 20.580836ms to process 0 records
I0319 14:00:08.423134 1 tasks_processing.go:71] worker 9 working on openstack_dataplanedeployments task.
I0319 14:00:08.423494 1 recorder.go:75] Recording config/clusteroperator/imageregistry.operator.openshift.io/config/cluster with fingerprint=78ce31646973f59c4f8c1c696733ba30b7f548023ed6f2e40ed5cb1335e7104f
I0319 14:00:08.423512 1 gather.go:177] gatherer "clusterconfig" function "image_registries" took 22.692749ms to process 1 records
I0319 14:00:08.426137 1 tasks_processing.go:71] worker 15 working on proxies task.
I0319 14:00:08.426157 1 gather.go:177] gatherer "clusterconfig" function "jaegers" took 11.820901ms to process 0 records
I0319 14:00:08.426398 1 tasks_processing.go:71] worker 21 working on aggregated_monitoring_cr_names task.
I0319 14:00:08.426415 1 gather.go:177] gatherer "clusterconfig" function "olm_operators" took 25.94852ms to process 0 records
I0319 14:00:08.426531 1 tasks_processing.go:71] worker 6 working on active_alerts task.
W0319 14:00:08.426562 1 gather_active_alerts.go:54] Unable to load alerts client, no alerts will be collected: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory
I0319 14:00:08.426903 1 recorder.go:75] Recording config/infrastructure with fingerprint=72fde780956d5bc17dbffe77083a97d4f6a3d18d510af1061b785a59f6bc1a54
I0319 14:00:08.426919 1 gather.go:177] gatherer "clusterconfig" function "infrastructures" took 25.437459ms to process 1 records
I0319 14:00:08.426935 1 gather.go:177] gatherer "clusterconfig" function "machine_config_pools" took 25.422049ms to process 0 records
I0319 14:00:08.426943 1 gather.go:177] gatherer "clusterconfig" function "active_alerts" took 28.887µs to process 0 records
I0319 14:00:08.426952 1 tasks_processing.go:71] worker 6 working on machines task.
I0319 14:00:08.427070 1 tasks_processing.go:71] worker 3 working on operators_pods_and_events task.
I0319 14:00:08.429326 1 tasks_processing.go:71] worker 28 working on oauths task.
I0319 14:00:08.429338 1 gather.go:177] gatherer "clusterconfig" function "machine_sets" took 9.435553ms to process 0 records
I0319 14:00:08.429754 1 tasks_processing.go:71] worker 5 working on storage_classes task.
I0319 14:00:08.429852 1 recorder.go:75] Recording config/featuregate with fingerprint=3b690ba30516952f78f26ee9921ea87fb92ceeef1ece058584a719443bf2fce5
I0319 14:00:08.429866 1 gather.go:177] gatherer "clusterconfig" function "feature_gates" took 28.837382ms to process 1 records
I0319 14:00:08.430081 1 tasks_processing.go:71] worker 30 working on overlapping_namespace_uids task.
I0319 14:00:08.430087 1 gather.go:177] gatherer "clusterconfig" function "nodenetworkconfigurationpolicies" took 7.418738ms to process 0 records
I0319 14:00:08.430099 1 gather.go:177] gatherer "clusterconfig" function "openstack_dataplanedeployments" took 6.945058ms to process 0 records
I0319 14:00:08.430148 1 tasks_processing.go:71] worker 9 working on service_accounts task.
I0319 14:00:08.430502 1 tasks_processing.go:71] worker 18 working on dvo_metrics task.
I0319 14:00:08.430549 1 gather.go:177] gatherer "clusterconfig" function "container_runtime_configs" took 10.918754ms to process 0 records
I0319 14:00:08.430817 1 tasks_processing.go:74] worker 12 stopped.
I0319 14:00:08.431028 1 recorder.go:75] Recording config/apiserver with fingerprint=2e4741691ae6bcba149bf61bbdd43bb7d11db8d837c64dfbd7501057368cc21a
I0319 14:00:08.431103 1 gather.go:177] gatherer "clusterconfig" function "cluster_apiserver" took 17.298325ms to process 1 records
I0319 14:00:08.431336 1 tasks_processing.go:74] worker 17 stopped.
I0319 14:00:08.431563 1 recorder.go:75] Recording config/authentication with fingerprint=4cc404f8f14028a9204878dedb96c4c4d0e1bb3c5359312f953e1b87cc61e13b
I0319 14:00:08.431581 1 gather.go:177] gatherer "clusterconfig" function "authentication" took 30.253832ms to process 1 records
I0319 14:00:08.432210 1 tasks_processing.go:74] worker 27 stopped.
I0319 14:00:08.432336 1 recorder.go:75] Recording config/pdbs/openshift-image-registry/image-registry with fingerprint=faff2a2a63e26737db84ea27c628fc98d05ccf16141d49e8accc313619bc6403
I0319 14:00:08.432371 1 recorder.go:75] Recording config/pdbs/openshift-ingress/router-default with fingerprint=fbc9830f0a039eabab4c5485d3da0a7ac75702a7852c19d4c0f7da778e8ffe2d
I0319 14:00:08.432402 1 recorder.go:75] Recording config/pdbs/openshift-operator-lifecycle-manager/packageserver-pdb with fingerprint=4e0cb1a1e4159644731e6bf479fcb86c673e3e8360c0b190eaf369d95da1c437
I0319 14:00:08.432417 1 gather.go:177] gatherer "clusterconfig" function "pdbs" took 30.022316ms to process 3 records
I0319 14:00:08.432501 1 recorder.go:75] Recording config/schedulers/cluster with fingerprint=b89e78ec8cf75930049410c746f115d0bf97db039e0e3a40ea83e9a2889362fe
I0319 14:00:08.432518 1 gather.go:177] gatherer "clusterconfig" function "schedulers" took 30.738689ms to process 1 records
I0319 14:00:08.432501 1 tasks_processing.go:74] worker 4 stopped.
I0319 14:00:08.432762 1 tasks_processing.go:74] worker 29 stopped.
I0319 14:00:08.432777 1 gather.go:177] gatherer "clusterconfig" function "openshift_machine_api_events" took 19.034683ms to process 0 records
I0319 14:00:08.432811 1 tasks_processing.go:74] worker 24 stopped.
I0319 14:00:08.432985 1 recorder.go:75] Recording config/mutatingwebhookconfigurations/aws-pod-identity with fingerprint=e84c663363226e3c115436d60f6e9a19387c828683f34ac6a3982e4a9783d246
I0319 14:00:08.433002 1 gather.go:177] gatherer "clusterconfig" function "mutating_webhook_configurations" took 31.84643ms to process 1 records
I0319 14:00:08.433089 1 tasks_processing.go:74] worker 23 stopped.
I0319 14:00:08.433107 1 recorder.go:75] Recording config/validatingwebhookconfigurations/multus.openshift.io with fingerprint=fcc6fdb8662eb154695cec9cd36e640aef45e250ab6a34f5a31761182aa77e9b
I0319 14:00:08.433192 1 recorder.go:75] Recording config/validatingwebhookconfigurations/network-node-identity.openshift.io with fingerprint=3e27f995ee93783e518dd88633abee2f5b0cefa89ce057edafa59452d017c694
I0319 14:00:08.433227 1 recorder.go:75] Recording config/validatingwebhookconfigurations/performance-addon-operator with fingerprint=835e9b3d951998312b92be1316427783f384c33cdfafc4f8a13db3cb2403c47c
I0319 14:00:08.433256 1 gather.go:177] gatherer "clusterconfig" function "validating_webhook_configurations" took 30.334459ms to process 3 records
I0319 14:00:08.433840 1 tasks_processing.go:74] worker 26 stopped.
E0319 14:00:08.433856 1 gather.go:140] gatherer "clusterconfig" function "support_secret" failed with the error: secrets "support" not found
I0319 14:00:08.433867 1 gather.go:177] gatherer "clusterconfig" function "support_secret" took 12.454295ms to process 0 records
I0319 14:00:08.434888 1 tasks_processing.go:74] worker 20 stopped.
I0319 14:00:08.434901 1 gather.go:177] gatherer "clusterconfig" function "certificate_signing_requests" took 34.013072ms to process 0 records
I0319 14:00:08.435704 1 tasks_processing.go:74] worker 31 stopped.
I0319 14:00:08.435790 1 recorder.go:75] Recording config/image with fingerprint=3ee3d651768a45c9801b9caa4ce397d95df3ba880ebe73e3b8108dcf1c2f2588
I0319 14:00:08.435806 1 gather.go:177] gatherer "clusterconfig" function "image" took 13.87752ms to process 1 records
I0319 14:00:08.436491 1 sca.go:136] Pulling SCA certificates from https://api.openshift.com/api/accounts_mgmt/v1/entitlement_certificates. Next check is in 8h0m0s
I0319 14:00:08.436495 1 cluster_transfer.go:83] checking the availability of cluster transfer. Next check is in 12h0m0s
I0319 14:00:08.436621 1 tasks_processing.go:74] worker 22 stopped.
I0319 14:00:08.436961 1 gather.go:177] gatherer "clusterconfig" function "monitoring_persistent_volumes" took 20.212777ms to process 0 records
W0319 14:00:08.437091 1 operator.go:288] started
I0319 14:00:08.437158 1 base_controller.go:76] Waiting for caches to sync for LoggingSyncer
I0319 14:00:08.444315 1 gather_logs.go:145] no pods in namespace were found
I0319 14:00:08.444331 1 tasks_processing.go:74] worker 25 stopped.
I0319 14:00:08.444342 1 gather.go:177] gatherer "clusterconfig" function "qemu_kubevirt_launcher_logs" took 22.207575ms to process 0 records
I0319 14:00:08.452375 1 tasks_processing.go:74] worker 15 stopped.
I0319 14:00:08.452497 1 recorder.go:75] Recording config/proxy with fingerprint=d8297ed6a36cb8d55d0b518c75cefae634a733f1c76b0ba80949130974658177
I0319 14:00:08.452535 1 gather.go:177] gatherer "clusterconfig" function "proxies" took 26.174551ms to process 1 records
I0319 14:00:08.452717 1 tasks_processing.go:74] worker 28 stopped.
I0319 14:00:08.452918 1 recorder.go:75] Recording config/oauth with fingerprint=81f9ac1122a15f62d78321f9479452cdc873487a80edb138f02e2eda995a810e
I0319 14:00:08.452934 1 gather.go:177] gatherer "clusterconfig" function "oauths" took 23.221941ms to process 1 records
E0319 14:00:08.452948 1 gather.go:140] gatherer "clusterconfig" function "machines" failed with the error: machines.machine.openshift.io is forbidden: User "system:serviceaccount:openshift-insights:gather" cannot list resource "machines" in API group "machine.openshift.io" at the cluster scope
I0319 14:00:08.452957 1 gather.go:177] gatherer "clusterconfig" function "machines" took 25.719405ms to process 0 records
I0319 14:00:08.452976 1 recorder.go:75] Recording config/namespaces_with_overlapping_uids with fingerprint=4f53cda18c2baa0c0354bb5f9a3ecbe5ed12ab4d8e11ba873c2f11161202b945
I0319 14:00:08.452987 1 gather.go:177] gatherer "clusterconfig" function "overlapping_namespace_uids" took 22.817284ms to process 1 records
I0319 14:00:08.452996 1 tasks_processing.go:74] worker 30 stopped.
I0319 14:00:08.453004 1 tasks_processing.go:74] worker 6 stopped.
I0319 14:00:08.454299 1 prometheus_rules.go:88] Prometheus rules successfully created
I0319 14:00:08.456158 1 controller.go:212] Source periodic-clusterconfig *controllerstatus.Simple is not ready
I0319 14:00:08.456171 1 controller.go:212] Source periodic-conditional *controllerstatus.Simple is not ready
I0319 14:00:08.456174 1 controller.go:212] Source periodic-workloads *controllerstatus.Simple is not ready
I0319 14:00:08.456178 1 controller.go:212] Source scaController *sca.Controller is not ready
I0319 14:00:08.456181 1 controller.go:212] Source clusterTransferController *clustertransfer.Controller is not ready
I0319 14:00:08.456198 1 controller.go:489] The operator is still being initialized
I0319 14:00:08.456203 1 controller.go:512] The operator is healthy
I0319 14:00:08.459393 1 shared_informer.go:320] Caches are synced for RequestHeaderAuthRequestController
I0319 14:00:08.459554 1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0319 14:00:08.459568 1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
E0319 14:00:08.463502 1 cluster_transfer.go:95] failed to pull cluster transfer: unable to retrieve cluster transfer data from https://api.openshift.com/api/accounts_mgmt/v1/cluster_transfers/: Get "https://api.openshift.com/api/accounts_mgmt/v1/cluster_transfers/?search=cluster_uuid+is+%275608c07e-ea17-4aa3-bf46-947a921087f3%27+and+status+is+%27accepted%27": dial tcp: lookup api.openshift.com on 172.30.0.10:53: read udp 10.128.0.13:33749->172.30.0.10:53: read: connection refused
I0319 14:00:08.463518 1 controllerstatus.go:80] name=clusterTransferController healthy=true reason=Disconnected message=failed to pull cluster transfer: unable to retrieve cluster transfer data from https://api.openshift.com/api/accounts_mgmt/v1/cluster_transfers/: Get "https://api.openshift.com/api/accounts_mgmt/v1/cluster_transfers/?search=cluster_uuid+is+%275608c07e-ea17-4aa3-bf46-947a921087f3%27+and+status+is+%27accepted%27": dial tcp: lookup api.openshift.com on 172.30.0.10:53: read udp 10.128.0.13:33749->172.30.0.10:53: read: connection refused
I0319 14:00:08.465519 1 base_controller.go:82] Caches are synced for ConfigController
I0319 14:00:08.465532 1 base_controller.go:119] Starting #1 worker of ConfigController controller ...
I0319 14:00:08.471177 1 tasks_processing.go:74] worker 16 stopped.
I0319 14:00:08.473315 1 recorder.go:75] Recording config/pod/openshift-ovn-kubernetes/ovnkube-node-cqltx with fingerprint=6b5472e9b0983e5a3a3821b596f3cb5906fd2df03dadea4848f8a1fbf24a62a8
I0319 14:00:08.473428 1 recorder.go:75] Recording config/running_containers with fingerprint=4706f6bb03ee80bb6b97a512dcedc5a772b52007b5a98ab9e27fcb9e54e22db8
I0319 14:00:08.473463 1 gather.go:177] gatherer "clusterconfig" function "container_images" took 70.255279ms to process 2 records
I0319 14:00:08.473683 1 recorder.go:75] Recording cluster-scoped-resources/rbac.authorization.k8s.io/clusterroles/admin with fingerprint=be23d663bd287f58651c7f6dfe4cf55e5ca612577824a1d081b496eff0a8b1fc
I0319 14:00:08.473825 1 recorder.go:75] Recording cluster-scoped-resources/rbac.authorization.k8s.io/clusterroles/edit with fingerprint=d9a1a3c6d9d4220907b0835a8a884fb67d18571fe8180372435914e16b20c600
I0319 14:00:08.473853 1 gather.go:177] gatherer "clusterconfig" function "clusterroles" took 57.781354ms to process 2 records
I0319 14:00:08.473881 1 tasks_processing.go:74] worker 8 stopped.
I0319 14:00:08.475981 1 tasks_processing.go:74] worker 10 stopped.
I0319 14:00:08.476000 1 gather.go:177] gatherer "clusterconfig" function "number_of_pods_and_netnamespaces_with_sdn_annotations" took 74.386816ms to process 0 records
I0319 14:00:08.476989 1 tasks_processing.go:74] worker 5 stopped.
I0319 14:00:08.477314 1 recorder.go:75] Recording config/storage/storageclasses/gp2-csi with fingerprint=328e2f1f01e3489a3d45319fbbce545d4c7a1604a6bbb977d14fb4f0a07ca578
I0319 14:00:08.477387 1 recorder.go:75] Recording config/storage/storageclasses/gp3-csi with fingerprint=05094e47ac4d7443d8de23e1aae7168826286bb871d426f16f862ece2cd7936b
I0319 14:00:08.477420 1 gather.go:177] gatherer "clusterconfig" function "storage_classes" took 47.221535ms to process 2 records
I0319 14:00:08.479814 1 tasks_processing.go:74] worker 19 stopped.
I0319 14:00:08.480570 1 recorder.go:75] Recording config/crd/volumesnapshots.snapshot.storage.k8s.io with fingerprint=34b1510b599747e888a6e370562f672895307611f6776b30668993331a0aed4a
I0319 14:00:08.480907 1 recorder.go:75] Recording config/crd/volumesnapshotcontents.snapshot.storage.k8s.io with fingerprint=f8b6b23fe980ec324baa3ca70ce2cfe57f6dcbe21c923feb05627cd669066e02
I0319 14:00:08.480926 1 gather.go:177] gatherer "clusterconfig" function "crds" took 66.048982ms to process 2 records
I0319 14:00:08.485330 1 configmapobserver.go:84] configmaps "insights-config" not found
W0319 14:00:08.497166 1 gather_dvo_metrics.go:210] Failed to read the DVO metrics. Trying again.
I0319 14:00:08.500798 1 tasks_processing.go:74] worker 21 stopped.
I0319 14:00:08.500817 1 gather.go:177] gatherer "clusterconfig" function "aggregated_monitoring_cr_names" took 74.380282ms to process 0 records
I0319 14:00:08.507326 1 tasks_processing.go:74] worker 13 stopped.
E0319 14:00:08.507344 1 gather.go:140] gatherer "clusterconfig" function "config_maps" failed with the error: configmaps "cluster-monitoring-config" not found
E0319 14:00:08.507350 1 gather.go:140] gatherer "clusterconfig" function "config_maps" failed with the error: configmaps "gateway-mode-config" not found
E0319 14:00:08.507354 1 gather.go:140] gatherer "clusterconfig" function "config_maps" failed with the error: configmaps "insights-config" not found
I0319 14:00:08.507366 1 recorder.go:75] Recording config/configmaps/openshift-config/installer-images/images.json with fingerprint=26b6661162b099a0f5a279859b4f46c867929a79d9a4a41fde4be4e6fe138018
I0319 14:00:08.507407 1 recorder.go:75] Recording config/configmaps/openshift-config/kube-root-ca.crt/ca.crt with fingerprint=d476c7d3f5b104863f08f481b1264dcc68cc272ecefb0ecb709b18a6afab034d
I0319 14:00:08.507416 1 recorder.go:75] Recording config/configmaps/openshift-config/openshift-install/invoker with fingerprint=76b482f683cd3ef9da02debac5b26080a5aeb06ff768ee5c21117514dff29d8a
I0319 14:00:08.507420 1 recorder.go:75] Recording config/configmaps/openshift-config/openshift-install/version with fingerprint=c93090eb0d2a4736885abeb79c91680cfd01fda46464f83456b085d4dc8239f0
I0319 14:00:08.507424 1 recorder.go:75] Recording config/configmaps/openshift-config/openshift-service-ca.crt/service-ca.crt with fingerprint=e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
I0319 14:00:08.507462 1 recorder.go:75] Recording config/configmaps/openshift-config/rosa-brand-logo/rosa-brand-logo.svg with fingerprint=6ed8ca4dd7a8eee7249182bc006e9649ce84d76c551ddfaaa33e55d8c4cc1ed0
I0319 14:00:08.507470 1 recorder.go:75] Recording config/configmaps/kube-system/cluster-config-v1/install-config with fingerprint=ab3811c6b83fd7b8e920094cfa3080d1b4ee3c35ec4c8379437b21d27bd6608d
I0319 14:00:08.507475 1 gather.go:177] gatherer "clusterconfig" function "config_maps" took 84.909923ms to process 7 records
I0319 14:00:08.510892 1 tasks_processing.go:74] worker 0 stopped.
I0319 14:00:08.511181 1 recorder.go:75] Recording config/node/ip-10-0-0-28.ec2.internal with fingerprint=0aa3f2203a7ba4a65e7c4f02827f4e86746311d136daf44ef8e7e0b7215978be
I0319 14:00:08.511266 1 recorder.go:75] Recording config/node/ip-10-0-1-137.ec2.internal with fingerprint=5973dd7ef8c0bc35cdfb6fc474b521639d5ef38099b82e6493ac1877c0782f37
I0319 14:00:08.511324 1 recorder.go:75] Recording config/node/ip-10-0-2-97.ec2.internal with fingerprint=e29c6843e5358a8cc055a98b5ba3ac99dd839deb088be62c9dce9b60499281c8
I0319 14:00:08.511334 1 gather.go:177] gatherer "clusterconfig" function "nodes" took 88.799507ms to process 3 records
I0319 14:00:08.513708 1 tasks_processing.go:74] worker 7 stopped.
I0319 14:00:08.513735 1 recorder.go:75] Recording aggregated/unused_machine_configs_count with fingerprint=4bfc9fa984e5dfcd45848faaf05269de7619bf42edf9f781751af5ee05c1a499
I0319 14:00:08.513752 1 gather.go:177] gatherer "clusterconfig" function "machine_configs" took 103.228785ms to process 1 records
I0319 14:00:08.525810 1 requests.go:205] Asking for SCA certificate with "{"arch": ["x86_64"]}" payload
W0319 14:00:08.530725 1 sca.go:161] Failed to pull SCA certs from https://api.openshift.com/api/accounts_mgmt/v1/entitlement_certificates: unable to retrieve SCA certs data from https://api.openshift.com/api/accounts_mgmt/v1/entitlement_certificates: Post "https://api.openshift.com/api/accounts_mgmt/v1/entitlement_certificates": dial tcp: lookup api.openshift.com on 172.30.0.10:53: read udp 10.128.0.13:52634->172.30.0.10:53: read: connection refused
I0319 14:00:08.530739 1 controllerstatus.go:80] name=scaController healthy=true reason=NonHTTPError message=Failed to pull SCA certs from https://api.openshift.com/api/accounts_mgmt/v1/entitlement_certificates: unable to retrieve SCA certs data from https://api.openshift.com/api/accounts_mgmt/v1/entitlement_certificates: Post "https://api.openshift.com/api/accounts_mgmt/v1/entitlement_certificates": dial tcp: lookup api.openshift.com on 172.30.0.10:53: read udp 10.128.0.13:52634->172.30.0.10:53: read: connection refused
I0319 14:00:08.537907 1 base_controller.go:82] Caches are synced for LoggingSyncer
I0319 14:00:08.537920 1 base_controller.go:119] Starting #1 worker of LoggingSyncer controller ...
I0319 14:00:08.552613 1 tasks_processing.go:74] worker 1 stopped.
I0319 14:00:08.552904 1 recorder.go:75] Recording config/version with fingerprint=1bbbf73a7f21128431dff2975daf0703444ea52754ed0ce69788a7ebe0335341
I0319 14:00:08.552923 1 recorder.go:75] Recording config/id with fingerprint=a59834817b68a2f6e7e51a106d5b0a568fef722ef30ebde2a7f4aae65deaeba3
I0319 14:00:08.552934 1 gather.go:177] gatherer "clusterconfig" function "version" took 151.318344ms to process 2 records
I0319 14:00:08.581997 1 tasks_processing.go:74] worker 2 stopped.
E0319 14:00:08.582014 1 gather.go:140] gatherer "clusterconfig" function "ingress_certificates" failed with the error: failed to get secret 'router-certs-default' in namespace 'openshift-ingress': secrets "router-certs-default" not found
E0319 14:00:08.582020 1 gather.go:140] gatherer "clusterconfig" function "ingress_certificates" failed with the error: failed to get secret '2p4qu4f22mlbdnpskeg8ajj9mnqkh5tm-primary-cert-bundle-secret' in namespace 'openshift-ingress-operator': secrets "2p4qu4f22mlbdnpskeg8ajj9mnqkh5tm-primary-cert-bundle-secret" not found
I0319 14:00:08.582087 1 recorder.go:75] Recording aggregated/ingress_controllers_certs with fingerprint=9ea662ad69b044a228de8d523f89af6b622071514cc96722bd502fc0c16a7cd9
I0319 14:00:08.582101 1 gather.go:177] gatherer "clusterconfig" function "ingress_certificates" took 181.612325ms to process 1 records
I0319 14:00:08.596371 1 gather_cluster_operators.go:184] Unable to get dnsrecords.ingress.operator.openshift.io resource due to: dnsrecords.ingress.operator.openshift.io "default" not found
I0319 14:00:08.601039 1 gather_cluster_operators.go:184] Unable to get dnsrecords.ingress.operator.openshift.io resource due to: dnsrecords.ingress.operator.openshift.io "default" not found
I0319 14:00:08.951352 1 gather_cluster_operator_pods_and_events.go:121] Found 18 pods with 21 containers
I0319 14:00:08.951365 1 gather_cluster_operator_pods_and_events.go:235] Maximum buffer size: 1198372 bytes
I0319 14:00:08.951541 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for dns container dns-default-rcjmk pod in namespace openshift-dns (previous: false).
I0319 14:00:09.193620 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for dns-default-rcjmk pod in namespace openshift-dns for failing operator dns (previous: false): "container \"dns\" in pod \"dns-default-rcjmk\" is waiting to start: ContainerCreating"
I0319 14:00:09.193640 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"dns\" in pod \"dns-default-rcjmk\" is waiting to start: ContainerCreating"
I0319 14:00:09.193652 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for kube-rbac-proxy container dns-default-rcjmk pod in namespace openshift-dns (previous: false).
I0319 14:00:09.353480 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for dns-default-rcjmk pod in namespace openshift-dns for failing operator kube-rbac-proxy (previous: false): "container \"kube-rbac-proxy\" in pod \"dns-default-rcjmk\" is waiting to start: ContainerCreating"
I0319 14:00:09.353496 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"kube-rbac-proxy\" in pod \"dns-default-rcjmk\" is waiting to start: ContainerCreating"
I0319 14:00:09.353510 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for dns container dns-default-rj26w pod in namespace openshift-dns (previous: false).
W0319 14:00:09.497468 1 gather_dvo_metrics.go:210] Failed to read the DVO metrics. Trying again.
I0319 14:00:09.576024 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for dns-default-rj26w pod in namespace openshift-dns for failing operator dns (previous: false): "container \"dns\" in pod \"dns-default-rj26w\" is waiting to start: ContainerCreating"
I0319 14:00:09.576039 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"dns\" in pod \"dns-default-rj26w\" is waiting to start: ContainerCreating"
I0319 14:00:09.576048 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for kube-rbac-proxy container dns-default-rj26w pod in namespace openshift-dns (previous: false).
I0319 14:00:09.752830 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for dns-default-rj26w pod in namespace openshift-dns for failing operator kube-rbac-proxy (previous: false): "container \"kube-rbac-proxy\" in pod \"dns-default-rj26w\" is waiting to start: ContainerCreating"
I0319 14:00:09.752849 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"kube-rbac-proxy\" in pod \"dns-default-rj26w\" is waiting to start: ContainerCreating"
I0319 14:00:09.752860 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for dns container dns-default-wjskq pod in namespace openshift-dns (previous: false).
I0319 14:00:09.970689 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for dns-default-wjskq pod in namespace openshift-dns for failing operator dns (previous: false): "container \"dns\" in pod \"dns-default-wjskq\" is waiting to start: ContainerCreating"
I0319 14:00:09.970708 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"dns\" in pod \"dns-default-wjskq\" is waiting to start: ContainerCreating"
I0319 14:00:09.970718 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for kube-rbac-proxy container dns-default-wjskq pod in namespace openshift-dns (previous: false).
I0319 14:00:10.108113 1 gather_cluster_operators.go:184] Unable to get configs.samples.operator.openshift.io resource due to: configs.samples.operator.openshift.io "cluster" not found
I0319 14:00:10.168723 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for dns-default-wjskq pod in namespace openshift-dns for failing operator kube-rbac-proxy (previous: false): "container \"kube-rbac-proxy\" in pod \"dns-default-wjskq\" is waiting to start: ContainerCreating"
I0319 14:00:10.168744 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"kube-rbac-proxy\" in pod \"dns-default-wjskq\" is waiting to start: ContainerCreating"
I0319 14:00:10.168758 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for dns-node-resolver container node-resolver-5sszd pod in namespace openshift-dns (previous: false).
I0319 14:00:10.352644 1 gather_cluster_operator_pods_and_events.go:280] Error: "log buffer is empty"
I0319 14:00:10.352668 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for dns-node-resolver container node-resolver-k2xs8 pod in namespace openshift-dns (previous: false).
W0319 14:00:10.497494 1 gather_dvo_metrics.go:210] Failed to read the DVO metrics. Trying again.
I0319 14:00:10.553327 1 gather_cluster_operator_pods_and_events.go:280] Error: "log buffer is empty"
I0319 14:00:10.553352 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for dns-node-resolver container node-resolver-prgsq pod in namespace openshift-dns (previous: false).
I0319 14:00:10.700576 1 tasks_processing.go:74] worker 14 stopped.
I0319 14:00:10.700626 1 recorder.go:75] Recording config/clusteroperator/console with fingerprint=e2375e0856af03c4807e37170575bbdb66f2301d9d91bce081d171767a32f057
I0319 14:00:10.700652 1 recorder.go:75] Recording config/clusteroperator/csi-snapshot-controller with fingerprint=acc7807fca525716deab07813e6df36f133237f12bc43bca3c39ef946cb3170a
I0319 14:00:10.700679 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/csisnapshotcontroller/cluster with fingerprint=5adc514f4b63e2f1ecc68bf6f9c0af70c5eea04522a49524e102721b1c41f80e
I0319 14:00:10.700704 1 recorder.go:75] Recording config/clusteroperator/dns with fingerprint=adda94ac16c9784dfca937f9927ce7b45bde64d90c49330632bd17db6d91691b
I0319 14:00:10.700723 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/dns/default with fingerprint=9e7b4ce029030d3d8c3b49af92c556acdcc415000b40d3f969dbdc42c432b47f
I0319 14:00:10.700744 1 recorder.go:75] Recording config/clusteroperator/image-registry with fingerprint=ea20f0f5aef76251997f13d39850db22b56e272b2fd5f8dbf1828cbbd8afc3f6
I0319 14:00:10.700775 1 recorder.go:75] Recording config/clusteroperator/ingress with fingerprint=bd0561ecfc93701e6ffe111c7e635f13f16db46916936c3385adc776ba05720d
I0319 14:00:10.700811 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/ingresscontroller/openshift-ingress-operator/default with fingerprint=1a45e906ddda45d395f291c3e1d8bcaeb07a778cb0aa9b76c1b5e4852a6d53eb
I0319 14:00:10.700843 1 recorder.go:75] Recording config/clusteroperator/insights with fingerprint=d105e1892c5162fab18fb703d80ddaf8da2df24d1992cfbae0732ed4e7dbd762
I0319 14:00:10.700852 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/insightsoperator/cluster with fingerprint=e5ff11d57817f84a678f6fa9565af55bd1120227c16a21933637ab62675a6d70
I0319 14:00:10.700869 1 recorder.go:75] Recording config/clusteroperator/kube-apiserver with fingerprint=b6ceb617795a064cd75a295026c42e96758e450c8793ed0997fa83023067fb83
I0319 14:00:10.700878 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/kubeapiserver/cluster with fingerprint=51503bf0b784fcf65ea46bcaf1f72ac1a5c4d5dc211934f18f27871efed05762
I0319 14:00:10.700895 1 recorder.go:75] Recording config/clusteroperator/kube-controller-manager with fingerprint=8cbbc732417573217da32e68de0bc499e4d014338efd525b83199df91cde0d50
I0319 14:00:10.700905 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/kubecontrollermanager/cluster with fingerprint=ce90c0d4f367d7da085074268031798382ae7c54fdcb0a21f15a4818fe308c11
I0319 14:00:10.700920 1 recorder.go:75] Recording config/clusteroperator/kube-scheduler with fingerprint=380a90f3c30070dc7395a6df24ce5455c0891570017821d73f47df1e8e7fad31
I0319 14:00:10.700943 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/kubescheduler/cluster with fingerprint=f2940fb9fd20c19951dfc295eb363b7fba0c505f5ae61f01967a063099e6b60a
I0319 14:00:10.700958 1 recorder.go:75] Recording config/clusteroperator/kube-storage-version-migrator with fingerprint=41281dbd824303eef81894cdfecfaef6d1d3edad29d659220df9b2a4fac439a0
I0319 14:00:10.700965 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/kubestorageversionmigrator/cluster with fingerprint=9351181aa7e6ada41ef581ab31e13516c6b934cc95710154bafb2eb222cb58db
I0319 14:00:10.700980 1 recorder.go:75] Recording config/clusteroperator/monitoring with fingerprint=3136f65e6f292bf46e48556c23aa43ca51ebafcdea361275897c536602da2e2e
I0319 14:00:10.701085 1 recorder.go:75] Recording config/clusteroperator/network with fingerprint=7ce2875d3a3b8c4dd4522ff5f117e84b77a95ce4301325f5d8281797b44d2e86
I0319 14:00:10.701094 1 recorder.go:75] Recording config/clusteroperator/network.operator.openshift.io/operatorpki/openshift-ovn-kubernetes/ovn with fingerprint=626a89d20e0deaed5b6dfb533acfe65f4bb1618bd200a703b62e60c5d16d94ab
I0319 14:00:10.701103 1 recorder.go:75] Recording config/clusteroperator/network.operator.openshift.io/operatorpki/openshift-ovn-kubernetes/signer with fingerprint=90410b16914712b85b3c4578716ad8c0ae072e688f4cd1e022bf76f20da3506d
I0319 14:00:10.701125 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/network/cluster with fingerprint=a93d15eaecb455a0e40ecb2826eeecc1533899204ddd3c3921d15ab70af7ae75
I0319 14:00:10.701146 1 recorder.go:75] Recording config/clusteroperator/node-tuning with fingerprint=63e6af2e815dc03eba43ba9e21e2f6fed1ff4c8aeeaa97701acd530f8732242e
I0319 14:00:10.701169 1 recorder.go:75] Recording config/clusteroperator/openshift-apiserver with fingerprint=9fd0fc5a50e4629758dc04cd2477ff525cf5746c95e7a352a0afce10dc2b79ff
I0319 14:00:10.701178 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/openshiftapiserver/cluster with fingerprint=e712e6cf27339b441e4ed1f4cde91dbde7e952698ba93407e4457db63a4a4c76
I0319 14:00:10.701192 1 recorder.go:75] Recording config/clusteroperator/openshift-controller-manager with fingerprint=6e14d04daf466d46a1e722b95d8807ebce0941ff14f523fd27ec58d6cddbbb33
I0319 14:00:10.701199 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/openshiftcontrollermanager/cluster with fingerprint=d71a0f4672f9b45d9fc8293bf1687afc650fd28d32e2e30de27523fe7b4eadf7
I0319 14:00:10.701211 1 recorder.go:75] Recording config/clusteroperator/openshift-samples with fingerprint=b8d75e3d1c498af59f5f5ad6a1051e15a024954b7ca135c7fe34c8eefdab9c92
I0319 14:00:10.701226 1 recorder.go:75] Recording config/clusteroperator/operator-lifecycle-manager with fingerprint=3cecd02f62620e2075c214fc47c5aa5bcbcb2c622dda5bc91dde1be5bf1ee2e3
I0319 14:00:10.701255 1 recorder.go:75] Recording config/clusteroperator/operator-lifecycle-manager-catalog with fingerprint=cad16f74229884850048faac6ae271200dfaa2d6c1119ad120d2b0bc4a46544e
I0319 14:00:10.701274 1 recorder.go:75] Recording config/clusteroperator/operator-lifecycle-manager-packageserver with fingerprint=c0eabc4d3dfe077a494a184d26f0991c850f69284c7af0cb5cd4411bdea12df8
I0319 14:00:10.701293 1 recorder.go:75] Recording config/clusteroperator/service-ca with fingerprint=d8dda5c1df29c38427fb7ccd4e4d8240222e608be8596ef0ed98814b2b3f9a43
I0319 14:00:10.701302 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/serviceca/cluster with fingerprint=812f7edc2cdb30e61e7f2b29454357a40b1a507a4b0c2b7729193b67f0e3b4aa
I0319 14:00:10.701325 1 recorder.go:75] Recording config/clusteroperator/storage with fingerprint=9efbd1a99c18120020790d8f879886ab86000f266136e3ee2845055d5570d2e2
I0319 14:00:10.701342 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/clustercsidriver/ebs.csi.aws.com with fingerprint=510064d6f6bcced87ab5bd2ddaff3d0edd7f93f4a4f7af2641f29fc53ffab21e
I0319 14:00:10.701348 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/storage/cluster with fingerprint=8e480f8c1ce1b39baac42d8ec780c57c2592929ae0c801b61ffad49ba13f33ad
I0319 14:00:10.701355 1 gather.go:177] gatherer "clusterconfig" function "operators" took 2.291451242s to process 37 records
I0319 14:00:10.753556 1 gather_cluster_operator_pods_and_events.go:280] Error: "log buffer is empty"
I0319 14:00:10.753572 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for registry container image-registry-68c89d6b74-j8qv9 pod in namespace openshift-image-registry (previous: false).
I0319 14:00:10.956446 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for image-registry-68c89d6b74-j8qv9 pod in namespace openshift-image-registry for failing operator registry (previous: false): "container \"registry\" in pod \"image-registry-68c89d6b74-j8qv9\" is waiting to start: ContainerCreating"
I0319 14:00:10.956462 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"registry\" in pod \"image-registry-68c89d6b74-j8qv9\" is waiting to start: ContainerCreating"
I0319 14:00:10.956473 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for registry container image-registry-977ff5569-cmvvc pod in namespace openshift-image-registry (previous: false).
I0319 14:00:11.164281 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for image-registry-977ff5569-cmvvc pod in namespace openshift-image-registry for failing operator registry (previous: false): "container \"registry\" in pod \"image-registry-977ff5569-cmvvc\" is waiting to start: ContainerCreating"
I0319 14:00:11.164301 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"registry\" in pod \"image-registry-977ff5569-cmvvc\" is waiting to start: ContainerCreating"
I0319 14:00:11.164312 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for registry container image-registry-977ff5569-cp25r pod in namespace openshift-image-registry (previous: false).
I0319 14:00:11.352735 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for image-registry-977ff5569-cp25r pod in namespace openshift-image-registry for failing operator registry (previous: false): "container \"registry\" in pod \"image-registry-977ff5569-cp25r\" is waiting to start: ContainerCreating"
I0319 14:00:11.352755 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"registry\" in pod \"image-registry-977ff5569-cp25r\" is waiting to start: ContainerCreating"
I0319 14:00:11.352771 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for node-ca container node-ca-nfqxc pod in namespace openshift-image-registry (previous: false).
W0319 14:00:11.498381 1 gather_dvo_metrics.go:210] Failed to read the DVO metrics. Trying again.
I0319 14:00:11.552462 1 gather_cluster_operator_pods_and_events.go:280] Error: "log buffer is empty"
I0319 14:00:11.552488 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for node-ca container node-ca-sxvzv pod in namespace openshift-image-registry (previous: false).
I0319 14:00:11.752389 1 gather_cluster_operator_pods_and_events.go:280] Error: "log buffer is empty"
I0319 14:00:11.752412 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for node-ca container node-ca-wh8ds pod in namespace openshift-image-registry (previous: false).
I0319 14:00:11.952938 1 gather_cluster_operator_pods_and_events.go:280] Error: "log buffer is empty"
I0319 14:00:11.952961 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for router container router-default-55957fc79-sn68r pod in namespace openshift-ingress (previous: false).
I0319 14:00:12.152647 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for router-default-55957fc79-sn68r pod in namespace openshift-ingress for failing operator router (previous: false): "container \"router\" in pod \"router-default-55957fc79-sn68r\" is waiting to start: ContainerCreating"
I0319 14:00:12.152671 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"router\" in pod \"router-default-55957fc79-sn68r\" is waiting to start: ContainerCreating"
I0319 14:00:12.152687 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for router container router-default-75d6856fff-p6qk6 pod in namespace openshift-ingress (previous: false).
I0319 14:00:12.352959 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for router-default-75d6856fff-p6qk6 pod in namespace openshift-ingress for failing operator router (previous: false): "container \"router\" in pod \"router-default-75d6856fff-p6qk6\" is waiting to start: ContainerCreating"
I0319 14:00:12.352981 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"router\" in pod \"router-default-75d6856fff-p6qk6\" is waiting to start: ContainerCreating"
I0319 14:00:12.352996 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for router container router-default-75d6856fff-qc88t pod in namespace openshift-ingress (previous: false).
W0319 14:00:12.497607 1 gather_dvo_metrics.go:210] Failed to read the DVO metrics. Trying again.
I0319 14:00:12.554109 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for router-default-75d6856fff-qc88t pod in namespace openshift-ingress for failing operator router (previous: false): "container \"router\" in pod \"router-default-75d6856fff-qc88t\" is waiting to start: ContainerCreating"
I0319 14:00:12.554127 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"router\" in pod \"router-default-75d6856fff-qc88t\" is waiting to start: ContainerCreating"
I0319 14:00:12.554138 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for serve-healthcheck-canary container ingress-canary-4xt2m pod in namespace openshift-ingress-canary (previous: false).
I0319 14:00:12.768708 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for ingress-canary-4xt2m pod in namespace openshift-ingress-canary for failing operator serve-healthcheck-canary (previous: false): "container \"serve-healthcheck-canary\" in pod \"ingress-canary-4xt2m\" is waiting to start: ContainerCreating"
I0319 14:00:12.768730 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"serve-healthcheck-canary\" in pod \"ingress-canary-4xt2m\" is waiting to start: ContainerCreating"
I0319 14:00:12.768741 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for serve-healthcheck-canary container ingress-canary-5mbrw pod in namespace openshift-ingress-canary (previous: false).
I0319 14:00:12.955555 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for ingress-canary-5mbrw pod in namespace openshift-ingress-canary for failing operator serve-healthcheck-canary (previous: false): "container \"serve-healthcheck-canary\" in pod \"ingress-canary-5mbrw\" is waiting to start: ContainerCreating"
I0319 14:00:12.955574 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"serve-healthcheck-canary\" in pod \"ingress-canary-5mbrw\" is waiting to start: ContainerCreating"
I0319 14:00:12.955587 1 gather_cluster_operator_pods_and_events.go:365] Fetching logs for serve-healthcheck-canary container ingress-canary-dkpk2 pod in namespace openshift-ingress-canary (previous: false).
I0319 14:00:13.151774 1 gather_cluster_operator_pods_and_events.go:408] Failed to fetch log for ingress-canary-dkpk2 pod in namespace openshift-ingress-canary for failing operator serve-healthcheck-canary (previous: false): "container \"serve-healthcheck-canary\" in pod \"ingress-canary-dkpk2\" is waiting to start: ContainerCreating"
I0319 14:00:13.151793 1 gather_cluster_operator_pods_and_events.go:280] Error: "container \"serve-healthcheck-canary\" in pod \"ingress-canary-dkpk2\" is waiting to start: ContainerCreating"
I0319 14:00:13.151813 1 tasks_processing.go:74] worker 3 stopped.
I0319 14:00:13.151973 1 recorder.go:75] Recording events/openshift-dns with fingerprint=1e08379811a25be9ccc4e3082f867dbfc98e23f0a835a105e388ce7ae08574bb
I0319 14:00:13.152145 1 recorder.go:75] Recording events/openshift-image-registry with fingerprint=8a8e01cc2eaa58a85aa5d650645c77c47cfb9f0b8bc6e219b2da34edf3cf578a
I0319 14:00:13.152202 1 recorder.go:75] Recording events/openshift-ingress-operator with fingerprint=f5bafa2caf4a0d05c0e66a249b61d4aff58276bf7fddac3ec203f2a3b4ce88cd
I0319 14:00:13.152303 1 recorder.go:75] Recording events/openshift-ingress with fingerprint=1066886896550e0fbc23e878fc998e966d610367e4e0fd6e903fc939f45387b2
I0319 14:00:13.152331 1 recorder.go:75] Recording events/openshift-ingress-canary with fingerprint=a86bff0ec4167f9dbf1d24b09d03eb79f76eb95d3b2534e700ea3c86e55e071b
I0319 14:00:13.152344 1 gather.go:177] gatherer "clusterconfig" function "operators_pods_and_events" took 4.72472457s to process 5 records
W0319 14:00:13.497187 1 gather_dvo_metrics.go:210] Failed to read the DVO metrics. Trying again.
W0319 14:00:13.497221 1 gather_dvo_metrics.go:117] Unable to read metrics from endpoint "http://deployment-validation-operator-metrics.openshift-deployment-validation-operator.svc:8383": DVO metrics service was not available within the 5s timeout: context deadline exceeded
I0319 14:00:13.497255 1 tasks_processing.go:74] worker 18 stopped.
E0319 14:00:13.497273 1 gather.go:140] gatherer "clusterconfig" function "dvo_metrics" failed with the error: DVO metrics service was not available within the 5s timeout: context deadline exceeded
I0319 14:00:13.497292 1 recorder.go:75] Recording config/dvo_metrics with fingerprint=e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
W0319 14:00:13.497307 1 gather.go:155] issue recording gatherer "clusterconfig" function "dvo_metrics" result "config/dvo_metrics" because of the warning: warning: the record with the same fingerprint "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855" was already recorded at path "config/configmaps/openshift-config/openshift-service-ca.crt/service-ca.crt", recording another one with a different path "config/dvo_metrics"
I0319 14:00:13.497337 1 gather.go:177] gatherer "clusterconfig" function "dvo_metrics" took 5.066716979s to process 1 records
I0319 14:00:20.844646 1 tasks_processing.go:74] worker 11 stopped.
I0319 14:00:20.844683 1 recorder.go:75] Recording config/installplans with fingerprint=7b887df561a3a9e6ef0dc672845aa5d56e348505006b7496d3a2f83892b0c95b
I0319 14:00:20.844696 1 gather.go:177] gatherer "clusterconfig" function "install_plans" took 12.443470704s to process 1 records
I0319 14:00:21.328173 1 configmapobserver.go:84] configmaps "insights-config" not found
I0319 14:00:21.839050 1 tasks_processing.go:74] worker 9 stopped.
I0319 14:00:21.839326 1 recorder.go:75] Recording config/serviceaccounts with fingerprint=3ecf7700bcefa3b220b7a57276431dc4453af819271aad3c4cc2173bbf6c5832
I0319 14:00:21.839343 1 gather.go:177] gatherer "clusterconfig" function "service_accounts" took 13.40888256s to process 1 records
E0319 14:00:21.839396 1 periodic.go:247] "Unhandled Error" err="clusterconfig failed after 13.439s with: function \"machine_healthchecks\" failed with an error, function \"pod_network_connectivity_checks\" failed with an error, function \"support_secret\" failed with an error, function \"machines\" failed with an error, function \"config_maps\" failed with an error, function \"ingress_certificates\" failed with an error, function \"dvo_metrics\" failed with an error"
I0319 14:00:21.840509 1 controllerstatus.go:89] name=periodic-clusterconfig healthy=false reason=PeriodicGatherFailed message=Source clusterconfig could not be retrieved: function "machine_healthchecks" failed with an error, function "pod_network_connectivity_checks" failed with an error, function "support_secret" failed with an error, function "machines" failed with an error, function "config_maps" failed with an error, function "ingress_certificates" failed with an error, function "dvo_metrics" failed with an error
I0319 14:00:21.840526 1 periodic.go:209] Running workloads gatherer
I0319 14:00:21.840548 1 tasks_processing.go:45] number of workers: 2
I0319 14:00:21.840564 1 tasks_processing.go:69] worker 1 listening for tasks.
I0319 14:00:21.840570 1 tasks_processing.go:71] worker 1 working on workload_info task.
I0319 14:00:21.840576 1 tasks_processing.go:69] worker 0 listening for tasks.
I0319 14:00:21.840590 1 tasks_processing.go:71] worker 0 working on helmchart_info task.
I0319 14:00:21.866284 1 gather_workloads_info.go:278] Loaded pods in 0s, will wait 22s for image data
I0319 14:00:21.874902 1 gather_workloads_info.go:387] No image sha256:04c87c054a3f366a7dfbe0a93ebb0c80a098ee16842c5794b67c1202eec61996 (10ms)
I0319 14:00:21.876512 1 tasks_processing.go:74] worker 0 stopped.
I0319 14:00:21.876530 1 gather.go:177] gatherer "workloads" function "helmchart_info" took 35.908609ms to process 0 records
I0319 14:00:21.882962 1 gather_workloads_info.go:387] No image sha256:765f0d23b637f685f98a31bd47c131b03cf72a40761a3f9a9d6320faa3c33733 (8ms)
I0319 14:00:21.890486 1 gather_workloads_info.go:387] No image sha256:2904a78e2eb73fd6a9bb94c105c2a056831fb4113fbb7b0607c50adc9d879c9b (8ms)
I0319 14:00:21.898449 1 gather_workloads_info.go:387] No image sha256:c15ca0c0ad60fe8757c2d5d1723fcdd7a1ed6c0251a90d22a7e6cae6811d01aa (8ms)
I0319 14:00:21.906321 1 gather_workloads_info.go:387] No image sha256:289816958633a763a72dbc44e1dad40466223164e7e253039514f0d974ea5d21 (8ms)
I0319 14:00:21.913870 1 gather_workloads_info.go:387] No image sha256:653c666f842c13e0baae2e89a9b1efe0e2ef56f621ffb5b32005115d2a26ab8c (8ms)
I0319 14:00:21.921492 1 gather_workloads_info.go:387] No image sha256:56a85660a445eced5c79a595a0eccf590087c5672d50f49d4c25ad52f9a44f04 (8ms)
I0319 14:00:21.928577 1 gather_workloads_info.go:387] No image sha256:521712486e2c6e3c020dad6a1cb340db8e55665b69f7c208fab9cd9e965fd588 (7ms)
I0319 14:00:21.935615 1 gather_workloads_info.go:387] No image sha256:a258c226562adb14e3a163a1940938526ee6a0928982a7667d85d9a7334ce639 (7ms)
I0319 14:00:21.943484 1 gather_workloads_info.go:387] No image sha256:7b31223098f08328f5ddea8e5b871dbbd5f5a61ec550e8956f66793c0c6031a9 (8ms)
I0319 14:00:21.973662 1 gather_workloads_info.go:387] No image sha256:2598489729a4b258e4ecda4a06f6875133f2a10ced5c5241f8a57a8a05418e36 (30ms)
I0319 14:00:22.075262 1 gather_workloads_info.go:387] No image sha256:695cf2f0cc07683c2a3ce1eaf3e56fe18abc6e2bac716f7d9843f5d173b9df52 (102ms)
I0319 14:00:22.177355 1 gather_workloads_info.go:387] No image sha256:a498046d64605bcccee2440aa4f04a4602baaae263cf01d977ec5208e876b1fd (102ms)
I0319 14:00:22.273860 1 gather_workloads_info.go:387] No image sha256:2e564f336c77116053f34d4201d364d8da04e789cfffa0ea422574c95f2d6404 (96ms)
I0319 14:00:22.374149 1 gather_workloads_info.go:387] No image sha256:2e57e192c3c1240fd935dcd55c8fde5e70e78bf81d6176c96edf21fafe59f8ba (100ms)
I0319 14:00:22.474508 1 gather_workloads_info.go:387] No image sha256:b3909bf664c77097f75b3768830863d642eed3815dab2bfb4415c771ca2d5007 (100ms)
I0319 14:00:22.573867 1 gather_workloads_info.go:387] No image sha256:5a95c19d82767e0235b4edb4a0536482c816904897aae1dc3eb255cb52b87a9f (99ms)
I0319 14:00:22.677504 1 gather_workloads_info.go:387] No image sha256:4556896f77307821531ef91b7b7faccb82b824ea695693b2989f597f0deca038 (104ms)
I0319 14:00:22.774196 1 gather_workloads_info.go:387] No image sha256:a0105d1eb62cf6ac9e5e2ef28d3e89bf6dc514bc594fc7090fe5a5ee18a09c87 (97ms)
I0319 14:00:22.808238 1 configmapobserver.go:84] configmaps "insights-config" not found
I0319 14:00:22.873795 1 gather_workloads_info.go:387] No image sha256:e84cb128d930bd1ab867cc89b7b7bf2b2c0e41105ab93b5381069945b3ee9c57 (100ms)
I0319 14:00:22.974397 1 gather_workloads_info.go:387] No image sha256:a56211d075aa43cbb491f669a5b2e46ee023dc95b7d51dbac28f463948c5ad61 (101ms)
I0319 14:00:23.010784 1 configmapobserver.go:84] configmaps "insights-config" not found
I0319 14:00:23.073925 1 gather_workloads_info.go:387] No image sha256:0a99240166165eb5718e7516a43282fe32df9c7c5e809b31b58abe44e42ff94d (100ms)
I0319 14:00:23.173999 1 gather_workloads_info.go:387] No image sha256:47154813651033d59751fb655a384dbffb64dd26f10bd7f3be0c3128d0486356 (100ms)
I0319 14:00:23.273457 1 gather_workloads_info.go:387] No image sha256:7adc1eab05d6724c76ba751f6df816b08d6e70b78dee9eb94fa6fd9690542c98 (99ms)
I0319 14:00:23.374683 1 gather_workloads_info.go:387] No image sha256:1a2532940843248c57d52141185dd71fbc393ab28b65d48f682038632c1dbbad (101ms)
I0319 14:00:23.481553 1 gather_workloads_info.go:387] No image sha256:36b9e89c3cfcf1ab9ae500486e38afb6862cba48cb0b4d84a09508ab8f3d299f (107ms)
I0319 14:00:23.574435 1 gather_workloads_info.go:387] No image sha256:ca1344cb64140188b7cae7bbc51fb751566c0b0c97d5e39b5850e628032c4a5e (93ms)
I0319 14:00:23.673922 1 gather_workloads_info.go:387] No image sha256:03cf4cd7ef1518610c6c7b3ad27d1622d82e98e3dc6e3f8e5d0fceb5c8d3786e (99ms)
I0319 14:00:23.774381 1 gather_workloads_info.go:387] No image sha256:5f0b67cfbbc381243fb91ccc17345b56d05f4d717c667e8c644e5bf05633ba71 (100ms)
I0319 14:00:23.874277 1 gather_workloads_info.go:387] No image sha256:c940ea87e7d133d75ba0002ef00c0806825eed3db8094cdb260d1bac18127733 (100ms)
I0319 14:00:23.974170 1 gather_workloads_info.go:387] No image sha256:943018739e3db1763c3184b460dbc409e058abbac76d57b9927faad317be85e4 (100ms)
I0319 14:00:24.079942 1 gather_workloads_info.go:387] No image sha256:91828234f107c068c8a4966d08370ae7b73e637651dbc6d92c18c4553402c22c (106ms)
I0319 14:00:24.079975 1 tasks_processing.go:74] worker 1 stopped.
E0319 14:00:24.079985 1 gather.go:140] gatherer "workloads" function "workload_info" failed with the error: no running pods found for the insights-runtime-extractor statefulset
I0319 14:00:24.080305 1 recorder.go:75] Recording config/workload_info with fingerprint=0deb9e0582797ef8a984d096fa5a99c047027610071c5bf8c98b88648d060f1d
I0319 14:00:24.080321 1 gather.go:177] gatherer "workloads" function "workload_info" took 2.239396715s to process 1 records
E0319 14:00:24.080357 1 periodic.go:247] "Unhandled Error" err="workloads failed after 2.239s with: function \"workload_info\" failed with an error"
I0319 14:00:24.081459 1 controllerstatus.go:89] name=periodic-workloads healthy=false reason=PeriodicGatherFailed message=Source workloads could not be retrieved: function "workload_info" failed with an error
I0319 14:00:24.081472 1 periodic.go:209] Running conditional gatherer
I0319 14:00:24.087208 1 requests.go:294] Making HTTP GET request at: https://console.redhat.com/api/gathering/v2/4.20.8/gathering_rules
I0319 14:00:24.093576 1 conditional_gatherer.go:107] Get "https://console.redhat.com/api/gathering/v2/4.20.8/gathering_rules": dial tcp: lookup console.redhat.com on 172.30.0.10:53: read udp 10.128.0.13:48603->172.30.0.10:53: read: connection refused
E0319 14:00:24.093804 1 conditional_gatherer.go:322] unable to update alerts cache: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory
I0319 14:00:24.093862 1 conditional_gatherer.go:384] updating version cache for conditional gatherer
I0319 14:00:24.103058 1 conditional_gatherer.go:392] cluster version is '4.20.8'
E0319 14:00:24.103072 1 conditional_gatherer.go:209] error checking conditions for a gathering rule: alerts cache is missing
E0319 14:00:24.103077 1 conditional_gatherer.go:209] error checking conditions for a gathering rule: alerts cache is missing
E0319 14:00:24.103080 1 conditional_gatherer.go:209] error checking conditions for a gathering rule: alerts cache is missing
E0319 14:00:24.103084 1 conditional_gatherer.go:209] error checking conditions for a gathering rule: alerts cache is missing
E0319 14:00:24.103088 1 conditional_gatherer.go:209] error checking conditions for a gathering rule: alerts cache is missing
E0319 14:00:24.103092 1 conditional_gatherer.go:209] error checking conditions for a gathering rule: alerts cache is missing
E0319 14:00:24.103094 1 conditional_gatherer.go:209] error checking conditions for a gathering rule: alerts cache is missing
E0319 14:00:24.103097 1 conditional_gatherer.go:209] error checking conditions for a gathering rule: alerts cache is missing
E0319 14:00:24.103100 1 conditional_gatherer.go:209] error checking conditions for a gathering rule: alerts cache is missing
I0319 14:00:24.103113 1 tasks_processing.go:45] number of workers: 3
I0319 14:00:24.103123 1 tasks_processing.go:69] worker 2 listening for tasks.
I0319 14:00:24.103129 1 tasks_processing.go:71] worker 2 working on remote_configuration task.
I0319 14:00:24.103132 1 tasks_processing.go:69] worker 0 listening for tasks.
I0319 14:00:24.103140 1 tasks_processing.go:69] worker 1 listening for tasks.
I0319 14:00:24.103150 1 tasks_processing.go:74] worker 1 stopped.
I0319 14:00:24.103152 1 tasks_processing.go:71] worker 2 working on conditional_gatherer_rules task.
I0319 14:00:24.103157 1 tasks_processing.go:71] worker 0 working on rapid_container_logs task.
I0319 14:00:24.103195 1 recorder.go:75] Recording insights-operator/remote-configuration with fingerprint=0394430c431eec4d48bb1811a90918e95161d2282c59af26f2473613cc0959db
I0319 14:00:24.103212 1 gather.go:177] gatherer "conditional" function "remote_configuration" took 1.088µs to process 1 records
I0319 14:00:24.103261 1 tasks_processing.go:74] worker 2 stopped.
I0319 14:00:24.103315 1 recorder.go:75] Recording insights-operator/conditional-gatherer-rules with fingerprint=7034af97e7e41c22e4b775abdd4b9066c8ebb19da33eb7f69f39bfd2eb5f6406
I0319 14:00:24.103329 1 gather.go:177] gatherer "conditional" function "conditional_gatherer_rules" took 788ns to process 1 records
I0319 14:00:24.103476 1 tasks_processing.go:74] worker 0 stopped.
I0319 14:00:24.103488 1 gather.go:177] gatherer "conditional" function "rapid_container_logs" took 303.418µs to process 0 records
I0319 14:00:24.103503 1 controllerstatus.go:89] name=periodic-conditional healthy=false reason=NotAvailable message=Get "https://console.redhat.com/api/gathering/v2/4.20.8/gathering_rules": dial tcp: lookup console.redhat.com on 172.30.0.10:53: read udp 10.128.0.13:48603->172.30.0.10:53: read: connection refused
I0319 14:00:24.103517 1 recorder.go:75] Recording insights-operator/remote-configuration.json with fingerprint=359de9c990c741675cec72fda96b5c3682221efdb4799f5eaa6e9805bcd3b5c1
I0319 14:00:24.128370 1 recorder.go:75] Recording insights-operator/gathers with fingerprint=ec609bf0b25d7af5c054c2170b3703b3d785b286e274ac8078fd94d252519210
I0319 14:00:24.128485 1 diskrecorder.go:70] Writing 91 records to /var/lib/insights-operator/insights-2026-03-19-140024.tar.gz
I0319 14:00:24.134187 1 diskrecorder.go:51] Wrote 91 records to disk in 5ms
I0319 14:00:24.134216 1 periodic.go:278] Gathering cluster info every 2h0m0s
I0319 14:00:24.134230 1 periodic.go:279] Configuration is
dataReporting:
    interval: 2h0m0s, uploadEndpoint: https://console.redhat.com/api/ingress/v1/upload, storagePath: /var/lib/insights-operator, downloadEndpoint: https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/reports, conditionalGathererEndpoint: https://console.redhat.com/api/gathering/v2/%s/gathering_rules, obfuscation: []
sca:
    disabled: false, endpoint: https://api.openshift.com/api/accounts_mgmt/v1/entitlement_certificates, interval: 8h0m0s
alerting:
    disabled: false
clusterTransfer:
    endpoint: https://api.openshift.com/api/accounts_mgmt/v1/cluster_transfers/, interval: 12h0m0s
proxy:
    httpProxy: , httpsProxy: , noProxy:
I0319 14:00:31.088443 1 configmapobserver.go:84] configmaps "insights-config" not found
I0319 14:01:37.948377 1 observer_polling.go:111] Observed file "/var/run/secrets/serving-cert/tls.crt" has been created (hash="a7d042b54cbfb67c66485dab2f8223ce951526c8c1c39d7100795c47845ea536")
W0319 14:01:37.948410 1 builder.go:160] Restart triggered because of file /var/run/secrets/serving-cert/tls.crt was created
I0319 14:01:37.948455 1 observer_polling.go:111] Observed file "/var/run/secrets/serving-cert/tls.key" has been created (hash="c7cc47b73e35d9ca63590b3d1333b2949d95f4c2e46a51b308e417cc5986501e")
I0319 14:01:37.948496 1 base_controller.go:181] Shutting down ConfigController ...
I0319 14:01:37.948517 1 periodic.go:170] Shutting down
I0319 14:01:37.948533 1 base_controller.go:181] Shutting down LoggingSyncer ...
I0319 14:01:37.948549 1 base_controller.go:123] Shutting down worker of ConfigController controller ...
I0319 14:01:37.948561 1 observer_polling.go:111] Observed file "/var/run/configmaps/service-ca-bundle/service-ca.crt" has been created (hash="fb1429061eae0434b410632fb648c42f03e5f7e05aa761d7e68f61fc6b5b5170")
I0319 14:01:37.948572 1 base_controller.go:123] Shutting down worker of LoggingSyncer controller ...
I0319 14:01:37.948682 1 base_controller.go:113] All LoggingSyncer workers have been terminated
I0319 14:01:37.948588 1 genericapiserver.go:693] "[graceful-termination] pre-shutdown hooks completed" name="PreShutdownHooksStopped"
E0319 14:01:37.948684 1 controller.go:299] Unable to write cluster operator status: client rate limiter Wait returned an error: context canceled
E0319 14:01:37.948733 1 controller.go:299] Unable to write cluster operator status: client rate limiter Wait returned an error: context canceled
E0319 14:01:37.948770 1 controller.go:299] Unable to write cluster operator status: client rate limiter Wait returned an error: context canceled
I0319 14:01:37.948596 1 simple_featuregate_reader.go:177] Shutting down feature-gate-detector