I0123 13:44:54.780080 1 cmd.go:241] Using service-serving-cert provided certificates
I0123 13:44:54.780351 1 observer_polling.go:159] Starting file observer
I0123 13:44:55.324526 1 operator.go:59] Starting insights-operator v0.0.0-master+$Format:%H$
I0123 13:44:55.324726 1 legacy_config.go:327] Current config: {"report":false,"storagePath":"/var/lib/insights-operator","interval":"2h","endpoint":"https://console.redhat.com/api/ingress/v1/upload","conditionalGathererEndpoint":"https://console.redhat.com/api/gathering/v2/%s/gathering_rules","pull_report":{"endpoint":"https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/reports","delay":"60s","timeout":"3000s","min_retry":"30s"},"impersonate":"system:serviceaccount:openshift-insights:gather","enableGlobalObfuscation":false,"ocm":{"scaEndpoint":"https://api.openshift.com/api/accounts_mgmt/v1/certificates","scaInterval":"8h","scaDisabled":false,"clusterTransferEndpoint":"https://api.openshift.com/api/accounts_mgmt/v1/cluster_transfers/","clusterTransferInterval":"12h"},"disableInsightsAlerts":false,"processingStatusEndpoint":"https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/request/%s/status","reportEndpointTechPreview":"https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/request/%s/report"}
I0123 13:44:55.325524 1 simple_featuregate_reader.go:171] Starting feature-gate-detector
I0123 13:44:55.325616 1 secure_serving.go:57] Forcing use of http/1.1 only
W0123 13:44:55.325647 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.
W0123 13:44:55.325655 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.
W0123 13:44:55.325661 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.
W0123 13:44:55.325667 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.
W0123 13:44:55.325669 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.
W0123 13:44:55.325671 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.
I0123 13:44:55.330336 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-insights", Name:"insights-operator", UID:"a8709cff-17c1-42f7-8e71-07c4e424b625", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'FeatureGatesInitialized' FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallPowerVS", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "ExternalOIDC", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "MetricsServer", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NodeDisruptionPolicy", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "GCPClusterHostedDNS", "GatewayAPI", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImagesAWS", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "MultiArchInstallAzure", "NetworkSegmentation", "NewOLM", "NodeSwap", "OVNObservability", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PersistentIPsForVirtualization", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesSupport", "VSphereMultiVCenters", "VolumeGroupSnapshot"}}
I0123 13:44:55.330386 1 operator.go:124] FeatureGates initialized: knownFeatureGates=[AWSEFSDriverVolumeMetrics AdditionalRoutingCapabilities AdminNetworkPolicy AlibabaPlatform AutomatedEtcdBackup AzureWorkloadIdentity BareMetalLoadBalancer BootcNodeManagement BuildCSIVolumes CSIDriverSharedResource ChunkSizeMiB CloudDualStackNodeIPs ClusterAPIInstall ClusterAPIInstallAWS ClusterAPIInstallAzure ClusterAPIInstallGCP ClusterAPIInstallIBMCloud ClusterAPIInstallNutanix ClusterAPIInstallOpenStack ClusterAPIInstallPowerVS ClusterAPIInstallVSphere ClusterMonitoringConfig DNSNameResolver DisableKubeletCloudCredentialProviders DynamicResourceAllocation EtcdBackendQuota EventedPLEG Example ExternalCloudProvider ExternalCloudProviderAzure ExternalCloudProviderExternal ExternalCloudProviderGCP ExternalOIDC GCPClusterHostedDNS GCPLabelsTags GatewayAPI HardwareSpeed IngressControllerLBSubnetsAWS InsightsConfig InsightsConfigAPI InsightsOnDemandDataGather InstallAlternateInfrastructureAWS KMSv1 MachineAPIMigration MachineAPIOperatorDisableMachineHealthCheckController MachineAPIProviderOpenStack MachineConfigNodes ManagedBootImages ManagedBootImagesAWS MaxUnavailableStatefulSet MetricsCollectionProfiles MetricsServer MixedCPUsAllocation MultiArchInstallAWS MultiArchInstallAzure MultiArchInstallGCP NetworkDiagnosticsConfig NetworkLiveMigration NetworkSegmentation NewOLM NodeDisruptionPolicy NodeSwap OVNObservability OnClusterBuild OpenShiftPodSecurityAdmission PersistentIPsForVirtualization PinnedImages PlatformOperators PrivateHostedZoneAWS ProcMountType RouteAdvertisements RouteExternalCertificate ServiceAccountTokenNodeBinding SetEIPForNLBIngressController SignatureStores SigstoreImageVerification StreamingCollectionEncodingToJSON StreamingCollectionEncodingToProtobuf TranslateStreamCloseWebsocketRequests UpgradeStatus UserNamespacesSupport VSphereControlPlaneMachineSet VSphereDriverConfiguration VSphereMultiVCenters VSphereStaticIPs ValidatingAdmissionPolicy VolumeGroupSnapshot]
I0123 13:44:55.332005 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
I0123 13:44:55.332012 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
I0123 13:44:55.332024 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
I0123 13:44:55.332005 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I0123 13:44:55.332038 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0123 13:44:55.332024 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I0123 13:44:55.332200 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key"
I0123 13:44:55.332306 1 secure_serving.go:213] Serving securely on [::]:8443
I0123 13:44:55.332340 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
W0123 13:44:55.339095 1 configmapobserver.go:64] Cannot get the configuration config map: configmaps "insights-config" not found. Default configuration is used.
I0123 13:44:55.339125 1 secretconfigobserver.go:216] Legacy configuration set: enabled=false endpoint=https://console.redhat.com/api/ingress/v1/upload conditional_gatherer_endpoint=https://console.redhat.com/api/gathering/v2/%s/gathering_rules interval=2h0m0s token=false reportEndpoint=https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/reports initialPollingDelay=1m0s minRetryTime=30s pollingTimeout=50m0s processingStatusEndpoint=https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/request/%s/status
I0123 13:44:55.339154 1 base_controller.go:67] Waiting for caches to sync for ConfigController
I0123 13:44:55.344641 1 secretconfigobserver.go:249] Found cloud.openshift.com token
I0123 13:44:55.344664 1 secretconfigobserver.go:204] Legacy configuration updated: enabled=true endpoint=https://console.redhat.com/api/ingress/v1/upload conditional_gatherer_endpoint=https://console.redhat.com/api/gathering/v2/%s/gathering_rules interval=2h0m0s token=true reportEndpoint=https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/reports initialPollingDelay=1m0s minRetryTime=30s pollingTimeout=50m0s processingStatusEndpoint=https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/request/%s/status
I0123 13:44:55.350172 1 secretconfigobserver.go:119] support secret does not exist
I0123 13:44:55.355289 1 secretconfigobserver.go:249] Found cloud.openshift.com token
I0123 13:44:55.360241 1 secretconfigobserver.go:119] support secret does not exist
I0123 13:44:55.364122 1 recorder.go:161] Pruning old reports every 5h16m21s, max age is 288h0m0s
I0123 13:44:55.370434 1 controllerstatus.go:80] name=insightsuploader healthy=true reason= message=
I0123 13:44:55.370446 1 controllerstatus.go:80] name=insightsreport healthy=true reason= message=
I0123 13:44:55.370450 1 insightsuploader.go:86] Reporting status periodically to https://console.redhat.com/api/ingress/v1/upload every 2h0m0s, starting in 1m30s
I0123 13:44:55.370453 1 insightsreport.go:296] Starting report retriever
I0123 13:44:55.370459 1 insightsreport.go:298] Insights analysis reports will be downloaded from the https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/reports endpoint with a delay of 1m0s
I0123 13:44:55.370479 1 periodic.go:214] Running clusterconfig gatherer
I0123 13:44:55.370545 1 tasks_processing.go:45] number of workers: 64
I0123 13:44:55.370577 1 tasks_processing.go:69] worker 2 listening for tasks.
I0123 13:44:55.370585 1 tasks_processing.go:69] worker 7 listening for tasks.
I0123 13:44:55.370587 1 tasks_processing.go:69] worker 0 listening for tasks.
I0123 13:44:55.370591 1 tasks_processing.go:69] worker 3 listening for tasks.
I0123 13:44:55.370592 1 tasks_processing.go:69] worker 1 listening for tasks.
I0123 13:44:55.370596 1 tasks_processing.go:69] worker 4 listening for tasks.
I0123 13:44:55.370598 1 tasks_processing.go:69] worker 10 listening for tasks.
I0123 13:44:55.370601 1 tasks_processing.go:69] worker 5 listening for tasks.
I0123 13:44:55.370603 1 tasks_processing.go:69] worker 8 listening for tasks.
I0123 13:44:55.370605 1 tasks_processing.go:69] worker 6 listening for tasks.
I0123 13:44:55.370608 1 tasks_processing.go:69] worker 9 listening for tasks.
I0123 13:44:55.370607 1 tasks_processing.go:69] worker 13 listening for tasks.
I0123 13:44:55.370611 1 tasks_processing.go:69] worker 11 listening for tasks.
I0123 13:44:55.370612 1 tasks_processing.go:69] worker 12 listening for tasks.
I0123 13:44:55.370617 1 tasks_processing.go:69] worker 20 listening for tasks.
I0123 13:44:55.370618 1 tasks_processing.go:69] worker 40 listening for tasks.
I0123 13:44:55.370619 1 tasks_processing.go:69] worker 16 listening for tasks.
I0123 13:44:55.370623 1 tasks_processing.go:69] worker 17 listening for tasks.
I0123 13:44:55.370625 1 tasks_processing.go:69] worker 28 listening for tasks.
I0123 13:44:55.370623 1 tasks_processing.go:69] worker 27 listening for tasks.
I0123 13:44:55.370628 1 tasks_processing.go:69] worker 15 listening for tasks.
I0123 13:44:55.370631 1 tasks_processing.go:69] worker 29 listening for tasks.
I0123 13:44:55.370627 1 tasks_processing.go:69] worker 18 listening for tasks.
I0123 13:44:55.370636 1 tasks_processing.go:69] worker 21 listening for tasks.
I0123 13:44:55.370633 1 tasks_processing.go:69] worker 19 listening for tasks.
I0123 13:44:55.370636 1 tasks_processing.go:69] worker 14 listening for tasks.
I0123 13:44:55.370640 1 tasks_processing.go:69] worker 31 listening for tasks.
I0123 13:44:55.370642 1 tasks_processing.go:69] worker 25 listening for tasks.
I0123 13:44:55.370639 1 tasks_processing.go:71] worker 18 working on monitoring_persistent_volumes task.
I0123 13:44:55.370648 1 tasks_processing.go:69] worker 41 listening for tasks.
I0123 13:44:55.370649 1 tasks_processing.go:71] worker 21 working on ceph_cluster task.
I0123 13:44:55.370651 1 tasks_processing.go:69] worker 33 listening for tasks.
I0123 13:44:55.370651 1 tasks_processing.go:71] worker 25 working on openstack_dataplanedeployments task.
I0123 13:44:55.370656 1 tasks_processing.go:69] worker 39 listening for tasks.
I0123 13:44:55.370659 1 tasks_processing.go:71] worker 33 working on oauths task.
I0123 13:44:55.370661 1 tasks_processing.go:69] worker 34 listening for tasks.
I0123 13:44:55.370660 1 tasks_processing.go:69] worker 42 listening for tasks.
I0123 13:44:55.370655 1 tasks_processing.go:69] worker 24 listening for tasks.
I0123 13:44:55.370646 1 tasks_processing.go:69] worker 36 listening for tasks.
I0123 13:44:55.370645 1 tasks_processing.go:71] worker 14 working on certificate_signing_requests task.
I0123 13:44:55.370665 1 tasks_processing.go:69] worker 38 listening for tasks.
I0123 13:44:55.370680 1 tasks_processing.go:71] worker 31 working on openshift_logging task.
I0123 13:44:55.370669 1 tasks_processing.go:69] worker 43 listening for tasks.
I0123 13:44:55.370651 1 tasks_processing.go:69] worker 37 listening for tasks.
I0123 13:44:55.370683 1 tasks_processing.go:69] worker 26 listening for tasks.
I0123 13:44:55.370693 1 tasks_processing.go:71] worker 3 working on metrics task.
I0123 13:44:55.370653 1 tasks_processing.go:71] worker 41 working on sap_config task.
I0123 13:44:55.370698 1 tasks_processing.go:69] worker 55 listening for tasks.
I0123 13:44:55.370673 1 tasks_processing.go:69] worker 23 listening for tasks.
I0123 13:44:55.370695 1 tasks_processing.go:71] worker 7 working on support_secret task.
I0123 13:44:55.370707 1 tasks_processing.go:69] worker 46 listening for tasks.
I0123 13:44:55.370714 1 tasks_processing.go:69] worker 47 listening for tasks.
I0123 13:44:55.370718 1 tasks_processing.go:71] worker 47 working on authentication task.
I0123 13:44:55.370723 1 tasks_processing.go:71] worker 46 working on proxies task.
I0123 13:44:55.370758 1 tasks_processing.go:69] worker 60 listening for tasks.
I0123 13:44:55.370646 1 tasks_processing.go:69] worker 32 listening for tasks.
I0123 13:44:55.370636 1 tasks_processing.go:69] worker 30 listening for tasks.
I0123 13:44:55.370677 1 tasks_processing.go:69] worker 44 listening for tasks.
I0123 13:44:55.370770 1 tasks_processing.go:69] worker 61 listening for tasks.
I0123 13:44:55.370771 1 tasks_processing.go:69] worker 58 listening for tasks.
I0123 13:44:55.370771 1 tasks_processing.go:69] worker 56 listening for tasks.
I0123 13:44:55.370684 1 tasks_processing.go:69] worker 45 listening for tasks.
I0123 13:44:55.370775 1 tasks_processing.go:69] worker 50 listening for tasks.
I0123 13:44:55.370779 1 tasks_processing.go:69] worker 62 listening for tasks.
I0123 13:44:55.370690 1 tasks_processing.go:71] worker 0 working on scheduler_logs task.
I0123 13:44:55.370782 1 tasks_processing.go:69] worker 63 listening for tasks.
I0123 13:44:55.370781 1 tasks_processing.go:69] worker 48 listening for tasks.
I0123 13:44:55.370785 1 tasks_processing.go:71] worker 4 working on feature_gates task.
I0123 13:44:55.370785 1 tasks_processing.go:71] worker 2 working on cost_management_metrics_configs task.
I0123 13:44:55.370790 1 tasks_processing.go:71] worker 43 working on container_images task.
I0123 13:44:55.370791 1 tasks_processing.go:69] worker 52 listening for tasks.
I0123 13:44:55.370792 1 tasks_processing.go:71] worker 44 working on machine_healthchecks task.
I0123 13:44:55.370791 1 tasks_processing.go:71] worker 37 working on image_registries task.
I0123 13:44:55.370791 1 tasks_processing.go:71] worker 48 working on qemu_kubevirt_launcher_logs task.
I0123 13:44:55.370791 1 tasks_processing.go:71] worker 26 working on image_pruners task.
I0123 13:44:55.370801 1 tasks_processing.go:71] worker 58 working on aggregated_monitoring_cr_names task.
I0123 13:44:55.370801 1 tasks_processing.go:71] worker 34 working on container_runtime_configs task.
I0123 13:44:55.370667 1 tasks_processing.go:69] worker 22 listening for tasks.
I0123 13:44:55.370820 1 tasks_processing.go:71] worker 22 working on mutating_webhook_configurations task.
I0123 13:44:55.370854 1 tasks_processing.go:71] worker 56 working on tsdb_status task.
I0123 13:44:55.370768 1 tasks_processing.go:69] worker 51 listening for tasks.
I0123 13:44:55.370952 1 tasks_processing.go:71] worker 51 working on nodenetworkstates task.
I0123 13:44:55.370790 1 tasks_processing.go:71] worker 38 working on install_plans task.
I0123 13:44:55.370774 1 tasks_processing.go:69] worker 49 listening for tasks.
I0123 13:44:55.371002 1 tasks_processing.go:71] worker 49 working on openshift_apiserver_operator_logs task.
I0123 13:44:55.370777 1 tasks_processing.go:71] worker 11 working on olm_operators task.
I0123 13:44:55.370781 1 tasks_processing.go:71] worker 1 working on sap_license_management_logs task.
I0123 13:44:55.370783 1 tasks_processing.go:69] worker 59 listening for tasks.
I0123 13:44:55.371162 1 tasks_processing.go:71] worker 59 working on operators task.
I0123 13:44:55.370778 1 tasks_processing.go:69] worker 57 listening for tasks.
I0123 13:44:55.370786 1 tasks_processing.go:71] worker 62 working on active_alerts task.
I0123 13:44:55.370639 1 tasks_processing.go:71] worker 29 working on crds task.
I0123 13:44:55.370787 1 tasks_processing.go:71] worker 63 working on openstack_controlplanes task.
I0123 13:44:55.371335 1 tasks_processing.go:71] worker 57 working on dvo_metrics task.
I0123 13:44:55.370796 1 tasks_processing.go:71] worker 52 working on sap_datahubs task.
I0123 13:44:55.370796 1 tasks_processing.go:71] worker 39 working on number_of_pods_and_netnamespaces_with_sdn_annotations task.
I0123 13:44:55.370798 1 tasks_processing.go:71] worker 61 working on storage_cluster task.
I0123 13:44:55.370822 1 tasks_processing.go:71] worker 10 working on validating_webhook_configurations task.
I0123 13:44:55.370823 1 tasks_processing.go:69] worker 53 listening for tasks.
I0123 13:44:55.370823 1 tasks_processing.go:71] worker 55 working on overlapping_namespace_uids task.
I0123 13:44:55.371644 1 tasks_processing.go:71] worker 53 working on schedulers task.
I0123 13:44:55.370826 1 tasks_processing.go:71] worker 5 working on sap_pods task.
I0123 13:44:55.370827 1 tasks_processing.go:71] worker 23 working on infrastructures task.
I0123 13:44:55.370829 1 tasks_processing.go:69] worker 54 listening for tasks.
I0123 13:44:55.371801 1 tasks_processing.go:71] worker 54 working on operators_pods_and_events task.
I0123 13:44:55.370830 1 tasks_processing.go:71] worker 8 working on openshift_authentication_logs task.
I0123 13:44:55.370829 1 tasks_processing.go:71] worker 24 working on nodes task.
I0123 13:44:55.370834 1 tasks_processing.go:71] worker 6 working on openstack_version task.
I0123 13:44:55.370834 1 tasks_processing.go:71] worker 16 working on cluster_apiserver task.
I0123 13:44:55.370833 1 tasks_processing.go:71] worker 36 working on kube_controller_manager_logs task.
I0123 13:44:55.370834 1 tasks_processing.go:71] worker 60 working on image task.
I0123 13:44:55.370833 1 tasks_processing.go:71] worker 45 working on networks task.
I0123 13:44:55.370644 1 tasks_processing.go:71] worker 19 working on openstack_dataplanenodesets task.
I0123 13:44:55.370834 1 tasks_processing.go:71] worker 42 working on silenced_alerts task.
I0123 13:44:55.370838 1 tasks_processing.go:71] worker 12 working on pdbs task.
I0123 13:44:55.370838 1 tasks_processing.go:71] worker 9 working on ingress task.
I0123 13:44:55.370840 1 tasks_processing.go:71] worker 28 working on pod_network_connectivity_checks task.
I0123 13:44:55.370839 1 tasks_processing.go:71] worker 32 working on version task.
I0123 13:44:55.370840 1 tasks_processing.go:71] worker 30 working on config_maps task.
I0123 13:44:55.370840 1 tasks_processing.go:71] worker 13 working on machine_config_pools task.
I0123 13:44:55.370642 1 tasks_processing.go:69] worker 35 listening for tasks.
I0123 13:44:55.370843 1 tasks_processing.go:71] worker 20 working on storage_classes task.
I0123 13:44:55.373484 1 tasks_processing.go:71] worker 35 working on machine_autoscalers task.
I0123 13:44:55.370844 1 tasks_processing.go:71] worker 17 working on lokistack task.
I0123 13:44:55.370844 1 tasks_processing.go:71] worker 27 working on ingress_certificates task.
I0123 13:44:55.370845 1 tasks_processing.go:71] worker 50 working on nodenetworkconfigurationpolicies task.
I0123 13:44:55.370846 1 tasks_processing.go:71] worker 40 working on jaegers task.
I0123 13:44:55.370847 1 tasks_processing.go:71] worker 15 working on machines task.
I0123 13:44:55.374434 1 tasks_processing.go:71] worker 25 working on openshift_machine_api_events task.
I0123 13:44:55.374456 1 gather.go:180] gatherer "clusterconfig" function "openstack_dataplanedeployments" took 3.772841ms to process 0 records
E0123 13:44:55.375880 1 gather_prometheus_tsdb_status.go:49] Unable to tsdb status: Get "https://prometheus-k8s.openshift-monitoring.svc:9091/api/v1/status/tsdb": dial tcp: lookup prometheus-k8s.openshift-monitoring.svc on 172.30.0.10:53: read udp 10.130.0.8:41649->172.30.0.10:53: read: connection refused
I0123 13:44:55.375901 1 tasks_processing.go:71] worker 56 working on node_logs task.
E0123 13:44:55.375893 1 gather_most_recent_metrics.go:87] Unable to retrieve most recent metrics: Get "https://prometheus-k8s.openshift-monitoring.svc:9091/federate?match%5B%5D=cluster_installer&match%5B%5D=namespace%3Acontainer_cpu_usage%3Asum&match%5B%5D=namespace%3Acontainer_memory_usage_bytes%3Asum&match%5B%5D=vsphere_node_hw_version_total&match%5B%5D=virt_platform&match%5B%5D=console_helm_installs_total&match%5B%5D=console_helm_upgrades_total&match%5B%5D=console_helm_uninstalls_total&match%5B%5D=openshift_apps_deploymentconfigs_strategy_total&match%5B%5D=etcd_server_slow_apply_total&match%5B%5D=etcd_server_slow_read_indexes_total&match%5B%5D=haproxy_exporter_server_threshold": dial tcp: lookup prometheus-k8s.openshift-monitoring.svc on 172.30.0.10:53: read udp 10.130.0.8:41649->172.30.0.10:53: read: connection refused
E0123 13:44:55.375908 1 gather.go:143] gatherer "clusterconfig" function "tsdb_status" failed with the error: Get "https://prometheus-k8s.openshift-monitoring.svc:9091/api/v1/status/tsdb": dial tcp: lookup prometheus-k8s.openshift-monitoring.svc on 172.30.0.10:53: read udp 10.130.0.8:41649->172.30.0.10:53: read: connection refused
I0123 13:44:55.375931 1 gather.go:180] gatherer "clusterconfig" function "tsdb_status" took 5.027413ms to process 0 records
E0123 13:44:55.375975 1 gather.go:143] gatherer "clusterconfig" function "metrics" failed with the error: Get "https://prometheus-k8s.openshift-monitoring.svc:9091/federate?match%5B%5D=cluster_installer&match%5B%5D=namespace%3Acontainer_cpu_usage%3Asum&match%5B%5D=namespace%3Acontainer_memory_usage_bytes%3Asum&match%5B%5D=vsphere_node_hw_version_total&match%5B%5D=virt_platform&match%5B%5D=console_helm_installs_total&match%5B%5D=console_helm_upgrades_total&match%5B%5D=console_helm_uninstalls_total&match%5B%5D=openshift_apps_deploymentconfigs_strategy_total&match%5B%5D=etcd_server_slow_apply_total&match%5B%5D=etcd_server_slow_read_indexes_total&match%5B%5D=haproxy_exporter_server_threshold": dial tcp: lookup prometheus-k8s.openshift-monitoring.svc on 172.30.0.10:53: read udp 10.130.0.8:41649->172.30.0.10:53: read: connection refused
I0123 13:44:55.375997 1 gather.go:180] gatherer "clusterconfig" function "metrics" took 5.211522ms to process 0 records
I0123 13:44:55.376014 1 tasks_processing.go:71] worker 3 working on clusterroles task.
I0123 13:44:55.376032 1 gather.go:180] gatherer "clusterconfig" function "ceph_cluster" took 5.373896ms to process 0 records
I0123 13:44:55.376046 1 tasks_processing.go:71] worker 21 working on machine_sets task.
E0123 13:44:55.376228 1 gather_active_alerts.go:64] Unable to retrieve most recent alerts: Get "https://alertmanager-main.openshift-monitoring.svc:9094/api/v2/alerts?active=true": dial tcp: lookup alertmanager-main.openshift-monitoring.svc on 172.30.0.10:53: read udp 10.130.0.8:53173->172.30.0.10:53: read: connection refused
E0123 13:44:55.376232 1 gather_silenced_alerts.go:52] Unable to retrieve silenced alerts: Get "https://alertmanager-main.openshift-monitoring.svc:9094/api/v2/alerts?active=false&inhibited=false&silenced=true": dial tcp: lookup alertmanager-main.openshift-monitoring.svc on 172.30.0.10:53: read udp 10.130.0.8:53173->172.30.0.10:53: read: connection refused
I0123 13:44:55.376242 1 tasks_processing.go:71] worker 62 working on service_accounts task.
E0123 13:44:55.376247 1 gather.go:143] gatherer "clusterconfig" function "active_alerts" failed with the error: Get "https://alertmanager-main.openshift-monitoring.svc:9094/api/v2/alerts?active=true": dial tcp: lookup alertmanager-main.openshift-monitoring.svc on 172.30.0.10:53: read udp 10.130.0.8:53173->172.30.0.10:53: read: connection refused
I0123 13:44:55.376262 1 gather.go:180] gatherer "clusterconfig" function "active_alerts" took 5.04045ms to process 0 records
E0123 13:44:55.376270 1 gather.go:143] gatherer "clusterconfig" function "silenced_alerts" failed with the error: Get "https://alertmanager-main.openshift-monitoring.svc:9094/api/v2/alerts?active=false&inhibited=false&silenced=true": dial tcp: lookup alertmanager-main.openshift-monitoring.svc on 172.30.0.10:53: read udp 10.130.0.8:53173->172.30.0.10:53: read: connection refused
I0123 13:44:55.376279 1 gather.go:180] gatherer "clusterconfig" function "silenced_alerts" took 3.725769ms to process 0 records
I0123 13:44:55.376327 1 tasks_processing.go:71] worker 42 working on machine_configs task.
I0123 13:44:55.376803 1 controller.go:119] Initializing last reported time to 0001-01-01T00:00:00Z
I0123 13:44:55.376817 1 controller.go:317] The initial operator extension status is healthy
I0123 13:44:55.376825 1 controller.go:203] Source periodic-clusterconfig *controllerstatus.Simple is not ready
I0123 13:44:55.376832 1 controller.go:203] Source periodic-conditional *controllerstatus.Simple is not ready
I0123 13:44:55.376834 1 controller.go:203] Source periodic-workloads *controllerstatus.Simple is not ready
I0123 13:44:55.376847 1 controller.go:457] The operator is still being initialized
I0123 13:44:55.376853 1 controller.go:482] The operator is healthy
I0123 13:44:55.376918 1 sca.go:98] Pulling SCA certificates from https://api.openshift.com/api/accounts_mgmt/v1/certificates. Next check is in 8h0m0s
I0123 13:44:55.376998 1 cluster_transfer.go:78] checking the availability of cluster transfer. Next check is in 12h0m0s
W0123 13:44:55.377014 1 operator.go:286] started
I0123 13:44:55.377034 1 base_controller.go:67] Waiting for caches to sync for LoggingSyncer
I0123 13:44:55.379590 1 tasks_processing.go:74] worker 34 stopped.
I0123 13:44:55.379606 1 gather.go:180] gatherer "clusterconfig" function "container_runtime_configs" took 8.778287ms to process 0 records
I0123 13:44:55.380787 1 tasks_processing.go:74] worker 63 stopped.
I0123 13:44:55.380802 1 gather.go:180] gatherer "clusterconfig" function "openstack_controlplanes" took 9.48973ms to process 0 records
I0123 13:44:55.381854 1 gather_logs.go:145] no pods in openshift-kube-scheduler namespace were found
I0123 13:44:55.381869 1 tasks_processing.go:74] worker 0 stopped.
I0123 13:44:55.381875 1 gather.go:180] gatherer "clusterconfig" function "scheduler_logs" took 11.079495ms to process 0 records
I0123 13:44:55.383089 1 gather_logs.go:145] no pods in openshift-apiserver-operator namespace were found
I0123 13:44:55.383103 1 tasks_processing.go:74] worker 49 stopped.
I0123 13:44:55.383112 1 gather.go:180] gatherer "clusterconfig" function "openshift_apiserver_operator_logs" took 12.09464ms to process 0 records
I0123 13:44:55.383174 1 tasks_processing.go:74] worker 22 stopped.
I0123 13:44:55.383407 1 recorder.go:75] Recording config/mutatingwebhookconfigurations/aws-pod-identity with fingerprint=9b61e943cae8cb76b76a675113ff2027612324c501ea30a1917477bdf4a5c4e5
I0123 13:44:55.383435 1 recorder.go:75] Recording config/mutatingwebhookconfigurations/sre-podimagespec-mutation with fingerprint=f83d402e48f74da255266954cc66a612f002668dc8413a158fc7a9cde16fbd7a
I0123 13:44:55.383451 1 recorder.go:75] Recording config/mutatingwebhookconfigurations/sre-service-mutation with fingerprint=5be2e9e69e2b2d5e18ab46b5c186541d7efa265df4b5f0e5fc63a2c7ad958b18
I0123 13:44:55.383458 1 gather.go:180] gatherer "clusterconfig" function "mutating_webhook_configurations" took 12.302081ms to process 3 records
I0123 13:44:55.383540 1 tasks_processing.go:74] worker 46 stopped.
I0123 13:44:55.383638 1 recorder.go:75] Recording config/proxy with fingerprint=1288a6eca8718b082f0c94dccccd721ecbecb689fd479693afd06e44b95f6f70
I0123 13:44:55.383652 1 gather.go:180] gatherer "clusterconfig" function "proxies" took 12.7984ms to process 1 records
I0123 13:44:55.384426 1 tasks_processing.go:74] worker 7 stopped.
E0123 13:44:55.384437 1 gather.go:143] gatherer "clusterconfig" function "support_secret" failed with the error: secrets "support" not found
I0123 13:44:55.384444 1 gather.go:180] gatherer "clusterconfig" function "support_secret" took 13.713674ms to process 0 records
I0123 13:44:55.384485 1 tasks_processing.go:74] worker 61 stopped.
I0123 13:44:55.384498 1 gather.go:180] gatherer "clusterconfig" function "storage_cluster" took 12.935786ms to process 0 records
I0123 13:44:55.384524 1 tasks_processing.go:74] worker 44 stopped.
E0123 13:44:55.384538 1 gather.go:143] gatherer "clusterconfig" function "machine_healthchecks" failed with the error: machinehealthchecks.machine.openshift.io is forbidden: User "system:serviceaccount:openshift-insights:gather" cannot list resource "machinehealthchecks" in API group "machine.openshift.io" at the cluster scope
I0123 13:44:55.384554 1 gather.go:180] gatherer "clusterconfig" function "machine_healthchecks" took 13.721414ms to process 0 records
I0123 13:44:55.384584 1 tasks_processing.go:74] worker 14 stopped.
I0123 13:44:55.384600 1 gather.go:180] gatherer "clusterconfig" function "certificate_signing_requests" took 13.90374ms to process 0 records
I0123 13:44:55.385116 1 tasks_processing.go:74] worker 41 stopped.
I0123 13:44:55.385130 1 gather.go:180] gatherer "clusterconfig" function "sap_config" took 14.411352ms to process 0 records
I0123 13:44:55.385360 1 tasks_processing.go:74] worker 5 stopped.
I0123 13:44:55.385368 1 gather.go:180] gatherer "clusterconfig" function "sap_pods" took 13.60843ms to process 0 records
I0123 13:44:55.386125 1 tasks_processing.go:74] worker 37 stopped.
I0123 13:44:55.386530 1 recorder.go:75] Recording config/clusteroperator/imageregistry.operator.openshift.io/config/cluster with fingerprint=32980011eb4b0cb4bc9d5cba38daef567e73e3a08e7bfec9277ef83ff98ffb8b
I0123 13:44:55.386543 1 gather.go:180] gatherer "clusterconfig" function "image_registries" took 15.319312ms to process 1 records
I0123 13:44:55.386550 1 gather.go:180] gatherer "clusterconfig" function "openstack_dataplanenodesets" took 13.663394ms to process 0 records
I0123 13:44:55.386554 1 tasks_processing.go:74] worker 19 stopped.
I0123 13:44:55.387300 1 tasks_processing.go:74] worker 33 stopped.
I0123 13:44:55.387505 1 recorder.go:75] Recording config/oauth with fingerprint=a9cfadc045028e4390559e49c3030cef5391e8d5e20d285266b6c7236ebd2c17
I0123 13:44:55.387519 1 gather.go:180] gatherer "clusterconfig" function "oauths" took 16.63543ms to process 1 records
I0123 13:44:55.388621 1 tasks_processing.go:74] worker 10 stopped.
I0123 13:44:55.388696 1 recorder.go:75] Recording config/validatingwebhookconfigurations/alertmanagerconfigs.openshift.io with fingerprint=fdf2eaba4c4e190d122eac0b7123e1e93fb247782a1abb1dcd070bd0bfdaa0d4
I0123 13:44:55.388726 1 recorder.go:75] Recording config/validatingwebhookconfigurations/multus.openshift.io with fingerprint=f95b172ba0b96af94b5ad8aed5f6fb69082d239b4db8dacb290557a8bb444553
I0123 13:44:55.388781 1 recorder.go:75] Recording config/validatingwebhookconfigurations/network-node-identity.openshift.io with fingerprint=9ecff7d335c44be4f51426a8b57aae5d723c61704abda866b14aa38d704f99f1
I0123 13:44:55.388800 1 recorder.go:75] Recording config/validatingwebhookconfigurations/performance-addon-operator with fingerprint=8227c0cc8672ae49ffe7383d33f9a7d432769dd0dcf4c06dcdd4a4b6856c1476
I0123 13:44:55.388812 1 recorder.go:75] Recording config/validatingwebhookconfigurations/prometheusrules.openshift.io with fingerprint=97e99de6748f574c6e2373bf30d489efa4949eaf93b33a516184267a0684b079
I0123 13:44:55.388828 1 recorder.go:75] Recording config/validatingwebhookconfigurations/snapshot.storage.k8s.io with fingerprint=64ed5546e9ed5e10253a6ace93e23edac53f5da7b8ecae58af8ba74d4f36a757
I0123 13:44:55.388843 1 recorder.go:75] Recording config/validatingwebhookconfigurations/sre-clusterrolebindings-validation with fingerprint=892902afcfc3f30411e06305331e20adfd3d07a0e0b612743959c469b6ba3887
I0123 13:44:55.388856 1 recorder.go:75] Recording config/validatingwebhookconfigurations/sre-clusterroles-validation with fingerprint=00de04a7bebf625334a9154a06b9ed9bc6276530a9e822881a92dbaa34237ef2
I0123 13:44:55.388879 1 recorder.go:75] Recording config/validatingwebhookconfigurations/sre-ingress-config-validation with fingerprint=299bb861af066d6ae30e3feaf6a3528fb2174ed73d8324d57728345f46e748e1
I0123 13:44:55.388899 1 recorder.go:75] Recording config/validatingwebhookconfigurations/sre-regular-user-validation with fingerprint=d4257ee965b5c48a988a2ad05c8b642eff731937724a15c52523206782ac1009
I0123 13:44:55.388914 1 recorder.go:75] Recording config/validatingwebhookconfigurations/sre-scc-validation with fingerprint=efe53b4c4b3478279d57496e7df87fc41d32816e78a777f2d8ccfb726fcac242
I0123 13:44:55.388927 1 recorder.go:75] Recording config/validatingwebhookconfigurations/sre-serviceaccount-validation with fingerprint=9b5547cbb22345574620650d7c7bd4397ae65f07556bc51298e0b84ea739603e
I0123 13:44:55.388944 1 recorder.go:75] Recording config/validatingwebhookconfigurations/sre-techpreviewnoupgrade-validation with fingerprint=c290df24f4c9c2e864b8df3368bfac20db73e3d1988eaf01258faadfb9101987
I0123 13:44:55.388951 1 gather.go:180] gatherer "clusterconfig" function "validating_webhook_configurations" took 16.992705ms to process 13 records
I0123 13:44:55.389149 1 gather_sap_vsystem_iptables_logs.go:60] SAP resources weren't found
I0123 13:44:55.389161 1 tasks_processing.go:74] worker 1 stopped.
I0123 13:44:55.389166 1 gather.go:180] gatherer "clusterconfig" function "sap_license_management_logs" took 18.027768ms to process 0 records
I0123 13:44:55.389363 1 tasks_processing.go:74] worker 6 stopped.
I0123 13:44:55.389371 1 gather.go:180] gatherer "clusterconfig" function "openstack_version" took 17.379388ms to process 0 records
I0123 13:44:55.389626 1 tasks_processing.go:74] worker 52 stopped.
I0123 13:44:55.389638 1 gather.go:180] gatherer "clusterconfig" function "sap_datahubs" took 18.217987ms to process 0 records
I0123 13:44:55.389712 1 tasks_processing.go:74] worker 31 stopped.
I0123 13:44:55.389722 1 gather.go:180] gatherer "clusterconfig" function "openshift_logging" took 19.021191ms to process 0 records
I0123 13:44:55.389818 1 recorder.go:75] Recording config/clusteroperator/imageregistry.operator.openshift.io/imagepruner/cluster with fingerprint=d92af006d678997d52a7c323dcfdfc1998572bb8ccc9f2d16a6320d6dab6b872
I0123 13:44:55.389826 1 tasks_processing.go:74] worker 26 stopped.
I0123 13:44:55.389831 1 gather.go:180] gatherer "clusterconfig" function "image_pruners" took 18.917894ms to process 1 records
I0123 13:44:55.390176 1 tasks_processing.go:74] worker 4 stopped.
I0123 13:44:55.390293 1 recorder.go:75] Recording config/featuregate with fingerprint=a173f70fef66c9cd84c2b2e7e190b4896152fed562c8c1b35df02cad5982017d
I0123 13:44:55.390304 1 gather.go:180] gatherer "clusterconfig" function "feature_gates" took 19.381168ms to process 1 records
I0123 13:44:55.392501 1 gather_logs.go:145] no pods in openshift-authentication namespace were found
I0123 13:44:55.392516 1 tasks_processing.go:74] worker 8 stopped.
I0123 13:44:55.392524 1 gather.go:180] gatherer "clusterconfig" function "openshift_authentication_logs" took 20.700955ms to process 0 records
I0123 13:44:55.392653 1 tasks_processing.go:74] worker 45 stopped.
I0123 13:44:55.392776 1 recorder.go:75] Recording config/network with fingerprint=0757cfd76144ff240a43ebb7ded395dfa3ac5170a0a27bb97703ff9ecaae48a8
I0123 13:44:55.392786 1 gather.go:180] gatherer "clusterconfig" function "networks" took 20.179175ms to process 1 records
I0123 13:44:55.394376 1 tasks_processing.go:74] worker 13 stopped.
I0123 13:44:55.394389 1 gather.go:180] gatherer "clusterconfig" function "machine_config_pools" took 20.89966ms to process 0 records
I0123 13:44:55.394394 1 gather.go:180] gatherer "clusterconfig" function "cost_management_metrics_configs" took 23.579906ms to process 0 records
I0123 13:44:55.394398 1 gather.go:180] gatherer "clusterconfig" function "machine_autoscalers" took 20.881194ms to process 0 records
E0123 13:44:55.394402 1 gather.go:143] gatherer "clusterconfig" function "pod_network_connectivity_checks" failed with the error: the server could not find the requested resource (get podnetworkconnectivitychecks.controlplane.operator.openshift.io)
I0123 13:44:55.394407 1 gather.go:180] gatherer "clusterconfig" function "pod_network_connectivity_checks" took 21.073459ms to process 0 records
I0123 13:44:55.394410 1 gather.go:180] gatherer "clusterconfig" function "nodenetworkstates" took 23.431716ms to process 0 records
I0123 13:44:55.394414 1 tasks_processing.go:74] worker 51 stopped.
I0123 13:44:55.394416 1 tasks_processing.go:74] worker 2 stopped.
I0123 13:44:55.394419 1 tasks_processing.go:74] worker 35 stopped.
I0123 13:44:55.394421 1 tasks_processing.go:74] worker 28 stopped.
I0123 13:44:55.396338 1 tasks_processing.go:74] worker 23 stopped.
I0123 13:44:55.396708 1 recorder.go:75] Recording config/infrastructure with fingerprint=6ecef85d82a4965a4475cb06f3457b3f103677175fa37a04c4cd654f468c4a39
I0123 13:44:55.396720 1 gather.go:180] gatherer "clusterconfig" function "infrastructures" took 24.55896ms to process 1 records
I0123 13:44:55.398281 1 gather_logs.go:145] no pods in openshift-kube-controller-manager namespace were found
I0123 13:44:55.398294 1 tasks_processing.go:74] worker 36 stopped.
I0123 13:44:55.398302 1 gather.go:180] gatherer "clusterconfig" function "kube_controller_manager_logs" took 26.22593ms to process 0 records
W0123 13:44:55.399887 1 gather_dvo_metrics.go:210] Failed to read the DVO metrics. Trying again.
I0123 13:44:55.401115 1 tasks_processing.go:74] worker 21 stopped.
I0123 13:44:55.401132 1 gather.go:180] gatherer "clusterconfig" function "machine_sets" took 25.059721ms to process 0 records
I0123 13:44:55.401223 1 tasks_processing.go:74] worker 60 stopped.
I0123 13:44:55.401336 1 recorder.go:75] Recording config/image with fingerprint=987868b37425688098c82d687fd2c2223ede7ae72a032ebb0bb5eff9fb13293b
I0123 13:44:55.401350 1 gather.go:180] gatherer "clusterconfig" function "image" took 28.95337ms to process 1 records
I0123 13:44:55.403043 1 tasks_processing.go:74] worker 20 stopped.
I0123 13:44:55.403106 1 recorder.go:75] Recording config/storage/storageclasses/gp2-csi with fingerprint=423c74b3ef026bfadd38aa298ba84667b5560689e74798702e12f2bc9a4e09ce
I0123 13:44:55.403124 1 recorder.go:75] Recording config/storage/storageclasses/gp3-csi with fingerprint=4f417e56d3a8a031718379b7473c056cbc70437a4d339a719f110ce5b8c1be8b
I0123 13:44:55.403133 1 gather.go:180] gatherer "clusterconfig" function "storage_classes" took 29.546663ms to process 2 records
I0123 13:44:55.404039 1 tasks_processing.go:74] worker 50 stopped.
I0123 13:44:55.404051 1 gather.go:180] gatherer "clusterconfig" function "nodenetworkconfigurationpolicies" took 30.428059ms to process 0 records
I0123 13:44:55.404059 1 gather.go:180] gatherer "clusterconfig" function "jaegers" took 30.38834ms to process 0 records
I0123 13:44:55.404063 1 gather.go:180] gatherer "clusterconfig" function "lokistack" took 30.478298ms to process 0 records
I0123 13:44:55.404070 1 gather.go:180] gatherer "clusterconfig" function "machine_configs" took 27.734861ms to process 0 records
I0123 13:44:55.404073 1 tasks_processing.go:74] worker 17 stopped.
E0123 13:44:55.404074 1 gather.go:143] gatherer "clusterconfig" function "machines" failed with the error: machines.machine.openshift.io is forbidden: User "system:serviceaccount:openshift-insights:gather" cannot list resource "machines" in API group "machine.openshift.io" at the cluster scope
I0123 13:44:55.404073 1 tasks_processing.go:74] worker 40 stopped.
I0123 13:44:55.404079 1 tasks_processing.go:74] worker 42 stopped.
I0123 13:44:55.404081 1 gather.go:180] gatherer "clusterconfig" function "machines" took 30.404026ms to process 0 records
I0123 13:44:55.404086 1 tasks_processing.go:74] worker 15 stopped.
I0123 13:44:55.410943 1 tasks_processing.go:74] worker 56 stopped.
I0123 13:44:55.410960 1 gather.go:180] gatherer "clusterconfig" function "node_logs" took 35.028783ms to process 0 records
I0123 13:44:55.411469 1 tasks_processing.go:74] worker 53 stopped.
I0123 13:44:55.411541 1 recorder.go:75] Recording config/schedulers/cluster with fingerprint=a32bbfc1c9e291a1c3d1033b69ac4ad2adcfd078213959adaa83fca135c60746
I0123 13:44:55.411565 1 gather.go:180] gatherer "clusterconfig" function "schedulers" took 39.793686ms to process 1 records
I0123 13:44:55.411575 1 gather.go:180] gatherer "clusterconfig" function "openshift_machine_api_events" took 37.027048ms to process 0 records
I0123 13:44:55.411582 1 tasks_processing.go:74] worker 25 stopped.
I0123 13:44:55.411622 1 tasks_processing.go:74] worker 9 stopped.
I0123 13:44:55.411858 1 recorder.go:75] Recording config/ingress with fingerprint=25c3f93da17c295eb6569fa75730dd81bc65897842b5fbb7b237c880ef877fac
I0123 13:44:55.411874 1 gather.go:180] gatherer "clusterconfig" function "ingress" took 38.364576ms to process 1 records
I0123 13:44:55.411957 1 tasks_processing.go:74] worker 16 stopped.
I0123 13:44:55.411980 1 recorder.go:75] Recording config/apiserver with fingerprint=f14ea09082dbaff9b14019aa6ff5ca22df97c859372d182994622e67977824a3
I0123 13:44:55.411989 1 gather.go:180] gatherer "clusterconfig" function "cluster_apiserver" took 39.684892ms to process 1 records
I0123 13:44:55.412002 1 recorder.go:75] Recording config/namespaces_with_overlapping_uids with fingerprint=4f53cda18c2baa0c0354bb5f9a3ecbe5ed12ab4d8e11ba873c2f11161202b945
I0123 13:44:55.412008 1 gather.go:180] gatherer "clusterconfig" function "overlapping_namespace_uids" took 40.165085ms to process 1 records
I0123 13:44:55.412013 1 tasks_processing.go:74] worker 55 stopped.
I0123 13:44:55.413412 1 tasks_processing.go:74] worker 24 stopped.
I0123 13:44:55.413831 1 recorder.go:75] Recording config/node/ip-10-0-135-36.ec2.internal with fingerprint=a8ebd0c2261bde753f8f8f73300dfaf19933e2714e7706a1bf26bc5071de1571
I0123 13:44:55.414188 1 recorder.go:75] Recording config/node/ip-10-0-154-74.ec2.internal with fingerprint=5fbd1155abb863b40de8556c6649382b9046cad1b7609af7f6ac989aed65be25
I0123 13:44:55.414389 1 recorder.go:75] Recording config/node/ip-10-0-175-165.ec2.internal with fingerprint=a7e8df9b900a6bf7e5efd1b8f62b53da49db3636db653ca6dc1c062befab5551
I0123 13:44:55.414427 1 gather.go:180] gatherer "clusterconfig" function "nodes" took 41.542079ms to process 3 records
I0123 13:44:55.414520 1 tasks_processing.go:74] worker 47 stopped.
I0123 13:44:55.414946 1 recorder.go:75] Recording config/authentication with fingerprint=3f2ee1d61dd142da983e4e4aae92cd7d8dd6e257b6ee01b239bf35f60a618a0c
I0123 13:44:55.414989 1 gather.go:180] gatherer "clusterconfig" function "authentication" took 43.596838ms to process 1 records
I0123 13:44:55.415133 1 recorder.go:75] Recording config/pdbs/openshift-console/console with fingerprint=419042b9e4a6d95679c1d6e4724334bd180b11a1aae1e7cd5449a776ce3f01cd
I0123 13:44:55.415171 1 recorder.go:75] Recording config/pdbs/openshift-console/downloads with fingerprint=bf188c3849230fc7d9ab113ab3e8e988dfae67a567e5e04ea038505e0323f307
I0123 13:44:55.415208 1 recorder.go:75] Recording config/pdbs/openshift-image-registry/image-registry with fingerprint=c80d370b1664153152806d3de4c5eac0f0ab5c3e3fc5a16dd8fbb62ddb7319f8
I0123 13:44:55.415239 1 recorder.go:75] Recording config/pdbs/openshift-ingress/router-default with fingerprint=df231257b2c86324ff2add940ca3c84f2dc73b3f7610df5a9c2ff9c6c4ff13fd
I0123 13:44:55.415278 1 recorder.go:75] Recording config/pdbs/openshift-monitoring/alertmanager-main with fingerprint=f09fb42c85ae6a40d209e57e87b738d83afef7e0af82dbaced135832375cc591
I0123 13:44:55.415311 1 recorder.go:75] Recording config/pdbs/openshift-monitoring/metrics-server with fingerprint=c5c36b40b835a03c008fb9702163d3747dd5b8737e40b52a99a9f20ea735876f
I0123 13:44:55.415351 1 recorder.go:75] Recording config/pdbs/openshift-monitoring/monitoring-plugin with fingerprint=95327ad157d382df725886b1c9c2c8e038e282d0fcd7eb7a6e432a5358c8e473
I0123 13:44:55.415323 1 tasks_processing.go:74] worker 12 stopped.
I0123 13:44:55.415417 1 recorder.go:75] Recording config/pdbs/openshift-monitoring/prometheus-k8s with fingerprint=f3561c93d5859af3d05bfb0c9783913365aadeefb910b5ade1056e90a91dbe96
I0123 13:44:55.415578 1 recorder.go:75] Recording config/pdbs/openshift-monitoring/prometheus-operator-admission-webhook with fingerprint=f8eb325f16db1f37f9d187a20ee490e5961980536e3cf0337714f03c6fbf8e27
I0123 13:44:55.415610 1 recorder.go:75] Recording config/pdbs/openshift-monitoring/thanos-querier-pdb with fingerprint=7188e7f79f7ebb62ae93678a1fba1930ee6e45cd8d605ab230d5a52246a25214
I0123 13:44:55.415625 1 recorder.go:75] Recording config/pdbs/openshift-operator-lifecycle-manager/packageserver-pdb with fingerprint=33ee27aa8264cc6dc0745b9ce67898b27f9c902023258366ac60f6e25364f1d3
I0123 13:44:55.415637 1 recorder.go:75] Recording config/pdbs/openshift-user-workload-monitoring/thanos-ruler-user-workload with fingerprint=9c821a27115d369592d2d4288c77999f95c05bb0e5e7636086280d49d8bfa286
I0123 13:44:55.415648 1 gather.go:180] gatherer "clusterconfig" function "pdbs" took 41.493376ms to process 12 records
I0123 13:44:55.417369 1 tasks_processing.go:74] worker 58 stopped.
I0123 13:44:55.417386 1 gather.go:180] gatherer "clusterconfig" function "aggregated_monitoring_cr_names" took 46.552695ms to process 0 records
I0123 13:44:55.417829 1 requests.go:204] Asking for SCA certificate for x86_64 architecture
I0123 13:44:55.419154 1 controller.go:203] Source clusterTransferController *clustertransfer.Controller is not ready
I0123 13:44:55.419163 1 controller.go:203] Source periodic-clusterconfig *controllerstatus.Simple is not ready
I0123 13:44:55.419166 1 controller.go:203] Source periodic-conditional *controllerstatus.Simple is not ready
I0123 13:44:55.419168 1 controller.go:203] Source periodic-workloads *controllerstatus.Simple is not ready
I0123 13:44:55.419172 1 controller.go:203] Source scaController *sca.Controller is not ready
I0123 13:44:55.419187 1 controller.go:457] The operator is still being initialized
I0123 13:44:55.419191 1 controller.go:482] The operator is healthy
E0123 13:44:55.420704 1 cluster_transfer.go:90] failed to pull cluster transfer: unable to retrieve cluster transfer data from https://api.openshift.com/api/accounts_mgmt/v1/cluster_transfers/: Get "https://api.openshift.com/api/accounts_mgmt/v1/cluster_transfers/?search=cluster_uuid+is+%!e(MISSING)9b6cd5b-5da5-421d-ac5e-9d51bdc06ed6%!+(MISSING)and+status+is+%!a(MISSING)ccepted%!"(MISSING): dial tcp: lookup api.openshift.com on 172.30.0.10:53: read udp 10.130.0.8:37357->172.30.0.10:53: read: connection refused
W0123 13:44:55.420713 1 sca.go:117] Failed to pull SCA certs from https://api.openshift.com/api/accounts_mgmt/v1/certificates: unable to retrieve SCA certs data from https://api.openshift.com/api/accounts_mgmt/v1/certificates: Post "https://api.openshift.com/api/accounts_mgmt/v1/certificates": dial tcp: lookup api.openshift.com on 172.30.0.10:53: read udp 10.130.0.8:37357->172.30.0.10:53: read: connection refused
I0123 13:44:55.420717 1 controllerstatus.go:80] name=clusterTransferController healthy=true reason=Disconnected message=failed to pull cluster transfer: unable to retrieve cluster transfer data from https://api.openshift.com/api/accounts_mgmt/v1/cluster_transfers/: Get "https://api.openshift.com/api/accounts_mgmt/v1/cluster_transfers/?search=cluster_uuid+is+%27e9b6cd5b-5da5-421d-ac5e-9d51bdc06ed6%27+and+status+is+%27accepted%27": dial tcp: lookup api.openshift.com on 172.30.0.10:53: read udp 10.130.0.8:37357->172.30.0.10:53: read: connection refused
I0123 13:44:55.420723 1 controllerstatus.go:80] name=scaController healthy=true reason=NonHTTPError message=Failed to pull SCA certs from https://api.openshift.com/api/accounts_mgmt/v1/certificates: unable to retrieve SCA certs data from https://api.openshift.com/api/accounts_mgmt/v1/certificates: Post "https://api.openshift.com/api/accounts_mgmt/v1/certificates": dial tcp: lookup api.openshift.com on 172.30.0.10:53: read udp 10.130.0.8:37357->172.30.0.10:53: read: connection refused
I0123 13:44:55.421026 1 tasks_processing.go:74] worker 32 stopped.
I0123 13:44:55.421218 1 recorder.go:75] Recording config/version with fingerprint=7c761f688fe7c10edc8ec3ae2c41f6df2630516097d1cb98943d51c179027996
I0123 13:44:55.421231 1 recorder.go:75] Recording config/id with fingerprint=0a4440cdb2e001a8aa93ddc6b20b241d70074d5cd9315dfe97846716aa3454d0
I0123 13:44:55.421236 1 gather.go:180] gatherer "clusterconfig" function "version" took 47.68506ms to process 2 records
I0123 13:44:55.424257 1 tasks_processing.go:74] worker 11 stopped.
I0123 13:44:55.424301 1 recorder.go:75] Recording config/olm_operators with fingerprint=76f3f2043f81ff1fbedad56b1535c971b9f0f5a76495c0ad15b9f7a33408f4a3
I0123 13:44:55.424314 1 gather.go:180] gatherer "clusterconfig" function "olm_operators" took 53.22542ms to process 1 records
I0123 13:44:55.425449 1 gather_logs.go:145] no pods in namespace were found
I0123 13:44:55.425467 1 tasks_processing.go:74] worker 48 stopped.
I0123 13:44:55.425478 1 gather.go:180] gatherer "clusterconfig" function "qemu_kubevirt_launcher_logs" took 54.659916ms to process 0 records
I0123 13:44:55.427611 1 tasks_processing.go:74] worker 3 stopped.
I0123 13:44:55.427734 1 recorder.go:75] Recording cluster-scoped-resources/rbac.authorization.k8s.io/clusterroles/admin with fingerprint=11a15eb52613ee0d87c01ed541d09d6c60e9efc527006d9c8d0fec7a30c24f45
I0123 13:44:55.427797 1 recorder.go:75] Recording cluster-scoped-resources/rbac.authorization.k8s.io/clusterroles/edit with fingerprint=3f3b96a91c04abe25e1e0356218ecc53b0b58ecf02b9ad2a4d88ec2d4e7d29a6
I0123 13:44:55.427806 1 gather.go:180] gatherer "clusterconfig" function "clusterroles" took 51.58709ms to process 2 records
I0123 13:44:55.429597 1 tasks_processing.go:74] worker 29 stopped.
I0123 13:44:55.430191 1 recorder.go:75] Recording config/crd/volumesnapshots.snapshot.storage.k8s.io with fingerprint=339b42c9d8d142b71e5084cb1678d0be00620bbd2a28abae2d1589263f678cb2
I0123 13:44:55.430374 1 recorder.go:75] Recording config/crd/volumesnapshotcontents.snapshot.storage.k8s.io with fingerprint=b2151056365e0c349b2b96107eba28461089c66db4ac5af5fd4e491150052829
I0123 13:44:55.430390 1 gather.go:180] gatherer "clusterconfig" function "crds" took 58.361688ms to process 2 records
I0123 13:44:55.432124 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
I0123 13:44:55.432124 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I0123 13:44:55.432141 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0123 13:44:55.439128 1 tasks_processing.go:74] worker 18 stopped.
I0123 13:44:55.439294 1 base_controller.go:73] Caches are synced for ConfigController
I0123 13:44:55.439307 1 base_controller.go:110] Starting #1 worker of ConfigController controller ...
I0123 13:44:55.439600 1 recorder.go:75] Recording config/persistentvolumes/pvc-eefdd42f-b60e-4b29-83ff-88c1188483a7 with fingerprint=9461df04ba15088b8b234a877d1b5a233c048c51658acefaeebf407d9bf0d227
I0123 13:44:55.439626 1 recorder.go:75] Recording config/persistentvolumes/pvc-45850a74-52f2-4efb-93d8-91658fca548b with fingerprint=c4bbcfdbd0dc391fc17cdbc0b79a9006fae83ab46f3f5ee1b0610a6c2e5b4143
I0123 13:44:55.439636 1 gather.go:180] gatherer "clusterconfig" function "monitoring_persistent_volumes" took 68.47078ms to process 2 records
I0123 13:44:55.440830 1 tasks_processing.go:74] worker 30 stopped.
E0123 13:44:55.440844 1 gather.go:143] gatherer "clusterconfig" function "config_maps" failed with the error: configmaps "gateway-mode-config" not found
E0123 13:44:55.441050 1 gather.go:143] gatherer "clusterconfig" function "config_maps" failed with the error: configmaps "insights-config" not found
I0123 13:44:55.441102 1 recorder.go:75] Recording config/configmaps/openshift-config/kube-root-ca.crt/ca.crt with fingerprint=d476c7d3f5b104863f08f481b1264dcc68cc272ecefb0ecb709b18a6afab034d
I0123 13:44:55.441151 1 recorder.go:75] Recording config/configmaps/openshift-config/openshift-install/invoker with fingerprint=76b482f683cd3ef9da02debac5b26080a5aeb06ff768ee5c21117514dff29d8a
I0123 13:44:55.441184 1 recorder.go:75] Recording config/configmaps/openshift-config/openshift-install/version with fingerprint=e748ca3b23473e113bc7e7f2f28a2d9a080e7dd8d6f69e09485ad9325982628c
I0123 13:44:55.441209 1 recorder.go:75] Recording config/configmaps/openshift-config/openshift-service-ca.crt/service-ca.crt with fingerprint=34cdf5d36d45a9e3ee7c2be31546e3d937ae4ebf144ac07b0210c06a1610a94d
I0123 13:44:55.441243 1 recorder.go:75] Recording config/configmaps/openshift-config/rosa-brand-logo/rosa-brand-logo.svg with fingerprint=6ed8ca4dd7a8eee7249182bc006e9649ce84d76c551ddfaaa33e55d8c4cc1ed0
I0123 13:44:55.441262 1 recorder.go:75] Recording config/configmaps/openshift-monitoring/cluster-monitoring-config/config with fingerprint=72b84e73f6a0b3f5cb8631772d7c6cb22f2acbacc29630102c5616778ce3c0cd
I0123 13:44:55.441281 1 recorder.go:75] Recording config/configmaps/kube-system/cluster-config-v1/install-config with fingerprint=ab3811c6b83fd7b8e920094cfa3080d1b4ee3c35ec4c8379437b21d27bd6608d
I0123 13:44:55.441300 1 gather.go:180] gatherer "clusterconfig" function "config_maps" took 67.439945ms to process 7 records
I0123 13:44:55.447478 1 configmapobserver.go:84] configmaps "insights-config" not found
I0123 13:44:55.455349 1 tasks_processing.go:74] worker 27 stopped.
E0123 13:44:55.455441 1 gather.go:143] gatherer "clusterconfig" function "ingress_certificates" failed with the error: failed to get secret 'router-certs-default' in namespace 'openshift-ingress': secrets "router-certs-default" not found
E0123 13:44:55.455971 1 gather.go:143] gatherer "clusterconfig" function "ingress_certificates" failed with the error: failed to get secret '2o0ihvo4m8evfms8vq22tnicmavghb2v-primary-cert-bundle-secret' in namespace 'openshift-ingress-operator': secrets "2o0ihvo4m8evfms8vq22tnicmavghb2v-primary-cert-bundle-secret" not found
I0123 13:44:55.456232 1 recorder.go:75] Recording aggregated/ingress_controllers_certs with fingerprint=a0d934092c6050882e301d79993b381d86e5f3d8b8f7c6ee493d6d109b873769
I0123 13:44:55.456254 1 gather.go:180] gatherer "clusterconfig" function "ingress_certificates" took 81.757482ms to process 1 records
I0123 13:44:55.463330 1 tasks_processing.go:74] worker 43 stopped.
I0123 13:44:55.464328 1 recorder.go:75] Recording config/pod/openshift-console-operator/console-operator-5f9f4b9bd7-xfn94 with fingerprint=f2c7b8778fe3e5854a64f8485696984d3b656aacf578883589c5742a92caf87c
I0123 13:44:55.464390 1 recorder.go:75] Recording config/pod/openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7b445dff48-5ps8f with fingerprint=9a4fe3bdc489295bf900cd8b9f767a3496a0cd01b62a55d1bd1f49ef815d6819
I0123 13:44:55.464445 1 recorder.go:75] Recording config/pod/openshift-monitoring/cluster-monitoring-operator-fff5b7666-xrjlp with fingerprint=09aff51dbf48bda101ff3dcb49ba92085639703f920ca81c291b9deb7f1f6998
I0123 13:44:55.464492 1 recorder.go:75] Recording config/running_containers with fingerprint=27f9157f2a56da6fba0b9f640700663768bd3ba3f6b99006b3eb53cf5a2754bf
I0123 13:44:55.464500 1 gather.go:180] gatherer "clusterconfig" function "container_images" took 92.526405ms to process 4 records
I0123 13:44:55.466428 1 tasks_processing.go:74] worker 39 stopped.
I0123 13:44:55.466441 1 gather.go:180] gatherer "clusterconfig" function "number_of_pods_and_netnamespaces_with_sdn_annotations" took 94.945403ms to process 0 records
I0123 13:44:55.477368 1 base_controller.go:73] Caches are synced for LoggingSyncer
I0123 13:44:55.477386 1 base_controller.go:110] Starting #1 worker of LoggingSyncer controller ...
I0123 13:44:55.488123 1 gather_cluster_operators.go:184] Unable to get dnsrecords.ingress.operator.openshift.io resource due to: dnsrecords.ingress.operator.openshift.io "default" not found
W0123 13:44:56.398769 1 gather_dvo_metrics.go:210] Failed to read the DVO metrics. Trying again.
I0123 13:44:56.627026 1 gather_cluster_operator_pods_and_events.go:119] Found 24 pods with 27 containers
I0123 13:44:56.627040 1 gather_cluster_operator_pods_and_events.go:233] Maximum buffer size: 932067 bytes
I0123 13:44:56.627183 1 gather_cluster_operator_pods_and_events.go:363] Fetching logs for console-operator container console-operator-5f9f4b9bd7-xfn94 pod in namespace openshift-console-operator (previous: true).
I0123 13:44:56.847237 1 gather_cluster_operator_pods_and_events.go:363] Fetching logs for console-operator container console-operator-5f9f4b9bd7-xfn94 pod in namespace openshift-console-operator (previous: false).
I0123 13:44:57.033026 1 gather_cluster_operator_pods_and_events.go:363] Fetching logs for console container console-769f68ddb5-64tgv pod in namespace openshift-console (previous: false).
I0123 13:44:57.036105 1 gather_cluster_operators.go:184] Unable to get configs.samples.operator.openshift.io resource due to: configs.samples.operator.openshift.io "cluster" not found
I0123 13:44:57.230732 1 gather_cluster_operator_pods_and_events.go:363] Fetching logs for console container console-769f68ddb5-f8h75 pod in namespace openshift-console (previous: false).
W0123 13:44:57.398903 1 gather_dvo_metrics.go:210] Failed to read the DVO metrics. Trying again.
I0123 13:44:57.454481 1 gather_cluster_operator_pods_and_events.go:363] Fetching logs for console container console-76c8567bd7-rm2wl pod in namespace openshift-console (previous: false).
I0123 13:44:57.639439 1 tasks_processing.go:74] worker 59 stopped.
I0123 13:44:57.639541 1 recorder.go:75] Recording config/clusteroperator/console with fingerprint=17341db379b2871a3f65740cd1365bfc0bf275371b09d41b58a30d8a5d972a9e
I0123 13:44:57.639582 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/console/cluster with fingerprint=777490e650fc4f269d89afce0970d1e9b5dd10872e645f48c06497f314de30f6
I0123 13:44:57.639601 1 recorder.go:75] Recording config/clusteroperator/csi-snapshot-controller with fingerprint=615f17994ddf03a1ea5cbe5840d7f6d4d55893b6adce8ad01ea1e88b5c7bc22d
I0123 13:44:57.639608 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/csisnapshotcontroller/cluster with fingerprint=5adc514f4b63e2f1ecc68bf6f9c0af70c5eea04522a49524e102721b1c41f80e
I0123 13:44:57.639625 1 recorder.go:75] Recording config/clusteroperator/dns with fingerprint=53e011df572816ab1ea623ab5ba5d1a0402a63052b2033b605d88e4b818345fb
I0123 13:44:57.639639 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/dns/default with fingerprint=9e7b4ce029030d3d8c3b49af92c556acdcc415000b40d3f969dbdc42c432b47f
I0123 13:44:57.639656 1 recorder.go:75] Recording config/clusteroperator/image-registry with fingerprint=bd7e3cc1d1865b7a7c3b3f9ef01c2edea450918ae7076239a31c0941bf39cd63
I0123 13:44:57.639673 1 recorder.go:75] Recording config/clusteroperator/ingress with fingerprint=08b544c47c919fec4d6915b23fb3e48f90970da2f34cced2b8557b07715c64c7
I0123 13:44:57.639694 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/ingresscontroller/openshift-ingress-operator/default with fingerprint=39a70ffa65fbf927b159475c09091402d03faba9777659a29f56b6715bec430d
I0123 13:44:57.639718 1 recorder.go:75] Recording config/clusteroperator/insights with fingerprint=6f84dc62160a55f83c58c86c329b5c314645db1645467818a6429f51b8f5e8ad
I0123 13:44:57.639946 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/insightsoperator/cluster with fingerprint=34cc110fa7e26165c820edbb009b12813ed0e3e599371df791325dead205a0b5
I0123 13:44:57.639962 1 recorder.go:75] Recording config/clusteroperator/kube-apiserver with fingerprint=bce53a8a61940b53c3a04bd9817a0ea4f6ec9d55c94a6f70752be0dc6b20d2b4
I0123 13:44:57.639971 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/kubeapiserver/cluster with fingerprint=51503bf0b784fcf65ea46bcaf1f72ac1a5c4d5dc211934f18f27871efed05762
I0123 13:44:57.639983 1 recorder.go:75] Recording config/clusteroperator/kube-controller-manager with fingerprint=995013cab104433a55357d7e78ea8348f9bdb9207b0e0e5b5a2fb80cec0d6f02
I0123 13:44:57.639991 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/kubecontrollermanager/cluster with fingerprint=ce90c0d4f367d7da085074268031798382ae7c54fdcb0a21f15a4818fe308c11
I0123 13:44:57.640001 1 recorder.go:75] Recording config/clusteroperator/kube-scheduler with fingerprint=311aae3c12dac485664f3b6c62526a50b9a3bd4215ffc8d30140cc04ba596f19
I0123 13:44:57.640011 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/kubescheduler/cluster with fingerprint=f2940fb9fd20c19951dfc295eb363b7fba0c505f5ae61f01967a063099e6b60a
I0123 13:44:57.640027 1 recorder.go:75] Recording config/clusteroperator/kube-storage-version-migrator with fingerprint=d2854bb78a5aa4a0a1e6f86e1c7136fe7c6563cada86e5d0a0403a261db80dc2
I0123 13:44:57.640035 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/kubestorageversionmigrator/cluster with fingerprint=9351181aa7e6ada41ef581ab31e13516c6b934cc95710154bafb2eb222cb58db
I0123 13:44:57.640047 1 recorder.go:75] Recording config/clusteroperator/monitoring with fingerprint=937b7723d9950d1ed1fc44945e4b2bf6e44efbe35978c4bae8ff0dd75d3d8487
I0123 13:44:57.640099 1 recorder.go:75] Recording config/clusteroperator/network with fingerprint=0bbe31db760515193bf0c62ae5c1bb41e4e4f5ac772beed564546aed83bbf014
I0123 13:44:57.640107 1 recorder.go:75] Recording config/clusteroperator/network.operator.openshift.io/operatorpki/openshift-ovn-kubernetes/ovn with fingerprint=626a89d20e0deaed5b6dfb533acfe65f4bb1618bd200a703b62e60c5d16d94ab
I0123 13:44:57.640113 1 recorder.go:75] Recording config/clusteroperator/network.operator.openshift.io/operatorpki/openshift-ovn-kubernetes/signer with fingerprint=90410b16914712b85b3c4578716ad8c0ae072e688f4cd1e022bf76f20da3506d
I0123 13:44:57.640129 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/network/cluster with fingerprint=a93d15eaecb455a0e40ecb2826eeecc1533899204ddd3c3921d15ab70af7ae75
I0123 13:44:57.640143 1 recorder.go:75] Recording config/clusteroperator/node-tuning with fingerprint=8290b00377930628e45e9ea35444d2f29402302b08ab9d8eef58ac03a5269d61
I0123 13:44:57.640158 1 recorder.go:75] Recording config/clusteroperator/openshift-apiserver with fingerprint=71e64ad3c3a89fccb21ea6a3f3403ac22019f4a6385c1442c8191650674fd376
I0123 13:44:57.640166 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/openshiftapiserver/cluster with fingerprint=e712e6cf27339b441e4ed1f4cde91dbde7e952698ba93407e4457db63a4a4c76
I0123 13:44:57.640175 1 recorder.go:75] Recording config/clusteroperator/openshift-controller-manager with fingerprint=145c7c3281066465b54560426ff7784ea8433af714603c496f4d9eae747048b4
I0123 13:44:57.640183 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/openshiftcontrollermanager/cluster with fingerprint=d71a0f4672f9b45d9fc8293bf1687afc650fd28d32e2e30de27523fe7b4eadf7
I0123 13:44:57.640191 1 recorder.go:75] Recording config/clusteroperator/openshift-samples with fingerprint=01631210c8af703368d2424b4d0c3f9a4c86014b6e009740d5c2f320caa73faa
I0123 13:44:57.640201 1 recorder.go:75] Recording config/clusteroperator/operator-lifecycle-manager with fingerprint=2d33683b01ace8e787df11a6a06258d29919d9cdd77b9a50cf59e3764d5aad02
I0123 13:44:57.640209 1 recorder.go:75] Recording config/clusteroperator/operator-lifecycle-manager-catalog with fingerprint=d03d1e6a99b8e1d7d6069a1dae7d6808c11f5780bd617b485c2d2843e3be79fd
I0123 13:44:57.640220 1 recorder.go:75] Recording config/clusteroperator/operator-lifecycle-manager-packageserver with fingerprint=c14f065e4a33c494c84f50a75448fdab03a523a061194860c01b40fa626e2a25
I0123 13:44:57.640235 1 recorder.go:75] Recording config/clusteroperator/service-ca with fingerprint=e8a98df451ca2514013ccf9eefe2bbe1f164efd1ce786bf0f44756369999b3be
I0123 13:44:57.640243 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/serviceca/cluster with fingerprint=812f7edc2cdb30e61e7f2b29454357a40b1a507a4b0c2b7729193b67f0e3b4aa
I0123 13:44:57.640257 1 recorder.go:75] Recording config/clusteroperator/storage with fingerprint=f0b31a44c3d9dce8943fd7ef8976c6f98341958fe1d446658d55a1bc6127c150
I0123 13:44:57.640270 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/clustercsidriver/ebs.csi.aws.com with fingerprint=7e1ab8f8cfcd9d249b5b213939fe5144bb83db3725475461728bea44a002c3be
I0123 13:44:57.640278 1 recorder.go:75] Recording config/clusteroperator/operator.openshift.io/storage/cluster with fingerprint=8e480f8c1ce1b39baac42d8ec780c57c2592929ae0c801b61ffad49ba13f33ad
I0123 13:44:57.640289 1 gather.go:180] gatherer "clusterconfig" function "operators" took 2.268260856s to process 38 records
I0123 13:44:57.649881 1 gather_cluster_operator_pods_and_events.go:363] Fetching logs for download-server container downloads-7ff44bbf7d-6tvwg pod in namespace openshift-console (previous: false).
I0123 13:44:57.832729 1 gather_cluster_operator_pods_and_events.go:363] Fetching logs for download-server container downloads-7ff44bbf7d-zlx4q pod in namespace openshift-console (previous: false).
I0123 13:44:58.033286 1 gather_cluster_operator_pods_and_events.go:363] Fetching logs for dns container dns-default-2vfgx pod in namespace openshift-dns (previous: false).
I0123 13:44:58.231041 1 gather_cluster_operator_pods_and_events.go:406] Failed to fetch log for dns-default-2vfgx pod in namespace openshift-dns for failing operator dns (previous: false): "container \"dns\" in pod \"dns-default-2vfgx\" is waiting to start: ContainerCreating"
I0123 13:44:58.231058 1 gather_cluster_operator_pods_and_events.go:278] Error: "container \"dns\" in pod \"dns-default-2vfgx\" is waiting to start: ContainerCreating"
I0123 13:44:58.231067 1 gather_cluster_operator_pods_and_events.go:363] Fetching logs for kube-rbac-proxy container dns-default-2vfgx pod in namespace openshift-dns (previous: false).
W0123 13:44:58.398844 1 gather_dvo_metrics.go:210] Failed to read the DVO metrics. Trying again.
I0123 13:44:58.431196 1 gather_cluster_operator_pods_and_events.go:406] Failed to fetch log for dns-default-2vfgx pod in namespace openshift-dns for failing operator kube-rbac-proxy (previous: false): "container \"kube-rbac-proxy\" in pod \"dns-default-2vfgx\" is waiting to start: ContainerCreating"
I0123 13:44:58.431213 1 gather_cluster_operator_pods_and_events.go:278] Error: "container \"kube-rbac-proxy\" in pod \"dns-default-2vfgx\" is waiting to start: ContainerCreating"
I0123 13:44:58.431239 1 gather_cluster_operator_pods_and_events.go:363] Fetching logs for dns container dns-default-9dp5z pod in namespace openshift-dns (previous: false).
I0123 13:44:58.632517 1 gather_cluster_operator_pods_and_events.go:406] Failed to fetch log for dns-default-9dp5z pod in namespace openshift-dns for failing operator dns (previous: false): "container \"dns\" in pod \"dns-default-9dp5z\" is waiting to start: ContainerCreating"
I0123 13:44:58.632533 1 gather_cluster_operator_pods_and_events.go:278] Error: "container \"dns\" in pod \"dns-default-9dp5z\" is waiting to start: ContainerCreating"
I0123 13:44:58.632540 1 gather_cluster_operator_pods_and_events.go:363] Fetching logs for kube-rbac-proxy container dns-default-9dp5z pod in namespace openshift-dns (previous: false).
I0123 13:44:58.832035 1 gather_cluster_operator_pods_and_events.go:406] Failed to fetch log for dns-default-9dp5z pod in namespace openshift-dns for failing operator kube-rbac-proxy (previous: false): "container \"kube-rbac-proxy\" in pod \"dns-default-9dp5z\" is waiting to start: ContainerCreating"
I0123 13:44:58.832055 1 gather_cluster_operator_pods_and_events.go:278] Error: "container \"kube-rbac-proxy\" in pod \"dns-default-9dp5z\" is waiting to start: ContainerCreating"
I0123 13:44:58.832080 1 gather_cluster_operator_pods_and_events.go:363] Fetching logs for dns container dns-default-hcdcx pod in namespace openshift-dns (previous: false).
I0123 13:44:59.033667 1 gather_cluster_operator_pods_and_events.go:406] Failed to fetch log for dns-default-hcdcx pod in namespace openshift-dns for failing operator dns (previous: false): "container \"dns\" in pod \"dns-default-hcdcx\" is waiting to start: ContainerCreating"
I0123 13:44:59.033684 1 gather_cluster_operator_pods_and_events.go:278] Error: "container \"dns\" in pod \"dns-default-hcdcx\" is waiting to start: ContainerCreating"
I0123 13:44:59.033690 1 gather_cluster_operator_pods_and_events.go:363] Fetching logs for kube-rbac-proxy container dns-default-hcdcx pod in namespace openshift-dns (previous: false).
I0123 13:44:59.233151 1 gather_cluster_operator_pods_and_events.go:406] Failed to fetch log for dns-default-hcdcx pod in namespace openshift-dns for failing operator kube-rbac-proxy (previous: false): "container \"kube-rbac-proxy\" in pod \"dns-default-hcdcx\" is waiting to start: ContainerCreating"
I0123 13:44:59.233173 1 gather_cluster_operator_pods_and_events.go:278] Error: "container \"kube-rbac-proxy\" in pod \"dns-default-hcdcx\" is waiting to start: ContainerCreating"
I0123 13:44:59.233183 1 gather_cluster_operator_pods_and_events.go:363] Fetching logs for dns-node-resolver container node-resolver-5x6hj pod in namespace openshift-dns (previous: false).
W0123 13:44:59.399433 1 gather_dvo_metrics.go:210] Failed to read the DVO metrics. Trying again.
I0123 13:44:59.432352 1 gather_cluster_operator_pods_and_events.go:278] Error: "log buffer is empty"
I0123 13:44:59.432371 1 gather_cluster_operator_pods_and_events.go:363] Fetching logs for dns-node-resolver container node-resolver-ptth5 pod in namespace openshift-dns (previous: false).
I0123 13:44:59.634066 1 gather_cluster_operator_pods_and_events.go:278] Error: "log buffer is empty"
I0123 13:44:59.634085 1 gather_cluster_operator_pods_and_events.go:363] Fetching logs for dns-node-resolver container node-resolver-qb2vc pod in namespace openshift-dns (previous: false).
I0123 13:44:59.830395 1 gather_cluster_operator_pods_and_events.go:278] Error: "log buffer is empty"
I0123 13:44:59.830436 1 gather_cluster_operator_pods_and_events.go:363] Fetching logs for registry container image-registry-54bc685868-cb72c pod in namespace openshift-image-registry (previous: true).
I0123 13:45:00.031312 1 gather_cluster_operator_pods_and_events.go:363] Fetching logs for registry container image-registry-54bc685868-cb72c pod in namespace openshift-image-registry (previous: false).
I0123 13:45:00.231512 1 gather_cluster_operator_pods_and_events.go:363] Fetching logs for registry container image-registry-864cb68768-6w5nj pod in namespace openshift-image-registry (previous: false).
W0123 13:45:00.399345 1 gather_dvo_metrics.go:210] Failed to read the DVO metrics. Trying again.
W0123 13:45:00.399384 1 gather_dvo_metrics.go:117] Unable to read metrics from endpoint "http://deployment-validation-operator-metrics.openshift-deployment-validation-operator.svc:8383": DVO metrics service was not available within the 5s timeout: context deadline exceeded
I0123 13:45:00.399398 1 tasks_processing.go:74] worker 57 stopped.
E0123 13:45:00.399408 1 gather.go:143] gatherer "clusterconfig" function "dvo_metrics" failed with the error: DVO metrics service was not available within the 5s timeout: context deadline exceeded
I0123 13:45:00.399416 1 recorder.go:75] Recording config/dvo_metrics with fingerprint=e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
I0123 13:45:00.399428 1 gather.go:180] gatherer "clusterconfig" function "dvo_metrics" took 5.028047672s to process 1 records
I0123 13:45:00.435174 1 gather_cluster_operator_pods_and_events.go:363] Fetching logs for registry container image-registry-864cb68768-bmj6g pod in namespace openshift-image-registry (previous: false).
I0123 13:45:00.633482 1 gather_cluster_operator_pods_and_events.go:363] Fetching logs for node-ca container node-ca-hr2c4 pod in namespace openshift-image-registry (previous: false).
I0123 13:45:00.834898 1 gather_cluster_operator_pods_and_events.go:363] Fetching logs for node-ca container node-ca-lnc9p pod in namespace openshift-image-registry (previous: false).
I0123 13:45:01.033899 1 gather_cluster_operator_pods_and_events.go:363] Fetching logs for node-ca container node-ca-znjxv pod in namespace openshift-image-registry (previous: false).
I0123 13:45:01.231319 1 gather_cluster_operator_pods_and_events.go:363] Fetching logs for router container router-default-57c786b755-p2c5m pod in namespace openshift-ingress (previous: false).
I0123 13:45:01.435119 1 gather_cluster_operator_pods_and_events.go:363] Fetching logs for router container router-default-67fdd869d6-cwpws pod in namespace openshift-ingress (previous: false).
I0123 13:45:01.631622 1 gather_cluster_operator_pods_and_events.go:363] Fetching logs for router container router-default-67fdd869d6-kjmjx pod in namespace openshift-ingress (previous: false).
I0123 13:45:01.835038 1 gather_cluster_operator_pods_and_events.go:363] Fetching logs for serve-healthcheck-canary container ingress-canary-hq5d8 pod in namespace openshift-ingress-canary (previous: false).
I0123 13:45:02.032332 1 gather_cluster_operator_pods_and_events.go:406] Failed to fetch log for ingress-canary-hq5d8 pod in namespace openshift-ingress-canary for failing operator serve-healthcheck-canary (previous: false): "container \"serve-healthcheck-canary\" in pod \"ingress-canary-hq5d8\" is waiting to start: ContainerCreating"
I0123 13:45:02.032350 1 gather_cluster_operator_pods_and_events.go:278] Error: "container \"serve-healthcheck-canary\" in pod \"ingress-canary-hq5d8\" is waiting to start: ContainerCreating"
I0123 13:45:02.032374 1 gather_cluster_operator_pods_and_events.go:363] Fetching logs for serve-healthcheck-canary container ingress-canary-hvslg pod in namespace openshift-ingress-canary (previous: false).
I0123 13:45:02.234150 1 gather_cluster_operator_pods_and_events.go:406] Failed to fetch log for ingress-canary-hvslg pod in namespace openshift-ingress-canary for failing operator serve-healthcheck-canary (previous: false): "container \"serve-healthcheck-canary\" in pod \"ingress-canary-hvslg\" is waiting to start: ContainerCreating"
I0123 13:45:02.234168 1 gather_cluster_operator_pods_and_events.go:278] Error: "container \"serve-healthcheck-canary\" in pod \"ingress-canary-hvslg\" is waiting to start: ContainerCreating"
I0123 13:45:02.234197 1 gather_cluster_operator_pods_and_events.go:363] Fetching logs for serve-healthcheck-canary container ingress-canary-qrp6q pod in namespace openshift-ingress-canary (previous: false).
I0123 13:45:02.431390 1 gather_cluster_operator_pods_and_events.go:406] Failed to fetch log for ingress-canary-qrp6q pod in namespace openshift-ingress-canary for failing operator serve-healthcheck-canary (previous: false): "container \"serve-healthcheck-canary\" in pod \"ingress-canary-qrp6q\" is waiting to start: ContainerCreating"
I0123 13:45:02.431408 1 gather_cluster_operator_pods_and_events.go:278] Error: "container \"serve-healthcheck-canary\" in pod \"ingress-canary-qrp6q\" is waiting to start: ContainerCreating"
I0123 13:45:02.431426 1 tasks_processing.go:74] worker 54 stopped.
I0123 13:45:02.431552 1 recorder.go:75] Recording events/openshift-console-operator with fingerprint=7c29a32e46f15b3c56f6cd0e9ed56689812e2dab273e096d109133e39b22d783
I0123 13:45:02.431617 1 recorder.go:75] Recording events/openshift-console with fingerprint=6048500181e14a3f990e6323cc56976d5b565e9388188c63b2e005a984deee12
I0123 13:45:02.431633 1 recorder.go:75] Recording events/openshift-dns-operator with fingerprint=1fedee81f8b94ff3f2c82374ea2b7274a65e9ace176d4ed62f8d787b91a89478
I0123 13:45:02.431662 1 recorder.go:75] Recording events/openshift-dns with fingerprint=f6d5dc52cbac4be2f598b27ecc1a9b4ef51cb03b3d2c165ec452fd4efd691f46
I0123 13:45:02.431779 1 recorder.go:75] Recording events/openshift-image-registry with fingerprint=0498686949cd3645752c2cc7af0c944bf647c5353553d926bffbcb6630c0cc08
I0123 13:45:02.431796 1 recorder.go:75] Recording events/openshift-ingress-operator with fingerprint=f2be21509a4ddbf4ba7e94fca7536af3be0ae38a65aa2abb70a91a7468563012
I0123 13:45:02.431839 1 recorder.go:75] Recording events/openshift-ingress with fingerprint=b0407d6988cbe98b8bb75d0225559c134439cd525452baf4a1c7140c8fe1c96a
I0123 13:45:02.431890 1 recorder.go:75] Recording events/openshift-ingress-canary with fingerprint=9b9decf6fcff62e2f9924662daac88f5bb542141e7fae41319446f3fb18be4c6
I0123 13:45:02.431984 1 recorder.go:75] Recording config/pod/openshift-console-operator/console-operator-5f9f4b9bd7-xfn94 with fingerprint=f2c7b8778fe3e5854a64f8485696984d3b656aacf578883589c5742a92caf87c
E0123 13:45:02.431998 1 gather.go:164] error recording gatherer "clusterconfig" function "operators_pods_and_events" result "config/pod/openshift-console-operator/console-operator-5f9f4b9bd7-xfn94.json" because of the error: the record with the same name "config/pod/openshift-console-operator/console-operator-5f9f4b9bd7-xfn94.json" was already recorded and had the fingerprint "f2c7b8778fe3e5854a64f8485696984d3b656aacf578883589c5742a92caf87c", overwriting with the record having fingerprint "f2c7b8778fe3e5854a64f8485696984d3b656aacf578883589c5742a92caf87c"
W0123 13:45:02.432010 1 gather.go:158] issue recording gatherer "clusterconfig" function "operators_pods_and_events" result "config/pod/openshift-console-operator/console-operator-5f9f4b9bd7-xfn94.json" because of the warning: warning: the record with the same fingerprint "f2c7b8778fe3e5854a64f8485696984d3b656aacf578883589c5742a92caf87c" was already recorded at path "config/pod/openshift-console-operator/console-operator-5f9f4b9bd7-xfn94.json", recording another one with a different path "config/pod/openshift-console-operator/console-operator-5f9f4b9bd7-xfn94.json"
I0123 13:45:02.432019 1 recorder.go:75] Recording config/pod/openshift-console-operator/logs/console-operator-5f9f4b9bd7-xfn94/console-operator_previous.log with fingerprint=2d1c24fcdfb984b61eba225f5edba5ef1ca46855d240a85d09816836530d53d0
I0123 13:45:02.432038 1 recorder.go:75] Recording config/pod/openshift-console-operator/logs/console-operator-5f9f4b9bd7-xfn94/console-operator_current.log with fingerprint=533e051a595f74741254ab1891141f99ce6d3ddf73ecd029e0a5c0ae411d16a7
I0123 13:45:02.432045 1 recorder.go:75] Recording config/pod/openshift-console/logs/console-769f68ddb5-64tgv/console_current.log with fingerprint=65d077418b67ebe74d327b1b433aaf3e22db765090be1c1ee1e0ec29f5180fc7
I0123 13:45:02.432049 1 recorder.go:75] Recording config/pod/openshift-console/logs/console-769f68ddb5-f8h75/console_current.log with fingerprint=3d347bf51c40c80bdcdfbc5413e81a99d2f878bd8010522255451d162560e9e1
I0123 13:45:02.432052 1 recorder.go:75] Recording config/pod/openshift-console/logs/console-76c8567bd7-rm2wl/console_current.log with fingerprint=17e12e987454ee33f8620e5bcac02b3f92554fd4f6f448c07ffb37289ca0dc84
I0123 13:45:02.432056 1 recorder.go:75] Recording config/pod/openshift-console/logs/downloads-7ff44bbf7d-6tvwg/download-server_current.log with fingerprint=52473a000187cb850f61be2df0b1973ffda86cd0a4e46d6ecf2c750842e5d6a0
I0123 13:45:02.432060 1 recorder.go:75] Recording config/pod/openshift-console/logs/downloads-7ff44bbf7d-zlx4q/download-server_current.log with fingerprint=f792ace03c742851e4f7e15b5248faecc5f8863052a55306fd464a7015041da4
I0123 13:45:02.432114 1 recorder.go:75] Recording config/pod/openshift-dns/dns-default-2vfgx with fingerprint=501236cb554ec4244bbde5240d8b39bf4d95f0e2f6172ea3a4a6f82cf6dd65d7
I0123 13:45:02.432153 1 recorder.go:75] Recording config/pod/openshift-dns/dns-default-9dp5z with fingerprint=5e02c909e18ab4fab153eb549a5ea58556d9e88b809dd1fb8e695ac5166e5389
I0123 13:45:02.432190 1 recorder.go:75] Recording config/pod/openshift-dns/dns-default-hcdcx with fingerprint=38dccbe3eaa00af9884b0c9a469969b2b0591d65c33fe0af92854dcee3671b0e
I0123 13:45:02.432254 1 recorder.go:75] Recording config/pod/openshift-image-registry/image-registry-54bc685868-cb72c with fingerprint=b12b46315a93f61a438ecd0af260eb674e422da74de5781e70c2863fc4917501
I0123 13:45:02.432269 1 recorder.go:75] Recording config/pod/openshift-image-registry/logs/image-registry-54bc685868-cb72c/registry_previous.log with fingerprint=6c3f51a2e0f063f3810595bcc701c1781e2f836efb211c378a05adca8ce891b5
I0123 13:45:02.432281 1 recorder.go:75] Recording config/pod/openshift-image-registry/logs/image-registry-54bc685868-cb72c/registry_current.log with fingerprint=c11f8b1df8c04a50cd8644958a6a6ab087ceefb6c506005f79db7868ffbb067e
I0123 13:45:02.432294 1 recorder.go:75] Recording config/pod/openshift-image-registry/logs/image-registry-864cb68768-6w5nj/registry_current.log with fingerprint=1965eae3915376dfdfe4c46a7f18b70bb6b758ef8ffe989c97cc34ce238c82c6
I0123 13:45:02.432301 1 recorder.go:75] Recording config/pod/openshift-image-registry/logs/image-registry-864cb68768-bmj6g/registry_current.log with fingerprint=93107e23bedf34f0ac85614bbebbcb5353ded60cf43acf7a72694ae7d917ac87
I0123 13:45:02.432305 1 recorder.go:75] Recording config/pod/openshift-image-registry/logs/node-ca-hr2c4/node-ca_current.log with fingerprint=b23448b2a7300926bd0fc2ecb3a69adc2da8722cf3c83a2bae1c204249271c73
I0123 13:45:02.432308 1 recorder.go:75] Recording config/pod/openshift-image-registry/logs/node-ca-lnc9p/node-ca_current.log with fingerprint=1b3755004a3667cc224800d6a68ceb102f4d383524f4da667098f6a54351fa8a
I0123 13:45:02.432311 1 recorder.go:75] Recording config/pod/openshift-image-registry/logs/node-ca-znjxv/node-ca_current.log with fingerprint=472582f7825ac4c00b31905d7ad3b792da5eb2b84ae58d06db3efe1a6536b531
I0123 13:45:02.432317 1 recorder.go:75] Recording config/pod/openshift-ingress/logs/router-default-57c786b755-p2c5m/router_current.log with fingerprint=f81a871e555509341579bf412ad9b82824eb92f7ee9fe70ff07068894d0c1009
I0123 13:45:02.432327 1 recorder.go:75] Recording config/pod/openshift-ingress/logs/router-default-67fdd869d6-cwpws/router_current.log with fingerprint=cdbcb715521abb6cd313e22674c6d152faa045bb059df580eda7897b9e4b68d5
I0123 13:45:02.432333 1 recorder.go:75] Recording config/pod/openshift-ingress/logs/router-default-67fdd869d6-kjmjx/router_current.log with fingerprint=52cbdc6f61e587726404ac93e35cae0cfb55abc3d5089fd25650b4e55dd6726c
I0123 13:45:02.432365 1 recorder.go:75] Recording config/pod/openshift-ingress-canary/ingress-canary-hq5d8 with fingerprint=ed4aafd980a25173d9200afcc3c3f2f18b18944b81e1109f803f5e7d006fb62e
I0123 13:45:02.432397 1 recorder.go:75] Recording config/pod/openshift-ingress-canary/ingress-canary-hvslg with fingerprint=1cfe4ca747d3fe5b0b21e21202e0d8e372b38c05877db82717787d3c4b8658f4
I0123 13:45:02.432427 1 recorder.go:75] Recording config/pod/openshift-ingress-canary/ingress-canary-qrp6q with fingerprint=6599534aaadef6bbe9654865ba6702859d1b8387f7c4e01f949996fb02b25254
I0123 13:45:02.432434 1 gather.go:180] gatherer "clusterconfig" function "operators_pods_and_events" took 7.059612586s to process 33 records
I0123 13:45:07.996664 1 tasks_processing.go:74] worker 38 stopped.
I0123 13:45:07.996700 1 recorder.go:75] Recording config/installplans with fingerprint=b6ae0e2549358513c087729c711e8e1ad6f2144adc0ffa716b1a475ed1e6ddde
I0123 13:45:07.996710 1 gather.go:180] gatherer "clusterconfig" function "install_plans" took 12.625667566s to process 1 records
I0123 13:45:08.783202 1 tasks_processing.go:74] worker 62 stopped.
I0123 13:45:08.783414 1 recorder.go:75] Recording config/serviceaccounts with fingerprint=721fe03c835269cb27c55d13a9ee44252a63230f386c1d6dda3910788c1fda76
I0123 13:45:08.783429 1 gather.go:180] gatherer "clusterconfig" function "service_accounts" took 13.406943892s to process 1 records
E0123 13:45:08.783493 1 periodic.go:252] clusterconfig failed after 13.413s with: function "tsdb_status" failed with an error, function "metrics" failed with an error, function "active_alerts" failed with an error, function "silenced_alerts" failed with an error, function "support_secret" failed with an error, function "machine_healthchecks" failed with an error, function "pod_network_connectivity_checks" failed with an error, function "machines" failed with an error, function "config_maps" failed with an error, function "ingress_certificates" failed with an error, function "dvo_metrics" failed with an error, unable to record function "operators_pods_and_events" record "config/pod/openshift-console-operator/console-operator-5f9f4b9bd7-xfn94.json"
I0123 13:45:08.783507 1 controllerstatus.go:89] name=periodic-clusterconfig healthy=false reason=PeriodicGatherFailed message=Source clusterconfig could not be retrieved: function "tsdb_status" failed with an error, function "metrics" failed with an error, function "active_alerts" failed with an error, function "silenced_alerts" failed with an error, function "support_secret" failed with an error, function "machine_healthchecks" failed with an error, function "pod_network_connectivity_checks" failed with an error, function "machines" failed with an error, function "config_maps" failed with an error, function "ingress_certificates" failed with an error, function "dvo_metrics" failed with an error, unable to record function "operators_pods_and_events" record "config/pod/openshift-console-operator/console-operator-5f9f4b9bd7-xfn94.json"
I0123 13:45:08.783515 1 periodic.go:214] Running workloads gatherer
I0123 13:45:08.783527 1 tasks_processing.go:45] number of workers: 2
I0123 13:45:08.783535 1 tasks_processing.go:69] worker 1 listening for tasks.
I0123 13:45:08.783538 1 tasks_processing.go:71] worker 1 working on workload_info task.
I0123 13:45:08.783544 1 tasks_processing.go:69] worker 0 listening for tasks.
I0123 13:45:08.783617 1 tasks_processing.go:71] worker 0 working on helmchart_info task.
I0123 13:45:08.814170 1 gather_workloads_info.go:257] Loaded pods in 0s, will wait 24s for image data
I0123 13:45:08.828099 1 tasks_processing.go:74] worker 0 stopped.
I0123 13:45:08.828121 1 gather.go:180] gatherer "workloads" function "helmchart_info" took 44.466103ms to process 0 records
I0123 13:45:08.841312 1 gather_workloads_info.go:366] No image sha256:02a8a48acfb53288abab0156ec7ac9b22db182a8235bd4b2e28bedc4115f45a3 (28ms)
I0123 13:45:08.850591 1 gather_workloads_info.go:366] No image sha256:077f17999390936514bfd10cc32d390e768acaf4f741cbe67ce27028076f6acd (9ms)
I0123 13:45:08.858972 1 gather_workloads_info.go:366] No image sha256:1f573f9b17b17ec63a1f11676d49d3c34d142ede6b39f051e0e067ea851bf7c3 (8ms)
I0123 13:45:08.867409 1 gather_workloads_info.go:366] No image sha256:54ec8cf3caff13d59ad8af445d8847b88d60483d23a065fb7bb23f80944ed4f4 (8ms)
I0123 13:45:08.875916 1 gather_workloads_info.go:366] No image sha256:95ad1dc027a370c2a0fddd6ee5db1d1045b01a94fcb823e15b0772fdb350364f (8ms)
I0123 13:45:08.884349 1 gather_workloads_info.go:366] No image sha256:4609bb8f8079ff4735236bb03b0c3134bff9e702bf0d620a1b4dfeeb65a02679 (8ms)
I0123 13:45:08.892926 1 gather_workloads_info.go:366] No image sha256:71c0c6321ad668efeaf6aca404c04aa2689c2d607fe2e0eb0eb165a058a22a99 (9ms)
I0123 13:45:08.901436 1 gather_workloads_info.go:366] No image sha256:f82357030795138d2081ecc5172092222b0f4faea27e9a7a0474fbeae29111ad (8ms)
I0123 13:45:08.909981 1 gather_workloads_info.go:366] No image sha256:1514f7186f37284593020a1cec45ac8c1994708c02a932457638e16ccd890dc5 (9ms)
I0123 13:45:08.918813 1 gather_workloads_info.go:366] No image sha256:342e71f7347a153f03827c1a530d2ba9d8b293004118d7f38768c70d34d20e85 (9ms)
I0123 13:45:08.927278 1 gather_workloads_info.go:366] No image sha256:1d5f973fb0784d3354d0d89f031223c341987242f67d694573fecc6d8bec248a (8ms)
I0123 13:45:09.023321 1 gather_workloads_info.go:366] No image sha256:608adc48f07c0361b20abd586632ea17a3e1f2cde6c34bf834b4c5c92559990a (96ms)
I0123 13:45:09.123503 1 gather_workloads_info.go:366] No image sha256:32a3466963391e7d97acd87580c7bceef0970c566ae1d3a306aabe4d1cd0649d (100ms)
I0123 13:45:09.223887 1 gather_workloads_info.go:366] No image sha256:b28b7e755afc8a13489a4e4db3f57a6fa443900c21d94c02546397295ec0a0f0 (100ms)
I0123 13:45:09.324205 1 gather_workloads_info.go:366] No image sha256:d822628f7adba8fc05814c6fe01e420596f1199dec359856670cd59966cfc798 (100ms)
I0123 13:45:09.424079 1 gather_workloads_info.go:366] No image sha256:aefb66851fe643dceab88ecd95095a2b92a0e9847fa6014400f8fec2b28d4d55 (100ms)
I0123 13:45:09.524599 1 gather_workloads_info.go:366] No image sha256:73c0e16535bbd28dbe7d195c0beb1bb45701fbea68cbeccdb836ccb76da59913 (100ms)
I0123 13:45:09.623864 1 gather_workloads_info.go:366] No image sha256:5e6aa22ba358b73421fac0c908b4745cee7b67229ec82b74657fe05019e006f6 (99ms)
I0123 13:45:09.723729 1 gather_workloads_info.go:366] No image sha256:e4e13bd055c2792cce014a67441618608bd2f8da836aeffa5288af57c8da14cf (100ms)
I0123 13:45:09.825221 1 gather_workloads_info.go:366] No image sha256:a2ffa20892d9987678b090b3c8f273c46b6133b20d339a9eca7427d5110955ae (101ms)
I0123 13:45:09.923781 1 gather_workloads_info.go:366] No image sha256:aabdcc99cd007c9ced14abf3cba941e14549ecc711336c68a0b73bc81945099e (99ms)
I0123 13:45:10.024083 1 gather_workloads_info.go:366] No image sha256:4ebb3521426c2ab4ccb6ea1300a8365388419d17ded5e34a43c9335adbaf7e09 (100ms)
I0123 13:45:10.124238 1 gather_workloads_info.go:366] No image sha256:64e0441a791633ae2632abcdaabe0284a74409c6b4bad17fc1953f13a42a5891 (100ms)
I0123 13:45:10.223353 1 gather_workloads_info.go:366] No image sha256:6b5d6ba34cc5f8c2501590e105a4325a5fdebaba4d4057594d7e02a093196d2a (99ms)
I0123 13:45:10.323266 1 gather_workloads_info.go:366] No image sha256:282d63d13dee07099dbd7aa25de771cdb58a6c6f3bd7b2313d5940f714f86533 (100ms)
I0123 13:45:10.424073 1 gather_workloads_info.go:366] No image sha256:371e4ee4154fbaee47ed13fe22ddbe98f8f59be04faf0647c4966619715eb689 (101ms)
I0123 13:45:10.524515 1 gather_workloads_info.go:366] No image sha256:5ed50e2beaa850786739d9d1b7ef711b94c583a5d0e474e5643b71da0261b8e1 (100ms)
I0123 13:45:10.623892 1 gather_workloads_info.go:366] No image sha256:00fac240c3a3c5c1353da18af3c69789e9ec6429d0e38693974762020fd664f6 (99ms)
I0123 13:45:10.723873 1 gather_workloads_info.go:366] No image sha256:35dbab26d847f9d89dc5ad5c9750a5960a3fd982c01c67cda0e1ec5f53793cdb (100ms)
I0123 13:45:10.824209 1 gather_workloads_info.go:366] No image sha256:38c960041cf608ac4ae3af0840fd0da90eb5623af5f187718ce91b53347f08f6 (100ms)
I0123 13:45:10.923357 1 gather_workloads_info.go:366] No image sha256:023cf36b1efe4b343dfcb26c43fd2984083f4532c6ebb1f4b7418a6e282e6f01 (99ms)
I0123 13:45:11.023232 1 gather_workloads_info.go:366] No image sha256:51b8df7b98d58a2c3c287ad7df597c5e2bb98b287c551c73a38c2eae148727d3 (100ms)
I0123 13:45:11.123282 1 gather_workloads_info.go:366] No image sha256:1fffd878e84b1b03551a9844918bc6cde84edfae00d89aae488a5d5b15191f3a (100ms)
I0123 13:45:11.223408 1 gather_workloads_info.go:366] No image sha256:31f7f7e771386f6aee0de3f2cb618052dcaaa43de0be0ab716e695e34284eb49 (100ms)
I0123 13:45:11.323598 1 gather_workloads_info.go:366] No image sha256:9649c4afbcc6b2638e3c9df099e9651f174b0dbfe86e0132990b3f3652a49ad6 (100ms)
I0123 13:45:11.424188 1 gather_workloads_info.go:366] No image sha256:63cfee5f7fe6def42b51567d0552ece1dc4ca04f73e33cb56572a603b5d649c1 (101ms)
I0123 13:45:11.523193 1 gather_workloads_info.go:366] No image sha256:7cbb8baf947d7790a3cdab0a0e7687132dbd91f95aaa601ab6c4f9e85596c4ed (99ms)
I0123 13:45:11.623127 1 gather_workloads_info.go:366] No image sha256:91b0eabeeaa1f175029df31a2b069b7ad3e68987576d798fa3539e82fc3ae4e6 (100ms)
I0123 13:45:11.724379 1 gather_workloads_info.go:366] No image sha256:b60ff23907cf3f486ba0438a907079bd110d924709b84a6cb337a38b521b178c (101ms)
I0123 13:45:11.824309 1 gather_workloads_info.go:366] No image sha256:0dc1caf37a8313abe8b4e2712cf91a5aa72cef6c9ab3f893fb9ed94f4f61feac (100ms)
I0123 13:45:11.924102 1 gather_workloads_info.go:366] No image sha256:d6732d5a5163b0ae8ce006357f139bf8d06d323990a83b7aea9480a6e48ad8f4 (100ms)
I0123 13:45:12.024208 1 gather_workloads_info.go:366] No image sha256:6e90e5b6589dfbe3f2f1ebb147f68e13137a43b8d735f5775a92c70b34373f42 (100ms)
I0123 13:45:12.123530 1 gather_workloads_info.go:366] No image sha256:f67e4866e40537429b4fb7267c3235354fb6e3c93858ccaad6cc3334320ea73a (99ms)
I0123 13:45:12.223888 1 gather_workloads_info.go:366] No image sha256:1fb43537d620e21007cdedda6f575271019f74b473cbd33ec90763216025c68c (100ms)
I0123 13:45:12.223916 1 tasks_processing.go:74] worker 1 stopped.
I0123 13:45:12.224159 1 recorder.go:75] Recording config/workload_info with fingerprint=d73aa1e5c4bf9658bb1d414d90ed4c0b847a2dca79b3da605fd0ceeb1ee592c7
I0123 13:45:12.224178 1 gather.go:180] gatherer "workloads" function "workload_info" took 3.440373156s to process 1 records
I0123 13:45:12.224190 1 periodic.go:261] Periodic gather workloads completed in 3.44s
I0123 13:45:12.224196 1 controllerstatus.go:80] name=periodic-workloads healthy=true reason= message=
I0123 13:45:12.224200 1 periodic.go:214] Running conditional gatherer
I0123 13:45:12.230328 1 requests.go:282] Making HTTP GET request at: https://console.redhat.com/api/gathering/v2/4.17.46/gathering_rules
I0123 13:45:12.235348 1 conditional_gatherer.go:107] Get "https://console.redhat.com/api/gathering/v2/4.17.46/gathering_rules": dial tcp: lookup console.redhat.com on 172.30.0.10:53: read udp 10.130.0.8:37543->172.30.0.10:53: read: connection refused
I0123 13:45:12.235637 1 conditional_gatherer.go:340] updating alerts cache for conditional gatherer
E0123 13:45:12.238108 1 conditional_gatherer.go:326] unable to update alerts cache: Get "https://prometheus-k8s.openshift-monitoring.svc:9091/api/v1/query?match%5B%5D=ALERTS%7Balertstate%3D%22firing%22%7D&query=ALERTS": dial tcp: lookup prometheus-k8s.openshift-monitoring.svc on 172.30.0.10:53: read udp 10.130.0.8:35335->172.30.0.10:53: read: connection refused
I0123 13:45:12.238167 1 conditional_gatherer.go:386] updating version cache for conditional gatherer
I0123 13:45:12.244497 1 conditional_gatherer.go:394] cluster version is '4.17.46'
E0123 13:45:12.244513 1 conditional_gatherer.go:211] error checking conditions for a gathering rule: alerts cache is missing
E0123 13:45:12.244517 1 conditional_gatherer.go:211] error checking conditions for a gathering rule: alerts cache is missing
E0123 13:45:12.244520 1 conditional_gatherer.go:211] error checking conditions for a gathering rule: alerts cache is missing
E0123 13:45:12.244522 1 conditional_gatherer.go:211] error checking conditions for a gathering rule: alerts cache is missing
E0123 13:45:12.244524 1 conditional_gatherer.go:211] error checking conditions for a gathering rule: alerts cache is missing
E0123 13:45:12.244527 1 conditional_gatherer.go:211] error checking conditions for a gathering rule: alerts cache is missing
E0123 13:45:12.244529 1 conditional_gatherer.go:211] error checking conditions for a gathering rule: alerts cache is missing
E0123 13:45:12.244531 1 conditional_gatherer.go:211] error checking conditions for a gathering rule: alerts cache is missing
E0123 13:45:12.244534 1 conditional_gatherer.go:211] error checking conditions for a gathering rule: alerts cache is missing
I0123 13:45:12.244548 1 tasks_processing.go:45] number of workers: 3
I0123 13:45:12.244572 1 tasks_processing.go:69] worker 0 listening for tasks.
I0123 13:45:12.244581 1 tasks_processing.go:71] worker 0 working on conditional_gatherer_rules task.
I0123 13:45:12.244587 1 tasks_processing.go:69] worker 2 listening for tasks.
I0123 13:45:12.244590 1 tasks_processing.go:69] worker 1 listening for tasks.
I0123 13:45:12.244593 1 tasks_processing.go:71] worker 2 working on remote_configuration task.
I0123 13:45:12.244597 1 tasks_processing.go:74] worker 1 stopped.
I0123 13:45:12.244616 1 tasks_processing.go:71] worker 0 working on rapid_container_logs task.
I0123 13:45:12.244667 1 recorder.go:75] Recording insights-operator/conditional-gatherer-rules with fingerprint=7034af97e7e41c22e4b775abdd4b9066c8ebb19da33eb7f69f39bfd2eb5f6406
I0123 13:45:12.244679 1 gather.go:180] gatherer "conditional" function "conditional_gatherer_rules" took 3.193µs to process 1 records
I0123 13:45:12.244702 1 recorder.go:75] Recording insights-operator/remote-configuration with fingerprint=0394430c431eec4d48bb1811a90918e95161d2282c59af26f2473613cc0959db
I0123 13:45:12.244710 1 gather.go:180] gatherer "conditional" function "remote_configuration" took 727ns to process 1 records
I0123 13:45:12.244715 1 tasks_processing.go:74] worker 2 stopped.
I0123 13:45:12.244807 1 tasks_processing.go:74] worker 0 stopped.
I0123 13:45:12.244817 1 gather.go:180] gatherer "conditional" function "rapid_container_logs" took 175.448µs to process 0 records
I0123 13:45:12.244840 1 controllerstatus.go:89] name=periodic-conditional healthy=false reason=NotAvailable message=Get "https://console.redhat.com/api/gathering/v2/4.17.46/gathering_rules": dial tcp: lookup console.redhat.com on 172.30.0.10:53: read udp 10.130.0.8:37543->172.30.0.10:53: read: connection refused
I0123 13:45:12.244851 1 recorder.go:75] Recording insights-operator/remote-configuration.json with fingerprint=359de9c990c741675cec72fda96b5c3682221efdb4799f5eaa6e9805bcd3b5c1
W0123 13:45:12.273017 1 gather.go:212] can't read cgroups memory usage data: open /sys/fs/cgroup/memory/memory.usage_in_bytes: no such file or directory
I0123 13:45:12.273274 1 recorder.go:75] Recording insights-operator/gathers with fingerprint=a73d55a4336fc4d696bb2cc502193ac7f5cbdf66a9398615be440c4a9dd84db1
I0123 13:45:12.273660 1 diskrecorder.go:70] Writing 144 records to /var/lib/insights-operator/insights-2026-01-23-134512.tar.gz
I0123 13:45:12.281501 1 diskrecorder.go:51] Wrote 144 records to disk in 7ms
I0123 13:45:12.281543 1 periodic.go:283] Gathering cluster info every 2h0m0s
I0123 13:45:12.281561 1 periodic.go:284] Configuration is dataReporting: interval: 2h0m0s, uploadEndpoint: https://console.redhat.com/api/ingress/v1/upload, storagePath: /var/lib/insights-operator, downloadEndpoint: https://console.redhat.com/api/insights-results-aggregator/v2/cluster/%s/reports, conditionalGathererEndpoint: https://console.redhat.com/api/gathering/v2/%s/gathering_rules, obfuscation: [] sca: disabled: false, endpoint: https://api.openshift.com/api/accounts_mgmt/v1/certificates, interval: 8h0m0s alerting: disabled: false clusterTransfer: endpoint: https://api.openshift.com/api/accounts_mgmt/v1/cluster_transfers/, interval: 12h0m0s proxy: httpProxy: , httpsProxy: , noProxy:
I0123 13:46:25.371369 1 diskrecorder.go:170] Found files to send: insights-2026-01-23-134512.tar.gz
I0123 13:46:25.371412 1 insightsuploader.go:150] Checking archives to upload periodically every 15m20.805797508s
I0123 13:46:25.371422 1 insightsuploader.go:165] Uploading latest report since 0001-01-01T00:00:00Z
I0123 13:46:25.380702 1 requests.go:47] Uploading application/vnd.redhat.openshift.periodic to https://console.redhat.com/api/ingress/v1/upload
I0123 13:46:25.677988 1 requests.go:88] Successfully reported id=2026-01-23T13:46:25Z x-rh-insights-request-id=4e6f5c1007854815b30e63b9541cfe4d, wrote=70493
I0123 13:46:25.678053 1 insightsuploader.go:187] Uploaded report successfully in 306.622274ms
I0123 13:46:25.678077 1 controller.go:119] Initializing last reported time to 2026-01-23T13:46:25Z
I0123 13:46:25.678153 1 insightsreport.go:304] Archive uploaded, starting pulling report...
I0123 13:46:25.678163 1 insightsreport.go:215] Starting retrieving report from Smart Proxy
I0123 13:46:25.678171 1 insightsreport.go:221] Initial delay for pulling: 1m0s
I0123 13:46:25.684874 1 controller.go:482] The operator is healthy
I0123 13:46:55.385021 1 controller.go:482] The operator is healthy
I0123 13:47:26.156056 1 insightsreport.go:137] Pulling report from smart-proxy
I0123 13:47:26.156085 1 insightsreport.go:149] Retrieving report
I0123 13:47:26.163494 1 requests.go:111] Retrieving report for cluster: e9b6cd5b-5da5-421d-ac5e-9d51bdc06ed6
I0123 13:47:26.163506 1 requests.go:112] Endpoint: https://console.redhat.com/api/insights-results-aggregator/v2/cluster/e9b6cd5b-5da5-421d-ac5e-9d51bdc06ed6/reports
I0123 13:47:26.166820 1 requests.go:122] Retrieving report from https://console.redhat.com/api/insights-results-aggregator/v2/cluster/e9b6cd5b-5da5-421d-ac5e-9d51bdc06ed6/reports
I0123 13:47:26.417812 1 insightsreport.go:184] Report retrieved
I0123 13:47:26.471141 1 insightsreport.go:239] Report retrieved correctly
I0123 13:48:55.385238 1 controller.go:482] The operator is healthy
I0123 13:49:55.360572 1 secretconfigobserver.go:136] Refreshing configuration from cluster pull secret
I0123 13:49:55.367468 1 secretconfigobserver.go:249] Found cloud.openshift.com token
I0123 13:49:55.367489 1 secretconfigobserver.go:162] Refreshing configuration from cluster support secret
I0123 13:49:55.372562 1 secretconfigobserver.go:119] support secret does not exist
I0123 13:50:55.385517 1 controller.go:482] The operator is healthy